Since a group of law schools began boycotting the U.S. News and World Report Rankings (“U.S. News Rankings”), the rankings have gone a bit haywire. Several of the top-ranked law schools refused to submit data to U.S. News, and other schools followed their lead. As a result, U.S. News changed its rankings, perhaps focusing a bit more on publicly available data.
But that has led to speculation about potentially wild results. In the past, dramatic ranking shifts were relatively rare, but with the changing rankings, and the apparent new norm of frequent changes to methodology, such radical swings may be in our future. It is striking, though, how rarely the fluctuations seem to affect the highest-ranked schools.
Meanwhile, prospective law students, perhaps largely unaware of these fluctuations because the top-ranked law schools barely move, likely remain invested in the U.S. News Rankings as their measure of what makes a good law school. A quick check confirms that the schools they already know to be prestigious, like Harvard and Yale, sit in the top ten, with little variation from year to year.
And, despite the legal academy’s awareness of the inconsistency and other problems, some schools still use the U.S. News Rankings as an aspirational goal and a publication target (with some schools offering anti-intellectual publication bonuses for placement in highly ranked journals). Others use the rankings in marketing materials aimed at prospective faculty candidates and students.
In this essay, I list five problems with the U.S. News Rankings, and I offer a few concrete solutions.
Problem 1: Information Asymmetries between Prospective Students and U.S. News
Prospective students know little about the ranking methodology or its impact. For example, students may know what percentage of the rankings is based upon peer assessment, but not know that the peer assessment has problems, such as the monopoly power a few schools wield in the law professor labor market. Students likely do not know of relative changes in rankings over time, so are unable to determine whether a rapid change is a mere blip or a genuine trend downward or upward. In short, the prospective student may be deceived by the nature of the rankings and the changes in how rankings are calculated.
Problem 2: Information Asymmetries between the Law Schools and U.S. News
As the dominant player in the market for rankings, U.S. News has little incentive to expend resources to monitor the data that law schools provide, to correct inaccurate data, or to make algorithmic adjustments unless the results produced by its formula are egregiously false or schools flagrantly manipulate the data that they submit. In fact, the value of the rankings endures by virtue of having little change at the top of the list. Should a school experience an unexpected drop in ranking, however, dramatic effects may occur, including dean resignations. Schools seeking to climb the rankings to attract high-quality students, or faced with habitually low rankings, may succumb to pressure to manipulate data to improve their rank. For example, it has been past practice for schools to employ their former students to inflate post-graduation employment statistics.
Problem 3: Favoring Well-Endowed Schools
To the extent that U.S. News alters variables as to what makes a “good school,” it favors wealthier schools that can deploy more resources and adapt quickly to the moving goal posts. But those at the top end of the rankings do not need to worry, as they are virtually guaranteed their spot. Other schools are subject to the potential of rapid fluctuations.
Schools also make significant time investments by creating committees tasked with benchmarking competitor schools, collecting employment data from recent graduates, and grappling with how varying analyses of that information might affect the U.S. News Rankings.
Part of the post-boycott methodological shift stems from the need to rely on publicly available data. That need, however, may not be a sound basis for the methodology that results. It may be merely the streetlight effect: searching where the data happen to be available, not where the answers lie.
Problem 4: The Problem of Potential Retaliation
To the extent that the variables that inform the U.S. News Rankings are subject to change, they are potentially subject to abuse. It would be easy to add variables punishing schools that complain about the rankings’ methodology.
Just two examples should suffice. When Reed College refused to provide data to U.S. News for the general college rankings, instead of omitting the institution from its ranking, U.S. News “arbitrarily assigned the lowest possible value to each of Reed’s missing variables, with the result that Reed dropped in one year from the second quartile to the bottom quartile.” In subsequent rankings, U.S. News allegedly ranked Reed College on information available from other sources, noting that the institution did not complete the requested survey. In the law school context, when Pepperdine discovered an error in its submission, it sought to correct it. As a result of its innocent mistake, its ranking plummeted for a year.
Problem 5: Lack of Competition in Rankings
There are no substitutes for the U.S. News Rankings, at least for U.S. law schools. The vast majority of prospective students use them. The vast majority of schools look to them. To the extent it is a monopoly, U.S. News has a unique role in shaping legal education. Indeed, not only is the ranking unique, but much effort goes into predicting it: Spivey Consulting and others work to forecast the U.S. News Rankings rather than to create competing ones.
As Sahaj Sharda has pointed out in The Sling, rankings matter. And the ranking system is rife with the possibility of hijinks, both on the university side and on the side of U.S. News.
Solution: An FTC Unfair or Deceptive Act or Practice (UDAP) Rule
Section 5 of the FTC Act states that “unfair or deceptive acts or practices in or affecting commerce . . . are . . . declared unlawful.” Deceptive acts or practices require a “material representation, omission or practice that is likely to mislead a consumer acting reasonably in the circumstances.” An act or practice is “unfair” if it “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.”
The FTC UDAP rules cover common consumer experiences, such as care labels for clothing, octane ratings for gasoline, funeral prices, used cars, energy ratings, door-to-door sales, identity theft, free credit reports, contact lenses and eyeglasses, and child protection. In each of these instances, informational and other barriers prevented consumers from obtaining important information or availing themselves of alternatives.
Why not apply the same principles to rankings? An FTC UDAP rule related to law school rankings could involve three components.
First, the FTC should require U.S. News to disclose the algorithm it uses, or at least to explain in greater detail the weight it puts on each factor in its overall rankings. Here, the algorithm is a formula that combines the data inputs into an ordered list, weighting each input by its assumed relative importance. If an end user’s values or priorities differ from those of U.S. News, the choice of weights makes a major difference in the list’s outcome. Disclosing the algorithm protects the consumer from false, deceptive, and misleading information.
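To make the stakes of weight disclosure concrete, here is a minimal sketch of a weighted-sum ranking of the kind U.S. News is believed to use. All factor names, scores, and weights below are invented for illustration; they are not the actual U.S. News methodology.

```python
# Hypothetical weighted-sum ranking. Factors, scores, and weights are
# invented for illustration -- NOT the actual U.S. News formula.

def rank_schools(scores, weights):
    """Return school names ordered by weighted composite score, highest first."""
    composites = {
        school: sum(weights[factor] * value for factor, value in factors.items())
        for school, factors in scores.items()
    }
    return sorted(composites, key=composites.get, reverse=True)

scores = {  # normalized 0-100 scores on each (hypothetical) factor
    "School A": {"peer_assessment": 90, "employment": 70, "bar_passage": 80},
    "School B": {"peer_assessment": 70, "employment": 95, "bar_passage": 85},
}

# Under weights that favor peer assessment, School A leads...
print(rank_schools(scores, {"peer_assessment": 0.6, "employment": 0.2, "bar_passage": 0.2}))
# ...but shift the weight toward employment outcomes and the order flips.
print(rank_schools(scores, {"peer_assessment": 0.2, "employment": 0.6, "bar_passage": 0.2}))
```

The point of the sketch is that the ordering is an artifact of the weights, not of the underlying data: identical inputs produce opposite rankings under different weightings, which is precisely why undisclosed weight changes can mislead consumers.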
The FTC should also mandate that any alteration of methodology come with at least two years’ notice to law schools. At present, methodology changes can arrive ex post, surprising schools attempting to climb the rankings.
Second, the rule should eliminate conflict-of-interest voting and should mandate disclosure of other data. Schools are not in a position to rank themselves; nor are the faculty of schools that dominate the production of law professors. The bulk of law professors hail from a few schools, and absent a horrific experience there, it stands to reason that the bulk of reputation scores favor those schools. This self-perpetuating cycle is not known to prospective students and ought to be halted.
To the extent post-graduation employment figures are used, schools ought to be forced to disclose them. Thus far, U.S. News has been reluctant, or unable, to determine the extent of such gaming. Subjecting data submissions to a UDAP rule raises the risk to a school that manipulation will trigger an investigation and amplified public notice.
Third, the FTC’s rule should impose penalties on schools and U.S. News for violations of the rule. Another feature of a UDAP rule would be consistent deterrence, as opposed to the arbitrary punishments that U.S. News might impose upon schools. If such penalties are not linked to the ranking itself, a UDAP rule would still benefit consumers of the ranking, whereas simply demoting a school that misbehaves might punish people well beyond the group that decided to manipulate the data.
If the FTC were to adopt such a rule, it would bring some much-needed relief to law school applicants and the schools themselves.
The views expressed in this piece do not reflect the views of my employer.
Correction 2/28/24: Since publication, Spivey Consulting reached out to correct an entry error related to Buffalo Law School’s ranking in their post, when Spivey converted that information from their software to the blog. We have corrected the entry here as well. Other schools will still face such drops.
The news of the layoffs was stunning: Three months after consummating its $68 billion acquisition of Activision, Microsoft fired 1,900 employees in its gaming division. The relevant question, from a policy perspective, is whether these terminations reflect the exercise of newfound buying power made possible by the merger. If so, then Microsoft may have just unwittingly exposed itself to antitrust liability, as mergers can be challenged after the fact in light of clear anticompetitive effects.
The Merger Guidelines recognize that mergers in concentrated markets can create a presumption of anticompetitive effects. When studying the impact of a merger on any market, including a labor market, the starting place is to determine whether the merging firms collectively wield market power in some relevant antitrust market. That inquiry can be informed by both direct and indirect evidence.
Direct evidence of buying power, as the name suggests, is evidence that directly shows a buyer has power to reduce wages or exclude rivals. Indirect evidence of buying power can be established by showing high market shares (plus entry barriers) in a relevant antitrust market. It bears noting that, when it comes to labor markets, high market shares are not strictly needed to infer buying power due to high search and switching costs (often absent in output markets).
Beginning with the direct evidence, Activision exhibited traits of a firm with buying power over its workers. For example, before it was acquired, Activision undertook an aggressive anti-union campaign against its workers’ efforts to organize a union. Moreover, workers at Activision complained about their employer’s intransigent position on granting raises, often demanding proof of an outside offer. A recent article in Time recounted that “Several former Blizzard employees said they only received significant pay increases after leaving for other companies, such as nearby rival Riot Games, Inc. in Los Angeles.” Activision also entered a consent decree in 2022 with the Equal Employment Opportunity Commission to resolve a complaint alleging Activision subjected its workers to sexual harassment, pregnancy discrimination, and retaliation related to sexual harassment or pregnancy discrimination.
Moving to the indirect evidence, one could posit a labor market for video game workers at AAA gaming studios. Both Microsoft and Activision are AAA studios, making them a preferred destination for industry labor. Independent studios are largely regarded as temporary stepping stones toward better positions in large video game firms.
To estimate the merged firm’s combined share in the relevant labor market, in a forthcoming paper, Ted Tatos and I study CareerBuilder’s Supply and Demand data, filtering on the term “video game” in the United States to recover job applications and postings over the last two years. The table below summarizes the results of our search in the spring of 2022, a few months after the Microsoft-Activision deal was announced. Our analysis conservatively includes small employers that workers at a AAA studio such as Activision likely would not consider to be a reasonable substitute.
Job Postings Among Top Studios in Video Game Industry – CareerBuilder Data
Company Name | Number of Job Postings | Percent of Postings | Corporate Entity |
Activision Blizzard, Inc. | 1,270 | 26.0% | Microsoft |
Electronic Arts Inc. | 856 | 17.5% | |
Rockstar Games, Inc. | 287 | 5.9% | Take-Two |
Ubisoft, Inc. | 258 | 5.3% | |
2k, Inc. | 143 | 2.9% | Take-Two |
Zenimax Media Inc. | 128 | 2.6% | Microsoft |
Epic Games, Inc. | 112 | 2.3% | |
Lever Inc | 106 | 2.2% | |
Wb Games Inc. | 101 | 2.1% | |
Survios, Inc. | 100 | 2.0% | |
Riot Games, Inc. | 91 | 1.9% | Tencent |
Zynga Inc. | 84 | 1.7% | Take-Two |
Funcom Inc | 79 | 1.6% | Tencent |
2k Games, Inc. | 74 | 1.5% | Take-Two |
Complete Networks, Inc. | 65 | 1.3% | |
Gearbox Software | 58 | 1.2% | Embracer |
Digital Extremes Ltd | 43 | 0.9% | Tencent |
Naughty Dog, Inc. | 43 | 0.9% | Sony |
Mastery Game Studios, LLC | 26 | 0.5% | |
Crystal Dynamics Inc | 25 | 0.5% | Embracer |
Skillz Inc. | 25 | 0.5% | |
Microsoft Corporation | 24 | 0.5% | Microsoft |
Others | 887 | 18.2% | |
TOTAL | 4,885 | 100.0% |
As indicated in the first row, Activision lies at the top in number of job postings in the CareerBuilder data, with 26.0 percent. Prior to the Activision acquisition, Microsoft accounted for 3.1 percent of job postings (the sum of Zenimax Media and Microsoft rows). Based on these figures, Microsoft’s acquisition of Activision significantly increased concentration (by more than 150 points) in an already concentrated market (post-merger HHI above 1,200). This finding implies that the merger could lead to anticompetitive effects in the relevant labor market, including layoffs.
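The concentration figures in the paragraph above can be reproduced directly from the table’s posting shares. The sketch below groups the listed studios by post-merger corporate parent and treats the 18.2 percent “Others” bucket as atomistic (many tiny employers), so the HHI figure is a lower bound; the merger-induced change in HHI uses the standard formula of twice the product of the merging parties’ shares.

```python
# Back-of-the-envelope HHI check using the posting shares (in percentage
# points) from the CareerBuilder table above. Firms are grouped by
# post-merger corporate parent; the 18.2% "Others" bucket is treated as
# atomistic, so the HHI is a lower bound.

post_merger_shares = [
    26.0 + 2.6 + 0.5,                    # Microsoft: Activision + ZeniMax + Microsoft Corp.
    17.5,                                # Electronic Arts
    5.9 + 2.9 + 1.7 + 1.5,               # Take-Two: Rockstar, 2K, Zynga, 2K Games
    5.3,                                 # Ubisoft
    1.9 + 1.6 + 0.9,                     # Tencent: Riot, Funcom, Digital Extremes
    1.2 + 0.5,                           # Embracer: Gearbox, Crystal Dynamics
    0.9,                                 # Sony: Naughty Dog
    2.3, 2.2, 2.1, 2.0, 1.3, 0.5, 0.5,   # remaining standalone rows
]

post_merger_hhi = sum(s ** 2 for s in post_merger_shares)
delta_hhi = 2 * 26.0 * (2.6 + 0.5)  # standard merger delta: 2 * share_A * share_B

print(round(post_merger_hhi))  # ~1369, consistent with "above 1,200"
print(round(delta_hhi, 1))     # 161.2, i.e. "more than 150 points"
```

Even with the conservative treatment of small employers, the computed delta comfortably clears the 100-point change that the Merger Guidelines treat as significant in a concentrated market.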
It bears noting that the HHI thresholds established in the 2023 Merger Guidelines (Guideline 1) were most likely developed with product markets in mind. Indeed, the Guidelines recognize in a separate section (Guideline 10) that labor markets are more vulnerable to the exercise of pricing power than output markets: “Labor markets frequently have characteristics that can exacerbate the competitive effects of a merger between competing employers. For example, labor markets often exhibit high switching costs and search frictions due to the process of finding, applying, interviewing for, and acclimating to a new job.” High switching costs are also present in the video game industry: Almost 90 percent of workers at AAA studios in the CareerBuilder Resume data indicate that they did not want to relocate, making them more vulnerable to an exercise of market power than the HHI analysis above implies.
As any student of economics recognizes, a monopsonist not only reduces wages below competitive levels, but also restricts employment relative to the competitive level. So the immediate firing of 1,900 workers is consistent with the exercise of newfound monopsony power. In technical terms, the layoffs could reflect a change in the residual labor supply curve faced by the merged firm.
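That textbook result can be shown with a minimal model of my own construction (the parameters are invented, not drawn from the article or the CareerBuilder data): with a linear labor supply curve and a constant marginal revenue product per worker, a monopsonist equates that product with the marginal cost of labor rather than the wage, hiring fewer workers at a lower wage than a competitive market would.

```python
# Textbook monopsony sketch (invented parameters, for illustration only).
# Labor supply: w(L) = a + b*L.  Each worker generates a constant marginal
# revenue product v.  A wage-taking market hires until w = v; a monopsonist
# equates v with the MARGINAL cost of labor, a + 2b*L (raising the wage for
# the marginal hire raises it for everyone already employed).

a, b, v = 40_000, 10, 120_000   # wages in dollars, L in workers

L_comp = (v - a) / b            # competitive employment: v = a + b*L
w_comp = a + b * L_comp         # competitive wage equals v

L_mono = (v - a) / (2 * b)      # monopsony: v = a + 2b*L  ->  half the hiring
w_mono = a + b * L_mono         # wage read off the supply curve, below v

print(L_comp, w_comp)   # 8000.0 120000.0
print(L_mono, w_mono)   # 4000.0 80000.0
```

In this linear setup the monopsonist employs exactly half the competitive workforce, which is the sense in which a merger-induced increase in buying power is consistent with both layoffs and wage suppression at once.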
Why would Microsoft exercise its newfound buying power this way? To begin, many Microsoft workers, prior to the merger, could have switched to Activision in response to a wage cut. Indeed, we were able to find in the CareerBuilder data that a substantial fraction of former Microsoft workers left Microsoft Game Studios to work for Activision. (More details on the churn rate to come in our forthcoming paper.) Post-merger, Microsoft was able to internalize this defection, weakening the bargaining position of its employees and putting downward pressure on wages. In other words, Microsoft is more disposed to cutting Activision jobs than a standalone Activision would be. Moreover, by withholding Activision titles from competing multi-game subscription services—the FTC’s primary theory of harm in its litigation, now under appeal—Microsoft can give an artificial boost to its platform division. This input foreclosure strategy would compel Microsoft to downsize its gaming division and thus its gaming division workers.
Alternative Explanations Don’t Ring True
The contention that these 1,900 layoffs flowed from the merger, as opposed to some other force, is supported by the economic literature on other labor markets. A recent paper by Prager and Schmitt (2021) studied the effect of a competition-reducing hospital merger on the wages of hospital staff. Consistent with economic theory, the merger had a substantial negative effect on wages for workers whose skills are much more useful in hospitals than elsewhere (e.g., nurses). In contrast, the merger had no discernible effect on wages for workers whose skills are equally useful in other settings (e.g., custodians). As Hemphill and Rose (2018) explain in their seminal Yale Law Journal article, “A merger of competing buyers can exacerbate the merged firm’s incentive to buy less in order to drive down input prices.”
Microsoft has its defenders in academia. According to Joost van Dreunen, a New York University professor who studies the gaming business, the video game industry is “suffering through a winter right now. If everybody around you is cutting their overhead and you don’t, you’re going to invoke the wrath of your shareholders at some point.” (emphasis added) This point—which sounds like it was fed by Microsoft’s PR firm—is intended to suggest that the firings would have occurred absent the merger. But there are two problems with this narrative. First, Microsoft’s gaming revenues are booming (up nine percent in the first quarter of its 2024 fiscal year), which makes industry comparables challenging. What were the layoffs among video game firms that also grew revenues by nine percent? Second, video game programmers and artists are not “overhead,” such as HR workers or accountants. (Apologies to those workers.) Thus, their firing cannot be attributed to some redundancy in deliverables.
Microsoft’s own press statement about the layoffs vaguely states that it has “identified areas of overlap” across Activision and its former gaming unit. But that explanation is just as consistent with the labor-market harm articulated here as with the “eliminating redundancy” efficiency.
Bobby Kotick, the former CEO of Activision, received a $400 million golden parachute at the end of the year for selling his company to Microsoft. That comes to about $210,500 per fired employee, or about two years’ worth of severance for each worker laid off. Too bad those resources were so regressively assigned.
Larry Summers and other corporate apologists asserted for over a year that the Federal Reserve would have to engineer a recession to bring down prices. But as inflation continues to fall with no corresponding rise in unemployment, doomsayers’ insistence on the need to throw millions of people out of work to restore price stability has been discredited. Although the United States is on track to achieve a soft landing once thought improbable, don’t give Fed Chair Jerome Powell credit; disinflation without mass joblessness is happening despite his move to jack up interest rates, not because of it. And while the Fed is expected to begin lowering interest rates later this year, Powell should still be regarded as a hazard to the health of our polity and our planet.
Just a few weeks ago, Powell told security to “close the fucking door” on a group of climate campaigners who interrupted a speech he was giving. Powell’s palpable contempt for the protesters was another reminder that President Joe Biden should never have renominated the former private equity executive to lead the Fed. The magnitude of Biden’s mistake has become increasingly clear in the roughly two years since he made it.
Put bluntly, Powell is doing a bang-up job of hastening the end of civilized life on Earth. First, his refusal to use the U.S. central bank’s regulatory authority to rein in the financing of fossil fuels is locking in more destructive warming. Second, his prolonged campaign of interest rate hikes is hindering the greening of the economy at a pivotal moment when there is no time to waste. Last but not least, the high interest rate environment Powell has created is improving Donald Trump’s 2024 electoral prospects—and given Trump’s coziness with the fossil fuel industry, his election would be a death knell for the climate.
Nevertheless, we have yet to hear a mea culpa from prominent Powell cheerleaders, who argued that the Fed Chair’s pre-2022 dovishness outweighed his regulatory deficiencies. What has become painfully clear is that Powell’s actual hawkishness is undermining the investment incentives of Biden’s green economic agenda.
Biden tapped Powell for a second four-year term despite opposition from public interest groups, including Public Citizen and the Revolving Door Project, where my colleague Max Moran identified several better candidates. The recent anniversary of Powell’s renomination should invite critical reflection on the arguments made by his supporters and detractors alike during the drawn-out battle to staff Biden’s Fed. Struggles to reshape financial regulation will only grow more fierce in the coming years, and the left needs to be prepared to fight for central bank leaders who are committed to advancing whole-of-government responses to the intertwined climate and inequality crises.
What were people thinking? Reassessing the cases for and against Powell
As evidence mounts that rate hikes imposed by Powell (and many of his central banking peers abroad) are making global climate apartheid more likely, it’s worth revisiting why many establishment liberals and even some progressives advocated on his behalf in the summer and fall of 2021—and why others on the left sounded the alarm.
According to Powell’s defenders at the time, the Fed Chair’s response to the Covid crisis demonstrated that he would strive, unlike his predecessors, to fulfill both parts of the institution’s dual mandate: maintaining low inflation and pursuing full employment. Furthermore, they insisted, Powell’s GOP affiliation would allow him to do so while retaining the support of congressional Republicans, the corporate media, and Wall Street.
Powell’s opponents welcomed the chair’s dovish approach to monetary policy from 2018 to 2021, though they simultaneously acknowledged his history of changing positions based on political whims. They remained unconvinced, however, that Powell was the only candidate who would give maximizing employment priority equal to keeping inflation below the Fed’s arbitrary and untenable 2 percent target. Lael Brainard, then the only Democratic member of the Federal Reserve Board of Governors, could be expected to do that and to perform better at other, equally important aspects of the job, they argued, regardless of whether right-wing lawmakers backed her.
Obviously, the notion that Powell’s purported commitment to full employment would lead the Fed to keep interest rates low was quickly brought into disrepute. Just one week after Biden renominated him, the Fed chair had already changed his tune. And in early 2022, Powell launched the most drastic and sustained campaign of rate hikes in decades, earning comparisons to Paul Volcker.
But Powell’s critics, especially those concerned with climate justice, didn’t need the benefit of hindsight to see that the incumbent was a problematic pick. They had already argued convincingly that Powell’s weaknesses on financial regulation should be disqualifying. The passage of time has revealed how wrong Powell’s supporters were to dismiss progressives’ warnings about Powell’s ethical failures as well as his penchant for deregulation, which reared its ugly head with the 2023 collapse of Silicon Valley Bank and Signature Bank.
Robinson Meyer, the founder of climate media outlet Heatmap and contributor to the New York Times, was an early Powell supporter. His piece, titled “The Planet Needs Jerome Powell,” is an emblematic pro-Powell article published by The Atlantic in September 2021, amid the lengthy fight over Biden’s pick for Fed chair. Meyer admonished the climate left for its supposed lack of seriousness about the Fed’s role in macroeconomic management. According to Meyer’s narrow interpretation (shared by neoliberal blogger Matt Yglesias), the Federal Reserve as an institution is basically reducible to monetary policy and has little of consequence to do with financial regulation.
The demand from “regulation hawks” for a central bank leader who would ramp up Wall Street oversight was misguided, Meyer suggested, because the Fed’s actions on this front “won’t directly reduce carbon pollution.” “Employment hawks,” on the other hand, were right to focus on Powell’s dovishness, he added, because keeping interest rates low to spur green investment is the best a central banker can do on climate. It’s a sad irony that the Fed’s ensuing imposition of rate hikes has undermined the decarbonization effort that Meyer said Powell was best suited to oversee (more on that later).
Contra Meyer, financial regulation is a key aspect of the Fed’s work. If the central bank were to earnestly address the climate emergency’s threats to the financial system (and financiers’ threats to the climate), it would lead banks and other lenders to cease new investment in fossil fuels, an increasingly risky asset class that is not only highly destructive but also likely to become stranded. The continued financing of greenhouse gas emissions makes predatory subprime lending look tame by comparison.
Powell has refused to curb lending to planet-wrecking fossil fuels
Future historians will be at pains to explain why the world’s 60 largest private banks provided more than $5.5 trillion in financing to the fossil fuel industry from 2016 to 2022, including over $1.5 trillion after 2021—the year the International Energy Agency declared that investments in new coal, oil, and gas production are incompatible with its net-zero by 2050 pathway.
Those historians might also ask why regulators allowed Wall Street to pour vast sums of money into ecologically destabilizing and soon-to-be-outdated infrastructure during this crucial decade. At a time when transformative interventions are necessary, the Treasury Department has opted to release voluntary principles for net-zero financing and investment, while the Securities and Exchange Commission is finalizing rules that would require companies to report some of their greenhouse gas emissions and make other climate-related disclosures. Meanwhile, all the Fed has done so far is bail out fossil fuel companies at the beginning of the Covid pandemic and publish—alongside the Federal Deposit Insurance Corporation and the Office of the Comptroller of the Currency—weak guidance for climate risk management at big banks.
As watchdogs observed earlier this year, the Fed’s proposals are “much vaguer than the detailed expectations laid out by global peers.” This is unconscionable, especially because Powell and other top U.S. regulators have already been empowered by Congress to rein in reckless lending by “too-big-to-fail” or systemically important financial institutions.
Specifically, Section 121 of the Dodd-Frank Act instructs the Federal Reserve to determine whether a bank holding company or nonbank SIFI poses a “grave threat to the financial stability of the United States.” With the approval of the Financial Stability Oversight Council (FSOC), the Fed “can take a host of actions, including imposing limitations on an institution’s activities, prohibiting activities, or forcing asset divestiture,” Graham Steele, former Assistant Secretary for Financial Institutions at the Treasury Department, explained in a landmark 2020 report published before he joined the Biden administration. “While this authority contains some built-in procedural complexity, a Federal Reserve determined to mitigate climate risks should use it to force the largest, most systemic bank holding companies, insurers, and asset managers to divest of their climate change-causing assets.”
The Fed not only has the authority to minimize climate-related financial risks, but doing so falls squarely within its core responsibilities, regardless of Powell’s insistence to the contrary. The Fed is tasked with macroprudential regulation (i.e., managing systemic financial risks), and the existential threat of climate change by definition endangers economic stability. To ignore it is a clear dereliction of duty.
It’s not hard to imagine the outsized positive impact that a progressive leader of the Fed could have on shutting down planned increases in fossil fuel combustion. Consider, for instance, that just four U.S.-based financial giants—JP Morgan Chase, Citi, Wells Fargo, and Bank of America, all of which are SIFIs—account for roughly one-quarter of the aforementioned lending to coal, oil, and gas firms, much of which is bound to end up as stranded assets.
The Fed Chair has not only failed to halt fossil fuel expansion, but also has simultaneously inhibited the buildout of a more sustainable economy by embarking on an unwarranted campaign of interest rate hikes. Powell’s alleged dovishness turned out to be remarkably shallow, and it remains true that better Fed Chair candidates dismissed by Meyer and ignored by Biden were more dedicated to the Fed’s full employment mandate.
Powell has imposed transition-impeding interest rate hikes
Since the start of 2022, Powell has raised the federal funds rate from 0.08% to 5.33%, increasing the costs of borrowing enough to stymie the green economic transition while doing little to alleviate inflation (the professed reason for the rate hikes).
It has become ever more apparent over time that rising interest rates are hampering efforts to decarbonize energy supplies and electrify transportation, housing, and other key sectors. High interest rates have had the dual effect of rolling back productive investment and lowering consumer demand, causing substantial drops in the stocks of major solar, wind, and other renewables-based companies; undermining the deployment of offshore wind projects; delaying the construction of electric vehicle (EV) factories; and slowing the installation of heat pumps.
In effect, Powell is exercising veto power over the Inflation Reduction Act and ruining “the economics of clean energy,” as David Dayen explained recently in The Prospect. President Biden’s signature climate legislation contains hundreds of billions of dollars in subsidies for green industrialization, but repeated interest rate hikes have driven up financing costs enough to outweigh them. As Dayen noted, this is especially the case because the law’s reliance on tax credits requires upfront investment decisions.
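The interaction Dayen describes can be illustrated with a stylized project-finance calculation (all figures invented, not drawn from the article or the IRA): because clean-energy projects front-load nearly all of their costs and recover them over decades, their value is acutely sensitive to the discount rate, and a rate jump of a few points can swamp even a 30 percent investment tax credit.

```python
# Stylized illustration (invented numbers) of how higher rates can outweigh
# an IRA-style subsidy: heavy upfront capex, long level cash-flow stream.

def npv(capex, annual_cash, years, rate, itc=0.30):
    """Net present value: upfront capex (net of a hypothetical 30%
    investment tax credit) against level annual cash flows."""
    upfront = -capex * (1 - itc)
    return upfront + sum(annual_cash / (1 + rate) ** t for t in range(1, years + 1))

capex, cash, life = 100.0, 6.0, 25   # $100M build, $6M/yr for 25 years

print(npv(capex, cash, life, 0.03))  # near-zero-rate era: clearly positive
print(npv(capex, cash, life, 0.08))  # post-hike financing costs: negative
```

Under these assumptions the same project flips from viable to unviable on financing costs alone, which is the mechanism by which rate hikes act as a de facto veto on subsidy-driven green investment.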
It’s worth stressing here that while inflation has declined significantly since its June 2022 peak, Powell’s crusade had little to do with it. The Fed Chair made clear that his goal with interest rate hikes was to “get wages down” (and thus suppress demand) by ramping up unemployment. Fortunately, inflation diminished even in the absence of an uptick in joblessness. The upshot is that while Powell surely wants credit for taming inflation without provoking a recession, he doesn’t deserve it. His chief accomplishment has been to unnecessarily stifle the nascent shift to a greener economy, an ominous development with negative ramifications.
Powell is boosting Trump’s electoral chances
Powell—a lifelong Republican and former private equity bigwig—isn’t just thwarting the green economic transition right now. His obdurate leadership of the U.S. central bank is increasing the costs of housing, automobiles, financed consumer durables, and credit card debt—contributing to widespread anger about the state of the economy even as “Bidenomics” delivers low unemployment and much-needed wage compression. Economic discontent is helping Donald Trump’s 2024 election chances and thus hurting humanity’s long-term prospects for averting the worst consequences of the climate crisis.
It’s a cruel irony that Powell’s interest rate hikes have inflicted real-world harms while being incapable of addressing their purported target: inflation. That’s because the cost-of-living crisis of the past two years didn’t result from a wage-price spiral, as promised by Larry Summers; it was fueled by sellers’ inflation, or corporate profiteering, and exacerbated by the elimination of the pandemic-era welfare state. When the onset of Covid and Russia’s invasion of Ukraine upended international supply chains—rendered fragile through decades of neoliberal globalization—corporations bolstered by preceding rounds of consolidation capitalized on both crises to justify price hikes that outpaced the increased costs of doing business. That safety-net measures enacted in the wake of the coronavirus crisis were allowed to expire only made the situation worse.
Given that the inflation saga of the past two years is inseparable from preexisting patterns of market concentration, progressives have argued against job-threatening rate hikes (note that jacking up unemployment is the only mechanism through which the Fed could lower inflation; for more, see my colleague’s deep dive on the matter) and for a more relevant mix of policies, including a windfall profits tax, stronger antitrust enforcement, and temporary price controls. Unlike the blunt instrument that Powell has been wielding ineffectively, those tailored solutions—the last two of which are within the Biden administration’s ambit—have the potential to dilute the power of price-gouging corporations without hurting workers.
Although inflation is easing, prices remain elevated compared with people’s historic expectations, and rising rents and debts continue to overwhelm households. Biden needs to use his bully pulpit to advocate for a government crackdown on corporate villains. The outcome of the next election—and the fate of U.S. democracy and the planet writ large—depend on it.
Tight monetary policy is making Trump’s return more likely. That makes Biden’s decision to renominate a Trump appointee whose main priority (to allegedly attack profit-driven inflation with the ill-equipped tool of interest rate hikes) conflicts so sharply with the White House’s own stated industrial policy goals (to spur investment in green technologies) all the more nonsensical.
Biden already has to contend with obstructionism from congressional Republicans (and a handful of corporate Democrats) as well as the Supreme Court’s far-right majority. Now, thanks to his own unforced error, the president has to deal with obstructionism from his hand-picked Fed leader—a former partner at the Carlyle Group, one of the world’s most notorious union-busting and fossil fuel-investing private equity firms.
Moving Forward
It bears repeating that careful observers of the Fed are right to worry about climate change—and to stress the agency’s rulemaking authorities and obligations—because nothing else poses a greater threat to economic well-being. What the planet needs more than anything is for Powell to start taking his entire job seriously.
Powell’s inaction is hardly surprising given his January 2023 declaration that the Fed is not and never will be a “climate policymaker.” But Powell’s assertion could not be more wrongheaded; central bankers around the world are key climate policymakers whether or not they identify as such. The United Nations warned ahead of COP28 that the world is currently on pace for a “hellish” 3°C (or about 5.4° Fahrenheit) of warming by 2100. Do Powell and his colleagues seriously think that such a calamity wouldn’t imperil macroeconomic performance?
Frankly, millions of people’s lives and billions of dollars of property are already being destroyed by an ostensibly “safe” amount of climate change (the world is roughly 1.3°C warmer now compared with preindustrial averages). More than doubling extant temperature rise by century’s end would unleash increasingly frequent and severe extreme weather disasters, inflict trillions of dollars in monetary damages, and cause incalculable amounts of pain, including significant losses of lives, livelihoods, cultural artifacts, and biodiversity. A world beset by intensifying heatwaves, droughts, wildfires, storms, and floods will be a world full of ruined cities, factories, and farms. It will also be a far more expensive place to live.
Despite Powell’s apparently steadfast commitment to maintaining price stability, he is actively undermining the possibility of steady prices in the long run. Again, this isn’t for a lack of tools. It’s for a lack of political willpower. While some of the Fed’s foreign counterparts are currently exploring or implementing mandatory disclosure rules, more stringent climate stress tests of banks’ assets, and direct investment or lending policies that prioritize green enterprises, the U.S. is falling further behind.
Powell should have listened to those activists he dismissed recently because they are right—it’s past time for the Fed to protect the climate from the havoc wreaked by the financial system and vice versa. If Powell won’t do everything in his power to restrain fossil fuel financing and incentivize green investment, then Biden should explore whether he can fire him for cause and appoint someone who will.
Kenny Stancil is a Senior Researcher at the Revolving Door Project.
Last week, the Consumer Financial Protection Bureau and the Biden White House proposed to limit prices on overdraft protection by banks. This is smart policy and is backed by sound economics.
While inflation ran hot in 2022 and 2023, talk of price controls bubbled to the surface, even in economic circles. University of Massachusetts economist Isabella Weber pointed out that price controls were used successfully during World War II, and deployed effectively by Germany more recently to handle spiraling natural gas prices. Health policy professors interviewed last week by the New York Times noted that other countries, including Canada and France, use price controls to limit inflation in pharmaceuticals. Just a few years ago, price controls were a dirty word in economics—the only imaginable exception being for natural monopolies, where a price cap was set in a way to permit a normal rate of return.
Given the shifting attitudes towards price controls, the CFPB’s overdraft-fee proposal is likely to receive a friendlier reception among economists. The financial watchdog estimates that banks collect about $9 billion annually in overdraft fees, and customers who pay overdraft fees pay about $150 on average every year. Overdraft fees averaged $35 per event in recent years, and a bank can assess multiple fees for one overdraft episode whenever multiple checks bounce due to the overdraft. Moreover, banks engage in practices that induce overdrafts (or greater fees), such as refusing to deposit a check without a ten-day hold or engaging in “high-to-low reordering” (processing a large debit before smaller transactions, even if the latter are posted earlier).
The CFPB’s proposed rule would require banks to justify their overdraft fees on the basis of the bank’s incremental costs; if the banks could not do so, then the fee would be regulated at some price between $3 and $14. The rule would apply to large banks only, which some commentators have pointed out misses some of the worst offenders.
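The proposal's two-track logic can be sketched in a few lines. The $8 benchmark below is a hypothetical midpoint of the $3-to-$14 range; the final rule would set the actual figure:

```python
# Sketch of the CFPB proposal's fee logic as described above: a covered bank
# either prices overdrafts at its demonstrated incremental cost, or falls
# back to a regulated benchmark somewhere between $3 and $14. The $8
# benchmark is an assumed midpoint, not a figure from the rule.

BENCHMARK_FEE = 8.00          # assumed; the final rule would pick $3-$14
CURRENT_TYPICAL_FEE = 35.00   # recent industry average per overdraft event

def allowed_fee(justified_incremental_cost=None):
    """Fee a covered bank could charge under the proposal (sketch)."""
    if justified_incremental_cost is not None:
        return justified_incremental_cost  # cost-based pricing, if proven
    return BENCHMARK_FEE                   # otherwise, the regulated cap

savings_per_event = CURRENT_TYPICAL_FEE - allowed_fee()
print(f"Savings per overdraft event: ${savings_per_event:.2f}")  # $27.00
```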
Vulnerable Aftermarkets
Economists recognize that market forces are especially weak when it comes to disciplining the price of ancillary or aftermarket services. The classic teaching example is movie theater popcorn. Customers do not have the price of popcorn on the top of their minds when choosing among theaters; the movie choice and the drive time are paramount. And when they arrive at a theater, customers are not likely to reverse their ticket transaction and find a new theater in response to sky-high popcorn prices; the switching costs would be too steep. The same is true for the price of other common aftermarket services, such as movie rental in hotels and printer cartridges.
Overdraft protection can be understood as an ancillary service to standard checking account services. As one economist at the Federal Reserve put it, “Most bank fees represent an example of add-on or aftermarket fees. Aftermarkets can be found in many industries such as printers (for toner), computers (software), razors (blades) and many others.” When someone is shopping around for where to set up their checking account, they will primarily consider the bank’s reputation and geographic footprint, the proximity of a physical office to their home, and the interest rate offered on savings. Overdraft protection likely is not top of mind, and even if it were, the bank won’t prominently display its overdraft fee on its webpage. Economists have learned through experiments that sending repeated messages to customers with a propensity to incur an overdraft fee is effective at reducing those fees, consistent with customers having limited attention.
Indeed, I learned of Bank of America’s overdraft fee ($10) only by invoking the help tab on its website, and then looking through several documents that contained the term “overdraft.” The fee is buried in a document titled “Personal Schedule of Fees.” Given the high costs of switching banks, when a customer is hit with an exorbitant overdraft fee, there is little chance the customer will terminate the relationship—that is, the traditional forces that discipline supracompetitive prices are absent.
The American Bankers Association (ABA) rushed out a statement in opposition to the CFPB proposal, claiming that overdraft fee caps “would make it significantly harder for banks to offer overdraft protection to customers.” (This would only be true if the cap were set below the incremental cost of providing the service.) In support of its opposition, the ABA cited a Morning Consult survey, showing that 88 percent of respondents “find their bank’s overdraft protection valuable,” and 77 percent who have paid an overdraft fee in the past year “were glad their bank covered their overdraft payment, rather than returning or declining payment.”
There’s no doubt bank customers value overdraft protection and detest the notion of bouncing checks to multiple vendors. The relevant economic question, however, is whether market forces can be counted on to price overdraft protection at competitive levels (i.e., near marginal costs). So this survey was a bit of misdirection.
A relevant survey, by contrast, would ask bank customers whether they considered a bank’s overdraft fee when choosing with which company to bank, and whether they would consider switching banks upon learning of the bank’s high overdraft fee. If the answer to either of those questions is no, then bank customers are vulnerable to excessive pricing on overdraft protection.
Who Bears The Burden Matters
The typical customer who bears the burden of excessive overdraft fees is low-income, which means a policy of tolerating overcharges here is highly regressive. Consumer Reports notes that eight percent of bank customers, mostly lower-income, account for nearly three quarters of revenues from overdraft fees. According to a CFPB survey released in December 2023, among households that frequently incurred overdraft fees, 81 percent reported difficulty paying a bill at least once in the past year, another indication of poverty. The CFPB survey also notes that “[w]hile just 10% of households with over $175,000 in income were charged an overdraft or an NSF fee in the previous year, the share is three times higher (34%) among households making less than $65,000.”
When deciding whether to impose price controls of the kind contemplated in the CFPB proposal, the economic straits of the typical overdraft fee payor matter. Economists recognize that customers in aftermarkets are generally vulnerable to high prices, but do not counsel an intervention in each of these markets. A middle-class family that overpays for popcorn at a movie theater does not engender much sympathy; if the price is too high, they can abstain without much consequence. Nor does an upper-class family that overpaid for in-room dining at a boutique hotel tug at the heartstrings. But a low-income family that pays a $35 overdraft fee could be missing out on other important things like meals, and is in no position to refuse the service; refusing to comply might jeopardize their credit or banking relationship.
The Element of Surprise
In addition to the weak market forces disciplining the price of aftermarket services, bank customers are particularly vulnerable to exploitation given their lack of knowledge about the fees. The same CFPB survey mentioned above showed that, among those who paid an overdraft fee, only 22 percent of households expected their most recent overdraft fee—that is, for the large majority of customers, the overdraft fee came as a surprise. In discussing the fairness of surprise fees, Nobel prize winner Angus Deaton notes in his new book, Economics in America, that “If you need an ambulance, you are not in the best position to find the best service or to bargain over prices; instead you are helpless and the perfect victim for a predator.” Neoliberal economists might ignore these teachings, and instead trust the market to deliver competitive prices for ambulance services and overdraft fees. But anyone with a modicum of understanding of power imbalances and information asymmetries will quickly recognize that an intervention here is well grounded in economics.
The FTC just secured a big win in its IQVIA/Propel case, the agency’s fourth blocked merger in as many weeks. This string of rapid-fire victories quieted a reactionary narrative that the agency is seeking to block too many deals and also should win more of its merger challenges. (“The food here is terrible, and the portions are too small!”) But the case did a lot more than that.
Blocking Anticompetitive Deals Is Good—Feel Free to Celebrate!
First and foremost, this acquisition, based on my read of the public court filings, was almost certainly illegal. Blocking a deal like this is a good thing, and it’s okay to celebrate when good things happen—despite naysayers grumbling about supporters not displaying what they deem the appropriate level of “humility.” Matt Stoller has a lively write-up explaining the stakes of the case. In a nutshell, it’s dangerous for one company to wield too much power over who gets to display which ads to healthcare professionals. Kudos to the FTC case team for securing this win.
Judge Ramos Gets It Right
A week ago, the actual opinion explaining Judge Ramos’s decision dropped. It’s a careful, thorough analysis that makes useful statements throughout—and avoids some notorious antitrust pitfalls. Especially thoughtful was his treatment of the unique standard that applies when the FTC asks to temporarily pause a merger pending its in-house administrative proceeding. Federal courts are supposed to play a limited role that leaves the final merits adjudication to the agency. That said, it’s easy for courts to overreach, like Judge Corley’s opinion in Microsoft/Activision that resolved several important conflicts in the evidence—exactly what binding precedent said not to do. This may seem a little wonky, but it’s playing out against the backdrop of a high-stakes war against administrative agencies. So although “Judge Does His Job” isn’t going to make headlines, it’s refreshing to see Judge Ramos’s well-reasoned approach.
The IQVIA decision is also great on market definition, another area where judges sometimes get tripped up. Judge Ramos avoided the trap defendants laid with their argument that all digital advertising purveyors must be included in the same relevant market because they all compete to some extent. That’s not the actual legal question—which asks only about “reasonable” substitutes—and the opinion rightly sidestepped it. We can expect to see similar arguments made by Big Tech companies in future trials, so this holding could be useful to both DOJ and FTC as they go after Meta, Google, and Amazon.
How Does This Decision Fit Into the Broader Project of Reinvigorating Antitrust?
One core goal shared by current agency leadership appears to be making sure that antitrust can play a role in all markets—whether they’re as traditional as cement or as fast-moving as VR fitness apps.
The cornerstone of IQVIA’s defense was that programmatic digital advertising to healthcare professionals is a nascent, fast-moving market, so there’s no need for antitrust enforcement. This has long been page one of the anti-enforcement playbook, as it was in previous FTC merger challenges like Meta/Within. But, in part because the FTC won the motion to dismiss in that case, we have some very recent—and very favorable—law on the books rejecting this ploy.
Sure enough, Judge Ramos’s IQVIA opinion built on that foundation. He cited Meta/Within multiple times to reject these defendants’ similar arguments that market nascency provides an immunity shield against antitrust scrutiny. “While there may be new entrants into the market going forward,” Judge Ramos explained, “that does not necessarily compel the conclusion that current market shares are unreliable.” Instead, the burden is on defendants to prove historical shifts in market shares are so significant that they make current data “unusable for antitrust analysis.” His opinion is clear, and clearly persuasive—DOJ and a group of state AGs already submitted it as supplemental authority in their challenge to JetBlue’s proposed tie-up of Spirit Airlines.
A second goal that appears to be top-of-mind for the new wave of enforcers is putting all of their legal tools back on the table. Here again, the IQVIA win fits into the broader vision for a reinvigorated antitrust enterprise.
Just a few weeks before this decision, the FTC got a groundbreaking Fifth Circuit opinion on its challenge to the Illumina/GRAIL deal. Illumina had argued that the Supreme Court’s vertical-merger liability framework is no longer good law because it’s too old. In other words, the tool had gotten so dusty that high-powered defense attorneys apparently felt comfortable arguing it was no longer usable. That happened in Meta/Within as well: Meta argued both of the FTC’s legal theories involving potential competition were “dead-letter doctrine.” But in both cases, the FTC won on the substance—dusting off three unique anti-merger tools in the process.
IQVIA adds yet another: the “30% threshold” presumption from Philadelphia National Bank. Like Meta and Illumina before it, IQVIA argued strenuously that the legal tool itself was invalid because it had long been out of favor with the political higher-ups at federal agencies. But yet again, the judge rejected that argument out of hand. The 30% presumption is alive and well, vindicating the agencies’ decision to put it back into the 2023 Merger Guidelines.
Stepping back, we’re starting to see connections and cumulative effects. The FTC won a motion to dismiss in Meta/Within, lost on the injunction, but made important case law in the process. IQVIA picked up right where that case left off, and this time, the FTC ran the table.
Positive projects take time. It’s easier to tear down than to build. And both agencies remain woefully under-resourced. But change—real, significant change—is happening. In the short run, it’s impressive that four mergers were blocked in a month. In the long run, it’s important that four anti-merger tools are now back on the table.
John Newman is a professor at the University of Miami School of Law. He previously served as Deputy Director at the FTC’s Bureau of Competition.
For his first two years as Secretary of Transportation, Pete Buttigieg demurred on critical transportation regulation, especially oversight of airlines. Year three has seen a welcome about-face. After letting airlines run amok, Buttigieg and the Department of Transportation (DOT) have finally started taking them to task, including issuing a precedent-shattering fine to Southwest, fighting JetBlue’s proposed merger with Spirit, and—according to news just this morning—scrutinizing unfair and deceptive practices in frequent flier programs. With last week’s announcement of Alaska Airlines’ agreement to purchase Hawaiian Airlines for $1.9 billion, it is imperative that Buttigieg and his DOT keep up the momentum.
Alaska and Hawaiian Airlines are probably the two oddest of the United States’ twelve scheduled passenger airlines, as different as their namesake states are from the lower 48. But the oddity of this union—spurred on by Hawaiian’s financial situation, with Alaska taking on $900 million of Hawaiian’s debt—does nothing to counteract the myriad harms that it would pose to competition.
Although there’s relatively little overlap in flight routes between Alaska and Hawaiian, the geography of the overlap matters. As our friends at The American Prospect have pointed out, Alaska Airlines is Hawaiian’s “main head-to-head competitor from the West Coast to the Hawaiian Islands.”
Alaska flies directly between Hawaii’s four main airports and Anchorage, Seattle, Portland, San Francisco, San Jose, Los Angeles, and San Diego. Hawaiian has direct flights to and from those same four airports and Seattle, Portland, Sacramento, San Francisco, San Jose, Oakland, Los Angeles, Long Beach, Ontario, and San Diego.
The routes where the two airlines currently compete most, according to route maps from FlightConnections, are the very lucrative West Coast (especially California) to Hawaii flights, as shown below in the table.
Competition on Routes from the West Coast to Hawaii
Airports | Alaska | Hawaiian | Other Competitors |
Anchorage | Yes | No | None |
Seattle* | Yes | Yes | Delta |
Portland* | Yes | Yes | None |
Sacramento | No | Yes | Southwest |
Oakland | No | Yes | Southwest |
San Francisco* | Yes | Yes | United |
San Jose* | Yes | Yes | Southwest |
Los Angeles* | Yes | Yes | American, United, Southwest |
Long Beach | No | Yes | Southwest |
Ontario, CA | No | Yes | None |
San Diego | No | Yes | Southwest |
Phoenix | No | Yes | American, Southwest |
Las Vegas | No | Yes | Southwest |
Salt Lake City | No | Yes | Delta |
Dallas | No | Yes | American |
New York | No | Yes (JFK) | Delta (JFK), United (Newark) |
Boston | No | Yes | None |
As the table shows, five major Hawaiian routes overlap with Alaska Airlines’ offerings: direct flights between Honolulu and Seattle, Portland, San Jose, Los Angeles, and San Diego. This is no coincidence—it was one of the major selling points Alaska Airlines outlined on a call with Wall Street analysts, arguing that the merger would give them half of the $8 billion market in West Coast to Hawaii travel. Four of those routes will also face very little competition from other airlines. Delta is the only other major airline that flies between Seattle and any Hawaii destination, while Southwest is the only other option to fly direct between Hawaii and San Jose or Hawaii and San Diego.
And there is no competing service at all between Hawaii and Portland, where the only options are Alaska and Hawaiian. For this route, the merger is a merger to monopoly. Selling off a landing slot at Portland International Airport would not necessarily restore the loss in actual competition, as the buyer of the slot would be under no obligation to recreate the Portland-Hawaii route.
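Counting from the table as printed, a few lines of code confirm the overlap count and the Portland merger-to-monopoly point:

```python
# The route table above, re-encoded so the overlap claims can be checked.
# Each entry: (Alaska flies it, Hawaiian flies it, other competitors).
routes = {
    "Anchorage":      (True,  False, []),
    "Seattle":        (True,  True,  ["Delta"]),
    "Portland":       (True,  True,  []),
    "Sacramento":     (False, True,  ["Southwest"]),
    "Oakland":        (False, True,  ["Southwest"]),
    "San Francisco":  (True,  True,  ["United"]),
    "San Jose":       (True,  True,  ["Southwest"]),
    "Los Angeles":    (True,  True,  ["American", "United", "Southwest"]),
    "Long Beach":     (False, True,  ["Southwest"]),
    "Ontario, CA":    (False, True,  []),
    "San Diego":      (False, True,  ["Southwest"]),
    "Phoenix":        (False, True,  ["American", "Southwest"]),
    "Las Vegas":      (False, True,  ["Southwest"]),
    "Salt Lake City": (False, True,  ["Delta"]),
    "Dallas":         (False, True,  ["American"]),
    "New York":       (False, True,  ["Delta", "United"]),
    "Boston":         (False, True,  []),
}

# Routes both airlines serve today (the merger's direct overlap) ...
overlap = [city for city, (ak, ha, _) in routes.items() if ak and ha]
# ... and those where no third carrier remains: merger to monopoly.
monopoly = [city for city, (ak, ha, others) in routes.items()
            if ak and ha and not others]

print(f"Overlapping routes ({len(overlap)}): {', '.join(overlap)}")
print(f"Merger-to-monopoly routes: {', '.join(monopoly)}")
```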
The merger clearly reduces actual competition on those five overlapping routes. But the merger could also lead to a reduction in potential competition in any route that is currently served by one but was planned to be served by the other. For example, if discovery reveals that Alaska planned to serve the Sacramento to Hawaii route (currently served by Hawaiian) absent the merger, then the merger would eliminate this competition.
But wait! There’s more. Alaska is also a member of the OneWorld Alliance, basically a cabal of international airlines that cooperate to help each other outcompete nonmembers. American Airlines is also a OneWorld member, meaning that Hawaiian will also no longer compete with American once it’s brought under Alaska’s ownership.
The proposed acquisition deal also has implications for the aviation industry more broadly, because concentration tends to beget more concentration. After Delta was allowed to merge with Northwest, American and United both pursued mergers (with US Airways and Continental, respectively) under the pretense that they needed to get bigger to continue to compete with Delta. Basically, their argument was “you let them do it!”
Similarly, one way to view Alaska’s acquisition of Hawaiian is as a direct response to the proposed JetBlue-Spirit merger. If JetBlue-Spirit goes through, Alaska suddenly loses its spot in the top five biggest US-based airlines. But it gets the spot right back if it buys Hawaiian. This is how midsize carriers go extinct. From a lens that treats continued corporate mergers and lax antitrust enforcement as a given, Alaska and Hawaiian can argue that their merger will actually keep things more competitive, if you squint the right way. They can claim that together, they will be able to compete more with Southwest’s aggressive expansion into the Hawaii and California markets and to go toe-to-toe with United and Delta across the US west.
The problem is that this becomes a self-fulfilling prophecy—it assumes that companies will continue to merge and grow, such that the only way to make the market more competitive is to create other larger companies. This kind of thinking has led to the miserable state of flying today. Currently, the United States has the fewest domestic airlines since the birth of the aviation industry a century ago. There are only twelve scheduled passenger airlines (with significantly less competition at the route level), and there hasn’t been significant entry in fourteen years (since Virgin America, which was later bought by Alaska, launched). This is not a recipe for a healthy competitive industry.
Moreover, twelve airlines actually makes the situation sound better than it is. As indicated by the table above, most airports are served by only a fraction of those airlines, and antitrust agencies consider the relevant geographic market to be the route. Plus, of those twelve, only four (United, Delta, American, Southwest) have a truly national footprint. Those two factors combined mean that there is very limited competition in all but the largest airports and most-flown routes.
In reality, the answer is better antitrust enforcement: not allowing Alaska and Hawaiian to merge their way into the big leagues, but breaking up the bigger carriers and forcing them to compete. Buttigieg’s DOT and other federal regulatory agencies can use their existing regulatory powers to do so (and brag about it). As airline competition has dwindled, passengers have faced worse conditions, higher prices, and less route diversity. Creating more competition would increase the odds that a company would shock the system by reintroducing larger standard seat sizes, better customer service, lower prices, new routes, or more—and would defend consumers against corporate price gouging, a goal Biden has recently been touting in public appearances.
Last winter, Buttigieg faced the biggest storm of his political career, between Southwest’s absolute collapse during the holidays and an FAA meltdown followed by the East Palestine train debacle. His critics, including us at the Revolving Door Project, pinned a lot of the blame on him and his DOT. Since then, he has responded in a big way. He started by hiring Jen Howard, former chief of staff to FTC Chair Lina Khan, as chief competition officer. They quickly got to work opposing JetBlue’s merger with Spirit Airlines and worked hard to truly bring Southwest to task for their holiday meltdown last year. Just days ago, the DOT announced that it had assessed Southwest a $140 million fine, roughly thirty times larger than any prior civil penalty given to an airline, on top of doling out more than $600 million in refunds, rebookings, and other measures to make up for their mistreatment of consumers. Moving forward, the settlement requires Southwest to provide greater remuneration, including paying inconvenienced passengers $75 over and above their reimbursements as compensation for their trouble. Once implemented, this will be an industry-leading compensation policy.
This is exactly the kind of enforcement that we called for nearly a year ago, when we pointed out that the lack of major penalties abetted airline complacency, where carriers did not feel like they needed to follow the law and provide quality service because no one was going to make them. This year has been a wakeup call, both to DOT and to the airline industry about how rigorous oversight can force companies to run a tighter ship—or plane, as the case may be.
Buttigieg took big steps this year, but the Alaska-Hawaiian merger highlights the need for the DOT to remain vigilant. This merger may not be as facially monopolistic as past ones, but it does highlight that airlines are still caught up in their usual games of trying to cut costs and drive up profits by absorbing their competition. Regulators must be equally committed to their roles of catching and punishing wrongdoing, and in the long-term, restructuring firms to create a truly competitive environment that will serve the public interest.
Dylan Gyauch-Lewis is a Senior Researcher at the Revolving Door Project. He leads RDP’s transportation research and helps coordinate the Economic Media Project.
Right before Thanksgiving, Josh Sisco wrote that the Federal Trade Commission is investigating whether the $9.6 billion purchase of Subway by private equity firm Roark Capital creates a sandwich shop monopoly, by placing Subway under the same ownership as Jimmy John’s, Arby’s, McAlister’s Deli, and Schlotzsky’s. The acquisition would allow Roark to control over 40,000 restaurants nationwide. Senator Elizabeth Warren amped up the attention by tweeting her disapproval of the merger, prompting the phrase “Big Sandwich” to trend on Twitter.
Fun fact: Roark is named for Howard Roark, the protagonist in Ayn Rand’s novel The Fountainhead, which captures the spirit of libertarianism and the anti-antitrust movement. Ayn Rand would shrug off this and presumably any other merger!
It’s a pleasure reading pro-monopoly takes on the acquisition. Jonah Goldberg writes in The Dispatch that sandwich consumers can easily switch, in response to a merger-induced price hike, to other forms of lunch like pizza or salads. (Similar screeds appear here and here.) Jonah probably doesn’t understand the concept, but he’s effectively arguing that the relevant product market when assessing the merger effects includes all lunch products, such that a hypothetical monopoly provider of sandwiches could not profitably raise prices over competitive levels. Of course, if a consumer prefers a sandwich, but is forced to eat a pizza or salad to evade a price hike, her welfare is almost certainly diminished. And even distant substitutes like salads might appear to be closer to sandwiches when sandwiches are priced at monopoly levels.
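The broad-market claim can be tested with the standard critical-loss arithmetic behind the hypothetical monopolist test. All numbers here (margins, the size of the price rise, the share of customers who would defect to pizza or salads) are illustrative assumptions, not estimates from any filing:

```python
# Critical-loss sketch of the hypothetical monopolist test. A candidate
# market (sandwiches) is well-defined if a hypothetical monopolist could
# profitably impose a small but significant price rise. Inputs are
# illustrative assumptions, not estimates from any actual case.

def critical_loss(price_rise, margin):
    """Share of sales a hypothetical monopolist could lose before a
    price rise of `price_rise` stops being profitable."""
    return price_rise / (price_rise + margin)

cl = critical_loss(0.05, 0.30)  # 5% price rise, 30% margins
actual_loss = 0.08              # assumed share diverted to pizza/salads

# If fewer customers defect than the critical loss, the price hike is
# profitable, and sandwiches form their own relevant market.
print(f"Critical loss: {cl:.1%}; hike profitable: {actual_loss < cl}")
```

On these assumed numbers, losing 8 percent of customers to pizza and salads is well under the roughly 14 percent critical threshold, so defining the market as "all lunch" would be too broad.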
The Brown Shoe factors permit courts, when defining the contours of a market, to consider the perspectives of industry participants, including the merging parties. Subway’s franchise agreement reveals how the company perceives its competition. The agreement defines a quick service restaurant that would be “competitive” for Subway as being within three miles of one of its restaurants and deriving “more than 20% of its total gross revenue from the sale of any type of sandwiches on any type of bread, including but not limited to sub rolls and other bread rolls, sliced bread, pita bread, flat bread, and wraps.” The agreement explicitly mentions by name Jimmy John’s, McAlister’s Deli and Schlotzsky’s as competitors. This evidence supports a narrower market.
Roark’s $9.6 billion purchase of Subway exceeded the next highest bid by $1.35 billion—from TDR Capital and Sycamore Partners at $8.25 billion—an indication that Roark is willing to pay a substantial premium relative to other bidders, perhaps owing to Roark’s existing restaurant holdings. The premium could reflect procompetitive merger synergies, but given what the economic literature has revealed about such purported benefits, the more likely explanation of the premium is that Roark senses an opportunity to exercise newfound market power.
To assess Roark’s footprint in the restaurant business, I downloaded the Nation’s Restaurant News (NRN) database of sales and stores for the top 500 restaurant chains. If one treats all chain restaurants as part of the relevant product market, as Jonah Goldberg prefers, with total sales of $391.2 billion in 2022, then Roark’s pre-merger share of sales (not counting Subway) is 10.8 percent, and its post-merger share of sales is 13.1 percent. These numbers seem small, especially the increment to concentration owing to the merger.
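As a quick arithmetic check, the jump from 10.8 to 13.1 percent follows directly from Subway’s sales relative to the top-500 total. A minimal sketch, using only the dollar figures quoted above:

```python
# Consistency check on the broad-market numbers: adding Subway to Roark's
# holdings should raise its share of top-500 chain sales from 10.8% to ~13.1%.
total_2022 = 391.2  # all top-500 chain sales, $ billions (from the text)
subway = 9.1879     # Subway's 2022 systemwide sales, $ billions
pre_share = 10.8    # Roark's share excluding Subway, in percent
post_share = pre_share + subway / total_2022 * 100
print(f"{post_share:.1f}%")  # -> 13.1%
```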
Fortunately, the NRN data has a field for restaurant segment. Both Subway and Jimmy John’s are classified as “LSR Sandwich/Deli,” where LSR stands for limited-service restaurants, which don’t offer table service. By comparison, Panera and Einstein are classified under “LSR Bakery/Café.” If one limits the data to the LSR Sandwich/Deli segment, total sales in 2022 fall from $391.2 billion to $26.3 billion. Post-merger, Roark would own four of the top six sandwich/deli chains in America. It bears noting that imposing this filter eliminates several of Roark’s largest assets—e.g., Dunkin’ Donuts (LSR Coffee), Sonic (LSR Burger), Buffalo Wild Wings (FSR Sports Bar)—from the analysis.
Restaurant Chains in LSR Sandwich/Deli Sector, 2022
| Chain | Sales (Millions) | Units | Share of Sales |
| --- | --- | --- | --- |
| Subway* | 9,187.9 | 20,576 | 34.9% |
| Arby’s* | 4,535.3 | 3,415 | 17.2% |
| Jersey Mike’s | 2,697.0 | 2,397 | 10.3% |
| Jimmy John’s* | 2,364.5 | 2,637 | 9.0% |
| Firehouse Subs | 1,186.7 | 1,187 | 4.5% |
| McAlister’s Deli* | 1,000.4 | 524 | 3.8% |
| Charleys Philly Steaks | 619.8 | 642 | 2.4% |
| Portillo’s Hot Dogs | 587.1 | 72 | 2.2% |
| Jason’s Deli | 562.1 | 245 | 2.1% |
| Potbelly | 496.1 | 429 | 1.9% |
| Wienerschnitzel | 397.3 | 321 | 1.5% |
| Schlotzsky’s* | 360.8 | 323 | 1.4% |
| Chicken Salad Chick | 284.1 | 222 | 1.1% |
| Penn Station East Coast | 264.3 | 321 | 1.0% |
| Mr. Hero | 157.9 | 109 | 0.6% |
| American Deli | 153.2 | 204 | 0.6% |
| Which Wich | 131.3 | 226 | 0.5% |
| Capriotti’s | 122.6 | 142 | 0.5% |
| Nathan’s Famous | 119.1 | 272 | 0.5% |
| Port of Subs | 112.9 | 127 | 0.4% |
| Togo’s | 107.7 | 162 | 0.4% |
| Biscuitville | 107.5 | 68 | 0.4% |
| Cheba Hut | 95.0 | 50 | 0.4% |
| Primo Hoagies | 80.4 | 94 | 0.3% |
| Cousins Subs | 80.1 | 93 | 0.3% |
| Ike’s Place | 79.3 | 81 | 0.3% |
| D’Angelo | 75.4 | 83 | 0.3% |
| Dog Haus | 73.0 | 58 | 0.3% |
| Quiznos Subs | 57.8 | 165 | 0.2% |
| Lenny’s Sub Shop | 56.3 | 62 | 0.2% |
| Sandella’s | 51.0 | 52 | 0.2% |
| Erbert & Gerbert’s | 47.4 | 75 | 0.2% |
| Goodcents | 47.3 | 66 | 0.2% |
| Total | 26,298.6 | 35,500 | 100.0% |
Source: Nation’s Restaurant News (NRN) database of sales and stores for the top 500 restaurant chains. Note: * Owned by Roark
With this narrower market definition, Roark’s pre-merger share of sales (not counting Subway) is 31.4 percent, and its post-merger share of sales is 66.3 percent. These shares seem large, and the standard measure of concentration—which sums the square of the market shares—goes from 2,359 to 4,554, which would create the inference of anticompetitive effects under the 2010 Merger Guidelines.
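Those figures can be reproduced from the table above. The sketch below treats Roark’s four pre-merger chains as a single firm and sums squared percentage shares:

```python
# Sales in millions of dollars, from the NRN table (LSR Sandwich/Deli, 2022).
sales = {
    "Subway": 9187.9, "Arby's": 4535.3, "Jersey Mike's": 2697.0,
    "Jimmy John's": 2364.5, "Firehouse Subs": 1186.7, "McAlister's Deli": 1000.4,
    "Charleys Philly Steaks": 619.8, "Portillo's Hot Dogs": 587.1,
    "Jason's Deli": 562.1, "Potbelly": 496.1, "Wienerschnitzel": 397.3,
    "Schlotzsky's": 360.8, "Chicken Salad Chick": 284.1,
    "Penn Station East Coast": 264.3, "Mr. Hero": 157.9, "American Deli": 153.2,
    "Which Wich": 131.3, "Capriotti's": 122.6, "Nathan's Famous": 119.1,
    "Port of Subs": 112.9, "Togo's": 107.7, "Biscuitville": 107.5,
    "Cheba Hut": 95.0, "Primo Hoagies": 80.4, "Cousins Subs": 80.1,
    "Ike's Place": 79.3, "D'Angelo": 75.4, "Dog Haus": 73.0,
    "Quiznos Subs": 57.8, "Lenny's Sub Shop": 56.3, "Sandella's": 51.0,
    "Erbert & Gerbert's": 47.4, "Goodcents": 47.3,
}
roark = {"Arby's", "Jimmy John's", "McAlister's Deli", "Schlotzsky's"}
total = sum(sales.values())

def hhi(groups):
    # Sum of squared percentage shares, with commonly owned chains combined.
    return sum((sum(sales[c] for c in g) / total * 100) ** 2 for g in groups)

others = [{c} for c in sales if c not in roark and c != "Subway"]
pre = hhi(others + [roark, {"Subway"}])
post = hhi(others + [roark | {"Subway"}])
print(f"Roark pre-merger share: {sum(sales[c] for c in roark) / total:.1%}")
print(f"HHI pre ~ {pre:.0f}, post ~ {post:.0f}")
```

Small rounding differences aside, this recovers the 31.4 and 66.3 percent shares and the roughly 2,359-to-4,554 jump in concentration reported above.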
One complication to the merger review is that Roark wouldn’t have perfect control of the sandwich pricing by its franchisees. Franchisees often are free to set their own prices, subject to suggestions (and market studies) by the franchisor. So while Roark might want (say) a Jimmy John’s franchisee to raise sandwich prices after the merger, that franchisee might not internalize the benefit to Roark of diverting some of its customers to Subway. With enough money at stake, Roark could align its franchisees’ incentives with the parent company’s by, for example, creating profit pools based on the profits of all of Roark’s sandwich investments.
Another complication is that Roark does not own 100 percent of its restaurants. Roark is the majority owner of Inspire Brands. In July 2011, Roark acquired 81.5 percent of Arby’s Restaurant Group. Roark purchased Wendy’s remaining 12.3 percent holding of Inspire Brands in 2018. To the extent Roark’s ownership of any of the assets mentioned above is partial, a modification to the traditional concentration index could be performed, along the lines spelled out by Salop and O’Brien. (For curious readers, they show how the change in concentration is a function of the market shares of the acquired and acquiring firms plus the fraction of the profits of the acquired firm captured by the acquiring firm, which varies according to different assumptions about corporate control.)
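For readers who want to see the mechanics, here is a minimal sketch of the modified concentration index (MHHI), which weights each cross-firm share product by owners’ financial and control stakes. The ownership numbers below are hypothetical illustrations, not Roark’s actual capitalization table:

```python
def mhhi(shares, fin, ctl):
    """Modified HHI in the spirit of O'Brien & Salop (2000).

    shares: each firm's market share, in percent.
    fin[i][j]: owner i's financial (profit) interest in firm j.
    ctl[i][j]: owner i's control weight over firm j.
    With each firm wholly owned by its own separate owner, the
    cross terms vanish and the function reproduces the plain HHI.
    """
    firms, owners = range(len(shares)), range(len(fin))
    total = 0.0
    for j in firms:
        denom = sum(ctl[i][j] * fin[i][j] for i in owners)
        for k in firms:
            num = sum(ctl[i][j] * fin[i][k] for i in owners)
            total += shares[j] * shares[k] * num / denom
    return total

# Hypothetical: two chains at 31.4% and 34.9%. "Roark" (owner 0) holds 81.5%
# of firm 0 and all of firm 1; owner 1 is the outside 18.5% stake in firm 0.
shares = [31.4, 34.9]
fin = [[0.815, 1.0], [0.185, 0.0]]  # financial interests (illustrative)
ctl = [[1.0, 1.0], [0.0, 0.0]]      # assume Roark controls both firms
print(round(mhhi(shares, fin, ctl)))  # -> 4442, vs. 2204 if independent
```

Note that partial financial interest combined with full control can push the index above even the full-merger level, because the controlling owner internalizes the other firm’s profits more than proportionally.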
When defining markets and assessing merger effects, it is important to recognize that, in many towns, residents will not have access to the full panoply of options listed in the top 500 chains. (Credit to fellow Sling contributor Basel Musharbash for making this point in a thread.) So even if one were to conclude that the market was larger than LSR Sandwich/Deli chains, it wouldn’t be the case that residents could choose from all such restaurants in the (expanded) relevant market. Put differently, if you live in a town where your only options are Subway, Jimmy John’s, and McDonald’s, the merger could significantly concentrate economic power.
Although this discussion has focused on the harms to consumers, as Brian Callaci points out, the acquisition could allow Roark to exercise buying power vis-à-vis the sandwich shops’ suppliers. And Helaine Olen explains how the merger could enhance Roark’s power over franchise owners. The DOJ recently blocked a book-publisher merger based on a theory of harm to input providers (authors), indicating that consumers no longer sit alone atop the antitrust hierarchy.
While it’s too early to condemn the merger, monopoly-loving economists and libertarians who mocked the concept of Big Sandwich should recognize that there are legitimate economic concerns here. It all depends on how you slice the market!
Seven years ago, Einer Elhauge published a call to arms. In a provocative essay in the Harvard Law Review, he urged the antitrust agencies to bring enforcement actions against what he called horizontal shareholding and what we now call common ownership. Common ownership raises antitrust concerns because investors own shares in two or more competitors. While the investors do not control any of the competitors, their joint ownership may be sufficient to cause the competitors to raise prices or otherwise compete less aggressively.
Elhauge’s most powerful argument was that empirical evidence confirmed the hypothesized effect. In two elaborate studies, Jose Azar and co-authors found that increases in common ownership were associated with significantly higher prices in both the airline industry and the banking industry. Given this evidence and Elhauge’s endorsement, other scholars soon wrote supporting articles. Fiona Scott Morton and Herbert Hovenkamp, and Eric Posner, Fiona Scott Morton, and E. Glenn Weyl were among them.
This initial enthusiasm did not, however, lead to action. There have been no cases and there is no enforcement program. The Department of Justice’s and Federal Trade Commission’s proposed Merger Guidelines do mention common ownership and state that the enforcement agencies have “concerns” with it. But the draft Guidelines do not analyze common ownership in any detail. They do not explain how it might cause anticompetitive effects and what its procompetitive justifications might be. They do not outline any circumstances in which the agencies might challenge common ownership.
This essay suggests that the enforcement agencies ought to take a more muscular approach to common ownership. The Guidelines ought to give it a higher priority and identify the kinds of evidence that might lead to a lawsuit. In what follows, I explain what deflated the initial proposals, argue that those considerations no longer justify the near complete abandonment of interest in common ownership, and outline the evidence that could support a test case.
The Initial Attacks
Two principal arguments derailed the initial proposals. First, critics claimed that the empirical support for the theory was thin and flawed. And initially, the critics were right about the limited support: it consisted of just two studies—the Azar papers mentioned above. Moreover, methodological issues were raised about both studies.
The second attack was perhaps more devastating. Critics argued that no one had explained how common ownership could lead to higher prices or other anticompetitive effects. Of course, a common owner could orchestrate a cartel among the firms whose shares it held. But short of outright collusion—and no one had found evidence of outright collusion—how could this happen? What were the causal mechanisms?
Scott Hemphill and Marcel Kahan analyzed a range of potential mechanisms and concluded that all were either implausible or untested. Lucian Bebchuk, Alma Cohen & Scott Hirst asserted that big index funds charge such low fees that they would not gain any meaningful revenue by pressuring firms to adopt less competitive strategies. Douglas Ginsburg and Keith Klovers stressed that big funds hold shares not only in competitors, but also in vertically related firms, and those vertical investments would undercut their incentive to reduce competition in the relevant market.
These two attacks—on econometrics and governance—sapped the momentum of the initial proposals. Neither the Department of Justice nor the Federal Trade Commission decided to confront common ownership.
The Response
Seven years later, however, the grounds for devoting attention to common ownership are much stronger. There are now fifteen studies that find that higher levels of common ownership are associated with higher prices. Moreover, according to Elhauge, “only two of these empirical studies have been disputed, and the critiques of those two empirical studies have been rebutted at length.” Both the sheer number of studies and their improved methodologies suggest that the evidentiary basis for a challenge to common ownership may now be adequate.
Meanwhile, the corporate governance assault on the theory no longer appears to be so devastating. For one thing, the concern with vertical investments appears to be overstated. It is not clear that funds are as heavily invested in vertically related firms as they are in horizontal competitors. And even if they were, the funds would then be common owners upstream as well as downstream, which would heighten their ability to extract a supracompetitive return from the entire vertical chain.
Second, while index funds do earn small percentage fees, the costs of restricting competition among the firms they hold may be even lower. For example, when funds vote their shares, it costs no more to vote against directors who favor aggressive competition than to vote in their favor.
Third, it now appears that there are a variety of tactics that common owners could plausibly employ to transmit their interest in reduced competition. For instance, many funds regularly communicate with the managements of the firms they hold. They could use those opportunities to press for less discounting, less investment in new capacity, and more emphasis on compensation structures based on industry profits rather than firm-specific profits.
Likewise, funds could withhold their votes when firms or Board candidates propose strategies likely to disrupt the industry consensus. They could vote against hedge funds whose aim is to force the firm to compete more directly against rivals. As Elhauge describes:
[I]n 2015, there was a control contest over management of DuPont, whose main competitor was Monsanto. The fifth largest shareholder of DuPont, the Trian Fund, had no significant shareholdings in Monsanto and launched a control contest designed to replace Dupont’s managers with managers who would behave more competitively against Monsanto. This control contest failed, with the decisive votes to defeat it being cast by the top four shareholders of DuPont (Vanguard, BlackRock, State Street, and Capital Research), who were horizontal shareholders whose financial stake in Monsanto was about twice as high as their financial stake in DuPont.
And active funds (as opposed to index funds) could sell their shares when management embarks on an aggressive campaign to take sales from competitors.
In short, over the last seven years, the case for challenging common ownership has grown. There is much more empirical evidence of adverse effects and significantly greater reason to believe that investment funds can induce corporate officers and directors to curtail their competitive zeal. These developments call for a more active approach to common ownership in the Guidelines.
The Draft Merger Guidelines
The Guidelines ought to analyze common ownership in more depth and explain when it might be challenged. The analysis is straightforward. There is now a substantial literature on the competitive concerns with common ownership, the empirical evidence supporting those concerns, the potential benefits of common ownership, and the hurdles that may prevent common owners from influencing corporate management. The Guidelines can easily describe the major elements of the analysis.
The Guidelines should also identify the kinds of evidence that may warrant a challenge. Three categories seem especially appropriate. First, the relevant market—the market in which the commonly owned firms compete—should be highly concentrated. The agencies can use the same HHI threshold they employ elsewhere in the Guidelines to denote a highly concentrated market (1800).
Second, the level of common ownership should be substantial. The econometric studies measure common ownership by MHHI and the Guidelines should use the level of MHHI that the studies find is likely to be associated with higher prices or other anticompetitive harm.
Third, there should be direct evidence of anticompetitive effects. One could argue, as does Elhauge, that there is no need for direct evidence. The structural evidence (high HHI and substantial MHHI) should be sufficient. After all, fifteen studies have found an association between structural evidence and anticompetitive effects. But the first case challenging common ownership would be a test case and a test case is more likely to succeed—a skeptical court is more likely to accept a novel theory—if there is some direct evidence of harm.
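The first two evidence categories amount to a structural screen that can be stated compactly. The MHHI-delta cutoff below is a placeholder of my own, since the essay leaves the precise level to the empirical studies:

```python
def structural_screen(hhi, mhhi, hhi_cutoff=1800, delta_cutoff=200):
    """First two evidence categories from the essay: a highly concentrated
    market (HHI above the draft Guidelines' 1,800 threshold) plus a
    substantial common-ownership increment (MHHI delta = MHHI - HHI).
    The 200-point delta cutoff is illustrative, not a figure from the essay.
    """
    return hhi > hhi_cutoff and (mhhi - hhi) > delta_cutoff

# Illustrative numbers only:
print(structural_screen(hhi=2400, mhhi=2900))  # True: concentrated, big delta
print(structural_screen(hhi=1500, mhhi=2500))  # False: market not concentrated
```

A case satisfying the screen would then still need the third category, direct evidence of anticompetitive effects, before the agencies filed.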
The government could present evidence that in the relevant market higher levels of common ownership are associated with elevated prices, reduced innovation, or other anticompetitive consequences. That evidence could come from an empirical study, a company document, or a journalistic investigation. But given the number of supporting studies that already exist, such direct evidence of impact may not be necessary. What may be essential, in the first case at least, is evidence that a common owner took an affirmative step to dampen competition, such as a direct communication between the owner and an executive, a comment on an earnings call, or a vote against increased competition. The government ought to offer, if possible, a direct connection between common ownership and anticompetitive harm.
If a test case succeeds, the agencies may pursue an enforcement program against common ownership. At that point, the agencies ought to give guidance to investment funds on the relevant markets that are of greatest concern. Eric Posner and co-authors have proposed a method for providing such notice. At this point, however, it is more important that the Guidelines assign a higher priority to common ownership and describe the circumstances that are most likely to result in an action.
John B. Kirkwood is a Professor at Seattle University School of Law and a member of the American Law Institute.
How many times have you heard from an antitrust scholar or practitioner that merely possessing a monopoly does not run afoul of the antitrust laws? That a violation requires the use of a restraint to extend that monopoly into another market, or to preserve the original monopoly? Here’s a surprise.
Both a plain reading and an in-depth analysis of the text of Section 2 of the Sherman Act demonstrate that this law’s violation does not require anticompetitive conduct, and that it does not have an efficiencies defense. Section 2 of the Sherman Act was designed to impose sanctions on any firm that monopolizes or attempts to monopolize a market. Period. With no exceptions for firms that are efficient or for firms that did not engage in anticompetitive conduct.
This is the conclusion one should reach if one were a judge analyzing the Sherman Act using textualist principles. Like most of the people reading this article, I’m not a textualist. But many judges and Supreme Court Justices are, so this method of statutory interpretation must be taken quite seriously today.
To understand how to read the Sherman Act as a textualist, one must first understand the textualist method of statutory interpretation. This essay presents a textualist analysis of Section 2 that is a condensation of a 92-page law review article, titled “The Sherman Act Is a No-Fault Monopolization Statute: A Textualist Demonstration.” My analysis demonstrates that Section 2 is actually a no-fault statute. Section 2 requires courts to impose sanctions on monopolies and attempts to monopolize without inquiring into whether the defendant engaged in anticompetitive conduct or whether it was efficient.
A Brief Primer on Textualism
As most readers know, a traditionalist approach to statutory interpretation analyzes a law’s legislative history and interprets it accordingly. The floor debates in Congress and relevant Committee reports affect how courts interpret a law, especially in close cases or cases where the text is ambiguous. By contrast, textualism only interprets the words and phrases actually used in the relevant statute. Each word and phrase is given its fair, plain, ordinary, and original meaning at the time the statute was enacted.
Justice Scalia and Bryan Garner, a professor at SMU’s Dedman School of Law, wrote a 560-page book explaining and analyzing textualism. Nevertheless, a basic textualist analysis can be described relatively simply. To ascertain the meaning of the relevant words and phrases in the statute, textualism relies mostly upon definitions contained in reliable and authoritative dictionaries of the period in which the statute was enacted. These definitions are supplemented by analyzing the terms as they were used in contemporaneous legal treatises and cases. Crucially, textualism ignores statutes’ legislative history. In the words of Justice Scalia, “To say that I used legislative history is simply, to put it bluntly, a lie.”
Textualism does not attempt to discern what Congress “intended to do” other than by plainly examining the words and phrases in statutes. A textualist analysis does not add or subtract from the statute’s exact language and does not create exceptions or interpret statutes differently in special circumstances. Nor should a textualist judge insert his or her own policy preferences into the interpretation. No requirement should be read into a law unless, of course, it is explicitly contained in the legislation. No exemption should be inferred to achieve some overall policy goal Congress arguably had unless, of course, the text demands it.
As Justice Scalia wrote, “Once the meaning is plain, it is not the province of a court to scan its wisdom or its policy.” Indeed, if a court were to do so this would be the antithesis of textualism. There are some complications relevant to a textualist analysis of Section 2, but they do not change the results that follow.
A Textualist Analysis of Section 2 of the Sherman Act
A straightforward textualist interpretation of Section 2 demonstrates that a violation does not require anticompetitive conduct, and that the statute applies regardless of whether the firm achieved its position through efficient behavior.
Section 2 of the Sherman Act makes it unlawful for any person to “monopolize, or attempt to monopolize . . . any part of the trade or commerce among the several States . . . .” There is nothing, no language in Section 2, requiring anticompetitive conduct or creating an exception for efficient monopolies. A textualist interpretation of Section 2 therefore needs only to determine what the terms “monopolize” and “attempt to monopolize” meant in 1890. This examination demonstrates that these terms meant the same things they mean today if they are “fairly,” “ordinarily,” or “plainly” interpreted, free from the legal baggage that has grown up around them by a multitude of court decisions.
What Did “Monopolize” Mean in 1890?
When the Sherman Act was passed the word “monopolize” simply meant to acquire a monopoly. The term was not limited to monopolies acquired or preserved by anticompetitive conduct, and it did not exclude firms that achieved their monopoly due to efficient behavior.
As noted earlier, Justice Scalia was especially interested in the definitions of key terms in contemporary dictionaries. Scalia and Garner believe that six dictionaries published between 1851 and 1900 are “useful and authoritative.” All six were checked for definitions of “monopolize.” The principal definition in each was simply that a firm had acquired a monopoly. None required anticompetitive conduct for a firm to “monopolize” a market, or excluded efficient monopolies.
For example, the 1897 edition of Century Dictionary and Cyclopedia defined “monopolize” as: “1. To obtain a monopoly of; have an exclusive right of trading in: as, to monopolize all the corn in a district . . . . ”
Serendipitously, a definition of “monopolize” was given in the Sherman Act’s legislative debates, just before the final vote on the Bill. Although normally a textualist does not care about anything uttered during a congressional debate, Senator Edmunds’s remarks should be significant to a textualist because he quoted from a contemporary dictionary that Scalia considered useful and reliable: “[T]he best answer I can make to both my friends is to read from Webster’s Dictionary the definition of the verb ‘to monopolize.’” He went on:
1. To purchase or obtain possession of the whole of, as a commodity or goods in market, with the view to appropriate or control the exclusive sale of; as, to monopolize sugar or tea.
There was no requirement of anticompetitive conduct, or exception for a monopoly efficiently gained.
These definitions are essentially the same as those in the 1898 and 1913 editions of Webster’s Dictionary. The four other dictionaries of the period Scalia & Garner considered reliable also contained essentially identical definitions. The first edition of the Oxford English Dictionary, from 1908, also contained a similar definition of “monopolize:”
1 . . . . To get into one’s hands the whole stock of (a particular commodity); to gain or hold exclusive possession of (a trade); . . . . To have a monopoly. . . . 2 . . . . To obtain exclusive possession or control of; to get or keep entirely to oneself.
Not only does the 1908 Oxford English Dictionary equate “monopolize” with “monopoly,” but nowhere does it require a monopolist to engage in anticompetitive conduct.
Moreover, all but one of the definitions in Scalia’s preferred dictionaries do not limit monopolies to firms making every sale in a market. They roughly correspond to the modern definition of “monopoly power,” by defining “monopolize” as the ability to control a market. The 1908 Oxford English Dictionary defined “monopolize” in part as “[t]o obtain exclusive possession or control of.” Webster’s defined monopolize as “with the view to appropriate or control the exclusive sale of.” Stormonth defined a monopolist as “one who has command of the market.” Latham defined monopolize as “to have the sole power or privilege of vending . . . .” And Hunter & Morris defined monopolize as “to have exclusive command over.”
In summary, every one of Scalia’s preferred period dictionaries defined “monopolize” as simply gaining all the sales in a market or control of a market. A textualist analysis of contemporary legal treatises and cases yields the same result. None required conduct we would today characterize as anticompetitive, or excluded a firm that gained a monopoly by efficient means.
A Textualist Analysis of “Attempt to Monopolize”
A textualist interpretation of Section 2 should analyze the word “attempt” as it was used in the phrase “attempt to monopolize” circa 1890. However, no unexpected or counterintuitive result comes from this examination. Circa 1890 “attempt” had its colloquial 21st Century meaning, and there was no requirement in the statute that an “attempt to monopolize” required anticompetitive conduct or excluded efficient attempts.
The “useful and authoritative” 1897 Century Dictionary and Cyclopedia defines “attempt” as:
1. To make an effort to effect or do; endeavor to perform; undertake; essay: as, to attempt a bold flight . . . . 2. To venture upon: as, to attempt the sea.— 3. To make trial of; prove; test . . . . .
The 1898 Webster’s Dictionary gives a similar definition: “Attempt . . . 1. To make trial or experiment of; to try. 2. To try to move, subdue, or overcome, as by entreaty.” The Oxford English Dictionary, which defined “attempt” in a volume published in 1888, similarly reads: “1. A putting forth of effort to accomplish what is uncertain or difficult . . . .”
However, the word “attempt” in a statute did have a specific meaning under the common law circa 1890. It meant “an intent to do a particular criminal thing, with an act toward it falling short of the thing intended.” One definition stated that the act needed to be “sufficient both in magnitude and in proximity to the fact intended, to be taken cognizance of by the law that does not concern itself with things trivial and small.” But no source of the period defined the magnitude or nature of the necessary acts with great specificity (indeed, a precise definition might well be impossible).
It is noteworthy that in 1881 Oliver Wendell Holmes wrote about the attempt doctrine in his celebrated treatise, The Common Law:
Eminent judges have been puzzled where to draw the line . . . the considerations being, in this case, the nearness of the danger, the greatness of the harm, and the degree of apprehension felt. When a man buys matches to fire a haystack . . . there is still a considerable chance that he will change his mind before he comes to the point. But when he has struck the match . . . there is very little chance that he will not persist to the end . . .
Congress’s choice of the phrase “attempt to monopolize” surely built upon the existing common law definitions of an “attempt” to commit robbery and other crimes. Although the meaning of a criminal “attempt” to violate a law has evolved since 1890, a textualist approach towards an “attempt to monopolize” should be a “fair” or “ordinary” interpretation of these words as they were used in 1890, ignoring the case law that has arisen since then. It is clear that acts constituting mere preparation or planning should be insufficient. Attempted monopolization should also require the intent to take over a market and at least one serious act in furtherance of this plan.
But “attempted monopolization” under Section 2 should not require the type of conduct we today consider anticompetitive, or exempt efficient conduct. Because current case law only imposes sanctions under Section 2 if a court decides the firm engaged in anticompetitive conduct, this case law was wrongly decided. It should be overturned, as should the case law that excuses efficient attempts.
Moreover, attempted monopolization’s current “dangerous probability” requirement should be modified significantly. Today it is quite unusual for a court to find that a firm illegally “attempted to monopolize” if it possessed less than 50 percent of a market. But under a textualist interpretation of Section 2, suppose a firm with only a 30 percent market share seriously tried to take over a relevant market. Isn’t such a firm often capable of seriously attempting to monopolize the market? And, of course, attempted monopolization shouldn’t have an anticompetitive conduct requirement or an efficiency exception.
Textualists Should Be Consistent, Even If That Means More Antitrust Enforcement
Where did the exception for efficient monopolies come from? How did the requirement that anticompetitive conduct is necessary for a Section 2 violation arise? They aren’t even hinted at in the text of the Sherman Act. Shouldn’t we recognize that conservative judges simply made up the anticompetitive conduct requirement and efficiency exception because they thought this was good policy? This is not textualism. It’s the opposite of textualism.
No-fault monopolization embodies a love for competition and a distaste for monopoly so strong that it does not even undertake a “rule of reason” style economic analysis of the pros and cons of particular situations. It’s like a per se statute insofar as it should impose sanctions on all monopolies and attempts to monopolize. At the remedy stage, of course, conduct-oriented remedies often have been, and should continue to be, found appropriate in Section 2 cases.
The current Supreme Court is largely textualist, but also extremely conservative. Would it decide a no-fault case in the way that textualism mandates?
Ironically, when assessing the competitive effects of the Baker Hughes merger, (then) Judge Thomas changed the language of the statute from “may be substantially to lessen competition” to “will substantially lessen competition,” despite considering himself to be a textualist. So much for sticking to the language of the statute!
Until recently, textualism has only been used to analyze an antitrust law a modest number of times. This is ironic because, even though textualism has historically only been championed by conservatives, a textualist interpretation of the antitrust laws should mean that the antitrust statutes will be interpreted according to these laws’ original aggressive, populist and consumer-oriented language.
Robert Lande is the Venable Professor of Law Emeritus at the University of Baltimore Law School.
After more than a year of aggressive rate hikes, the Federal Reserve has now held rates steady after each of the past two Federal Open Market Committee meetings. After peaking at levels not seen in decades, inflation has leveled off in the three-to-four percent range for months now. On top of that, job openings and consumption both seem to have slowed notably. All of this adds new context to the debate between proponents and opponents of Fed hawkishness.
When elevated inflation first became a serious concern following macroeconomic shocks—from a global pandemic, huge recession fighting policy, and (later) the Russian invasion of Ukraine—economists and pundits quickly split into two broad camps on what was happening. On the one side, there were those who saw high inflation as a passing issue due to serious disruptions caused by giant exogenous shocks. That group, dubbed “team transitory,” believed that this bout of inflation was not due to overstimulation of the economy. On the other side were those who insisted inflation was being driven by the demand side; they argued that the fiscal stimulus had been too large, and that the job market in the recovery was too strong. That view relied on the idea that prices were responding to elevated demand from excess savings, rather than price shocks in the supply chain or corporate price manipulation.
In retrospect, the evidence shows that team transitory was right (although additional shocks to the macroeconomy kept inflation high longer than most of them predicted). And yet, despite the mounting evidence and the early signs of economic cooling, the Fed has not reversed course. A big part of the reason is a compulsion to get inflation down to two percent. But that fixation now poses a serious threat to our economic well-being.
Back at the start of this year, I wrote a piece covering the Federal Reserve’s two percent inflation target and why taking it as gospel is misguided. Since then, there has been considerable discussion about whether the target rate should be changed, with the case for abandoning two percent made in The Financial Times last spring by Columbia University’s Adam Tooze. Following that, the FT published a letter to the editor arguing against Tooze’s point, Harvard economist Jason Furman agreed that it was worth reconsidering, and former Treasury Secretary (and sleazy fintech businessman) Larry Summers thoroughly dissented.
As Nobel Laureate Paul Krugman has written, the primary concern about easing the two percent target revolves around nebulous fretting over the credibility of the Fed. As he put it, proponents “fear that if they ease off at, say, 3 percent inflation, markets and the public will wonder whether they will eventually accept 4 percent, then 5 percent and so on.” Such concerns seem rather oblivious to the Fed’s extremely strong (potentially too strong) inflation-fighting strategy. Surely, the Fed has built up enough of an inflation-hawk reputation that it can take a slight hit. Moreover, despite inflation still running slightly high, financial markets seem at ease with the current level, and inflation expectations have remained anchored. All of this makes warnings of a loss of faith in the Fed seem like a bit of a reach.
Before we get into the weeds, it’s worth explaining the origins of the two percent figure. Why that specific number? As I wrote, “the target is more tradition than science.” The exact figure originated from a television interview with the New Zealand Finance Minister in the 1980s. At the time, New Zealand was experiencing serious inflation, nearly ten percent, and the government wanted to give the central bank a codified target. Since then, two percent has become the norm among rich countries. However, not every central bank that uses it as a baseline clings to it as aggressively as the Fed; a number of them, including the Bank of Canada and the Reserve Bank of Australia, use a more flexible version of the target. In Canada, the target is two percent plus or minus one percent. In Australia, it’s two to three percent. A quick glance shows that such ranges are not at all unusual. Until relatively recently, the Fed’s target was de facto similarly flexible.
Indeed, the exact figure was only officially adopted by the Fed under then-Chair Ben Bernanke in 2012 (though it had been tacitly endorsed since 1996). In the 1990s, future Fed Chair Janet Yellen was among those who pushed for a higher target rate to allow more discretion on the Fed’s part and guard against deflation.
Additionally, as Krugman has explained, two percent also became typical because it functioned as something of a compromise between economists who wanted absolute price stability (a zero percent inflation target) and those who wanted positive rates to give central banks more room to fight recessions by allowing for a lower real interest rate.
There are arguments about why such a target is good, but practically none of them are specific to two percent. Because inflation measurements tend to skew higher than the true level, it can be important to have a positive target even if the goal is to have functionally no inflation. Certainly, in order to have stable prices, we must have a target that’s relatively low. But that explains why two percent is preferable to, say, ten percent, not why it’s any better than slightly higher inflation. In fact, work by scholars at the University of Massachusetts shows that three to four percent levels don’t constrain growth and can be conducive to stronger economic performance than inflation of two percent.
There is a very good reason, however, why the target needs to be a low positive number. If the target is zero percent or lower, there is a higher risk of deflation, in which people’s money becomes more valuable over time. Deflation can trigger a recession because the return generated simply by parking assets deters people and firms from spending and investing; they opt to sit on their money instead. That tanks the “velocity” of money, an econ term for how freely money circulates in the economy, and a healthy economy needs money to be moving.
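The “velocity” at work here is the V in the textbook quantity equation of exchange, a standard identity rather than anything specific to this essay:

```latex
MV = PY
% M: money supply, V: velocity of money,
% P: price level, Y: real output.
% Holding M fixed, a collapse in V (money sitting idle)
% pulls nominal spending PY down with it, which is the
% recessionary channel described above.
```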
A positive target also allows monetary policy to better fight recessions. While theoretically possible, banks (including central banks) don’t offer negative nominal interest rates. If they did, no one would keep money with them unless forced to (which in turn means depositors wouldn’t really be affected by rate changes). What central banks can do is create negative real interest rates, but only if inflation is above zero. (The real interest rate equals the nominal rate minus the inflation rate.)
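The arithmetic behind that parenthetical is worth seeing once; the numbers below are illustrative, not actual Fed figures:

```latex
r = i - \pi
% r: real interest rate, i: nominal rate, \pi: inflation rate.
% Example: with a nominal policy rate of 1\% and inflation of 3\%,
r = 0.01 - 0.03 = -0.02
% a real rate of -2\%, which is unreachable when \pi \le 0
% unless the nominal rate itself goes negative.
```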
On the other hand, there are reasons why holding tightly to two percent is bad policy. To start, it commits the Fed to prioritizing aggressive inflation fighting over the other half of its mandate: maintaining full employment. Committing to such a low target and refusing to reevaluate it is a promise to sacrifice jobs in order to reach that level of inflation. Particularly given that there is no strong empirical evidence that two percent is hugely preferable to three or four percent, there is no reason for the Fed to create conflict between its dual mandates where none need exist. This is further exacerbated by the de-linking of employment and inflation once captured by the Phillips Curve. In the United States, that relationship simply no longer holds. As a result, higher rates from the Fed can force investment and employment down without making a dent in inflation.
The obvious counter to such an argument is that the rate hikes haven’t triggered a recession, spiked unemployment, or seriously undermined investment. To the extent that this is true, that can, in itself, be a reason to stop relying on high interest rates to lower inflation; employment is the mechanism by which rate hikes would be expected to influence inflation. The fact that inflation fell without a recession or mass unemployment clearly demonstrates that keeping rates high in pursuit of a two percent target is misguided.
Remember, also, that when the rate hikes started the economy was very strong. And new job openings have fallen since then. Given the significant lag between rate changes and observable macroeconomic adjustments, it’s entirely possible that we are heading in that direction and it’s just taking a while. Regardless, maintaining high rates that risk undermining the labor market and the broader economy still isn’t worth it when it isn’t achieving any meaningful policy goals.
Additionally, given the trend of secular stagnation, there’s reason to believe that slightly higher inflation is necessary to fight future recessions without hitting the zero lower bound. In an economy experiencing secular stagnation, negative real interest rates become more important because nominal interest rates will stay low most of the time to encourage investment rather than saving. Ironically, this theory was popularized by Larry Summers, now one of the champions of inane defenses of central banking as usual.
And, as Adam Tooze pointed out, higher interest rates being deployed to push inflation down can also stress banks and depress developing economies. The Fed’s elevated interest rates create a higher cost of borrowing, undermining banks’ ability to cover any current shortfalls. As the mantra goes, banks borrow short and lend long. All fixed-rate loans that they made before the rate hikes can be locked into a lower rate than the bank can borrow at, meaning they lose more in interest payments than they earn. And if there’s a bank collapse, that can easily spark a financial crisis and lead to a recession.
Developing countries, meanwhile, are going to get loans on much worse terms—that might be difficult to pay off—while rates are high. That in turn could undermine their ability to build out infrastructure and new industries, causing lost income. In all, this means gains from global trade will be lower than they could be, keeping poverty, underdevelopment, and global inequality worse than they might otherwise be.
So why is the Fed’s credibility, rather than resting on good policy, tied intimately to a target of two percent? Former investment banker Stephen King wrote in response to Tooze, “choosing to raise targets when inflation has persistently surprised on the upside smacks of no more than short-run political opportunism.” Similarly, Summers wrote that:
…the chairman needs to respond explicitly or implicitly to the growing chorus suggesting that the Fed should adjust its inflation target. For years, the Fed has been firm in its commitment to 2 percent. Of course, there are legitimate academic arguments about the merits of having a numerical target and, if so, what it should be. But timing and context are crucial.
But their argument runs counter to what is supposed to be the bedrock of Fed credibility: a commitment to following the data. Although both King and Summers concede that there are good academic arguments for changing the target, they argue that now is not the time. But the opposite is true—changing the target now is ideal because it would epitomize the Fed’s commitment to following the evidence and maintaining its dual mandate. All of the best available evidence shows that monetary policy cannot possibly be responsible for disinflation. The only theoretical mechanism for it to have done so would be via the employment rate, which remains strong.
To continue to obsess over two percent simply commits Powell to a course of action that will betray half of the Fed’s mandate and runs counter to the best evidence available. No one seriously advocating a change is calling for a hairpin U-turn. Indeed, they can even follow Furman’s step-by-step guide on how to properly change course.
Additionally, there are reasons why abandoning or altering the two percent target very soon is appropriate, beyond the general issues outlined above. For one, the harm of a recession right now would likely be worse for ordinary people because of the extremely high interest rates. If a recession were to begin before the Fed starts lowering rates, then the job loss and decreased economic mobility that comes with it would also be paired with a very high cost to borrow. That means that people who don’t have significant savings and lose their jobs will find it more costly to use credit cards, personal loans, or home equity to fill the gap until they find work.
On top of that, the high interest rates are a barrier to people buying houses, which has multiple downstream impacts. To start, it has locked many out of using a home to build equity, one of the biggest forms of wealth building in the American economy (and eliminates one possible form of borrowing for a lot of folks). It also forces more people into rental properties, which see rent increases because of the additional demand (and rent is already high because of residential price fixing). Finally, it hurts people who already own homes. High rates can make it prohibitively expensive for homeowners to move, even when they would have more opportunities somewhere else. Between getting less money from selling their home and extreme mortgage interest rates, moving would probably mean either lowering their standard of living or becoming a renter, unless they moved somewhere with a much lower cost of living.
The Fed has even seemingly acknowledged that the target is less than ideal; it frames the target as a long-term average of two percent inflation. But that doesn’t actually increase flexibility, because it only lets the Fed ease inflation fighting in the present to the extent that it is confident inflation will run below two percent in the future to average things out. A much better and simpler solution would be to revise the target upward to three percent, or to adopt a range of two percent plus or minus one percent; either would no longer call for elevated interest rates, and both have international precedent.
When Paul Volcker’s war on inflation ended not quite half a century ago, the inflation level was still four percent. And the following Reagan years are remembered for a robust economy featuring a historic presidential re-election. The hyperfixation on getting down to two percent accomplishes little—well, unless returning Trump to office is one of Powell’s goals—and risks a whole lot. It exposes banks to huge interest rate risk, makes it harder for developing countries to build themselves up, limits housing options, makes people more vulnerable if a recession does come, and creates an ever-present threat of causing mass unemployment or major cuts to economic investment. Meanwhile, virtually all of the good parts of the target will still apply—some even more so—with a slightly higher or more flexible target.
Dylan Gyauch-Lewis is a researcher at the Revolving Door Project.
Over 100 years ago, Congress responded to railroad and oil monopolies’ stranglehold on the economy by passing the United States’ first-ever antitrust laws. When those reforms weren’t enough, Congress created the Federal Trade Commission to protect consumers and small businesses from predation. Today, unchecked monopolies again threaten economic competition and our democratic institutions, so it’s no surprise that the FTC is bringing a historic antitrust suit against one of the biggest fish in the stream of commerce: Amazon.
Make no mistake: modern-day monopolies, particularly the Big Tech giants (Amazon, Apple, Alphabet, and Meta), are active threats to competition and consumers’ welfare. In 2020, the House Antitrust Subcommittee concluded an extensive investigation into Big Tech’s monopolistic harms by condemning Amazon’s monopoly power, which it used to mistreat sellers, bully retail partners, and ruin rivals’ businesses through the use of sellers’ data. The Subcommittee’s report found that, as both the operator of and participant in its marketplace, Amazon functions with “an inherent conflict of interest.”
The FTC’s lawsuit builds on those findings by targeting Amazon’s notorious practice of “self-preferencing,” in which the company gathers private data on what products users are purchasing, creates its own copies of those products, then lists its versions above any competitors’ in user searches. Moreover, by bullying sellers looking to discount their products on other online marketplaces, Amazon has forced consumers to fork over more money than they would have in a truly competitive environment.
But perhaps the best evidence of Amazon’s illegal monopoly power is how hard the company has worked for years to quash any investigation into its actions. For decades, Amazon has relied on the classic “revolving door” strategy of poaching former FTC officials to become its lobbyists, lawyers, and senior executives. This way, the company can use their institutional knowledge to fight the agency and criticize strong enforcement actions. These “revolvers” defend the business practices that their former FTC colleagues argue push small businesses past their breaking points. They also help guide Amazon’s prodigious lobbying efforts, which reached a corporate record in 2022 amidst an industry-wide spending spree in which “the top tech companies spent nearly $70 million on lobbying in 2022, outstripping other industries including pharmaceuticals and oil and gas.”
Amazon’s in-house legal and policy shops are absolutely stacked full of ex-FTC officials and staffers. In less than two years, Amazon absorbed more than 28 years of FTC expertise with just three corporate counsel hires: ex-FTC officials Amy Posner, Elisa Kantor Perlman and Andi Arias. The company also hired former FTC antitrust economist Joseph Breedlove as its principal economist for litigation and regulatory matters (read: the guy we’re going to call as an expert witness to say you shouldn’t break us up) in 2017.
It goes further than that. Last year, Amazon hired former Senate Judiciary Committee staffer Judd Smith as a lobbyist after he previously helped craft legislation to rein in the company and other Big Tech giants. Amazon also contributed more than $1 million to the “Competitiveness Coalition,” a Big Tech front group led by former Sen. Scott Brown (R-MA). The coalition counts a number of right-wing, anti-regulatory groups among its members, including the Competitive Enterprise Institute, a notorious purveyor of climate denialism, and National Taxpayers Union, an anti-tax group regularly gifted op-ed space in Fox News and the National Review.
This goes to show the lengths to which Amazon will go to avoid oversight from any government authority. True, the FTC has finally filed suit against Amazon, and that is a good thing. But Amazon, throughout its pursuit of ever-growing monopoly power, hired its team of revolvers precisely for this moment. These ex-officials bring along institutional knowledge that will inform Amazon’s legal defense. They will likely know the types of legal arguments the FTC will rely on, how the FTC conducted its pretrial investigations, and the personalities of the major players in the case.
This knowledge is invaluable to Amazon. It’s like hiring the assistant coach of an opposing team and gaining access to their playbook: you know what’s coming before it happens, and you can prepare accordingly. Not only that, but this stream of revolvers makes it incredibly difficult to gauge some regulators’ dedication to enforcing the law against corporate behemoths. How is the public expected to trust its federal regulators to protect it from monopoly power when a large swath of the regulatory workforce might be waiting for a monopoly to hire them? (That, of course, is why we need both better pay for public servants and stricter restrictions on public servants revolving out to the corporations they were supposedly regulating.)
While spineless revolvers make a killing defending Amazon, the actual people and businesses affected by its strong-arming tactics are applauding the FTC’s suit. Following the FTC’s filing, sellers praised the agency on Amazon’s Seller Central forum, calling the suit “long overdue” and Amazon’s model a “race to the bottom.” One commenter even wrote that they will be applying to the FTC once Amazon’s practices force them off the platform. This is the type of revolving we may be able to support. When the FTC is staffed with people who care more about reining in monopolies than about receiving hefty paychecks from them in the future (e.g., Chair Lina Khan), we get cases that actually protect consumers and small businesses.
The FTC’s suit against Amazon signals that the federal government will no longer stand by as monopolies hollow out the economy and corrupt the inner workings of our democracy, but the revolvers will make every step difficult. They will be in the corporate offices and federal courtrooms, advising Amazon on how best to undermine their former employer’s legal standing. They will be in the media, claiming the objectivity of a former regulator while running cover for Amazon’s shady practices in stories the business press will gobble up. The prevalence of these revolvers makes it difficult for current regulators to succeed while simultaneously undermining public trust in a government that should work for people, not corporations. Former civil servants who put cash from Amazon over the regulatory mission to which they were once committed are turncoats to the public good. They should be scorned by the public and ignored by government officials and media alike.
Andrea Beaty is Research Director at the Revolving Door Project, focusing on anti-monopoly, executive branch ethics and housing policy. KJ Boyle is a research intern with the Revolving Door Project. Max Moran is a Fellow at the Revolving Door Project. The Revolving Door Project scrutinizes executive branch appointees to ensure they use their office to serve the broad public interest, rather than to entrench corporate power or seek personal advancement.
The Federal Trade Commission has accused Amazon of illegally maintaining its monopoly, extracting supra-competitive fees on merchants that use Amazon’s platform. If and when the fact-finder determines that Amazon violated the antitrust laws, we propose structural remedies to address the competitive harms. Behavioral remedies have fallen out of favor among antitrust scholars. But the success of a structural remedy cannot be taken for granted.
To briefly review the bidding: the FTC’s Complaint alleges that Amazon prevents merchants from steering customers to a lower-cost platform—that is, a platform that charges a lower take rate—by offering discounts off the price charged on Amazon. Amazon threatens merchants’ access to the Buy Box if they are caught charging a lower price outside of Amazon, a variant of a most-favored-nation (MFN) restriction. In other words, Amazon won’t allow merchants to share any portion of their savings with customers as an inducement to switch platforms; doing so would put downward pressure on Amazon’s take rate, which has climbed from 35 to 45 percent since 2020, according to the Institute for Local Self-Reliance (ILSR).
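To see why the MFN protects Amazon’s take rate, consider a stylized example; the 45 percent figure comes from the discussion above, while the rival’s 30 percent take rate is purely hypothetical:

```latex
% A merchant selling at price p on Amazon keeps
(1 - 0.45)\,p = 0.55\,p
% On a rival platform with a (hypothetical) 30% take rate,
% the merchant could cut its price to p' and still net the same:
0.70\,p' = 0.55\,p \quad\Rightarrow\quad p' \approx 0.79\,p
% The MFN bars that discount, so customers never see the
% roughly 21% lower price that could pull them to the rival.
```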
The Complaint also alleges that Amazon ties its fulfillment services to access to Amazon Prime. Given the importance of Amazon Prime to survival on Amazon’s Superstore, Amazon’s policy is effectively conditioning a merchant’s access to its Superstore on an agreement to purchase Amazon’s fulfillment, often at inflated rates. Finally, the Complaint alleges that Amazon gives its own private-label brands preference in search results.
These are classic exclusionary restraints that, in another era, would instinctively have been addressed via behavioral remedies: ban the MFN, ban the tie-in, and ban the self-preferencing. But that would be wrongheaded, as doing so would entail significant ongoing oversight by enforcement authorities. As the DOJ Merger Remedies Manual states, “conduct remedies typically are difficult to craft and enforce.” To the extent that a remedy is fully conduct-based, it should be disfavored. The Remedies Manual does, however, appear to approve of conduct relief in support of structural relief: “Tailored conduct relief may be useful in certain circumstances to facilitate effective structural relief.”
Instead, there should be complete separation of the fulfillment services from the Superstore. In a prior piece for The Sling, we discussed two potential remedies for antitrust bottlenecks: the Condo and the Coop. In what follows, we explain why the Condo approach is a good remedy for the Amazon platform bottleneck and the Coop approach a good remedy for the fulfillment center system. Our proposed remedy has the merit of allowing market mechanisms to function, bypassing the need for continued oversight once the structural remedies are deployed.
Breaking Up Is Hard To Do
Structural remedies to monopolization have, in the past, created worry about continued judicial oversight and regulation. “No one wants to be Judge Greene.” He spent the bulk of his remaining years on the bench having his docket monopolized by disputes arising from the breakup of AT&T. Breakup had also been sought in the case of Microsoft. But the D.C. Circuit, citing improper communications with the press prior to issuance of Judge Jackson’s opinion and his failure to hold a remedy hearing prior to ordering divestiture of Microsoft’s operating system from the rest of the company, remanded the case for determination of remedy to Judge Kollar-Kotelly.
By that juncture of the proceeding, a new Presidential administration brought a sea change by opposing structural remedies not only in this case but generally. Such an anti-structural policy conflicts with the pro-structural policy set forth in Standard Oil and American Tobacco—that the remedy for unlawful monopolization should be restructuring the enterprises to eliminate the monopoly itself. The manifest problem with the AT&T structural remedy and the potential problem with the proposed remedy in Microsoft is that neither removed the core monopoly power that existed, thus retaining incentives to engage in anticompetitive conduct and generating continued disputes.
The virtue of the structural approaches we propose is that once established, they should require minimal judicial oversight. The ownership structures would create incentives to develop and operate the bottlenecks in ways that do not create preferences or other anticompetitive conduct. With an additional bar to re-acquisition of critical assets, such remedies are sustainable and would maximize the value of the bottlenecks to all stakeholders.
Turn Amazon’s Superstore into a Condo
The condominium model is one in which the users would “own” their specific units as well as collectively “owning” the entire facility. But a distinct entity would provide the administration of the core facility. Examples of such structures include the current rights to capacity on natural gas pipelines, rights to space on container ships, and administration for standard essential patents and for pooled copyrights. These examples all involve situations in which participants have a right to use some capacity or right but the administration of the system rests with a distinct party whose incentive is to maximize the value of the facility to all users. In a full condominium analogy, the owners of the units would have the right to terminate the manager and replace it. Thus, as long as there are several potential managers, the market would set the price for the managerial service.
A condominium model requires the easy separability of the management of the bottleneck from the uses being made of it. The manager would coordinate the uses and maintain the overall facility, while the owners of access rights use the facility as needed.
Another feature of this model is that when the rights of use or access are constrained, they can be made tradable, much as a condo owner may elect to rent the condo to someone who values it more. Scarcity in a bottleneck creates the potential for discriminatory exploitation whenever a single monopolist holds those rights. Distributing access rights among many owners removes the incentive for discriminatory or exclusionary conduct; an individual owner has only the opportunity to earn rents (high prices) from the sale or lease of its capacity entitlement. Thus, dispersion of interests clearly changes the incentives of a rights holder. This in turn means that the kinds of disputes seen in AT&T’s breakup are largely or entirely eliminated.
The FTC suggests skullduggery in the operation of the Amazon Superstore, namely the degrading of search results via self-preferencing:
Amazon further degrades the quality of its search results by burying organic content under recommendation widgets, such as the “expert recommendation” widget, which display Amazon’s private label products over other products sold on Amazon.
Moreover, in a highly redacted area of the complaint, the FTC alleges that Amazon has the ability to “profitably worsen its services.”
The FTC also alleges that Amazon bars merchants from “multihoming”:
[Multihoming is] simultaneously offering their goods across multiple online sales channels. Multihoming can be an especially critical mechanism of competition in online markets, enabling rivals to overcome the barriers to entry and expansion that scale economies and network effects can create. Multihoming is one way that sellers can reduce their dependence on a single sales channel.
If the Superstore were a condo, the vendors would be free to decide how much to focus on this platform in comparison to other platforms. Merchants would also be freed from the MFN, as the condo owner would not attempt to ban merchants from steering customers to a lower-cost platform.
Condominiumization of the Amazon Superstore would go a long way to reducing what Cory Doctorow might call the “enshittification” of the Amazon Superstore. Given its dominance over merchants, it would probably be necessary to divest and rebrand the “Amazon basics” business. Each participating vendor (retailer or direct selling manufacturer) would share in the ownership of the platform and would have its own place to promote its line of goods or services.
The most challenging issue is how to handle product placement on the overall platform. Given the administrator’s role as the agent of the owners, the administrator should seek to offer a range of options, or leave it to the owners themselves to create joint ventures to promote products. Alternatively, specific premium placement could go to the vendors that value the placement the most, rather than to whoever owns the platform, with the revenue shared among the owners of the condo. The platform administrator would thus have as its goal maximizing the value of the platform to all stakeholders. This would also potentially resolve some of the advertising issues. According to the Complaint,
Amazon charges sellers for advertising services. While Amazon also charges sellers other fees, these four types constitute over [redacted] % of the revenue Amazon takes in from sellers. As a practical matter, most sellers must pay these four fees to make a significant volume of sales on Amazon.
Condo ownership would mean that the platform constituents could choose which services they purchase from the platform, thereby escaping the harms of Amazon’s tie-in. Constituents could deploy advertising resources more efficiently because they would not be locked into the platform or compelled to buy from it.
Optimization would include information necessary for customer decision-making. One of the other charges in the Complaint was the deliberate concealment of meaningful product reviews:
Rather than competing to secure recommendations based on quality, Amazon intentionally warped its own algorithms to hide helpful, objective, expert reviews from its shoppers. One Amazon executive reportedly said that “[f]or a lot of people on the team, it was not an Amazonian thing to do,” explaining that “[j]ust putting our badges on those products when we didn’t necessarily earn them seemed a little bit against the customer, as well as anti-competitive.”
Making the platform go condo does not necessarily mean that all goods would be treated equally by customers; that is the nature of competition. In terms of customer information, however, a condominiumized platform would give sellers equal and nondiscriminatory access to the platform and allow them to promote themselves based upon their own, non-compelled expenditures.
Turn Amazon’s Fulfillment Center into a Coop
The Coop model envisions shared user ownership, management, and operation of the bottleneck. Such transformation of ownership should change the incentives governing the operation and potential expansion of the bottleneck.
The individual owner-user stands to gain little by trying to impose a monopoly price on users, including itself, or by restricting access to the bottleneck by new entrants. So long as there are many owners, the primary objective should be to manage the entity so that it operates efficiently and with as much capacity as possible.
This approach is for enterprises that require substantial continued engagement of the participants in the governance of the enterprise. With such shared governance, the enterprise will be developed and operated with the objective of serving the interest of all participants.
The more the bottleneck interacts directly with other aspects of the users’ or suppliers’ activity, the more those parties will benefit from active involvement in the decisions about the nature and scope of the activity. Historically, cooperative grain elevators and creameries provided responses to bottlenecks in agriculture. Contemporary examples could include a computer operating system, an electric transmission system, or a social media platform. In each, there are myriad choices to be made about design or location or both. Different stakeholders will have different needs and desires. Hence, the challenge is to find a workable balance of interests that maximizes the overall value of the system for its participants rather than serving only the interests of a single owner.
This method requires that no party or group dominates the decision processes, and all parties recognize their mutual need to make the bottleneck as effective as possible for all users. Enhancing use is a shared goal, and the competing experiences and needs should be negotiated without unilateral action that could devalue the collective enterprise.
As explained above, Amazon’s tie-in effectively requires that all vendors using its platform also use Amazon’s fulfillment services. Yet distribution is distinct from online selling. Hence, the distribution system should be structurally separated from the online superstore. Indeed, vendors using the platform condo may not wish to participate in the distribution system regardless of access. Conversely, vendors not using the condo platform might value the fulfillment services for orders received on their own platforms. Still other vendors might find multi-homing to be the best option for sales. As the Complaint points out, multi-homing may give rise to other benefits if sellers are not locked into Amazon’s distribution:
Sellers could multihome more cheaply and easily by using an independent fulfillment provider (a provider not tied to any one marketplace) to fulfill orders across multiple marketplaces. Permitting independent fulfillment providers to compete for any order on or off Amazon would enable them to gain scale and lower their costs to sellers. That, in turn, would make independent providers even more attractive to sellers seeking a single, universal provider. All of this would make it easier for sellers to offer items across a variety of outlets, fostering competition and reducing sellers’ dependence on Amazon.
The FTC Complaint alleges that Amazon has monopoly power in its fulfillment services, a nationwide complex of specialized warehouses and delivery services. The FTC is apparently asserting that this system has such economies of scale and scope that it occupies a monopoly bottleneck for the distribution of many kinds of consumer goods. If a single firm controlled this bottleneck, it would have incentives to engage in exploitative and exclusionary conduct. Our proposed remedy is a cooperative model, under which the goal of the owners would be to minimize the costs of providing the necessary service. These users would need to be more directly involved in the operation of the distribution system as a whole to ensure its development and operation as an efficient distribution network.
Indeed, its users might not be exclusively users of the condominiumized platform. As with other cooperatives, the proposal is that those who want to use the service would join and then participate in the management of the service. Separating distribution from the selling platform would also enhance competition between sellers who opt to use the cooperative distribution and those that do not. For those that join the distribution cooperative, the ability to tailor those distribution services without the anticompetitive constraints created by the system’s former owner (Amazon) would likely result in reduced delivery costs.
Separation of Fulfillment from Superstore Is Essential for Both Models
We have proposed some remedies to the problems articulated in the FTC’s Amazon Complaint—at least as presented in the redacted version. We therefore end with some caveats.
First, we do not have access to the unredacted Complaint. To the extent that information in the unredacted version might make either of our remedies unworkable, we cannot evaluate that information at this time.
Second, these condo and cooperative proposals go hand in hand with other structural remedies. There should be separation of the fulfillment services from the Superstore, and Amazon Brands might have to be divested or restructured. Moreover, their recombination should be permanently prohibited. These are necessary conditions for both remedies to function properly.
Third, in both the condo and coop models, governance structures must be in place to ensure that neither the fulfillment services nor the Superstore is recaptured by a dominant player. In most instances, a proper governance structure would bar that. The government should not hesitate to step in should capture become evident.
Peter C. Carstensen is a Professor of Law Emeritus at the University of Wisconsin-Madison Law School. Darren Bush is a Professor of Law at the University of Houston Law Center.