Economic Analysis and Competition Policy Research


The Justice Department’s pending antitrust case against Google, in which the search giant is accused of illegally monopolizing the market for online search and related advertising, revealed the nature and extent of a revenue sharing agreement (“RSA”) between Google and Apple. Pursuant to the RSA, Apple gets 36 percent of advertising revenue from Google searches by Apple users—a figure that reached $20 billion in 2022. The RSA has not been investigated in the EU. This essay briefly recaps the EU law on remedies and explains why choice screens, the EU’s preferred approach, are the wrong remedy focused on the wrong problem. Restoring effective competition in search and related advertising requires (1) the dissolution of the RSA, (2) the fostering of suppressed publishers and independent advertisers, and (3) the use of an access remedy for competing search-engine-results providers.

EU Law on Remedies

EU law requires remedies to “bring infringements and their effects to an end.” In Commercial Solvents, the Commission’s power was held to “include an order to do certain acts or provide certain advantages which have been wrongfully withheld.”

The Commission team that dealt with the Microsoft case noted that a risk of limiting the remedy to a prohibition of the infringement was that “[i]n many cases, especially in network industries, the infringer could continue to reap the benefits of a past violation to the detriment of consumers. This is what remedies are intended to avoid.” An effective remedy puts the competitive position back as it was before the harm occurred, which requires three elements. First, the abusive conduct must be prohibited. Second, the harmful consequences must be eliminated. For example, in Lithuanian Railways, the railway tracks that had been removed were required to be restored, returning the parties to the pre-conduct competitive position. Third, the remedy must prevent repetition of the same conduct or conduct having an “equivalent effect.” The two main remedies are divestiture and prohibition orders.

The RSA Is Both a Horizontal and a Vertical Arrangement

In the 2017 Google Search (Shopping) case, Google was found to have abused its dominant position in search. In the DOJ’s pending search case, Google is also accused of monopolizing the market for search. In addition to revealing the contours of the RSA, the case revealed a broader coordination between Google and Apple. For example, discovery revealed there are monthly CEO-to-CEO meetings where the “vision is that we work as if we are one company.” Thus, the RSA serves as much more than a “default” setting—it is effectively an agreement not to compete.

Under the RSA, Apple gets a substantial cut of the revenue from searches by Apple users. Apple is paid to promote Google Search, with the payment funded by income generated from the sale of ads to Apple’s wealthy user base. That user base has higher disposable income than Android users, which makes it highly attractive to advertisers and sellers. Apple users are thought to account for only 20 percent of all mobile users but to generate 50 percent of mobile ad spend.

Compared to Apple’s other revenue sources, the scale of the payments made to Apple under the RSA is significant. The RSA generates $20 billion in almost pure profit for Apple, which accounts for 15 to 20 percent of Apple’s net income. A payment this large, made in these circumstances, creates several incentives for Apple to cement Google’s dominance in search:

The RSA also gives Google an incentive to support Apple’s dominance in top-end or “performance smartphones,” and to limit Android smartphone features, functions, and prices in competition with Apple. In its Android Decision, the EU Commission found significant price differences between Google Android and iOS devices, and Google Search has been the single largest source of traffic from iPhone users for over a decade.

Indeed, the Department of Justice pleadings in USA v. Apple show how Apple has sought to monopolize the market for performance smartphones via legal restrictions on app stores and by limiting technical interoperability between Apple’s system and others. The complaint lists Apple’s restrictions on messaging apps, smartwatches, and payments systems. However, it overlooks Apple’s restrictions that prevent app stores from using Apple users’ data, and how Apple sets the baseline for interoperating with the Open Web.

It is often thought that Apple is a devices business. On the contrary, the size of its RSA with Google means Apple’s business, in part, depends on income from advertising by Google using Apple’s user data. In reality, Apple is a data-harvesting business, and it has delegated the execution to Google’s ads system. Meanwhile, its own ads business is projected to rise to $13.7 billion by 2027. As such, the RSA deserves very close scrutiny in USA v. Apple, as it is an agreement between two companies operating in the same industry.

The Failures of Choice Screens

The EU Google (Search) abuse consisted in Google’s “positioning and display” of its own products over those of rivals on the results pages. Google’s underlying system is optimized for promoting results by relevance to the user’s query, using a system based on PageRank. It follows that promoting its own products over more relevant rivals requires work and effort. The Google Search Decision describes this abuse as being carried out by applying a relevance algorithm to determine ranking on the search engine results pages (“SERPs”). However, the algorithm did not apply to Google’s own products. As the figure below shows, Google’s SERP has over time filled up with its own products and ads.

To remedy the abuse, the Decision compelled Google to adopt a “Choice Screen.” Yet this is not an obvious remedy for competitors that have been suppressed, out of sight and out of mind, for many years. The choice screen has a history in EU Commission decisions.

In 2009, the EU Commission identified as an abuse Microsoft’s tying of its web browser to its Windows software. Other browsers were not shown to end users as alternatives. The basic lack of visibility of alternatives was the problem facing the end user, and a choice screen was superficially attractive as a remedy, but it was not tested for efficacy. As Megan Grey observed in Tech Policy Press, “First, the Microsoft choice screen probably was irrelevant, given that no one noticed it was defunct for 14 months due to a software bug (Feb. 2011 through July 2012).” The Microsoft case is thus a very questionable precedent.

In its Google Android case, the European Commission found Google acted anticompetitively by tying Google Search and Google Chrome to other services and devices and required a choice screen presenting different options for browsers. It too has been shown to be ineffective. A CMA Report (2020) also identified failures in design choices and recognized that display and brand recognition are key factors to test for choice screen effectiveness.

Giving consumers a choice ought to be one of the most effective ways to remedy a reduction of choice. But a choice screen does not provide choice over the presentation and display of products in SERPs. Presentations depend on user interactions with pages. And Google’s knowledge of your search history, as well as your interactions with its products and pages, means it presents its pages in an attractive format. Google eventually changed the Choice Screen to reflect users’ top five choices by Member State. However, none of these factors related to the suppression of brands or competition, nor did the change rectify the effects of presentation and display on the loss of variety and diversity in supply. Meanwhile, Google’s brand was enhanced by billions of users’ interactions with its products.

Moreover, choice screens have not prevented rival publishers, providers and content creators from being excluded from users’ view by a combination of Apple’s and Google’s actions. This has gone on for decades. Alternative channels for advertising by rival publishers are being squeezed out.

A Better Way Forward

As explained above, Apple helps Google target Apple users with ads and products in return for 36 percent of the ad revenue generated. Prohibiting that RSA would remove the parties’ incentives to reinforce each other’s market positions. Absent its share of Google search ads revenue, Apple may find reasons to build its own search engine or enhance its browser by investing in it in a way that would enable people to shop using the Open Web’s ad funded rivals. Apple may even advertise in competition with Google.  

Next, courts should impose (and monitor) a mandatory access regime. Applied here, Google could be required to operate within its monopoly lane and run its relevance engine under public interest duties in “quarantine” on non-discriminatory terms. This proposal has been advanced by former White House advisor Tim Wu:

I guess the phrase I might use is quarantine, is you want to quarantine businesses, I guess, from others. And it’s less of a traditional antitrust kind of remedy, although it, obviously, in the ‘56 consent decree, which was out of an antitrust suit against AT&T, it can be a remedy. And the basic idea of it is, it’s explicitly distributional in its ideas. It wants more players in the ecosystem, in the economy. It’s almost like an ecosystem promoting a device, which is you say, okay, you know, you are the unquestioned master of this particular area of commerce. Maybe we’re talking about Amazon and it’s online shopping and other forms of e-commerce, or Google and search.

If the remedy to search abuse were to provide access to the underlying relevance engine, rivals could present and display products in any order they liked. New SERP businesses could then show relevant results at the top of pages and help consumers find useful information.

Businesses, such as Apple, could get access to Google’s relevance engine and simply provide the most relevant results, unpolluted by Google products. They could alternatively promote their own products and advertise other people’s products differently. End-users would be able to make informed choices based on different SERPs.

In many cases, the restoration of competition in advertising requires increased familiarity with the suppressed brand. Where competing publishers’ brands have been excluded, they must be promoted. Their lack of visibility can be rectified by boosting those harmed in rankings for periods equivalent to the duration of their suppression. This is like the remedies used for other forms of publication tort: in successful defamation claims, the offending publisher must publish the full judgment with the same presentation and prominence as the offending article. But the harm here is not to individuals; instead, the harm redounds to alternative publishers and online advertising systems carrying competing ads.

In sum, the proper remedy is one that rectifies the brand damage from suppression and lack of visibility. Remedies need to address this issue and enable publishers to compete with Google as advertising outlets. Identifying a remedy that rectifies the suppression of relevance leads to the conclusion that competition between search-results-page businesses is needed. Such competition can only emerge if access is provided to the Google relevance engine. This is the only way to allow sufficient competitive pressure to reduce ad prices and provide consumer benefits going forward.

The authors are, respectively, Chair of the Antitrust practice, an Associate, and a Paralegal at Preiskel & Co LLP. They represent the Movement for an Open Web in EU, US, and UK cases against Google and Apple currently being brought by the respective authorities. They also represent Connexity in its claim against Google for damages for abuse of dominance in Search (Shopping).

Neoliberal columnist Matt Yglesias recently weighed into antitrust policy in Bloomberg, claiming falsely that the “hipsters” in charge of Biden’s antitrust agencies were abandoning consumers and the war on high prices. Yglesias thinks this deviation from consumer welfare makes for bad policy during our inflationary moment. I have a thread that explains all the things he got wrong. The purpose of this post, however, is to clarify how antitrust enforcement has changed under the current regime, and what it means to abandon antitrust’s consumer welfare standard as opposed to abandoning consumers.

Ever since the courts embraced Robert Bork’s demonstrably false revisionist history of antitrust’s goals, consumer welfare became antitrust’s lodestar, which meant that consumers sat atop antitrust’s hierarchy. Cases were pursued by agencies if and only if exclusionary conduct could be directly connected to higher prices or reduced output. This limitation severely neutered antitrust enforcement by design—with two minor exceptions described below, there was not a single (standalone) monopolization case brought by the DOJ after U.S. v. Microsoft for over two decades—presumably because most harm in the modern (digital) age did not manifest in the form of higher prices for consumers. Under the Biden administration, the agencies are pursuing monopoly cases against Amazon, Apple, and Google, among others.

(For the antitrust nerds, the DOJ’s 2011 case against United Regional Health Care System included a Section 2 claim, but it was basically included to bolster a Section 1 claim. It can hardly be counted as a Section 2 case. And the DOJ’s 2015 case to block United’s alleged monopolization of takeoff and landing slots at Newark included a Section 2 claim. But these were just blips. Also the FTC pursued a Section 2 case prior to the Biden administration against Qualcomm in 2017.)

Even worse, if there was ever a perceived conflict between the welfare of consumers and the welfare of workers or merchants (or input providers generally), antitrust enforcers lost in court. The NCAA cases made clear that injury to college players derived from extracting wealth disproportionately created by predominantly Black athletes would be tolerated so long as viewers with a taste for amateurism were better off. And American Express stood for the principle that harms to merchants from anti-steering rules would be tolerated so long as generally wealthy Amex cardholders enjoyed more luxurious perks. (Patrons of Amex’s Centurion lounge can get free massages and Michelle Bernstein cuisine in the Miami Airport!) The consumer welfare standard was effectively a pro-monopoly policy, in the sense that it tolerated massive concentrations of economic power throughout the economy and firms deploying a surfeit of unfair and predatory tactics to extend and entrench their power.

Labor Theories of Harm in Merger Enforcement

In the consumer welfare era, which is now hopefully in our rear-view mirror, labor harms were not even on the agencies’ radars, particularly when it came to merger review. By freeing the agencies of having to construct price-based theories of harm to consumers, the so-called hipsters have unleashed a new wave of challenges, reinvigorating merger enforcement, particularly in labor markets. In October 2022, the DOJ stopped a merger of two book publishers on the theory that the combination would harm authors, input providers in the book production process. This was the first time in history that a merger was blocked solely on the basis of a harm to input providers.

And the DOJ’s complaint in the Live Nation/Ticketmaster merger spells out harms to, among other economic agents, musicians and comedians that flow from Live Nation’s alleged tying of its promotion services to access to its large amphitheaters. (Yglesias incorrectly asserted that DOJ’s complaint against Live Nation “is an example of the consumer-welfare approach to antitrust.” Oops.) The ostensible purpose of the tie-in is to extract a supra-competitive take rate from artists.

Not to be outdone, in two recent complaints, the FTC has identified harms to workers as a critical part of its case in opposition to a merger. In its February 2024 complaint, the FTC asserts, among other theories of harm, that for thousands of grocery store workers, Kroger’s proposed acquisition of Albertsons would immediately weaken competition for workers, putting downward pressure on wages. That the two supermarkets sometimes poach each other’s workers suggests that workers themselves could leverage one employer against the other. Yet the complaint focuses on the leverage of the unions when negotiating over collective bargaining agreements. If the two supermarkets were to combine, the complaint asserts, the union would lose leverage in its dealings with the merging parties over wages, benefits, and working conditions. Unions representing grocery workers would also lose leverage over threatened boycotts or strikes.

In its April 2024 complaint to block the combination of Tapestry and Capri, the FTC asserts, among other theories of harm, that the merger threatens to reduce wages and degrade working conditions for hourly workers in the affordable handbag industry. The complaint describes one episode in July 2021 in which Capri responded to a public commitment by Tapestry to pay workers at least $15 per hour with a $15 per hour commitment of its own. This labor-based theory of harm exists independently of the FTC’s consumer-based theory of harm.

Labor Theories of Harm Outside of Merger Enforcement

The agencies have also pursued no-poach agreements to protect workers. A no-poach agreement, as the name suggests, prevents one employer from “poaching” (or hiring away) a worker from its competitors. The agreements are not wage-fixing agreements per se, but instead are designed to limit labor mobility, which economists recognize is key to wage growth. In October 2022, a health care staffing company entered into a plea agreement with the DOJ, marking the Antitrust Division’s first successful prosecution of criminal charges in a labor-side antitrust case. The DOJ has tried three criminal no-poach cases to a jury, and in all three the defendants were acquitted. For example, in April 2023, a court ordered the acquittal of all defendants in a no-poach case involving the employment of aerospace engineers. (Disclosure: I am the plaintiffs’ expert in a related case brought by a class of aerospace engineers.) Despite these losses, AAG Jonathan Kanter is still committed as ever to addressing harms to labor with the antitrust laws.

And the FTC has promulgated a rule to bar non-compete agreements. Whereas a no-poach agreement governs the conduct among rival employers, a non-compete is an agreement between an employer and its workers. Like a no-poach, the non-compete is designed to limit labor mobility and thereby suppress wages. Having worked on a non-compete case for a class of MMA fighters against the UFC that dragged on for a decade, I can say with confidence (and experience) that a per se prohibition of non-competes is infinitely more efficient than subjecting these agreements to antitrust’s rule-of-reason standard. Again, this deviation from consumer welfare has proven controversial among neoliberals; even the Washington Post editorial board penned an essay on why high-wage workers earning over $100,000 per year should be exposed to such encumbrances.

Consumers Still Have a Cop on the Beat

If you take Yglesias’s depiction literally, it means that the antitrust agencies under Biden have abandoned the protection of consumers. But nothing could be further from the truth. Antitrust enforcers can walk and chew gum at the same time. The list of enforcement actions on behalf of consumers is too long to reproduce here, but to summarize a few recent highlights:

Presumably Yglesias and his neoliberal clan have access to Google Search, Lina Khan’s Twitter handle, or the Antitrust Division’s press releases. It only takes a few keystrokes to learn of countless enforcement actions brought on behalf of consumers. Although this view is a bit jaded, one interpretation is that this crowd, epitomized by the Wall Street Journal editorial board and its 99 hit pieces against Chair Khan, uses the phrase “consumer welfare” as code for lax enforcement of antitrust law. In other words, what really upsets neoliberals (and libertarians) is not the abandonment of consumers, but instead any enforcement of antitrust law, particularly when it (1) stops monopolists from expanding their monopolies to the betterment of their investors or (2) steers profits away from employers towards workers. In my darkest moments, I suspect that some target of an FTC or DOJ investigation funds neoliberal columnists and journals—looking at you, The Economist—to cook up consumer-welfare-based theories of how the agencies are doing it wrong. All such musings should be ignored, as the antitrust hipsters are alright.

Your intrepid writer, when not toiling for free in the basement of The Sling, does a fair amount of testifying as an expert economic witness. Many of these cases involve alleged price-fixing (or wage-fixing) conspiracies. One would think there would be no need to define the relevant market in such cases, as the law condemns price-fixing under the per se standard. But because of certain legal niceties—such as whether the scheme involved an intermediary (or ringleader) that allegedly coached and coaxed the parties with price-setting power—we often spend reams of paper and hundreds of billable hours engaging in what amounts to navel inspection to determine the contours of the relevant market. The idea is that if the defendants do not collectively possess market power in a relevant antitrust market, then the challenged conduct cannot possibly generate anticompetitive effects.

A traditional method of defining the relevant market asks the following question: Could a hypothetical monopolist who controlled the supply of the goods (or services) that allegedly comprise the relevant market profitably raise prices above competitive levels? This test is known as the hypothetical monopolist test (HMT).

It bears noting that there are other ways to define relevant markets, including by assessing the Brown Shoe factors or practical indicia of the market boundaries. The Brown Shoe test can be used independently or in conjunction with the HMT. But this alternative is beyond the scope of this essay.

Published in the Harvard Law Review in 2010, Louis Kaplow’s essay was provocatively titled “Why (Ever) Define Markets?” It’s a great question, and having spent 25-odd years in the antitrust business, I can provide a smart-alecky and jaded answer: The market definition exercise is a way for defendants to deflect attention away from the harms inflicted on consumers (or workers) and towards an academic exercise, which is admittedly entertaining for antitrust nerds. Don’t look at the body on the ground, with goo spewing out of the victim’s forehead. Focus instead on this shiny object over here!

And it works. The HMT commands undue influence in antitrust cases, with some courts employing the market-definition exercise as a make-or-break evidentiary criterion for plaintiffs, before considering anticompetitive effects. Other classic examples of market definition serving as a distraction include American Express (2018), where the Supreme Court even acknowledged evidence of a net price increase yet got hung up over market definition, or Sabre/Farelogix (2020), where the court acknowledged that the merging parties competed in practice but not per the theory of two-sided markets.

A better way forward

When it comes to retrospective monopolization cases (aka “conduct” cases), there is a more probative question to be answered. Rather than focusing on hypotheticals, courts should be asking whether a not-so-hypothetical monopolist—or collection of defendants that could mimic monopoly behavior—could profitably raise price above competitive levels by virtue of the scheme. Or in a monopsony case, did the not-so-hypothetical monopsonist—or collection of defendants assembled here—profitably reduce wages below competitive levels by virtue of the scheme? Let’s call this alternative the NSHMT, as we can’t compete against the HMT without our own clever acronym.

Consider this fact pattern. A ringleader, who gathered and then shared competitively sensitive information from horizontal rivals, has been accused of orchestrating a scheme to raise prices in a given industry. After years of engaging in the scheme, an antitrust authority began investigating, and the ring was disbanded. On behalf of plaintiffs, an economist builds an econometric model that links the prices paid by the customers at issue to a scheme indicator—typically a dummy variable equal to one when the defendant was part of the scheme and zero otherwise—plus a host of control variables that also explain movements in prices. After controlling for many relevant (i.e., motivated by record evidence or economic theory) and measurable confounding factors, and eliminating any variables that might serve as mediators of the scheme itself, the econometric model shows that the scheme had an economically and statistically significant effect of artificially raising prices.
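The reduced-form model in this fact pattern can be sketched in a few lines. Everything below is simulated purely for illustration: the $4 overcharge, the cost index, and the sample period are invented, and the regression is plain least squares rather than any party’s actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly data: 60 months inside the scheme, 60 outside it.
n = 120
scheme = np.concatenate([np.ones(60), np.zeros(60)])  # dummy: 1 = scheme active
cost_index = rng.normal(100.0, 5.0, n)                # a control variable
# Simulated "true" price process: the scheme adds a $4 overcharge.
price = 20.0 + 0.5 * cost_index + 4.0 * scheme + rng.normal(0.0, 1.0, n)

# OLS: price ~ constant + cost_index + scheme dummy
X = np.column_stack([np.ones(n), cost_index, scheme])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
overcharge = beta[2]  # coefficient on the scheme dummy
print(f"estimated overcharge: {overcharge:.2f}")
```

A positive, statistically significant coefficient on the dummy, after controls, is the “economically and statistically significant effect” described above; a real damages model would add standard errors, richer controls, and robustness checks.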

Setting aside any quibbles that defendants’ economists might have with the model—it is their job to quibble over modeling choices while accepting that the challenged conduct occurred—the clear inference is that this collection of defendants was in fact able to raise prices while coordinating their affairs through the scheme. Importantly, they could not have achieved such an outcome of inflated prices unless they collectively possessed selling power. (Indeed, why would defendants engage in the scheme in the first place, risking antitrust liability, if higher profits could not be achieved?) So, if we are trying to assemble the smallest collection of products such that a (not-so) hypothetical seller of such products could exercise selling power, we have our answer! The NSHMT is satisfied, which should end the inquiry over market power.

(Note that fringe firms in the same industry might weakly impose some discipline on the collection of firms in the hypothetical. But the fringe firms were apparently not needed to exercise power. Hence, defining the market slightly more broadly to include the fringe is a conservative adjustment.)

At this point, the marginal utility of performing a formal HMT to define the relevant market based on what some hypothetical monopolist could pull off is dubious. I use the modifier “formal” to connote a quantitative test as to whether a hypothetical monopolist who controlled the purported relevant market could increase prices by (say) five percent above competitive levels.

The formal HMT has a few variants, but a standard formulation proceeds as follows. Step 1: Measure the actual elasticity of demand faced by defendants. Step 2: Estimate the critical elasticity of demand, which is the elasticity that would make the hypothetical monopolist just indifferent between raising and not raising prices. Step 3: Compare the actual to the critical elasticity; if the former is less than the latter, then the HMT is satisfied and you have yourself a relevant antitrust market! An analogous test compares the “predicted loss” to the “critical loss” of a hypothetical monopolist.
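The three steps can be made concrete with a break-even critical loss calculation. This is a sketch under assumptions I am supplying, not the only formulation: constant marginal cost, a break-even (rather than profit-maximizing) standard, and invented numbers for the SSNIP, the margin, and the estimated elasticity:

```python
# Break-even critical loss analysis (one common formulation of the formal HMT).
def critical_loss(s: float, m: float) -> float:
    """Fraction of unit sales the hypothetical monopolist can lose before a
    price increase of s (e.g. 0.05 for a 5% SSNIP) becomes unprofitable,
    given a pre-increase price-cost margin m (as a fraction of price)."""
    return s / (s + m)

def critical_elasticity(s: float, m: float) -> float:
    """Step 2: the demand elasticity at which the hypothetical monopolist is
    just indifferent between raising and not raising prices."""
    return critical_loss(s, m) / s  # equivalently 1 / (s + m)

s, m = 0.05, 0.45                    # 5% SSNIP, 45% margin (illustrative)
crit_e = critical_elasticity(s, m)   # 1 / 0.50 = 2.0
actual_e = 1.4                       # Step 1: suppose this was estimated

# Step 3: actual elasticity below critical, so the candidate market passes.
print(crit_e, actual_e < crit_e)
```

If the estimated elasticity instead exceeded the critical value, the candidate market would be deemed too narrow and the exercise would repeat with a broader product set.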

For those thinking the New Brandeisians dispensed with such formalism in the newly issued 2023 Merger Guidelines, I refer you to Section 4.3.C, which spells out the formal HMT in “Evidence and Tools for Carrying Out the Hypothetical Monopolist Test.” To their credit, however, the drafters of the new guidelines relegated the formal HMT to the fourth of four types of tools that can be used to assess market power. See Preamble to 4.3 at pages 40 to 41, placing the formal HMT beneath (1) direct evidence of competition between the merging parties, (2) direct evidence of the exercise of market power, and (3) the Brown Shoe factors. It bears noting that the Merger Guidelines were designed for assessing the competitive effects of a merger, which is necessarily a prospective endeavor. In such matters, the formal HMT arguably can play a bigger role.

Aside from generating lots of billable hours for economic consultants, the formal HMT in retrospective conduct cases bears little fruit, because the test is often hard to implement and because the test is contaminated by the scheme itself. Regarding implementation, estimating demand elasticities—typically via a regression on units sold—is challenging because the key independent variable (price) is endogenous, which, if not corrected, leads to biased estimates; the economist must therefore identify instrumental variables that can stand in the shoes of prices. Fighting over the proper instruments in a potentially irrelevant thought experiment is the opposite of efficiency! Regarding the contamination of the formal test, we are all familiar with the Cellophane fallacy, which teaches that at elevated prices (owing to the anticompetitive scheme), distant substitutes will appear closer to the services in question, leading to inflated estimates of the actual elasticity of demand. Moreover, the formal HMT is a mechanical exercise that may not apply to all industries, particularly those that do not hold short-term profit maximization as their objective function.
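The endogeneity problem, and the instrumental-variables workaround it forces, can be illustrated with simulated data. The cost-shifter instrument, coefficients, and sample below are all hypothetical; the point is only that naive OLS misstates the demand slope while two-stage least squares recovers it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated market: an unobserved demand shock u moves both quantity and
# price, so OLS of quantity on price is biased. A cost shifter z moves
# price but not demand, making it a valid instrument.
u = rng.normal(0.0, 1.0, n)                  # unobserved demand shock
z = rng.normal(0.0, 1.0, n)                  # cost-shifter instrument
price = 10.0 + 1.0 * z + 0.8 * u + rng.normal(0.0, 0.5, n)
qty = 100.0 - 2.0 * price + 3.0 * u + rng.normal(0.0, 1.0, n)  # true slope: -2

# Two-stage least squares by hand.
# Stage 1: regress price on the instrument, keep the fitted values.
Z = np.column_stack([np.ones(n), z])
price_hat = Z @ np.linalg.lstsq(Z, price, rcond=None)[0]
# Stage 2: regress quantity on the fitted (exogenous) part of price.
X_iv = np.column_stack([np.ones(n), price_hat])
beta_iv = np.linalg.lstsq(X_iv, qty, rcond=None)[0]

# Naive OLS for comparison (biased toward zero in this setup).
X_ols = np.column_stack([np.ones(n), price])
beta_ols = np.linalg.lstsq(X_ols, qty, rcond=None)[0]
print(f"IV slope: {beta_iv[1]:.2f}, OLS slope: {beta_ols[1]:.2f}")
```

In litigation, the fight is over whether the proposed instrument truly affects demand only through price; that fight, conducted inside a possibly irrelevant thought experiment, is the inefficiency noted above.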

The really interesting question is, What happens if the NSHMT finds an anticompetitive effect owing to the scheme—and hence an inference of market power—but the formal HMT finds a broader market is needed? Clearly the formal HMT would be wrong in that instance for any (or all) of the myriad reasons provided above, and it should be given zero weight by the factfinder.

A special form of direct proof

An astute reader might recognize the NSHMT as a type of direct proof of market power, which has been recognized as superior to indirect proof of market power—that is, showing high shares and entry barriers in a relevant market. As explained by Carl Shapiro, former Deputy Assistant Attorney General for Economics at DOJ: “IO economists know that the actual economic effects of a practice do not turn on where one draws market boundaries. I have been involved in many antitrust cases where a great deal of time was spent debating arcane details of market definition, distracting from the real economic issues in the case. I shudder to think about how much brain damage among antitrust lawyers and economists has been caused by arguing over market definition.” Aaron S. Edlin and Daniel L. Rubinfeld offered this endorsement of direct proof: “Market definition is only a traditional means to the end of determining whether power over price exists. Power over price is what matters . . . if power can be shown directly, there is no need for market definition: the value of market definition is in cases where power cannot be shown directly and must be inferred from sufficiently high market share in a relevant market.” More recently, John Newman, former Deputy Director of the Bureau of Competition at the FTC, remarked on Twitter: “Could a company that doesn’t exist impose a price increase that doesn’t exist of some undetermined amount—probably an arbitrarily selected percentage—above a price level that probably doesn’t exist and may have never existed? In my more cynical moments, I occasionally wonder if this question is the right one to be asking in conduct cases.”

I certainly agree with these antitrust titans that direct proof of power is superior to indirect proof. Let me humbly suggest that the NSHMT is distinct from and superior to common forms of direct proof. Common forms of direct proof include evidence that the defendant commands a pricing premium over its peers (or imposes a large markup), as determined by some competitive benchmark (or measure of incremental costs), or engages in price discrimination, which is only possible if it faces a downward-sloping demand curve. The NSHMT is distinct from these common forms of direct evidence because it is tethered to the challenged conduct. It is superior to these other forms because it addresses the profitability of an actual price increase owing to the scheme as opposed to levels of arguably inflated prices. Put differently, it is one thing to observe that a defendant is gouging customers or exploiting its workers. It is quite another to connect this exploitation to the scheme itself.  

Regarding policy implications, when the NSHMT is satisfied, there should be no need to show market power indirectly via the market-definition exercise. To the extent that market definition is still required, plaintiffs who make a clear showing that the scheme caused inflated prices, reduced output, or exclusion in a monopolization case should receive a presumption that the defendant possesses market power in a relevant market.

In summary, the HMT might play a more useful role in merger cases, where the analysis focuses on predicting the profitability of some future price increase. Even in merger cases, the economist might be able to exploit price increases (or wage suppression) owing to prior acquisitions, which would be a form of direct proof. For conduct cases, however, the NSHMT is superior to the HMT, which offers little marginal utility for the factfinder. The NSHMT informs the profitability of an actual price hike by a collection of actual firms that wield monopoly power, as opposed to some hypothetical monopolist. And it helpfully focuses attention on the anticompetitive harm, where it rightly belongs. Look at the body on the ground and not at the shiny object.

According to J.C. Bradbury, an economics professor at Kennesaw State, owners of professional men’s sports teams have received more than $19 billion in taxpayer subsidies this century. And according to a recent article in The Salt Lake Tribune, men’s professional sports teams around the United States continue to ask for billions more. The root of the problem is monopoly, as explained below, and unless and until Congress addresses the root cause, citizens should alter their demands from local politicians.

A Game Only Men Get to Play

The taxpayer subsidy game, in which teams like the Washington Capitals threaten to leave their host city unless taxpayers fork over billions in subsidies, is very much a game that only men get to play. Politicians have never been willing to give billions of dollars to build stadiums and arenas for teams in women’s professional sports. Karen Leetzow, President of the Chicago Red Stars of the NWSL, would like that to change:

Women’s sports need to have a seat at the table. We need to be in the mix because otherwise we’re just going to end up chasing our tail around how to grow women’s sports. If you’re a politician, what better way for you to leave a lasting legacy in the state of Illinois or the city of Chicago than to do something that’s never been done, which is provide meaningful funding for women.

As Leetzow summarized the argument, “equity needs to be part of the conversation.”

One suspects that many sports economists would disagree with this statement. The disagreement isn’t about the word “equity.” The disagreement likely is based entirely on the nature of the “conversation.”

For decades, sports economists have objected to the entire conversation politicians and men’s sports leagues have about taxpayer subsidies. Politicians and team owners have consistently argued that spending billions to build a stadium or arena for men’s professional sports teams is justified in terms of economic growth and jobs. Economists who study this issue, though, have offered a very consistent academic response: This is bullshit!

Okay, the response involves a bit more. Essentially, a host of academic studies fail to find evidence that stadiums and arenas are capable of generating significant economic growth. In the end, economists consistently argue these billions in subsidies are just a transfer of money from ordinary taxpayers to billionaire sports owners.

These studies have been published for decades. And sports economists have screamed about this issue for decades. But all this screaming hasn’t turned off the taxpayer faucet. Men’s professional sports leagues have continued to ask for—and continued to receive—billions in taxpayer subsidies.

Diagnosing the (Monopoly) Problem

This leads to a question: Why haven’t all the objective empirical studies by sports economists (and all the screaming) stopped the subsidies?

If we move past the obvious explanation that people don’t really listen to economists as often as economists might like, we can do what people often do when life doesn’t go their way. We can blame someone!

In this case, the name of the person we should blame is William Hulbert. In 1876, Hulbert, then owner of the Chicago White Stockings (the franchise known as the Chicago Cubs today), launched the National League. Hulbert’s creation brought an “innovation” that today is employed by essentially all professional North American sports leagues: Following the advice of Lewis Meacham, an editor with the Chicago Tribune, Hulbert’s new league decided that each city would only get one team.

The National League was hardly a successful business in the 1870s. The vast majority of the first teams went out of business. So it’s possible that Hulbert and Meacham were simply trying to find a model that ensured the financial success of as many teams as possible in a struggling business. Regardless of what motivated Meacham and Hulbert to employ this innovation in the formation of the National League, this model seems to be the root cause of our current stadium financing problem.

Outside of New York, Los Angeles, and Chicago, most cities today still only get one team in each professional sports league. And because leagues completely control how many teams are in each league, some cities that could clearly support a franchise don’t get a team at all. Consequently, Hulbert’s innovation has led to a world where leagues and their owners have substantial monopoly power over fans (and monopsony power over players). If you want a team, you have to give the owners what they want. And what they want is billions in taxpayer subsidies.

Once again, the owners claim these subsidies create economic growth and jobs. And once again, sports economists scream they are lying. Building stadiums and arenas for them does not create economic growth and does not create jobs. Therefore, we are effectively giving these billionaires taxpayer handouts worth billions.

Ignoring the Economists

All of this is true. But from a politician’s perspective, none of this probably matters. To see this, all one has to do is think back to January 13th of this year. On that day, the Kansas City Chiefs played the Miami Dolphins in a Wild Card playoff game. Given that this was January in Kansas City, the weather was brutally cold. The temperature was -4 Fahrenheit, with wind chills about twenty degrees colder. Not surprisingly, many fans suffered frostbite. And it was recently revealed that some of these fans actually lost fingers and toes.

Let’s think about that for a minute. Fans of football are so addicted to this product that they would risk amputation to watch their favorite team.

Chiefs fans are hardly the only sports fans who are emotionally attached to their team. When the Bills lost to the Chiefs the next week, the video of the Bills fan crying in the stands went viral.

Given this emotional attachment, it should not be surprising that when the Buffalo Bills asked taxpayers in New York to give them more than a billion dollars for a new stadium, politicians couldn’t say no. The alternative was to tell the people crying in the stands that their team might not be in Buffalo anymore.

In the end, this is probably not about politicians believing a lie. This is really about teams having monopoly power and knowing that they have created a product that very much controls the emotions of their customers. Assuming we can’t compel sports leagues to permit more competition within a city, we need to think about different remedies.

Perhaps we would be better off thinking about this story differently. Sports make people happy (or really sad!). In that sense, stadiums are like building city parks. No one argues that city parks are built to create economic growth. Cities build parks to make people happier. Stadiums very much serve the same purpose.

Therefore, maybe it is time for politicians to just be honest about why we are doing this. We are not using taxpayer dollars to create jobs. We are using these dollars to ensure that the sports teams that make people happy (or sad) will continue to exist.

Of course, some people aren’t sports fans and therefore some people may not like their taxpayer dollars going towards this end. To those people, my response is simple: It is time to grow up and learn how democracy works. Government in a democracy reflects the preferences of everyone in that society. This means that sometimes the government does what you want. And sometimes, it doesn’t.

A Modest Proposal

What we should demand of our government is that it treats people equally (at least, that’s what I want!). If we are going to invest billions in men’s sports, we should at least be willing to invest millions in women’s team sports. Politicians only supporting men’s sports is simply wrong.

Yes, I am sure some economists may still scream we shouldn’t be giving taxpayer dollars to anyone. Seriously, though, that’s not going to stop. As long as sports leagues maintain their monopoly power, politicians are probably going to keep doing this. And that is true, no matter how much you scream.

So maybe we need to try screaming something else. Women’s professional sports are growing and the number of people these leagues make happy (or sad!) is growing rapidly. It is time for politicians to turn the conversation to equity and try and make these fans happy as well!

David Berri is a sports economist and professor of economics at Southern Utah University. Along with Martin Schmidt and Stacey Brook, he is the author of The Wages of Wins: Taking Measure of the Many Myths in Modern Sport (Stanford University Press 2006).

Right before Thanksgiving, Josh Sisco wrote that the Federal Trade Commission is investigating whether the $9.6 billion purchase of Subway by private equity firm Roark Capital creates a sandwich shop monopoly by placing Subway under the same ownership as Jimmy John’s, Arby’s, McAlister’s Deli, and Schlotzsky’s. The acquisition would allow Roark to control over 40,000 restaurants nationwide. Senator Elizabeth Warren amped up the attention by tweeting her disapproval of the merger, prompting the phrase “Big Sandwich” to trend on Twitter.

Fun fact: Roark is named for Howard Roark, the protagonist in Ayn Rand’s novel The Fountainhead, which captures the spirit of libertarianism and the anti-antitrust movement. Ayn Rand would shrug off this and presumably any other merger!

It’s a pleasure reading pro-monopoly takes on the acquisition. Jonah Goldberg writes in The Dispatch that sandwich consumers can easily switch, in response to a merger-induced price hike, to other forms of lunch like pizza or salads. (Similar screeds appear here and here.) Jonah probably doesn’t understand the concept, but he’s effectively arguing that the relevant product market when assessing the merger effects includes all lunch products, such that a hypothetical monopoly provider of sandwiches could not profitably raise prices over competitive levels. Of course, if a consumer prefers a sandwich, but is forced to eat a pizza or salad to evade a price hike, her welfare is almost certainly diminished. And even distant substitutes like salads might appear to be closer to sandwiches when sandwiches are priced at monopoly levels.

The Brown Shoe factors permit courts to assess the perspective of industry participants, including the merging parties, when defining the contours of a market. Subway’s franchise agreement reveals how the company perceives its competition. The agreement defines a quick service restaurant that would be “competitive” with Subway as being within three miles of one of its restaurants and deriving “more than 20% of its total gross revenue from the sale of any type of sandwiches on any type of bread, including but not limited to sub rolls and other bread rolls, sliced bread, pita bread, flat bread, and wraps.” The agreement explicitly names Jimmy John’s, McAlister’s Deli, and Schlotzsky’s as competitors. This evidence supports a narrower market.

Roark’s $9.6 billion purchase of Subway exceeded the next highest bid by $1.35 billion—from TDR Capital and Sycamore Partners at $8.25 billion—an indication that Roark is willing to pay a substantial premium relative to other bidders, perhaps owing to Roark’s existing restaurant holdings. The premium could reflect procompetitive merger synergies, but given what the economic literature has revealed about such purported benefits, the more likely explanation of the premium is that Roark senses an opportunity to exercise newfound market power.

To assess Roark’s footprint in the restaurant business, I downloaded the Nation’s Restaurant News (NRN) database of sales and stores for the top 500 restaurant chains. If one treats all chain restaurants as part of the relevant product market, as Jonah Goldberg prefers, with total sales of $391.2 billion in 2022, then Roark’s pre-merger share of sales (not counting Subway) is 10.8 percent, and its post-merger share of sales is 13.1 percent. These numbers seem small, especially the increment to concentration owing to the merger.
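A quick back-of-the-envelope check of those broad-market figures (a sketch only, using the totals quoted in the text and Subway’s 2022 sales from the NRN table later in this piece):

```python
# Back-of-the-envelope check of the broad-market shares, assuming the
# market is all top-500 chains (total 2022 sales of $391.2 billion).
total_sales = 391.2              # all top-500 chains, $ billions
roark_pre = 0.108 * total_sales  # Roark's 10.8% pre-merger share, $B
subway = 9.1879                  # Subway's 2022 sales, $B ($9,187.9M per NRN)

post_share = (roark_pre + subway) / total_sales
print(f"Post-merger share of sales: {post_share:.1%}")  # roughly 13.1%
```

The arithmetic confirms that, on this broad market definition, adding Subway moves Roark’s share by only a couple of percentage points.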

Fortunately, the NRN data has a field for fast-food segment. Both Subway and Jimmy John’s are classified as “LSR Sandwich/Deli,” where LSR stands for limited-service restaurants, which don’t offer table service. By comparison, McDonald’s is classified as “LSR Burger,” while Panera and Einstein fall under “LSR Bakery/Café.” If one limits the data to the LSR Sandwich/Deli segment, total sales in 2022 fall from $391.2 billion to $26.3 billion. Post-merger, Roark would own four of the top six sandwich/deli chains in America. It bears noting that imposing this filter eliminates several of Roark’s largest assets—e.g., Dunkin’ Donuts (LSR Coffee), Sonic (LSR Burger), Buffalo Wild Wings (FSR Sports Bar)—from the analysis.

Restaurant Chains in LSR Sandwich/Deli Sector, 2022

Chain                      Sales ($M)    Units   Share of Sales
Subway*                       9,187.9   20,576            34.9%
Arby’s*                       4,535.3    3,415            17.2%
Jersey Mike’s                 2,697.0    2,397            10.3%
Jimmy John’s*                 2,364.5    2,637             9.0%
Firehouse Subs                1,186.7    1,187             4.5%
McAlister’s Deli*             1,000.4      524             3.8%
Charleys Philly Steaks          619.8      642             2.4%
Portillo’s Hot Dogs             587.1       72             2.2%
Jason’s Deli                    562.1      245             2.1%
Potbelly                        496.1      429             1.9%
Wienerschnitzel                 397.3      321             1.5%
Schlotzsky’s*                   360.8      323             1.4%
Chicken Salad Chick             284.1      222             1.1%
Penn Station East Coast         264.3      321             1.0%
Mr. Hero                        157.9      109             0.6%
American Deli                   153.2      204             0.6%
Which Wich                      131.3      226             0.5%
Capriotti’s                     122.6      142             0.5%
Nathan’s Famous                 119.1      272             0.5%
Port of Subs                    112.9      127             0.4%
Togo’s                          107.7      162             0.4%
Biscuitville                    107.5       68             0.4%
Cheba Hut                        95.0       50             0.4%
Primo Hoagies                    80.4       94             0.3%
Cousins Subs                     80.1       93             0.3%
Ike’s Place                      79.3       81             0.3%
D’Angelo                         75.4       83             0.3%
Dog Haus                         73.0       58             0.3%
Quiznos Subs                     57.8      165             0.2%
Lenny’s Sub Shop                 56.3       62             0.2%
Sandella’s                       51.0       52             0.2%
Erbert & Gerbert’s               47.4       75             0.2%
Goodcents                        47.3       66             0.2%
Total                        26,298.6   230,629           100.0%

Source: Nation’s Restaurant News (NRN) database of sales and stores for the top 500 restaurant chains. Note: * Owned by Roark

With this narrower market definition, Roark’s pre-merger share of sales (not counting Subway) is 31.4 percent, and its post-merger share of sales is 66.3 percent. These shares seem large, and the standard measure of concentration—the Herfindahl-Hirschman Index, which sums the squares of the market shares—goes from 2,359 to 4,554, which would trigger a presumption of anticompetitive effects under the 2010 Merger Guidelines.
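For readers who want to reproduce the arithmetic, here is a minimal sketch using the share-of-sales column from the table above, with Roark’s four chains (marked *) treated as a single firm. Rounding in the published share column means the totals land a few points off the figures quoted here, but the merger-induced change matches the textbook formula of twice the product of the merging firms’ shares:

```python
# Share-of-sales percentages from the LSR Sandwich/Deli table above.
shares = {
    "Subway": 34.9, "Arby's": 17.2, "Jersey Mike's": 10.3,
    "Jimmy John's": 9.0, "Firehouse Subs": 4.5, "McAlister's Deli": 3.8,
    "Charleys Philly Steaks": 2.4, "Portillo's Hot Dogs": 2.2,
    "Jason's Deli": 2.1, "Potbelly": 1.9, "Wienerschnitzel": 1.5,
    "Schlotzsky's": 1.4, "Chicken Salad Chick": 1.1,
    "Penn Station East Coast": 1.0, "Mr. Hero": 0.6, "American Deli": 0.6,
    "Which Wich": 0.5, "Capriotti's": 0.5, "Nathan's Famous": 0.5,
    "Port of Subs": 0.4, "Togo's": 0.4, "Biscuitville": 0.4,
    "Cheba Hut": 0.4, "Primo Hoagies": 0.3, "Cousins Subs": 0.3,
    "Ike's Place": 0.3, "D'Angelo": 0.3, "Dog Haus": 0.3,
    "Quiznos Subs": 0.2, "Lenny's Sub Shop": 0.2, "Sandella's": 0.2,
    "Erbert & Gerbert's": 0.2, "Goodcents": 0.2,
}

# Chains Roark owned before the Subway deal (marked * in the table).
roark_brands = {"Arby's", "Jimmy John's", "McAlister's Deli", "Schlotzsky's"}

def hhi(firm_shares):
    """Herfindahl-Hirschman Index: sum of squared percentage shares."""
    return sum(s ** 2 for s in firm_shares)

# Pre-merger: Roark's four chains count as one firm; Subway stands alone.
pre = [sum(shares[c] for c in roark_brands)] + [
    s for c, s in shares.items() if c not in roark_brands
]
# Post-merger: Subway joins the Roark firm.
post = [sum(shares[c] for c in roark_brands) + shares["Subway"]] + [
    s for c, s in shares.items() if c not in roark_brands | {"Subway"}
]

print(f"Pre-merger HHI:  {hhi(pre):.0f}")   # close to the 2,359 in the text
print(f"Post-merger HHI: {hhi(post):.0f}")  # close to the 4,554 in the text
print(f"Delta:           {hhi(post) - hhi(pre):.0f}")
```

Note that the change in the index equals 2 × 34.9 × 31.4 ≈ 2,192 points, far above the 200-point increase the 2010 Guidelines treat as significant in a highly concentrated market.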

One complication for the merger review is that Roark wouldn’t have perfect control over the sandwich pricing of its franchisees. Franchisees often are free to set their own prices, subject to suggestions (and market studies) by the franchisor. So while Roark might want (say) a Jimmy John’s franchisee to raise sandwich prices after the merger, that franchisee might not internalize the benefit to Roark of diverting some of its customers to Subway. With enough money at stake, Roark could align its franchisees’ incentives with its own by, for example, creating profit pools based on the profits of all of Roark’s sandwich investments.

Another complication is that Roark does not own 100 percent of its restaurants. Roark is the majority-owner of Inspire Brands. In July 2011, Roark acquired 81.5 percent of Arby’s Restaurant Group. Roark purchased Wendy’s remaining 12.3 percent holding of Inspire Brands in 2018. To the extent Roark’s ownership of any of the assets mentioned above is partial, a modification to the traditional concentration index could be performed, along the lines spelled out by Salop and O’Brien. (For curious readers, they show how the change in concentration is a function of the market shares of the acquired and acquiring firms plus the fraction of the profits of the acquired firm captured by the acquiring firm, which varies according to different assumptions about corporate control.)
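For the curious, a stylized version of that adjustment can be sketched as follows. This is an illustration only, not the full O’Brien-Salop machinery, which varies with the assumptions one makes about corporate control; the function and its parameter names are mine:

```python
def delta_concentration(s_acquirer, s_target, beta):
    """Stylized change in the concentration index when the acquiring firm
    captures a fraction beta of the target's profits.
    beta = 1.0 recovers the standard full-merger delta of 2 * s_a * s_t."""
    return 2 * beta * s_acquirer * s_target

# Full merger of the Roark (31.4%) and Subway (34.9%) positions:
full = delta_concentration(31.4, 34.9, 1.0)  # ~2,192 points
# If Roark instead captured only half of Subway's profits:
half = delta_concentration(31.4, 34.9, 0.5)  # ~1,096 points
```

The point of the exercise is that partial ownership scales down, but does not eliminate, the competitive concern.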

When defining markets and assessing merger effects, it is important to recognize that, in many towns, residents will not have access to the full panoply of options listed in the top 500 chains. (Credit to fellow Sling contributor Basel Musharbash for making this point in a thread.) So even if one were to conclude that the market was larger than LSR Sandwich/Deli chains, it wouldn’t be the case that residents could choose from all such restaurants in the (expanded) relevant market. Put differently, if you live in a town where your only options are Subway, Jimmy John’s, and McDonald’s, the merger could significantly concentrate economic power.

Although this discussion has focused on the harms to consumers, as Brian Callaci points out, the acquisition could allow Roark to exercise buying power vis-à-vis the sandwich shops’ suppliers. And Helaine Olen explains how the merger could enhance Roark’s power over franchise owners. The DOJ recently blocked a book-publisher merger based on a theory of harm to input providers (publishers), indicating that consumers no longer sit alone atop the antitrust hierarchy.

While it’s too early to condemn the merger, monopoly-loving economists and libertarians who mocked the concept of Big Sandwich should recognize that there are legitimate economic concerns here. It all depends on how you slice the market!

How many times have you heard from an antitrust scholar or practitioner that merely possessing a monopoly does not run afoul of the antitrust laws? That a violation requires the use of a restraint to extend that monopoly into another market, or to preserve the original monopoly? Here’s a surprise.

Both a plain reading and an in-depth analysis of the text of Section 2 of the Sherman Act demonstrate that this law’s violation does not require anticompetitive conduct, and that it does not have an efficiencies defense. Section 2 of the Sherman Act was designed to impose sanctions on any firm that monopolizes or attempts to monopolize a market. Period. With no exceptions for firms that are efficient or for firms that did not engage in anticompetitive conduct.

This is the conclusion one should reach if one were a judge analyzing the Sherman Act using textualist principles. Like most of the people reading this article, I’m not a textualist. But many judges and Supreme Court Justices are, so this method of statutory interpretation must be taken quite seriously today.

To understand how to read the Sherman Act as a textualist, one must first understand the textualist method of statutory interpretation. This essay presents a textualist analysis of Section 2 that is a condensation of a 92-page law review article, titled “The Sherman Act Is a No-Fault Monopolization Statute: A Textualist Demonstration.” My analysis demonstrates that Section 2 is actually a no-fault statute. Section 2 requires courts to impose sanctions on monopolies and attempts to monopolize without inquiring into whether the defendant engaged in anticompetitive conduct or whether it was efficient.

A Brief Primer on Textualism

As most readers know, a traditionalist approach to statutory interpretation analyzes a law’s legislative history and interprets it accordingly. The floor debates in Congress and relevant Committee reports affect how courts interpret a law, especially in close cases or cases where the text is ambiguous. By contrast, textualism only interprets the words and phrases actually used in the relevant statute. Each word and phrase is given its fair, plain, ordinary, and original meaning at the time the statute was enacted.

Justice Scalia and Bryan Garner, a professor at SMU’s Dedman School of Law, wrote a 560-page book explaining and analyzing textualism. Nevertheless, a basic textualist analysis can be described relatively simply. To ascertain the meaning of the relevant words and phrases in the statute, textualism relies mostly upon definitions contained in reliable and authoritative dictionaries of the period in which the statute was enacted. These definitions are supplemented by analyzing the terms as they were used in contemporaneous legal treatises and cases. Crucially, textualism ignores statutes’ legislative history. In the words of Justice Scalia, “To say that I used legislative history is simply, to put it bluntly, a lie.”

Textualism does not attempt to discern what Congress “intended to do” other than by plainly examining the words and phrases in statutes. A textualist analysis does not add or subtract from the statute’s exact language and does not create exceptions or interpret statutes differently in special circumstances. Nor should a textualist judge insert his or her own policy preferences into the interpretation. No requirement should be read into a law unless, of course, it is explicitly contained in the legislation. No exemption should be inferred to achieve some overall policy goal Congress arguably had unless, of course, the text demands it.

As Justice Scalia wrote, “Once the meaning is plain, it is not the province of a court to scan its wisdom or its policy.” Indeed, if a court were to do so this would be the antithesis of textualism. There are some complications relevant to a textualist analysis of Section 2, but they do not change the results that follow.

A Textualist Analysis of Section 2 of the Sherman Act

A straightforward textualist interpretation of Section 2 demonstrates that a violation does not require anticompetitive conduct and applies regardless of whether the firm achieved its position through efficient behavior.

Section 2 of the Sherman Act makes it unlawful for any person to “monopolize, or attempt to monopolize . . .  any part of the trade or commerce among the several States . . . .”  There is nothing, no language in Section 2, requiring anticompetitive conduct or creating an exception for efficient monopolies. A textualist interpretation of Section 2 therefore needs only to determine what the terms “monopolize” and “attempt to monopolize” meant in 1890. This examination demonstrates that these terms meant the same things they mean today if they are “fairly,” “ordinarily,” or “plainly” interpreted, free from the legal baggage that has grown up around them by a multitude of court decisions.

What Did “Monopolize” Mean in 1890?

When the Sherman Act was passed the word “monopolize” simply meant to acquire a monopoly. The term was not limited to monopolies acquired or preserved by anticompetitive conduct, and it did not exclude firms that achieved their monopoly due to efficient behavior.

As noted earlier, Justice Scalia was especially interested in the definitions of key terms in dictionaries of the period. Scalia and Garner believe that six dictionaries published between 1851 and 1900 are “useful and authoritative.” All six were checked for definitions of “monopolize.” The principal definition in each was simply that a firm had acquired a monopoly. None required anticompetitive conduct for a firm to “monopolize” a market, or excluded efficient monopolies.

For example, the 1897 edition of Century Dictionary and Cyclopedia defined “monopolize” as: “1. To obtain a monopoly of; have an exclusive right of trading in: as, to monopolize all the corn in a district . . . . ”

Serendipitously, a definition of “monopolize” was given in the Sherman Act’s legislative debates, just before the final vote on the bill. Although normally a textualist does not care about anything uttered during a congressional debate, Senator Edmunds’s remarks should be significant to a textualist because he quoted from a contemporaneous dictionary that Scalia considered useful and reliable: “[T]he best answer I can make to both my friends is to read from Webster’s Dictionary the definition of the verb ‘to monopolize.’” He went on:

1. To purchase or obtain possession of the whole of, as a commodity or goods in market, with the view to appropriate or control the exclusive sale of; as, to monopolize sugar or tea.

There was no requirement of anticompetitive conduct, or exception for a monopoly efficiently gained.

These definitions are essentially the same as those in the 1898 and 1913 editions of Webster’s Dictionary. The four other dictionaries of the period Scalia & Garner considered reliable also contained essentially identical definitions. The first edition of the Oxford English Dictionary, from 1908, also contained a similar definition of “monopolize:”

1 . . . . To get into one’s hands the whole stock of (a particular commodity); to gain or hold exclusive possession of (a trade);  . . . . To have a monopoly. . . . 2 . . . . To obtain exclusive possession or control of; to get or keep entirely to oneself. 

Not only does the 1908 Oxford English Dictionary equate “monopolize” with “monopoly,” but nowhere does it require a monopolist to engage in anticompetitive conduct.

Moreover, all but one of the definitions in Scalia’s preferred dictionaries do not limit monopolies to firms making every sale in a market. They roughly correspond to the modern definition of “monopoly power,” by defining “monopolize” as the ability to control a market. The 1908 Oxford English Dictionary defined “monopolize” in part as “To obtain exclusive possession or control of.” The Webster’s Dictionary defined monopolize as “with the view to appropriate or control the exclusive sale of.” Stormonth defined monopolize as “one who has command of the market.”  Latham defined monopolize as “ to have the sole power or privilege of vending.…” And Hunter & Morris defined monopolize as “to have exclusive command over.”

In summary, every one of Scalia’s preferred period dictionaries defined “monopolize” as simply to gain all the sales of a market or the control of a market. A textualist analysis of contemporary legal treatises and cases yields the same result. None required conduct we would today characterize as anticompetitive, or excluded a firm that gained a monopoly by efficient means.

A Textualist Analysis of “Attempt to Monopolize”

A textualist interpretation of Section 2 should analyze the word “attempt” as it was used in the phrase “attempt to monopolize” circa 1890. However, no unexpected or counterintuitive result comes from this examination. Circa 1890, “attempt” had its colloquial 21st-century meaning, and there was no requirement in the statute that an “attempt to monopolize” involve anticompetitive conduct, nor any exclusion for efficient attempts.

The “useful and authoritative” 1897 Century Dictionary and Cyclopedia defines “attempt” as:

1. To make an effort to effect or do; endeavor to perform; undertake; essay: as, to attempt a bold flight . . . . 2. To venture upon: as, to attempt the sea.— 3. To make trial of; prove; test . . . . .

The 1898 Webster’s Dictionary gives a similar definition: “Attempt . . . 1. To make trial or experiment of; to try. 2. To try to move, subdue, or overcome, as by entreaty.” The Oxford English Dictionary, which defined “attempt” in a volume published in 1888, similarly reads: “1. A putting forth of effort to accomplish what is uncertain or difficult . . . .”

However, the word “attempt” in a statute did have a specific meaning under the common law circa 1890. It meant “an intent to do a particular criminal thing, with an act toward it falling short of the thing intended.” One definition stated that the act needed to be “sufficient both in magnitude and in proximity to the fact intended, to be taken cognizance of by the law that does not concern itself with things trivial and small.” But no source of the period defined the magnitude or nature of the necessary acts with great specificity (indeed, a precise definition might well be impossible).

It is noteworthy that in 1881 Oliver Wendell Holmes wrote about the attempt doctrine in his celebrated treatise, The Common Law:

Eminent judges have been puzzled where to draw the line . . . the considerations being, in this case, the nearness of the danger, the greatness of the harm, and the degree of apprehension felt. When a man buys matches to fire a haystack . . . there is still a considerable chance that he will change his mind before he comes to the point. But when he has struck the match . . . there is very little chance that he will not persist to the end . . .

Congress’s choice of the phrase “attempt to monopolize” surely built upon the existing common law definitions of an “attempt” to commit robbery and other crimes.  Although the meaning of a criminal “attempt” to violate a law has evolved since 1890, a textualist approach towards an “attempt to monopolize” should be a “fair” or “ordinary” interpretation of these words as they were used in 1890, ignoring the case law that has arisen since then. It is clear that acts constituting mere preparation or planning should be insufficient. Attempted monopolization should also require the intent to take over a market and at least one serious act in furtherance of this plan.

But “attempted monopolization” under Section 2 should not require the type of conduct we today consider anticompetitive, or exempt efficient conduct. Because current case law only imposes sanctions under Section 2 if a court decides the firm engaged in anticompetitive conduct, this case law was wrongly decided. It should be overturned, as should the case law that excuses efficient attempts.

Moreover, attempted monopolization’s current “dangerous probability” requirement should be modified significantly. Today it is quite unusual for a court to find that a firm illegally “attempted to monopolize” if it possessed less than 50 percent of a market. But under a textualist interpretation of Section 2, suppose a firm with only a 30 percent market share seriously tried to take over a relevant market. Isn’t a firm with a 30 percent market share often capable of seriously attempting to monopolize a market? And, of course, attempted monopolization shouldn’t have an anticompetitive conduct requirement or an efficiency exception.

Textualists Should Be Consistent, Even If That Means More Antitrust Enforcement

Where did the exception for efficient monopolies come from? How did the requirement that anticompetitive conduct is necessary for a Section 2 violation arise? They aren’t even hinted at in the text of the Sherman Act. Shouldn’t we recognize that conservative judges simply made up the anticompetitive conduct requirement and efficiency exception because they thought this was good policy? This is not textualism. It’s the opposite of textualism.

No-fault monopolization embodies a love for competition and a distaste for monopoly so strong that it does not even undertake a “rule of reason” style economic analysis of the pros and cons of particular situations. It’s like a per se statute insofar as it should impose sanctions on all monopolies and attempts to monopolize. At the remedy stage, of course, conduct-oriented remedies often have been, and should continue to be, found appropriate in Section 2 cases.

The current Supreme Court is largely textualist, but also extremely conservative. Would it decide a no-fault case in the way that textualism mandates?   

Ironically, when assessing the competitive effects of the Baker Hughes merger, (then) Judge Thomas changed the language of the statute from “may be substantially to lessen competition” to “will substantially lessen competition,” despite considering himself to be a textualist. So much for sticking to the language of the statute!

Until recently, textualism had been used to analyze the antitrust laws only a modest number of times. This is ironic because, even though textualism has historically been championed mainly by conservatives, a textualist interpretation of the antitrust laws should mean that the antitrust statutes are interpreted according to their original aggressive, populist, and consumer-oriented language.

Robert Lande is the Venable Professor of Law Emeritus at the University of Baltimore Law School.

Over 100 years ago, Congress responded to railroad and oil monopolies’ stranglehold on the economy by passing the United States’ first-ever antitrust laws. When those reforms weren’t enough, Congress created the Federal Trade Commission to protect consumers and small businesses from predation. Today, unchecked monopolies again threaten economic competition and our democratic institutions, so it’s no surprise that the FTC is bringing a historic antitrust suit against one of the biggest fish in the stream of commerce: Amazon.

Make no mistake: modern-day monopolies, particularly the Big Tech giants (Amazon, Apple, Alphabet, and Meta), are active threats to competition and consumers’ welfare. In 2020, the House Antitrust Subcommittee concluded an extensive investigation into Big Tech’s monopolistic harms by condemning Amazon’s monopoly power, which it used to mistreat sellers, bully retail partners, and ruin rivals’ businesses through the use of sellers’ data. The Subcommittee’s report found that, as both the operator of and participant in its marketplace, Amazon functions with “an inherent conflict of interest.”

The FTC’s lawsuit builds off those findings by targeting Amazon’s notorious practice of “self-preferencing,” in which the company gathers private data on what products users are purchasing, creates its own copies of those products, then lists its versions above any competitors on user searches. Moreover, by bullying sellers looking to discount their products on other online marketplaces, Amazon has forced consumers to fork over more money than what they would have in a truly competitive environment.

But perhaps the best evidence of Amazon’s illegal monopoly power is how hard the company has worked for years to squash any investigation into its actions. For decades, Amazon has relied on the classic ‘revolving door’ strategy of poaching former FTC officials to become its lobbyists, lawyers, and senior executives. This way, the company can use their institutional knowledge to fight the agency and criticize strong enforcement actions. These “revolvers” defend the business practices which their former FTC colleagues argue push small businesses past their breaking points. They also can help guide Amazon’s prodigious lobbying efforts, which reached a corporate record in 2022 amidst an industry-wide spending spree in which “the top tech companies spent nearly $70 million on lobbying in 2022, outstripping other industries including pharmaceuticals and oil and gas.”

Amazon’s in-house legal and policy shops are absolutely stacked full of ex-FTC officials and staffers. In less than two years, Amazon absorbed more than 28 years of FTC expertise with just three corporate counsel hires: ex-FTC officials Amy Posner, Elisa Kantor Perlman and Andi Arias. The company also hired former FTC antitrust economist Joseph Breedlove as its principal economist for litigation and regulatory matters (read: the guy we’re going to call as an expert witness to say you shouldn’t break us up) in 2017.

It goes further than that. Last year, Amazon hired former Senate Judiciary Committee staffer Judd Smith as a lobbyist after he previously helped craft legislation to rein in the company and other Big Tech giants. Amazon also contributed more than $1 million to the “Competitiveness Coalition,” a Big Tech front group led by former Sen. Scott Brown (R-MA). The coalition counts a number of right-wing, anti-regulatory groups among its members, including the Competitive Enterprise Institute, a notorious purveyor of climate denialism, and National Taxpayers Union, an anti-tax group regularly gifted op-ed space in Fox News and the National Review.

This goes to show the lengths to which Amazon will go to avoid oversight from any government authority. True, the FTC has finally filed suit against Amazon, and that is a good thing. But Amazon, throughout its pursuit of ever-growing monopoly power, hired its team of revolvers precisely for this moment. These ex-officials bring along institutional knowledge that will inform Amazon’s legal defense. They will likely know the types of legal arguments the FTC will rely on, how the FTC conducted its pretrial investigations, and the personalities of major players in the case.

This knowledge is invaluable to Amazon. It’s like hiring the assistant coach of an opposing team and gaining access to their playbook — you know what’s coming before it happens and you can prepare accordingly. Not only that, but this stream of revolvers makes it incredibly difficult to gauge the dedication of some regulators to enforcing the law against corporate behemoths. How is the public expected to trust federal regulators to protect them from monopoly power when a large swath of their workforce might be waiting for a monopoly to hire them? (Of course, that’s why we need both better pay for public servants and stricter restrictions on public servants revolving out to the corporations they were supposedly regulating.)

While spineless revolvers make a killing defending Amazon, the actual people and businesses affected by its strong-arm tactics are applauding the FTC’s suit. Following the FTC’s filing, sellers praised the agency on Amazon’s Seller Central forum, calling the suit “long overdue” and Amazon’s model a “race to the bottom.” One commenter even wrote that they will be applying to the FTC once Amazon’s practices force them off the platform. This is the type of revolving we may be able to support. When the FTC is staffed with people who care more about reining in monopolies than receiving hefty paychecks from them in the future (e.g., Chair Lina Khan), we get cases that actually protect consumers and small businesses.

The FTC’s suit against Amazon signals that the federal government will no longer stand by as monopolies hollow out the economy and corrupt the inner workings of our democracy, but the revolvers will make every step difficult. They will be in the corporate offices and federal courtrooms, advising Amazon on how best to undermine their former employer’s legal standing. They will be in the media, claiming the objectivity of former regulators while running cover for Amazon’s shady practices, cover that the business press will gobble up. The prevalence of these revolvers makes it difficult for current regulators to succeed while simultaneously undermining public trust in a government that should work for people, not corporations. Former civil servants who put cash from Amazon over the regulatory mission to which they had once been committed are turncoats to the public good. They should be scorned by the public and ignored by government officials and media alike.

Andrea Beaty is Research Director at the Revolving Door Project, focusing on anti-monopoly, executive branch ethics and housing policy. KJ Boyle is a research intern with the Revolving Door Project. Max Moran is a Fellow at the Revolving Door Project. The Revolving Door Project scrutinizes executive branch appointees to ensure they use their office to serve the broad public interest, rather than to entrench corporate power or seek personal advancement.

The Federal Trade Commission has accused Amazon of illegally maintaining its monopoly, extracting supra-competitive fees on merchants that use Amazon’s platform. If and when the fact-finder determines that Amazon violated the antitrust laws, we propose structural remedies to address the competitive harms. Behavioral remedies have fallen out of favor among antitrust scholars. But the success of a structural remedy cannot be taken for granted.

To briefly review the bidding, the FTC’s Complaint alleges that Amazon prevents merchants from steering customers to a lower-cost platform—that is, a platform that charges a lower take rate—by offering discounts off the price it charges on Amazon. Amazon threatens merchants’ access to the Buy Box if merchants are caught charging a lower price outside of Amazon, a variant of a most-favored-nation (MFN) restriction. In other words, Amazon won’t allow merchants to share any portion of their savings with customers as an inducement to switch platforms; doing so would put downward pressure on Amazon’s take rate, which has climbed from 35 to 45 percent since 2020, per ILSR.

The Complaint also alleges that Amazon ties its fulfillment services to access to Amazon Prime. Given the importance of Amazon Prime to survival on Amazon’s Superstore, Amazon’s policy is effectively conditioning a merchant’s access to its Superstore on an agreement to purchase Amazon’s fulfillment, often at inflated rates. Finally, the Complaint alleges that Amazon gives its own private-label brands preference in search results.

These are classic exclusionary restraints that, in another era, would be instinctively addressed via behavioral remedies. Ban the MFN, ban the tie-in, and ban the self-preferencing. But that would be wrongheaded, as doing so would entail significant oversight by enforcement authorities. As the DOJ Merger Remedies Manual states, “conduct remedies typically are difficult to craft and enforce.” To the extent that a remedy is fully conduct-based, it should be disfavored. The Remedies Manual does, however, appear to approve of conduct relief that facilitates structural relief: “Tailored conduct relief may be useful in certain circumstances to facilitate effective structural relief.”

Instead, there should be complete separation of the fulfillment services from the Superstore. In a prior piece for The Sling, we discussed two potential remedies for antitrust bottlenecks—the Condo and the Coop. In what follows, we explain that the Condo approach is a potential remedy for the Amazon platform bottleneck and the Coop approach a good remedy for the fulfillment center system. Our proposed remedy has the merit of allowing for market mechanisms to function to bypass the need for continued oversight after structural remedies are deployed.

Breaking Up Is Hard To Do

Structural remedies to monopolization have, in the past, created worry about continued judicial oversight and regulation. “No one wants to be Judge Greene.” He spent the bulk of his remaining years on the bench having his docket monopolized by disputes arising from the breakup of AT&T. Breakup had also been sought in the case of Microsoft. But the D.C. Circuit, citing improper communications with the press prior to issuance of Judge Jackson’s opinion and his failure to hold a remedy hearing prior to ordering divestiture of Microsoft’s operating system from the rest of the company, remanded the case for determination of remedy to Judge Kollar-Kotelly.

By that juncture of the proceeding, a new Presidential administration brought a sea change by opposing structural remedies not only in this case but generally. Such an anti-structural policy conflicts with the pro-structural policy set forth in Standard Oil and American Tobacco—that the remedy for unlawful monopolization should be restructuring the enterprises to eliminate the monopoly itself. The manifest problem with the AT&T structural remedy and the potential problem with the proposed remedy in Microsoft is that neither removed the core monopoly power that existed, thus retaining incentives to engage in anticompetitive conduct and generating continued disputes.

The virtue of the structural approaches we propose is that once established, they should require minimal judicial oversight. The ownership structures would create incentives to develop and operate the bottlenecks in ways that do not create preferences or other anticompetitive conduct. With an additional bar to re-acquisition of critical assets, such remedies are sustainable and would maximize the value of the bottlenecks to all stakeholders.

Turn Amazon’s Superstore into a Condo

The condominium model is one in which the users would “own” their specific units as well as collectively “owning” the entire facility. But a distinct entity would provide the administration of the core facility. Examples of such structures include the current rights to capacity on natural gas pipelines, rights to space on container ships, and administration for standard essential patents and for pooled copyrights. These examples all involve situations in which participants have a right to use some capacity or right but the administration of the system rests with a distinct party whose incentive is to maximize the value of the facility to all users. In a full condominium analogy, the owners of the units would have the right to terminate the manager and replace it. Thus, as long as there are several potential managers, the market would set the price for the managerial service.

A condominium model requires the easy separability of management of the bottleneck from the uses being made of it. The manager would coordinate the uses and maintain the overall facility, while the owners of access rights use the facility as needed.

Another feature of this model is that when the rights of use/access are constrained, they can be tradable, much as a condo owner may elect to rent the condo to someone who values it more. Scarcity in a bottleneck creates the potential for discriminatory exploitation whenever a single monopolist holds those rights. Distributing access rights to many owners removes the incentive for discriminatory or exclusionary conduct; each owner has only the opportunity to earn rents (high prices) from the sale or lease of its capacity entitlement. Thus, dispersion of interests results in a clear change in the incentives of a rights holder. This in turn means that the kinds of disputes seen in AT&T’s breakup are largely or entirely eliminated.

The FTC suggests skullduggery in the operation of the Amazon Superstore. Namely, degrading search results via self-preferencing:

Amazon further degrades the quality of its search results by burying organic content under recommendation widgets, such as the “expert recommendation” widget, which display Amazon’s private label products over other products sold on Amazon.

Moreover, in a highly redacted area of the complaint, the FTC alleges that Amazon has the ability to “profitably worsen its services.” 

The FTC also alleges that Amazon bars sellers from “multihoming”:

[Multihoming is] simultaneously offering their goods across multiple online sales channels. Multihoming can be an especially critical mechanism of competition in online markets, enabling rivals to overcome the barriers to entry and expansion that scale economies and network effects can create. Multihoming is one way that sellers can reduce their dependence on a single sales channel.

If the Superstore were a condo, the vendors would be free to decide how much to focus on this platform in comparison to other platforms. Merchants would also be freed from the MFN, as the condo owner would not attempt to ban merchants from steering customers to a lower-cost platform.

Condominiumization of the Amazon Superstore would go a long way to reducing what Cory Doctorow might call the “enshittification” of the Amazon Superstore. Given its dominance over merchants, it would probably be necessary to divest and rebrand the “Amazon Basics” business. Each participating vendor (retailer or direct-selling manufacturer) would share in the ownership of the platform and would have its own place to promote its line of goods or services.

The most challenging issue is how to handle product placement on the overall platform. Given the administrator’s role as the agent of the owners, the administrator should seek to offer a range of options. Or leave it to owners themselves to create joint ventures to promote products. Alternatively, specific premium placement could go to those vendors that value the placement the most, rather than based on who owns the platform. The revenue would in turn be shared among the owners of the condo. Thus, the platform administrator would have as its goal maximizing the value of the platform to all stakeholders. This would also potentially resolve some of the advertising issues. According to the Complaint,  

Amazon charges sellers for advertising services. While Amazon also charges sellers other fees, these four types constitute over [redacted] % of the revenue Amazon takes in from sellers. As a practical matter, most sellers must pay these four fees to make a significant volume of sales on Amazon.

Condo ownership would mean that the platform constituents would be able to choose which services they purchase from the platform, thereby escaping the harms of Amazon’s tie-in. Constituents could more efficiently deploy advertising resources because they would not be locked into the platform or compelled to buy from it.

Optimization would include information necessary for customer decision-making. One of the other charges in the Complaint was the deliberate concealment of meaningful product reviews:

Rather than competing to secure recommendations based on quality, Amazon intentionally warped its own algorithms to hide helpful, objective, expert reviews from its shoppers. One Amazon executive reportedly said that “[f]or a lot of people on the team, it was not an Amazonian thing to do,” explaining that “[j]ust putting our badges on those products when we didn’t necessarily earn them seemed a little bit against the customer, as well as anti-competitive.”

Making the platform go condo does not necessarily mean that all goods are treated equally by customers. That is the nature of competition. It would mean that in terms of customer information, however, a condominiumized platform would enable sellers to have equal and nondiscriminatory access to the platform and to be able to promote themselves based upon their non-compelled expenditures.

Turn Amazon’s Fulfillment Center into a Coop

The Coop model envisions shared user ownership, management, and operation of the bottleneck. Such transformation of ownership should change the incentives governing the operation and potential expansion of the bottleneck.

The individual owner-user stands to gain little by trying to impose a monopoly price on users including itself or by restricting access to the bottleneck by new entrants. So long as there are many owners, the primary objective should be to manage the entity so that it operates efficiently and with as much capacity as possible.

This approach is for enterprises that require substantial continued engagement of the participants in the governance of the enterprise. With such shared governance, the enterprise will be developed and operated with the objective of serving the interest of all participants.

The more the bottleneck interacts directly with other aspects of the users’ or suppliers’ activity, the more those parties will benefit from active involvement in the decisions about the nature and scope of the activity. Historically, cooperative grain elevators and creameries provided responses to bottlenecks in agriculture. Contemporary examples could include a computer operating system, an electric transmission system, or a social media platform. In each, there are myriad choices to be made about design, location, or both. Different stakeholders will have different needs and desires. Hence, the challenge is to find a workable balance of interests that maximizes the overall value of the system for its participants rather than serving only the interests of a single owner.

This method requires that no party or group dominates the decision processes, and all parties recognize their mutual need to make the bottleneck as effective as possible for all users. Enhancing use is a shared goal, and the competing experiences and needs should be negotiated without unilateral action that could devalue the collective enterprise.

As explained above, Amazon’s tie-in effectively requires that all vendors using its platform also use Amazon’s fulfillment services. Yet distribution is distinct from online selling. Hence, the distribution system should be structurally separated from the online superstore. Indeed, vendors using the platform condo may not wish to participate in the distribution system regardless of access. Conversely, vendors not using the condo platform might value the fulfillment services for orders received on their own platforms. Still other vendors might find multihoming to be the best option for sales. As the Complaint points out, multihoming may give rise to other benefits if sellers are not locked into Amazon’s distribution:

Sellers could multihome more cheaply and easily by using an independent fulfillment provider, a provider not tied to any one marketplace, to fulfill orders across multiple marketplaces. Permitting independent fulfillment providers to compete for any order on or off Amazon would enable them to gain scale and lower their costs to sellers. That, in turn, would make independent providers even more attractive to sellers seeking a single, universal provider. All of this would make it easier for sellers to offer items across a variety of outlets, fostering competition and reducing sellers’ dependence on Amazon.

The FTC Complaint alleges that Amazon has monopoly power in its fulfillment services. This is a nationwide complex of specialized warehouses and delivery services. The FTC is apparently asserting that this system has such economies of scale and scope that it constitutes a monopoly bottleneck for the distribution of many kinds of consumer goods. If a single firm controlled this bottleneck, it would have incentives to engage in exploitative and exclusionary conduct. Our proposed remedy is a cooperative model, under which the goal of the owner-users is to minimize the costs of providing the necessary service. These users would need to be directly involved in the operation of the distribution system as a whole to ensure its development and operation as an efficient distribution network.

Indeed, its users might not be exclusively users of the condominiumized platform. As with other cooperatives, those who want to use the service would join and then participate in its management. Separating distribution from the selling platform would also enhance competition between sellers who opt to use the cooperative distribution and those who do not. For those that join the distribution cooperative, the ability to tailor those distribution services without the anticompetitive constraints created by their former owner (Amazon) would likely result in reduced delivery costs.

Separation of Fulfillment from Superstore Is Essential for Both Models

We have proposed remedies to the problems articulated in the FTC’s Amazon Complaint—at least the redacted version. We end with some caveats.

First, we do not have access to the unredacted Complaint. Additional information in it might render either of our remedies impracticable.

Second, these condo and cooperative proposals go hand in hand with other structural remedies. There should be separation of the Fulfillment services from the Superstore and Amazon Brands might have to be divested or restructured. Moreover, their recombination should be permanently prohibited. These are necessary conditions for both remedies to function properly.

Third, in both the condo and coop model, governance structures must be in place to assure that both fulfillment services and the Superstore are not recaptured by a dominant player. In most instances, a proper governance structure would bar that. The government should not hesitate to step in should capture be evident.

Peter C. Carstensen is a Professor of Law Emeritus at the Law School of University of Wisconsin-Madison. Darren Bush is Professor of Law at University of Houston Law Center.

“Their goal is simply to mislead, bewilder, confound, and delay and delay and delay until once again we lose our way, and fail to throw off the leash the monopolists have fastened on our neck.” – Barry Lynn

Today, the name Draper is associated with either a fictional adman or a successful real-life venture capital dynasty.

Among the latter, the late Bill Draper was a widely respected early investor in Skype, OpenTable, and other top-tier startups. Less remembered now is the role of the family patriarch—Bill’s father, General William Henry Draper, Junior—in shaping the course of history in postwar Germany and Japan. Well before founding Silicon Valley’s first venture capital firm in 1959, the ur-Draper had made a name for himself in other powerful circles. A graduate of New York University with both a bachelor’s and a master’s in economics, Draper’s early career alternated between stints in investment banking and military service. During World War II, his experience ran the gamut from developing military procurement policies to commanding an infantry unit. Socially savvy and obsessively hard-working, Draper was tapped to lead the “economic side of the occupation,” known as the Economics Division, well before Germany officially surrendered in May 1945.

The structure of the military government was cobbled together that summer and fall. Meanwhile, Washington hammered out the principles of occupation policy. Because Germany had surrendered unconditionally, the Allies had “supreme authority” to govern and reform their respective zones. Despite sharp rifts between U.S. agencies about whether Germany deserved a “soft” or a “hard” peace, some goals remained consistent after FDR’s death in April 1945. Notably, there was consensus that Germany’s political and economic systems would both have to be reformed to prevent war and promote democracy.

President Truman personally embraced such principles, which were spelled out in an order from the Joint Chiefs of Staff dictating the mission of the military governor, as well as in the August 1945 Potsdam Agreement between the Allies. The latter document instructed that “[a]t the earliest practicable date, the German economy shall be decentralized for the purpose of eliminating the present excessive concentration of economic power as exemplified in particular by cartels, syndicates, trusts and other monopolistic arrangements.” This policy was undergirded by years of Congressional hearings on how German industry had assisted Hitler’s consolidation of power and path to war.

So how did Draper approach building his Economics Division in light of these mandates? He entrusted an executive from (then monopoly) AT&T to hire men from its network of New York bankers and big business executives. An ominous start, running counter to the antimonopoly mission. There does not appear to have been any effort to recruit staff with more diverse business experience. Draper himself was still technically on leave from Dillon, Read, & Company—an investment bank that, in the decades after World War I, had underwritten over $100 million in German industrial bonds. Those bonds had enabled a German steel firm to buy out its competitors to become the largest steel combine in Germany—and then, the ringmaster of an international cartel.

When the military government’s organizational chart was finalized in the fall of 1945, the Economics Division had swallowed up several other sister proto-divisions, including the group that was investigating cartels and monopolies. This essentially inverted the structure of U.S. domestic enforcement: it was as if economists ran the Federal Trade Commission. Draper later denied engineering this chain of command, which indeed may have been prompted by other factors, including the inconvenient tendency of early decentralization leaders to make press leaks about the military government’s failure to remove some prominent Nazis from positions of power. And at first, Draper was not focused on what the group was doing—he initially viewed their work as tackling “just one of a great many problems.”

Archival footage of the early days

After Senate scrutiny jump-started recruitment of trustbusters en masse, the consequences of depriving the group of Division status became more apparent. The longest-serving leader of the “Decartelization Branch,” James Stewart Martin, had spent much of the war in an “economic warfare” unit investigating warmongering German firms. His team quickly dove into expanding this research and developing legal cases that would be ready to launch once the military government enacted the equivalent of an antitrust law.

Draper believed in bright-line rules when it came to cartel agreements: they “should be eliminated, made illegal and prohibited.” The military government eventually passed a law which announced that participation in any international cartel “is hereby declared illegal and is prohibited,” and a year later issued regulations requiring firms to send “notices of termination” informing counterparties that cartel terms were illegal. But Draper viewed “deconcentration” (breaking up combines) differently. He later proclaimed agreement with the general policy, but carefully qualified his statements with loose caveats about not “breaking down the economic situation.” According to members of the Decartelization Branch, Draper and his men thought deconcentration would threaten Germany’s ability to ramp up production enough to sustain itself through exports—even though staffers repeatedly explained that deconcentration would increase output. (The Division’s incessant questioning of the premises of the official U.S. policy may not have been driven by corruption, but it also does not appear to have been grounded in evidence, just ideological instincts born of their professional circles.)

Draper and his men increasingly wielded their veto power to thwart deconcentration efforts. According to Martin, Draper personally teamed up with an intransigent counterpart on the British side to undermine the official U.S. position, successfully delaying negotiation of the law for a year and a half and ultimately weakening the final product. Draper allegedly insisted that accused German firms should be given procedural rights that exceeded those given to domestic companies under U.S. antitrust law—an extraordinary position to take in the context of a hyper-concentrated economy led by firms that had proactively plotted how they would commandeer rivals in conquered nations. (The timeline allotted for objections and appeals would, coincidentally, postpone final adjudication until after the next U.S. Presidential election, when members of the Economics Division might expect headlines to read “DEWEY DEFEATS TRUMAN.”)

How that gamble turned out (hint: man on left is not Dewey)

There are rarely controls in public policy, but it is telling that the successes in decentralizing Germany’s economy occurred in areas beyond Draper’s veto power—that is, outside of the Decartelization Branch. One signature accomplishment of military government was breaking up the notorious chemical combine I.G. Farben. The United States had captured Farben’s seven-story headquarters in April 1945, and the deputy military governor personally focused on ensuring that a law authorizing the seizure and dissolution of Farben was enacted by all of the Allies by the end of that year. Draper seems to have essentially regarded Farben as the spoils of war, acknowledging that policy considerations other than any potential impact on production or German recovery took precedence in that decision. In any event, the officers in charge of overseeing Farben’s breakup reported directly to the military governor, not to Draper. Although Farben was not dispersed to the extent originally envisioned, the military government followed through on reversing the merger that had cemented Farben’s monopoly by spinning off three large successors and a dozen smaller businesses. Another telling accomplishment was the reorganization of the banks into a Federal Reserve-like system. That was handled by the Finance Division, which had co-equal status with Draper’s Economics Division.

Farben and the banks were undoubtedly among the most important targets for decentralizing Germany’s economy, but were likely not the only worthy targets. Wartime investigations concluded that fewer than 100 men controlled over two-thirds of Germany’s industrial system by sitting on the boards of Germany’s Big Six Banks along with 70 industrial combines and holding companies. Although some major industries, such as steel and coal, were located in the British zone, over two dozen firms were based in the U.S. zone. The failure of the military government to launch any actions against any other combines after two years of occupation suggests that something other than good faith disagreements about particular procedures or particular companies was afoot.

There is much, much more to this story: a “factfinding” mission by U.S. industrialists, a cameo by ex-President Herbert Hoover, an untimely marriage, lies, press leaks, Congressional hearings, an internal Army investigation, righteous resignations, early retirements, and more. Not to mention that time Draper swooped into postwar Japan to copy-paste his preferred economic prescriptions over General MacArthur’s reform program there.

Of course, it would be a stretch to conclude that the Decartelization Branch failed to implement a robust antimonopoly program because of one man alone. Others have suggested that even an unhobbled program would have been doomed sooner or later by bickering Allies, changing control of Congress, and the beginning of the Cold War two years into the occupation. Or, perhaps, by the inherent irony of a centralized military tasked with decentralizing a society.

Yet these explanations divert attention from the key ingredients of the pivotal “first 100 days” of any endeavor: institutional structure and selection of mission-aligned leadership. Different choices at that crucial stage might have yielded some tangible early successes and built momentum that could have weathered later headwinds. Taking this possibility seriously underscores why, in modern times, President Biden’s establishment of the competition council and appointment of “Wu, Khan, and Kanter” were so essential—and why ongoing missed opportunities in the administration are so troubling. Elevating amoral excellence and reputed raw managerial ability over other leadership qualifications has consequences. The story of the Decartelization Branch also provides deeper context for understanding how trustbusters approached their work upon returning to the Department of Justice, and for the political will that drove adoption of the Celler-Kefauver Act of 1950.

The saga of the Decartelization Branch will be explored in detail in a forthcoming Substack series. This is, for the most part, not a new story, but it is apparently not well-known in antitrust circles; some of the most extensive accounts were written by historians who came across the saga in the course of researching bigger questions, such as the genesis of the Cold War and the rise and fall of international cartels. Perhaps most importantly, the series will be accompanied by new scans of primary sources, to facilitate renewed scholarship into this era.

Laurel Kilgour is a startup attorney in private practice, and also teaches policy courses. The views expressed herein do not necessarily represent the views of the author’s employers or clients. This is not legal advice about any particular legal situation. To the extent any states might consider this attorney advertising, those states sure have some weird and counterintuitive definitions of attorney advertising.

As the DOJ’s antitrust case against Google begins, all eyes are focused on whether Google violated antitrust law by, among other things, entering into exclusionary agreements with equipment makers like Apple and Samsung or web browsers like Mozilla. Per the District Court’s Memorandum Opinion, released August 4, “These agreements make Google the default search engine on a range of products in exchange for a share of the advertising revenue generated by searches run on Google.” The DOJ alleges that Google unlawfully monopolizes the search advertising market.

Aside from matters relating to antitrust liability, an equally important question is what remedy, if any, would restore competition in search advertising in particular and in online advertising generally.

Developments in the UK might shed some light. The UK Treasury commissioned a report to make recommendations on changes to competition law and policy, which aimed to “help unlock the opportunities of the digital economy.” The report found that Big Tech’s monopolization of data and control over open-web interoperability could undermine innovation and economic growth. Big Tech platforms now hold the data, block interoperability with other sources, and will capture still more of it through their huge customer-facing operations. They can therefore be expected to dominate the data needed for the AI era, enabling them to hold back competition and economic growth.

The dominant digital platforms currently provide services to billions of end users. Each of us has either an Apple or Android device in our pocket. These devices operate as part of integrated distribution platforms: anything anyone wants to obtain from the web goes through the device, its browser (often running Google’s search engine), and the platform before reaching the Open Web, if it does not simply stay within an app from the platform’s app store, inside the walled garden.

Every interaction with every platform product generates data. That data is refreshed billions of times a day from multiple touch points, providing insight into buying intent and the ability to predict people’s behavior and trends.

All this data is used to generate alphanumeric codes that match records held in different databases (aka “Match Keys”), which help computers interoperate and serve ads relevant to users’ interests. For many years, Match Keys derived from the widely distributed DoubleClick ID were used by everyone. They were shared across the web and served as the main source of data for competing publishers and advertisers. After Google bought DoubleClick and grew big enough to “tip” the market, however, Google withdrew access to its Match Keys for its own benefit.
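The role a shared Match Key plays can be sketched in a few lines of code. This is a hypothetical illustration, not Google's actual system; the key format, field names, and records below are all invented:

```python
# Hypothetical sketch: a Match Key is an identifier that lets separately
# held databases recognize the same (pseudonymous) user. All identifiers,
# fields, and values below are invented for illustration.
publisher_db = {"mk_7f3a": {"pages_viewed": ["running-shoes-review"]}}
advertiser_db = {"mk_7f3a": {"inferred_intent": "running shoes"}}

def join_on_match_key(key):
    """Combine what each party independently knows about the same key."""
    if key in publisher_db and key in advertiser_db:
        return {**publisher_db[key], **advertiser_db[key]}
    return None  # without a shared key, the records cannot be linked

print(join_on_match_key("mk_7f3a"))
```

Withdrawing access to the shared key, as described above, is equivalent to the lookup failing: each party still holds its own records but can no longer link them to anyone else's.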

The interoperability that is a feature of the underlying internet architecture has gradually been eroded. Facebook collected its own data from users’ “Likes” and community groups and likewise withdrew independent publishers’ access to its Match Key data, and Apple has recently restricted access to Match Key data that is useful for ads for all publishers, except for Google, which has a special deal on search and search data. As revealed in U.S. v. Google, Google pays Apple over $10 billion a year so that Google can provide its search product to Apple users and gather all of their search history data, which it can then use for advertising. The data generated by end users’ interactions with websites is now captured and kept within each Big Tech walled garden.

If the Match Keys were shared with rival publishers for use in their independent supply channels and in their own ad-funded businesses, interoperability would improve and effective competition with the tech platforms could be generated. Competition is unlikely to emerge otherwise.

Both Google and Apple currently impose restrictions on access to data and interoperability. Cookie files also contain Match Keys that help maintain sessions and “state,” so that different computers can talk to each other, remember previous visits to websites, and enable e-commerce. Cookies do not themselves contain personal data and are much less valuable to advertisers than the Match Keys built on the DoubleClick ID, but they do provide independent publishers with something of a substitute source of data about users’ intent to purchase.

Google and Apple are in the process of blocking access to Match Keys in all forms, preventing competitors from obtaining relevant data about users’ needs and wants. They also restrict use of the Open Web and limit the interoperation of their app stores with Open Web products, such as progressive web apps.

The UK’s Treasury Report refers to interoperability 8 times and to the need for open standards as a remedy 43 times; the Digital Markets, Competition and Consumers Bill also refers to interoperability, and we expect further debate about the issue as the Bill passes through Parliament.

A Brief History of Computing and Communications

The solution to monopolization, or a lack of competition, is the generation of competition and more open markets. For that to happen in digital markets, access to data and interoperability are needed. Each previous period of monopolization was met with intervention to open up computer and communications interfaces, via antitrust cases and policies that opened markets and liberalized trade. We have learned that authorities need to police standards for interoperability and open interfaces to ensure the playing field is level and innovation can take place unimpeded.

IBM bundled computers and peripherals, and the case against it was eventually resolved by unbundling and opening the interfaces competitors needed to interoperate with its systems. Microsoft did the same thing, blocking third parties from interoperating with its operating system by blocking access to its interfaces. Again, the case was resolved by opening up interfaces to promote interoperability and competition among products that could then be offered over the platform.

When Tim Berners-Lee created the World Wide Web in the early 1990s, it came nearly ten years after the U.S. courts imposed the breakup of AT&T and after the liberalization of telecommunications data transmission markets in the United States and the European Union. That liberalization was enabled by open interfaces and published standards. To ensure that new entrants could provide services to business customers, a type of data portability was mandated, enabling numbers held in incumbent telecoms’ databases to be transferred for use by new telecoms suppliers. The combination of interconnection and data portability neutralized the barrier to entry created by the network effect arising from the monopoly control over number data.

The opening of telecoms and data markets in the early 1990s ushered in an explosion of innovation. To this day, if computers speak the Hypertext Transfer Protocol (HTTP), they can talk to other computers. In the early 1990s, a level playing field was created for decentralized competition among millions of businesses.

These major waves of digital innovation perhaps all have a common cause. Because computing and communications both have high fixed costs and low variable or incremental costs, and messaging and other systems benefit from network effects, markets may “tip” to a single provider. Competition in computing and communications then depends on interoperability remedies. Open, publicly available interfaces in published standards allow computers and communications systems to interoperate; and open decentralized market structures mean that data can’t easily be monopolized. 

It’s All About the Match Keys

The dominant digital platforms currently capture data and prevent interoperability for commercial gain. The market is concentrated, with each platform building its own walled garden and restricting data sharing and communication across platforms. Try cross-posting among different platforms for an example of a current interoperability restriction. Consider why messaging is restricted within each messaging app, rather than being possible across different systems as happens with email. Each platform restricts interoperability, preventing third-party businesses from offering their products to users captured in its walled garden.

For competition to operate in online advertising markets, a similar remedy to data portability in the telecom space is needed. Only, with respect to advertising, the data that needs to be accessed is Match Key data, not telephone numbers.    

The history of anticompetitive abuse and remedies is a checkered one. In the EU Microsoft case, Microsoft was prohibited from discriminating against rivals and had to put up a choice screen. It did not work out well. Google was similarly prohibited by the EU from (1) discriminating against rivals in its search engine results pages, in Google Search (Shopping); (2) entering exclusive agreements with handset suppliers that discriminated against rivals; and (3) showing only Google products straight out of the box, in the EU Android case. The remedies did not address the monopolization of data and its use in advertising. Little has changed, and competitors claim that the remedies are ineffective.

Many in the advertising, publishing, and ad tech markets recall that the market worked pretty well before Google acquired DoubleClick. Google uses multiple data sources as the basis for its Match Keys, and an access and interoperability remedy might be more effective, proportionate, and less disruptive.

Perhaps if the DOJ’s case examines why Google collects search data from its search engine, and how it uses search histories, browser histories, and data from all interactions with all of its products to build its Match Keys for advertising, the court will better appreciate the importance of data to competitors and how to remedy the position of advertising-funded online publishing.

Following Europe’s Lead

The EU position is developing. The EU’s Digital Markets Act (DMA), which now supplements EU antitrust law as applied in the Google Search and Android decisions, recognizes that people want to provide products and services across different platforms and to cross-post or communicate with the people connected to each social network or messaging app. In response, the EU has imposed obligations on Big Tech platforms, in Articles 5(4) and 6(7), that provide for interoperability and require gatekeepers to allow open access to the web.

Similarly, Section 20.3(e) of the UK’s Digital Markets, Competition and Consumers Bill (DMCC) refers to interoperability and may be the subject of forthcoming debate as the bill passes further through Parliament. Unlike U.S. jurisprudence, with its recent fixation on consumer welfare, the objective of the Competition and Markets Authority is imposed by the law: the obligation to “promote competition for the benefit of consumers” is contained in EA 2013 s 25(3). This can be expressly related to intervention that opens up access to the source of the current data monopolies: the Match Keys could be shared, meaning all publishers could get access to IDs for advertising (i.e., operating-system-generated IDs such as Apple’s IDFA or Google’s mobile advertising ID, or MAID).

In all jurisdictions it will be important for remedies to stimulate innovation and to ensure that competition is promoted among all products that can be sold online, rather than among integrated distribution systems. Moreover, data portability needs to apply to the use of open and interoperable Match Keys that can be used for advertising, thereby addressing the risk of data monopolization. As with the DMA, the DMCC should contain an obligation for gatekeepers to ensure fair, reasonable, and nondiscriminatory access, treating advertisers in a way similar to how interoperability and data portability addressed monopoly advantages in previous computer, telecoms, and messaging cases.

Tim Cowen is the Chair of the Antitrust Practice at the London-based law firm of Preiskel & Co LLP.

This piece originally appeared in ProMarket but was subsequently retracted, with the following blurb (agreed-upon language between ProMarket’s Luigi Zingales and the authors):

“ProMarket published the article “The Antitrust Output Goal Cannot Measure Welfare.” The main claim of the article was that “a shift out in a production possibility frontier does not necessarily increase welfare, as assessed by a social welfare function.” The published version was unclear on whether the theorem contained in the article was a statement about an equilibrium outcome or a mere existence claim, regardless of the possibility that this outcome might occur in equilibrium. When we asked the authors to clarify, they stated that their claim regarded only the existence of such points, not their occurrence in equilibrium. After this clarification, ProMarket decided that the article was uninteresting and withdrew its publication.”

The source of the complaint that caused the retraction was, according to Zingales, a ProMarket Advisory Board member. The authors had no contact with that person, nor do we know who it is. We would have welcomed published scholarly debate versus retraction compelled by an anonymous Board Member.

We reproduce the piece in its entirety here. In addition, we provide our proposed revision to the piece, which we wrote to clear up the confusion that it was claimed was created by the first piece. We will let our readers be the judge of the piece’s interest. Of course, if you have any criticisms, we welcome professional scholarly debate.

(By the way, given that the piece never mentions supply or demand or prices, it is a mystery to us why any competent economist could have thought it was about “equilibrium.” But perhaps “equilibrium” was a pretext for removing the article for other reasons.)

The Antitrust Output Goal Cannot Measure Welfare (ORIGINAL POST)

Many antitrust scholars and practitioners use output to measure welfare. Darren Bush, Gabriel A. Lozada, and Mark Glick write that this association fails on theoretical grounds and that ideas of welfare require a much more sophisticated understanding.

By Darren Bush, Gabriel A. Lozada, and Mark Glick

The discourse on consumer welfare theory seems to have pivoted to the question of whether welfare can be measured indirectly through output. The tamest of these claims is not that output itself measures welfare, but that, generally, output increases are associated with increases in economic welfare.

This claim, even at its tamest, is false. For one, welfare depends on more than just output, and increasing output may detrimentally affect some of the other factors which welfare depends on. For example, increasing output may cause working conditions to deteriorate; may cause competing firms to close, resulting in increased unemployment, regional deindustrialization, and fewer avenues for small business formation; may increase pollution; may increase the political power of the growing firm, resulting in more public policy controversies and, yes, more lawsuits being decided in its interest; and may adversely affect suppliers. 

Even if we completely ignore those realities, it is still possible for an increase in output to reduce welfare. These two short proofs show that even in the complete absence of these other effects—that is, even if we assume that people obtain welfare exclusively by receiving commodities, which they always want more of—increasing output may reduce welfare. 

We will first prove that it is possible for an increase in output to reduce welfare under the assumption that welfare is assessed by a social planner. Then we will prove it assuming no social planner, so that welfare is assessed strictly via individuals’ utility levels.

The Social Planner Proof 

Here we show that a shift out in a production possibility frontier does not necessarily increase welfare, as assessed by a social welfare function.

Suppose in the figure below that the original production possibility frontier is PPF0 and the new production possibility frontier is PPF1. Let USWF be the original level of social welfare, so that the curve in the diagram labeled USWF is the social indifference curve when the technology is represented by PPF0. This implies that when the technology is at PPF0, society chooses the socially optimal point, I, on PPF0. Next, suppose there is an increase in potential output, to PPF1. If society moves to a point on PPF1 which is above and to the left of point A, or is below and to the right of point B, then society will be worse off on PPF1 than it was on PPF0. Even though output increased, depending on the social indifference curve and the composition of the new output, there can be lower social welfare.
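A minimal numeric instance of this argument can be checked directly. The social welfare function and the two points below are assumptions chosen purely for illustration:

```python
# Hypothetical numbers illustrating the PPF argument above.
def social_welfare(x, y):
    # An illustrative social welfare function that values balanced output
    # of the two goods (a Leontief-style "min" function).
    return min(x, y)

optimum_on_ppf0 = (5, 5)    # the socially optimal point I on PPF0
point_on_ppf1 = (13, 1)     # a feasible point on the expanded PPF1

# Total output rose from 10 units to 14 units...
assert sum(point_on_ppf1) > sum(optimum_on_ppf0)
# ...yet social welfare fell from 5 to 1.
assert social_welfare(*point_on_ppf1) < social_welfare(*optimum_on_ppf0)
```

Any point on the new frontier outside the segment between A and B in the figure plays the same role as (13, 1) here: more total output, lower social welfare.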

The Individual Utility Proof

Next, we continue to assume that only consumption of commodities determines welfare, and we show that when output increases every individual can be worse off. Consider the figure below, which represents an initial Edgeworth Box having solid borders, and a new, expanded Edgeworth Box, with dashed borders. The expanded Edgeworth Box represents an increase in output for both apples and bananas, the two goods in this economy.

The original, smaller Edgeworth Box has an origin for Jones labeled J and an origin for Smith labeled S. In this smaller Edgeworth Box, suppose the initial position is at C. The indifference curve UJ0 represents Jones’s initial level of utility in the smaller Edgeworth Box, and the indifference curve US represents Smith’s initial level of utility in the smaller Box. In the larger Edgeworth Box, Jones’s origin shifts from J to J’, and his UJ0 indifference curve correspondingly shifts to UJ0′. Smith’s US indifference curve does not shift. The hatched areas in the graph are all the allocations in the bigger Edgeworth Box that are worse for both Smith and Jones compared to the original allocation in the smaller Edgeworth Box.

In other words, despite the fact that output has increased, if the new allocation is in the hatched area, then Smith and Jones both prefer the world where output is lower. We get this result because welfare is affected by allocation and distribution as well as by the sheer amount of output, and more output, if mis-allocated or poorly distributed, can decrease welfare.
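The same argument can be checked with hypothetical numbers. The utility functions and allocations below are invented for illustration; any allocation in the hatched region behaves the same way:

```python
# Hypothetical numbers illustrating the Edgeworth Box argument above.
def utility(apples, bananas):
    # An illustrative utility function, identical for Jones and Smith.
    return apples * bananas

# Smaller box: totals (10, 10). Initial allocation at point C.
jones_old, smith_old = (5, 5), (5, 5)      # utilities: 25 and 25
# Larger box: totals (12, 12), i.e., output of both goods increased,
# but the new allocation is badly distributed.
jones_new, smith_new = (1, 11), (11, 1)    # utilities: 11 and 11

assert utility(*jones_new) < utility(*jones_old)   # Jones is worse off
assert utility(*smith_new) < utility(*smith_old)   # Smith is worse off
```

Output of both goods rose by 20 percent, yet both consumers end up on lower indifference curves than before.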

GDP also does not measure aggregate Welfare 

The argument that “output” alone measures welfare sometimes refers not to literal output, as in the two examples above, but to a reified notion of “output.” A good example is GDP.  GDP is the aggregated monetary value of all final goods and services, weighted using current prices. Welfare economists, beginning with Richard Easterlin, have understood that GDP does not accurately measure economic well-being. Since prices are used for the aggregation, GDP incorporates the effects of income distribution, but in a way which hides this dependence, making GDP seem value-free although it is not. In addition, using GDP as a measure of welfare deliberately ignores many important welfare effects while only taking into account output. As Amit Kapoor and Bibek Debroy put it:

GDP takes a positive count of the cars we produce but does not account for the emissions they generate; it adds the value of the sugar-laced beverages we sell but fails to subtract the health problems they cause; it includes the value of building new cities but does not discount for the vital forests they replace. As Robert Kennedy put it in his famous election speech in 1968, “it [GDP] measures everything in short, except that which makes life worthwhile.”

Any industry-specific measure of price-weighted “output” or firm-specific measure of price-weighted “output” is similarly flawed.
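The price-weighted aggregation described above can be made concrete with a toy calculation. All prices, quantities, and externality costs here are invented for illustration:

```python
# Toy illustration: GDP is a price-weighted sum of final output; unpriced
# harms never enter the sum. All numbers are hypothetical.
prices     = {"cars": 30_000, "soda": 2}        # current prices
quantities = {"cars": 100,    "soda": 50_000}
unpriced_harms = {"cars": 800_000, "soda": 120_000}  # e.g., emissions, health costs

gdp = sum(prices[g] * quantities[g] for g in prices)
print(gdp)                # 3100000: what GDP counts

welfare_adjusted = gdp - sum(unpriced_harms.values())
print(welfare_adjusted)   # 2180000: the same economy net of unpriced harms
```

Raising output always raises the first figure in this toy economy; whether it raises the second depends on how fast the unpriced harms grow, which GDP by construction never records.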

For these reasons, few, if any, welfare economists would today use GNP alone to assess a nation’s welfare, preferring instead to use a collection of “social indicators.”

Conclusion

Output should not be the sole criterion for antitrust policy. We can do a better job of using competition policy to increase human welfare without this dogma. In this article, we showed that we cannot be certain that output increases welfare even in a purely hypothetical world where welfare depends solely on the output of commodities. In the real world, where welfare depends on a multitude of factors besides output—many of which can be addressed by competition policy—the case against a unilateral output goal is much stronger.

Addendum

The Original Sling posting inadvertently left off the two proposed graphs that we drew as we sought to remedy the Anonymous Board Member’s confusion about “equilibrium.” We now add the graphs we proposed. The explanation of the graphs was similar, and the discussion of GNP was identical to the original version.

The Proof if there is a Social Welfare Function (Revised Graph)

[Revised graph image]

The Individual Utility Proof (Revised Graph)

[Revised graph image]

Over the past two years, heterodox economic theory has burst into the public eye more than ever, as conventional macroeconomic models have failed to explain the economy we’ve been living in since 2020. In particular, theories around consolidation and corporate power as factors in macroeconomic trends, from neo-Brandeisian antitrust policy to theories of profit-seeking as a driver of inflation, have exploded onto the scene. While “heterodox economics” isn’t really a singular thing (it’s more a banner term for anything that breaks from the well-established schools of thought), the ideas it represents challenge decades of consensus within macro- and financial economics. This development, of course, has left the proponents of the traditional models rather perturbed.

One of the heterodox ideas that has seen the most media attention is the idea of sellers’ inflation: the theory that inflation can, at least partially, be a result of companies using economic shocks as smokescreens to exercise their market power and raise the prices they charge. The name most associated with this theory is Isabella Weber, a professor of economics at the University of Massachusetts, but there are certainly other economists who support this theory (and many more who support elements of it but are holding out for more empirical evidence before jumping into the rather fraught public debate).

Conventional economists have been bristling about sellers’ inflation being presented as an alternative to the more staid explanation of a wage-price spiral (we’ll come back to that), but in recent months there have been extremely aggressive (and often condescending, self-important, and factually incorrect) attacks on the idea and its proponents. Despite this, sellers’ inflation really is not that far from a lot of longstanding economic theory, and the idea is grounded in key assumptions about firm behavior that are deeply held across most economic models.

My goal here is threefold: first, to explain what the sellers’ inflation and conventional models actually are; second, to break down the most common lines of attack against sellers’ inflation; third, to demonstrate that, whatever its shortcomings, sellers’ inflation is better supported than the traditional wage-price spiral. Many even seem to recognize this, shifting to an explanation of corporations just reacting to increased demand. As we’ll see, that explanation is even weaker.

What Is Sellers’ Inflation?

The Basic Story

As briefly mentioned above, sellers’ inflation is the idea that, in significantly concentrated sectors of the economy, coordinated price hikes can be a significant driver of inflation. While the concept’s opponents generally prefer to call it “greedflation,” largely as a way of making it seem less intellectually serious, the experts actually advancing the theory never use that term for a very simple reason: it doesn’t really have anything to do with variance in how greedy corporations are. It does rely on corporations being “greedy,” but so do all mainstream economic theories of corporate behavior. Economic models around firm behavior practically always assume companies to be profit maximizing, conduct which can easily be described as greedy. As we’ll see, this is just one of many points in which sellers’ inflation is actually very much aligned with prevailing economic theory.

Under the sellers’ inflation model, inflation begins with a series of shocks to the macroeconomy: a global pandemic causes an economic crash. Governments respond with massive fiscal stimulus, but the economy experiences huge supply chain disruptions that are further worsened by the Russian invasion of Ukraine. All of these events caused inflation to increase either by decreasing supply or increasing demand. The stimulus checks increased demand by boosting consumers’ spending power, which is exactly what they were supposed to do. Both strained supply chains and the sanctions cutting Russia off from global trade restricted supply. Contrary to what some opponents of sellers’ inflation will say, the theory does not deny the stimulus being inflationary (though some individual proponents might). Rather, sellers’ inflation is an explanation for the sustained inflation we saw over the past two years. Those shocks led to a mismatch between demand and supply for consumer goods, but something kept inflation high even after the effects of those shocks should have waned.

The culprit is corporate power. With such a whirlwind of economic shocks, consumers are less able to tell when prices are rising to offset increases in the cost of production versus when prices are being raised purely to boost profit. This, too, is not at odds with conventional macro wisdom. Every basic model of supply and demand tells us that when supply dwindles and demand soars, the price level will rise. Sellers’ inflation is an explanation of how and why prices rise and why prices will increase more in an economy with fewer firms and less competition. 

Sellers’ inflation is really just a specific application of the theory of rent-seeking, which has been largely accepted since it was introduced by David Ricardo, a contemporary of the father of modern economics, Adam Smith. (Indeed, this point, which I raised nearly a year and a half ago in Common Dreams, was recently explored in a new paper from scholars at the University of London.) As anyone who has ever watched a crime show could tell you, when you want to solve a whodunnit, you need to look at motive, means, and opportunity. The greed (which, again, is at the same level it always is) is the motive. Corporations will always seek to charge as high of a price as they can without being dangerously undercut by competitors. Sellers’ inflation doesn’t posit a massive increase in corporate greed, but a unique economic environment that allows firms to act upon the greed they have possessed all along.

Concentration is the means; when the market is in the hands of only one or a few firms, it becomes easier to raise prices for a couple of reasons. First, large firms have price-setting power, meaning they control enough of the sector that they are able to at least partially set the going rate for what they sell. Second, when there are only a few firms in a sector, wink-wink-nudge-nudge pricing coordination is much easier. Just throw some vague but loaded phrases into press releases or earnings calls that you know your competition will read, and see if they take the same tack. For simplicity, imagine an industry dominated by two firms, A and B. At any given point, both are choosing between holding prices steady and raising them (assume lowering prices is off the table because it is unprofitable; let’s keep it simple). This sets up the classic game-theoretical model of the prisoner’s dilemma:

                      A Maintains Price               A Raises Price
B Maintains Price     (A: no change, B: no change)    (A: loses, B: gains)
B Raises Price        (A: gains, B: loses)            (A: gains, B: gains)

In the chart above, each cell shows the change in A’s profit followed by the change in B’s. If both hold the price steady, nothing changes; we’re at an equilibrium. If one and only one firm raises prices, the price-hiker will lose money as price-conscious consumers switch to its competitor, which will now see higher profits. This makes the companies averse to raising prices on their own. But if both raise their prices, both will be able to increase their profits. That’s why collusion happens. But wait, isn’t that illegal? Yes, yes it is. But it is nigh on impossible to police implicit collusion, especially when there is a seemingly plausible alternative explanation for price hikes.
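The payoff structure described above can be written out and checked mechanically. The profit numbers below are invented; only their ordering matters:

```python
# Illustrative 2x2 pricing game (hypothetical payoffs; only the ordering
# matters). Actions: 0 = maintain price, 1 = raise price.
# payoff[(a, b)] = profit change for that firm when A plays a and B plays b.
payoff_A = {(0, 0): 0, (0, 1): 1, (1, 0): -1, (1, 1): 2}
payoff_B = {(0, 0): 0, (0, 1): -1, (1, 0): 1, (1, 1): 2}

def pure_nash_equilibria(pA, pB):
    """Profiles where neither firm can gain by deviating unilaterally."""
    eq = []
    for a in (0, 1):
        for b in (0, 1):
            a_best = all(pA[(a, b)] >= pA[(a2, b)] for a2 in (0, 1))
            b_best = all(pB[(a, b)] >= pB[(a, b2)] for b2 in (0, 1))
            if a_best and b_best:
                eq.append((a, b))
    return eq

print(pure_nash_equilibria(payoff_A, payoff_B))  # [(0, 0), (1, 1)]
```

Both “everyone maintains” and “everyone raises” are stable outcomes, and the second yields higher profit for both firms, which is why an environment that lets firms signal and move together matters so much.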

As James Galbraith wrote, in stable periods firms prefer the safer equilibrium of holding prices relatively steady. He explains:

In normal times, margins generally remain stable, because businesses value good customer relations and a predictable ratio of price to cost. But in disturbed and disrupted moments, increased margins are a hedge against cost uncertainties, and there develops a general climate of “get what you can, while you can.” The result is a dynamic of rising prices, rising costs, rising prices again — with wages always lagging behind.

And that gets us to opportunity, which is what the macroeconomic shocks provide. Firms probably did experience real increases in their production costs, which gives them good reason to raise their prices…to a point. But what has been documented by Groundwork Collaborative and separately by Isabella Weber and Evan Wasner is corporate executives openly discussing increasing returns using “pricing power,” which is code for charging more than is needed to offset their costs. This is executives signaling that they see an opportunity to get to that second equilibrium in the chart above, where everyone makes more money. And since the same information and rationale are likely to be present at all of the firms in an industry, they all have the incentive (or greed, if you prefer) to do the same. This is easiest to conceptualize in a sector with two firms, but it holds for any sector that remains concentrated. At some point, though, you reach a critical mass where there are one or more firms who won’t go along with it. As the number of firms increases, it becomes more and more probable that one won’t play along, which is why concentration facilitates coordination.

And that’s it. In an economy with significant levels of concentration — more than 75 percent of industries in the American economy have become more concentrated since the 1990s — and the smokescreen of existing inflation, corporate pricing strategy can sustain rising prices due to the uncertainty. Now, if you ask twenty different supporters of sellers’ inflation, you’ll likely get twenty slightly different versions of the story. However, the main beats are mostly agreed upon: 1) firms are profit maximizing, 2) they always want to raise prices but usually won’t out of fear of either being undercut by the competition or being busted for illegal collusion, and 3) other inflationary pressures provide some level of plausible deniability which lowers the potential downside of price increases.

What Evidence Is There?

The evidence available to support theories of sellers’ inflation is one of the main points of contention between its proponents and detractors. Despite that, there is strong theoretical and empirical evidence that backs the theory up.

First is a basic issue of accounting that nobody in the traditional macro camp seems to have a good answer for. Profits are always equal to the difference between revenues (all the money a company brings in) and costs (all the money a company sends out). 

Profits = Revenue – Costs

This is inviolable; that is simply the definition of profits. As I’ve written before, this means that the only two possible ways for a company to increase profits are by generating more revenue or by cutting costs (or a combination of the two, but let’s keep it simple). Costs can’t be the primary driver in our case because we know they’re increasing, not decreasing. Inflationary pressures should still have increased production costs like labor and any kind of input that is imported. Companies have also been adamant about the fact that they are facing rising costs; that’s their whole justification for price hikes. And mainstream economists would agree. They blame lingering inflation on a wage-price spiral, which says that workers demanding higher wages have driven cost increases that force companies to raise prices – resulting in higher inflation. As both sides agree that input costs are rising, the only possible explanation for increased profits is an increase in revenue. Revenue also has a handy little formula of its own:

Revenue = Price * Units Sold

While the units sold may have increased, price was the bigger factor. We know this for at least two key reasons: because of evidence showing that output (the units sold) actually decreased and because of the evidence from earnings calls compiled by Groundwork. Executives said their strategy was to raise prices, not to sell more products. And there are two very good reasons to believe the execs: (1) they know their firms better than anyone, and (2) they are legally required to tell the truth on those calls. (That second reason is also evidence of sellers’ inflation on its own; if the theory’s opponents don’t buy the explanation given by the executives to investors, they must think executives are committing securities fraud.)
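The accounting logic above can be made concrete with a toy example. Every number here is invented for illustration, not real firm data; the point is only that when costs rise and units sold fall, a profit increase can come from nowhere but price:

```python
# Illustrative figures (assumed, not real) for one firm across two periods,
# showing the accounting argument: with costs up and units sold down,
# higher profits can only come from a higher price.
price = {"t0": 10.00, "t1": 11.50}    # price per unit rises
units = {"t0": 100,   "t1": 95}       # output actually falls slightly
costs = {"t0": 800,   "t1": 850}      # input costs rise too

# Profits = Revenue - Costs, and Revenue = Price * Units Sold.
revenue = {t: price[t] * units[t] for t in ("t0", "t1")}
profit = {t: revenue[t] - costs[t] for t in ("t0", "t1")}

# Costs rose and units sold fell, yet profits rose anyway...
assert costs["t1"] > costs["t0"] and units["t1"] < units["t0"]
assert profit["t1"] > profit["t0"]

# ...so the entire profit increase is attributable to the price hike.
print(profit)  # → {'t0': 200.0, 't1': 242.5}
```

The identities do no causal work on their own; they simply rule out every channel for the observed profit growth except the price increase, which is the shape of the argument in the text.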

In rebuttal to the accounting issue, Brian Albrecht, chief economist at the International Center for Law and Economics, has argued that using accounting identities is wrongheaded:

Just as we never reason from a price change, we need to never reason from an accounting identity. My income equals my savings plus my consumption: I = S + C. But we would never say that if I spend more money, that will cause my income to rise.

This, on its face, seems like a reasonable argument, except all it really shows is that Albrecht doesn’t understand basic math. Tracking just one part of the equation won’t automatically tell us what the others do…duh. But we can track what a variable is doing empirically and use that relationship to make sense of it. We would never say that someone spending more money on consumption causes their income to rise. But we certainly could say that if we observe an increase in personal consumption, then we can reason that either their income increased or their savings decreased. The mathematical definition holds; you just have to actually consider all of the variables. In fact, Albrecht agrees, but warns, “Yes, the accounting identity must hold, and we need to keep track of that, but it tells us nothing about causation.” No, it tells us correlation. Which, by the way, is what econometrics and quantitative analyses tell us about as well.

The way you get to causation in economics is by tying theory and context to empirical correlations to explain those relationships. Albrecht’s case is just a very reductive view of the actual logic at play. He continues:

After all, any revenue PQ = Costs + Profits. So P = Costs/Q + Profits/Q. If inflation means that P goes up, it must be “caused” by costs or profits.

No, again. Stop it. This is like saying consumption causes income.

Once again, Albrecht is wrong here. This is like saying higher consumption will correspond to either higher income or lower savings. Additionally, there’s a key difference between the accounting identities for income and for profits: income is broken down into consumption and savings after you receive it, whereas costs and revenues must exist before profits. This makes causal inference in the latter much more reasonable; income is determined exogenously to that formula, but profits are endogenous to their accounting identity. 
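Albrecht’s own identity, P = Costs/Q + Profits/Q, actually makes the correlational bookkeeping easy to sketch. The figures below are hypothetical, carried over from no real dataset; the snippet just splits an observed price change into a unit-cost piece and a unit-profit (markup) piece:

```python
# Decompose an observed price into unit cost plus unit profit,
# per the identity P = Costs/Q + Profits/Q. Numbers are hypothetical.
def decompose_price(revenue, costs, quantity):
    unit_cost = costs / quantity
    unit_profit = (revenue - costs) / quantity
    price = revenue / quantity
    # The identity always holds by definition (up to float rounding).
    assert abs(price - (unit_cost + unit_profit)) < 1e-9
    return unit_cost, unit_profit

c0, m0 = decompose_price(revenue=1000.0, costs=800.0, quantity=100)   # P = 10.00
c1, m1 = decompose_price(revenue=1092.5, costs=850.0, quantity=95)    # P = 11.50

# The price rose by 1.50: how much was cost and how much was markup?
print(round(c1 - c0, 2), round(m1 - m0, 2))  # → 0.95 0.55
```

Nothing here assigns causation; it just shows that once you observe all three variables, the identity tells you how much of a price change corresponds to each component, which is the correlational point being made above.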

In addition to these observations, though, there is also a range of economic research that supports the idea of sellers’ inflation. Some of the best empirical evidence comes from this report from the Federal Reserve Bank of Boston, this one from the Federal Reserve Bank of San Francisco, and this one from the International Monetary Fund.

Another key piece of evidence is a Bloomberg investigation that found that the biggest price increases came from the largest firms. If market power were not a factor, then prices should have been rising roughly proportionally across firms, regardless of their size. If anything, large firms’ economies of scale should have cut down on the need to hike prices. Especially because basic economic theory tells us that when demand increases, companies want to expand supply, which should have resulted in more products (especially from larger firms with more resources) and a corresponding drop in price increases. And yet, what we actually saw was a drop in production from major companies like Pepsi, which opted instead to increase profits by maintaining a shortfall in supply.

That said, there’s plenty more, including this from the Kansas City Fed, this from Jacob Linger et al., this from French economists Malte Thie and Axelle Arquié, this from the European Central Bank, this one from the Roosevelt Institute, and more. The Bank of Canada has also endorsed the view. It seems unlikely that the Federal Reserve, the European Central Bank, and the Bank of Canada have all become bastions of activist economists unmoored from evidence. Perhaps it’s time those denying sellers’ inflation were labeled the ideologues.

The Case Against Sellers’ Inflation

A Few Notes on Semantics

Before we get into the substance of critiques against sellers’ inflation as a theory, there are a few miscellaneous issues with the framing its opponents often use. There is a tendency for arguments against sellers’ inflation to use loaded words or skewed phrasing to implicitly undermine the legitimacy of people who are spearheading the push for greater scrutiny of corporations as a part of managing inflation.

For instance, Eric Levitz says the debate sees “many mainstream economists against heterodox progressives.” This phrasing suggests that the debate is between economists on the one hand and proponents of sellers’ inflation on the other. But that’s not true! There are both economists and non-economists on both sides of the issue. Weber is an economist, as are the researchers at the Boston and San Francisco Feds. And others, including James Galbraith, Paul Donovan, Hal Singer, and Groundwork’s Chris Becker and Rakeen Mabud, are on board. Notably, Lael Brainard, the head of President Biden’s National Economic Council (and former Federal Reserve Vice Chair), recently endorsed the view.

Or take how Kevin Bryan, a professor of management at the University of Toronto, described Isabella Weber as a “young researcher” who “has literally 0 pubs on inflation.” Weber is old enough to have two PhDs and tenure at UMass and, will you look at that, has written about inflation before! Presenting her as young sets the stage for making her seem inexperienced, which saying she has no publications doubles down on. But his claims are false. Weber wrote a paper with Evan Wasner specifically about sellers’ inflation. But even if we take Bryan’s point as true and ignore the very real work Weber has done on inflation and pricing, Weber still has significant experience with political economy, which helps to explain how institutional power is able to influence markets, exactly the type of thinking sellers’ inflation is based upon.

(And this is nothing compared to the abuse that Weber endured after an op-ed in The Guardian provoked a frenzy of insulting, condescending attacks from many professional economists. For more on that, see Zach Carter’s New Yorker profile of Weber and/or this Twitter thread documenting Noah Smith’s lashing out at Weber.)

But even the semantics that don’t get into ad hominem territory are confusing. Let’s just run through the topline concerns that Kevin Bryan raised real quick:

  1. What does “very online” even mean? Sellers’ inflation has been embraced as at least a plausible concept by the President of the United States, the European Central Bank, at least two Federal Reserve Banks, and the International Monetary Fund. If that’s not enough legitimization, it’s hard to know what would be. This concern makes it sound like the proponents are random Reddit users, rather than the serious academics and policymakers they are.
  2. I don’t know why the presence of “virulent defenders” undermines the idea itself. Defenders of traditional economics are virulent as well; Larry Summers called the idea of relating antitrust policy to inflation “science denial.”
  3. Traditional monetary policy is often (but not always) associated with centrist, pro-business politics. Also, conventional Industrial Organization theory and even Borkian consumer welfare theories recognize a relationship between price and the structure of firms and markets, so the fundamental ideas are certainly not leftist.
  4. The complaint that proponents of sellers’ inflation refer to gatekeepers shooting down these theories seems disingenuous. Everyone who supports sellers’ inflation would probably rather be discussing it because of its merits. But when people like Bryan or Larry Summers refuse to even consider the idea as potentially legitimate, the only option left is to discuss it because of the iconoclasm. If there isn’t a story about changing academic opinions, then the story about challenges to conventional wisdom being shut out by the old guard will have to do.

All of this is to set up the next point in that Twitter thread, which is that “being an Iconoclast is not the same thing as being rigorous, or being right.” True, but dodging the debate by attacking the credibility of an idea’s advocates and taking issue with the method of dissemination are also not the same as being rigorous. Or as being right.

These are just a couple of examples, but opponents of this theory really lean into making it sound like its champions are inexperienced and don’t know what they’re talking about. Aside from being in bad faith, this also indicates a lack of confidence in comparing the contemporary story to that of sellers’ inflation.

The Theoretical Substance of the Opposition

With the semantics out of the way, it’s time to get into the meat of the case(s) against sellers’ inflation. There is no singular, unified case here; it’s more of a constellation of related ideas.

The first line of defense against theories of sellers’ inflation is asserting that traditional macroeconomics is good and has solved our inflation problem. For example, Chris Conlon of NYU has credited rate hikes with inflation cooling. Conlon says “I for one am glad Powell and Biden admin followed boring US textbook ideas.” But there’s a problem with that: the contemporary economic story does not actually explain how rate hikes can cool inflation without a corresponding rise in unemployment. 

The traditional story starts in the same place as the sellers’ inflation story: macroeconomic shocks create inflation. (Although the traditionalists prefer to emphasize fiscal stimulus as the primary shock, rather than supply chains. The evidence largely indicates that stimulus did have some inflationary effect, but not much. The global nature of inflation also undercuts the idea that American domestic fiscal policy could be the main explanation.) The shock(s) create a supply and demand mismatch, with too much money chasing too few available goods. After that, however, the traditional mechanism for explaining inflation remaining high is supposed to be a wage-price spiral. 

The story goes something like this: the stimulus boosted consumer demand, which overheated the economy, and created more jobs than could be filled, meaning job seekers negotiated higher pay when they took positions. They then spent that extra money which increased demand further, leading to even higher prices as supply couldn’t keep up with demand. Workers saw that their cost of living went up, so they took the opportunity to demand better pay. Companies were forced to give in because they knew in a hot labor market, their workers could leave and earn more elsewhere if employers didn’t meet workers’ demands. Once their wages went up, those workers had more spending power, which they used to buy more things, further increasing demand. That elevated prices more, as the supply-demand mismatch increased. Now workers see their cost of living rising again, so they ask for another raise. If this pattern has held for a few rounds of pay negotiations, maybe workers ask for more than they otherwise would, trying to get out ahead of their spending power shrinking again. Rinse and repeat.

But we know that this story doesn’t describe the inflation that we saw over the last couple of years. Wage growth lagged behind inflation, which indicates that something else had to be driving price increases. Plus, the Phillips curve, which is meant to illustrate this relationship between higher employment and higher inflation, has been broken in the US for years. It simply does not show a meaningful positive relationship anymore.

It’s important that we understand this story as a whole. Levitz, in his piece, tries to separate the initial supply-demand mismatch from the wage-price spiral as a way of making the conventional model stack up better against sellers’ inflation. But that doesn’t actually hold because if you omit the wage-price spiral (which Levitz agrees seems dubious), the mainstream macro story has no mechanism for inflation staying high. If it were just a one-time stimulus, that would explain a one-time inflation spike, but once that money is all sent out (say by the end of 2021), there’s no source for further exacerbating the supply-demand mismatch (in say the end of 2022 or early 2023). (Remember, inflation is the rate of change of prices, so if prices spike and then stay the same afterwards, that plateau will reflect a higher price level but not sustained high inflation.) 
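The parenthetical point about rates of change is easy to verify with a few lines of code. The price series below is invented purely to illustrate the mechanics: a one-off shock raises the price level once, so measured inflation spikes for a single period and then falls back to zero even though prices stay high.

```python
# A made-up price-level series: a one-time 8% jump, then a plateau.
price_level = [100, 100, 108, 108, 108, 108]

# Inflation is the period-over-period rate of change of the price level.
inflation = [
    round((p1 / p0 - 1) * 100, 1)
    for p0, p1 in zip(price_level, price_level[1:])
]
print(inflation)  # → [0.0, 8.0, 0.0, 0.0, 0.0]
```

A single stimulus shock, on its own, produces that one spike; explaining inflation that stays elevated requires some ongoing mechanism, which is exactly the gap in the story being discussed.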

Similarly, focusing on only the supply-side shocks provides no reason for why inflation remained elevated long after supply chain bottlenecks had cleared and shipping prices had fallen.

The incentive shift that occurs in concentrated markets is key to understanding this. In a competitive market, firms’ response to a surge in demand is to produce more. But when the market is concentrated and some level of implicit coordination is possible, increased production is actually against a firm’s best interest; it would just put the firms back at that first equilibrium from earlier. They want to enjoy the high prices and hang out in the second equilibrium as long as they can.

Sellers’ inflation, at least, has an internal mechanism that can explain how we got from one-off shocks to the economy to sustained inflation. Yet its opponents wrongly describe what that mechanism is. Remember the story from earlier: the motive of profit maximization, the means of market power in concentrated industries, and the opportunity of existing inflation. The most basic objection to this mechanism is to mischaracterize it as blaming sustained upward pressure on prices on an increase in the level of greed among corporations. That’s what economist Noah Smith did in a number of blogs that have aged quite poorly. But no one is seriously arguing companies are greedier, only that there is an innate level of greed, which conventional models also assume. 

The strawmanning continues when we get to the means, which is what this Business Insider piece by Trevon Logan of Ohio State does by pointing out how Kingsford charcoal tried and failed to rent-seek by raising prices, which just caused it to lose market share to retailers’ generic brands. Exactly! The competition in the charcoal market demonstrates why consolidation is a key ingredient in sellers’ inflation. If Kingsford had a product without so many generic substitutes, then consumers would not have had the chance to switch products. And that’s why a lot of the biggest price hikes occurred with goods like gas, meat, and eggs, all of which are controlled by cartel-esque oligopolies.

The opportunity component actually seems to be a point that there’s broad agreement on. For example, Conlon says that the “idea that firms might raise prices by more than their costs is neither surprising nor uncommon.” He goes on to suggest, however, that this is likely because firms expect costs to continue rising. There’s certainly an element of truth to that, but also consider the basic motivation of corporations: maximizing profits. As a result, if companies expect their costs to rise by, say, 5 percent over the next year and they’re going to adjust prices anyway, why not raise prices by 7 percent, more than enough to offset expected cost increases? 
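The arithmetic in that closing question is worth spelling out. The cost and price figures below are invented; they just show that a price hike calibrated above an expected cost increase widens the margin even after the cost increase fully materializes:

```python
# Hypothetical figures: a firm expects costs to rise 5% and hikes prices 7%.
unit_cost, price = 8.00, 10.00
margin_before = price - unit_cost      # margin at the starting point

unit_cost *= 1.05   # the expected 5% cost increase arrives
price *= 1.07       # but the price was raised by 7%
margin_after = price - unit_cost

# The per-unit margin expands despite the higher costs.
print(round(margin_before, 2), round(margin_after, 2))  # → 2.0 2.3
```

The "cover expected costs" story and the "expand margins under cover of inflation" story are thus observationally similar from the outside; the difference is only whether the hike exceeds the expected cost increase.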

The theoretical case against sellers’ inflation is, as Eric Levitz noted, “deeply confused”; he was just wrong about which side was getting stumped.

The Empirical Case Against Sellers’ Inflation

The other side of the opposition to sellers’ inflation focuses on the empirics. To be fair, there’s certainly more work that needs to be done. But that’s about as far as the critique goes. The response is just “the data isn’t there.” I’ll refer you to Groundwork’s excellent work on executives saying that they are raising prices beyond costs, Weber’s paper, the Boston and San Francisco Fed papers, Bloomberg’s findings about larger firms charging higher prices, Linger et al.’s case study of concentration and price in rent increases, and the IMF working paper. 

Setting aside the very real empirical evidence in support of sellers’ inflation, the argument about a lack of empirics still gives no reason to default to the traditional model of inflation. Even if we accept a lack of data for sellers’ inflation, we have quite a lot of data that directly contradicts the mainstream story. Surely, something unproven is still preferable to something disproven.

Some economists like Olivier Blanchard have raised questions about methodology and the need for more work. Great! That’s what good discourse is all about; being skeptical of ideas is fine, as long as you don’t throw them out on gut instinct. Unfortunately, critics often simply reject the theory rather than express skepticism. When they do, however, they often fall into the same methodological gaps of which they accuse “greedflation” proponents. For example, Chris Conlon egregiously conflates correlation and causation when crediting the Fed’s monetary policy. And Brian Albrecht takes issue with inductive logic while siding with a traditional story that makes up ever more convoluted, illusory concepts.

So Where Does That Leave Us?

The traditional model of inflation is broken. The Phillips curve is no longer a useful tool for understanding inflation, a wage-price spiral flies in the face of reality, and there’s no viable alternative mechanism for sustained inflation within the demand-side model. Enter sellers’ inflation.

From the same starting point, and drawing on several cornerstone pieces of economic theory, sellers’ inflation is able to provide a consistent vehicle for one-off shocks to create prolonged upward pressure on price levels as firms exercise their market power. The bedrock ideas of the theory are consistent with seminal economic thought from the likes of David Ricardo and even Adam Smith himself, and the theory has the support of a number of subject matter experts. Is it a perfect theory? No, but to paraphrase President Biden, don’t compare it to the ideal, compare it to the alternative. More empirics would be preferable, but the case for sellers’ inflation remains much stronger than the case for a fiscal stimulus igniting a wage-price spiral, which is entirely anathema to most of the evidence we do have.

One way or another, inflation is trending down and, by some measures, is closing in on the target rate again. Many have rushed to credit the Federal Reserve for following the textbook course, but they don’t have any internal story about how the Fed could have done that without increasing unemployment. As Nobel laureate Paul Krugman (who supported rate hikes and once bashed the theory of sellers’ inflation) asked, “Where’s the rise in economic slack?” The conventional story is missing its second chapter, and yet its advocates are eager to point to an ending they can’t explain as all the justification they need to avoid reconsidering their priors. One possibility Krugman notes, which Matthew Klein explicates here, is that inflation really was transitory the whole time. The sharp upward pressures were, indeed, caused by one-off shocks from the pandemic, supply chains, and Russian aggression, but the effects had unusually long tails. This theory aligns very well with sellers’ inflation; corporate price hikes could simply be the explanation for such long-lasting effects.

Additionally, as Hal Singer pointed out, the recent drop in inflation corresponds to a downturn in corporate profits. Some, including Noah Smith (in that tweet’s comments), disagree and argue that both lower profits and less inflation are caused by new slack in demand. But that doesn’t really match what we’re seeing across macroeconomic data. True, employment growth has slowed, as has the growth of personal consumption, but that still doesn’t match up with the type of disinflationary pressure that we were supposed to need; Larry Summers was citing figures as high as 6 percent unemployment. Plus, the metrics that do show demand softening largely only show that employment and consumption are steadying, not decreasing. On top of that, the contraction in output that The Wall Street Journal identified makes the case for simple shifts in demand driving price levels dubious. Additionally, if a wage-price spiral were at fault, employment growth leveling off would not be enough; the labor market would still be too tight (aka inflationary), hence why we’d need to increase unemployment.

Good economic theories always need more work to apply them to new situations and produce quality empirics. But pretending that sellers’ inflation is a wacky idea while the conventional macro story maps perfectly onto the economy of the past three years is thumbing your nose at the most complete story available, significant empirical evidence, and centuries of economic theory.

Dylan Gyauch-Lewis is Senior Researcher at the Revolving Door Project.