As the DOJ’s antitrust case against Google begins, all eyes are focused on whether Google violated antitrust law by, among other things, entering into exclusionary agreements with equipment makers like Apple and Samsung or web browsers like Mozilla. Per the District Court’s Memorandum Opinion, released August 4, “These agreements make Google the default search engine on a range of products in exchange for a share of the advertising revenue generated by searches run on Google.” The DOJ alleges that Google unlawfully monopolizes the search advertising market.
Aside from matters relating to antitrust liability, an equally important question is what remedy, if any, would work to restore competition in search advertising in particular and in online advertising generally.
Developments in the UK might shed some light. The UK Treasury commissioned a report to make recommendations on changes to competition law and policy, aimed at helping to “unlock the opportunities of the digital economy.” The report found that Big Tech’s monopolization of data and control over open web interoperability could undermine innovation and economic growth. Big Tech platforms now hold the data, block interoperability with other sources, and will capture still more data through their enormous customer-facing operations. They can therefore be expected to dominate the data needed for the AI era, enabling them to hold back competition and economic growth.
The dominant digital platforms currently provide services to billions of end users. Most of us carry either an Apple or an Android device in our pocket. These devices operate as part of integrated distribution platforms: anything anyone wants to obtain from the web goes through the device, its browser (often with Google as the default search engine), and the platform before reaching the Open Web, if the interaction does not simply stay inside an app from the platform’s app store, within the walls of the garden.
Every interaction with every platform product generates data, refreshed billions of times a day from multiple touch points. That data provides insight into buying intent and can be used to predict people’s behavior and trends.
All this data is used to generate alphanumeric codes that match records across databases (aka “Match Keys”), which help computers interoperate and serve ads relevant to users’ interests. For many years, Match Keys derived from the widely distributed DoubleClick ID were used industry-wide: they were shared across the web and served as the main source of data for competing publishers and advertisers. After Google bought DoubleClick and grew big enough to “tip” the market, however, Google withdrew access to its Match Keys for its own benefit.
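Conceptually, a Match Key is just a stable identifier that lets two independent databases join their records about the same user. A minimal sketch of the idea (the hashing scheme and field names here are illustrative assumptions, not DoubleClick’s actual method — real ad-tech IDs are assigned, not hashed):

```python
import hashlib

def match_key(raw: str) -> str:
    # Derive a stable pseudonymous identifier from a raw field.
    # Illustrative only: real Match Keys are assigned IDs, not hashes.
    return hashlib.sha256(raw.strip().lower().encode()).hexdigest()[:16]

# Two independent publishers hold separate records about the same user.
publisher_a = {match_key("alice@example.com"): {"interest": "running shoes"}}
publisher_b = {match_key("  Alice@Example.COM"): {"last_purchase": "marathon entry"}}

# Because both derive the same key, the two databases can interoperate:
shared = {k: {**publisher_a[k], **publisher_b[k]}
          for k in publisher_a.keys() & publisher_b.keys()}
```

The point of the sketch is the join: whoever controls the keys controls which parties can combine their data, which is why withdrawing access to them matters.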
The interoperability that is a feature of the underlying internet architecture has gradually been eroded. Facebook collected its own data from users’ “Likes” and community groups and likewise withdrew independent publishers’ access to its Match Key data. More recently, Apple has restricted access to Match Key data useful for ads for all publishers, with one exception: Google’s special deal on search and search data. As revealed in U.S. v. Google, Google pays Apple over $10 billion a year so that Google can provide its search product to Apple users and gather all of their search history data, which it can then use for advertising. The data generated by end users’ interactions with websites is now captured and kept within each Big Tech walled garden.
If the Match Keys were shared with rival publishers for use in their independent supply channels and in their own ad-funded businesses, interoperability would be improved and effective competition with the tech platforms could develop. Without that access, such competition is unlikely to emerge.
Both Google and Apple currently impose restrictions on access to data and interoperability. Cookie files also contain Match Keys: they help maintain computer sessions and “state,” letting different computers talk to each other, remember previous visits to websites, and enable e-commerce. Cookies do not themselves contain personal data and are much less valuable than the Match Keys that DoubleClick developed for advertisers, but they do provide independent publishers with something of a substitute source of data about users’ intent to purchase.
Google and Apple are in the process of blocking access to Match Keys in all forms to prevent competitors from obtaining relevant data about users’ needs and wants. They also restrict the use of the Open Web and limit the interoperation of their app stores with Open Web products, such as progressive web apps.
The UK’s Treasury Report refers to interoperability 8 times and to the need for open standards as a remedy 43 times; the Bill refers to interoperability, and we expect further debate about the issue as the Bill passes through Parliament.
A Brief History of Computing and Communications
The solution to monopolization, or a lack of competition, is the generation of competition and more open markets. For that to happen in digital markets, access to data and interoperability are needed. Each previous period of monopolization was ended by intervention that opened up computer and communications interfaces, via antitrust cases and policies that opened markets and liberalized trade. We have learned that the authorities need to police standards for interoperability and open interfaces to ensure the playing field is level and innovation can take place unimpeded.
IBM’s conduct involved bundling computers and peripherals, and the case was eventually resolved by unbundling and unblocking the interfaces competitors needed to interoperate with other systems. Microsoft did the same thing, blocking third parties from interoperating with its operating system by blocking access to its interfaces. Again, the case was resolved by opening up interfaces to promote interoperability and competition among products that could then be offered over the platform.
When Tim Berners-Lee created the World Wide Web in the early 1990s, it was nearly ten years after the U.S. courts imposed a break-up of AT&T and after the liberalization of telecommunications data transmission markets in the United States and the European Union. That liberalization was enabled by open interfaces and published standards. To ensure that new entrants could provide services to business customers, a form of data portability was mandated, enabling numbers held in incumbent telecoms’ databases to be transferred for use by new telecoms suppliers. The combination of interconnection and data portability neutralized the barrier to entry created by the network effect arising from monopoly control over number data.
The opening of telecoms and data markets in the early 1990s ushered in an explosion of innovation. To this day, if computers implement the Hypertext Transfer Protocol (HTTP), they can talk to other computers. In the early 1990s, a level playing field was created for decentralized competition among millions of businesses.
These major waves of digital innovation perhaps all have a common cause. Because computing and communications both have high fixed costs and low variable or incremental costs, and messaging and other systems benefit from network effects, markets may “tip” to a single provider. Competition in computing and communications then depends on interoperability remedies. Open, publicly available interfaces in published standards allow computers and communications systems to interoperate; and open decentralized market structures mean that data can’t easily be monopolized.
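The cost structure behind tipping can be made concrete with toy numbers (all figures here are hypothetical): when fixed costs are high and marginal costs are near zero, average cost per user falls indefinitely with scale, so the largest provider can always undercut smaller rivals.

```python
def avg_cost(users: int, fixed: float = 1_000_000.0, marginal: float = 0.01) -> float:
    # Average cost per user: a large fixed cost spread over the user base,
    # plus a tiny per-user marginal cost.
    return fixed / users + marginal

small_rival = avg_cost(10_000)        # fixed costs dominate: ~100.01 per user
big_incumbent = avg_cost(10_000_000)  # scale dilutes fixed costs: ~0.11 per user
```

With these (assumed) figures, the incumbent serves each user roughly a thousand times more cheaply, which is the arithmetic behind markets tipping to a single provider.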
It’s All About the Match Keys
The dominant digital platforms currently capture data and prevent interoperability for commercial gain. The market is concentrated, with each platform building its own walled garden and restricting data sharing and communication across platforms. Try cross-posting among different platforms for an example of a current interoperability restriction. Consider why messaging is confined within each messaging app, rather than working across different systems as email does. Each platform restricts interoperability, preventing third-party businesses from offering their products to users captured in its walled garden.
For competition to operate in online advertising markets, a similar remedy to data portability in the telecom space is needed. Only, with respect to advertising, the data that needs to be accessed is Match Key data, not telephone numbers.
The history of anticompetitive abuse and remedies is a checkered one. In the EU Microsoft case, Microsoft was prohibited from discriminating against rivals and had to put up a choice screen. It did not work out well. Google was similarly prohibited by the EU from (1) discriminating against rivals in its search engine results pages, in the Google Search (Shopping) case, and, in the EU Android case, from (2) entering exclusive agreements with handset suppliers that discriminated against rivals and (3) showing only Google products straight out of the box. The remedies did not address the monopolization of data and its use in advertising. Little has changed, and competitors claim that the remedies are ineffective.
Many in the advertising, publishing, and ad tech markets recall that the market worked pretty well before Google acquired DoubleClick. Google uses multiple data sources as the basis for its Match Keys, and an access and interoperability remedy might be more effective, more proportionate, and less disruptive.
Perhaps if the DOJ’s case examines why Google collects search data from its search engine, and how it uses search histories, browser histories, and data from all interactions with all of its products to build its Match Key for advertising, the court will better appreciate the importance of data to competitors and how to remedy that position for advertising-funded online publishing.
Following Europe’s Lead
The EU position is developing. The EU’s Digital Markets Act (DMA), which now supplements EU antitrust law as applied in the Google Search and Android decisions, recognizes that people want to provide products and services across different platforms, and to cross-post or communicate with people connected to each social network or messaging app. In response, the EU has imposed obligations on Big Tech platforms, in Articles 5(4) and 6(7), that provide for interoperability and require gatekeepers to allow open access to the web.
Similarly, Section 20(3)(e) of the UK’s Digital Markets, Competition and Consumers Bill (DMCC) refers to interoperability and may be the subject of further debate as the bill passes through Parliament. Unlike U.S. jurisprudence, with its recent fixation on consumer welfare, the objective of the Competition and Markets Authority is imposed by law: the obligation to “promote competition for the benefit of consumers” is contained in EA 2013 s 25(3). That objective can be expressly related to intervention opening up access to the source of the current data monopolies: the Match Keys could be shared, so that all publishers could get access to IDs for advertising (i.e., operating-system-generated IDs such as Apple’s IDFA, Google’s Google ID, or MAIDs).
In all jurisdictions it will be important for remedies to stimulate innovation and to ensure that competition is promoted between all products that can be sold online, rather than between integrated distribution systems. Moreover, data portability needs to apply to the use of open and interoperable Match Keys that can be used for advertising, and in that way address the risk of data monopolization. As with the DMA, the DMCC should contain an obligation for gatekeepers to ensure fair, reasonable, and nondiscriminatory access, treating advertisers in a manner similar to the way interoperability and data portability addressed monopoly advantages in the earlier computer, telecoms, and messaging cases.
Tim Cowen is the Chair of the Antitrust Practice at the London-based law firm of Preiskel & Co LLP.
This piece originally appeared in ProMarket but was subsequently retracted, with the following blurb (agreed-upon language between ProMarket’s Luigi Zingales and the authors):
“ProMarket published the article “The Antitrust Output Goal Cannot Measure Welfare.” The main claim of the article was that “a shift out in a production possibility frontier does not necessarily increase welfare, as assessed by a social welfare function.” The published version was unclear on whether the theorem contained in the article was a statement about an equilibrium outcome or a mere existence claim, regardless of the possibility that this outcome might occur in equilibrium. When we asked the authors to clarify, they stated that their claim regarded only the existence of such points, not their occurrence in equilibrium. After this clarification, ProMarket decided that the article was uninteresting and withdrew its publication.”
The source of the complaint that caused the retraction was, according to Zingales, a ProMarket Advisory Board member. The authors had no contact with that person, nor do we know who it is. We would have welcomed published scholarly debate versus retraction compelled by an anonymous Board Member.
We reproduce the piece in its entirety here. In addition, we provide our proposed revision to the piece, which we wrote to clear up the confusion that it was claimed was created by the first piece. We will let our readers be the judge of the piece’s interest. Of course, if you have any criticisms, we welcome professional scholarly debate.
(By the way, given that the piece never mentions supply or demand or prices, it is a mystery to us why any competent economist could have thought it was about “equilibrium.” But perhaps “equilibrium” was a pretext for removing the article for other reasons.)
The Antitrust Output Goal Cannot Measure Welfare (ORIGINAL POST)
Many antitrust scholars and practitioners use output to measure welfare. Darren Bush, Gabriel A. Lozada, and Mark Glick write that this association fails on theoretical grounds and that ideas of welfare require a much more sophisticated understanding.
By Darren Bush, Gabriel A. Lozada, and Mark Glick
The discourse on consumer welfare theory seems to have pivoted to the question of whether welfare can be indirectly measured based upon output. The tamest of these claims is not that output measures welfare, but that, generally, output increases are associated with increases in economic welfare.
This claim, even at its tamest, is false. For one, welfare depends on more than just output, and increasing output may detrimentally affect some of the other factors on which welfare depends. For example, increasing output may cause working conditions to deteriorate; may cause competing firms to close, resulting in increased unemployment, regional deindustrialization, and fewer avenues for small business formation; may increase pollution; may increase the political power of the growing firm, resulting in more public policy controversies and, yes, more lawsuits being decided in its interest; and may adversely affect suppliers.
Even if we completely ignore those realities, it is still possible for an increase in output to reduce welfare. These two short proofs show that even in the complete absence of these other effects—that is, even if we assume that people obtain welfare exclusively by receiving commodities, which they always want more of—increasing output may reduce welfare.
We will first prove that it is possible for an increase in output to reduce welfare under the assumption that welfare is assessed by a social planner. Then we will prove it assuming no social planner, so that welfare is assessed strictly via individuals’ utility levels.
The Social Planner Proof
Here we show that a shift out in a production possibility frontier does not necessarily increase welfare, as assessed by a social welfare function.
Suppose in the figure below that the original production possibility frontier is PPF0 and the new production possibility frontier is PPF1. Let USWF be the original level of social welfare, so that the curve in the diagram labeled USWF is the social indifference curve when the technology is represented by PPF0. This implies that when the technology is at PPF0, society chooses the socially optimal point, I, on PPF0. Next, suppose there is an increase in potential output, to PPF1. If society moves to a point on PPF1 which is above and to the left of point A, or is below and to the right of point B, then society will be worse off on PPF1 than it was on PPF0. Even though output increased, depending on the social indifference curve and the composition of the new output, there can be lower social welfare.
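The argument can be checked with numbers. The functional forms below are our own illustration (not from the figure): take a social welfare function SWF(x, y) = min(x, y), an original frontier PPF0 given by x + y = 10, and an expanded frontier PPF1 given by x + y = 12.

```python
def swf(x: float, y: float) -> float:
    # Illustrative social welfare function: society values balanced output.
    return min(x, y)

# Socially optimal point I on PPF0 (x + y = 10): the balanced bundle.
welfare_on_ppf0 = swf(5, 5)

# A point on the *larger* frontier PPF1 (x + y = 12), but badly skewed --
# the analogue of landing above-left of A or below-right of B in the figure.
welfare_on_ppf1 = swf(11, 1)

# Total output rose from 10 to 12, yet social welfare fell from 5 to 1.
```

The numbers simply instantiate the existence claim: a larger frontier contains points of lower social welfare, so "more output" does not by itself mean "more welfare."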
The Individual Utility Proof
Next, we continue to assume that only consumption of commodities determines welfare, and we show that when output increases every individual can be worse off. Consider the figure below, which represents an initial Edgeworth Box having solid borders, and a new, expanded Edgeworth Box, with dashed borders. The expanded Edgeworth Box represents an increase in output for both apples and bananas, the two goods in this economy.
The original, smaller Edgeworth Box has an origin for Jones labeled J and an origin for Smith labeled S. In this smaller Edgeworth Box, suppose the initial position is at C. The indifference curve UJ0 represents Jones’s initial level of utility in the smaller Edgeworth Box, and the indifference curve US represents Smith’s initial level of utility in the smaller Box. In the larger Edgeworth Box, Jones’s origin shifts from J to J′, and his UJ0 indifference curve correspondingly shifts to UJ0′. Smith’s US indifference curve does not shift. The hatched areas in the graph are all the allocations in the bigger Edgeworth Box that are worse for both Smith and Jones compared to the original allocation in the smaller Edgeworth Box.
In other words, despite the fact that output has increased, if the new allocation is in the hatched area, then Smith and Jones both prefer the world where output is lower. We get this result because welfare is affected by allocation and distribution as well as by the sheer amount of output, and more output, if mis-allocated or poorly distributed, can decrease welfare.
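A numeric instance of the same point (our own numbers and utility functions, chosen for concreteness; the proof itself uses only indifference curves):

```python
def utility(apples: float, bananas: float) -> float:
    # Illustrative utility: each person wants balanced bundles of both goods.
    return min(apples, bananas)

# Smaller Edgeworth Box: total endowment (10 apples, 10 bananas), split evenly at C.
jones_before = utility(5, 5)
smith_before = utility(5, 5)

# Larger Box: totals rise to (12, 12), but the new allocation is badly skewed --
# the analogue of landing in the hatched area.
jones_after = utility(11, 1)
smith_after = utility(1, 11)

# Output of *both* goods increased, yet *both* individuals are worse off.
```

Here total apples and bananas each rose from 10 to 12, yet each person's utility fell from 5 to 1, because the extra output was mis-allocated.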
GDP Also Does Not Measure Aggregate Welfare
The argument that “output” alone measures welfare sometimes refers not to literal output, as in the two examples above, but to a reified notion of “output.” A good example is GDP. GDP is the aggregated monetary value of all final goods and services, weighted using current prices. Welfare economists, beginning with Richard Easterlin, have understood that GDP does not accurately measure economic well-being. Since prices are used for the aggregation, GDP incorporates the effects of income distribution, but in a way which hides this dependence, making GDP seem value-free although it is not. In addition, using GDP as a measure of welfare deliberately ignores many important welfare effects while only taking into account output. As Amit Kapoor and Bibek Debroy put it:
GDP takes a positive count of the cars we produce but does not account for the emissions they generate; it adds the value of the sugar-laced beverages we sell but fails to subtract the health problems they cause; it includes the value of building new cities but does not discount for the vital forests they replace. As Robert Kennedy put it in his famous election speech in 1968, “it [GDP] measures everything in short, except that which makes life worthwhile.”
Any industry-specific measure of price-weighted “output” or firm-specific measure of price-weighted “output” is similarly flawed.
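The Kapoor and Debroy point can be put in miniature (all prices, quantities, and damage figures below are hypothetical): two economies with identical price-weighted output can differ in well-being once unpriced effects are counted.

```python
def gdp(prices, quantities):
    # Price-weighted sum of final goods -- the standard aggregation.
    return sum(p * q for p, q in zip(prices, quantities))

prices = [30_000, 2]        # a car, a sugar-laced beverage
economy_a = [100, 50_000]   # quantities produced in economy A
economy_b = [100, 50_000]   # identical quantities in economy B

gdp_a = gdp(prices, economy_a)
gdp_b = gdp(prices, economy_b)

# Identical GDP -- but suppose economy A's production emits twice the
# pollution, an unpriced cost that GDP simply never sees.
damage_a, damage_b = 1_000_000, 500_000
wellbeing_a = gdp_a - damage_a
wellbeing_b = gdp_b - damage_b
```

Subtracting a single externality is of course far too crude a welfare measure; the sketch only shows that the GDP aggregation is blind to it by construction.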
For these reasons, few, if any, welfare economists would today use GNP alone to assess a nation’s welfare, preferring instead to use a collection of “social indicators.”
Output should not be the sole criterion for antitrust policy. We can do a better job of using competition policy to increase human welfare without this dogma. In this article, we showed that we cannot be certain that output increases welfare even in a purely hypothetical world where welfare depends solely on the output of commodities. In the real world, where welfare depends on a multitude of factors besides output—many of which can be addressed by competition policy—the case against a unilateral output goal is much stronger.
The Original Sling posting inadvertently left off the two proposed graphs that we drew as we sought to remedy the Anonymous Board Member’s confusion about “equilibrium.” We now add the graphs we proposed. The explanation of the graphs was similar, and the discussion of GNP was identical to the original version.
The Proof if there is a Social Welfare Function (Revised Graph)
The Individual Utility Proof (Revised Graph)
Over the past two years, heterodox economic theory has burst into the public eye more than ever, as conventional macroeconomic models have failed to explain the economy we’ve been living in since 2020. In particular, theories about consolidation and corporate power as factors in macroeconomic trends, from neo-Brandeisian antitrust policy to theories of profit seeking as a driver of inflation, have exploded onto the scene. While “heterodox economics” isn’t really a singular thing (it’s more a banner term for anything that breaks from the well-established schools of thought), the ideas it represents challenge decades of consensus within macro- and financial economics. This development, of course, has left the proponents of the traditional models rather perturbed.
One of the heterodox ideas that has seen the most media attention is the idea of sellers’ inflation: the theory that inflation can, at least partially, be a result of companies using economic shocks as smokescreens to exercise their market power and raise the prices they charge. The name most associated with this theory is Isabella Weber, a professor of economics at the University of Massachusetts, but there are certainly other economists who support the theory (and many more who support elements of it but are holding out for more empirical evidence before jumping into the rather fraught public debate).
Conventional economists have been bristling at sellers’ inflation being presented as an alternative to the more staid explanation of a wage-price spiral (we’ll come back to that), but in recent months there have been extremely aggressive (and often condescending, self-important, and factually incorrect) attacks on the idea and its proponents. Despite this, sellers’ inflation really is not that far from a lot of long-standing economic theory, and the idea is grounded in key assumptions about firm behavior that are deeply held across most economic models.
My goal here is threefold: first, to explain what the sellers’ inflation and conventional models actually are; second, to break down the most common lines of attack against sellers’ inflation; third, to demonstrate that, whatever its shortcomings, sellers’ inflation is better supported than the traditional wage-price spiral. Many even seem to recognize this, shifting to an explanation of corporations just reacting to increased demand. As we’ll see, that explanation is even weaker.
As briefly mentioned above, sellers’ inflation is the idea that, in significantly concentrated sectors of the economy, coordinated price hikes can be a significant driver of inflation. While the concept’s opponents generally prefer to call it “greedflation,” largely as a way of making it seem less intellectually serious, the experts actually advancing the theory never use that term for a very simple reason: it doesn’t really have anything to do with variance in how greedy corporations are. It does rely on corporations being “greedy,” but so do all mainstream economic theories of corporate behavior. Economic models around firm behavior practically always assume companies to be profit maximizing, conduct which can easily be described as greedy. As we’ll see, this is just one of many points in which sellers’ inflation is actually very much aligned with prevailing economic theory.
Under the sellers’ inflation model, inflation begins with a series of shocks to the macroeconomy: a global pandemic causes an economic crash; governments respond with massive fiscal stimulus; and the economy experiences huge supply chain disruptions, further worsened by the Russian invasion of Ukraine. All of these events increased inflation, either by decreasing supply or by increasing demand. The stimulus checks increased demand by boosting consumers’ spending power, which is exactly what they were supposed to do. Both strained supply chains and the sanctions cutting Russia off from global trade restricted supply. Contrary to what some opponents of sellers’ inflation claim, the theory does not deny that the stimulus was inflationary (though some individual proponents might). Rather, sellers’ inflation is an explanation for the sustained inflation we saw over the past two years: those shocks led to a mismatch between demand and supply for consumer goods, but something kept inflation high even after the effects of those shocks should have waned.
The culprit is corporate power. With such a whirlwind of economic shocks, consumers are less able to tell when prices are rising to offset increases in the cost of production versus when prices are being raised purely to boost profit. This, too, is not at odds with conventional macro wisdom. Every basic model of supply and demand tells us that when supply dwindles and demand soars, the price level will rise. Sellers’ inflation is an explanation of how and why prices rise and why prices will increase more in an economy with fewer firms and less competition.
Sellers’ inflation is really just a specific application of the theory of economic rent, which has been largely accepted since it was developed by David Ricardo, a contemporary of the father of modern economics, Adam Smith. (Indeed, this point, which I raised nearly a year and a half ago in Common Dreams, was recently explored in a new paper from scholars at the University of London.) As anyone who has ever watched a crime show could tell you, when you want to solve a whodunnit, you need to look at motive, means, and opportunity. The greed (which, again, is at the same level it always is) is the motive. Corporations will always seek to charge as high a price as they can without being dangerously undercut by competitors. Sellers’ inflation doesn’t posit a massive increase in corporate greed, but a unique economic environment that allows firms to act upon the greed they have always possessed.
Concentration is the means: when the market is in the hands of only one or a few firms, it becomes easier to raise prices, for a couple of reasons. First, large firms have price-setting power, meaning they control enough of the sector that they are able to at least partially set the going rate for what they sell. Second, when there are only a few firms in a sector, wink-wink-nudge-nudge pricing coordination is much easier: just drop some vague but loaded phrases into press releases or earnings calls that you know your competition will read, and see if they take the same tack. For simplicity, imagine an industry dominated by two firms, A and B. At any given point, both are choosing between holding prices steady and raising them (assume lowering prices is off the table because it’s unprofitable; let’s keep it simple). This sets up a classic game-theoretical payoff matrix:
| | A Maintains Price | A Raises Price |
|---|---|---|
| B Maintains Price | →, → | ↓, ↑ |
| B Raises Price | ↑, ↓ | ↑, ↑ |
In the chart above, the first arrow in each cell represents the change in A’s profit and the second represents the change in B’s. If both hold the price steady, nothing changes; we’re at an equilibrium. If one and only one firm raises prices, the price-hiker will lose money as price-conscious consumers switch to its competitor, who will now see higher profits. This makes the companies averse to raising prices on their own. But if both raise their prices, both will be able to increase their profits. That’s why collusion happens. But wait, isn’t that illegal? Yes, yes it is. But it is nigh on impossible to police implicit collusion, especially when there is a seemingly plausible alternative explanation for price hikes.
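The logic of the table can be checked directly with illustrative payoffs (the numbers are ours; only their signs match the arrows above): unilateral hikes lose money, joint hikes gain, so "everyone raises" is a second, more profitable equilibrium that firms can only reach by coordinating.

```python
# payoffs[(a_action, b_action)] = (change in A's profit, change in B's profit)
# Numbers are illustrative; only their signs match the table above.
payoffs = {
    ("maintain", "maintain"): (0, 0),
    ("maintain", "raise"):    (1, -1),   # B hikes alone: B loses, A gains
    ("raise", "maintain"):    (-1, 1),   # A hikes alone: A loses, B gains
    ("raise", "raise"):       (2, 2),    # joint hike: both profit
}

def is_nash(a, b):
    # Neither firm can gain by unilaterally switching its own action.
    other = lambda act: "raise" if act == "maintain" else "maintain"
    return (payoffs[(a, b)][0] >= payoffs[(other(a), b)][0] and
            payoffs[(a, b)][1] >= payoffs[(a, other(b))][1])

equilibria = [cell for cell in payoffs if is_nash(*cell)]
# Both ("maintain", "maintain") and ("raise", "raise") are equilibria,
# but the joint hike is the more profitable one.
```

With these payoffs the game has two stable outcomes, which is precisely why the signaling described above matters: it is how firms move from the safe equilibrium to the profitable one without an explicit (and illegal) agreement.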
As James Galbraith wrote, in stable periods, firms prefer the safer equilibrium of holding prices relatively stable. As he explains:
In normal times, margins generally remain stable, because businesses value good customer relations and a predictable ratio of price to cost. But in disturbed and disrupted moments, increased margins are a hedge against cost uncertainties, and there develops a general climate of “get what you can, while you can.” The result is a dynamic of rising prices, rising costs, rising prices again — with wages always lagging behind.
And that gets us to opportunity, which is what the macroeconomic shocks provide. Firms probably did experience real increases in their production costs, which gives them good reason to raise their prices…to a point. But what has been documented by Groundwork Collaborative and, separately, by Isabella Weber and Evan Wasner is corporate executives openly discussing increasing returns using “pricing power,” which is code for charging more than is needed to offset their costs. This is them signaling that they see an opportunity to get to that second equilibrium in the chart above, where everyone makes more money. And since the same information and rationale are likely to be present at all of the firms in an industry, they all have the incentive (or greed, if you prefer) to do the same. This is easiest to conceptualize in a sector with two firms, but it holds for any sector that remains concentrated. As the number of firms grows, though, it becomes more and more probable that at least one won’t go along with the hikes, which is why concentration facilitates coordination.
And that’s it. In an economy with significant levels of concentration (more than 75 percent of industries in the American economy have become more concentrated since the 1990s) and the smokescreen of existing inflation, corporate pricing strategy can sustain rising prices due to the uncertainty. Now, if you ask twenty different supporters of sellers’ inflation, you’ll likely get twenty slightly different versions of the story. However, the main beats are mostly agreed upon: 1) firms are profit maximizing; 2) they always want to raise prices but usually won’t, out of fear of either being undercut by the competition or being busted for illegal collusion; and 3) other inflationary pressures provide some level of plausible deniability, which lowers the potential downside of price increases.
The evidence available to support theories of sellers’ inflation is one of the main points of contention between its proponents and detractors. Despite that, there is strong theoretical and empirical evidence that backs the theory up.
First is a basic issue of accounting that nobody in the traditional macro camp seems to have a good answer for. Profits are always equal to the difference between revenues (all the money a company brings in) and costs (all the money a company sends out).
Profits = Revenue − Costs
This is inviolable; it is simply the definition of profit. As I’ve written before, this means that the only two possible ways for a company to increase profits are generating more revenue or cutting costs (or a combination of the two, but let’s keep it simple). Cost-cutting can’t be the driver in our case because we know costs were increasing, not decreasing. Inflationary pressures should still have increased production costs like labor and any kind of imported input. Companies have also been adamant that they are facing rising costs; that’s their whole justification for price hikes. And mainstream economists would agree: they blame lingering inflation on a wage-price spiral, which says that workers demanding higher wages have driven cost increases that force companies to raise prices, resulting in higher inflation. Since both sides agree that input costs are rising, the only possible explanation for increased profits is an increase in revenue. Revenue also has its own handy little formula:
Revenue = Price × Units Sold
While the units sold may have increased, price was the bigger factor. We know this for at least two key reasons: because of evidence showing that output (the units sold) actually decreased and because of the evidence from earnings calls compiled by Groundwork. Executives said their strategy was to raise prices, not to sell more products. And there are two very good reasons to believe the execs: (1) they know their firms better than anyone, and (2) they are legally required to tell the truth on those calls. (That second reason is also evidence of sellers’ inflation on its own; if the theory’s opponents don’t buy the explanation given by the executives to investors, they must think executives are committing securities fraud.)
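The accounting argument can be made concrete with a stylized sketch. All of the numbers below are invented for illustration; the point is only that when unit costs rise and output falls, growing profits can come solely from prices rising faster than costs.

```python
# Stylized, invented numbers: with unit costs up and output down,
# higher profits can only come from higher prices.

def profits(price, units, unit_cost):
    """Profits = Revenue - Costs, where Revenue = price * units."""
    return price * units - unit_cost * units

# Year 1: baseline
p1 = profits(price=10.00, units=100, unit_cost=8.00)  # 200.0

# Year 2: unit costs rise 10%, units sold fall, price rises 20%
p2 = profits(price=12.00, units=95, unit_cost=8.80)

# Profits grew despite higher costs and lower output, because the
# price increase outpaced the cost increase.
assert p2 > p1
```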
In rebuttal to the accounting issue, Brian Albrecht, chief economist at the International Center for Law and Economics, has argued that using accounting identities is wrongheaded:
Just as we never reason from a price change, we need to never reason from an accounting identity. My income equals my savings plus my consumption: I = S + C. But we would never say that if I spend more money, that will cause my income to rise.
This, on its face, seems like a reasonable argument, except all it really shows is that Albrecht doesn’t understand basic math. Tracking just one part of the equation won’t automatically tell us what the others do…duh. But we can track what a variable is doing empirically and use that relationship to make sense of it. We would never say that someone spending more money on consumption causes their income to rise. But we certainly could say that if we observe an increase in personal consumption, then we can reason that either their income increased or their savings decreased. The mathematical definition holds; you just have to actually consider all of the variables. In fact, Albrecht agrees, but warns “Yes, the accounting identity must hold, and we need to keep track of that, but it tells us nothing about causation.” No, it tells us correlation. Which, by the way, is what econometrics and quantitative analyses tell us about as well.
The way you get to causation in economics is by tying theory and context to empirical correlations to explain those relationships. Albrecht’s case is just a very reductive view of the actual logic at play. He continues:
After all, any revenue PQ = Costs + Profits. So P = Costs/Q + Profits/Q. If inflation means that P goes up, it must be “caused” by costs or profits.
No, again. Stop it. This is like saying consumption causes income.
Once again, Albrecht is wrong here. This is like saying higher consumption will correspond to either higher income or lower savings. Additionally, there’s a key difference between the accounting identities for income and for profits: income is broken down into consumption and savings after you receive it, whereas costs and revenues must exist before profits. This makes causal inference in the latter much more reasonable; income is determined exogenously to that formula, but profits are endogenous to their accounting identity.
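To see how the identity is used correlationally rather than causally, consider a minimal sketch with hypothetical figures: rearranged as P = Costs/Q + Profits/Q, the identity mechanically attributes an observed price change between its two components, with no claim about what caused what.

```python
# The identity P = Costs/Q + Profits/Q assigns no causation, but it
# does let us attribute an observed price change to its components.
# All figures are hypothetical.

def attribute_price_change(price_old, price_new, unit_cost_old, unit_cost_new):
    """Split a price change into its unit-cost and unit-profit shares."""
    dp = price_new - price_old
    d_cost = unit_cost_new - unit_cost_old   # change in Costs/Q
    d_profit = dp - d_cost                   # change in Profits/Q (residual)
    return d_cost / dp, d_profit / dp

cost_share, profit_share = attribute_price_change(10.00, 12.00, 8.00, 8.80)
# Here unit costs account for 40% of the price increase and unit
# profits for 60%; that is a description of the data, not a causal claim.
```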
In addition to these observations, though, there is also a range of economic research that supports the idea of sellers’ inflation. Some of the best empirical evidence comes from this report from the Federal Reserve Bank of Boston, this one from the Federal Reserve Bank of San Francisco, and this one from the International Monetary Fund.
Another key piece of evidence is a Bloomberg investigation that found that the biggest price increases came from the largest firms. If market power were not a factor, then prices should have been rising roughly proportionally across firms, regardless of their size. If anything, large firms’ economies of scale should have cut down on the need to hike prices. Especially because basic economic theory tells us that when demand increases, companies want to expand supply, which should have resulted in more products (especially from larger firms with more resources) and a corresponding drop in price increases. And yet, what we actually saw was a drop in production from major companies like Pepsi, which opted instead to increase profits by maintaining a shortfall in supply.
That said, there’s plenty more, including this from the Kansas City Fed, this from Jacob Linger et al., this from French economists Malte Thie and Axelle Arquié, this from the European Central Bank, this one from the Roosevelt Institute, and more. The Bank of Canada has also endorsed the view. It seems unlikely that the Federal Reserve, the European Central Bank, and the Bank of Canada have all become bastions of activist economists unmoored from evidence. Perhaps it’s time those denying sellers’ inflation are labeled the ideologues.
Before we get into the substance of critiques against sellers’ inflation as a theory, there are a few miscellaneous issues with the framing its opponents often use. There is a tendency for arguments against sellers’ inflation to use loaded words or skewed phrasing to implicitly undermine the legitimacy of people who are spearheading the push for greater scrutiny of corporations as a part of managing inflation.
For instance, Eric Levitz says the debate sees “many mainstream economists against heterodox progressives.” This phrasing suggests that the debate is between economists on the one hand and proponents of sellers’ inflation on the other. But that’s not true! There are both economists and non-economists on both sides of the issue. Weber is an economist, as are the researchers at the Boston and San Francisco Feds. And others, including James Galbraith, Paul Donovan, Hal Singer, and Groundwork’s Chris Becker and Rakeen Mabud are on board. Notably, Lael Brainard, the director of President Biden’s National Economic Council (and former Federal Reserve Vice Chair), recently endorsed the view.
Or take how Kevin Bryan, a professor of management at the University of Toronto, described Isabella Weber as a “young researcher” who “has literally 0 pubs on inflation.” Weber is old enough to have two PhDs and tenure at UMass and–will you look at that–has written about inflation before! Presenting her as young sets the stage for making her seem inexperienced, and claiming she has no publications doubles down on that. But his claims are false. Weber wrote a paper with Evan Wasner specifically about sellers’ inflation. But even if we take Bryan’s point as true and ignore the very real work Weber has done on inflation and pricing, Weber still has significant experience with political economy, which helps to explain how institutional power is able to influence markets—exactly the type of thinking sellers’ inflation is based upon.
(And this is nothing compared to the abuse that Weber endured after an op-ed in The Guardian provoked a frenzy of insulting, condescending attacks from many professional economists. For more on that, see Zach Carter’s New Yorker profile of Weber and/or this Twitter thread that documents Noah Smith’s outburst at Weber.)
But even the semantics that don’t get into ad hominem territory are confusing. Here is a list of the topline concerns that Kevin Bryan raised:
Let’s just run through that list of concerns real quick:
All of this is to set up the next point in that Twitter thread, which is that “being an Iconoclast is not the same thing as being rigorous, or being right.” True, but dodging the debate by attacking the credibility of an idea’s advocates and taking issue with the method of dissemination are also not the same as being rigorous. Or as being right.
These are just a couple of examples, but opponents of this theory really lean into making it sound like its champions are inexperienced and don’t know what they’re talking about. Aside from being in bad faith, this also indicates a lack of confidence in comparing the contemporary story to that of sellers’ inflation.
With the semantics out of the way, it’s time to get into the meat of the case(s) against sellers’ inflation. There is no singular, unified case here, more of a constellation of related ideas.
The first line of defense against theories of sellers’ inflation is asserting that traditional macroeconomics is good and has solved our inflation problem. For example, Chris Conlon of NYU has credited rate hikes with inflation cooling. Conlon says “I for one am glad Powell and Biden admin followed boring US textbook ideas.” But there’s a problem with that: the contemporary economic story does not actually explain how rate hikes can cool inflation without a corresponding rise in unemployment.
The traditional story starts in the same place as the sellers’ inflation story: macroeconomic shocks create inflation. (Although the traditionalists prefer to emphasize fiscal stimulus as the primary shock, rather than supply chains. The evidence largely indicates that stimulus did have some inflationary effect, but not much. The global nature of inflation also undercuts the idea that American domestic fiscal policy could be the main explanation.) The shock(s) create a supply and demand mismatch, with too much money chasing too few available goods. After that, however, the traditional mechanism for explaining inflation remaining high is supposed to be a wage-price spiral.
The story goes something like this: the stimulus boosted consumer demand, which overheated the economy, and created more jobs than could be filled, meaning job seekers negotiated higher pay when they took positions. They then spent that extra money which increased demand further, leading to even higher prices as supply couldn’t keep up with demand. Workers saw that their cost of living went up, so they took the opportunity to demand better pay. Companies were forced to give in because they knew in a hot labor market, their workers could leave and earn more elsewhere if employers didn’t meet workers’ demands. Once their wages went up, those workers had more spending power, which they used to buy more things, further increasing demand. That elevated prices more, as the supply-demand mismatch increased. Now workers see their cost of living rising again, so they ask for another raise. If this pattern has held for a few rounds of pay negotiations, maybe workers ask for more than they otherwise would, trying to get out ahead of their spending power shrinking again. Rinse and repeat.
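The feedback loop in that story can be sketched as a toy iteration. The passthrough rates below are invented for illustration, not estimated from data; the point is that unless each side passes essentially all of the other’s increase along, the spiral decays instead of sustaining inflation.

```python
# Toy wage-price spiral. Passthrough rates are invented for
# illustration, not estimated from any data.

wage_passthrough = 0.9   # share of price growth workers win back as wages
price_passthrough = 0.9  # share of wage growth firms pass back into prices

price_growth = 0.05      # initial shock: a one-off 5% price jump
cumulative_inflation = 0.0
for _ in range(10):
    cumulative_inflation += price_growth
    wage_growth = wage_passthrough * price_growth
    price_growth = price_passthrough * wage_growth

# Each round shrinks the price increase by a factor of 0.81, so the
# spiral decays geometrically; sustained inflation would require
# passthrough rates at or above one.
```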
But we know that this story doesn’t describe the inflation that we saw over the last couple of years. Wage growth lagged behind inflation, which indicates that something else had to be driving price increases. Plus the Phillips curve, which is meant to illustrate this relationship between higher employment and higher inflation, has been broken in the US for years. It simply does not show a meaningful positive relationship any more.
It’s important that we understand this story as a whole. Levitz, in his piece, tries to separate the initial supply-demand mismatch from the wage-price spiral as a way of making the conventional model stack up better against sellers’ inflation. But that doesn’t actually hold, because if you omit the wage-price spiral (which Levitz agrees seems dubious), the mainstream macro story has no mechanism for inflation staying high. If it were just a one-time stimulus, that would explain a one-time inflation spike, but once that money is all sent out (say, by the end of 2021), there’s no source for further exacerbating the supply-demand mismatch (in, say, late 2022 or early 2023). (Remember, inflation is the rate of change of prices, so if prices spike and then stay the same afterwards, that plateau will reflect a higher price level but not sustained high inflation.)
Similarly, focusing on only the supply-side shocks provides no reason for why inflation remained elevated long after supply chain bottlenecks had cleared and shipping prices had fallen.
The incentive shift that occurs in concentrated markets is key to understanding this. In a competitive market, firms’ response to a surge in demand is to produce more. But when the market is concentrated and some level of implicit coordination is possible, increased production is actually against a firm’s best interest; it would just put them back at that first equilibrium from earlier. They want to enjoy the high prices and hang out in the second equilibrium as long as they can.
Sellers’ inflation, at least, has an internal mechanism that can explain how we got from one-off shocks to the economy to sustained inflation. Yet its opponents wrongly describe what that mechanism is. Remember the story from earlier: the motive of profit maximization, the means of market power in concentrated industries, and the opportunity of existing inflation. The most basic objection to this mechanism is to mischaracterize it as blaming sustained upward pressure on prices on an increase in the level of greed among corporations. That’s what economist Noah Smith did in a number of blogs that have aged quite poorly. But no one is seriously arguing companies are greedier, only that there is an innate level of greed, which conventional models also assume.
The strawmanning continues when we get to the means, which is what this Business Insider piece by Trevon Logan of Ohio State does by pointing out how Kingsford charcoal tried and failed to rent-seek by raising prices, which just caused them to lose market share to retailers’ generic brands. Exactly! The competition in the charcoal market demonstrates why consolidation is a key ingredient in sellers’ inflation. If Kingsford had a product without so many generic substitutes, then consumers would not have had the chance to switch products. And that’s why a lot of the biggest price hikes occurred with goods like gas, meat, and eggs, all of which are controlled by cartel-esque oligopolies.
The opportunity component actually seems to be a point that there’s broad agreement on. For example, Conlon says that the “idea that firms might raise prices by more than their costs is neither surprising nor uncommon.” He goes on to suggest, however, that this is likely because firms expect costs to continue rising. There’s certainly an element of truth to that, but also consider the basic motivation of corporations: maximizing profits. As a result, if companies expect their costs to rise by, say, 5 percent over the next year and they’re going to adjust prices anyway, why not raise prices by 7 percent, more than enough to offset expected cost increases?
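A quick back-of-the-envelope calculation (with invented figures) shows why: raising prices two points more than expected cost growth widens the per-unit margin even after the expected costs fully materialize.

```python
# Invented figures: costs expected to rise 5%, prices raised 7%.

old_price, old_unit_cost = 10.00, 8.00
new_price = old_price * 1.07          # 10.70
new_unit_cost = old_unit_cost * 1.05  # 8.40

old_margin = old_price - old_unit_cost  # 2.00 per unit
new_margin = new_price - new_unit_cost  # 2.30 per unit

# The per-unit margin grows by 15%, three times the expected cost
# inflation, even though costs rose exactly as anticipated.
assert new_margin > old_margin
```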
The theoretical case against sellers’ inflation is, as Eric Levitz noted, “deeply confused”; he was just wrong about which side was getting stumped.
The other side of the opposition to sellers’ inflation focuses on the empirics. To be fair, there’s certainly more work that needs to be done. But that’s about as far as the critique goes. The response is just “the data isn’t there.” I’ll refer you to Groundwork’s excellent work on executives saying that they are raising prices beyond costs, Weber’s paper, the Boston and San Francisco Fed papers, Bloomberg’s findings about larger firms charging higher prices, Linger et al.’s case study of concentration and price in rent increases, and the IMF working paper.
Setting aside the very real empirical evidence in support of sellers’ inflation, the argument about a lack of empirics still gives no reason to default to the traditional model of inflation. Even if we accept a lack of data for sellers’ inflation, we have quite a lot of data that directly contradicts the mainstream story. Surely, something unproven is still preferable to something disproven.
Some economists like Olivier Blanchard have raised questions about methodology and the need for more work. Great! That’s what good discourse is all about; being skeptical of ideas is fine, as long as you don’t throw them out on gut instinct. Unfortunately, critics often simply reject the theory, rather than express skepticism. When they do, however, they often fall into the same methodological gaps of which they accuse “greedflation” proponents. For example, Chris Conlon egregiously conflates correlation and causation in crediting the Fed’s monetary policy. Or Brian Albrecht takes issue with inductive logic while siding with a traditional story that makes up ever more convoluted, illusory concepts.
The traditional model of inflation is broken. The Phillips curve is no longer a useful tool for understanding inflation, a wage-price spiral flies in the face of reality, and there’s no viable alternative mechanism for sustained inflation within the demand-side model. Enter sellers’ inflation.
From the same starting point, and drawing on several cornerstone pieces of economic theory, sellers’ inflation is able to provide a consistent vehicle for one-off shocks to create prolonged upward pressure on price levels as firms exercise their market power. The bedrock ideas of the theory are consistent with seminal economic thought from the likes of David Ricardo and even Adam Smith himself and have the support of a number of subject matter experts. Is it a perfect theory? No, but to paraphrase President Biden, don’t compare it to the ideal, compare it to the alternative. More empirics would be preferable, but the case for sellers’ inflation remains much stronger than the case for a fiscal stimulus igniting a wage-price spiral, which is entirely anathema to most of the evidence we do have.
One way or another, inflation is trending down and, by some measures, is closing in on the target rate again. Many have rushed to credit the Federal Reserve for following the textbook course, but they don’t have any internal story about how the Fed could have done that without increasing unemployment. As Nobel laureate Paul Krugman (who supported rate hikes and once bashed the theory of sellers’ inflation) asked, “Where’s the rise in economic slack?” The conventional story is missing its second chapter and yet its advocates are eager to point to an ending they can’t explain as all the justification they need to avoid reconsidering their priors. One possibility Krugman notes, which Matthew Klein explicates here, is that inflation really was transitory the whole time. The sharp upward pressures were, indeed, caused by one-off shocks from the pandemic, supply chains, and Russian aggression, but the effects had unusually long tails. This theory aligns very well with sellers’ inflation; corporate price hikes could simply be the explanation for such long lasting effects.
Additionally, as Hal Singer pointed out, the recent drop in inflation corresponds to a downturn in corporate profits. Some, including Noah Smith (in that tweet’s comments), disagree and argue that both lower profits and less inflation are caused by new slack in demand. But that doesn’t really match what we’re seeing across macroeconomic data. True, employment growth has slowed, as has the growth of personal consumption, but that still doesn’t match up with the type of deflationary pressure that we were supposed to need; Larry Summers was citing figures as high as 6 percent unemployment. Plus, the metrics that do show demand softening largely only show that employment and consumption are steadying, not decreasing. On top of that, the contraction in output that The Wall Street Journal identified makes the case for simple shifts in demand driving price levels dubious. Additionally, if a wage-price spiral were at fault, leveling off employment growth would not be enough, the labor market would still be too tight (aka inflationary), hence why we’d need to increase unemployment.
Good economic theories always need more work to apply them to new situations and produce quality empirics. But pretending that sellers’ inflation is a wacky idea while the conventional macro story maps perfectly onto the economy of the past three years is thumbing your nose at the most complete story available, significant empirical evidence, and centuries of economic theory.
Dylan Gyauch-Lewis is Senior Researcher at the Revolving Door Project.
The Federal Trade Commission’s scrutiny of Microsoft’s acquisition of game producer Activision-Blizzard did not end as planned. Judge Jacqueline Scott Corley, a Biden appointee, denied the FTC’s motion for preliminary injunction, ruling that the merger was in the public interest. At the time of this writing, the FTC has pursued an appeal of that decision to the Ninth Circuit, identifying numerous reversible legal errors that the Ninth Circuit will assess de novo.
But even critics of Judge Corley’s opinion might find agreement on one aspect: the relative lack of enforcement against anticompetitive vertical mergers in the past 40+ years. As Corley’s opinion correctly observes, United States v. AT&T Inc., 916 F.3d 1029 (D.C. Cir. 2019), is the only court of appeals decision addressing a vertical merger in decades. Absent evolution of the law to account for, among other recent phenomena, the unique nature of technology-enabled content platforms, the starting point for Corley’s opinion is misplaced faith in case law that casts vertical mergers as inherently pro-competitive.
As with horizontal mergers, the FTC and Department of Justice have historically promulgated vertical merger guidelines that outline analytical techniques and enforcement policies. In 2021, the Federal Trade Commission withdrew the 2020 Vertical Merger Guidelines, with the stated intent of avoiding industry and judicial reliance on “unsound economic theories.” In so doing, the FTC committed to working with the DOJ to provide guidance for vertical mergers that better reflects market realities, particularly as to various features of modern firms, including in digital markets.
The FTC’s challenge to Microsoft’s proposed $69 billion acquisition of Activision, the largest proposed acquisition in the Big Tech era, concerns a vertical merger in both existing and emerging digital markets. It involves differentiated inputs—namely, unique content for digital platforms that is inherently not replaceable. The FTC’s theories of harm, Judge Corley’s decision, and the now-pending appeal to the Ninth Circuit provide key insights into how the FTC and DOJ might update the Vertical Merger Guidelines to stem erosion of legal theories that are otherwise ripe for application to contemporary and emerging markets.
Beware of must-have inputs
In describing a vertical relationship, an “input” refers to goods that are created “upstream” of a distributor, retailer, or manufacturer of finished goods. Take for instance the production and sale of tennis shoes. In the vertical relationship between the shoe manufacturer and the shoe retailer, the input is the shoe itself. If the shoe manufacturer and shoe retailer merge, that’s called a vertical merger—and the input in this example, tennis shoes, is characteristic of a replaceable good that vertical merger scrutiny has conventionally addressed. If such a merger were to occur and the newly-merged firm sought to foreclose rival shoe retailers from selling its shoes, rival shoe retailers would likely seek an alternative source for tennis shoes, assuming the availability of such an alternative.
When it comes to assessing vertical mergers in digital content markets, not all inputs are created equal. To the contrary, online platforms, audio and video streaming platforms, and—in the case of Microsoft’s proposed acquisition of Activision—gaming platforms all rely on unique intellectual property that cannot simply be replicated if a platform’s access to that content is restricted. The ability to foreclose access to differentiated content that flows from the merger of a content creator and distributor creates a heightened concern of anticompetitive effects, because rivals cannot readily switch to alternatives to the foreclosed product. This is particularly true when the foreclosed content is extremely popular or “must-have,” and where the goal of the merged firm is to steer consumers toward the platform where it is exclusively available. (See also Steven Salop, “Invigorating Vertical Merger Enforcement,” 127 Yale L.J. 1962 (2018).)
The 2020 Vertical Merger Guidelines fall short in their analysis of mergers involving highly differentiated products. The guidelines emphasize that vertical mergers are pro-competitive when they eliminate “double marginalization,” or mark-ups that independent firms claim at different levels of the distribution chain. For example, when game consoles purchase content from game developers, they may decide to add a mark-up on that content before offering it for consumer consumption. (In the real world of predatory pricing and cross-subsidization, the incentive to add such a mark-up is a more complex business calculation.) Theoretically, the elimination of those markups creates an incentive to lower prices to the end consumer.
But this narrow focus on elimination of double marginalization—and theoretical downward price pressure for consumers—ignores how the reduction in competition among downstream retailers for access to those inputs can also degrade the quality of the input. Let’s take Microsoft-Activision as an example. As an independent firm, Activision creates games and downstream consoles engage in some form of competition to carry those games. When consoles compete on terms to carry Activision games, the result to Activision includes greater investment in game development and higher quality games. When Microsoft acquires Activision, that downstream competition for exclusive or first-run access to Activision’s games is diminished. Gone is the pro-competitive pressure created by rival consoles bidding for exclusivity, as is the incentive for Activision to innovate and demand greater third-party investment in higher quality games.
Emphasizing the pro-competitive effects of eliminating double marginalization—even if that means lower prices to consumers—only provides half of the picture, because consumers will likely be paying for lower quality games. Previous iterations of the Vertical Merger Guidelines emphasize the consumer benefits of eliminating double marginalization, but they stop short of assessing the countervailing harms of mergers involving differentiated inputs. They should be updated accordingly.
Partial foreclosure will suffice
During the evidentiary hearings in the Northern District of California, the FTC repeatedly pushed back against the artificially high burden of having to prove that Microsoft had an incentive to fully foreclose access to Activision games. In the midst of an exchange during the FTC’s closing arguments, the FTC’s counsel put it directly: “I don’t want to just give into the full foreclosure theory. That’s another artificially high burden that the Defendants have tried to put on the government.” And yet, in her decision, Judge Corley conflates the analysis for both full and partial foreclosure, writing, “If the FTC has not shown a financial incentive to engage in full foreclosure, then it has not shown a financial incentive to engage in partial foreclosure.”
Although agencies have acknowledged that the incentive to partially foreclose may exist even in the absence of total foreclosure (see, for instance, the FCC’s 2011 Order regarding the Comcast-NBCU vertical transaction), the Vertical Merger Guidelines do not make any such distinction. Again, that incomplete analysis hinges in part on the failure to distinguish between types of inputs. Take for instance a producer of oranges merging with a firm that makes orange juice. Theoretically, the merged firm might fully foreclose access to oranges to rival orange juice makers, who may then go in search of alternative sources of oranges. Or the merged firm might supply lower quality produce to rival firms, which may again send them in search of an alternative source.
But a merged firm’s ability and incentive to foreclose looks different when foreclosure takes the subtler form of investing less in the functionality of game content with a gaming console, subtly degrading game features, or adding unique features to the merged firm’s platforms in ways that will eventually drive more astute gamers to the merged firm (even though the game in question is technically still available on rival consoles). Such eventualities are perhaps easier to imagine in the context of other content platforms—for example, if news content were less readable on one social media platform than another. When a merged firm has unilateral control over those subtle design and development decisions, the ability and incentive to engage in more subtle forms of anticompetitive partial foreclosure is more likely and predictable.
In finding that Microsoft would not have a financial incentive to fully foreclose access to Activision games, Judge Corley’s analysis hinges on a near-term assessment of Microsoft’s financial incentive to elicit game sales by keeping games on rival consoles. (Never mind that Microsoft is a $2.5 trillion corporation that can afford near-term losses in service of its longer-view monopoly ambitions.) Regardless, a theory of partial foreclosure does not mean that Microsoft must forgo independent sales on rival consoles to achieve its ambitions. To the contrary, partial foreclosure would still allow users to purchase and play games on rival consoles. But it also allows for Microsoft’s incentive to gradually encourage consumers to use its own console or game subscription service for better game play and unique features.
Finally, Judge Corley’s analysis of Microsoft’s incentive to fully foreclose is irresponsibly deferential to statements made by Activision Blizzard CEO Bobby Kotick that the merging entities would suffer “irreparable reputational harm” if games were not made available on rival consoles. Again, by conflating the incentives for full and partial foreclosure, the court ignores Microsoft’s ability to mitigate that reputational harm—while continuing to drive consumers to its own platforms—if foreclosure is only partial.
Rejecting private behavioral remedies
In a particularly convoluted passage in the district court’s order, the Court appears to read an entirely new requirement into the FTC’s initial burden of demonstrating a likelihood of success on the merits—namely, that the FTC must assess the adequacy of Microsoft’s proposed side agreements with rival consoles and third-party platforms to not foreclose access to Call of Duty. Never mind that these side agreements lack any verifiable uniformity, are timebound, and cannot possibly account for incentives for partial foreclosure. Yet, the Court takes at face value the adequacy of those agreements, identifying them as the principal evidence of Microsoft’s lack of incentive to foreclose access to just one of Activision’s several AAA games.
In its appeal to the Ninth Circuit, the FTC seizes on this potential legal error as a basis for reversal. The FTC writes, “in crediting proposed efficiencies absent any analysis of their actual market impact, the district court failed to heed [the Ninth Circuit’s] observation ‘[t]he Supreme Court has never expressly approved an efficiencies defense to a Section 7 claim.’” The FTC argues that Microsoft’s proposed remedies should only have been considered after a finding of liability at the subsequent remedy stage of a merits proceeding, citing the Supreme Court’s decision in United States v. Greater Buffalo Press, Inc., 402 U.S. 549 (1971). Indeed, federal statute identifies the Commission as the expert body equipped to craft appropriate remedies in the event of a violation of the antitrust laws.
In its statement withdrawing the 2020 Vertical Merger Guidelines, the FTC announced it would work with the Department of Justice on updating the guidelines to address ineffective remedies. Presumably, the district court’s heavy reliance on Microsoft’s proposed behavioral remedies is catalyst enough to clarify that they should not qualify as cognizable efficiencies, at least at the initial stages of a case brought by the FTC or DOJ.
If this decision has taught us anything, it is that the agencies can’t come out with the new Merger Guidelines fast enough. In particular, those guidelines must address the competitive harms that flow from the vertical integration of differentiated content and digital media platforms. Even so, updating the guidelines may be insufficient to shift a judiciary so hostile to merger enforcement that it will turn a blind eye to brazen admissions of a merging firm’s monopoly ambitions. If that’s the case, we should look to Congress to reassert its anti-monopoly objectives.
Lee Hepner is Legal Counsel at the American Economic Liberties Project.
At some point soon, the Federal Trade Commission is very likely to sue Amazon over the many ways the e-commerce giant abuses its power over online retail, cloud computing and beyond. If and when it does, the agency would be wise to lean hard on the useful and powerful law at the core of its anti-monopoly authority.
The agency’s animating statute, the Federal Trade Commission Act and its crucial Section 5, bans “unfair methods of competition,” a phrase Congress deliberately crafted, and the Supreme Court has interpreted, to give the agency broad powers beyond the traditional antitrust laws to punish and prevent the unfair, anticompetitive conduct of monopolists and those companies that seek to monopolize industries.
Section 5 is what makes the FTC the FTC. Yet the agency hasn’t used its most powerful statute to its fullest capability for years. Today, with the world’s most powerful monopolist fully in the commission’s sights, the time for the FTC to re-embrace its core mission of ensuring fairness in the economy is now.
The FTC appears to agree. Last year, the agency issued fresh guidance for how and why it will enforce its core anti-monopoly law, and the 16-page document read like a promise to once again step up and enforce the law against corporate abuse just as Congress had intended.
Why Section 5?
The history of Section 5—why Congress included it in the law and how lawmakers expected it to be enforced—is clear and has been spelled out in detail: Congress set out to create an expert antitrust agency that could go after bad actors and dangerous conduct that the traditional anti-monopoly law, the Sherman Act, could not necessarily reach. To do that, Congress crafted Section 5 so that the FTC could stop tactics that dominant corporations devise to sidestep competition on the merits and instead unfairly drive out their competitors. Congress gave the FTC the power to enforce the law on its own, to stop judges from hamstringing the law from the bench, as they have done to the Sherman Act.
As I’ve detailed, the Supreme Court has issued scores of rulings since the 1970s that have collectively gutted the ability of public enforcement agencies and private plaintiffs to sue monopolists for their abusive conduct and win. These cases have names—Trinko, American Express, Brooke Group, and so on—and, together, they dramatically reshaped the country’s decades-old anti-monopoly policy and allowed once-illegal corporate conduct to go unchecked.
Many of these decisions are now decades old, but they continue to have outsized effects on our ability to police monopoly abuses. The Court’s 1984 Jefferson Parish decision, for example, made it far more difficult to successfully prosecute a tying case, in which a monopolist in one industry forces customers to buy a separate product or service. The circuit court in the government’s monopoly case against Microsoft relied heavily on Jefferson Parish in overturning the lower court’s order to break Microsoft up. More recently, courts deciding antitrust cases against Facebook, Qualcomm and Apple all relied on decades of pro-bigness court rulings to throw out credible monopoly claims against powerful defendants.
Indeed, the courts’ willingness to undermine Congress was a core concern for lawmakers when drafting and passing Section 5. Three years before Congress created the FTC, the U.S. Supreme Court handed down its verdict in the government’s monopoly case against Standard Oil, breaking up the oil trust but also establishing the so-called “rule of reason” standard for monopoly cases. That standard gave judges the power to decide if and when a monopoly violated the law, regardless of the language of, or democratic intent behind, the Sherman Act. Since then, the courts have marched the law away from its goal of constraining monopoly power, case by case, to the point that bringing most monopolization cases under the Sherman Act today is far more difficult than it should be, given the simple text of the law and Congress’ intent when it wrote, debated, and passed the act.
That’s the beauty and the importance of Section 5. Congress knew that the judicial constraints put on the Sherman Act meant it could not reach every monopolistic act in the economy. That’s now truer than ever. Section 5 can stop and prevent unfair, anticompetitive acts without having to rely on precedent built up around the Sherman Act. It’s a separate law, with a separate standard and a separate enforcement apparatus. What’s more, the case law around Section 5 has reinforced the agency’s purview. In at least a dozen decisions, the Supreme Court has made clear that Congress intended for the law to reach unfair conduct that falls outside of the reach of the Sherman Act.
So the law is on solid footing, and after decades of sidestepping the job Congress charged it to do, the FTC appears ready to once again take on abuses of corporate power. And not a moment too soon. After decades of inadequate antitrust enforcement, unfairness abounds, particularly when it comes to the most powerful companies in the economy. Amazon perches atop that list.
A Recidivist Violator of Antitrust Laws
Investigators and Congress have repeatedly identified Amazon practices that appear to violate the spirit of the antitrust laws. The company has a long history of using predatory pricing as a tactic to undermine its competition, either as a means of forcing companies to accept its takeover offers, as it did with Zappos and Diapers.com, or simply as a way to weaken vendors or take market share from competing retailers, especially small, independent businesses. Lina Khan, the FTC’s chair, has called out Amazon’s predatory pricing, both in her seminal 2017 paper Amazon’s Antitrust Paradox, and when working for the House Judiciary Committee during its big tech monopoly investigation.
Under the current interpretation of predatory pricing as a violation of the Sherman Act, a company that prices a product below cost to undercut a rival must successfully put that rival out of business and then hike up prices to the point that it can recoup the money it lost with its below-cost pricing. Yet with companies like Amazon—big, rich, with different income streams and sources of capital—it might never need to make up for its below-cost pricing by hiking up prices on any one specific product, let alone the below-cost product. Indeed, as Jeff Bezos’s vast fortune can attest, predatory pricing can generate lucrative returns simply by sending a company’s stock price soaring as it rapidly gains market share.
If Amazon wants to sell products from popular books to private-label batteries at a loss, it can. Amazon makes enormous profits by taxing small businesses on its marketplace platform and from Amazon Web Services. It can sell stuff below cost forever if it wants to, a clearly unfair method of competing with any other single-product business, all while avoiding prosecution under the judicially weakened Sherman Act. Section 5 can and should step in to stop such conduct.
Amazon’s marketplace itself is another monopolization issue that the FTC could and should address with Section 5. The company’s monopoly online retail platform has become essential for many small businesses and others trying to reach customers. To wit, the company controls at least half of all online commerce, and even more for some products. As an online retail platform, Amazon is essential, suggesting it should be under some obligation to allow equal access to all users at minimal cost. Of course, that’s not what happens; as my organization has documented extensively, Amazon’s captured third-party sellers pay a litany of tolls and fees just to be visible to shoppers on the site. Amazon’s tolls can now account for more than half of the revenues from every sale a small business makes on the platform.
The control Amazon displays over its sellers mirrors the railroad monopolies of yesteryear, which controlled commerce by deciding which goods could reach buyers and under what terms. Antitrust action under the Sherman Act and legislation helped break down the railroad trusts a century ago. But if enforcers were to declare Amazon’s marketplace an essential facility today, the path to prosecution under the Sherman Act would be difficult at best.
Section 5’s broad prohibition of unfair business practices could prevent Amazon’s anticompetitive abuses. It could ban Amazon from discriminating against companies that sell products on its platform that compete with Amazon’s own in-house brands, or stop it from punishing sellers that refuse to buy Amazon’s own logistics and advertising services by burying their products in its search algorithm. The FTC could potentially challenge such conduct under the Sherman Act, as a tying case, or an essential facilities case. But again, the pathway to winning those cases is fraught, even though the conduct is clearly unfair and anticompetitive. If Amazon’s platform is the road to the market, then the rules of that road need to be fair for all. Section 5 could help pave the way.
These are just a few of the ways we could see the FTC use its broad authority under Section 5 to take on some of Amazon’s most egregious conduct. If I had to guess, I imagine the FTC in a potential future Amazon lawsuit will likely charge the company under both the Sherman Act and the FTC Act’s Section 5 for some conduct it feels the traditional anti-monopoly statute can reach, and will rely solely on Section 5 for conduct that it believes is unfair and anticompetitive, but beyond the scope of the Sherman Act in its current, judicially constrained form. For example, while the FTC could potentially use the Sherman Act to address Amazon’s decision to tie success on its marketplace to its logistics and advertising services, the agency’s statement makes clear that Section 5 has been and can be used to address “loyalty rebates, tying, bundling, and exclusive dealing arrangements that have the tendency to ripen into violations of the antitrust laws by virtue of industry conditions and the respondent’s position within the industry.”
Might this describe Amazon’s conduct? Very possibly, but that will ultimately be up to the FTC to decide. Suing Amazon under both statutes would invite the courts to reshape Sherman Act doctrine to be more critical of monopoly abuses, and help develop the law so that the FTC can eagerly embark on its core mission under Section 5: to help ensure markets are fair for all.
Ron Knox is a Senior Researcher and Writer for ILSR’s Independent Business Initiative.
For those not steeped in antitrust law’s treatment of single-firm monopolization cases, under the rule-of-reason framework, a plaintiff must first demonstrate that the challenged conduct by the defendant is anticompetitive; if successful, the burden shifts to the defendant in the second or balancing stage to justify the restraints on efficiency grounds. According to research by Professor Michael Carrier, between 1999 and 2009, courts dismissed 97 percent of cases at the first stage, reaching the balancing stage in only two percent of cases.
There is a fierce debate in antitrust circles as to what constitutes a cognizable efficiency. In April, the Ninth Circuit upheld Judge Yvonne Gonzalez Rogers’ dismissal of Epic Games’ antitrust case against Apple on the flimsiest of efficiencies.
A brief recap of the case is in order, beginning with the challenged conduct. Epic alleged Apple forces certain app developers to pay monopoly rents and exclusively use its App Store, and in addition requires the use of Apple’s payment system for any in-app purchases. The use of Apple’s App Store, and the prohibition on a developer loading its own app store, as well as the required use of Apple’s payment system are set forth in several Apple contracts developers must execute to operate on Apple’s iOS. The Ninth Circuit found that Epic met its burden of demonstrating an unreasonable restraint of trade, but Epic’s case failed because Apple was able to proffer two procompetitive rationales that the Appellate Court held were non-pretextual and legally cognizable. One of those justifications was that Apple prohibited competitive app stores and required developers to only use Apple’s payment system because it was protecting its intellectual property (“IP”) rights.
Yet neither the District Court nor the Ninth Circuit ever tells us what IP Apple’s restraints are protecting. The District Court opinion states that “Apple’s R&D spending in FY 2020 was $18.8 billion,” and that Apple has created “thousands of developer tools.” But even Apple disputes in a recent submission to the European Commission that R&D has any relationship to the value of IP: “A patent’s value is traditionally measured by the value of the claimed technology, not the amount of effort expended by the patent holder in obtaining the patent, much less ‘failed investments’ that did not result in any valuable patented technology.”
Moreover, every tech platform must invest something to encourage participation by developers and users. Without the developers’ apps, however, there would be few if any device sales. If all that is required to justify exclusion of competitors, as well as tying and monopolization, is the existence of some unspecified IP rights, then exclusionary conduct by tech platforms for all practical purposes becomes per se legal. Plaintiffs challenging these tech platform practices on antitrust grounds are doomed from the start. Even though the plaintiff theoretically can proffer a less restrictive alternative for the tech platform owners to monetize their IP, this alternative per the Ninth Circuit must be “virtually as effective” and “without increased cost.” Again, the deck was already stacked against plaintiffs, and this decision risks making it even less likely for abusive monopolists to be held to account.
Ignoring the Economic Literature on IP
Beyond the virtual antitrust immunity this ruling bestows on tech platforms in rule-of-reason cases, there are important reasons why IP should never qualify as a procompetitive business justification for exclusionary conduct. Had the Ninth Circuit consulted the relevant economic literature, it would have learned that IP is fundamentally not procompetitive. Indeed, there is virtually no evidence that patents and copyrights, particularly in software, incentivize or create innovation. As Professors Michele Boldrin and David Levine conclude, “there is no empirical evidence that [patents] serve to increase innovation and productivity…” This same claim could be made for the impact of copyrights as well. Academic studies find little connection between patents, copyright, and innovation. Historical analysis similarly disputes the connection. Surveys of companies further find that the goals of patenting are not primarily to stimulate innovation but instead the “prevention of rivals from patenting related inventions.” Or, in other words, the creation of barriers to entry. Innovation within individual firms is motivated much more by gaining first-mover advantages, moving quickly down the learning curve or developing superior sales and marketing in competitive markets. As Boldrin and Levine explain:
In most industries, the first-mover advantage and the competitive rents it induces are substantial without patents. The smartphone industry—laden as it is with patent litigation—is a case in point. Apple derived enormous profits in this market before it faced any substantial competition.
Possibly even more decisive for innovation are higher labor costs that result from strong unions. Other factors have also been found to be important for innovation. The government is responsible for 57 percent of all basic research, research that has been the foundation of the internet, modern agriculture, drug development, biotech, communications and other areas. Strong research universities are the source of many more significant innovations than private firms. Professor Margaret O’Mara’s recent history of Silicon Valley demonstrates how military contracts and relationships with Stanford University were absolutely critical to the Silicon Valley success story. Her book reveals the irony of how the Silicon Valley leaders embraced libertarian ideologies while at the same time their companies were propelled forward by government contracts.
In an earlier period, the antitrust agencies ordered thousands of compulsory licensing decrees, which were estimated to have covered between 40,000 and 50,000 patents. Professor F.M. Scherer shows how these licenses did not lead to less innovation. Indeed, the availability of this technology led to significant economic advances in the United States. In his book, “Inventing the Electronic Century,” Professor Alfred Chandler documents how Justice Department consent decrees with RCA, AT&T and IBM, which made important patents available even to rivals, created enormous competition and innovation in data processing, consumer electronics, and telecommunications. The evidence indicates that limiting or abolishing patent protection has a far more beneficial impact than preserving it, let alone allowing it to be used to justify anticompetitive exclusion.
Probably the weakest case for the economic value of patents exists in the software industry. Bill Gates, reflecting on patents in the software industry, said in 1991:
If people had understood how patents would be granted when most of today’s ideas were invented and had taken out patents, the industry would be at a complete standstill today…A future start-up with no patents of its own will be forced to pay whatever price the giants choose to impose.
The point is that there is very little support for antitrust courts to elevate IP to a justification for market exclusion. The case for procompetitive benefits from patents is nonexistent, while much evidence supports an exclusionary motive for obtaining IP by big tech firms.
As Professors James Bessen and Michael Meurer show, patents on software are particularly problematic because they have high rates of litigation, are of little value, and many appear to be trivial. In particular, Bessen and Meurer argue that many software patents are obvious and therefore invalid. Moreover, the claim boundaries are “fuzzy” and therefore infringement is expensive to resolve.
When asserted in a rule-of-reason case under the Apple precedent, software patents would seem to escape all scrutiny. The defendant would simply assert IP protection without any obligation to reveal with specificity the nature of the IP. The plaintiff then would have no way to challenge validity or infringement, or to demonstrate an ability to design around the defendant’s IP. Instead, they must show, per the Ninth Circuit’s opinion, that there is a less restrictive way for the plaintiff to be paid for its IP that is “virtually as effective” and “without increased cost.” This makes no sense at all. It would make far more sense to force any tech platform that seeks to exclude competitors on the basis of IP to simply file a counterclaim to the antitrust complaint alleging patent or copyright infringement and seeking an injunction that excludes the plaintiff. In such a case, the platform’s IP can be tested for validity. The exclusion by the antitrust defendant can be compared to the patent grant, and patent misuse can be examined.
Ignoring Its Own Precedent
It is unfortunate that the Apple court did not take seriously the Circuit’s earlier analysis in Image Technical Services v. Eastman Kodak. There, Kodak defended its decision to tie its parts and service in the aftermarket by claiming that some of its parts were patented. The Court noted that “case law supports the proposition that a holder of a patent or copyright violates the antitrust laws by ‘concerted and contractual behavior that threatens competition.’” The Kodak Court’s example of such prohibited conduct was tying, a claim made by Epic. Because we know that there are numerous competing payment systems, and because nothing in the Ninth Circuit’s opinion addresses the specifics of Apple’s IP that must be protected, it is likely the case that Apple does not have blocking patents that preclude use of alternative payment systems. And if this is the case, Epic alleged the very situation where the Ninth Circuit earlier (citing Supreme Court precedent) found that patents or copyrights violate the antitrust laws. Moreover, the Ninth Circuit thought it was significant that Kodak refused to allow use of both patented or copyrighted products and non-protected products. This may also be true of Apple’s development license in the Epic case. The Court did not seem to think an inquiry into what IP was licensed by these agreements was significant.
In sum, use of IP as a procompetitive business justification has no place in rule-of-reason cases. There is no evidence IP is procompetitive, and use of IP as a business justification relieves the antitrust defendant of the burden to demonstrate validity and infringement required in IP cases. It further stacks the deck in rule-of-reason cases against plaintiffs, and unjustly favors exclusionary practices by dominant tech platforms.
Mark Glick is a professor in the economics department of the University of Utah.
In the last thirty years, the United States has experienced a whirlwind of concentration among food suppliers. This elimination of competition is an urgent problem not only because consumers are faced with higher prices and fewer food choices in grocery stores, but also because the largest agribusinesses on Earth (“Big Ag”), as a result of their massive economic and political power, clog up the workings of our political system to the detriment of democracy and the planet.
Big Ag’s rising profits have been shown to be a driving force behind inflationary food prices again and again. A recent analysis by the White House explained that “If rising input costs were driving rising meat prices, those profit margins would be roughly flat, because higher prices would be offset by the higher costs.”
In addition to these already egregious displays of power and control, Big Ag also destroys the planet’s natural resources, violates existing labor laws, engages in atrocious and inhumane animal processing practices, and puts small farms out of business. Both the legal and economic arrangements that enable this behavior create an unfair political economy that’s immensely profitable and partial to large agribusinesses; these forces allow massive corporations like Monsanto, Tyson, Cargill, and John Deere to largely evade antitrust scrutiny.
As a result, Big Ag players garner enormous market power and uneven political clout, positioning themselves to create even more favorable legislation with which to entrench their dominance in each sector of agriculture, from beef to farming equipment to poultry to seeds.
It Begins on the Farm
An immediate example of Big Ag’s might is in farming equipment. Before the 1930s, over 160 companies sold farm equipment in response to growing industrialization and mechanization of farming. Through industry consolidation, however, John Deere emerged as the leading supplier of agricultural machinery in the United States. Today, John Deere stands alone as the dominant player, commanding roughly 53 percent of the market for large tractors and 60 percent for combines. From 2005 to 2018, John Deere acquired a staggering twelve companies that specialized in sectors ranging from farm equipment to precision technology.
In February, the Department of Justice filed six lawsuits in an effort to crack down on Deere’s monopoly power, engaging in a right-to-repair battle in four states. The lawsuits allege that Deere has illegally attempted to control the repair of Deere equipment, such as tractors and combines, using electronic-control units. The filing contends that the farming equipment giant and its dealerships monopolize the market for repair and maintenance services by designing proprietary Deere equipment, which requires Deere-controlled software for the diagnosis and maintenance functions. That software is exclusively available to technicians authorized by Deere. This arrangement leaves many independent shops and farmers beholden to Deere-authorized vendors when repairing their equipment. In this way, Big Ag exercises a sort of private tyranny over those who rely on its equipment to make a living, while remaining largely unaccountable to the public and consumers.
The tentacles of Big Ag reach beyond equipment into our milk and meat supply. Industry concentration in dairy has led to fewer farms and more mega-dairy operations, diminishing the profits of small family farms. The beef industry similarly has become more heavily concentrated. Today, only four firms—Tyson, Cargill, JBS, and National Beef Packing Co.—control over 70 percent of the nation’s beef supply, and they processed roughly 85 percent of cattle in the United States in 2018.
Concentration increased at such a breakneck pace during the 1980s that Department of Agriculture economists characterized this wave of mergers as “merger mania”; in the beef packing sector, concentration soared from 35.7% in 1980 to 71.6% by 1990.
For instance, through mergers in the agriculture industry, “the four largest meatpackers have increased their share of the market from 36% to 85%, and the largest four sellers of corn seed accounted for 85% of U.S. corn seed sales in 2015, up from 60% in 2000.”
Due to the resulting power over consumers and input providers, these mega-corporations are doing better than ever. The level of concentration, and the control over factory farming that it grants, are partially responsible for Tyson Foods’ beef sales jumping to $5 billion in the first quarter of 2022, lifting overall sales to $12.93 billion. Tyson Foods realized over a billion dollars in new dividends and stock buybacks. Add this to the more than $3 billion the company has already paid out to shareholders since the start of the pandemic. In beef processing, corporate profits skyrocketed by $96.9 billion in the third quarter of 2021 alone.
Economic Power Translates into Political Power
Though it is hard to precisely measure the political power large agribusinesses have achieved, each industry as a whole wields immense political power resulting from the economic growth and profits that concentration has delivered. This is malfeasance of the highest order. Food monopolists and other dominant players in our agriculture system are able to direct large campaign contributions to the key lawmakers in charge of legislating the sectors in which these mega-corporations have a direct interest.
Farm subsidies in the United States largely support private associations and large corporations. These subsidies account for roughly 39 percent of farm income while the biggest agriculture firms continue to make record-breaking profits. The United States government gives away free money to private corporations that continue to increase their profits without contributing back to the public coffers, providing adequate care to farm animals, or adequately compensating (and protecting) the labor that generates the profit.
One example is the National Cattlemen’s Beef Association (NCBA). Researchers have long understood how clearly the intent to monopolize shows through the political clout of large, private trade associations like the NCBA, which is directly paid, through the U.S. government, a proportion of the proceeds from every beef sale (like supermarket steaks or hamburgers from a fast-food restaurant). In addition to lobbying for the further consolidation of the meat-processing industry, the NCBA uses these proceeds to lobby for Americans to eat more meat and to oppose district court judges who are sympathetic to animal rights.
The Social Costs Are Adding Up
Food production and industrial farming pose existential threats to critical ecosystems and rural populations, accelerating climate change through pollution and massive greenhouse gas emissions. The natural resources needed to sustain the increasing industrialization of our agricultural infrastructure are exhausted by large industry titans not in the least bit compelled to employ sustainable environmental practices. These effects are undesirable to everyone but the large agribusiness polluters themselves, which perversely gain a greater capacity to pollute and contribute to climate change as they grow in scale and size.
The broader societal costs of the size, power, and dominance of food monopolies are far-reaching. Economic power garnered from consolidating food industries, especially during the ongoing COVID-19 pandemic, yields uneven political influence: corporations shape the laws that get enacted in their favor, which in turn gives them more control of the food system. In the legal system, the problem of agriculture monopolies cannot be adequately dealt with on purely economic grounds either, because of the prevailing role that economic analysis plays in assessing anticompetitive harm. With its fixation on short-run consumer price effects, the current economic lens cannot fully capture the ways in which Tyson, Bayer, or Monsanto grow their market power. Like dominant players in other industries, major corporations within Big Ag also mold political outcomes in their favor to avoid critical enforcement. They achieve this by influencing the anti-monopoly policies enacted to proscribe and limit their size in the first place, positioning themselves to dictate the terms on which market activity is conducted.
When applying the law, antitrust courts should abandon the antiquated Chicago School dogma, which naively assumes that markets are self-correcting and that consumer welfare is paramount. When it comes to assessing the true harms of food monopolies and food barons, which undermine the rights of local farming operations, antitrust authorities should instead consider a broader set of anti-monopoly goals in order to disperse power more evenly among local farming operations nationwide.
To continue to permit consolidation in the aforementioned ways is anti-democratic. A strategy to implement these tools simply requires the political will to hold Big Ag corporate titans accountable by legally compelling them to relinquish their hoards of wealth, their industry control, and their attendant political influence.
Tyler Clark is an economist working on anti-monopoly, corporate power, and antitrust research. A recent graduate of the M.S. program in economics at the University of Utah, Tyler hopes to return and pursue a JD specializing in antitrust law. You can follow him on Twitter @traptamagotchi.
As the frontline against illegal monopolies and deceptive corporate behavior, the Federal Trade Commission (FTC) has a critical role to play in building an economy that works for consumers and small businesses. Since becoming FTC Chair, Lina Khan’s efforts to rein in anti-competitive behavior and protect consumers have been met with fierce resistance from powerful special interests and hostile editorials in The Wall Street Journal.
Unfortunately, given the FTC’s role in combating unfair corporate behavior, this pushback is to be expected. I should know: I had the privilege of being an FTC commissioner, serving in both the Clinton and Bush administrations. I’ve seen fair, and unfair, criticism targeted at Republican and Democratic FTC chairs alike.
As a commissioner, I served under Chair Tim Muris, who was appointed by George W. Bush and whose aggressive stewardship of the agency resembled in many ways the current leadership of Chair Lina Khan. While at the helm of the FTC, Chair Muris pursued one of the most aggressive regulatory agendas of any Bush-appointed agency head. His agenda was assisted by his chief of staff, Christine S. Wilson, who went on to be appointed to the FTC by Donald Trump.
Despite this history, Wilson made big news when, as part of her resignation announcement, she attacked Chair Khan’s “honesty and integrity” and accused her of “abuses of government power” and “lawlessness.” This turned many heads in Washington, particularly mine because of how detached this viewpoint was from my prior experience of serving at the FTC under Wilson’s own stewardship of the agency.
In his 2021 Executive Order on Promoting Competition in the American Economy, President Biden acknowledged that “a fair, open, and competitive marketplace has long been a cornerstone of the American economy.” Unfortunately, corporate concentration has grown under both parties for many years, especially in the technology industry. It is heartening, and past time, to see the White House, the FTC, the Department of Justice, and other agencies working to swing the pendulum back and reinvigorate competition in the American economy.
Despite the ongoing crisis of corporate concentration, Ms. Wilson took exception to an antitrust policy statement the FTC adopted in November and to Chair Khan’s statements in favor of strong enforcement. I found this odd, having seen up close Ms. Wilson zealously advance Chair Muris’s enforcement agenda. In office, Muris “challenged mergers in markets from ‘ice cream to pickles,’” as the Wall Street Journal once noted, including in the technology industry, where Lina Khan has devoted significant attention.
During his tenure, Muris used the power available to him as Chair on behalf of consumers and for the good of the economy. He expanded the theory behind the FTC’s regulatory authority so he could take new action to protect consumers—like creating the Do Not Call Registry—over frivolous legal objections by the telemarketing industry. Like Khan, he coordinated with the DOJ to ensure that they were addressing anticompetitive behavior.
Ms. Wilson claims that Chair Khan should have recused herself from a Facebook acquisition case because of opinions she had expressed as a Congressional staffer. But both a federal judge and the full Commission found no basis for these claims of impropriety, and it is clear that Chair Khan had no legal or ethical obligation to recuse in this case. FTC Commissioners including Khan, like judges, are required to set their personal opinions aside and evaluate cases on the merits, and they do. The FTC Ethics Guidelines tell commissioners to “not work on FTC matters that affect your interests: financial, relational, or organizational.” When it comes to ethics guidelines, it doesn’t get any plainer than that, and Chair Khan’s participation in the case clearly does not violate these guidelines.
In a hyper-partisan environment, Ms. Wilson’s attacks on the FTC’s credibility appear to me as an attempt to slow antitrust enforcement and ultimately obfuscate Chair Khan’s pro-consumer agenda.
The U.S. Chamber of Commerce, which lobbies against pro-consumer regulations, sent an open letter to Senate oversight committees demanding an investigation of “mismanagement” at the FTC, including congressional hearings. No wonder the Chamber is upset. The Biden Administration is taking the crisis of corporate concentration seriously and is taking steps to bolster antitrust and consumer protection enforcement. That’s a development American consumers should cheer, because when corporate consolidation rises, competition is inevitably diminished, leading to higher prices and fewer choices for consumers.
Fortunately, Chair Khan is building on the legacy of strong leaders like Muris to build an economy that works for consumers, not harmful monopolies. Ultimately, she will be remembered for that and not cynical, distracting attacks on her.
Sheila Foster Anthony, an FTC commissioner from 1997 to 2003, previously served as Assistant Attorney General for Legislation at the U.S. Department of Justice. Prior to her government service, she practiced intellectual property law in a D.C. firm.
I love eggs. I really do. There was a year in law school where I religiously made and ate an egg sandwich for breakfast every day. To this day, I believe an egg fried in olive oil until the yolks are jammy and the edges are crispy is a perfect food.
Since last year, however, my egg-loving style has been cramped. As everyone knows, the price of eggs at the grocery store more than doubled in 2022, increasing from $1.78 a dozen in December 2021 to over $4.25 in December 2022. This 138-percent increase in egg prices far outstripped the 12-percent increase Americans saw in grocery prices generally over the same period. And some Americans have had it much worse, as average egg prices reached well over $6 a dozen in states ranging from Alabama to California and Florida to Nevada.
What’s behind the skyrocketing retail price of the incredible edible egg? Well, for one thing, the skyrocketing wholesale price of that egg. Between January 2022 and December 2022, wholesale egg prices went from 144 cents for a dozen Grade-A large eggs to 503 cents a dozen. This was the highest price ever recorded for wholesale eggs. Over the entire year, wholesale egg prices averaged 282.4 cents per dozen in 2022. When we consider that average retail egg prices for the same year were only about 3 cents higher at 285.7 cents per dozen, it becomes clear that the primary contributor to rising egg prices at the grocery store has been the dramatic increase in the wholesale prices charged by egg producers.
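The arithmetic behind this comparison is simple enough to check directly. A minimal sketch, using only the figures quoted above (illustrative numbers, not an official USDA series):

```python
# Prices in cents per dozen, as quoted above; illustrative figures only.
wholesale_jan_2022 = 144    # Grade-A large, January 2022
wholesale_dec_2022 = 503    # December 2022 (record high)
wholesale_avg_2022 = 282.4  # 2022 average, wholesale
retail_avg_2022 = 285.7     # 2022 average, retail

# Wholesale prices roughly tripled-and-a-half over the course of the year.
wholesale_increase = (wholesale_dec_2022 - wholesale_jan_2022) / wholesale_jan_2022
print(f"Wholesale increase over 2022: {wholesale_increase:.0%}")  # 249%

# The average retail price sat only ~3 cents above the average wholesale
# price, so nearly the entire retail price was the wholesale price itself.
retail_markup = retail_avg_2022 - wholesale_avg_2022
print(f"Average retail markup: {retail_markup:.1f} cents per dozen")  # 3.3
```

In other words, grocers passed wholesale prices through almost cent-for-cent, which is why the wholesale side of the market is where the explanation has to be found.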
If this gives you hope that relief might be around the corner because you’ve heard something about a recent “collapse” in wholesale egg prices, sadly your hope would be misplaced. Despite this much-ballyhooed collapse, the average wholesale egg price has simply gone from 4-to-5 times what it was in January of last year to 2-to-3 times that number. If that weren’t enough, prices are expected to spike again when egg demand picks up in the run-up to Easter. Ultimately, the USDA is projecting that the average wholesale egg price in 2023 will be 207 cents a dozen—or only about 25% lower than the average price for 2022. So much for a collapse.
Are you wondering who sets these wholesale prices? Why, an oligopoly, of course. The production of eggs in America is dominated by a handful of companies led by Cal-Maine Foods. With nearly 47 million egg-laying hens, Cal-Maine controls approximately 20% of the national egg supply and dwarfs its nearest competitor. The leading firms in the industry have a history of engaging in “cartelistic conspiracies” to limit production, split markets, and increase prices for consumers. In fact, a jury found such a conspiracy existed as recently as 2018, and a wide-ranging lawsuit was brought just a couple of years ago accusing several of the largest egg producers (including Cal-Maine) of colluding to increase prices during the COVID-19 pandemic.
When asked about the multiplying price of their product, these dominant egg producers and their industry association, the American Egg Board, have insisted it’s entirely outside their control; an avian flu outbreak and the rising cost of things like feed and fuel, they say, caused egg prices to rise all on their own in 2022. And, sure enough, those were real headaches for the egg industry last year—about 43 million egg-laying hens were lost due to bird flu through December 2022, and input costs for producers certainly increased over 2021 levels. As my organization, Farm Action, detailed in letters to federal antitrust enforcers last month, however, the math behind those explanations for the steep increase in wholesale egg prices just doesn’t add up.
The reality, we argued, is that wholesale egg prices didn’t triple in 2022, and aren’t projected to stay elevated through 2023, because of “supply chain, ‘act of God’ type stuff,” as one industry executive has tried to spin it. Rather, the true driver of record egg prices has been simple profiteering, and more fundamentally, the anti-competitive market structures that enable the largest egg producers in the country to engage in such profiteering with impunity.
According to the industry’s leading firms, rising egg prices should be blamed on two things: avian flu and input costs. We can stipulate for the sake of argument that, if a massive amount of egg production and, hence, potential revenue were lost due to avian flu, the largest producers would be justified in trying to recoup some of that lost revenue by raising prices on their remaining sales. Likewise, if there were a sharp rise in egg production costs, we can stipulate that producers would be justified in trying to pass them on to wholesale customers. But was there a nosedive in egg production? Did the cost of egg inputs multiply dramatically? Short answer: No, and No.
The bottom line on the avian flu outbreak is that it simply did not have a substantial effect on egg production. Although about 43 million egg-laying hens were lost due to avian flu in 2022, they weren’t all lost at once, and there were always over 300 million other hens alive and kicking to lay eggs for America. The monthly size of the nation’s flock of egg-laying hens in 2022 was, on average, only 4.8 percent smaller on a year-over-year basis. If that isn’t enough, the effect of losing those hens on production was itself blunted by “record high” lay rates throughout the year, which were, on average, 1.7 percent higher than the lay rate observed between 2017 and 2021. With substantially the same number of hens laying eggs faster than ever, the industry’s total egg production in 2022 was—wait for it—only 2.98 percent lower than it was in 2021.
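Since total production is just hens times lay rate, the two percentages above can be multiplied through to sanity-check the reported production figure. A back-of-the-envelope sketch using the numbers quoted in this paragraph:

```python
# Year-over-year changes quoted above; illustrative figures only.
flock_change = -0.048    # average monthly flock 4.8% smaller than in 2021
lay_rate_change = 0.017  # lay rates 1.7% above the 2017-2021 average

# Production ≈ hens × lay rate, so the two changes compound multiplicatively.
expected_production_change = (1 + flock_change) * (1 + lay_rate_change) - 1
print(f"Expected production change: {expected_production_change:.1%}")  # -3.2%
# ...which lines up closely with the reported 2.98% drop in total 2022 output.
```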
Turning to input costs, it’s true they were higher in 2022 than in 2021, but they weren’t that much higher. Farm production costs at Cal-Maine Foods—the only egg producer that publishes financial data as a publicly traded company—increased by approximately 20 percent between 2021 and 2022. Their total cost of sales went up by a little over 40 percent. At the same time, Cal-Maine produced roughly the same number of eggs in 2022 as it did in 2021. If we take Cal-Maine Foods as the “bellwether” for the industry’s largest firms, we can be pretty sure that the dominant egg producers didn’t experience anywhere near enough inflation in egg production costs to account for the three-fold increase in wholesale egg prices.
Against the backdrop of these facts, the industry’s narrative simply crumbles. It’s clear that neither rising input costs nor a drop in production due to avian flu has been the primary contributor to skyrocketing egg prices. What has been the primary contributor, you ask? Profits. Lots and lots of profits.
Gross profits at Cal-Maine Foods, for example, increased in lockstep with rising egg prices through every quarter of the last year. They went from nearly $92 million in the quarter ending on February 26, 2022, to approximately $195 million in the quarter ending on May 28, 2022, to more than $217 million in the quarter ending on August 27, 2022, to just under $318 million in the quarter ending on November 26, 2022. The company’s gross margins likewise increased steadily, from a little over 19 percent in the first quarter of 2022 (a 45 percent year-over-year increase) to nearly 40 percent in the last quarter of 2022 (a 345 percent year-over-year increase).
The most telling data point, however, is this: For the 26-week period ending on November 26, 2022—in other words, for the six months following the height of the avian flu outbreak in March and April—Cal-Maine reported a five-fold increase in its gross margin and a ten-fold increase in its gross profits compared to the same period in 2021. Considering the number of eggs Cal-Maine sold during this period was roughly the same in 2022 as it was in 2021, it follows that essentially all of this profit expansion came from—you guessed it—higher prices.
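There is a tidy bit of algebra hiding in those two multiples. Gross profit equals gross margin times revenue, so a five-fold margin increase and a ten-fold profit increase together pin down how much revenue grew. A rough sketch, assuming the approximate multiples quoted above:

```python
# Gross profit = gross margin × revenue, so the multiples must satisfy:
#   profit_mult = margin_mult × revenue_mult
profit_mult = 10.0  # gross profits up ~10x YoY (26 weeks ending Nov 26, 2022)
margin_mult = 5.0   # gross margin up ~5x over the same period

revenue_mult = profit_mult / margin_mult
print(f"Implied revenue multiple: {revenue_mult:.1f}x")  # 2.0x
# With unit volume roughly flat, doubled revenue means a roughly doubled
# average selling price: the profit expansion came from price, not volume.
```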
On their own, these numbers plainly show that dominant egg producers have been gouging Americans, using the cover of inflation and avian flu to extract profit margins as high as 40 percent on a dozen loose eggs.
Some agriculture economists and market analysts, however, have questioned whether this price gouging should raise antitrust concerns. The dramatic escalation in egg prices over the past year, they’ve argued, has just been “normal economics” at work. Per Angel Rubio, a senior analyst at the industry’s go-to market research firm, Urner Barry, the runaway increase in wholesale egg prices was simply a function of the “compounding effect” of “avian flu outbreaks month after month after month.” These outbreaks repeatedly disrupted egg deliveries, he presumes, driving customers to assent to spiraling price demands from alternative suppliers. In a blog post on Urner Barry’s website, Mr. Rubio further hypothesized that jittery customers may have “increased their ‘normal’ purchase levels to secure more supply,” goosing up prices even higher.
There are several reasons to doubt this theory of the case. To begin with, Mr. Rubio’s analysis presumes that avian flu outbreaks caused significant disruptions in the supply of eggs even though, as discussed above, the aggregate production data suggests that was not the case. But let’s assume that there were supply disruptions, and that these disruptions did lead to a glut of demand for reliable suppliers, giving them pricing power. If that were the case, it would stand to reason that Cal-Maine—which did not report a single case of avian flu at any of its facilities in 2022—had an opportunity to sell a whole lot more eggs in 2022 than in 2021, and to sell them at record-high profit margins. But Cal-Maine didn’t sell a whole lot more eggs. It sold roughly the same number of eggs. If Mr. Rubio’s theory were right, why did Cal-Maine leave money on the table?
Once we start applying this question to the pricing and production behavior of the egg industry’s dominant firms more broadly, a whole variety of competition red flags start cropping up.
Let’s talk about pricing first. In a truly competitive market, one would have expected rival egg producers to respond to a near-tripling of average market prices with efforts to undercut Cal-Maine’s skyrocketing profit margin and capture market share. Alas, that did not happen. In researching Farm Action’s letter to antitrust enforcers, we found no evidence of aggressive price competition for business among the largest egg producers. Yet everything about the mechanics of egg sales suggests that they should be competitive. Wholesale customers generally buy their eggs directly from producers. Long-term or exclusive contracts for egg supplies are rare. And the price of eggs in each purchase is individually negotiated. In other words, for each delivery of eggs they need, a wholesale customer is in all likelihood free to shop around and give rival suppliers an opportunity to undercut their incumbent supplier. Given this fluid sales environment, how did Cal-Maine manage to raise prices so much that its profit margin quintupled in one year without any other major producer coming to eat its lunch?
Another head-scratcher has been how the industry has managed to throttle production in the face of sustained high egg prices. As early as August of last year, the USDA was observing that favorable conditions existed, both in terms of moderating input costs and record-high egg prices, for producers to invest in expanding their egg-laying flocks. Yet such investment never materialized.
Even as prices reached unprecedented levels between October and December of last year, the number of eggs in incubators and the number of egg-laying chicks hatched by upstream hatcheries both remained flat, and were even below 2021 levels in December. As the year drew to a close, the USDA observed that “producers—despite the record-high wholesale price—are taking a cautious approach to expanding production in the near term.” The following month, it pared down its table-egg production forecast for the entirety of 2023—while raising its forecast of wholesale egg prices for every quarter of the coming year—on account of “the industry’s [persisting] cautious approach to expanding production.”
Because of this “caution” among egg producers, the total number of egg-laying hens in the U.S. has recovered from the losses caused by the avian flu outbreak of 2022 at less than one-third of the pace at which it recovered from the (relatively more severe) avian flu outbreak of 2015, according to data from the USDA’s National Agricultural Statistics Service. At its lowest point in the aftermath of the 2022 avian flu outbreak—in June of last year—the egg-laying flock counted a little under 300.5 million hens, or around 30 million (or 9%) fewer hens than it started the year with (330.8 million). For comparison, at its lowest point following the 2015 outbreak—which was also in June of that year—the egg-laying flock totaled 280.2 million and had nearly 35 million (or 11%) fewer hens than it did at the start of 2015 (315 million).
As you can see from the chart above (Fig. 1), in 2015, it took the industry less than 8 months to rebuild the egg-laying flock from its June low point; by the end of February 2016, producers had added over 30 million hens, bringing the total size of the egg-laying flock back up to 310.2 million. Contrast this pace of flock recovery between 2015 and 2016 with the pace of recovery we’ve seen over the past year. In the 8 months that have passed since June of last year, the industry has added less than 9 million hens—leaving the flock at an anemic 309.4 million at the start of February 2023.
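The flock-recovery comparison reduces to a few lines of arithmetic. A rough sketch using the figures quoted in the two paragraphs above (millions of hens):

```python
# Egg-laying flock sizes quoted above, in millions of hens.
start_2022, low_2022, feb_2023 = 330.8, 300.5, 309.4
start_2015, low_2015, feb_2016 = 315.0, 280.2, 310.2

# The 2015 outbreak was actually the deeper cut to the flock...
loss_2022 = (start_2022 - low_2022) / start_2022
loss_2015 = (start_2015 - low_2015) / start_2015
print(f"Flock loss: {loss_2022:.0%} in 2022 vs {loss_2015:.0%} in 2015")  # 9% vs 11%

# ...yet the recovery over the following 8 months was far faster in 2015-16.
added_2015_16 = feb_2016 - low_2015  # ~30 million hens added by Feb 2016
added_2022_23 = feb_2023 - low_2022  # ~9 million hens added by Feb 2023
pace_ratio = added_2022_23 / added_2015_16
print(f"2022-23 recovery pace vs 2015-16: {pace_ratio:.0%}")  # ~30%
```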
On its own, this comparison shows that large egg producers almost certainly could have rebuilt their hen flocks in the wake of last year’s avian flu outbreaks much faster than they have. When considered alongside the fact that, in 2015, the monthly average wholesale price reached its highest point in August and never exceeded $2.71 per dozen, the sluggishness of the 2022-2023 recovery becomes objectively suspicious. According to Urner Barry, in 2015, wholesale egg prices rose 6-8% for every 1% decrease in the number of egg-laying hens caused by the avian flu; that is barely half the 15% price increase for every 1% decrease in hens observed last year. The monthly price for a dozen wholesale eggs in 2022 cleared the 2015 high of $2.71 per dozen as early as April, and stayed at comparable or higher levels through the rest of the year. And yet, since June of last year, egg producers have been “cautiously” adding hens at a third of the pace they did in 2015-2016. What gives?
As Senator Elizabeth Warren and Representative Katie Porter noted in recent letters to dominant egg producers seeking answers about ballooning prices, producers appear to be “impervious to the basic laws of supply and demand.” This is the case not only in terms of their willingness to invest in new capacity, but also in terms of their willingness to utilize existing capacity. The rate at which hens lay eggs is the basic measure of flock productivity in the industry. Several factors can affect lay rates, including hen genetics and age, but within physical limits, producers can speed or slow egg-laying by their hens through nutrition, lighting, and other flock management choices. Yet, even as millions of hens were being lost to avian flu and eggs were fetching unprecedented prices last year, producers seemed to make choices that depressed, rather than maximized, their remaining hens’ lay rates.
The average table-egg lay rate reached its highest level ever (around 83.5 eggs per 100 hens per day) in the early, most severe, months of the avian flu epidemic—between March and May of last year—but then it nosedived. By June, the national average lay rate had dropped to about 82.5 eggs per 100 hens per day. This was consistent with seasonal trends in years past; it’s typical for lay rates to moderate as Spring turns to Summer. What happened after June, however, was curious. Normally, the average lay rate would start climbing again in July and stay on an upward trend through the end of the year, with the strongest lay rates often reported in the last 2 or 3 months of the year. In 2022, however, the opposite occurred. Lay rates flat-lined from June through the Fall before dipping to their weakest level in the last three months of the year. In other words, during the exact period when egg prices were hitting their stride—the last six months of 2022—the industry somehow managed to orchestrate a wholesale deviation from historical trends in the direction of getting fewer eggs out of the hens they already had.
Together, these dynamics of throttled production and unrestrained pricing are unmistakable red flags that deserve investigation by enforcers. Take Cal-Maine as an example again. They are the leader in a mostly commoditized industry. They presumably have the most efficient operations and the greatest financial power of any firm in the industry—allowing them to stand up hen capacity as fast as anyone and sell at competitive prices to capture unmet or up-charged demand. Instead of doing that, however, it appears they simultaneously abandoned price competition and refrained from expanding production to satisfy demand last year. This raises the question: What made Cal-Maine so confident that other large producers wouldn’t produce more eggs and undercut its prices? More to the point, why didn’t they?
Whatever the answers to these questions might be, this much is clear: Cal-Maine behaved as if its dominant position were entrenched, and its strategy worked. As rival egg producers have gone along instead of competing on price and production, the industry has been able to sustain elevated egg prices from one year to the next without any legitimate justification. Even as egg prices have started ameliorating this year, the USDA is still forecasting an average wholesale price for 2023 that is 70-to-80% higher than the 2021 average, suggesting that whatever “bottom” egg prices might reach this year will, in all likelihood, remain far above 2021 levels.
This pattern of behavior by dominant egg producers over the past year is consistent with longstanding research beginning in the 1970s—from Blair (1972) to Sherman (1977) to Kelton (1980)—on how leading firms in consolidated industries “administer prices” to achieve higher-margin “focal points” during economic shocks and periods of high inflation. And, make no mistake, the egg industry is consolidated. While the top 10 egg producers control 53%—and Cal-Maine alone controls 20%—of all egg-laying hens in the U.S., these numbers understate concentration in actual egg markets. Smaller egg operations (the ones that control the other 47% of America’s hens) tend to produce specialty, not conventional, eggs for sale at premium price points; as such, they typically have neither the scale nor the capacity to supply national grocery chains with the conventional eggs bought by most consumers. Only the largest egg producers can fill this need—a fact that likely makes the submarket for conventional eggs sold to national customers substantially more concentrated than the total egg supply. Was it pure coincidence that prices barely climbed in the fragmented specialty-egg segment but skyrocketed in the consolidated conventional-egg segment?
The honest answer is that I don’t know. In the end, I’m just a country lawyer with a laptop and a love for fried eggs. But smart people at the Boston Fed, the University of Utah, and a few other places have recently shown—empirically, I’m told—that it’s easier for competitors to coordinate for higher profits during a crisis when their industry is concentrated. Maybe that’s what happened here. Maybe it’s not. The only people who can find out for sure—and get the American people some restitution if it is what happened—are the fine public servants at the Federal Trade Commission, the Justice Department Antitrust Division, and state Attorneys General offices across the country. They should do nothing less.
For nearly 12 months now, dominant egg producers have demonstrated their ability to charge exorbitant prices for a staple we all need for no reason beyond having the power to do it. The “philosophy” of our antitrust laws, as Justice Douglas once reminded his colleagues on the Supreme Court, is that such power “should not exist.” With hundreds of millions of dollars missing from Americans’ pockets to enrich the profits of a handful of robber barons in the egg industry, antitrust enforcers owe the public a duty to investigate, and to see to it that the nation’s laws are enforced—even against entrenched giants.
Basel Musharbash is Legal Counsel at Farm Action, a farmer-led advocacy organization dedicated to building a food and agriculture system that works for everyone rather than a handful of powerful corporations. Basel is also the Managing Attorney of Basel PLLC, a mission-driven law firm in Paris, Texas, focused on the intersection of community development and antitrust law.
Congressional Democrats managed to pass a few crucial measures during December’s lame duck session. One tiny fraction of the omnibus bill to fund the government was the Merger Filing Fee Modernization Act, a measure for which anti-monopoly advocates have long been pushing.
The Act reforms the Hart-Scott-Rodino (HSR) filing fee structure, the program through which the Federal Trade Commission (FTC) and Department of Justice (DOJ) collect fees from corporations seeking federal approval to merge. The HSR program takes significant resources to administer, and the number of companies seeking to merge has surged in recent years — between 2020 and 2021, filings more than doubled from 1,637 to 3,644 — but the fee system had not been updated to account for the increased burden on the antitrust enforcers. Because the Merger Filing Fee Modernization Act raises the cap on fees, the Congressional Budget Office estimates that the new fees will generate $325 million in each of the first five years, with the two antitrust agencies splitting the proceeds and receiving $162.5 million each per year.
Congress appropriated $430 million for the FTC and $225 million for the DOJ Antitrust Division for FY2023. These budgets represent only a 22.5% and 11.9% increase from FY2022, respectively, and fall well short of the agencies’ respective requests of $490 million and $273 million. Since 2010, when adjusted for inflation, the FTC has received only a $40 million increase and the Antitrust Division a measly $7 million extra, despite processing more than double the number of HSR transactions in 2022 that they did in 2010. The agencies didn’t request more funding because they’re greedy; they need more funding to carry out their enormous missions, and Congress should support the missions.
The Merger Filing Fee Modernization Act, while an important reform, only increases what share of the FTC and DOJ Antitrust budget comes from HSR fees, and does not increase the overall budget independent of congressional funding. The recent flood of mergers (and higher valuations of those mergers) necessitates additional staff and resources at the agencies to properly review each transaction. Without more investment by Congress, the FTC and DOJ will remain pitifully short-staffed and under-resourced relative to the thousands of mergers and acquisitions that take place each year.
The perpetual underfunding of antitrust regulation has been known for years. As anti-monopoly researcher Matt Stoller pointed out, “spending on antitrust today is about a third what it was throughout most of the 20th century, and with a much bigger economy today. To get back to the level of antitrust enforcement we had in 1941 would require increasing the budgets of the agencies by ten times.”
And beyond the competition-enforcement mandate that the FTC shares with the DOJ Antitrust Division, the FTC has another underfunded but crucial mission: consumer protection.
The FTC’s Mission To Protect Consumers Is Just As Important As Protecting Competition
In 2022, the headlines were filled with stories of corporate misdeeds, oftentimes involving deceit of customers. The FTC has a legal mandate and enforcement power to crack down on many such businesses. Through Section 5 of the FTC Act, the FTC can take legal action against companies that engage in “unfair or deceptive acts.”
The FTC has two options for enforcement under Section 5 — administrative and judicial. Administrative enforcement happens after a problem has already arisen. It involves a proceeding in front of an administrative law judge, who issues a cease and desist order if they find a given practice illegal under Section 5. It is then up to the FTC to determine whether the illegal practices warrant additional penalties, mainly through consumer redress or civil fines. Judicial enforcement, on the other hand, is a preventive measure used by the FTC while the administrative process is still underway. For example, the FTC can use judicial enforcement to enjoin a merger that will hurt consumers while the administrative judge is still determining its legality.
One of the FTC’s “top priorities” is to protect older consumers. A 2022 FTC report found that older Americans were more likely to be victims of scams and lost more money when being scammed. The best-known of these are telemarketing scams in which fraudsters convince people to transfer money by impersonating a friend or government agent, or convincing them they’d won a prize or lottery. The fraudsters can’t carry out these schemes alone — and the FTC is cracking down.
FTC Chair Lina Khan has made good on the promise to prioritize cases that harm elderly Americans. In June 2022, the FTC filed a lawsuit against Walmart for its part in facilitating fraudulent transactions that targeted the elderly. The lawsuit alleges that Walmart’s money-transfer service routinely turned a blind eye to fraudulent transactions by not training its employees or warning consumers, thus allowing the scammers to collect the ill-gotten money. Over a five-year period, more than 200,000 fraud-induced money transfers were sent to or from Walmart stores, costing consumers nearly $200 million. If the FTC is successful, Walmart will have to compensate consumers for the lost money, pay civil penalties, and be subject to a permanent injunction that forces it to end money-transferring practices that result in fraud.
While older consumers are more likely to fall victim to telemarketing scams, children are unknowingly being tricked by corporations to increase their profits. Epic Games, the video game company that owns Fortnite, was fined $520 million for numerous privacy violations and “deceptive interfaces” that resulted in users, many of whom were children, making unintended purchases.
The FTC also cracked down on so-called “dark patterns” — underhanded tactics that companies use to squeeze more money from consumers, including junk fees, misleading advertising, data sharing, and making it difficult to cancel subscriptions. The agency has prosecuted LendingClub, ABCmouse, and Vizio for these dark patterns, and returned millions of dollars to consumers. The public benefits greatly from this work, both through the crackdown on shady schemes and through the money put back in victims’ pockets.
Although it carries out work that clearly benefits everyday Americans, the consumer protection side of the FTC often gets less press than high-profile mergers and acquisitions. But Americans are weary of corporations deceiving them to make more money off their private information. According to a 2019 study by Pew Research, 79% of Americans are very or somewhat concerned about how companies are using their personal data. Enforcing laws we already have in place shows people how the Biden Administration can help them by reining in corporate misbehavior and putting money back in their pockets.
In FY 2022, the FTC returned a total of $459.6 million to 2.3 million consumers who lost money to illegal business practices. These are material results demonstrating to people that the government can protect them from corporate shenanigans. And yet, the budget for FY 2023 underfunded the FTC by $60 million. The FTC’s budget request included funds for an additional 148 full-time staff members specifically dedicated to consumer protection, a worthy investment for addressing more of these complaints. Without the full amount of requested funds, it’s unclear how many staffers the FTC will be able to hire, but it certainly will not be enough.
The FTC should make bold requests for adequate staffing, and the Biden Administration should be willing to make a public fight of any resistance from Congress. And don't just take our word for why such a fight would be good politics — Biden's prioritizing of consumer protection in his State of the Union address demonstrates that he and his team see it as a political winner.
Going After Dominant Firms Is Not Enough To Protect Consumers
As with antitrust enforcement, the FTC looks to “maximize impact” of its limited resources for enforcing data privacy by going after “dominant” and “intermediary” companies. While this makes the best of the situation, this approach means plenty of abuses are falling through the cracks formed by inadequate funding for enforcement. Compare this to how the Securities and Exchange Commission often targets well-known celebrities when they engage in petty financial fraud — these cases are relatively easy to prosecute and generate headlines that hopefully give the impression of a tough agency on the beat, but these are all ultimately efforts to make do with far too little.
The actions the FTC does take against privacy-violating corporations are isolated and have limited power to deter future misconduct. For example, in 2019, the FTC fined Facebook $5 billion for misleading users by sharing personal information with third parties without their knowledge. While the fine was the largest ever levied by the agency, Facebook had been using this misleading tactic for seven years, in violation of a 2012 FTC order that followed previous allegations of even more brazenly deceptive practices.
And it is far from clear if the Trump-era FTC would have taken enforcement action but for the horrendous press Facebook generated through its relationship with Cambridge Analytica. Reliance on high-stakes, high-stress journalism is not a dependable basis for law enforcement — especially as journalism declines as an industry (ironically, in large part due to abuses by social media platforms). The fact that Facebook, one of the largest companies in the world, got away with deceptive data sharing for seven years also indicates that the FTC needs more resources to go after the dominant firms in addition to ensuring that smaller companies are not engaging in similar tactics. And the $5 billion fine, while historic, was a drop in the bucket for a company that hit a $1 trillion market cap not long after.
The limited financial impact of historic fines would hold for other large corporations profiting off their customers' information as well. As Marta Tellado of Consumer Reports pointed out, "fines alone will not reform [the] market," and the tech giants view fines "as a cost of doing business."
And it’s not just Facebook which collects personal information on its users — today, 73% of companies in the United States do so, from small businesses to monopolies, with many opportunities for corporate malfeasance. When a potentially unfair or deceptive business practice becomes endemic across the economy, regulators cannot meaningfully “set examples” and hope the rest of the market complies. Yes, the FTC needs new rulemaking as well as congressionally-mandated tools for protecting consumers, but ramping up capacity in the meanwhile can tangibly benefit millions of Americans. The FTC needs the resources to properly enforce the laws it is already charged with carrying out.
Andrea Beaty is Research Director at the Revolving Door Project, focusing on anti-monopoly, executive branch ethics and housing policy. KJ Boyle is a research intern with the Revolving Door Project. The Revolving Door Project scrutinizes executive branch appointees to ensure they use their office to serve the broad public interest, rather than to entrench corporate power or seek personal advancement.
Wielding a sink, Elon Musk entered Twitter HQ on October 26, 2022 and proceeded to do precisely that to the platform’s reputation. Since his ascension to the Twitter throne, Musk’s actions have drawn widespread ethical and moral repudiation, motivated in part by his courting of accounts known for promoting hatred and anti-Semitism. Undaunted, Musk has pressed forward on the same path, ownership of the company bestowing upon him the latitude to act as its sovereign, unencumbered by ethical considerations and guardrails implemented in Twitter’s previous corporate structure as a public entity.
Actions have consequences, as the Merovingian once so eloquently explained in The Matrix, and Musk's steps prompted a predictable exodus of advertisers eager to avoid any association between their brands and the sort of voices that Twitter has recently released from its digital Tartarus. Concurrently, users also began to seek out possible alternatives to the platform, as racist tweets proliferated under the new management's permissive (if not outright sympathetic) attitude toward the far right (conducted under a "free-speech" pretext, of course). While potential Twitter alternatives had previously risen to meet consumer interest, the recent demand for Twitter substitutes represents a clear ideological reversal of the pre-Musk era. Then, the far-right wing of the Republican party, feeling slighted by the banishment of extremist voices in its ranks from Twitter, sought a suitable safe space in Parler, Gettr, and Truth Social. Notably, none of these has mustered an audience anywhere near large enough to rival Twitter, despite substantial funding and the presence of the former White House occupant on Truth Social.
Of late, potential competition to Twitter has arisen in the form of Mastodon, a six-year-old open-source software platform for operating decentralized social networking services (i.e., you sign up for Mastodon on a specific server, some of which are invitation-only). Other nascent players include Nostr (an open-source protocol) and Post (a self-described source for premium news content without ads or subscriptions founded by former Waze CEO Noam Bardin). Whether these sites emerge as anything more than fringe competitors remains to be seen, though Twitter's recent actions reveal some concern as to their likelihood of success. Which brings us to the topic of this article.
In the past several days, Twitter adopted new policies, engaging in conduct that has drawn attention in antitrust circles. Specifically, Twitter 1) suspended the official Mastodon account (@joinmastodon) and then 2) implemented a new "Promotion of Alternative Social Platforms" policy on December 18, 2022. The policy prohibited users from promoting themselves on other platforms while on Twitter (examples of prohibited phrases include "follow me @username on Instagram" and use of "firstname.lastname@example.org"). Notably, the policy allowed alternative platforms to advertise on Twitter but prohibited users from promoting their own presence on those sites. As I explain below, this distinction informs the nature of Twitter's anticompetitive conduct.
The fact that Twitter rolled out this policy without apparent regard for its alignment with antitrust laws outside the United States offers some insight into Musk's haphazard and whimsical leadership. As many across social media pointed out, the new policy appears to violate the European Commission's Digital Markets Act, which, inter alia, states that gatekeeper platforms may not "prevent consumers from linking up to businesses outside their platforms." The DMA levies significant penalties for non-compliance: up to 10% of total worldwide annual turnover (sales) and up to 20% in the event of repeated infringements. The question of whether Twitter qualifies as a gatekeeper platform remains open, though Twitter's own conduct hints at the likely answer: it deleted the policy from its website by approximately 10:15pm on the same day it published it (December 18, 2022).
The question of whether the policy still exists in substance if not in form aside, let’s see how it would fare in the generally more permissive US antitrust arena.
Judging by some of the articles recently posted, the immediate focus appears to lie with whether Twitter has the freedom to refuse to deal. In other words, does Twitter have any statutorily-enforceable duty to deal with its actual or potential competitors? While regulatory agencies generally permit a business to choose its partners as it deems fit, the existence of market power and its likely exercise place some boundaries on that freedom. For example, in US v. Dentsply, the 3rd Circuit explained that "Behavior that otherwise might comply with antitrust law may be impermissibly exclusionary when practiced by a monopolist." Further, Twitter's refusal to deal in this case concerns its competitors less than its own customers. In other words, the extent to which Twitter has "refused" to deal with Mastodon, for example, is only by suspending its Twitter account. (Notably, it has NOT done the same for Facebook, Instagram, TikTok, or even Parler.) In contrast, Twitter has flexed its power over its own users, threatening them with required deletion of tweets and temporary account suspension for isolated incidents or first offenses, and with permanent suspension for subsequent offenses.
Condemnation of such actions under antitrust law (i.e., the Sherman Antitrust Act) has legal precedent. For example, Lorain Journal Co. v. United States involved the case of a dominant newspaper (the Lorain Journal), which sought to foreclose competition from the Elyria-Lorain Broadcasting Company, which operated a radio station called WEOL located eight miles south of Lorain. A “substantial number of journal advertisers” also sought to advertise on WEOL, to the chagrin of the Lorain Journal, which conceived a plan to decline “local advertisements in the Journal from any Lorain County advertiser who advertised or who appellants believed to be about to advertise over WEOL.” The Journal monitored the radio station’s advertisers, and terminated the contracts of those that advertised there, agreeing to reinstate their ability to advertise in the Journal only after they had ceased advertising on WEOL.
The court found that the Journal’s intent was to “destroy the broadcasting company” and that “Having the plan and desire to injure the radio station, no more effective and more direct device to impede the operations and to restrain the commerce of WEOL could be found by the Journal than to cut off its bloodstream of existence — the advertising revenues which control its life or demise.”
Note the Court’s use of the term “direct”. We’ll come back to that in a moment.
Importantly, the Supreme Court did not find that the Journal’s scheme had to be successful to establish a case of attempted monopolization. Rather, the injunctive relief “sought to forestall that success” and save WEOL.
Why would such precedent apply here? After all, Twitter does not prevent users from having accounts on Mastodon, for example — they just cannot advertise doing so on Twitter. The goal, however, remains the same: to deprive Mastodon or a similar competitor of the competitive oxygen required to reach critical mass. In the case of digital platforms, that oxygen comes in the form of network effects. The necessity of benefiting from such effects forms a substantial barrier to entry for nascent platforms.
Network effects occur when one customer of a particular product benefits from its use by other customers. For example, part of the attraction of a dance club lies in its popularity with other individuals. The term social “network” implies exactly such effects – users benefit from interaction with each other. Curtailing a club or a social network’s ability to increase its customer base threatens its very existence. Such network effects carry critical importance among digital platforms – they sustain industry behemoths like Facebook, Google, Amazon, YouTube and others, and they serve the same function with Twitter.
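The intuition behind network effects is often sketched with Metcalfe's law, under which a network's potential value scales with the number of possible connections among its users. The sketch below uses purely hypothetical user counts — the `metcalfe_value` helper and the figures are illustrative, not data about any actual platform:

```python
def metcalfe_value(users: int) -> int:
    """Potential pairwise connections in a network of `users` members.
    Metcalfe's law: value scales roughly with n * (n - 1) / 2."""
    return users * (users - 1) // 2

# Hypothetical user counts, purely illustrative:
incumbent = 250_000_000   # a Twitter-scale platform
entrant = 2_500_000       # a Mastodon-scale entrant

# The incumbent has 100x the users but roughly 10,000x the potential
# connections, illustrating why network effects form a barrier to entry.
ratio = metcalfe_value(incumbent) / metcalfe_value(entrant)
print(f"connection-value ratio: {ratio:,.0f}x")
```

The quadratic gap is the point: a platform one-hundredth the size offers not one-hundredth but roughly one ten-thousandth of the potential connections, which is why an entrant cannot simply replicate the incumbent's value by existing.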
But wait, you ask, this explanation still doesn’t address the key point: can’t people just establish the same network effects at Mastodon? To address this question, let’s introduce two more related economic concepts: (1) transaction costs and (2) lock-in.
Transaction costs are just that: costs that a participant in an exchange must incur to consummate that transaction. In this case, such costs take two primary forms — the costs of moving one's own account, and the cost of others not moving theirs, the latter reflecting a coordination problem. Take the case of a popular Twitter user with many followers. That user has expended substantial effort in establishing a follower base and will be loath to migrate to a different platform if that base does not follow, or if she risks losing a substantial portion of it and must rebuild the rest. In turn, the user with few followers may be less concerned with losing their own base and more concerned about moving to a platform that lacks the key accounts they follow, forcing them to multi-home (expend effort across multiple platforms rather than just one).
Such risks of starting anew elsewhere represent transaction costs associated with that migration. These costs also create lock-in, an economic inertia that occurs when a customer becomes dependent on the services of a single vendor, allowing that vendor to exert some degree of market power over the consumer (more on this in a second). For example, lock-in features prominently among legacy mainframe users, who cannot readily migrate certain workloads off the mainframe to the cloud, in large part because their mission critical applications rely on legacy code written in COBOL over the last fifty-plus years. Readers may remember New Jersey Gov. Phil Murphy’s April 2020 call for volunteer COBOL programmers who could help the distribution of unemployment aid during the initial phases of the COVID-19 pandemic.
The same concept applies here. Many Twitter users have established deep roots on Twitter, which has become a de facto archive of evidence. One can search for posts, articles, and the like for years prior, from institutions and users across the world. When autocracies crack down on dissidents or mass protests rise up to voice the will of the people, images often first appear on Twitter, where they are recorded for posterity and remain as a chronicle of humanity’s early experimentation with technology. While Twitter users can save and download their own archives, their whole as it appears on Twitter is surely greater than the sum of their individually-distributed parts.
Critically, the presence of lock-in indicates that the company that wields it has market power, commonly a critical ingredient when evaluating actual or potential anticompetitive conduct. What do we mean by that? In a recent CNN piece, Brian Fung defined the term as “dominance in a specific market that regulators would be expected to describe and explain in any lawsuit.” While this definition may reflect its understanding in the vernacular, Mr. Fung’s definition doesn’t accurately capture the concept.
In economic terms, market power just means the ability to set price above marginal cost. In other words, market power arises when a firm can set its own price above levels that would predominate under competitive conditions. Monopoly power, in cases of unilateral conduct (such as the present), "is the power to control prices or exclude competition." But doing so may just reflect a firm's superior business acumen, exploitation of which does not invite antitrust scrutiny, as the Supreme Court established in US v. Grinnell Corp. (1966). "Where monopoly power is acquired or maintained through anticompetitive conduct, however, antitrust law properly objects." The relevant question at hand is whether Twitter's recent conduct falls into this category. More importantly, however, the question we truly want to answer is whether Twitter's actions harm competition or, as the Supreme Court explained in FTC v. Indiana Fed'n of Dentists, its actions generate "potential for genuine adverse effects on competition."
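The standard way economists quantify "price above marginal cost" is the Lerner index, L = (P - MC) / P, which runs from 0 under perfect competition toward 1 as market power grows. A minimal sketch, using hypothetical prices and costs (none of these figures describe Twitter or any real firm):

```python
def lerner_index(price: float, marginal_cost: float) -> float:
    """Lerner index L = (P - MC) / P.
    0 under perfect competition (price equals marginal cost);
    approaches 1 as a firm's pricing power grows."""
    if price <= 0:
        raise ValueError("price must be positive")
    return (price - marginal_cost) / price

# Hypothetical figures, purely illustrative:
competitive = lerner_index(price=1.00, marginal_cost=0.95)  # roughly 0.05
monopolist = lerner_index(price=1.00, marginal_cost=0.40)   # 0.60
```

For a "free" platform the price is paid in kind (data and attention), so the index cannot be read off a posted price — but the concept carries over: degrading quality or restricting users' alternatives while holding the nominal price at zero is the in-kind analogue of raising P above MC.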
As the late legal scholar Phillip Areeda noted (and the Court cited in Indiana Fed'n), market power is but "a surrogate for detrimental effects." Economists and competition scholars have two primary methods of establishing such detrimental effects (i.e., harm to competition) at their disposal: (1) direct evidence and (2) indirect inference. Direct evidence is exactly that: observational evidence that a company (or a group, in the case of collusive conduct) has attempted to exclude competition, raise its price above competitive levels, or lower output.
Absent such direct evidence, we may infer anticompetitive effects indirectly by defining a "relevant market" and calculating market shares. But market definition is neither a requirement nor an end in itself — its sole purpose is to illuminate market power and permit the inference of anticompetitive conduct. (Courts have, however, commonly required that plaintiffs delineate at least "the rough contours" of a relevant market.)
For example, in the NCAA antitrust cases, the fact that defendant schools colluded to fix athlete wages below competitive levels was clear and obvious. Defendants admitted as much, and their bylaws enshrined the conduct. These facts represented direct evidence of anticompetitive conduct. Attempting to define a relevant market adds little, if anything, at that point and amounts to a Rube Goldberg machine: a complex exercise designed to prove the already obvious.
Nonetheless, let’s apply both methods here. First, do we have any evidence of monopoly power and its exercise to the detriment of competition? Absolutely. Twitter’s recent actions have illuminated the existence of lock-in through power it affords the platform over its users. You might respond, “Wait a second, Twitter offers a freemium model – using the platform is free, unless one wants blue checkmark available through the $8/month Twitter Blue subscription.”
Not quite. Digital platforms like Twitter, Facebook, or YouTube are not “free”. Just as in a barter economy, they require in-kind payment. The platforms give users access, while the users provide critical data that the platforms then sell to advertisers. As Judge Koh explained in her January 13, 2022 order in Klein et al. v. Facebook,
“In other words, users provide significant value to Facebook by giving Facebook their information—which allows Facebook to create targeted advertisements—and by spending time on Facebook—which allows Facebook to show users those targeted advertisements. If users gave Facebook less information or spent less time on Facebook, Facebook would make less money.”
The same applies to Twitter. In-kind transfers represent the operative currency on digital platforms that use such models. The platform can "raise the price" to the user by 1) diluting the quality of the user's experience on the site or 2) taking steps to prevent the user from multi-homing, or by de-platforming the user entirely. The European Commission's prohibition of such actions through the DMA reflects precisely these concerns.
Twitter has done exactly that, as evidenced by the increase in racial animus, the decline of content moderation, and the gutting of staff responsible for maintaining site quality. More directly, Twitter has threatened its users with banishment if they reveal their use of another platform or solicit actual or prospective followers to follow them on another platform. Doing so increases the user's costs, particularly to the extent that a user leverages such platforms for brand-building and cannot cross-pollinate across them. A user may multi-home precisely to avoid lock-in — Twitter's actions reflect an acknowledgement of this motivation and a desire to maintain the power that lock-in grants it over its users. Its recent ban on multiple journalists under the specious pretext of "security" represented a disciplinary tool for its broader user base, not so subtly implying that "if we can ban them, we can certainly ban you." If users could decouple from Twitter without losing their efforts and temporal investments, such threats would be self-defeating on the part of the platform. Such threats reflect no exercise of superior business acumen but rather a desire to maintain a dominant position by undercutting possible alternatives and avoiding the crucible of competition.
Now let’s turn to the second means of establishing harm to competition: indirect inference through a relevant market definition. First, let’s clarify one point that motivates this exercise: We want to determine which competitors, if any, could discipline Twitter’s ability to raise prices to its users or otherwise harm competition.
In the case of regulatory intervention, market definition would likely involve a substantial amount of data analysis. Fortunately, given the digital nature of such markets, data are plentiful. Aside from defendant data, third parties such as Comscore, Nielsen, and Semrush either collect their own data, contract with third parties to obtain it, or both (as in the case of Comscore, for example). Of course, for the purposes of this article, I did not have access to the more expensive sources, so I relied on Semrush's collected data.
Visitor Data for Major Microblogging Services, November 2022
Source: Semrush Traffic Analytics
| Site | Visits | Unique Visitors | Market Share (Visits) | Market Share (Visitors) |
| --- | --- | --- | --- | --- |
Of course, a likely rejoinder would posit that a market definition should include social media giants Facebook and its subsidiary Instagram, along with perhaps TikTok and Reddit. However, if these platforms could discipline Twitter, we would not have observed the proliferation of right-wing microblogging sites Truth Social, Parler, Gettr, and Rumble (nor their spectacular failures). For the interested reader, I’ve included a larger table that may be of interest at the conclusion of this article.
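Since the Semrush figures are not reproduced above, the sketch below shows how market shares and a standard concentration measure would be computed from raw visit counts. The sites and numbers here are hypothetical placeholders, not Semrush data; the 2,500-point "highly concentrated" threshold comes from the 2010 US Horizontal Merger Guidelines:

```python
# Hypothetical monthly visit counts for a candidate "microblogging" market;
# these figures are placeholders, not the Semrush data discussed above.
visits = {
    "twitter.com": 6_500_000_000,
    "mastodon (all instances)": 300_000_000,
    "truthsocial.com": 50_000_000,
    "gettr.com": 20_000_000,
    "parler.com": 10_000_000,
}

total = sum(visits.values())
shares = {site: v / total for site, v in visits.items()}

# Herfindahl-Hirschman Index: sum of squared shares expressed in
# percentage points. Under the 2010 Horizontal Merger Guidelines,
# an HHI above 2,500 marks a highly concentrated market.
hhi = sum((s * 100) ** 2 for s in shares.values())

for site, s in shares.items():
    print(f"{site:28s} {s:6.1%}")
print(f"HHI: {hhi:,.0f}")
```

With a share structure like the one above, the HHI lands far beyond the 2,500 threshold — the arithmetic makes plain that a market led by a platform with a ~95% share is highly concentrated under any conventional screen.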
Twitter’s dominant market share complements the direct evidence of anticompetitive harm: the platform has sufficient market power to deprive nascent competitors of the network effects they need to threaten its hegemony and to increase users’ costs of using the site (even if such costs are not measured in fiat currency).
Whether such evidence prompts regulatory agencies to take steps to curtail Twitter’s antics remains to be seen. The harms appear to align with the type of conduct prohibited by Section 2 of the Sherman Act (unilateral attempt to monopolize) and Section 5 of the FTC Act (unfair methods of competition). Nonetheless, as this article demonstrates, the evidence indicates that Twitter has the ability to harm competition and has already launched an attempt to do so by restricting users’ abilities to migrate to other nascent platforms. As Musk himself tweeted:
Finally, for readers interested in the performance of the various microblogging and social media sites at this market’s periphery, I report the November 2022 data from Semrush for these platforms. The green and yellow figures below the November data reflect performance relative to October 2022 (e.g., Twitter visits fell by 6.14% while unique visitors rose by 1.79%).
November 2022 Traffic for Social Media/Microblogging Platforms
(Source: Semrush Traffic Analytics)