
For years, journalists have reported numerous instances of worker exploitation, hazardous working conditions, and poverty wages in the nail salon and fast food industries. In a blockbuster 2015 New York Times investigation, for example, journalists found that New York nail salonists were “paid below minimum wage; sometimes they are not even paid…[and] endure all manner of humiliation, including having their tips docked as punishment for minor transgressions, constant video monitoring by owners, even physical abuse.” Other investigations have revealed that California fast food workers endure routine wage theft, verbal abuse, and unsafe working conditions, including frequent assaults and robberies.

To combat these appalling conditions, the New York and California legislatures considered enacting, in their recent sessions, new laws that would transform each industry. Among other obligations, both proposals would create state regulatory councils, sometimes referred to as “wage boards” or “labor boards,” to oversee the respective industries; the councils would be staffed with lawmakers, industry workers, and experts and endowed with the power to enhance wages, benefits, and working conditions. Such bold legislative action is sensible given the abhorrent environment in each industry. It also underscores the idea that a well-functioning democracy requires that the people be able to structure markets – through their political institutions – to meet their economic needs and agree upon a minimum set of fair working conditions for all workers in any industry.

New York’s bill is effectively stalled in the legislative labor committee. A version of California’s bill was signed in September. If both proposals are ultimately enacted and enforced, workers will undoubtedly benefit, and such policies should unquestionably be replicated across other industries.

Such praiseworthy reforms carry significant legal risk, however. Parties opposed to such measures have historically used the antitrust laws – the laws designed to protect the public against corporate power and ensure businesses compete through fair strategies – to challenge these reforms, and could do so again. The antitrust laws, like the landmark Sherman Act of 1890, are sweeping in their application. Congress included provisions restraining monopolistic practices by dominant corporate actors and restricting a host of unfair conduct. These restrictions pushed every firm in the economy to succeed in the marketplace through fair methods of competition that enhance the public’s welfare. Most applicable here is the first section of the Sherman Act, which prohibits “every contract, combination, or conspiracy … in restraint of trade.” At first glance, such broad wording appears to set limits on the kinds of regulations state governments can enact, since state laws could authorize conduct that a judge might be convinced to classify as a “restraint of trade” and therefore prohibited by the Sherman Act.

However, during the New Deal era, in the 1940s, the Supreme Court created a legal doctrine to prevent the Sherman Act from becoming a law that empowers the federal judiciary to prohibit any conduct it alone deems a “restraint of trade” – including regulations mandated by state legislatures. Consistent with Congress’s intent for the Sherman Act, the Court exempted certain conduct classified as “state action” from the antitrust laws. The state action doctrine, also known as Parker Immunity after the 1943 Supreme Court decision in which the idea was formally articulated, is profoundly important because how it is applied determines who controls local economies – the federal judiciary misusing the Sherman Act, or state governments working in conjunction with federal statutory law.

When Parker Immunity is interpreted in line with Congress’s intent for the Sherman Act, the doctrine encourages states to liberally use their regulatory power alongside the Sherman Act to structure markets to protect small businesses, enhance standards and wages for workers, and restrain unfair business practices – all while preventing powerful corporations from controlling the national economy. In effect, Parker Immunity and the Sherman Act operate as two sides of the same coin, regulating corporate conduct to strengthen worker power and the vitality of independent businesses and local communities. Parker Immunity ultimately defines what is politically possible when governments are empowered to solve problems afflicting the public, and it is what will allow states like New York and California to enact their respective policies.

The Beneficial Applications of Parker Immunity

The origins of Parker Immunity are rooted in situations analogous to the modern nail salon and fast food industries. In the 1930s, California raisin farmers faced a destructive price spiral whereby frenzied competition among raisin sellers led to increasingly and unsustainably low prices. To ensure a stable, sustainable, and fairer marketplace, California enacted a law that allowed producers to obtain fairer prices for their products – enabling all firms to compete sustainably.

Specifically, California’s law allowed producers to create plans that established uniform standards for the sale of their products. The plan at issue set standards for when farmers’ raisins could be sold, fixed the prices at which they could be sold, and imposed limitations on how many raisins could be sold. California’s law also created an administrative agency to review the plan and monitor the farmers to ensure compliance with the law.

The regulations were eventually challenged under the Sherman Act as unlawful restraints. The Supreme Court, however, subsequently held that California’s regulatory program did not violate the antitrust laws because the program was a result of “the execution of a government policy” derived from “state action” or “official action directed by a state.”

The Supreme Court justified its decision primarily on two grounds. First, the Court recognized that Congress specifically intended the antitrust laws to set limits on how corporations could succeed in the market by restricting unfair methods of competition and inhibiting the power of dominant corporations. In other words, powerful corporations, not small businesses and workers, were the target. During the legislative debates, Senator Sherman articulated that his namesake act would not “interfere with” but instead cooperate with state regulatory efforts to “prevent and control combinations within the limit of the State” and that the aim of the law was to promote the “industrial liberty” of the people by “checking, curbing, and controlling the most dangerous [corporate] combinations.” In this sense, striking down California’s law would have subverted Congress’s intent because it was a duly enacted state law explicitly designed to support farmers. Second, the Court recognized that Congress did not intend for the Sherman Act to undermine state governments’ ability to regulate their economies. Instead, Congress explicitly wanted the Sherman Act to work alongside state regulations.

With this viewpoint, the Supreme Court positioned the Sherman Act not just as a shield against markets being controlled by dominant businesses, but also as a legal tool of democratic market governance – one that facilitates responsive government by empowering state lawmakers to enact regulations with the Sherman Act as the foundation. The Supreme Court of the time thus cast the Sherman Act as a democratizing law meant to ensure that the people maintained control over businesses operating within their communities. Preserving the public’s ability to shape local economies through their state governments and having the federal legislature establish minimum national standards for permissible corporate conduct were intertwined and complementary goals.

Empowered by Parker Immunity, states have enacted many policies designed to make competition fairer and promote other policy goals, such as supporting small businesses and ensuring appropriate workplaces. For example, alcohol distributors in states like Connecticut are regulated under “post and hold” laws, which require them to publicly post their alcohol prices and maintain those prices for a set period. Such regulations work in conjunction with other rules that restrict discriminatory volume discounts and below-cost sales of alcohol, thereby inhibiting national retailers from crushing smaller and local outlets. In addition to protecting the public by preventing the excessive consumption of alcohol, these laws are designed to promote local alcohol distributors and ensure fair competition between retailers.

Occupational licensing is also a product of Parker Immunity. While conservative think tanks see state licensing as “deputiz[ing] incumbent firms in restricting the marketplace against new entrants,” it actually protects workers and the public by establishing minimum professional standards and facilitating the creation of stable and fair-paying jobs for workers.

Parker Immunity can even authorize states to completely remake markets to ensure socially beneficial competition. States can facilitate the creation of cooperatives, alternative types of business entities where workers or small producers can come together to serve their interests, such as negotiating better pricing or obtaining ownership in a firm. Cooperatives can help workers and producers obtain fairer wages and prices.

More Democracy Inhibits Political Capture by Dominant Firms

For all its virtues, the state action doctrine can be a tool for corporate abuse. Consider a recent example from North Carolina, where the legislature is considering a new law that would turn the University of North Carolina Health Care System into a state agency and allow it to acquire companies and engage in other collusive conduct without fear of violating the antitrust laws. Instead of creating a fair market, this action seeks to use state power to immunize a dominant corporate actor from the laws designed to restrain its conduct. The state action doctrine, therefore, can serve as a potent legal vehicle for powerful corporations with access to near-unlimited financial resources to lobby state governments to enact legislation shielding them from the laws specifically designed to create open, competitive, and fair markets by restraining monopolistic conduct.

Despite this kind of nefarious immunization, which in this case compelled the Federal Trade Commission to write a letter in June condemning the North Carolina bill, throwing the entire doctrine out with the proverbial bath water is unnecessary and undermines both Congress’s intent with the antitrust laws and the Supreme Court’s original construction of the state action doctrine it articulated in its Parker decision. As the highlighted examples above show, a broad state action doctrine that promotes fairer markets, better wages, and working conditions for workers can co-exist with antitrust law’s provisions condemning conduct that unfairly entrenches dominant corporations.

While such political-capture scenarios reveal the potential risks associated with Parker Immunity, the solution is more democracy, not reliance on judges alone to manage state and local economic life by wielding the Sherman Act. Potential misuse does not necessitate abandoning Congress’s intent for the Sherman Act and completely barring state political institutions from governing their economies. Rather than leave marketplace rules to be determined by generalist judges, scenarios like the proposed legislation in North Carolina, which epitomizes deficient or misused governance, can be resolved with more, not less, democratic involvement.

Unionization Cannot Fill the Void

While democratic market governance by no means owes its existence to Parker Immunity, granting antitrust immunity to state laws allows space for more of it. Parker Immunity, therefore, symbolizes the importance of political engagement and responsive government: it offers the democratic process the opportunity to become (even more) tightly integrated with active governance of the economy, rather than substituting judicial supervision and control for that mechanism. Those who oppose the state action doctrine can appear to innately fear the democratic process and to prefer that unaccountable judges govern the economy.

Moreover, consider if opponents of Parker Immunity got their way and the doctrine was abolished. Beyond the obvious moral implications of allowing workers, like nail salonists, to receive poverty wages and forcing them to tolerate inhumane working conditions, such as air so toxic it causes women to become practically infertile, in the name of hostility toward regulation and keeping prices low for consumers (the North Star of conservative antitrust policy), other options afforded to nail salonists to obtain fair conditions are problematic and hardly failsafe.

Nail salonists could attempt to unionize, as many workers across the United States are currently trying to do. Yet U.S. labor law makes unionization incredibly difficult. For one, unions can only be created in a piecemeal, firm-by-firm fashion – just because one Starbucks location unionizes does not mean the others become unionized as well. Unionization is also a protracted process – in almost 50% of instances, it takes more than a year for a union to obtain its first collective bargaining agreement. Of course, despite these obstacles, unionization should still be pursued as a critical means to enhance worker power and restrain corporate power. With a surfeit of nail salons and fast food restaurants, however, seeking to protect all workers through unionization will be an arduous task and will not set minimum standards across the industry. Moreover, given the weak penalties, corporations like Starbucks and Amazon are more than eager to flagrantly violate federal labor law to prevent their workplaces from unionizing.

Another alternative available to the salonists to potentially raise their wages is merging their operations. As history has all too frequently shown, however, mergers consolidate markets, and concentrated corporate power almost inevitably hurts workers through the loss of jobs, the deprivation of additional work opportunities, and lower wages. Furthermore, because mergers do not give workers power over firm decisions, merging their operations will almost certainly not solve the workplace conditions afflicting them. Overuse of mergers could also bring the salonists up against the antitrust laws from another angle, since Congress has imposed heavy restrictions on merging to consolidate power.

A Wage Board for Every State

Here again, there is another way; while certainly not a trivial task, state legislatures can establish regulatory agencies to supervise an industry and enact market-wide standards that prevent unfair corporate practices and improve pay and working conditions, all while maintaining a deconcentrated market and providing consumers multiple options to obtain services. State-created regulations are therefore perhaps the most practical, expeditious, and democratic means to alleviate the ills in these industries while concurrently preventing the harmful effects that Congress intended the antitrust laws to thwart. In other words, without Parker Immunity shielding state regulations from the antitrust laws, workers suffer as the Sherman Act effectively becomes a legal cudgel that authorizes only the federal judiciary’s conception of appropriate market regulation and work standards.

It is almost a foregone conclusion that reactionary legal advocates will attempt to use the antitrust laws as a pretext to challenge New York and California’s policies aimed at addressing the deplorable working conditions endured by nail salon and fast food workers. But regardless of the prospect of litigation and the heightened barriers the Supreme Court has imposed on obtaining Parker Immunity since the 1970s, the immunity exists. Like New York and California, more states should enact regulations that enhance and protect the lives of working people – and be prepared to fight for them – by taking advantage of the immunity granted by the Supreme Court more than 80 years ago.

Daniel A. Hanley is a Senior Legal Analyst at the Open Markets Institute.

It has become quite common to accuse antitrust enforcers of bias and seek their recusal. FTC Chair Lina Khan and DOJ Antitrust Division AAG Jonathan Kanter have been the subject of calls for recusal in cases involving corporate giants such as Meta, Amazon, and Google.

The argument is that these individuals are biased in their enforcement of antitrust law. Chair Khan’s ideology, for instance, was initially honed by writing an article in law school and working for a non-profit. To some, learned and developed understanding is somehow problematic in the pursuit of antitrust enforcement.

In contrast, nominees to run the DOJ’s Antitrust Division and the FTC frequently have experience defending against agency enforcement. They will spend time at the agency, to varying degrees, making minute changes to the state of current enforcement (or lack thereof). Then, they will leave and go back to the defense bar. That does not generate accusations of bias. In fact, that time “in the trenches” is celebrated as valuable experience.

Whether defending an action or enforcing an action before returning to the defense bar, the reason no one objects to that “bias” in those realms is that they share the same faith. Thus, it doesn’t matter if you once represented corporations—the common beliefs are that antitrust enforcement agencies should not delve too deeply into monopolization, that efficiencies in mergers should be blessed, and that the risk of improper enforcement is greater than the risk of non-enforcement. All believers of the same principles cannot be biased, after all.

The faith is called “Consumer Welfare.”

But these new enforcement officials aren’t practitioners of that faith. And that is perhaps the primary reason there is so much ire about the draft Merger Guidelines and one of its (many) drafters, FTC Chair Lina Khan. The drafters of those guidelines are seeking to disrupt the Consumer Welfare faith, replacing it with the science of modern economics and a return to the statutory goals.

Yet disciples of the faith do not like change. And their belief system has been beneficial to everyone who spins through the revolving door, oftentimes at rapid velocity. But that same system harms consumers, workers, independent business people, citizens, and anyone else who lacks a voice and is not a member of this particular faith.

In what follows, I detail why Consumer Welfare Theory is a faith, not science. I then explore how there are multiple sects within that faith and how they interplay with one another, all the while perpetuating the faith. I then detail ways in which the faith protects itself from challenge, both scientific and policy based. Finally, I propose a solution.

The Faith of Consumer Welfare

Consumer Welfare Is Internally and Irretrievably Flawed

A faith, according to one definition in the Merriam-Webster dictionary, is a “firm belief in something for which there is no proof.” Faith is closely held, and not readily dismissed even in the face of evidence to the contrary. Faith is powerful and should not be discounted.

But faith is not science. There is no way to verify someone’s faith empirically. Nor is faith necessarily logical. Logic dictates that should some assumption prove false, the claim is rejected. Faith persists even when there is evidence to the contrary.

Consumer Welfare, which is the faith adhered to by agency heads past, is not logical because it is based on disproven theory. Modern economics—the social science of economics and not the Consumer Welfare faith of antitrust practitioners—has disproven consumer welfare theory and surplus approaches to welfare. My coauthors and I have detailed this literature here, here, here, and here. In some instances, we have made new claims of intractable problems, and have been met with silence.

Consumer Welfare and its assumptions have also been empirically disproven in many respects.

For example, mergers by and large do not create efficiencies, except perhaps by happenstance; the notion that efficiencies flow from mergers has been disproven. Much of Consumer Welfare has even been disproven by its own followers, the Post-Chicago School. Yet its disciples cling on. There is also strong evidence of increasing concentration in industries across the United States, lower productivity, and greater disparities of income. As a policy for achieving the goals of antitrust, the evidence suggests Consumer Welfare theory performs poorly.

Despite these criticisms that condemn the theory, and even though modern economics as a science has moved on from it, Consumer Welfare Theory is embraced today in antitrust law. One might hope for a Kuhnian scientific revolution, but rejection of empiricism and logic is a faith-based decision. Indeed, it is fanatical devotion.

“Follow the Gourd! Follow the Shoe.”

In Monty Python’s The Life of Brian, Brian, a false prophet, drops his shoe and his gourd (literal, not metaphorical gourd). His followers splinter into camps of shoe followers and gourd followers. But Brian remains the leader of both sects.

Similarly, the Chicago School and the Post-Chicago School still cling to the same prophet of Consumer Welfare. The Post-Chicago School debunked many of the claims of the Chicago School, but still clung to the religion. To mark their distinction, the Post-Chicago School argued for a kinder, gentler form of Consumer Welfare: A new testament, if you will. Consumer Welfare as embodied in modern antitrust law was the faith created by High Priest Robert Bork and his followers who sought to curb antitrust enforcement. Yet the faith has expanded, in large part due to defenses of the Consumer Welfare standard stemming from people claiming that they are pro-enforcement.

But it’s hard to know what the various sects of this faith are. As Professor Scott-Morton and Leah Samuel point out, there is no clear definition among the devout about what consumer welfare means:

This divergence in terminology means that participants in a debate about CWS are often talking about fundamentally different things. At some point, despite the best efforts of many economists at many antitrust conferences, this barrier to effective communication has become insurmountable. For reasons that are entirely understandable, Neo-Brandeisians have won the terminology debate in policy discourse and the media. Clear communication isn’t possible among Neo-Brandesians, Borkians, and academic economists when they use different definitions of the same term. Making things worse, any quotation from jurisprudence or analysis of past decades reflects the definition of its time.

The lack of an objective reference is further evidence of the faith-based quality of Consumer Welfare among antitrust practitioners and scholars. However, followers of this approach often blend together and come back to the original faith.

Consider the “output” sect led by Professors Herb Hovenkamp and Fiona Scott-Morton, who appear to believe the goal of antitrust is to maximize output. In this discussion, there is an explicit recognition that output does not increase welfare without strict assumptions. Yet this sect exhibits a faith-based defense of using output to measure welfare and defending consumer surplus:  “When those tools [regulation, consumer protection, and product labeling] do a good job, output returns to its role as a good proxy for consumer welfare.”  

The authors add: “When an economist examines a practice and concludes that it increases ‘welfare,’ the evidence supporting that claim is commonly that the practice increased output or reduced price.” But that simply isn’t true anymore.

Thus, it appears that the output sect links output to consumer welfare (with some herculean assumptions). But not necessarily: Professor Hovenkamp has said that you “don’t even need a welfare metric.” So the output sect assumes output on its own is good. But then again, apparently not always.

Post-Chicago economics added the Trading Partners sect. This sect claims a pro-enforcement stance, but clings to a version of surplus measurement known as “trading partners.” As several post-Chicago lawyers and economists stated in their comments regarding the draft Merger Guidelines:

We understand merger analysis to be concerned with the risk that a merger will enhance the exercise of market power, thereby harming trading partners (i.e., buyers, including consumers, and suppliers, including workers). Market structure matters in merger policy when it is an indicator of the risk that firms will have the ability and incentive to lessen competition by exercising market power post-merger (or an enhanced ability and incentive to do so), to the detriment of trading partners (buyers or sellers) in the relevant market.

Professor Hovenkamp, an apparent advocate of this sect as well, states:  

What we really want is a name for some class of actors who is injured by either the higher buying price or the lower selling price that attends a monopolistic output reduction. In the case of a traditional consumer the primary cause of this injury is reduced output and higher prices. In the case of a supplier, including a supplier of labor, the primary cause is reduced output and lower selling prices. In both cases there are also injuries to those who are forced out of the market.

As my coauthors and I have stated elsewhere, “This is really a wrinkle on the original Consumer Welfare Standard because it simply adds the input market surplus to the consumer surplus.” It does not account, however, for all the other things that modern economics would include for consideration, nor for the original intent of Congress. However, “to their credit, the Post-Chicago School has demonstrated that even when the antitrust inquiry is limited to prices and costs (that is, total surplus), the Chicago School’s program of weak merger enforcement is not justified.” A New Testament for Consumer Welfare, if you will, without the hellfire and damnation cast toward enforcement like the original.
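To put that wrinkle in stylized notation (my gloss under simplifying assumptions, not the commenters’ own formalism), the trading-partners standard judges conduct by its effect on the surplus of those who transact with the firm, excluding the firm’s own profit:

$$\Delta W_{TP} = \Delta CS + \Delta SS$$

where $\Delta CS$ is the change in buyer (consumer) surplus and $\Delta SS$ is the change in supplier surplus, workers included. A total-surplus standard would instead add the firms’ own profit change, so that $\Delta W_{TS} = \Delta CS + \Delta SS + \Delta \pi$.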

But that original school of Consumer Welfare still exists: namely, the notion that most, if not all, intrusions by the government into the market will do more harm than good. While there might be an exception for realms of naked price fixing, the remainder of antitrust enforcement should be restrained to a great degree. Let’s call this group the Inner Chamber.

Members of the trading partners sect, the output sect, and the Inner Chamber may be talking past each other in ways that science would not be able to grasp. However, stepping outside the “trenches,” one finds a very common meaning of consumer welfare and a very common understanding of welfare in the science of economics.

Let’s Not Call It Death Grip Consumer Welfare Anymore

In The Wire, Stringer Bell is tasked with the sale of an inferior product. It simply doesn’t work. To attract new customers, Stringer, after consulting his economics professor, decides to rebrand. He has some things to teach his followers:

Stringer: “Alright. Let’s try this. Y’all get jacked by some narcos. But y’all clean. Y’all got an outstanding warrant, like everybody in here, and what do you do?”

Poot: “Give another name.”

Stringer: “Why?”

Bodie: “Because your real name ain’t no good.”

Stringer: “All right — it ain’t good, and? Follow through.”

[silence]

Stringer: “Alright. ‘Death Grip’ ain’t shit.”

Poot: “We change up the name.”

Stringer: “What else?”

Shamrock: “Yo I got it. Change the caps from red to blue. Make it look like we got some fresh shit.”

To maintain popularity, oftentimes faiths “rebrand.” The goal of rebranding, perhaps, is to regain appeal as time passes.

Rebranding has been advocated in the faith of Consumer Welfare. Consider Leah Samuel and Professor Scott-Morton’s take:

Economists face a huge problem with the label they use for the textbook consumer welfare concept if they want to be understood by a larger society that uses the now-common restrictive definition. To foster understanding, economists should be happy to rebrand what they used to call “consumer welfare.” In his presentation to the FTC in 2018, Carl Shapiro suggested a “Protecting Competition” standard (cheekily subtitled “The Consumer Welfare Standard Done Right With Better Name”). It would have almost the same textbook economic meaning as consumer welfare, but the legal meaning would be explicitly framed as broader than the “consumer welfare standard” that is so railed against in the press and employed by courts. 

Rebranding seems like an odd thing for a science to do. If the polestar of antitrust were gravity instead of Consumer Welfare, would we tell academics to stay out of the antitrust lane because in antitrust we mean something different by gravity?  Maybe not call it gravity?  Apparently.

Physics, whose principles underpin much of neo-classical economics, is still called physics, although I recognize that sub-disciplines emerge. But those sub-disciplines cling to the same principles of science. One simply does not rebrand “gravity,” despite it being the source of many failures of grace.

On the other hand, if no one knows what is being referenced before the rebrand, there is a risk of charlatanism. It is already the case that some of the sect leaders move freely between the sects, and there is great potential that even a well-meaning, pro-enforcement sect will be coopted by the dominant anti-enforcement sect within the faith. No, that is not what we mean. You are wrong as to what we mean. Given this confusion, as we have already seen in the Google trial and saw in the Microsoft trial, even one’s deeply held core beliefs might change depending on time and place. Take, for example, the cross-examinations confronting Hal Varian and Richard Schmalensee with their own teachings. Even Robert Bork’s faith has been questioned.

Keeping the Faith

Practitioners of a faith can be harsh to those outside their belief system

Unlike the truly devout, New Brandeisians hate low prices and more output, the story goes:

… Neo-Brandeisians generally do not reject the conclusion that larger firms can benefit from economies of scale and scope. Indeed, Neo-Brandeisians often point to expected economies of scale as a cause for concern regarding mergers and acquisitions because combined entities may be able to lower prices and out-compete some other incumbent firms. This demonstrates that the implementation of Neo-Brandeisian policy prescriptions would likely burden consumers with higher prices and reduced output and/or quality in exchange for some mix of Neo-Brandeisian priorities, such as smaller firm sizes and a larger number of total firms.

Because New Brandeisians take into account other considerations, they must be seeking to injure consumers. Low prices and greater output are THE goal, and competing notions will injure that goal. They are thus labeled “activists” who engage in “bad faith” arguments and “myths.”

Faiths frequently use parable to defend the defenseless notion. Rather than debate the importance of measures that modern economics (and ancient Congresses) have deemed important, it is easier to pin the label of “high price heretic” on the New Brandeisians.

This is perhaps why FTC Chair Lina Khan has been the subject of so much hostility in the antitrust world. The Wall Street Journal, the oracle of Consumer Welfare, has devoted enough space to the FTC Chair that no one could question its devotion to the faith. The defense bar is up in arms. And, with the new draft Merger Guidelines, there are even suggestions that the lords of Consumer Welfare—the Courts—would surely cast out such heresy.

It is common to see the New Brandeisians being cast as the zealots. In recent conversations, people have described the New Brandeisians as “activists” and the draft Merger Guidelines as a “manifesto,” perhaps coming from “Marxists.” Heretics aligned with such heresy must be cast out and shunned.

Faiths Appeal to Established Beliefs, Often to Protect Against a Threat

Faith-based anti-intellectual arguments are not new to economics. Consider the reaction to Pierre Sraffa’s “double switching argument,” which proved that the neoclassical theory of distribution was untenable. Sraffa proved that, in “general, there is no logical way by which the ‘intensity of capital’ can be measured independently of the rate of interest — and hence the widely held neoclassical explanation of distribution of income was logically untenable.”

Paul Samuelson sought to defend the neoclassical religion from the logical contradiction Sraffa proved by resorting to allegory, an allegory with heroic assumptions. It was an attempt to defend a lapsed logical proof, but one that was still appealing as religious allegory. As E.K. Hunt points out, citing G.C. Harcourt:

The neoclassical tradition, like the Christian, believes that profound truths can be told by way of parable. The neoclassical parables are intended to enlighten believers and nonbelievers concerning the forces which determine the distribution of income between profit-earners and wage-earners, the pattern of capital accumulation and economic growth over time, and the choice of the techniques of production associated with these developments. . . . [These] truths . . . were thought to be established . . . before the revelations of the false and true prophets in the course of the recent debate on double switching.

The assumptions required for Consumer Welfare to be measured by output are nigh impossible to satisfy on earth. The trading partner approach has flaws equivalent to those repeatedly detailed about Consumer Welfare theory. In fact, no sect in the Consumer Welfare religion is immune to these damning scientific criticisms. Unlike in scientific revolutions, the faith continues merrily on, although some have chosen to cast those proving such criticisms (and moving on to more sound policy) as heretics who seek to do harm and who lack an understanding of science.

Indeed, with regard to any faith lacking in evidence, the appeal is to emotion. C.E. Ferguson, in the preface to his book, admits this (in defense of Samuelson’s parable): “Placing reliance upon neoclassical economic theory is a matter of faith. I personally have the faith; but at present the best I can do to convince others is to invoke the weight of Samuelson’s authority.”

Joan Robinson concluded the debate about double switching with a quote that is apt for consumer welfare and its unproven and unprovable theory, lambasting Ferguson for failing to engage in scientific endeavor:

No doubt Professor Ferguson’s restatement of “capital” theory will be used to train new generations of students to erect elegant seeming arguments in terms which they cannot define and will confirm econometricians in the search for answers to unaskable questions. Criticism can have no effect. As he himself says, it is a matter of faith.

If Consumer Welfare theory is faith, then I must still provide an answer as to why it is so appealing as such. My answer: Because all the players win.

Faith Is Rewarded

The popularity of a faith is in its ability to create comfort. Karl Marx’s famous line about religion was not disparaging: “Religion is the sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions. It is the opium of the people.” Thus, a religion makes you feel good. Or at least one would hope.

The faith of Consumer Welfare does make everyone involved feel good.

Practitioners and consulting economists can feel good because they are representing the parties before the agencies. They are limiting the excesses of government, while being paid well to do so. For government attorneys, a “win” includes a settlement (the degree does not matter). A consent decree is a win, for budgetary purposes. A successful trial is a win. And if the parties abandon a transaction, that is a win, too. While government attorneys are not paid well, they can take solace in doing the “people’s work,” or eventually leave to work on the better-paying side.

I am not stating that there are merchants in the temple who should be cast out, because this faith allows for profiting from it.

The Faithful Gather to Reinforce Their Faith

Faith is based in part on gathering. The temple for Consumer Welfare adherents is the Marriott Marquis in D.C. The service is the ABA Antitrust Section Spring Meeting. During this service, the practitioners of the religion discuss the meanings of ancient texts. Some are rejected because of their age and in light of modern interpretations, such as Brown Shoe. Others are lauded still because they are consistent with modern thought, such as Marine Bancorp. (Justice Powell wrote the famous memo advocating for conservative antitrust and Consumer Welfare theory). High priests, people who have wrestled with these tomes—economists, law professors, and lawyers—debate within small margins the meanings of these sacred texts, often picking and choosing verses that appeal mostly to themselves.

Unlike in other religions, debate (within limits) is welcomed, particularly among those who play an important role on “both sides” of the debate. But the debate is friendly compared to the nastiness of the attacks on the New Brandeisians. Here, people might switch sects within the Consumer Welfare faith and still be welcomed. They might even switch sides “in the trenches” between government and private practice. It’s not a war. No French soldier crossed into German bunkers in World War I without being called a traitor or deserter.

Friendly discourse in the faith does not yield an unending number of Wall Street Journal op-eds attacking you personally, or your FTC Commissioner colleague calling you a Communist. Both sides, within each sect, are still within the Church of Consumer Welfare’s teachings.


Faith Wavers, but Never Fails

Should one point out that the arguments of faith themselves go against the teachings of the faith, things get awkward. I have at various points sardonically argued that antitrust should only be about per se illegal activity such as price fixing and bid rigging. If monopolies cannot undermine the competitive process because they are temporary and most mergers are efficient, why create such huge taxes on corporations by applying the rule of reason? Indeed, detractors of recent proposed amendments to HSR filings have argued such a point.

If enforcement only makes sense for per se violations, and if the deterrent effect were properly understood by would-be cartels, shouldn’t nearly everyone in the ABA defense bar be out of a job? Shouldn’t most antitrust enforcers similarly be fired? Isn’t antitrust enforcement for single-firm monopolization and merger cases “inefficient?” The response to such a comment is usually that antitrust serves a useful purpose of deterring harmful conduct and mergers. But doesn’t it also create great Type I errors? Should we not weigh those? Raise this argument at the Spring Meeting and watch the rallying cry of the defense bar for the antitrust status quo.

But even those practitioners of the Consumer Welfare Religion who earnestly seek broader antitrust enforcement have limits in their faith. Consider an argument I pose frequently to disciples of efficiency (cost savings), a tenet of the Consumer Welfare religion. Consider a merger between two firms (both do business in the United States). Suppose one has beneficial ties to a country where child labor is legal, and the other has capital to build plants there. Suppose the firms prove (and the foreign government also corroborates) that the merger will lower costs, increase output, and use child labor as the basis of the lower costs. Disciples will call into question whether their religion has anything to say on this, despite the efficiency doctrine being right there for the taking. Implicit in the discomfort is a recognition that antitrust has other purposes at hand. The hypothetical tests their faith.

The point of this example is not that supporters of consumer welfare are pro-slavery and pro-child labor. The point of the example is that faith wavers, but not for long.

Adding Science by Killing Faith

When a discipline ceases being a science and becomes a faith, it can no longer accept advancements. Entrenchment becomes the norm. And ultimately, the field dies under its own ignorance, perhaps taking society down with it.

The New Brandeisians have modern economics—the science, not the Consumer Welfare faith—on their side as well as the original goals of Congress in passing the antitrust laws. Antitrust law was hijacked by Consumer Welfare theory, and it is time to put an end to that flawed economic science now undertaken as faith. To question the status quo, as Galileo discovered, no doubt creates hostility.

As New Brandeisians fight the fight, it is almost with glee that members of the Consumer Welfare faith celebrate the losses. Mixed with the remorse is a gleeful tone when some members of the defense bar speak of the FTC’s losing streak. As Homer Simpson once famously said, “Well son, you tried and you failed. The point is, never try.” In terms of single-firm monopolization cases, the agencies, until very recently, appeared to have taken this advice.

Some Solutions and a Conclusion

I do not have answers that will ever be implemented, given the strength of the faith and the forces that propel it. I’m sorry if you thought I would; perhaps you took on faith my statement in the introduction that I had such solutions.

That’s the thing about faith. Sometimes it isn’t warranted.

The views in this essay do not reflect the view of my coauthors, my employer (the Great State of Texas), the Utah Project, my school of Kung Fu, or any other group with which I’m affiliated. I speak solely for myself. I do not have any clients. I am not seeking any appointment to any government agency. Nor do I anticipate them seeking me. I don’t anticipate any of those things changing after I write this.

On Thanksgiving Day in 1971, the number one ranked Nebraska Cornhuskers faced the second ranked Oklahoma Sooners in a game that is today known as “The Game of the Century.” On that day, Nebraska proved victorious over its archrival and secured the Big Eight title. Across the years, these two teams frequently met in November and frequently that game decided a conference title. Consequently, this rivalry goes far beyond one game in 1971. The rivalry between Nebraska and Oklahoma was arguably one of the most important rivalries in the history of college football.

Or so it was until conference realignment.

Today a game between these two teams wouldn’t mean much at all. In 2011, the University of Nebraska left the Big 12 for the Big Ten. And next year, Oklahoma will leave the Big 12 for the SEC. Although Nebraska and Oklahoma may play each other in this century, it is a safe bet the “Game of the 21st Century” will not be between Nebraska and Oklahoma. The desire to play in bigger and more financially successful conferences effectively killed this rivalry.

For many people, Nebraska and Oklahoma fleeing the Big 12 is part of the inevitable decline in college football. This alleged decline has been hastened by the implosion of the Pac-12 in recent weeks. But it is important to put these moves in some perspective.

According to the Department of Education, in 2010—the last year Nebraska played in the Big 12—the football team reported about $55 million in revenues. In 2019, Nebraska’s revenues reached $108 million. Adjusted for inflation, its football revenues increased by over 50 percent in just ten years.

A similar story can be told about Oklahoma. In 2010, Oklahoma reported $58 million in football revenue. In 2019, that number had increased to $115 million. Again, adjusted for inflation, this is about a 50 percent increase in revenue in one decade.

Yes, ending the rivalry likely disappointed some fans. But in terms of revenue, both teams did quite well after Nebraska left the Big 12. And that was true despite the fact Nebraska has fallen from the ranks of dominant college football powers.

From a business perspective, neither the University of Nebraska nor the University of Oklahoma appears to have been much impacted by what has transpired since the Cornhuskers left the Big 12. And that story is consistent with what we see in general in college football. According to the Department of Education, the average team in the Football Bowl Subdivision (formerly known as Division I-A) has seen its revenues grow from $22.4 million in 2010 to over $40 million in 2021. Adjusted for inflation, that’s about a 44 percent increase. In sum, college football has attracted substantial audiences since the 19th century and continues to do just fine!
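For readers who want to check the deflation step, here is a minimal sketch. The CPI ratios are rounded assumptions, so the exact percentages move with the deflator chosen, but any reasonable deflator leaves substantial real growth:

```python
# Back-of-the-envelope check of the inflation-adjusted growth figures above.
# The CPI ratios are rounded assumptions (roughly CPI-U), not official data.

CPI_2010_TO_2019 = 1.17   # prices rose roughly 17% from 2010 to 2019
CPI_2010_TO_2021 = 1.24   # prices rose roughly 24% from 2010 to 2021

def real_growth_pct(rev_start_m: float, rev_end_m: float, cpi_ratio: float) -> float:
    """Percent revenue growth after deflating the ending year to starting-year dollars."""
    return (rev_end_m / cpi_ratio / rev_start_m - 1) * 100

print(f"Nebraska football, 2010-2019: {real_growth_pct(55, 108, CPI_2010_TO_2019):.0f}%")
print(f"Oklahoma football, 2010-2019: {real_growth_pct(58, 115, CPI_2010_TO_2019):.0f}%")
print(f"Average FBS team,  2010-2021: {real_growth_pct(22.4, 40, CPI_2010_TO_2021):.0f}%")
```

With these ratios the school-level figures come out at or above the conservative "about 50 percent" quoted above.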

It is important, though, that we put that business in some sort of perspective. The University of Nebraska-Lincoln reports that the total revenue for the school in 2023-24 was about $1.5 billion. And if we look at the entire University of Nebraska system (it has multiple campuses), the reported revenue is $3.3 billion. Yes, that’s billion with a “b”!!

It’s possible that when people around the nation think about the University of Nebraska, they think about their football team. But athletics are a tiny part of the business of the University of Nebraska. And that is essentially the story wherever you look at higher education. Yes, people may know more about the exploits of a school’s athletics than they do about the accomplishments of a school’s economics professors, but academics—as was argued by Charles Davidson of the Federal Reserve Bank of Atlanta—remains the primary business of colleges and universities in this country.

So college football can be thought of as a thriving but relatively small business. Some fans may live and die with the exploits of their favorite team on Saturday afternoon. But the university continues regardless of what is on the scoreboard.

Growing Revenues Mask a Larger Problem

The thriving nature of college football might suggest that there are no problems. Unfortunately, that’s definitely not true. In fact, it has never been true. College football has a problem. In fact, all of college athletics has a problem. And this problem has always needed to be fixed.

Back in the 19th century, a decision was made at American universities that tickets would be sold at athletic contests involving university students. Soon after, those contests became a thriving small business. By the 1880s, thousands of fans were showing up to watch college football and those fans gave the schools hosting the games thousands of dollars.

The schools decided that all those dollars were not going to be shared with the students the fans were watching. Athletes in these contests were labeled “amateurs,” a word that came to mean “you ain’t getting paid.” 

Okay, there was some payment. Universities often agreed to give the students a scholarship to the school. But the pay to the athletes in college sports was tightly controlled by the universities.

In economics we have a word for such a system. The word is “monopsony.” More specifically, an employer has monopsony power when they have substantial power to set the wages of their employees. For more than a century, colleges and universities have made millions of dollars from the students playing the games we watch. And the monopsony power of the college and universities allows them to greatly restrict how much compensation the athletes generating these dollars get to receive.
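To make “monopsony” concrete, here is a minimal textbook sketch with entirely hypothetical numbers (the marginal revenue product and supply curve below are illustrative assumptions, not estimates for any actual school). Because attracting one more worker means raising the wage for all of them, a monopsonist hires fewer people at a lower wage than a competitive market would:

```python
# Textbook monopsony sketch with hypothetical numbers (wages in $1,000s).
# Assumption: each additional athlete generates MRP = $200k of revenue,
# and the labor supply curve is w = 20 + 0.5 * L.

MRP = 200          # marginal revenue product per athlete ($k)
a, b = 20.0, 0.5   # labor supply: wage = a + b * L

# Competitive benchmark: hire until the wage equals MRP.
L_comp = (MRP - a) / b            # 360 athletes
w_comp = MRP                      # $200k wage

# Monopsonist: hiring one more athlete raises the wage for all of them,
# so the marginal cost of labor is a + 2*b*L. Hire where that equals MRP.
L_mono = (MRP - a) / (2 * b)      # 180 athletes
w_mono = a + b * L_mono           # $110k wage, well below the $200k MRP

print(f"Competitive market: {L_comp:.0f} athletes at ${w_comp:.0f}k each")
print(f"Monopsony:          {L_mono:.0f} athletes at ${w_mono:.0f}k each")
```

In the sketch, athletes are paid $110k despite each generating $200k; that gap is the wage suppression described in the rest of this piece.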

At least, that is the story we were told. For years we suspected that many athletes were receiving additional benefits from boosters of athletic programs. Such benefits violated the rules of college sports. Nevertheless, schools that wanted to employ specific individuals would use boosters to help recruit that talent.

Now this has all changed. The efforts boosters once made to recruit talent under the table can now be made in the full light of day. Starting in 2021, college athletes could be compensated for their Name, Image, and Likeness (NIL). And this means a booster can now just give money to the athletes they want to see compete for their favorite team.

Of course, this is not the intent of NIL deals. An NIL deal is supposedly about an athlete being hired to do something like pitch a product. Certainly, such deals are being made. And it certainly makes sense that an athlete whose NIL qualities are worth something in the marketplace should be compensated. To use a person’s NIL without paying that individual is, as the courts have ruled, very, very wrong.

Right?

Well, there’s one obvious exception. Consider this scenario. An athlete—with a university’s name clearly advertised on their uniform—is featured in a highlight on ESPN. Just like an athlete advertising Wendy’s, that highlight advertises the school. And that athlete’s NIL is clearly part of that advertisement. But it seems few people think the athlete is entitled to any compensation for appearing in this highlight. Certainly, the universities don’t think so. Once again, universities have monopsonistic power, and they have decided to restrict the payment of athletes to the cost of attendance.

Of course, college athletics aren’t just about advertising a university. College athletics also directly generate revenue for the school. For example, the University of Oklahoma reported in 2021 to the Department of Education that its athletic teams generated $157 million in revenue. The athletic program had 596 participants. Imagine the Sooners did what professional sports in North America generally do and gave 50% of their revenue to their players. If they did this, the 596 athletes would split nearly $80 million. Or to put it another way, each athlete would get more than $130,000 to play for the Sooners. Yes, an education at the University of Oklahoma is worth quite a bit. But it is not worth $130,000 per year.

Should the schools split the revenues equally across all athletic teams? One could argue that teams and players that generate more revenue on the field should be paid more. For example, consider a study of the men’s basketball team at Duke University in 2014-15. For that season, Duke University told the Department of Education that the men’s basketball team generated $33.7 million in revenue. If half of that revenue went to the players, then those players would receive about $16.9 million. Spread equally across 15 basketball players on the roster, that would come to $1.1 million per year. Alternatively, if that $16.9 million were allocated according to on-court productivity, Jahlil Okafor would have received $4.1 million for the one season he spent with the Blue Devils. It was estimated that four more players were each worth more than one million dollars.
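The arithmetic behind both examples is just a share of reported revenue spread over a roster; a quick sketch using the Department of Education figures cited above:

```python
# Per-athlete payout if a fixed share of reported revenue went to players.

def per_athlete_payout(revenue_millions: float, roster_size: int,
                       player_share: float = 0.5) -> float:
    """Dollars per athlete under an even split of the players' share."""
    return revenue_millions * 1e6 * player_share / roster_size

# Oklahoma athletics, 2021: $157M in revenue, 596 participants.
print(f"Oklahoma: ${per_athlete_payout(157, 596):,.0f} per athlete")   # ~$131,700

# Duke men's basketball, 2014-15: $33.7M in revenue, 15 roster spots.
print(f"Duke MBB: ${per_athlete_payout(33.7, 15):,.0f} per player")    # ~$1,123,000
```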

To put it simply, if the men’s basketball team at Duke University operated in the same market we see in the NBA, many of its players would be paid millions of dollars. The fact that they were not means they were very much exploited.

Exploitation Is Not Just a Man’s Game

One might think that we would only find such exploitation in college football and men’s basketball. A similar study found, however, that women in college basketball can also generate more revenue than what they are paid by their schools. The same was found with respect to some athletes in college softball (study forthcoming at the Journal of Sports Economics) and women’s gymnastics (study presented at the Western Economic Association). In sum, exploitation is not simply a man’s game in college sports.

Of course, because revenues are currently much higher in college football and men’s college basketball, the wages we would see in a competitive labor market would be much higher in these two men’s sports than in women’s college sports. It is important to understand why those differences exist. Title IX became law in 1972. Prior to this law, colleges and universities were under no obligation to offer and invest in women’s sports. Due to the prevailing discrimination, women’s college sports very much lagged behind men’s college sports.

After Title IX became law, the investment gap between men’s and women’s college sports narrowed. But as USA Today’s investigation into Title IX revealed, the gap most certainly didn’t vanish. Consequently, it is reasonable to infer that the revenue gap we see in college sports is really about discrimination. And that means there is a good argument to be made that athletic revenues should be evenly distributed across men’s and women’s sports. Or as the economist Stefan Szymanski put it in a discussion of U.S. Soccer, there is a good case to be made that men’s sports should pay reparations to women’s sports to overcome decades of discrimination.

However you think the revenue should be distributed among the athletes in college sports, one issue should be clear: The current system, which dramatically limits the compensation athletes receive for the revenue they directly generate for their schools, is wrong. The revenues earned by colleges and universities come from the efforts of the employees at these institutions. Currently these institutions pay the university administrators, faculty, staff and coaches wages that are negotiated in a labor market. Like these people, college athletes are employees and they should also be paid for their efforts in a labor market that isn’t controlled by the monopsony power of the NCAA.

Countering the Arguments Against Payments

Not surprisingly, not everyone likes this idea. More specifically, those who benefit from the NCAA’s monopsony power, and others who simply don’t understand the economics of college sports, raise a host of objections to this plan.

Unfortunately, that host is likely just a sample of the many objections people have to treating college athletes like employees. Those who object to this idea seem quite adept at predicting that the payment of college athletes would have catastrophic consequences!!

But the current system has already produced catastrophic consequences for more than a century. The monopsony power of colleges and universities has resulted in millions of dollars being transferred from the workers (i.e., athletes) who generate much of this revenue to other people employed by these institutions (i.e., coaches and other administrators).

We are now moving to a system where athletes can get paid by boosters. Of course, that can’t be thought of as a good system either. More specifically, why would any institution think it is a good idea to have the payment of its workers controlled by individuals and groups not associated with the institution?

At this point it should be obvious there is a better way. College sports has done quite well for over a century, and the coaches and administrators associated with these programs have enjoyed most of the benefits from these programs. It is time to end the monopsony power of the NCAA and start treating college athletes like any other college employee.

That doesn’t mean replacing the NCAA’s current system that restricts compensation with another system that controls athlete compensation. What I am advocating is replacing the current system of monopsonistic control with a system where college athletes are treated like any other employee. In other words, we should bring the free labor market to college sports. Yes, that will mean coaches and administrators will likely end up with less. But the people we are watching play the games should be fully compensated for their efforts promoting the institutions that hired them.

David Berri is a professor of economics at Southern Utah University, lead author of the books Wages of Wins and Stumbling on Wins, and author of the textbook Sports Economics.

As the DOJ’s antitrust case against Google begins, all eyes are focused on whether Google violated antitrust law by, among other things, entering into exclusionary agreements with equipment makers like Apple and Samsung or web browsers like Mozilla. Per the District Court’s Memorandum Opinion, released August 4, “These agreements make Google the default search engine on a range of products in exchange for a share of the advertising revenue generated by searches run on Google.” The DOJ alleges that Google unlawfully monopolizes the search advertising market.

Aside from matters relating to antitrust liability, an equally important question is what remedy, if any, would restore competition in search advertising in particular and in online advertising generally.

Developments in the UK might shed some light. The UK Treasury commissioned a report to make recommendations on changes to competition law and policy, which aimed to “help unlock the opportunities of the digital economy.” The report found that Big Tech’s monopolization of data and control over open web interoperability could undermine innovation and economic growth. Big Tech platforms now hold the data, block interoperability with other sources, and, through their huge customer-facing machines, will capture still more of it. They can therefore be expected to dominate the data needed for the AI era, enabling them to hold back competition and economic growth.

The dominant digital platforms currently provide services to billions of end users. Each of us has either an Apple or an Android device in our pocket. These devices operate as part of integrated distribution platforms: anything anyone wants to obtain from the web goes through the device, its browser (often defaulting to Google’s search engine), and the platform before reaching the Open Web, if it does not simply stay within an app from the platform’s app store, inside the walled garden.

Every interaction with every platform product generates data, refreshed billions of times a day from multiple touchpoints, providing insight into buying intent and enabling predictions of people’s behavior and trends.

All this data is used to generate alphanumeric codes that match data contained in databases (aka “Match Keys”), which help computers interoperate and serve relevant ads matched to users’ interests. For many years, Match Keys derived from the widely distributed Double Click ID were used by everyone: they were shared across the web and served as the main source of data for competing publishers and advertisers. After Google bought Double Click and grew big enough to “tip” the market, however, Google withdrew access to its Match Keys for its own benefit.
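To make the mechanics concrete, here is a minimal sketch in Python of how a shared Match Key lets two independently held databases interoperate. Everything in it (the key derivation, the datasets, and the field names) is a hypothetical illustration of the general technique, not a description of the Double Click ID or any platform’s actual implementation.

```python
import hashlib

def match_key(shared_id: str) -> str:
    # A Match Key: a stable alphanumeric code derived from a shared
    # identifier, so separate databases can refer to the same user.
    return hashlib.sha256(shared_id.encode()).hexdigest()[:16]

# A publisher's audience data, keyed by Match Key (hypothetical).
publisher_db = {match_key("user-123"): {"interests": ["running", "travel"]}}

# An advertiser's buying-intent data, keyed by the same Match Key (hypothetical).
advertiser_db = {match_key("user-123"): {"intent": "running shoes"}}

# Because both sides hold the same key, a simple join lets an independent
# publisher serve a relevant ad; withdrawing the shared key breaks the join.
for key, profile in publisher_db.items():
    if key in advertiser_db:
        print(key, profile["interests"], "->", advertiser_db[key]["intent"])
```

The point of the sketch is the join: whoever controls the shared identifier controls whether rivals’ computers can interoperate at all.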

The interoperability that is a feature of the underlying internet architecture has gradually been eroded. Facebook collected its own data from users’ “Likes” and community groups and also withdrew independent publishers’ access to its Match Key data. More recently, Apple has restricted access to Match Key data useful for ads for all publishers except Google, which has a special deal covering search and search data. As revealed in U.S. v. Google, Google pays Apple over $10 billion a year so that Google can provide its search product to Apple users and gather all of their search history data, which it can then use for advertising. The data generated by end users’ interactions with websites is now captured and kept within each Big Tech walled garden.

If Match Keys were shared with rival publishers for use in their independent supply channels and in their own ad-funded businesses, interoperability would improve and effective competition with the tech platforms could be generated. Otherwise, such competition probably won’t exist.

Both Google and Apple currently impose restrictions on access to data and interoperability. Cookie files also contain Match Keys, which maintain computer sessions and “state” so that different computers can talk to each other, remember previous visits to websites, and enable e-commerce. Cookies do not themselves contain personal data and are much less valuable than the Match Keys developed from the Double Click ID for advertisers, but they do give independent publishers something of a substitute source of data about users’ intent to purchase.

Google and Apple are in the process of blocking access to Match Keys in all forms to prevent competitors from obtaining relevant data about users’ needs and wants. They also prevent use of the Open Web and limit the interoperation of their app stores with Open Web products, such as progressive web apps.

The UK Treasury’s report refers to interoperability 8 times and to the need for open standards as a remedy 43 times; the UK’s digital markets bill (discussed further below) likewise refers to interoperability, and we expect further debate about the issue as the bill passes through Parliament.

A Brief History of Computing and Communications

The solution to monopolization, or a lack of competition, is the generation of competition and more open markets. For that to happen in digital worlds, access to data and interoperability are needed. Each previous period of monopolization involved intervention to open up computer and communications interfaces via antitrust cases and policies that opened markets and liberalized trade. We have learned that the authorities need to police standards for interoperability and open interfaces to ensure the playing field is level and innovation can take place unimpeded.

IBM bundled computers and peripherals, and the case against it was eventually resolved by unbundling and unblocking the interfaces competitors needed to interoperate with its systems. Microsoft did the same thing, blocking third parties from interoperating with its operating system by blocking access to its interfaces. Again, the matter was resolved by opening up interfaces to promote interoperability and competition between products that could then be offered over platforms.

When Tim Berners-Lee created the World Wide Web in the early 1990s, it was nearly ten years after the U.S. courts imposed the break-up of AT&T and after the liberalization of telecommunications data transmission markets in the United States and the European Union. That liberalization was enabled by open interfaces and published standards. To ensure that new entrants could provide services to business customers, a type of data portability was mandated, enabling numbers held in incumbent telecoms’ databases to be transferred for use by new telecoms suppliers. The combination of interconnection and data portability neutralized the barrier to entry created by the network effect arising from monopoly control over number data.

The opening of telecoms and data markets in the early 1990s ushered in an explosion of innovation. To this day, if computers speak the Hypertext Transfer Protocol (HTTP), they can talk to other computers. In the early 1990s, a level playing field was created for decentralized competition among millions of businesses.

These major waves of digital innovation perhaps all have a common cause. Because computing and communications both have high fixed costs and low variable or incremental costs, and messaging and other systems benefit from network effects, markets may “tip” to a single provider. Competition in computing and communications then depends on interoperability remedies. Open, publicly available interfaces in published standards allow computers and communications systems to interoperate; and open decentralized market structures mean that data can’t easily be monopolized. 
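In symbols (a standard stylization of the cost-side argument, not notation from this article): with fixed cost $F$ and constant incremental cost $c$, average cost

$$AC(q) = \frac{F}{q} + c$$

falls as output $q$ grows, so the largest provider can always undercut smaller rivals. If, in addition, a network product’s value to each user rises with the number of users $n$ (say, roughly in proportion to $n$), scale advantages compound on both the cost and demand sides, which is why such markets tend to tip.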

It’s All About the Match Keys

The dominant digital platforms currently capture data and prevent interoperability for commercial gain. The market is concentrated, with each platform building its own walled garden and restricting data sharing and communication across platforms. Try cross-posting among different platforms for a current example of an interoperability restriction. Think about why messaging is restricted within each messaging app, rather than being possible across different systems as happens with email. Each platform restricts interoperability, preventing third-party businesses from offering their products to users captured in its walled garden.

For competition to operate in online advertising markets, a remedy similar to data portability in the telecom space is needed. With respect to advertising, however, the data that needs to be accessed is Match Key data, not telephone numbers.

The history of anticompetitive abuse and remedies is a checkered one. In the EU Microsoft case, Microsoft was prohibited from discriminating against rivals and had to put up a choice screen. It didn’t work out well. Google was similarly prohibited by the EU in Google Search (Shopping) from discriminating against rivals in its search engine results pages, and in the EU Android case from entering exclusive agreements with handset suppliers that discriminated against rivals and from showing only Google products straight out of the box. The remedies did not address the monopolization of data and its use in advertising. Little has changed, and competitors claim that the remedies are ineffective.

Many in the advertising, publishing, and ad tech markets recall that the market worked pretty well before Google acquired Double Click. Google uses multiple data sources as the basis for its Match Keys, and an access and interoperability remedy might be more effective, proportionate, and less disruptive.

Perhaps if the DOJ’s case examines why Google collects search data from its search engine, and how it uses search histories, browser histories, and data from all interactions with all of its products to build its Match Keys for advertising, the court will better appreciate the importance of data to competitors and how to remedy that position for advertising-funded online publishing.

Following Europe’s Lead

The EU position is developing. The EU’s Digital Markets Act (DMA), which now supplements EU antitrust law as applied in the Google Search and Android decisions, recognizes that people want to provide products and services across different platforms, and to cross-post and communicate with people on each social network or messaging app. In response, the EU has imposed obligations on Big Tech platforms, in Articles 5(4) and 6(7), that provide for interoperability and require gatekeepers to allow open access to the web.

Similarly, Section 20.3(e) of the UK’s Digital Markets, Competition and Consumers Bill (DMCC) refers to interoperability and may be the subject of forthcoming debate as the bill passes further through Parliament. Unlike U.S. jurisprudence, with its recent fixation on consumer welfare, the objective of the Competition and Markets Authority is imposed by law: the obligation to “promote competition for the benefit of consumers” is contained in section 25(3) of the Enterprise and Regulatory Reform Act 2013. That obligation can be expressly related to intervention opening up access to the source of the current data monopolies: the Match Keys could be shared, meaning all publishers could get access to IDs for advertising (i.e., operating-system-generated IDs such as Apple’s IDFA or Google’s mobile advertising ID).

In all jurisdictions it will be important for remedies to stimulate innovation and to ensure that competition is promoted between all products that can be sold online, rather than between integrated distribution systems. Moreover, data portability needs to apply to the use of open and interoperable Match Keys for advertising, thereby addressing the risk of data monopolization. As with the DMA, the DMCC should contain an obligation for gatekeepers to ensure fair, reasonable, and nondiscriminatory access, treating advertisers much as interoperability and data portability remedies addressed monopoly advantages in previous computer, telecoms, and messaging cases.

Tim Cowen is the Chair of the Antitrust Practice at the London-based law firm of Preiskel & Co LLP.

In July, a proposed $13 billion mega-merger between Sanford Health, the largest rural health system in the country, and Fairview Health Services, one of the largest systems in Minnesota’s Twin Cities metro, was called off. Abandonment of the merger came after concerted opposition from farmers, healthcare workers, and medical students, emboldened by passage of state legislation that creates much stronger oversight of healthcare mergers. The new law addresses several of the challenges the Federal Trade Commission (FTC) has encountered while trying to block hospital mergers and demonstrates the important role states can play in policing monopoly power.

Hospital consolidation has been rapid and relentless over the past two decades: more than 1,800 hospital mergers since 1998 have left the United States with around 6,000 hospitals, down from 8,000. This consolidation has raised healthcare costs, reduced access to care, and lowered wages for healthcare workers. Although nearly half of all FTC merger challenges between 2000 and 2018 involved the healthcare industry, that effort still amounted to challenging only around one percent of hospital mergers.

While the FTC has made efforts to protect competition among hospitals and health systems over the years, it has faced key obstacles, including (1) limits on pre-merger notification, (2) a self-imposed limit to focus exclusively on challenging mergers of hospitals within a single geographic region, and (3) exemptions in the FTC’s antitrust authority over nonprofits. 

Parties to small healthcare mergers don’t have to notify the FTC before merging due to the limits on pre-merger notification under the Hart-Scott-Rodino Act. Thus, the FTC is unaware of many smaller healthcare mergers, and the agency is left trying to unwind those mergers after the fact.

The FTC’s election to refrain from challenging “cross-market mergers,” which involve hospitals operating in different geographic markets, has enabled multi-market systems to become the predominant form of health system nationwide. This hands-off approach persists despite mounting evidence that cross-market mergers give health systems even more power to raise prices. A study in the RAND Journal of Economics found that hospitals acquired by out-of-market systems increased prices by about 17 percent more than unacquired, stand-alone hospitals; these mergers were also found to drive up prices at nearby rivals.

While the FTC has broad authority to challenge hospital mergers, the agency’s authority to prevent anticompetitive conduct is more limited. The FTC Act gives the agency the authority to prohibit “unfair methods of competition” and “unfair or deceptive acts or practices,” but that authority does not extend to nonprofits, which account for 48.5 percent of hospitals nationwide. This has meant that antitrust cases like the 2016 case against Atrium Health, for entering into contracts with insurers that contained anti-steering and anti-tiering clauses, have had to be brought by the DOJ.

Minnesota Serves as a Testing Ground

Minnesota is no stranger to the hospital consolidation that has visited the rest of the country. Over two decades ago, 67 percent of Minnesota’s hospitals were independent; after a wave of consolidation that bolstered the largest health systems, only 28 percent remain independent today. Just six health systems control 66 of Minnesota’s 125 hospitals, up from 51 a decade prior. Just three health systems (Fairview, the Mayo Clinic, and Allina Health System) receive nearly half of all hospital operating revenue in Minnesota. Amidst this consolidation, Minnesota has lost ten hospitals since 2010 and has seen per capita spending on hospital care rise from six percent below the national average in 1997 to over eight percent above it in 2021, according to Personal Consumption Expenditures data from the Bureau of Economic Analysis.

The Sanford-Fairview hospital merger would have doubled down on these trends. The combination would have given Sanford control of a fifth of Minnesota’s hospitals, with a geographic footprint spanning several corners of the state. The merger also would have established the largest operator of primary care clinics. In addition to the sheer size of the merger, Fairview’s control of the University of Minnesota Medical Center, home to the teaching hospital that trains 70 percent of Minnesota’s doctors, generated labor concerns and provided an opening for passage of tougher regulations on healthcare transactions.

The initial legislative activity around the Sanford-Fairview merger leveraged the work by Attorney General (AG) Keith Ellison when the transaction was first announced. Ellison’s office held four community meetings across the state to gather input from Minnesotans on the deal, and legislators followed with their own informational hearings. Initial legislative concerns specifically related to granting an out-of-state entity control over a teaching hospital. Because of the work of Ellison’s office alongside organizations like the Minnesota Farmers Union (the author’s employer), the Minnesota Nurses Association, and SEIU-Healthcare Minnesota, legislative discussions turned more broadly to fixing the lack of safeguards Minnesota law provided against healthcare consolidation.

Sanford and Fairview initially failed to provide information Ellison’s office needed to properly investigate the merger, which left Ellison publicly pleading with the systems to delay their initial timeline. While the entities agreed to do so, the delay created uncertainty over whether Ellison’s office would be able to conduct a proper review before the transaction was finalized. 

The law that passed makes three critical changes that help address the obstacles the FTC has run into. First, the law created a robust pre-merger notification regime that will give the Minnesota AG access to a broader set of information than the FTC currently receives under the HSR Act. This requirement is also much broader than the minimal notice requirements that previously existed in state law, and should help avoid a repeat of a key issue during Ellison’s review of the merger. Healthcare entities will now be required to provide specific information to the AG’s Office at the outset. The law also makes the failure to provide this information a reason for blocking a proposed transaction. Health systems will be required to provide geographic information, details on any existing relationships between the merging systems, terms of the transaction, any plans for the new system to reduce workforce or eliminate services as a result of the transaction, any analysis completed by experts or consultants used to facilitate and evaluate the transaction, financial statements, and any federal filings pertaining to the merger including information filed pursuant to the Hart-Scott-Rodino Act. 

Second, the new law requires that health systems provide a financial and economic analysis of the proposed transaction, as well as an impact analysis of the merger’s effects on local communities and local labor. This broad set of information in some ways resembles the changes that the FTC recently proposed to HSR filings. These first two requirements apply to any transaction that involves a healthcare entity that has average annual revenues of $80 million or more or will result in the creation of an entity with annual revenues of $80 million or more. This is a lower revenue threshold than contained in the HSR Act.

Third, the new law establishes a public interest standard for evaluating healthcare transactions. The law spells out a wide range of factors the AG can consider when determining whether a proposed transaction is in the public’s interest. These broad factors include a transaction’s potential impact on wages, working conditions, and collective bargaining agreements for healthcare workers; public health; access to care in affected communities and for underserved populations; the quality of medical education, workforce training, and research; access to health services and insurance; costs for patients; and broader healthcare cost trends.

This broad public interest standard helps ensure that the narrowness of current antitrust law, with its mountains of bad case law, does not restrict Minnesota’s ability to address the harms of hospital monopolies. Instead of having to fight with courts over technical definitions of healthcare markets, the AG can point to the many harms flowing from consolidation, regardless of whether the transaction is a cross-market merger. In addition to the public interest standard, the law explicitly prohibits any transaction that would substantially lessen competition or tend to create a monopoly or monopsony.

The New Law Will Soon Be Put Into Practice

While Sanford-Fairview will no longer provide a potential test case of the new law, two mergers in northern Minnesota were proposed just last month. As policymakers were told throughout the legislative session, Sanford-Fairview was far from the last healthcare merger with which Minnesota would need to grapple. One proposal would combine Minnesota-based Essentia Health with Wisconsin-based Marshfield Clinic Health System into a four-state system stretching across northern Minnesota, Wisconsin, Michigan, and North Dakota. The other proposed merger would fold the small two-hospital St. Luke’s Duluth system into the 17-hospital Wisconsin-based Aspirus Healthcare.

Whether in healthcare or elsewhere in the economy, mergers are not inevitable, nor are they beyond the capacity of state governments to address. With Congressional gridlock and legislative capture posing a challenge to any federal antitrust reforms, states are a necessary battleground for anti-monopolists. Minnesota’s battle with Sanford and Fairview can serve as an instructive model for the rest of the country. Mobilizing state legislators and state AGs to pass bold antitrust reforms and challenge corporate power not only creates a laboratory for these reforms, but also serves as an important way of dealing with monopolists in a world where federal enforcers face significant resource and legal constraints.

Justin Stofferahn is Antimonopoly Director for the Minnesota Farmers Union.

If I were to draft new Merger Guidelines, I’d begin with two questions: (1) What have been the biggest failures of merger enforcement since the 1982 revision to the Merger Guidelines?; and (2) What can we do to prevent such failures going forward? The costs of under-enforcement have been large and well-documented, and include but are not limited to higher prices, less innovation, lower quality, greater inequality, and worker harms. It’s high time for a course correction. But do the new Merger Guidelines, promulgated by Biden’s Department of Justice (DOJ) and Federal Trade Commission (FTC), do the trick?

Two Recent Case Studies Reveal the Problem

Identifying specific errors in prior merger decisions can inform whether the new Guidelines will make a difference. Would the Guidelines have prevented such errors? I focus on two recent merger decisions, revealing three significant errors in each for a total of six errors.

The 2020 approval of the T-Mobile/Sprint merger—a four-to-three merger in a highly concentrated industry—was the nadir in the history of merger enforcement. Several competition economists, myself included, sensed something was broken. Observers who watched the proceedings and read the opinion could fairly ask: If this blatantly anticompetitive merger can’t be stopped under merger law and the existing Merger Guidelines, what kind of merger can be stopped? Only mergers to monopoly?

The district court hearing the States’ challenge to T-Mobile/Sprint committed at least three fundamental errors. (The States had to challenge the merger without Trump’s DOJ, which embraced the merger for dubious reasons beyond the scope of this essay.) First, the court gave undue weight to the self-serving testimony of John Legere, T-Mobile’s CEO, who claimed economies from combining spectrum with Sprint, and also claimed that it was not in T-Mobile’s nature to exploit newfound market power. For example, the opinion noted that “Legere testified that while T-Mobile will deploy 5G across its low-band spectrum, that could not compare to the ability to provide 5G service to more consumers nationwide at faster speeds across the mid-band spectrum as well.” (citing Transcript 930:23-931:14). The opinion also noted that:

T-Mobile has built its identity and business strategy on insulting, antagonizing, and otherwise challenging AT&T and Verizon to offer pro-consumer packages and lower pricing, and the Court finds it highly unlikely that New T-Mobile will simply rest satisfied with its increased market share after the intense regulatory and public scrutiny of this transaction. As Legere and other T-Mobile executives noted at trial, doing so would essentially repudiate T-Mobile’s entire public image. (emphasis added) (citing Transcript at 1019:18-1020:1)

In the court’s mind, the conflicting testimony of the opposing economists cancelled each other out—never mind that such “cancelling” happens quite frequently—leaving only the CEO’s self-serving testimony as critical evidence regarding the likely price effects. (The States’ economic experts were the esteemed Carl Shapiro and Fiona Scott Morton.) It bears noting that CEOs and other corporate executives stand to benefit handsomely from the consummation of a merger. For example, Activision Blizzard Inc. CEO Bobby Kotick reportedly stands to reap more than $500 million after Microsoft completes its purchase of the video game publishing giant.

Second, although the primary theory of harm in T-Mobile/Sprint was that the merger would reduce competition for price-sensitive customers of prepaid service, most of whom live in urban areas, the court improperly credited speculative commitments to “provide 5G service to 85 percent of the United States rural population within three years.” Such purported benefits to a different set of customers cannot serve as an offset to the harms to urban consumers who benefited from competition between the only two facilities-based carriers that catered to prepaid customers.

Third, the court improperly embraced T-Mobile’s proposed remedy to lease access to Dish at fixed rates—a form of synthetic competition—to restore the loss in facilities-based competition. Within months of the consummated merger, the cellular CPI ticked upward for the first time in a decade (save a brief blip in 2016), and T-Mobile abandoned its commitments to Dish.

The combination of T-Mobile/Sprint represented the elimination of actual competition between two wireless providers. In contrast, Facebook’s acquisition of Within, maker of the most popular virtual reality (VR) fitness app on Facebook’s VR platform, represented the elimination of potential competition, to the extent that Facebook would have entered the VR fitness space (“de novo entry”) absent the acquisition. In full disclosure, I was the FTC’s economic expert. (I encourage everyone to read the critical review of the new Merger Guidelines by Dennis Carlton, Facebook’s expert, in ProMarket, as well as my thread in response.) The district court sided with the FTC on (1) the key legal question of whether potential competition was a dead letter (it is not), (2) market definition (VR fitness apps), and (3) market concentration (dominated by Within). Yet many observers strangely cite this case as an example of the FTC bringing the wrong cases.

Alas, the court did not side with the FTC on the key question of whether Facebook would have entered the market for VR fitness apps de novo absent the acquisition. To arrive at that decision, the court made three significant errors. First, as Professor Steve Salop has pointed out, the court applied the wrong evidentiary standard for assessing the probability of de novo entry, requiring the FTC to show a probability of de novo entry in excess of 50 percent. Per Salop, “This standard for potential entry substantially exceeds the usual Section 7 evidentiary burden for horizontal mergers, where ‘reasonable probability’ is normally treated as a probability lower than more-likely-than-not.” (emphasis in original)

Second, the court committed an error of statistical logic by crediting the lack of internal deliberations in the two months leading up to Facebook’s acquisition announcement in June 2021 as evidence that Facebook was not serious about de novo entry. Three months before the announcement, however, Facebook was seriously considering a partnership with Peloton—the plan was approved at the highest ranks within the firm. Facebook believed VR fitness was the key to expanding its user base beyond young males, and Facebook had entered several app categories on its VR platform in the past with considerable success. Because de novo entry and acquisition are two mutually exclusive entry paths, it stands to reason that conditional on deciding to enter via acquisition, one would expect to see a cessation of internal deliberation on an alternative entry strategy. After all, an individual standing at a crossroads would consider alternative paths, but upon deciding which path to take and embarking upon it, the previous alternatives become irrelevant. Indeed, the opinion even quoted Rade Stojsavljevic, who manages Facebook’s in-house VR app developer studios, testifying that his enthusiasm for the Beat Saber–Peloton proposal had “slowed down” before Meta’s decision to acquire Within, indicating that the decision to pursue de novo entry was intertwined with the decision to enter via acquisition. In any event, the relevant probability for this potential competition case was the probability that Facebook would have entered de novo in the absence of the acquisition. And that relevant probability was extremely high.
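To put the statistical point in symbols (a stylized formalization of the argument above, not evidence from the record): let $E$ denote the observed absence of internal de novo deliberations in the final two months, and let $p$ denote the probability that Facebook would have entered de novo absent the acquisition. Once the acquisition path had been chosen,

$$\Pr(E \mid \text{acquisition chosen},\ p \text{ high}) \;\approx\; \Pr(E \mid \text{acquisition chosen},\ p \text{ low}) \;\approx\; 1.$$

Because the evidence is roughly equally likely under either hypothesis, the likelihood ratio is close to one, and by Bayes’ rule observing $E$ should barely move a factfinder’s assessment of $p$.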

Third, like the court in T-Mobile/Sprint, the district court again credited the self-serving testimony of Facebook’s CEO, Mark Zuckerberg, who claimed that he never intended to enter VR fitness apps de novo. For example, the court cited Mr. Zuckerberg’s testimony that “Meta’s background and emphasis has been on communication and social VR apps,” as opposed to VR fitness apps. (citing Hearing Transcript at 1273:15–1274:22). The opinion also credited the testimony of Mr. Stojsavljevic for the proposition that “Meta has acquired other VR developers where the experience requires content creation from the developer, such as VR video games, as opposed to an app that hosts content created by others.” (citing Hearing Transcript at 87:5–88:2). Because this error overlaps with one of the three errors identified in the T-Mobile/Sprint merger, I have identified five distinct errors (six less one) needing correction by the new Merger Guidelines.

Although the court credited my opinion over Facebook’s experts on the question of market definition and market concentration, the opinion did not cite any economic testimony (mine or Facebook’s experts) on how to think about the probability of entry absent the acquisition.

The New Merger Guidelines

I raise these cases and their associated errors because I want to understand whether the new Merger Guidelines—thirteen guidelines to be precise—will offer the kind of guidance that would prevent a future court from repeating the same (or similar) errors. In particular, would either the T-Mobile/Sprint or Facebook/Within decision (or both) have been altered in any significant way? Let’s dig in!

The New Guidelines reestablish the importance of concentration in merger analysis. The 1982 Guidelines, by contrast, sought to shift the emphasis from concentration to price effects and other metrics of consumer welfare, reflecting the Chicago School’s assault on the structural presumption that undergirded antitrust law. For several decades prior to the 1980s, economists empirically studied the effect of concentration on prices. But as the consumer welfare standard became antitrust’s north star, such inquiries were suddenly considered off-limits, because concentration was deemed to be “endogenous” (or determined by the same factors that determine prices), and thus causal inferences of concentration’s effect on price were deemed impossible. This was all very convenient for merger parties.

Guideline One states that “Mergers Should Not Significantly Increase Concentration in Highly Concentrated Markets.” Guideline Four states that “Mergers Should Not Eliminate a Potential Entrant in a Concentrated Market,” and Guideline Eight states that “Mergers Should Not Further a Trend Toward Concentration.” By placing the word “concentration” in three of thirteen principles, the agencies make it clear that they are resuscitating the prior structural presumption. And that’s a good thing: It means that merger parties will have to overcome the presumption that a merger in a concentrated or concentrating industry is anticompetitive. Even Guideline Six, which concerns vertical mergers, implicates concentration, as “foreclosure shares,” which are bound from above by the merging firms’ market share, are deemed “a sufficient basis to conclude that the effect of the merger may be to substantially lessen competition, subject to any rebuttal evidence.” The new Guidelines restore the original threshold Herfindahl-Hirschman Index (HHI) of 1,800 and delta HHI of 100 to trigger the structural presumption; that threshold had been raised to an HHI of 2,500 and a change in HHI of 200 in the 2010 revision to the Guidelines.
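To see the arithmetic behind these thresholds, consider a hypothetical four-to-three merger. The shares below are illustrative numbers of my own, not the actual shares from any case. With pre-merger shares of 34, 30, 18, and 13 percent, and a merger of the two smallest firms:

$$\mathrm{HHI}_{\text{pre}} = \sum_i s_i^2 = 34^2 + 30^2 + 18^2 + 13^2 = 2549,$$

$$\Delta\mathrm{HHI} = 2 \times 18 \times 13 = 468, \qquad \mathrm{HHI}_{\text{post}} = 2549 + 468 = 3017.$$

Because the post-merger HHI exceeds 1,800 and the change exceeds 100, the restored structural presumption is triggered. In this example the 2010 thresholds (2,500 and 200) would also have been met, but many mergers in moderately concentrated markets clear the 2010 screen while failing the restored one.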

This resuscitation of the structural presumption is certainly helpful, but it’s not clear how it would prevent courts from (1) crediting self-serving CEO testimony, (2) embracing bogus efficiency defenses, (3) condoning prophylactic remedies, (4) committing errors in statistical logic, or (5) applying the wrong evidentiary standard for potential competition cases.

Regarding the proper weighting of self-serving employee testimony, error (1), Appendix 1 of the New Guidelines, titled “Sources of Evidence,” offers the following guidance to courts:

Across all of these categories, evidence created in the normal course of business is more probative than evidence created after the company began anticipating a merger review. Similarly, the Agencies give less weight to predictions by the parties or their employees, whether in the ordinary course of business or in anticipation of litigation, offered to allay competition concerns. Where the testimony of outcome-interested merging party employees contradicts ordinary course business records, the Agencies typically give greater weight to the business records. (emphasis added)

If heeded by judges, this advice should limit the type of error we observed in T-Mobile/Sprint and Facebook/Within, where courts credited the self-serving testimony of CEOs and other high-ranking employees.

Regarding the embrace of out-of-market efficiencies, error (2), Part IV.3 of the New Guidelines, in a section titled “Procompetitive Efficiencies,” offers this guidance to courts:

Merging parties sometimes raise a rebuttal argument that, notwithstanding other evidence that competition may be lessened, evidence of procompetitive efficiencies shows that no substantial lessening of competition is in fact threatened by the merger. When assessing this argument, the Agencies will not credit vague or speculative claims, nor will they credit benefits outside the relevant market. (citing Miss. River Corp. v. FTC, 454 F.2d 1083, 1089 (8th Cir. 1972)) (emphasis added)

Had this advice been heeded, the court in T-Mobile/Sprint would have been foreclosed from crediting any purported merger-induced benefits to rural customers as an offset to the loss of competition in the sale of prepaid service to urban customers. 

Regarding the proper treatment of prophylactic remedies offered by merger parties, error (3), footnote 21 of the New Guidelines states that:

These Guidelines pertain only to the consideration of whether a merger or acquisition is illegal. The consideration of remedies appropriate for otherwise illegal mergers and acquisitions is beyond its scope. The Agencies review proposals to revise a merger in order to alleviate competitive concerns consistent with applicable law regarding remedies. (emphasis added)

While this approach is very principled, the agencies cannot hope to cure a current defect by sitting on the sidelines. I would advise saying something explicit about remedies, including mentioning the history of their failures to restore competition, as Professor John Kwoka documented so ably in his book Mergers, Merger Control, and Remedies (MIT Press 2016).

Finally, regarding courts’ committing errors in statistical logic or applying the wrong evidentiary standard for potential competition cases, errors (4) and (5), the New Merger Guidelines devote an entire guideline (Guideline Four) to potential competition. Guideline Four states that “the Agencies examine (1) whether one or both of the merging firms had a reasonable probability of entering the relevant market other than through an anticompetitive merger.” Unfortunately, there is no mention that a reasonable probability can be satisfied at less than 50 percent, per Salop, and the agencies would be wise to add such language to the Merger Guidelines. In defining “reasonable probability,” the Guidelines state that evidence that “the firm has successfully expanded into other markets in the past or already participates in adjacent or related markets” constitutes “relevant objective evidence” of a reasonable probability. In making its probability assessment, the court in Facebook/Within did not credit Facebook’s prior de novo entry into other app categories on its VR platform. The Guidelines also state that “Subjective evidence that the company considered organic entry as an alternative to merging generally suggests that, absent the merger, entry would be reasonably probable.” Had it heeded this advice, the court would have ignored, when assessing the probability of de novo entry absent the merger, the fact that Facebook did not mention the Peloton partnership in the two months prior to the announcement of its acquisition of Within.

A Much Needed Improvement

In summary, I conclude that the new Merger Guidelines offer precisely the kind of guidance that would have prevented the courts in T-Mobile/Sprint and Facebook/Within from committing significant errors. The additional language suggested here—taking a firm stance on remedies and defining reasonable probability—is really fine-tuning. While this review is admittedly limited to these two recent cases, the same analysis could be undertaken with respect to the broader array of anticompetitive mergers that have been approved by courts since the structural presumption came under attack in 1982. The agencies should be commended for their good work to restore the enforcement of antitrust law.

This piece originally appeared in ProMarket but was subsequently retracted, with the following blurb (agreed-upon language between ProMarket’s Luigi Zingales and the authors):

“ProMarket published the article “The Antitrust Output Goal Cannot Measure Welfare.” The main claim of the article was that “a shift out in a production possibility frontier does not necessarily increase welfare, as assessed by a social welfare function.” The published version was unclear on whether the theorem contained in the article was a statement about an equilibrium outcome or a mere existence claim, regardless of the possibility that this outcome might occur in equilibrium. When we asked the authors to clarify, they stated that their claim regarded only the existence of such points, not their occurrence in equilibrium. After this clarification, ProMarket decided that the article was uninteresting and withdrew its publication.”

The source of the complaint that caused the retraction was, according to Zingales, a ProMarket Advisory Board member. We had no contact with that person, nor do we know who it is. We would have welcomed published scholarly debate rather than a retraction compelled by an anonymous Board Member.

We reproduce the piece in its entirety here. In addition, we provide our proposed revision to the piece, which we wrote to clear up the confusion that the first piece was claimed to have created. We will let our readers be the judge of the piece’s interest. Of course, if you have any criticisms, we welcome professional scholarly debate.

(By the way, given that the piece never mentions supply, demand, or prices, it is a mystery to us how any competent economist could have thought it was about “equilibrium.” But perhaps “equilibrium” was a pretext for removing the article for other reasons.)

The Antitrust Output Goal Cannot Measure Welfare (ORIGINAL POST)

Many antitrust scholars and practitioners use output to measure welfare. Darren Bush, Gabriel A. Lozada, and Mark Glick write that this association fails on theoretical grounds and that ideas of welfare require a much more sophisticated understanding.

By Darren Bush, Gabriel A. Lozada, and Mark Glick

The discourse on consumer welfare theory seems to have pivoted to the question of whether welfare can be indirectly measured based upon output. The tamest of these claims is not that output measures welfare, but that, generally, output increases are associated with increases in economic welfare.

This claim, even at its tamest, is false. For one, welfare depends on more than just output, and increasing output may detrimentally affect some of the other factors on which welfare depends. For example, increasing output may cause working conditions to deteriorate; may cause competing firms to close, resulting in increased unemployment, regional deindustrialization, and fewer avenues for small business formation; may increase pollution; may increase the political power of the growing firm, resulting in more public policy controversies and, yes, more lawsuits being decided in its interest; and may adversely affect suppliers.

Even if we completely ignore those realities, it is still possible for an increase in output to reduce welfare. These two short proofs show that even in the complete absence of these other effects—that is, even if we assume that people obtain welfare exclusively by receiving commodities, which they always want more of—increasing output may reduce welfare. 

We will first prove that it is possible for an increase in output to reduce welfare under the assumption that welfare is assessed by a social planner. Then we will prove it assuming no social planner, so that welfare is assessed strictly via individuals’ utility levels.

The Social Planner Proof 

Here we show that a shift out in a production possibility frontier does not necessarily increase welfare, as assessed by a social welfare function.

Suppose in the figure below that the original production possibility frontier is PPF0 and the new production possibility frontier is PPF1. Let USWF be the original level of social welfare, so that the curve in the diagram labeled USWF is the social indifference curve when the technology is represented by PPF0. This implies that when the technology is at PPF0, society chooses the socially optimal point, I, on PPF0. Next, suppose there is an increase in potential output, to PPF1. If society moves to a point on PPF1 which is above and to the left of point A, or below and to the right of point B, then society will be worse off on PPF1 than it was on PPF0. Even though output increased, depending on the social indifference curve and the composition of the new output, there can be lower social welfare.
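To confirm that such welfare-reducing points exist, here is a minimal numerical instance of our own (the numbers are illustrative and do not appear in the figure). Let the social welfare function be $W(a, b) = \min(a, b)$ over two goods, and let the frontiers be $a + b \le 10$ (PPF0) and $a + b \le 14$ (PPF1). On PPF0 society optimally chooses $(5, 5)$, so $W = 5$. If society ends up at $(12, 2)$ on PPF1, total output has risen from 10 to 14 units, yet

$$W = \min(12, 2) = 2 < 5.$$

The outward shift of the frontier raises potential welfare, but the realized point determines actual welfare.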

The Individual Utility Proof

Next, we continue to assume that only consumption of commodities determines welfare, and we show that when output increases every individual can be worse off. Consider the figure below, which represents an initial Edgeworth Box having solid borders, and a new, expanded Edgeworth Box, with dashed borders. The expanded Edgeworth Box represents an increase in output for both apples and bananas, the two goods in this economy.

The original, smaller Edgeworth Box has an origin for Jones labeled J and an origin for Smith labeled S. In this smaller Edgeworth Box, suppose the initial position is at C. The indifference curve UJ0 represents Jones’s initial level of utility in the smaller Edgeworth Box, and the indifference curve US represents Smith’s initial level of utility in the smaller Box. In the larger Edgeworth Box, Jones’s origin shifts from J to J′, and his UJ0 indifference curve correspondingly shifts to UJ0′. Smith’s US indifference curve does not shift. The hatched areas in the graph are all the allocations in the bigger Edgeworth Box which are worse for both Smith and Jones compared to the original allocation in the smaller Edgeworth Box.

In other words, despite the fact that output has increased, if the new allocation is in the hatched area, then Smith and Jones both prefer the world where output is lower. We get this result because welfare is affected by allocation and distribution as well as by the sheer amount of output, and more output, if misallocated or poorly distributed, can decrease welfare.
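The same existence claim can be verified with numbers (again, an illustrative instance of our own, not the coordinates in the figure). Suppose both consumers have utility $u = ab$ over apples $a$ and bananas $b$. In the original box there are 10 of each good, each consumer holds $(5, 5)$, and each attains utility $25$. Now expand the box to 12 of each good, but allocate $(1, 12)$ to Jones and $(11, 0)$ to Smith:

$$u_J = 1 \times 12 = 12 < 25, \qquad u_S = 11 \times 0 = 0 < 25.$$

Output of both goods has increased, yet both individuals are worse off than before.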

GDP Also Does Not Measure Aggregate Welfare

The argument that “output” alone measures welfare sometimes refers not to literal output, as in the two examples above, but to a reified notion of “output.” A good example is GDP.  GDP is the aggregated monetary value of all final goods and services, weighted using current prices. Welfare economists, beginning with Richard Easterlin, have understood that GDP does not accurately measure economic well-being. Since prices are used for the aggregation, GDP incorporates the effects of income distribution, but in a way which hides this dependence, making GDP seem value-free although it is not. In addition, using GDP as a measure of welfare deliberately ignores many important welfare effects while only taking into account output. As Amit Kapoor and Bibek Debroy put it:

GDP takes a positive count of the cars we produce but does not account for the emissions they generate; it adds the value of the sugar-laced beverages we sell but fails to subtract the health problems they cause; it includes the value of building new cities but does not discount for the vital forests they replace. As Robert Kennedy put it in his famous election speech in 1968, “it [GDP] measures everything in short, except that which makes life worthwhile.”

Any industry-specific measure of price-weighted “output” or firm-specific measure of price-weighted “output” is similarly flawed.

For these reasons, few, if any, welfare economists would today use GNP alone to assess a nation’s welfare, preferring instead to use a collection of “social indicators.”

Conclusion

Output should not be the sole criterion for antitrust policy. We can do a better job of using competition policy to increase human welfare without this dogma. In this article, we showed that we cannot be certain that output increases welfare even in a purely hypothetical world where welfare depends solely on the output of commodities. In the real world, where welfare depends on a multitude of factors besides output—many of which can be addressed by competition policy—the case against a unilateral output goal is much stronger.

Addendum

The Original Sling posting inadvertently left off the two proposed graphs that we drew as we sought to remedy the Anonymous Board Member’s confusion about “equilibrium.” We now add the graphs we proposed. The explanation of the graphs was similar, and the discussion of GNP was identical to the original version.

The Proof if there is a Social Welfare Function (Revised Graph)

[Revised graph: production possibility frontiers and a social indifference curve.]

The Individual Utility Proof (Revised Graph)

[Revised graph: expanded Edgeworth Box.]

The New Merger Guidelines (the “Guidelines”) provide a framework for analyzing when proposed mergers likely violate Section 7 of the Clayton Act that is more faithful to controlling law and Congressional intent than earlier versions. The thirteen guidelines go quite a long way toward pulling the Agencies back from an approach that placed an undue burden on plaintiffs and ignored important factors, such as trends toward market concentration and serial mergers, that earlier Supreme Court precedent addressed. The Guidelines also incorporate the modern, more objective economics of the post-Chicago school. For these reasons, and others, the Guidelines should be applauded.

Unfortunately, remnants of Judge Bork’s Consumer Welfare Standard remain. In several places the Guidelines describe a merger’s anticompetitive effects in terms of price, quantity (output), product quality or variety, and innovation. These are all effects that operate through demand curves or equilibrium positions in the output market and thus bear on consumer surplus, the only goal recognized by the Consumer Welfare Standard.

To their credit, the Guidelines also mention input markets, referring to mergers that decrease wages, lower benefits or cause working conditions to deteriorate. Lower wages reduce labor surplus (rent), a consideration that would come within a Total Trading Partner Surplus approach. However, the traditional goals of antitrust as articulated by Congress and many Supreme Court opinions, including protecting democracy through dispersion of economic and political power, protection of small business, and preventing unequal income and wealth distribution, are conspicuously absent.

The basis for these traditional goals is well known. Prominent economist Stephen Martin has documented the judicial and congressional statements concerning the antitrust goal of dispersion of power. The historical support for the goal of preserving small business can be found in a recent paper by two of the authors of this piece. Lina Khan and Sandeep Vaheesan, and Robert Lande and Sandeep Vaheesan, have laid out the textual support for the antitrust inequality goal. Moreover, welfare economists have empirically demonstrated significant positive welfare effects from democracy, small business formation, and income equality.

Indeed, the Brown Shoe opinion, on which the Guidelines heavily rely, examined whether the lower court opinion was “consistent with the intent of the legislature” which drafted the 1950 Amendments, and the opinion itself refers to the goal of “protection of small businesses” in at least two places. The legislative history of the 1950 Amendment deemed important by the Brown Shoe Court evinced a clear concern that rising concentration will, according to Senator O’Mahoney, “result in a terrific drive toward a totalitarian government.”

The remnants of the Consumer Welfare Standard are most evident in the Guidelines’ rebuttal section on efficiencies. The Guidelines open the section by recognizing that controlling precedent is clear that efficiencies are not a defense to a merger that violates Section 7; accordingly, the section is framed as a rebuttal rather than a defense. In essence, if the merging parties can identify merger-specific and verifiable efficiencies, they can rebut a finding that the merger substantially lessened competition. The Guidelines do not define “efficiencies.” However, the context makes clear that the Guidelines mean to follow previous versions of the Merger Guidelines, which assume “efficiencies” are primarily cost savings. A defendant can rebut a presumption that a merger may significantly harm competition if such cost savings are passed through to consumers in lower prices, to a degree that offsets any potential post-merger price increase. There are at least six reasons why the Agencies should jettison this “efficiency” rebuttal.

First, lower prices resulting from cost savings are quite different from lower prices resulting from entry (rebuttal by entry). New entry reduces concentration, but cost savings at best only lower output prices, and higher prices (or reduced output) are not the sole problem that results from high concentration except under a strict Consumer Welfare Standard.

Second, to the extent the Guidelines equate efficiencies with cost savings (as in earlier merger guidelines), they have adopted the businessman’s definition of efficiencies. In contrast, economic theory suggests that some cost savings lower rather than raise social welfare. For example, cost savings from lower wages, greater unemployment, or redistribution between stakeholders can both lower welfare and reduce prices. An increase in consumer or producer surplus that comes at the expense of input supplier surplus can also lower welfare.

Third, only under the output-market half of a surplus theory of economic welfare, which is the original Consumer Welfare Standard, can one clearly link cost savings to economic welfare, because lower costs increase consumer and/or producer surplus. As we show elsewhere, this theory has been thoroughly discredited by welfare economists. In fact, for economists, “efficiency” means only Pareto efficiency. As discussed by Gregory Werden and in Mas-Colell et al.’s leading Microeconomics textbook (Chapter 10), the assumptions necessary to ensure that maximizing surplus results in Pareto efficiency are extreme and unrealistic. These assumptions include quasilinear utility, perfectly competitive other markets, and lump-sum wealth redistributions that maximize social welfare. This discredits the surplus approach, which is the only way to reconcile Pareto efficiency, which is what “efficiencies” means in economic theory, with cost savings, which is the definition implied in the Guidelines.
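For readers who want the formal statement behind the quasilinearity point, a compact version of the standard textbook result (our paraphrase, not a quotation) runs as follows. If every agent’s utility takes the quasilinear form

$$u_i(m_i, x) = m_i + v_i(x),$$

where $m_i$ is the agent’s holding of the numeraire and $x$ is the allocation of the goods at issue, then an allocation is Pareto efficient if and only if $x$ maximizes aggregate surplus $\sum_i v_i(x) - c(x)$, where $c(x)$ is the resource cost of providing $x$, regardless of how the numeraire is distributed. Once wealth effects are admitted, i.e., once utility is not quasilinear, surplus maximization and Pareto efficiency come apart, which is the sense in which the assumption is extreme.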

Fourth, the efficiency section is superfluous. As many economists have recognized, most recently Nancy Rose and Jonathan Sallet, the merging parties are already credited for efficiencies (cost savings) in the “standard efficiency credit” that undergirds Guideline One. After all, absent any efficiencies, why allow any merger that even weakly increases concentration? A concentration screen that allows some mergers and not others must be assuming that all mergers come with some socially beneficial cost savings. Why do we need another rebuttal section when cost savings have already been credited?

Fifth, there is no empirical research suggesting that mergers that increase concentration actually lower costs and pass on sufficient benefits to consumers to constitute a successful rebuttal. As one district court commented, “The Court is not aware of any case, and Defendants have cited none, where the merging parties have successfully rebutted the government’s prima facie case on the strength of the efficiencies.” We have identified nine studies measuring cost savings, productivity gains, or profitability from mergers spanning the health insurance, banking, utility, manufacturing, beer, and concrete industries. Five of these studies find no evidence of a productivity gain or a cost reduction. The other four find productivity gains in terms of cost savings, but three of those four report a significant post-merger increase in prices to consumers, and the remaining study does not report post-merger price effects. In other words, we have not been able to find any empirical study showing post-merger pass-through of cost savings to consumers. These results are consistent with those of Professor Kwoka, who performs a comprehensive meta-analysis of the price effects of horizontal mergers and finds that the post-merger price at the product level increases by 7.2 percent on average, holding all other influences constant. More than 80 percent of product prices show increases, and those increases average 10.1 percent.

Sixth, even if there were cost savings from mergers, it is unlikely that they would be merger-specific and verifiable. Earlier versions of the Merger Guidelines doubted that economies of scale or scope could be achieved only by merger rather than by internal expansion (1968 Merger Guidelines), or that cost savings related to “procurement, management or capital costs” would be merger specific (1997 Merger Guidelines). In their article on merger efficiencies, Fisher and Lande write that “it would be extremely difficult for merging firms to prove that they could not attain the anticipated efficiencies or quality improvements through internal expansion.” Louis Kaplow has argued that the ability to use contracting to achieve claimed efficiencies is seriously underappreciated and understudied. Verification of future efficiencies is also inherently problematic. The 1997 Merger Guidelines state that efficiencies related to R&D are “less susceptible to verification.” This problem and other verification hurdles are discussed by Joe Brodley and John Kwoka.

In summary, the New Merger Guidelines could be improved by a footnote in Guideline One clarifying the multiple antitrust goals Congress sought to achieve by preventing concentrated markets through mergers. In addition, the Agencies should take seriously the holdings of at least three Supreme Court opinions, none of which has been overturned (Brown Shoe, Phila. Nat’l Bank, and Procter & Gamble Co.), that (as quoted in the Guidelines) “possible economies [from a merger] cannot be used as a defense to illegality.” There are good reasons to abandon the efficiencies rebuttal as well.

Mark Glick, Pavitra Govindan and Gabriel A. Lozada are professors in the economics department at the University of Utah. Darren Bush is a professor in the law school at the University of Houston.

In 2023, Columbia University announced that it would no longer be participating in US News’ college rankings. At the time, the conventional interpretation of Columbia’s withdrawal was that it signaled the incoming demise of US News’ college rankings. Yet to date, no other elite undergraduate university has followed Columbia in withdrawing from US News’ undergraduate rankings. Could the conventional interpretation about Columbia’s withdrawal be wrong? 

In 1988, Columbia was ranked eighteenth in US News’ college rankings. But in the years that followed, Columbia’s undergraduate rank kept improving. By 2021, Columbia had surged to an all-time high of second. Naturally, Columbia’s breathtaking climb in US News rankings raised questions. What had Columbia done so well? What should Columbia do more of? How could other universities learn from Columbia? Among the people asking these questions was Columbia’s very own math professor, Dr. Michael Thaddeus. Skeptical by nature, he started studying Columbia’s rankings surge. What he found sent shockwaves through higher education.  

In February 2022, Dr. Thaddeus released a 21-page report exposing widespread misrepresentation of data provided by Columbia University to US News’ college rankings. For example, Dr. Thaddeus demonstrated that Columbia’s reported spending per student was inflated by “a substantial portion” of the $1.2 billion spent by its hospital on patient care, a function of the university completely unrelated to education. Because US News’ rankings are calculated, in part, by how much an institution spends per student, this overstatement greatly improved Columbia University’s ranking, or at least that’s what Dr. Thaddeus alleged. 

At first, Columbia intimated that Dr. Thaddeus was mistaken. Eventually, Columbia came clean. In September 2022, Mary Boyce, Columbia’s provost, said in a statement, “We deeply regret the deficiencies in our prior reporting and are committed to doing better.” While Columbia’s acknowledgement was a step in the right direction, it was silent on a crucial question: what possessed Columbia to lie to US News in the first place?

In the edition of the US News rankings that followed the scandal, Columbia was demoted to eighteenth. The drop from second to eighteenth was incredibly steep, and observers wondered how it was computed. Indeed, Columbia hadn’t submitted any new data to US News following Dr. Thaddeus’s report. Instead, US News seemingly arrived at the new ranking without accurate data from Columbia to correct the inaccurate data from the past. The speculative nature of the ranking was on display, but so too was something else. Some drop for Columbia was surely proper, but was a drop all the way to eighteenth justified? Or was US News making an example out of Columbia?

Finally, in June 2023, Columbia withdrew from US News’ rankings, implying that the US News ranking was reductive, flawed, and distortive. After Columbia’s deeply embarrassing rankings scandal, it’s perhaps not surprising that Columbia would leave the party loudly and in protest. But the more interesting question is, if Columbia felt this way about US News’ ranking, why did it stay at the party so long to begin with? Why did it keep lying to muscle its way into the front of the party? And now that Columbia is gone, why are others refusing to leave the party? What explains all these contradictory facts? 

One theory, the charitable theory, is that the elite college ecosystem is just naturally full of uncoordinated institutions, where each institution pursuing its own interpretation of society’s best interests somehow leads to dysfunction in the aggregate. Per this theory, US News tries its best to create a good ranking but falls short because it is impossible to create a truly objective ranking. Elite colleges are constantly looking to expand access, as evidenced by their commitment to affirmative action, but they are held back by constraints outside their control: resources, regulation, efficiency, and now the courts. Per this theory, the cost of elite higher education rises because of Baumol’s cost disease. And per this theory, Columbia and other elite colleges don’t purposely lie to US News. Instead, elite colleges get mixed up in vague definitions that lead to understandable mistakes in their submissions.

But the charitable theory is sometimes hard to swallow, in light of the facts. With each passing year, a different, more cynical theory feels increasingly plausible. Per this theory, elite colleges aren’t just independent, uncoordinated actors, but members of a commercially collusive cartel. It implies that US News is a vital hub for collusion among the elite colleges, helping elite colleges coordinate systemic scarcity of seats and raise each other’s costs. It means that elite colleges aren’t committed to access but its opposite. Per this alternative, the cost of education rises because of market structure, not natural economic laws; and it suggests that if elite colleges are merely doing what’s in their best interests, it’s in the context of a rigged system they designed and uphold. Such a cynical theory is inherently speculative, but is there an obvious reason to reject the elite college cartel theory outright? 

One obvious reason for objection might be that elite colleges are often in conflict with US News, not in cahoots. To take just one example, Columbia certainly wasn’t doing US News any favors, and US News may have retaliated against Columbia. So, how could US News be a hub for collusion, when it is clearly antagonistic to those that it ranks?   

Lessons from The Toy Cartel

A few decades ago, Toys “R” Us was the dominant toy retailer in America. But Toys “R” Us’ future dominance wasn’t assured. The retail toy market was rapidly changing. Disruptive entrants had created a new form of retail experience, the warehouse club.  By 1992, warehouse club chains like Costco, Sam’s Club, Pace, Price Club, and BJ’s were expanding quickly. 

The secret to the warehouse clubs’ success was that they were able to offer far cheaper prices because they slashed all sorts of operating costs. Club stores opened in places where real estate was cheaper, operated with less staff, and decorated themselves in a spartan way. As the President of Costco testified in the 1990s, “almost invariably our presence in the community is going to have a tendency to drive prices down.” 

For Toys “R” Us, there was reason to be nervous. In the early 90s, Toys “R” Us’ average margin on toys was above thirty percent. Costco’s margin on toys was nine percent. When Toys “R” Us’ chairman was asked whether the warehouse clubs could hurt his business, he responded, “Sure they could hurt us. Yeah.” When asked, “How so?,” he sharply replied, “By selling that product for a price that we couldn’t afford to sell it at. Simple economics.”

Competition was coming, but in the early 1990s, Toys “R” Us was still the toy manufacturers’ largest and most important customer, often buying 30 percent or more of the output of Hasbro, Mattel, and others. So, to prevent warehouse clubs from catching up, Toys “R” Us organized a cartel conspiracy with the toy manufacturers. Toys “R” Us offered the toy manufacturers a stronger relationship with itself, but only if they sold inferior products to the warehouse clubs like Costco. 

The conspiracy worked. As the FTC concluded, “By the end of 1993, all of the big, traditional toy companies were selling to the clubs only on discriminatory terms that did not apply to any other class of retailers.” When the toy manufacturers sold inferior toys to warehouse clubs, fewer consumers bought their toys there. For example, Mattel’s sales to warehouse clubs declined from over $23 million in 1991 to under $8 million in 1993. But it wasn’t pure sacrifice for the toy manufacturers. After all, the toy manufacturers were benefiting from Toys “R” Us’ big purchase orders even as Toys “R” Us was benefiting from suppressing warehouse clubs’ emergence as a threat in the retail toy market. 

Still, it wasn’t an easy cartel to operate. Some of the toy manufacturers wanted it all. They wanted to have a strong relationship with Toys “R” Us and they wanted to secretly increase their sales to the warehouse clubs too. As Toys “R” Us’ then-President Roger Goddu testified, “I would get phone calls all the time from Mattel saying Hasbro has this in the clubs or Fisher Price has that in the clubs…. So that occurred all the time.” Importantly, if one toy manufacturer cheated, the other manufacturers that stayed true lost out on market share. A cartel couldn’t run like that.  

In response, Toys “R” Us had to punish the cheating toy manufacturer for defecting from this toy cartel. Toys “R” Us would withhold its own orders from that cheating firm until it got back in line by pulling out of the warehouse clubs. Punishment from Toys “R” Us was key to making the whole system work. Indeed, the toy manufacturers acknowledged as much, often explaining to Toys “R” Us executives that they wouldn’t be a part of the toy cartel unless their competitors were too.      

The structure of the toy cartel was a variation on a traditional horizontal cartel. Instead of competitors colluding among themselves, a third-party ringleader helped them coordinate. Such a cartel has many names. Sometimes, it is called “hub-and-spoke.” Other times it is known as “rim-and-wheel.” I prefer “head-and-tentacles.” Whatever one calls it, it’s often illegal; and Toys “R” Us and the toy manufacturers found that out in both administrative court and on appeal after the FTC sued them for antitrust violations in 1996.

Why Rankings Matter

Returning to the elite college market, one major question implicit in the elite college cartel theory is whether the obvious tension between US News and the elite colleges it ranks is consistent with a cartel theory. As the toy cartel demonstrates, however, antagonisms are often a natural part of cartels; far from being inconsistent with a cartel, some antagonism may be evidence of one. Ultimately, wasn’t US News’ demotion of Columbia all the way down to eighteenth, after Columbia got caught cheating, eerily reminiscent of the punishment Toys “R” Us used to dole out to promote cartel compliance?

A different reason to scoff at the elite college cartel theory is that the mechanics of US News’ not-so-rigorous ranking surely couldn’t coordinate the policies of universities, each with billions of dollars, sometimes tens of billions of dollars, in their endowments. How could the satellite dictate the movements of the planet?  

One reason may be that rankings are an odd market. Credit rating agencies are often much smaller than the companies, states, or nations that they rate. Yet credit ratings often have huge effects on the valuations of those larger entities. For the unfamiliar, US News publishes the “definitive” college ranking. While other publications also publish rankings, US News has near monopoly market share, measured by views, in the college rankings market. Within the first few days of its annual releases, US News’ college ranking routinely captures tens of millions of views from anxious students and parents. One study finds that being on US News’ top 25 list can lead a school’s applications to go up between six and ten percent. A 2013 Harvard Business School study found that “a one-rank improvement leads to a 1-percentage-point increase in the number of applications to that college.” Empirically, when Cornell rocketed in US News’ rankings from fourteenth in the fall of 1997 to sixth in the fall of 1998, applications to Cornell rose by over ten percentage points the following cycle. To the extent that competition among elite colleges exists, Stanford Sociologist Mitchell Stevens describes US News’ rankings as “the machinery that organizes and governs this competition.” 

One reason rankings are so influential is that choosing a college is a complicated purchase. Young prospective students have very little information about colleges themselves and little ability to forecast differences between four-year experiences. Therefore, students and their parents often rely on proxy information, and US News’ rankings are deeply influential in the admissions process. Plainly, a high US News ranking is a critical input that a college needs to compete in the elite college market. Therefore, because US News is a necessary upstream supplier for elite colleges, it is perfectly positioned to play the hub, consciously or unconsciously. Still, the fact that students rely on rankings doesn’t explain why students rely on US News’ ranking uniquely. How did US News become so dominant, and why can’t it be replaced?

US News’ path to dominance was paved by elite colleges. The incumbent elite universities of the 1980s implicitly agreed to lend US News an air of credibility by filling out annual surveys, something that made US News popular with students. In parallel, US News helped the incumbent elite universities extend their incumbency into the future by creating a ranking that rated them highly not for their educational quality, but for their wealth and exclusivity. It’s unclear how conscious or unconscious this partnership between US News and elite colleges was. Maybe it was totally unconscious, with both sides merely pursuing their dominant business strategy.

Regardless of the mental state, the descriptive truth is that this basic relationship between elite colleges and US News is still operative today. Harvard helps US News dominate the college rankings market in the present, and US News helps Harvard extend its dominance in the elite college market in the future. Bob Morse, one of the architects of US News’ rankings, admitted as much in an interview in 2009 saying, “When the public sees that the schools are wanting to do better in our rankings, they say, well if the schools want to improve in these rankings, they must be worth looking at. So, in essence, the colleges themselves have been a key factor in giving us the credibility.” Credibility is the most important factor for a ranking to be successful. Students and parents can’t independently adjudicate the quality of a ranking for the same reason that they can’t independently adjudicate the quality of a college: it’s complicated and subjective. Therefore, the ranking with the most credibility wins out with students and parents. And a path of least resistance to such credibility was to get the incumbent elites to qualify US News as worthwhile.   

US News likely knows that it can’t afford to lose the support of the incumbent elite schools; without it, US News couldn’t offer a credible ranking. US News likely knows that it cannot afford the perception of a boycott from those incumbent elite colleges. This may be why US News publishes a ranking that weights selectivity at seven percent, financial-resources-per-student at ten percent, class size at eight percent, student-faculty ratio at one percent, and colleges ranking each other at twenty percent. The US News ranking criteria have a very simple logic. As Washington Monthly magazine observed in 2000, “the perfect school is rich, hard to get into, harder to flunk out of, and has an impressive name.”
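To make the mechanics concrete, here is a stylized sketch of how a weighted composite like this works. The weights are the ones quoted above; the school’s metric values, and the residual 54 percent of weight, are hypothetical placeholders rather than US News’ actual methodology.

```python
# A stylized weighted-composite score, using the weights quoted in the text.
# The school's normalized (0-1) metric values below are hypothetical.
weights = {
    "peer_assessment": 0.20,       # colleges ranking each other
    "financial_resources": 0.10,   # spending per student
    "class_size": 0.08,
    "selectivity": 0.07,
    "student_faculty_ratio": 0.01,
    # ...other criteria make up the remaining 54 percent of the weight
}

school = {
    "peer_assessment": 0.95,
    "financial_resources": 0.90,
    "class_size": 0.85,
    "selectivity": 0.98,
    "student_faculty_ratio": 0.80,
}

# Wealth (spending) and exclusivity (selectivity, peer prestige) dominate
# this partial composite; educational outcomes never enter the formula.
score = sum(weights[k] * school[k] for k in weights if k in school)
print(round(score, 3))  # -> 0.425
```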

There may well be evidence of hyper-elite colleges lobbying US News for changes to this or that criterion to better serve their needs. Yet even in the absence of such an explicit conspiracy, US News uses criteria that self-justify why incumbents are already on top of the prestige ladder. Presumably, both US News and the elite colleges know this. But the circular logic of US News’ rankings doesn’t just keep the elite colleges on top. It also incentivizes them to become more extreme versions of themselves.

US News incentivizes colleges to pull in more applications so they can reject them. US News incentivizes colleges to keep enrollment growth stagnant. US News incentivizes colleges to raise prices further and spend more money per student. Worse, US News incentivizes less-elite colleges to adopt all the worst parts of the elite colleges, such as out-of-control spending.

Plainly, incentivizing colleges to spend more money to compete in the elite market raises barriers to entry by shifting supply curves up for each participant. Plus, incentivizing colleges to be unduly rejective to attract students forces colleges to undersupply more than they otherwise would. In sum, US News’ rankings incentivize intense market dysfunctions like scarcity on the supply side.   

Per the elite college cartel theory, the role of Toys “R” Us is played by US News. US News allegedly coordinates collusion among all the different elite colleges. But the way US News allegedly coordinates its spokes is subtle and ingenious, as it facilitates seat scarcity coordination through its ranking formula instead of explicit communication. In this narrative, US News became dominant precisely because it chose to play the coordinating role that elite colleges may have wanted, and it played that role just as elite colleges may have wanted it to.

There are facts that support an inference of collusion. For one, links between US News and hyper-elite colleges are deep. In the 2000s and 2010s, Mortimer Zuckerman, the owner of US News, became a mega-donor to Ivy League colleges like Harvard and Columbia. He served on the Board of Trustees at Princeton. In that same period, US News’ monopoly consolidated. The hyper-elite colleges continued to support the regime. They didn’t boycott the rankings. They embraced them, continuing to give US News exclusive answers to surveys that no other ranking receives. When President Obama’s administration sought to roll out a public competitor to US News’ college rankings in 2013, elite college administrators rallied to kill the ranking effort. Instead, the public got a limp scorecard, and US News didn’t face a public, credible competitor. 

For another, elite colleges have horizontal links among themselves. For instance, there is large surface area for coordination in things such as lobbying for government policies, joint research, patent commercialization, and admissions. Moreover, elite colleges often have interlocking governance boards. Jurisprudentially, these types of interlocking links have supported inferences of conspiracy in many antitrust cases. 

Empirically, elite colleges have been pulled into court for allegedly collusive cartel behavior before. The Ivy League colleges were sued by the Department of Justice for fixing prices on financial aid in the 1990s. In 2022, a group of seventeen elite colleges was sued for price-fixing by a class of students on financial aid. The NCAA has been sued many times for cartel tactics that limit compensation for student-athletes. So, might seat scarcity, and exclusion of competitors, be another area where elite colleges collude?

Lastly, another factor supporting an inference of conspiracy is the market dynamics themselves. The demand for seats at elite institutions has proven to be remarkably inelastic. If the Varsity Blues scandal proves anything, it’s that people will go through a lot of trouble to capture a seat at an elite school. Importantly, inelastic demand has attracted cartel formation in other markets. For example, OPEC operates in the inelastic oil market. Big tobacco companies operate in the inelastic cigarette market. In those markets, cutting output by one percent has often raised prices by more than one percent, making scarcity a profitable strategy.
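A rough sketch shows the arithmetic. Assuming a constant-elasticity demand curve q = k·p^ε with ε = −0.5 (a hypothetical value, chosen only because it is inelastic), a one percent cut in output raises the market-clearing price by roughly two percent, so revenue rises even though fewer units are sold.

```python
# Why inelastic demand rewards scarcity, under an assumed constant-elasticity
# demand curve q = k * p**elasticity. All numbers are hypothetical.

def price_for_quantity(q, k=100.0, elasticity=-0.5):
    """Invert q = k * p**elasticity to get the market-clearing price."""
    return (q / k) ** (1.0 / elasticity)

q0 = 100.0
q1 = q0 * 0.99                       # cut output by 1 percent

p0 = price_for_quantity(q0)          # 1.0
p1 = price_for_quantity(q1)          # ~1.02 -- price rises ~2%, more than output fell

print(round(p1 * q1 - p0 * q0, 2))   # ~1.01 -> revenue rises despite selling less
```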

In some markets with inelastic demand, a cartel isn’t needed to produce scarcity because only one or two companies control the entire market anyway. But the elite college market isn’t like that. Many different elite colleges exist. Without some machinery to coordinate scarcity, it likely would not be possible to produce systemic scarcity. 

In a system structured like ours, whether by design or by accident, Penn’s acceptance rate necessarily falls from 47 percent in 1991 to less than 6 percent in 2023. As authors Carnevale, Schmidt, and Strohl quantify in their book, The Merit Myth, “There are 1.5 million high school seniors with better than an 80 percent chance of graduating from one of the top 193 colleges, but those colleges annually admit only 250,000 freshmen.” As economists Blair and Smetters quantified in 2021, if elite colleges ignored relative prestige and simply maintained student quality, their total enrollments would have doubled or tripled since 1990. Instead, Harvard, Princeton, Yale, and Stanford increased enrollment by only seven percent between 1990 and 2015.

Let’s be clear: The artificial scarcity in slots, made possible by the ranking system, allows elite colleges to under-produce relative to a world in which such coordination was not possible; and by under-producing, the elite colleges are able to impose supra-competitive prices for admission.   

You can see the market failure all around you. Tuition prices keep rising well beyond the average inflation rate. Scarcity-induced admissions scandals like Varsity Blues continue to pop up. But perhaps the most obvious sign is that the demographics at formerly less exclusive colleges have also gone wacky. The rankings are warping the whole market into copycatting the worst parts of the hyper-elite colleges. Safety schools have morphed into reach schools. Reach schools have slipped out of sight. Seats at American universities have turned needlessly scarce, and the privileged few have largely outcompeted the aspiring many for those seats.  

Indeed, elite colleges might object to the entire conjecture of this article. They might say that there is no fusion between themselves and US News. They might claim that they are increasingly diverging from US News’ rankings. They might point to their boycott of the US News’ law school rankings. Harvard might point out that it pulled out of US News’ medical school rankings. 

Yet it’s not clear if these objections comprehensively rule out a cartel explanation. Even if elite colleges are pulling out of US News’ rankings at the graduate levels, they refuse to do so at the undergraduate level. There is absolutely no evidence that a boycott is forthcoming for the undergraduate rankings. Instead, we see only continuing participation. Importantly, undergraduate rankings matter far more than law school or medical school rankings. They matter more because an elite college’s undergraduate reputation is also often used as a proxy for its graduate programs. 

Pitiful Growth in Seats

Skeptics might also assert that elite colleges have always been exclusive, independently of US News. Elite colleges might argue that the whole reason that they’re elite is because they’re exclusive. This argument is more persuasive on first glance than on close examination. While it’s true that elite colleges must reject some students to maintain class quality, the question is one of degree. 

How exclusive does an elite college need to be? Do we need Ivy League colleges to reject 95 percent of applicants? Or will rejecting 80 percent, as they did in the 1980s, suffice? It is a false argument to assert that growth and quality are necessarily opposed. For example, before the rankings-era, Stanford increased its enrollment by over 250 percent from 1920 to 1970. It managed to stay very elite in that period of time. 

Nor is undue rejection necessary to maintain academic quality. A common quip on Harvard’s campus is that the hardest thing about Harvard is getting in. There are many students of diligent character and great intellect who are routinely rejected by the elite colleges, and those rejections have nothing to do with quality. Instead, those rejections may be the collateral damage that a hub-and-spoke cartel produces. Absent this concern for relative prestige, driven home by the rankings, the elite colleges would naturally admit more students.

Elite colleges might alternatively object that acceptance rates are a poor measure of increasing exclusion. They might argue that each student applies to far more schools than she once did. This is true. But the reason each student applies to more schools today is that she is dramatically more likely to get rejected at each one. If you leave acceptance rates behind and merely look at raw numbers, the growth in seats at elite colleges has been pitiful. As economists Blair and Smetters explained in 2021, “While college enrollment has more-than doubled since 1970, elite colleges have barely increased supply, instead reducing admit rates.” For example, in the 2005–06 school year, Yale enrolled 1,321 undergrads, and in 2016–17, Yale enrolled a whopping 1,367 students.

A market fundamentalist might argue that the scarcity produced by elite colleges is opening space for formerly less elite colleges like Tulane and BU to fill the new market need by becoming more exclusive themselves. Fundamentalists might argue that the market is responding as it should, by creating more supply to meet the growing demand of qualified students eager for prestigious degrees. But, of course, such fundamentalists miss the crucial point. 

At what cost is Tulane filling in for Penn? The rankings regime that US News has set up requires everyone to get more expensive. So, as more students were rejected by Penn, Tulane experienced an increase in demand for its slots, which justified higher prices. Even then, it’s not a real substitute. As economists Blair and Smetters quantified in 2021, the consumer welfare loss of being rejected from Harvard, Yale, Stanford, or Princeton is estimated to be around 140 percent of mean total tuition, an amount on the order of hundreds of thousands of dollars. This is despite the rise of the so-called substitutes.

In the end, whether the elite colleges have explicitly colluded to produce dysfunction or whether it is some freak accident is probably the least interesting thing about the elite college market to the vast majority of Americans. For the average student coming of age, it doesn’t matter if the elite college market is dysfunctional because of an explicit conspiracy or because of an unfortunate accident of market development. What matters to the applicant is that she may not be accepted to the college of her dreams because rankings incentivize each elite college to slow growth in enrollments. With each new college scandal, a simple fact becomes more and more clear. We need a serious conversation about how to restructure this market.

Sahaj Sharda is a student at Columbia Law School and author of the book The College Cartel.

Over the past two years, heterodox economic theory has burst into the public eye more than ever as conventional macroeconomic models have failed to explain the economy we’ve been living in since 2020. In particular, theories around consolidation and corporate power as factors in macroeconomic trends–from neo-Brandeisian antitrust policy to theories of profit seeking as a driver of inflation–have exploded onto the scene. While “heterodox economics” isn’t really a singular thing–it’s more a banner term for anything that breaks from the well established schools of thought–the ideas it represents challenge decades of consensus within macro- and financial economics. This development, of course, has left the proponents of the traditional models rather perturbed.

One of the heterodox ideas that has seen the most media attention is the idea of sellers’ inflation: the theory that inflation can, at least partially, be a result of companies using economic shocks as smokescreens to exercise their market power and raise the prices they charge. The name most associated with this theory is Isabella Weber, a professor of economics at the University of Massachusetts, but there are certainly other economists who support this theory (and many more who support elements of it but are holding out for more empirical evidence before jumping into the rather fraught public debate.)

Conventional economists have been bristling about sellers’ inflation being presented as an alternative to the more staid explanation of a wage-price spiral (we’ll come back to that), but in recent months there have been extremely aggressive (and often condescending, self-important, and factually incorrect) attacks on the idea and its proponents. Despite this, sellers’ inflation really is not that far from a lot of longstanding economic theory, and the idea is grounded in key assumptions about firm behavior that are deeply held across most economic models.

My goal here is threefold: first, to explain what the sellers’ inflation and conventional models actually are; second, to break down the most common lines of attack against sellers’ inflation; third, to demonstrate that, whatever its shortcomings, sellers’ inflation is better supported than the traditional wage-price spiral. Many even seem to recognize this, shifting to an explanation of corporations just reacting to increased demand. As we’ll see, that explanation is even weaker.

What Is Sellers’ Inflation?

The Basic Story

As briefly mentioned above, sellers’ inflation is the idea that, in significantly concentrated sectors of the economy, coordinated price hikes can be a significant driver of inflation. While the concept’s opponents generally prefer to call it “greedflation,” largely as a way of making it seem less intellectually serious, the experts actually advancing the theory never use that term for a very simple reason: it doesn’t really have anything to do with variance in how greedy corporations are. It does rely on corporations being “greedy,” but so do all mainstream economic theories of corporate behavior. Economic models around firm behavior practically always assume companies to be profit maximizing, conduct which can easily be described as greedy. As we’ll see, this is just one of many points in which sellers’ inflation is actually very much aligned with prevailing economic theory.

Under the sellers’ inflation model, inflation begins with a series of shocks to the macroeconomy: a global pandemic causes an economic crash. Governments respond with massive fiscal stimulus, but the economy experiences huge supply chain disruptions that are further worsened with the Russian invasion of Ukraine. All of these events caused inflation to increase either by decreasing supply or increasing demand. The stimulus checks increased demand by boosting consumers’ spending power–exactly what it was supposed to do. Both strained supply chains and the sanctions cutting Russia off from global trade restricted supply. Contrary to what some opponents of sellers’ inflation will say, the theory does not deny the stimulus being inflationary (though some individual proponents might). Rather, sellers’ inflation is an explanation for the sustained inflation we saw over the past two years. Those shocks led to a mismatch between demand and supply for consumer goods, but something kept inflation high even after the effects of those shocks should have waned.

The culprit is corporate power. With such a whirlwind of economic shocks, consumers are less able to tell when prices are rising to offset increases in the cost of production versus when prices are being raised purely to boost profit. This, too, is not at odds with conventional macro wisdom. Every basic model of supply and demand tells us that when supply dwindles and demand soars, the price level will rise. Sellers’ inflation is an explanation of how and why prices rise and why prices will increase more in an economy with fewer firms and less competition. 

Sellers’ inflation is really just a specific application of the theory of rent-seeking, which has been largely accepted since it was introduced by David Ricardo, a contemporary of the father of modern economics, Adam Smith. (Indeed, this point, which I raised nearly a year and a half ago in Common Dreams, was recently explored in a new paper from scholars at the University of London.) As anyone who has ever watched a crime show could tell you, when you want to solve a whodunnit, you need to look at motive, means, and opportunity. The greed (which, again, is at the same level it always is) is the motive. Corporations will always seek to charge as high a price as they can without being dangerously undercut by competitors. Sellers’ inflation doesn’t posit a massive increase in corporate greed, but a unique economic environment that allows firms to act upon the greed they have always possessed.

Concentration is the means; when the market is in the hands of only one or a few firms, it becomes easier to raise prices for a couple of reasons. First, large firms have price-setting power, meaning they control enough of the sector that they are able to at least partially set the going rate for what they sell. Second, when there are only a few firms in a sector, wink-wink-nudge-nudge pricing coordination is much easier. Just throw in some vague but loaded phrases in press releases or earnings calls that you know your competition will read, and see if they take the same tack. For simplicity, imagine an industry dominated by two firms, A and B. At any given point, both are choosing between holding prices steady and raising them (assume lowering prices is off the table because it’s unprofitable; let’s keep it simple). This sets up the classic game-theoretical model of the prisoner’s dilemma:

                      A Maintains Price                        A Raises Price
B Maintains Price     no change for either firm                A’s profit falls; B’s profit rises
B Raises Price        A’s profit rises; B’s profit falls       both firms’ profits rise

In the chart above, each cell shows what happens to each firm’s profit. If both hold the price steady, nothing changes; we’re at an equilibrium. If one and only one firm raises prices, the price-hiker will lose money as price-conscious consumers switch to its competitor, which will now see higher profits. This makes the companies averse to raising prices on their own. But if both raise their prices, both will be able to increase their profits. That’s why collusion happens. But wait, isn’t that illegal? Yes, yes it is. But it is nigh on impossible to police implicit collusion, especially when there is a seemingly plausible alternative explanation for price hikes.
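Here is a minimal sketch of that two-firm game in code. The payoff numbers are pure assumptions, chosen only to reproduce the dynamics just described, not estimates of any real industry.

```python
# A minimal sketch of the two-firm pricing game described above.
# Payoff numbers are illustrative assumptions, not estimates.

# Payoffs (profit change for A, profit change for B) for each strategy pair.
payoffs = {
    ("maintain", "maintain"): (0, 0),    # status quo equilibrium
    ("raise",    "maintain"): (-2, 1),   # A raises alone and loses share to B
    ("maintain", "raise"):    (1, -2),   # B raises alone and loses share to A
    ("raise",    "raise"):    (3, 3),    # both enjoy higher margins
}

def best_response(opponent_move, player_index):
    """Return the move that maximizes a player's payoff, given the opponent's move."""
    moves = ["maintain", "raise"]
    def payoff(move):
        pair = (move, opponent_move) if player_index == 0 else (opponent_move, move)
        return payoffs[pair][player_index]
    return max(moves, key=payoff)

# If B maintains, A's best response is to maintain (raising alone loses money)...
print(best_response("maintain", 0))  # -> "maintain"
# ...but if B raises, A wants to raise too: the jointly profitable equilibrium.
print(best_response("raise", 0))     # -> "raise"
```

The game has two self-reinforcing outcomes, both-maintain and both-raise; the interesting question is what moves an industry from the first to the second.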

As James Galbraith has written, in stable periods firms prefer the safer equilibrium of holding prices relatively stable:

In normal times, margins generally remain stable, because businesses value good customer relations and a predictable ratio of price to cost. But in disturbed and disrupted moments, increased margins are a hedge against cost uncertainties, and there develops a general climate of “get what you can, while you can.” The result is a dynamic of rising prices, rising costs, rising prices again — with wages always lagging behind.

And that gets us to opportunity, which is what the macroeconomic shocks provide. Firms probably did experience real increases in their production costs, which gives them good reason to raise their prices…to a point. But what has been documented by Groundwork Collaborative, and separately by Isabella Weber and Evan Wasner, is corporate executives openly discussing increasing returns using “pricing power,” which is code for charging more than is needed to offset their costs. This is them signaling that they see an opportunity to get to that second equilibrium in the chart above, where everyone makes more money. And since that same information and rationale is likely to be present at all of the firms in an industry, they all have the incentive (or greed, if you prefer) to do the same. This is easiest to conceptualize in a sector with two firms, but it holds for one with more firms that is still concentrated. At some point, though, you reach a critical mass of firms where at least one won’t go along with it; as the number of firms increases, defection becomes more and more probable, which is why concentration facilitates coordination.

And that’s it. In an economy with significant levels of concentration — more than 75 percent of industries in the American economy have become more concentrated since the 1990s — and the smokescreen of existing inflation, corporate pricing strategy can sustain rising prices amid the uncertainty. Now, if you ask twenty different supporters of sellers’ inflation, you’ll likely get twenty slightly different versions of the story. However, the main beats are mostly agreed upon: 1) firms are profit maximizing, 2) they always want to raise prices but usually won’t out of fear of either being undercut by the competition or being busted for illegal collusion, and 3) other inflationary pressures provide some level of plausible deniability which lowers the potential downside of price increases.

What Evidence Is There?

The evidence available to support theories of sellers’ inflation is one of the main points of contention between its proponents and detractors. Despite that, there is strong theoretical and empirical evidence that backs the theory up.

First is a basic issue of accounting that nobody in the traditional macro camp seems to have a good answer for. Profits are always equal to the difference between revenues (all the money a company brings in) and costs (all the money a company sends out). 

Profits = Revenue – Costs

This is inviolable; it is simply the definition of profits. As I’ve written before, this means that the only two possible ways for a company to increase profits are by generating more revenue or cutting costs (or a combination of the two, but let’s keep it simple). Costs can’t be the primary driver in our case because we know they’re increasing, not decreasing. Inflationary pressures should still have increased production costs like labor and any kind of input that is imported. Companies also have been adamant about the fact that they are facing rising costs; that’s their whole justification for price hikes. And mainstream economists would agree. They blame lingering inflation on a wage-price spiral, which says that workers demanding higher wages have driven cost increases that force companies to raise prices – resulting in higher inflation. As both sides agree that input costs are rising, the only possible explanation for increased profits is an increase in revenue. Revenue also has itself a handy little formula:

Revenue = Price * Units Sold

While the units sold may have increased, price was the bigger factor. We know this for at least two key reasons: evidence showing that output (the units sold) actually decreased, and the evidence from earnings calls compiled by Groundwork. Executives said their strategy was to raise prices, not to sell more products. And there are two very good reasons to believe the execs: (1) they know their firms better than anyone, and (2) they are legally required to tell the truth on those calls. (That second reason is also evidence of sellers’ inflation on its own; if the theory’s opponents don’t buy the explanation given by the executives to investors, they must think executives are committing securities fraud.)
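The two identities make the logic easy to check with toy numbers. Everything below is hypothetical, chosen only to mirror the pattern described: higher costs, fewer units sold, higher prices.

```python
# A toy decomposition using the two identities in the text:
#   profits = revenue - costs,  revenue = price * units
# All numbers are hypothetical, chosen only to illustrate the logic.

p0, q0, c0 = 10.00, 1000, 8500.0   # baseline price, units sold, total costs
p1, q1, c1 = 11.50, 950, 9000.0    # later period: higher price, fewer units, higher costs

profit0 = p0 * q0 - c0             # 1500.0
profit1 = p1 * q1 - c1             # 1925.0

# Profits rose even though costs rose and units sold fell --
# arithmetically, the price increase is the only remaining driver.
print(profit1 - profit0)           # 425.0
```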

In rebuttal to the accounting issue, Brian Albrecht, chief economist at the International Center for Law and Economics, has argued that using accounting identities is wrongheaded:

Just as we never reason from a price change, we need to never reason from an accounting identity. My income equals my savings plus my consumption: I = S + C. But we would never say that if I spend more money, that will cause my income will rise.

This, on its face, seems like a reasonable argument, except all it really shows is that Albrecht doesn’t understand basic math. Tracking just one part of the equation won’t automatically tell us what the others do…duh. But we can track what a variable is doing empirically and use that relationship to make sense of the rest. We would never say that someone spending more money on consumption causes their income to rise. But we certainly could say that if we observe an increase in personal consumption, then either their income increased or their savings decreased. The mathematical definition holds; you just have to actually consider all of the variables. In fact, Albrecht agrees, but warns, “Yes, the accounting identity must hold, and we need to keep track of that, but it tells us nothing about causation.” No, it tells us correlation. Which, by the way, is what econometrics and quantitative analyses tell us about as well.

The way you get to causation in economics is by tying theory and context to empirical correlations to explain those relationships. Albrecht’s case is just a very reductive view of the actual logic at play. He continues:

After all, any revenue PQ = Costs + Profits. So P = Costs/Q + Profits/Q. If inflation means that P goes up, it must be “caused” by costs or profits.

No, again. Stop it. This is like saying consumption causes income.

Once again, Albrecht is wrong here. This is like saying higher consumption will correspond to either higher income or lower savings. Additionally, there’s a key difference between the accounting identities for income and for profits: income is broken down into consumption and savings after you receive it, whereas costs and revenues must exist before profits. This makes causal inference in the latter much more reasonable; income is determined exogenously to that formula, but profits are endogenous to their accounting identity. 

In addition to these observations, there is also a range of economic research supporting the idea of sellers’ inflation. Some of the best empirical evidence comes from this report from the Federal Reserve Bank of Boston, this one from the Federal Reserve Bank of San Francisco, and this one from the International Monetary Fund.

Another key piece of evidence is a Bloomberg investigation that found that the biggest price increases came from the largest firms. If market power were not a factor, then prices should have been rising roughly proportionally across firms, regardless of their size. If anything, large firms’ economies of scale should have cut down on the need to hike prices. Basic economic theory also tells us that when demand increases, companies want to expand supply, which should have resulted in more products (especially from larger firms with more resources) and a corresponding moderation in price increases. And yet, what we actually saw was a drop in production from major companies like Pepsi, which opted instead to increase profits by maintaining a shortfall in supply.

That said, there’s plenty more, including this from the Kansas City Fed, this from Jacob Linger et al., this from French economists Malte Thie and Axelle Arquié, this from the European Central Bank, this from the Roosevelt Institute, and more. The Bank of Canada has also endorsed the view. It seems unlikely that the Federal Reserve, the European Central Bank, and the Bank of Canada have all become bastions of activist economists unmoored from evidence. Perhaps it’s time those denying sellers’ inflation were labeled the ideologues.

The Case Against Sellers’ Inflation

A Few Notes on Semantics

Before we get into the substance of critiques against sellers’ inflation as a theory, there are a few miscellaneous issues with the framing its opponents often use. There is a tendency for arguments against sellers’ inflation to use loaded words or skewed phrasing to implicitly undermine the legitimacy of people who are spearheading the push for greater scrutiny of corporations as a part of managing inflation.

For instance, Eric Levitz says the debate sees “many mainstream economists against heterodox progressives.” This phrasing suggests that the debate is between economists on the one hand and proponents of sellers’ inflation on the other. But that’s not true! There are both economists and non-economists on both sides of the issue. Weber is an economist, as are the researchers at the Boston and San Francisco Feds. And others, including James Galbraith, Paul Donovan, Hal Singer, and Groundwork’s Chris Becker and Rakeen Mabud, are on board. Notably, Lael Brainard, the director of President Biden’s National Economic Council (and former Federal Reserve Vice Chair), recently endorsed the view.

Or take how Kevin Bryan, a professor of management at the University of Toronto, described Isabella Weber as a “young researcher” who “has literally 0 pubs on inflation.” Weber is old enough to have two PhDs and tenure at UMass and–will you look at that–has written about inflation before! Presenting her as young sets the stage for making her seem inexperienced, which saying she has no publications doubles down on. But his claims are false. Weber wrote a paper with Evan Wasner specifically about sellers’ inflation. And even if we took Bryan’s point as true and ignored the very real work Weber has done on inflation and pricing, Weber still has significant experience in political economy, which helps to explain how institutional power is able to influence markets—exactly the type of thinking sellers’ inflation is based upon.

(And this is nothing compared to the abuse that Weber endured after an op-ed in The Guardian provoked a frenzy of insulting, condescending attacks from many professional economists. For more on that, see Zach Carter’s New Yorker profile of Weber and/or this Twitter thread documenting Noah Smith’s outbursts at Weber.)

But even the semantics that don’t get into ad hominem territory are confusing. Let’s just run through the topline concerns Kevin Bryan raised real quick:

  1. What does “very online” even mean? Sellers’ inflation has been embraced as at least a plausible concept by the President of the United States, the European Central Bank, at least two Federal Reserve Banks, and the International Monetary Fund. If that’s not enough legitimization, it’s hard to know what would be. This concern makes it sound like the proponents are random Reddit users, rather than the serious academics and policymakers they are.
  2. I don’t know why the presence of “virulent defenders” undermines the idea itself. Defenders of traditional economics are virulent as well; Larry Summers called the idea of relating antitrust policy to inflation “science denial.”
  3. Traditional monetary policy is often (but not always) associated with centrist, pro-business politics. Also, conventional Industrial Organization theory and even Borkian consumer welfare theories recognize a relationship between price and the structure of firms and markets, so the fundamental ideas are certainly not leftist.
  4. The complaint that proponents of sellers’ inflation blame gatekeepers for shooting down these theories seems disingenuous. Everyone who supports sellers’ inflation would probably rather be discussing it on its merits. But when people like Bryan or Larry Summers refuse to even consider the idea as potentially legitimate, the only option left is to discuss it because of the iconoclasm. If there isn’t a story about changing academic opinions, then the story about challenges to conventional wisdom being shut out by the old guard will have to do.

All of this is to set up the next point in that Twitter thread, which is that “being an Iconoclast is not the same thing as being rigorous, or being right.” True, but dodging the debate by attacking the credibility of an idea’s advocates and taking issue with the method of dissemination are also not the same as being rigorous. Or as being right.

These are just a couple of examples, but opponents of this theory really lean into making it sound like its champions are inexperienced and don’t know what they’re talking about. Aside from being in bad faith, this also indicates a lack of confidence in comparing the contemporary story to that of sellers’ inflation.

The Theoretical Substance of the Opposition

With the semantics out of the way, it’s time to get into the meat of the case(s) against sellers’ inflation. There is no singular, unified case here, more of a constellation of related ideas. 

The first line of defense against theories of sellers’ inflation is asserting that traditional macroeconomics is good and has solved our inflation problem. For example, Chris Conlon of NYU has credited rate hikes with inflation cooling. Conlon says “I for one am glad Powell and Biden admin followed boring US textbook ideas.” But there’s a problem with that: the contemporary economic story does not actually explain how rate hikes can cool inflation without a corresponding rise in unemployment. 

The traditional story starts in the same place as the sellers’ inflation story: macroeconomic shocks create inflation. (Although the traditionalists prefer to emphasize fiscal stimulus as the primary shock, rather than supply chains. The evidence largely indicates that stimulus did have some inflationary effect, but not much. The global nature of inflation also undercuts the idea that American domestic fiscal policy could be the main explanation.) The shock(s) create a supply and demand mismatch, with too much money chasing too few available goods. After that, however, the traditional mechanism for explaining inflation remaining high is supposed to be a wage-price spiral. 

The story goes something like this: the stimulus boosted consumer demand, which overheated the economy, and created more jobs than could be filled, meaning job seekers negotiated higher pay when they took positions. They then spent that extra money which increased demand further, leading to even higher prices as supply couldn’t keep up with demand. Workers saw that their cost of living went up, so they took the opportunity to demand better pay. Companies were forced to give in because they knew in a hot labor market, their workers could leave and earn more elsewhere if employers didn’t meet workers’ demands. Once their wages went up, those workers had more spending power, which they used to buy more things, further increasing demand. That elevated prices more, as the supply-demand mismatch increased. Now workers see their cost of living rising again, so they ask for another raise. If this pattern has held for a few rounds of pay negotiations, maybe workers ask for more than they otherwise would, trying to get out ahead of their spending power shrinking again. Rinse and repeat.

But we know that this story doesn’t describe the inflation that we saw over the last couple of years. Wage growth lagged behind inflation, which indicates that something else had to be driving price increases. Plus the Phillips curve, which is meant to illustrate this relationship between higher employment and higher inflation, has been broken in the US for years. It simply does not show a meaningful positive relationship any more. 

It’s important that we understand this story as a whole. Levitz, in his piece, tries to separate the initial supply-demand mismatch from the wage-price spiral as a way of making the conventional model stack up better against sellers’ inflation. But that doesn’t actually hold because if you omit the wage-price spiral (which Levitz agrees seems dubious), the mainstream macro story has no mechanism for inflation staying high. If it were just a one-time stimulus, that would explain a one-time inflation spike, but once that money is all sent out (say by the end of 2021), there’s no source for further exacerbating the supply-demand mismatch (in say the end of 2022 or early 2023). (Remember, inflation is the rate of change of prices, so if prices spike and then stay the same afterwards, that plateau will reflect a higher price level but not sustained high inflation.) 
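That parenthetical is worth making concrete: a one-time jump in the price level shows up as a single inflation spike that then falls back to zero. Here is a minimal sketch with a hypothetical price-level path.

```python
# A minimal illustration of the parenthetical above: a one-time price spike
# produces a one-time inflation spike, not sustained inflation.
# The price-level path is hypothetical.

price_level = [100, 100, 110, 110, 110]   # spike between periods 1 and 2, then plateau

inflation = [
    round((price_level[t] / price_level[t - 1] - 1) * 100, 1)
    for t in range(1, len(price_level))
]
print(inflation)  # [0.0, 10.0, 0.0, 0.0] -- higher level, but inflation returns to zero
```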

Similarly, focusing on only the supply-side shocks provides no reason for why inflation remained elevated long after supply chain bottlenecks had cleared and shipping prices had fallen.

The incentive shift that occurs in concentrated markets is key to understanding this. In a competitive market, firms’ response to a surge in demand is to produce more. But when the market is concentrated and some level of implicit coordination is possible, increased production is actually against a firm’s best interest; it would just put the industry back at that first equilibrium from earlier. Firms want to enjoy the high prices and hang out in the second equilibrium as long as they can.

Sellers’ inflation, at least, has an internal mechanism that can explain how we got from one-off shocks to the economy to sustained inflation. Yet its opponents wrongly describe what that mechanism is. Remember the story from earlier: the motive of profit maximization, the means of market power in concentrated industries, and the opportunity of existing inflation. The most basic objection to this mechanism is to mischaracterize it as blaming sustained upward pressure on prices on an increase in the level of greed among corporations. That’s what economist Noah Smith did in a number of blogs that have aged quite poorly. But no one is seriously arguing companies are greedier, only that there is an innate level of greed, which conventional models also assume. 

The strawmanning continues when we get to the means, which is what this Business Insider piece by Trevon Logan of Ohio State does by pointing out how Kingsford charcoal tried and failed to rent-seek by raising prices, which just caused it to lose market share to retailers’ generic brands. Exactly! The competition in the charcoal market demonstrates why consolidation is a key ingredient in sellers’ inflation. If Kingsford had a product without so many generic substitutes, then consumers would not have had the chance to switch products. And that’s why a lot of the biggest price hikes occurred with goods like gas, meat, and eggs, all of which are controlled by cartel-esque oligopolies.

The opportunity component actually seems to be a point that there’s broad agreement on. For example, Conlon says that the “idea that firms might raise prices by more than their costs is neither surprising nor uncommon.” He goes on to suggest, however, that this is likely because firms expect costs to continue rising. There’s certainly an element of truth to that, but also consider the basic motivation of corporations: maximizing profits. As a result, if companies expect their costs to rise by, say, 5 percent over the next year and they’re going to adjust prices anyway, why not raise prices by 7 percent, more than enough to offset expected cost increases? 
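Toy numbers make the incentive plain. Assuming a hypothetical baseline where costs are 80 percent of revenue, a 5 percent cost increase met with a 7 percent price increase (holding units sold fixed) expands profit by 15 percent.

```python
# Toy numbers for the 5-percent-cost / 7-percent-price example above.
# Assumes a baseline with costs at 80 percent of revenue; purely illustrative.

revenue, costs = 100.0, 80.0
profit = revenue - costs                     # 20.0

new_costs = costs * 1.05                     # costs rise 5%  -> 84.0
new_revenue = revenue * 1.07                 # prices rise 7% -> 107.0 (units held fixed)
new_profit = new_revenue - new_costs         # 23.0

print(round(new_profit / profit - 1, 2))     # 0.15 -> a 15% jump in profit
```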

The theoretical case against sellers’ inflation is, as Eric Levitz noted, “deeply confused”; he was just wrong about which side was getting stumped.

The Empirical Case Against Sellers’ Inflation

The other side of the opposition to sellers’ inflation focuses on the empirics. To be fair, there’s certainly more work that needs to be done. But that’s about as far as the critique goes. The response is just “the data isn’t there.” I’ll refer you to Groundwork’s excellent work on executives saying that they are raising prices beyond costs, Weber’s paper, the Boston and San Francisco Fed papers, Bloomberg’s findings about larger firms charging higher prices, Linger et al.’s case study of concentration and price in rent increases, and the IMF working paper. 

Setting aside the very real empirical evidence in support of seller’s inflation, the argument about a lack of empirics still gives no reason to default to the traditional model of inflation. Even if we accept a lack of data for sellers’ inflation, we have quite a lot of data that directly contradicts the mainstream story. Surely, something unproven is still preferable to something disproven.

Some economists, like Olivier Blanchard, have raised questions about methodology and the need for more work. Great! That’s what good discourse is all about; being skeptical of ideas is fine, as long as you don’t throw them out on gut instinct. Unfortunately, critics often simply reject the theory rather than express skepticism. When they do, they often fall into the same methodological gaps of which they accuse “greedflation” proponents. For example, Chris Conlon egregiously conflates correlation and causation when crediting the Fed’s monetary policy. And Brian Albrecht takes issue with inductive logic while siding with a traditional story that makes up ever more convoluted, illusory concepts.

So Where Does That Leave Us?

The traditional model of inflation is broken. The Phillips curve is no longer a useful tool for understanding inflation, a wage-price spiral flies in the face of reality, and there’s no viable alternative mechanism for sustained inflation within the demand-side model. Enter sellers’ inflation.

From the same starting point, and drawing on several cornerstone pieces of economic theory, sellers’ inflation provides a consistent vehicle for one-off shocks to create prolonged upward pressure on price levels as firms exercise their market power. The bedrock ideas of the theory are consistent with seminal economic thought from the likes of David Ricardo and even Adam Smith himself, and the theory has the support of a number of subject matter experts. Is it a perfect theory? No, but to paraphrase President Biden, don’t compare it to the ideal; compare it to the alternative. More empirics would be preferable, but the case for sellers’ inflation remains much stronger than the case for a fiscal stimulus igniting a wage-price spiral, which is anathema to most of the evidence we do have.

One way or another, inflation is trending down and, by some measures, is closing in on the target rate again. Many have rushed to credit the Federal Reserve for following the textbook course, but they don’t have any internal story for how the Fed could have done that without increasing unemployment. As Nobel laureate Paul Krugman (who supported rate hikes and once bashed the theory of sellers’ inflation) asked, “Where’s the rise in economic slack?” The conventional story is missing its second chapter, and yet its advocates are eager to point to an ending they can’t explain as all the justification they need to avoid reconsidering their priors. One possibility Krugman notes, which Matthew Klein explicates here, is that inflation really was transitory the whole time: the sharp upward pressures were indeed caused by one-off shocks from the pandemic, supply chains, and Russian aggression, but the effects had unusually long tails. This theory aligns very well with sellers’ inflation; corporate price hikes could simply be the explanation for such long-lasting effects.

Additionally, as Hal Singer pointed out, the recent drop in inflation corresponds to a downturn in corporate profits. Some, including Noah Smith (in that tweet’s comments), disagree and argue that both lower profits and lower inflation are caused by new slack in demand. But that doesn’t really match what we’re seeing across the macroeconomic data. True, employment growth has slowed, as has the growth of personal consumption, but that still doesn’t match the type of deflationary pressure we were supposed to need; Larry Summers was citing figures as high as 6 percent unemployment. Plus, the metrics that do show demand softening mostly show that employment and consumption are steadying, not decreasing. On top of that, the contraction in output that The Wall Street Journal identified makes the case for simple shifts in demand driving price levels dubious. And if a wage-price spiral were at fault, leveling off employment growth would not be enough; the labor market would still be too tight (i.e., inflationary), which is exactly why we would supposedly need to increase unemployment.

Good economic theories always need more work to apply them to new situations and produce quality empirics. But pretending that sellers’ inflation is a wacky idea and that the conventional macro story maps perfectly onto the economy of the past three years is thumbing your nose at the most complete story available, significant empirical evidence, and centuries of economic theory.

Dylan Gyauch-Lewis is Senior Researcher at the Revolving Door Project.

The Federal Trade Commission’s scrutiny of Microsoft’s acquisition of game producer Activision Blizzard did not end as planned. Judge Jacqueline Scott Corley, a Biden appointee, denied the FTC’s motion for a preliminary injunction, ruling that the merger was in the public interest. At the time of this writing, the FTC is pursuing an appeal of that decision to the Ninth Circuit, identifying numerous reversible legal errors that the Ninth Circuit will assess de novo.

But even critics of Judge Corley’s opinion might find agreement on one aspect: the relative lack of enforcement against anticompetitive vertical mergers in the past 40+ years. As Corley’s opinion correctly observes, United States v. AT&T, Inc., 916 F.3d 1029 (D.C. Cir. 2019), is the only court of appeals decision addressing a vertical merger in decades. Absent evolution of the law to account for, among other recent phenomena, the unique nature of technology-enabled content platforms, the starting point for Corley’s opinion is misplaced faith in case law that casts vertical mergers as inherently pro-competitive.

As with horizontal mergers, the FTC and Department of Justice have historically promulgated vertical merger guidelines that outline analytical techniques and enforcement policies. In 2021, the Federal Trade Commission withdrew the 2020 Vertical Merger Guidelines, with the stated intent of avoiding industry and judicial reliance on “unsound economic theories.” In so doing, the FTC committed to working with the DOJ to provide guidance for vertical mergers that better reflects market realities, particularly as to various features of modern firms, including in digital markets.

The FTC’s challenge to Microsoft’s proposed $69 billion acquisition of Activision, the largest proposed acquisition in the Big Tech era, concerns a vertical merger in both existing and emerging digital markets. It involves differentiated inputs—namely, unique content for digital platforms that is inherently not replaceable. The FTC’s theories of harm, Judge Corley’s decision, and the now-pending appeal to the Ninth Circuit provide key insights into how the FTC and DOJ might update the Vertical Merger Guidelines to stem erosion of legal theories that are otherwise ripe for application to contemporary and emerging markets.

Beware of must-have inputs

In describing a vertical relationship, an “input” refers to goods that are created “upstream” of a distributor, retailer, or manufacturer of finished goods. Take, for instance, the production and sale of tennis shoes. In the vertical relationship between the shoe manufacturer and the shoe retailer, the input is the shoe itself. If the shoe manufacturer and shoe retailer merge, that’s called a vertical merger—and the input in this example, tennis shoes, is characteristic of the replaceable goods that vertical merger scrutiny has conventionally addressed. If such a merger were to occur and the newly merged firm sought to foreclose rival shoe retailers from selling its shoes, those rivals would likely seek an alternative source of tennis shoes, assuming one is available.

When it comes to assessing vertical mergers in digital content markets, not all inputs are created equal. To the contrary, online platforms, audio and video streaming platforms, and—in the case of Microsoft’s proposed acquisition of Activision—gaming platforms all rely on unique intellectual property that cannot simply be replicated if a platform’s access to that content is restricted. The ability to foreclose access to differentiated content that flows from the merger of a content creator and distributor creates a heightened concern of anticompetitive effects, because rivals cannot readily switch to alternatives to the foreclosed product. This is particularly true when the foreclosed content is extremely popular or “must-have,” and where the goal of the merged firm is to steer consumers toward the platform where it is exclusively available. (See also Steven Salop, “Invigorating Vertical Merger Enforcement,” 127 Yale L.J. 1962 (2018).)

The 2020 Vertical Merger Guidelines fall short in their analysis of mergers involving highly differentiated products. The guidelines emphasize that vertical mergers are pro-competitive when they eliminate “double marginalization,” or the mark-ups that independent firms claim at different levels of the distribution chain. For example, when console makers purchase content from game developers, they may add a mark-up on that content before offering it to consumers. (In the real world of predatory pricing and cross-subsidization, the incentive to add such a mark-up is a more complex business calculation.) Theoretically, the elimination of those mark-ups creates an incentive to lower prices to the end consumer.
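
For readers who want the mechanics, the standard textbook version of double marginalization runs as follows (a stylized model for illustration; the linear demand curve and cost assumptions are mine, not the guidelines’):

```latex
% Stylized double-marginalization model: linear demand p = a - bq,
% upstream marginal cost c, no downstream costs (illustrative assumptions).
%
% Integrated firm: choose q to maximize (p - c)q
%   => p*_integrated = (a + c)/2
%
% Separate firms: the downstream retailer pays wholesale price w
% and maximizes (p - w)q  =>  p = (a + w)/2.
% The upstream firm anticipates this and maximizes (w - c)q(w)
%   => w* = (a + c)/2, so two mark-ups stack:
\[
  p_{\text{separate}} = \frac{3a + c}{4} \;>\; \frac{a + c}{2} = p_{\text{integrated}}
  \qquad \text{whenever } a > c.
\]
```

Merging removes the second mark-up; hence the theoretical downward price pressure.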

But this narrow focus on the elimination of double marginalization—and theoretical downward price pressure for consumers—ignores how the reduction in competition among downstream retailers for access to those inputs can also degrade the quality of the input. Let’s take Microsoft-Activision as an example. As an independent firm, Activision creates games, and downstream consoles engage in some form of competition to carry those games. When consoles compete on terms to carry Activision games, the result is greater investment in game development and higher-quality games. When Microsoft acquires Activision, that downstream competition for exclusive or first-run access to Activision’s games is diminished. Gone is the pro-competitive pressure created by rival consoles bidding for exclusivity, as is the incentive for Activision to innovate and to demand greater third-party investment in higher-quality games.

Emphasizing the pro-competitive effects of eliminating double marginalization—even if that means lower prices to consumers—only provides half of the picture, because consumers will likely be paying for lower quality games. Previous iterations of the Vertical Merger Guidelines emphasize the consumer benefits of eliminating double marginalization, but they stop short of assessing the countervailing harms of mergers involving differentiated inputs. They should be updated accordingly.

Partial foreclosure will suffice

During the evidentiary hearings in the Northern District of California, the FTC repeatedly pushed back against the artificially high burden of having to prove that Microsoft had an incentive to fully foreclose access to Activision games. In the midst of an exchange during closing arguments, the FTC’s counsel put it directly: “I don’t want to just give into the full foreclosure theory. That’s another artificially high burden that the Defendants have tried to put on the government.” And yet, in her decision, Judge Corley conflates the analysis for full and partial foreclosure, writing, “If the FTC has not shown a financial incentive to engage in full foreclosure, then it has not shown a financial incentive to engage in partial foreclosure.”

Although agencies have acknowledged that the incentive to partially foreclose may exist even in the absence of total foreclosure (see, for instance, the FCC’s 2011 Order regarding the Comcast-NBCU vertical transaction), the Vertical Merger Guidelines do not make any such distinction. Again, that incomplete analysis hinges in part on the failure to distinguish between types of inputs. Take, for instance, a producer of oranges merging with a firm that makes orange juice. Theoretically, the merged firm might fully foreclose rival orange juice makers’ access to oranges, sending them in search of alternative sources. Or the merged firm might supply lower-quality produce to rival firms, again sending them in search of an alternative source.

But a merged firm’s ability and incentive to foreclose looks different when foreclosure takes the subtler form of investing less in the functionality of game content on a rival gaming console, quietly degrading game features, or adding unique features to the merged firm’s own platforms in ways that will eventually drive more astute gamers to the merged firm (even though the game in question is technically still available on rival consoles). Such eventualities are perhaps easier to imagine in the context of other content platforms—for example, if news content were less readable on one social media platform than another. When a merged firm has unilateral control over those design and development decisions, subtler forms of anticompetitive partial foreclosure become more likely and more predictable.

In finding that Microsoft would not have a financial incentive to fully foreclose access to Activision games, Judge Corley’s analysis hinges on a near-term assessment of Microsoft’s financial incentive to elicit game sales by keeping games on rival consoles. (Never mind that Microsoft is a $2.5 trillion corporation that can afford near-term losses in service of its longer-view monopoly ambitions.) Regardless, a theory of partial foreclosure does not require that Microsoft forgo independent sales on rival consoles to achieve its ambitions. To the contrary, under partial foreclosure users could still purchase and play games on rival consoles, while Microsoft gradually steers consumers toward its own console and game subscription service with better game play and unique features.

Finally, Judge Corley’s analysis of Microsoft’s incentive to fully foreclose is irresponsibly deferential to statements made by Activision Blizzard CEO Bobby Kotick that the merging entities would suffer “irreparable reputational harm” if games were not made available on rival consoles. Again, by conflating the incentives for full and partial foreclosure, the court ignores Microsoft’s ability to mitigate that reputational harm—while continuing to drive consumers to its own platforms—if foreclosure is only partial.

Rejecting private behavioral remedies

In a particularly convoluted passage in the district court’s order, the Court appears to read an entirely new requirement into the FTC’s initial burden of demonstrating a likelihood of success on the merits—namely, that the FTC must assess the adequacy of Microsoft’s proposed side agreements with rival consoles and third-party platforms not to foreclose access to Call of Duty. Never mind that these side agreements lack any verifiable uniformity, are time-bound, and cannot possibly account for incentives for partial foreclosure. Yet the Court takes the adequacy of those agreements at face value, identifying them as the principal evidence of Microsoft’s lack of incentive to foreclose access to just one of Activision’s several AAA games.

In its appeal to the Ninth Circuit, the FTC seizes on this potential legal error as a basis for reversal. The FTC writes, “in crediting proposed efficiencies absent any analysis of their actual market impact, the district court failed to heed [the Ninth Circuit’s] observation ‘[t]he Supreme Court has never expressly approved an efficiencies defense to a Section 7 claim.’” The FTC argues that Microsoft’s proposed remedies should only have been considered after a finding of liability at the subsequent remedy stage of a merits proceeding, citing the Supreme Court’s decision in United States v. Greater Buffalo Press, Inc., 402 U.S. 549 (1971). Indeed, federal statute identifies the Commission as the expert body equipped to craft appropriate remedies in the event of a violation of the antitrust laws.

In its statement withdrawing the 2020 Vertical Merger Guidelines, the FTC announced it would work with the Department of Justice on updating the guidelines to address ineffective remedies. Presumably, the district court’s heavy reliance on Microsoft’s proposed behavioral remedies is catalyst enough to clarify that they should not qualify as cognizable efficiencies, at least at the initial stages of a case brought by the FTC or DOJ.

If this decision has taught us anything, it is that the agencies can’t come out with the new Merger Guidelines fast enough. In particular, those guidelines must address the competitive harms that flow from the vertical integration of differentiated content and digital media platforms. Even so, updating the guidelines may be insufficient to shift a judiciary so hostile to merger enforcement that it will turn a blind eye to brazen admissions of a merging firm’s monopoly ambitions. If that’s the case, we should look to Congress to reassert its anti-monopoly objectives.

Lee Hepner is Legal Counsel at the American Economic Liberties Project.

At some point soon, the Federal Trade Commission is very likely to sue Amazon over the many ways the e-commerce giant abuses its power over online retail, cloud computing and beyond. If and when it does, the agency would be wise to lean hard on the useful and powerful law at the core of its anti-monopoly authority. 

The agency’s animating statute, the Federal Trade Commission Act and its crucial Section 5, bans “unfair methods of competition,” a phrase Congress deliberately crafted, and the Supreme Court has interpreted, to give the agency broad powers beyond the traditional antitrust laws to punish and prevent the unfair, anticompetitive conduct of monopolists and those companies that seek to monopolize industries. 

Section 5 is what makes the FTC the FTC. Yet the agency hasn’t used its most powerful statute to its fullest capability for years. Today, with the world’s most powerful monopolist fully in the commission’s sights, the time for the FTC to re-embrace its core mission of ensuring fairness in the economy is now.

The FTC appears to agree. Last year, the agency issued fresh guidance for how and why it will enforce its core anti-monopoly law, and the 16-page document read like a promise to once again step up and enforce the law against corporate abuse just as Congress had intended. 

Why Section 5?

The history of Section 5—why Congress included it in the law and how lawmakers expected it to be enforced—is clear and has been spelled out in detail: Congress set out to create an expert antitrust agency that could go after bad actors and dangerous conduct that the traditional anti-monopoly law, the Sherman Act, could not necessarily reach. To do that, Congress crafted Section 5 so that the FTC could stop tactics that dominant corporations devise to sidestep competition on the merits and instead unfairly drive out their competitors. Congress gave the FTC the power to enforce the law on its own, to stop judges from hamstringing the law from the bench, as they have done to the Sherman Act.

As I’ve detailed, the Supreme Court has issued scores of rulings since the 1970s that have collectively gutted the ability of public enforcement agencies and private plaintiffs to sue monopolists for their abusive conduct and win. These cases have names—Trinko, American Express, Brooke Group, and so on—and, together, they dramatically reshaped the country’s decades-old anti-monopoly policy and allowed once-illegal corporate conduct to go unchecked. 

Many of these decisions are now decades old, but they continue to have outsized effects on our ability to police monopoly abuses. The Court’s 1984 Jefferson Parish decision, for example, made it far more difficult to successfully prosecute a tying case, in which a monopolist in one industry forces customers to buy a separate product or service. The circuit court in the government’s monopoly case against Microsoft relied heavily on Jefferson Parish in overturning the lower court’s order to break Microsoft up. More recently, courts deciding antitrust cases against Facebook, Qualcomm, and Apple all relied on decades of pro-bigness court rulings to throw out credible monopoly claims against powerful defendants.

Indeed, the courts’ willingness to undermine Congress was a core concern for lawmakers when drafting and passing Section 5. Three years before Congress created the FTC, the U.S. Supreme Court handed down its decision in the government’s monopoly case against Standard Oil, breaking up the oil trust but also establishing the so-called “rule of reason” standard for monopoly cases. That standard gave judges the power to decide if and when a monopoly violated the law, regardless of the language of, or democratic intent behind, the Sherman Act. Since then, the courts have marched the law away from its goal of constraining monopoly power, case by case, to the point that bringing most monopolization cases under the Sherman Act today is far more difficult than it should be, given the simple text of the law and Congress’s intent when it wrote, debated, and passed the act.

That’s the beauty and the importance of Section 5. Congress knew that the judicial constraints put on the Sherman Act meant it could not reach every monopolistic act in the economy. That’s now truer than ever. Section 5 can stop and prevent unfair, anticompetitive acts without having to rely on the precedent built up around the Sherman Act. It’s a separate law, with a separate standard and a separate enforcement apparatus. What’s more, the case law around Section 5 has reinforced the agency’s purview. In at least a dozen decisions, the Supreme Court has made clear that Congress intended for the law to reach unfair conduct that falls outside the reach of the Sherman Act.

So the law is on solid footing, and after decades of sidestepping the job Congress charged it to do, the FTC appears ready to once again take on abuses of corporate power. And not a moment too soon. After decades of inadequate antitrust enforcement, unfairness abounds, particularly when it comes to the most powerful companies in the economy. Amazon perches atop that list. 

A Recidivist Violator of Antitrust Laws

Investigators and Congress have repeatedly identified Amazon practices that appear to violate the spirit of the antitrust laws. The company has a long history of using predatory pricing as a tactic to undermine its competition, either as a means of forcing companies to accept its takeover offers, as it did with Zappos and Diapers.com, or simply as a way to weaken vendors or take market share from competing retailers, especially small, independent businesses. Lina Khan, the FTC’s chair, has called out Amazon’s predatory pricing, both in her seminal 2017 paper, “Amazon’s Antitrust Paradox,” and when working for the House Judiciary Committee during its Big Tech monopoly investigation.

Under the current interpretation of predatory pricing as a violation of the Sherman Act, a company that prices a product below cost to undercut a rival must successfully put that rival out of business and then hike prices to the point that it recoups the money it lost with its below-cost pricing. Yet for companies like Amazon—big, rich, with many income streams and sources of capital—there may never be a need to make up for below-cost pricing by raising prices on any one specific product, let alone the below-cost product. Indeed, as Jeff Bezos’s vast fortune can attest, predatory pricing can generate lucrative returns simply by sending a company’s stock price soaring as it rapidly gains market share.
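
The recoupment logic is easier to see with numbers. Here is a stylized sketch (every figure is a hypothetical assumption, not Amazon’s actual financials) of what courts currently demand, and why a diversified firm slips through:

```python
# Stylized recoupment arithmetic: every figure is a hypothetical assumption.
unit_cost = 10.00
predatory_price = 8.00       # selling below cost to undercut a rival
units_per_year = 1_000_000
predation_years = 3

# Losses the predator absorbs while driving the rival out of business.
losses = (unit_cost - predatory_price) * units_per_year * predation_years  # $6M

# Under the conventional test, those losses must later be recouped by
# raising the price of this same product above cost.
monopoly_price = 12.00
annual_recoupment = (monopoly_price - unit_cost) * units_per_year
years_to_recoup = losses / annual_recoupment  # 3.0 years

print(f"losses: ${losses:,.0f}; years of monopoly pricing needed: {years_to_recoup:.1f}")

# A diversified firm can instead cover the losses from other profit centers
# (cloud services, seller fees), so the product-level price hike that courts
# look for may never appear, and the conventional test is never satisfied.
```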

If Amazon wants to sell products from popular books to private-label batteries at a loss, it can. Amazon makes enormous profits by taxing small businesses on its marketplace platform and from Amazon Web Services. It can sell stuff below cost forever if it wants to—a clearly unfair method of competing with any other single-product business—all while avoiding prosecution under the judicially weakened Sherman Act. Section 5 can and should step in to stop such conduct.

Amazon’s marketplace itself is another monopolization issue that the FTC could and should address with Section 5. The company’s monopoly online retail platform has become essential for many small businesses and others trying to reach customers. To wit, the company controls at least half of all online commerce, and even more for some products. As an online retail platform, Amazon is essential, suggesting it should be under some obligation to allow equal access to all users at minimal cost. Of course, that’s not what happens; as my organization has documented extensively, Amazon’s captured third-party sellers pay a litany of tolls and fees just to be visible to shoppers on the site. Amazon’s tolls can now account for more than half of the revenues from every sale a small business makes on the platform. 

The control Amazon displays over its sellers mirrors the railroad monopolies of yesteryear, which controlled commerce by deciding which goods could reach buyers and under what terms. Antitrust action under the Sherman Act and legislation helped break down the railroad trusts a century ago. But if enforcers were to declare Amazon’s marketplace an essential facility today, the path to prosecution under the Sherman Act would be difficult at best. 

Section 5’s broad prohibition of unfair business practices could prevent Amazon’s anticompetitive abuses. It could ban Amazon from discriminating against companies that sell products on its platform that compete with Amazon’s own in-house brands, or stop it from punishing sellers that refuse to buy Amazon’s own logistics and advertising services by burying their products in its search algorithm. The FTC could potentially challenge such conduct under the Sherman Act, as a tying case, or an essential facilities case. But again, the pathway to winning those cases is fraught, even though the conduct is clearly unfair and anticompetitive. If Amazon’s platform is the road to the market, then the rules of that road need to be fair for all. Section 5 could help pave the way. 

These are just a few of the ways we could see the FTC use its broad authority under Section 5 to take on some of Amazon’s most egregious conduct. If I had to guess, I imagine the FTC in a potential future Amazon lawsuit will likely charge the company under both the Sherman Act and the FTC Act’s Section 5 for some conduct it feels the traditional anti-monopoly statute can reach, and will rely solely on Section 5 for conduct that it believes is unfair and anticompetitive, but beyond the scope of the Sherman Act in its current, judicially constrained form. For example, while the FTC could potentially use the Sherman Act to address Amazon’s decision to tie success on its marketplace to its logistics and advertising services, the agency’s statement makes clear that Section 5 has been and can be used to address “loyalty rebates, tying, bundling, and exclusive dealing arrangements that have the tendency to ripen into violations of the antitrust laws by virtue of industry conditions and the respondent’s position within the industry.”

Might this describe Amazon’s conduct? Very possibly, but that will ultimately be up to the FTC to decide. Suing Amazon under both statutes would invite the courts to develop Sherman Act precedent that is more critical of monopoly abuses, and would help develop the law so that the FTC can eagerly embark on its core mission under Section 5: helping to ensure markets are fair for all.

Ron Knox is a Senior Researcher and Writer for ILSR’s Independent Business Initiative.