In February 2022, I wrote a piece in The American Prospect advocating for antitrust enforcement as a means to combat inflation. I wasn’t totally wrong. In light of personal experience in price-fixing litigation and the fate of the Biden administration, however, my perspective has shifted. Antitrust can be part of the solution, but it can’t be the entirety. (And neither can Fed rate hikes.) As spelled out below, the scope of antitrust is too narrow to combat many forms of profiteering that drive inflation. And even where inflationary conduct is cognizable by antitrust law, antitrust moves too slowly to make a meaningful difference in the short run, especially over the four-year term of a president.
By several accounts, profiteering was a significant contributor to post-Covid inflation. A 2022 study by the Economic Policy Institute documented that 54 percent of the increase in prices since the trough of the Covid-19 recession in the second quarter of 2020 was attributable to larger profit margins. A 2023 study by the Federal Reserve Bank of Kansas City found that growth in markups accounted for more than half of inflation for 2021. A 2023 study by the Institute for Public Policy Research concluded that business profits rose by 30 percent among UK listed firms post-pandemic, driven by a small number of firms. And a 2023 study by the Groundwork Collaborative found that corporate profits fueled 53 percent of inflation during the second and third quarters of 2023. Per a 2025 BIS Working Paper, during the 2021–22 post-Covid period, one third of the price surge was traceable just to the largest firms in an industry. And a recent paper from the Federal Reserve Bank of St. Louis estimated that non-financial corporate profits, as a share of total economic output, increased from 13.9 percent in the years prior to Covid to 16.2 percent in the years after.
There are several mechanisms by which a firm’s margin—the difference between its price and its marginal costs and a measure of market power—can increase. One mechanism, as explained by Isabella Weber in her writings and on The Slingshot, is that a systemic cost shock can be used as a coordination device among sellers, which allows prices to rise beyond any true increase in marginal cost. A related mechanism is that companies have superior information about their costs compared to their customers. For example, when a tariff is applied to an input in a firm’s production process, it is impossible for a customer to know what portion of the cost is affected by the tariff, let alone the amount of the ever-shifting tariff. An entirely unrelated mechanism for profiteering is that companies can communicate their intentions to raise prices or cut capacity via public airwaves, especially during earnings calls; these announcements can be understood as an “invitation to collude” by rivals. The policy question is whether antitrust is up to the task of policing these exercises of market power.
The Narrow Scope of Antitrust
In broad strokes, antitrust recognizes pricing conduct as being anticompetitive when it falls into one of two buckets: (1) a unilateral price hike made possible by an exclusionary restraint; or (2) a coordinated price hike made possible by an agreement among rivals or a series of acquisitions that give rise to collective market power. An example of the former would be a single firm with market power that needed to use a restraint like a most-favored-nations provision or exclusive contract in order to raise prices. (The restraint typically must pierce the firm’s boundaries—that is, appear in a contract with buyers or suppliers.) A price hike taken solely by virtue of a firm’s lawfully acquired market power, by contrast, is not cognizable under antitrust. An example of coordinated conduct condemned by antitrust would be a group of firms sharing current or future price information via Excel files or some third-party information broker to jointly raise prices. A roll-up of small horizontal rivals by a private equity firm—think anesthesiology practices in Texas—could also be condemned under antitrust laws. A coordinated price hike achieved “tacitly” and thus without an agreement, by contrast, is outside the scope of antitrust law.
When we think about the types of price hikes that can fuel inflation, it becomes painfully obvious that antitrust cannot be the first line of defense. Consider the following not-so-hypothetical examples:
The first two tactics fall outside the scope of antitrust. And while the third is addressable via Section 5 of the FTC Act, only the FTC could bring such a challenge.
Even for conduct that falls within the narrow scope of antitrust law, prosecuting a case can take multiple years, and even then, settlements can allow perpetrators of price-fixing agreements to pay a fraction of the harm inflicted. Consider a case of coordinated price hikes made possible via a common pricing algorithm, such as RealPage, or a more primitive form of information sharing, such as the Agri Stats cases (disclosure: I’ve been an expert in two of the Agri Stats matters). Or a case of a private equity roll-up of dozens of small horizontal rivals, such as cheerleading competitions by Varsity, granting the combined entity newfound pricing power (disclosure: I was the gyms’ expert in Varsity). The complaint in Varsity was filed in October 2020, and the order approving the disbursement of settlement funds was issued in May 2025, nearly five years later. And that’s speedy for an antitrust case, in my experience.
The Makings of a New Toolkit
So what is needed to effectively police this kind of inflationary conduct? Beginning with conduct within the ambit of antitrust, in addition to prosecuting the use of common pricing algorithms via antitrust enforcement, many cities, such as San Diego, Berkeley, San Francisco, and Minneapolis, have simply banned the use of RealPage software, and others should follow suit. The Lever’s Luke Goldstein recently documented a cottage industry of “price optimization consultants” spotting price-hiking opportunities for companies in the same industry. Turning over pricing decisions, as well as competitively sensitive information, to a third party that is also advising your rivals should be banned generally. There’s no reason to wait for these “facilitating practices” to bear fruit for their clients before prosecuting; by then, the damage of higher prices has already been inflicted. Indeed, Congress should make clear that any common pricing algorithm, no matter how primitive or sophisticated—e.g., sharing Excel spreadsheets via an intermediary or chatting over the phone with a shared pricing consultant—should be per se illegal under the antitrust laws.
Similarly, there is no reason to wait for a roll-up of rental units in a neighborhood by a single entity (often private equity) to lead to rental inflation before we intervene ex post via antitrust. As I documented in an OECD paper with two co-authors, the most consolidated neighborhoods in Florida experienced the steepest increase in rents in the post-Covid era. Cities and states could address this threat ex ante by imposing a cap on the share of units that could be controlled by a single entity in a neighborhood.
Although public invitations to collude are covered by the FTC Act, Congress should extend the same policing authority to states and private enforcers. At least one federal court has decided that such cases are not amenable to private enforcement under the Sherman Act. The brazen behavior of firms, especially airlines, makes clear they perceive antitrust law to be impotent here. To wit, in March of this year, Delta and United discussed planned capacity reductions in succession at the same JP Morgan investor conference. In April, both airlines announced plans to reduce capacity in the third quarter by nearly the same amount (four percent). Given the FTC’s limited resources, the agency can’t be expected to police every perceived invitation to collude.
Moving to conduct outside of antitrust, recall that a single firm raising prices without the crutch of a restraint is permissible. Hence, antitrust cannot police episodes of firms unilaterally exploiting a crisis to pad their profits. Just as Covid served as a generalized cost shock, so too do tariffs. One pricing consultant recently bragged to DealBook that tariffs represent a “golden opportunity” to exploit customers, and explained the term “taking price,” which means using a rival’s (potentially legitimate) price hike as cover for your own price hike. And several firms, including Black & Decker, Adidas, Hasbro, and Procter & Gamble, have announced planned price increases owing to Trump’s tariffs.
A federal anti-price gouging law, as proposed by Kamala Harris during her presidential campaign, would be a good start. Price hikes would still be tolerated, so long as they could be justified by a commensurate increase in the firm’s costs. But we must go further: Industries experiencing above-average inflation should be automatically probed by a designated federal agency (either the DOJ or FTC). Egg prices were soaring, in part due to coordinated pricing in a concentrated industry as documented by Basel Musharbash, until Trump’s DOJ announced an industry probe in March. Other industries exhibiting above-average inflation, including auto insurance, should also be subjected to government probes. And the use of the bully pulpit by the president, along the lines of what JFK did to turn back price hikes by the steel industry, would also be helpful.
Yet another inflationary strategy that escapes antitrust scrutiny is surveillance pricing, sometimes referred to as dynamic pricing, in which a company adjusts prices based on the personal characteristics of shoppers or market dynamics (e.g., a school bus full of hungry soccer players arrives at a fast-food restaurant). Some states are moving to ban these practices in retailing. At a minimum, these practices should be subjected to regulatory oversight, as they have the potential to extract consumer surplus (even relative to monopoly levels) by charging each customer one penny below her willingness to pay. Even worse, this technology could lead to discriminatory pricing on the basis of race or income or time since the last paycheck. Alas, the new FTC Chairman, Andrew Ferguson, closed an inquiry into surveillance pricing initiated by his predecessor.
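To make the mechanics concrete, here is a minimal, purely illustrative sketch of how personalized pricing can extract surplus that even a uniform monopoly price leaves on the table. The willingness-to-pay figures and the $10 posted price are hypothetical, chosen only for illustration.

```python
# Illustrative only: hypothetical willingness-to-pay (WTP) values for five shoppers.
willingness_to_pay = [12.00, 9.50, 20.00, 7.25, 15.00]

UNIFORM_PRICE = 10.00   # a hypothetical single posted price
PENNY = 0.01

# Uniform pricing: only shoppers whose WTP meets the price buy, and each keeps some surplus.
uniform_revenue = sum(UNIFORM_PRICE for wtp in willingness_to_pay if wtp >= UNIFORM_PRICE)
uniform_consumer_surplus = sum(wtp - UNIFORM_PRICE for wtp in willingness_to_pay if wtp >= UNIFORM_PRICE)

# Surveillance pricing: each shopper is charged one penny below her estimated WTP,
# so nearly all consumer surplus is transferred to the seller.
personalized_revenue = sum(wtp - PENNY for wtp in willingness_to_pay)
personalized_consumer_surplus = sum(PENNY for _ in willingness_to_pay)

print(f"Uniform price:      revenue ${uniform_revenue:.2f}, consumer surplus ${uniform_consumer_surplus:.2f}")
print(f"Personalized price: revenue ${personalized_revenue:.2f}, consumer surplus ${personalized_consumer_surplus:.2f}")
```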
Thinking Outside the Box
As reported in the Times, the Catalan government has employed several remedies to address soaring rental inflation in Barcelona, including (1) imposing rental price caps last March (rents have since fallen more than six percent); (2) ending licenses for Airbnb homes and requiring owners to convert units into long-term leases at capped rates (bringing 10,000 units back into the market); and (3) teaming up with private developers to build 50,000 new units by 2030. In addition to these fairly radical interventions, the government is considering a proposal to compel landlords and banks that are holding defaulted mortgages to put 75,000 units to use for long-term rentals. Another proposal would close the loophole in Catalonia’s housing laws that allows investors to convert residential apartments into tourist rentals. Not mentioned is the notion of deregulating zoning laws—potentially helpful at the margin, but not something to bring renters short-run relief (and certainly not fodder around which to build a political campaign).
It’s time for policymakers generally and progressive authors of the next presidential transition project in particular to think outside the box. As the 2024 election made clear, voters are willing to embrace autocracy when basic needs become unaffordable. Aside from stepped-up antitrust enforcement, the Biden administration took a hands-off approach to inflation, deferring mainly to the Fed. And we know the results. Although the Fed eventually brought down inflation by raising rates, it did so at tremendous cost, putting home ownership out of reach. The high social cost of inflation militates in favor of developing a new toolkit to preserve democracy and make America affordable again.
These past few months have had more than their share of decade-long weeks. Not even three months in, the second Trump administration has already totally shattered norms and scrambled the playing field, challenging everything we thought we knew about the government’s role in the economy. We thought that Congress had the power of the purse, but now that’s become a question seeking an answer. We thought that even the president had to follow instructions from the courts, but now everyone is left to wonder if that is still the case. Once-sacred norms atrophy daily.
Yet one thing the Trump administration has cast into doubt that gets little air time is the usefulness of neoclassical economic theory in explaining the economy.
The classical school of economics generally describes the theory of the first cohort of economists in our modern understanding of the discipline—though it was still radically different from the modern iteration, much more intertwined with studies of politics and philosophy. Most famous among these early economists is Adam Smith himself. Other notable figures include David Ricardo, Thomas Robert Malthus, James Mill, and James’ (more well-known) son John Stuart Mill. Most of modern economic theory descends from this small group of English and Scottish political economists.
It bears mentioning that this is not because classical economists were the first to rigorously investigate the economy, but rather because they crystallized it into a concrete area of study, whereas previously it was considered part of moral philosophy and political philosophy and history and in the study of the classics and on and on. Indeed, most of the classical economists were also philosophers—the key concept of utilitarianism is a philosophical foundation of most economic thought.
The neoclassical school, on the other hand, was a category originally used by Thorstein Veblen to group the Austrian school of thought with the “marginalists,” whose work centered around the insights to be gained by examining effects at the, you guessed it, margins. The term was later adopted and expanded by other economists.
Over the course of the twentieth century, much of the original canon of Austrian economics, and a number of significant theoretical advancements like F.A. Hayek’s theory of prices as purveyors of information, were absorbed into the mainstream. At the same time, the demand-side economic theory of John Maynard Keynes became so accepted that—from World War II through the dawn of Reaganomics—a common refrain was that “we are all Keynesians now.” This synthesis left “neoclassical economics” as a stand-in for all of the core ideas of the discipline.
Nowadays, neoclassical economics is usually used simply to mean “mainstream” or “orthodox” economics, as opposed to heterodox schools of thought like institutional economics (of which Veblen is often considered a founder), Marxian or Marxist economics, or Modern Monetary Theory. Although it is arguably too broad of a term to be of much use, there are enough basic intellectual throughlines that we can at least gesture at a “neoclassical” school of thought.
Neoclassical economics models and theories are premised on a handful of key assumptions. They will vary slightly depending on who exactly you ask, but generally include:
These assumptions are obviously not universally true and most economists don’t believe them to be. Rather, the idea is that by reducing complexity, one can discern how various changes to a model will shift behavior, economic interactions, and, ultimately, the dynamics of a market. And once that’s done, those same general dynamics should approximate the more complicated real world.
This has always been somewhat dubious and has never been short of critics—the modern Austrian school is partly a heterodox tradition because its adherents opposed these formal, more mathematical models. Indeed, most cutting-edge mainstream economics is about relaxing neoclassical assumptions to create a richer picture that better captures human behavior. More so than actual professional economists, reporters and media personalities have embraced oversimplified models as a crutch for economic analysis. For instance, when opposing some modest intervention into a market, the talking heads insist on discussing the “Econ 101” (read: obvious) view.
The irony is that the discipline itself understands the limited use of such simplistic concepts. Econ 101 introduces concepts that are increasingly complexified in further study. Because reporters and talking heads usually haven’t studied advanced economics, much of the discourse winds up being unscrupulously grounded in the handful of assumptions outlined above. Nothing has shattered the illusion that we can understand complex situations with basic models quite like the start of the second Trump administration.
Shaking the Foundations
Trump’s recent implementation and then partial rollback of tariffs is a good case study. Although tariffs were a cornerstone of the president’s 2024 campaign, business leaders were reportedly surprised at the size and scope of Trump’s initial proffer. And investors clearly did not price such a dramatic intervention in trade policy into their expectations, as evidenced by the rapid gyrations of the stock market. It makes sense when you consider that political and business insiders often default to explaining decisionmaking via presumed rationality. The orthodox view was basically that sweeping and incoherent tariffs of this kind wouldn’t happen; because the costs so outweighed the benefits, such an intervention would clearly go against the government’s (ergo the president’s) basic self-interest. (An example of this sort of thinking beyond economics is political science’s rational state theory—a consequence of how neoclassical economics has colonized much of political science.)
Even though the tariffs have quickly been walked back—even if in the coming days, weeks, or months they are totally undone—the key issue is that, under a neoclassical framework, they would not have happened at all.
Now, one could retort that the market reacted exactly as even the most elementary model would predict; uncertainty made the prospects of financial markets less palatable, resulting in a scramble from investors to reduce their risk exposure, triggering a loss in valuation as the demand curve shifted down. True enough. But the fact that this played out so predictably is partially the point. Everyone knew that it would be economically harmful to impose blanket tariffs. It would obviously be antithetical to American financial interests. Yet the administration did it anyway.
There are basically two ways to reconcile the tariffs with a neoclassical model. First, the model could simply do away with the assumption of rationality. This would make it basically impossible, however, to use as a predictive tool (behavior would become too complex to easily anticipate). Second, the model could do away with the assumption that actors (governments, individuals) are optimizing for utility. Perhaps the White House is actually optimizing for profits for aligned businesses or for accumulating political influence. This type of tweaking of the “objective function” is much more in line with existing economics, but still represents a major break from neoclassical models.
(The fact that this sort of work is ongoing and most economists do not actually adhere to such restrictive assumptions is one good reason why “neoclassical” being used interchangeably with “modern” or “orthodox” can be confusing. Unfortunately, many pundits, journalists, and businesspeople don’t study the discipline far enough to move beyond the oversimplified worldview.)
For the administration to take an action so clearly against the nation’s interest without breaking these assumptions, it would require believing that they have information that drastically changes the calculus. Possible, but unlikely when it comes to trade, where there’s little information opacity compared to, say, intelligence and national security.
Speaking of information, the current administration has scrubbed enormous amounts of data from federal government websites and databases (some data have been made available again after litigation and public pressure). Everything from omitting the role of trans people in Stonewall to removing reams of medical data has happened at a rapid pace. Some of this information may not be immediately relevant to economic decision-making. Other times the path from that data to economic or commercial relevance is a straight line. New pharmaceutical undertakings will suffer a material harm to their research and development with fewer resources from the National Institutes of Health. The poultry industry might well miss CDC data on avian flu.
But even beyond these specific applications, the withdrawal of mass amounts of previously public data fundamentally erodes the idea that economic actors will ever have anything resembling information symmetry. Not to mention how much widespread attacks on the media compound the issue.
One final issue is regulatory uncertainty. Rational, independent decision-making requires some degree of confidence in the laws and institutions governing the market you participate in. The pushing of novel legal theories—including that oral orders from judges are not binding or that the executive branch can eviscerate congressionally mandated departments and programs—makes it nearly impossible to presume that you can accurately predict the benefits or costs of any particular decision. When even gargantuan law firms prefer deference over self-defense, confidence in the rule of law no longer grants the basic trust required in a modern, global economy.
Goodbye to the Neoclassical World
One could argue that the weakening of these norms has nothing to do with economic thought, and that it’s just dirty politics. But markets are political. Institutions create rules governing behavior, including economic behavior. And a stable set of rules is necessary for any of the assumptions undergirding neoclassical models to play out.
To the extent that we ever lived in a neoclassical world, the Trump administration is ensuring that we don’t any longer. We are long overdue for more nuanced economic discourse that doesn’t shy away from its own limitations, and that recognizes when it can and should (perhaps must) be complemented with other types of insights. As the illusion of perfect competition becomes ever more ethereal, the need for more sophisticated economic thinking and debate becomes ever more urgent.
After Elon Musk poured almost $300 million into his campaign last year, President Trump returned the favor by endowing Musk with unrestricted authority to restructure the federal government. In just over two months, Musk has usurped congressional power and initiated the dismantling of agencies like USAID, the Consumer Financial Protection Bureau, and the Department of Education. Even while the courts have paused some of Musk’s and Trump’s more egregious actions, such as firing all probationary employees, the most conservative Supreme Court in a century cannot be counted on to stop their institutional destruction. Despite the ongoing gutting of federal institutional capacity to rein in big business abuses, Americans still have robust tools for controlling corporate power, most notably the states.
Indeed, the states were the first to take action against the threat to our economic liberties posed by corporate autocracy. Iowa enacted the first antitrust law in 1888, and Kansas followed with a substantially more forceful bill that would be a model for the Sherman Antitrust Act of 1890, which itself was designed for the “preservation of our democratic political and social institutions.” Throughout the 20th century, the federal government and the states enacted policies like public utility laws aimed at regulating corporate misconduct. It was precisely these laboratories of democracy that would assist federal efforts to rein in concentrated corporate power. With democracy under siege, states must once again take up the antimonopoly mantle and use the legal tools available to them to serve as a bulwark against corporate domination and as a force for democratic renewal in America. States have at least five powerful tools at their disposal—each ready for immediate use.
First, state enforcers can pursue policies that directly enhance workers’ individual freedom, mobility, and dignity. They should start by targeting coercive contracts, or vertical restraints in antitrust parlance. Vertical restraints are contracts of domination by firms in a vertical relationship—like a franchisor and its franchisees—that, in the words of the Supreme Court, “cripple the freedom” of workers and independent businesses.
A good first step is tackling non-competes. Non-competes deprive workers of a fundamental right—the ability to quit a job and obtain better employment elsewhere. Copious research, which has been conscientiously detailed in the FTC’s rule to ban non-competes nationwide, shows that these coercive contracts have little justification, depress wages, suppress business formation, and deter businesses from engaging in more socially beneficial conduct to retain workers, such as improving working conditions.
Over the past several years, many states, like California and Minnesota, have enacted laws that substantially restrict the use of non-competes across the economy. Recently, Ohio lawmakers proposed a sweeping bill that bans non-competes and their functional equivalents. Others must follow suit.
States have also enacted other laws that target vertical restraints imposed on distributors by a supplier. For example, Maryland enacted a law that makes resale price maintenance (RPM) illegal under state antitrust law soon after the Supreme Court broadly legalized the practice under federal law in 2007. RPM restricts the price at which a distributor can sell a good, typically by establishing a price floor (or, in some cases, a ceiling). For example, an RPM contract could prohibit a retailer from selling a pair of Nike shoes below the price specified by the company. The Supreme Court once classified RPM agreements as contracts of domination that deprived businesses of “the only power they have to be wholly independent businessmen.” Like other vertical restraints, these agreements can harm workers. The effect of these agreements was made clear after a McDonald’s franchisee complained to corporate about the crushing price ceilings (think of the McDonald’s dollar menu) imposed by the company’s RPM agreements. A representative told her to “just pay your employees less.”
At least in Maryland, Schonette Jones Walker, the chief of the state’s antitrust division, expressed her office’s willingness to enforce the state’s law during a recent American Bar Association event. Again, other states must swiftly do the same by initiating lawsuits targeting vertical restraints or enacting new legislation.
Second, public enforcers have a crucial role in holding corporations accountable to the communities they impact, not only by preventing further harm but also by fostering greater responsiveness to local economies. Corporate executives—increasingly private equity financiers—often treat their workers, trading partners, and local enterprises as nothing more than commodities to be discarded at will, with no regard for community welfare or the livelihoods destroyed. The primary way this harm occurs is through mergers. Antitrust law provides states with a readily available tool to address this problem.
Congress amended Section 7 of the Clayton Act in 1950 to restrict mergers and ensure corporations were accountable to the public. Senator Estes Kefauver—one of the lead drafters of the 1950 amendments—stated during the legislative debates that:
The control of American business is steadily being transferred …from local communities to…central managers [that] decide the policies and the fate of the far-flung enterprises they control…Through monopolistic mergers the people are losing power to direct their own economic welfare.
States can use Section 7 to tackle mergers head-on, particularly because robust case law from the 1960s remains controlling. For example, in Philadelphia National Bank, the Supreme Court held that a merger forming a firm with a 30 percent market share is “so inherently likely to lessen competition substantially that it must be enjoined.” Recently, too, Colorado and Washington State successfully stopped a merger between grocery giants Kroger and Albertsons using their state laws—demonstrating that these legal pathways can be just as viable for restraining corporate power as their federal counterparts.
Third, states can enact policies that grant small businesses and workers a more direct role in governing the economy by endowing them with the power to shape the rules of the marketplace. As I have previously described in The Sling, nail salonists in New York endure terrible working conditions—including breathing in large quantities of toxic chemicals—and receive sub-living wages. New York has previously proposed to address this problem by creating a wage and standards council. The council would authorize small businesses to collectively determine the wages and work standards to which all salonists must adhere. Traditionally, such coordination among market participants violates the antitrust laws, but due to a doctrine called Parker Immunity, state legislatures are able to shield the behavior from antitrust scrutiny.
This democratic process enables market participants to shift the variables of competition that are corrosive to workers and businesses to more desirable factors such as service quality. Simply put, states exercising their power under Parker Immunity can make our markets more democratic by granting workers and firms a mechanism to voice their concerns and make collective wage and price decisions.
Many states have started recognizing the value of increased democratic coordination between market participants and enacting their own piecemeal legislation. California recently enacted a law to raise wages and improve working conditions for fast-food workers. The law establishes a council of franchisors, franchisees, and workers to determine minimum standards and wages for the industry. The council established a $20/hour minimum wage in 2024. In 2023, Minnesota enacted a similar law for nurses.
States should also use Parker Immunity to counterbalance the power of dominant corporations. For example, over the last few decades, digital platforms like Google and Meta have extracted billions in digital advertising while squeezing the news industry—profiting from its content without fair compensation. This stranglehold over the technology and information pipeline has left news outlets struggling to survive.
The proposed federal Journalism Competition and Preservation Act and many state counterparts aim to help the beleaguered news industry by authorizing news outlets to collectively negotiate fairer terms with Google and Meta regarding the distribution and web crawling of their content. Although the bill passed out of the Senate Judiciary Committee with bipartisan support, Senator Schumer did not bring the bill to a full Senate vote, reportedly due to a conflict of interest. State lawmakers should adopt legislation to help vulnerable market participants aggregate their power to secure fair wages, prices, and working conditions.
Fourth, states can ensure all businesses have an equal opportunity to succeed on their own merits. In particular, states can ensure businesses treat their trading partners and consumers on non-discriminatory grounds by imposing common carriage obligations (CCOs) onto their business operations.
CCOs are ancient in our law, with roots that extend back to the Code of Hammurabi. CCOs are simple. Whether through the courts’ common law or an enacted law, firms classified as “common carriers” must treat consumers and their trading partners on non-discriminatory terms. Common carriers must offer reasonably similar terms and prices to all customers. For example, if Meta were a common carrier, it could not arbitrarily prohibit news from being transmitted on its platform or modify its algorithm to preference some news outlets over others. Likewise, if Amazon were a common carrier, it could not strike special deals with large consumer goods manufacturers or penalize marketplace sellers for not using its logistics services.
While too often honored in the breach, the principle of equal, non-discriminatory treatment has been an important part of American public policy. It is enshrined in foundational documents, from the Declaration of Independence to the 5th and 14th Amendments of the Constitution. This principle has also shaped key legislation. For example, the Robinson-Patman Act prohibits discriminatory pricing practices, making it unlawful to grant preferential treatment to certain trading partners. Similarly, the Civil Rights Act of 1964 ensures equal access to commerce and employment by prohibiting discrimination based on race, color, religion, sex, or national origin.
CCOs embed the principle of equality into our economic life and therefore strike at the heart of oligarchy by substantially limiting corporate power over business relationships. They ensure that firms and individuals cannot be denied access to essential channels of commerce or subjected to unfair pricing and terms. Indeed, in the early 20th century, CCOs were seen as a “solution to the trust problem.”
State legislatures can enact legislation, or state AGs can initiate lawsuits to have courts designate dominant, oligarch-controlled firms as common carriers. Currently, Ohio’s state attorney general is in a protracted battle to classify Google as a common carrier. If successful, Ohio’s lawsuit could provide a template and incentive for other state law enforcers to replicate.
Fifth, some state AGs can directly structure the marketplace to require businesses to engage in competition that enhances the public’s welfare, job creation, and innovative activity, by using their law enforcement powers against “unfair methods of competition.” While many states can initiate lawsuits piecemeal, 12 state AGs are empowered to declare a specific business practice unlawful as an “unfair method of competition.” The laws of these states also contain reference clauses that align their interpretation with federal case law, which currently maintains an expansive interpretation of what constitutes an unfair method of competition.
Using this authority, state AGs can demonstrate their commitment to deploying every available regulatory tool to protect consumers, workers, and fair competition. As “The People’s Lawyer,” a state AG can not only quickly establish bright-line rules defining unlawful conduct, but also swiftly recalibrate how firms compete in the marketplace and how consumers and workers are treated under state law. By prohibiting business practices like those detailed in this article, public enforcers help uphold democracy by reinforcing the idea that democratic institutions, not private monopolies, should govern the economy.
Of course, enacting new legislation takes time and political will, and state legal departments are notoriously understaffed and under-resourced. But action is imperative. With Trump and the world’s richest man gutting critical parts of the federal government, either states take up the challenge to be one of the last defenses against oligarchy, or the public must come to terms with the fact that every layer of the American system of government has failed to protect them.
While state actors contemplate how to act, the public is already demanding change. As Senator Bernie Sanders’ current National Tour to Fight Oligarchy and the nationwide Hands Off protests against the Trump Administration demonstrate, millions of Americans are already mobilizing to resist corporate rule. The only question now is whether state enforcers and lawmakers will march alongside them.
Daniel A. Hanley is a Senior Legal Analyst at the Open Markets Institute. You can follow him on X, Bluesky, and Mastodon @danielahanley.
Free trade is under increasing attack by both the progressive left and the populist right. Although the left and the right offer different policy solutions—the progressive left stresses combining industrial policy, antitrust policy, and support for labor with targeted tariffs, while the populist right advocates a wider use of tariffs combined with stricter immigration policy—supporters of both of these groups no longer adhere to the neoliberal free trade approach advocated by most economists. Are both of these voices misinformed about economics?
We argue that it is neoliberal economists who are wrong about the economics of trade. Economics textbooks and popular work by economists typically hide the unrealistic assumptions that are required to conclude that free trade as practiced by the United States is a beneficial policy overall, meaning that it is welfare-improving.
The Case for Free Trade: Comparative Advantage
The basis for the economic claim that free trade is beneficial is the early 19th century British political economist David Ricardo’s theory of comparative advantage. The basic logic is that it is always more efficient for each party engaged in trade to specialize in what they do best. Per this logic, even if your spouse can earn more money than you and is also better at childcare, so long as you are relatively better at childcare than at earning money, the childcare should be assigned to you. Likewise, if each nation specializes in what it does best, and then trades with other nations for other goods, everyone benefits.
In the classic textbook treatment, the benefits of comparative advantage are expressed in diagrammatic form. For example, the famous textbook of Samuelson and Nordhaus (2010) uses the following graph to prove the loss in social surplus caused by tariffs.
In this graph, imposition of the tariff causes the domestic price level to rise from 4 to 6. This price increase causes domestic consumption to fall from 300 to 250 units, which in turn causes consumer surplus to fall by area C. Interpret the “world price” horizontal line LF as the foreign supply curve and the foreign marginal cost curve, and interpret the “domestic supply” line SEHS as the domestic supply curve and the domestic marginal cost curve. The tariff causes domestic production to rise from 100 to 150 units, and area A is the increase in production costs caused by this shift from low-cost foreign producers to higher-cost domestic producers. The sum of A and C is the loss of social surplus caused by the tariff.
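For readers who want the arithmetic behind areas A and C, here is a back-of-the-envelope sketch using the numbers above (price rising from 4 to 6, consumption falling from 300 to 250 units, production rising from 100 to 150 units). It assumes, as the textbook diagram does, that the demand and supply curves are linear between those points.

```python
# Back-of-the-envelope deadweight-loss calculation for the tariff example above.
# Assumes linear demand and supply between the quantities shown in the diagram.

world_price = 4        # price before the tariff
tariff_price = 6       # domestic price after the tariff
price_rise = tariff_price - world_price

consumption_before, consumption_after = 300, 250
production_before, production_after = 100, 150

# Area C: consumption deadweight loss (triangle under the demand curve).
area_C = 0.5 * (consumption_before - consumption_after) * price_rise

# Area A: production deadweight loss (extra cost of shifting output from
# low-cost foreign producers to higher-cost domestic producers).
area_A = 0.5 * (production_after - production_before) * price_rise

print(f"Area C = {area_C}, Area A = {area_A}, total loss of social surplus = {area_A + area_C}")
```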
This demonstration has held great sway among economists. In covering the January 2025 meeting of the American Economic Association, the New York Times reported that “free trade is perhaps the closest thing to a universally held value among economists.” To back this up, the article cited a 2016 survey by the University of Chicago’s Kent A. Clark Center for Global Markets of its panel of prominent academic economists from top universities, in which 39 out of 39 strongly disagreed or disagreed that “Adding new or higher import duties on products such as air conditioners, cars, and cookies—to encourage producers to make them in the U.S.—would be a good idea.” In another survey performed by the Clark Center in 2018, 40 out of 40 panelists disagreed that “Imposing new U.S. tariffs on steel and aluminum will improve Americans’ welfare.” And for a more general question asked in 2012, “Free trade improves productive efficiency and offers consumers better choices, and in the long run these gains are much larger than any effects on employment,” 35 panelists strongly agreed or agreed, two were uncertain, and none disagreed.
The economic argument depicted by the diagram is unassailable, but only if several unstated assumptions hold. Although many economists highly value logical argument, they are at the same time remarkably tolerant of unrealistic assumptions of the sort we are about to discuss.
The Failed Assumptions of the Free Trade Model
There are five assumptions (two explicit and three implicit) needed to support the free-trade argument as depicted in the diagram above.
The first explicit assumption is that there is full employment in the domestic economy. It is assumed that when workers are displaced by imports, they can easily become re-employed at the same wages. If this is not the case, then removal of a tariff causes a loss of social surplus (a loss of economic rent) in the domestic labor market, which the analysis based on the above figure misses because it only depicts the output market.
Yet the assumption of full employment does not hold empirically. On the contrary, here is a revealing graph from an article by Paul Krugman (2019).
Seventy-four percent of trade-related job losses have been in manufacturing, one of the few sectors where non-college-degree-holding males could earn a good living. Contrary to free trade theory, the U.S. lost jobs, primarily in high tech, computer parts, electronics, and durable goods manufacturing. Between 2001 and 2018, EPI estimates that the U.S. lost 1,132,500 jobs to Chinese imports but gained only 175,800 jobs in industries exporting to China.
In their work on the China Syndrome, Autor, Dorn and Hanson (2013) show that the impact on labor comes “less from its economy-wide impacts than from its disruptive effects on particular regions.” These disruptions would not have occurred if full employment, and frictionless re-employment, characterized the economy.
The second explicit assumption that undergirds the free trade theory is that there are no externalities. Pisano and Shih (2012) analyze the total impact of the loss of manufacturing jobs in particular regions. The impact can be enormous, stretching far beyond a manufacturing plant. Entire towns or cities can be hollowed out. A plant closure can destroy numerous small businesses, the tax base, and many complementary businesses. In addition, workers in the nontraded sector are hurt by an increased labor supply, and their bargaining power is undermined. None of these changes in social surplus are captured in the Samuelson and Nordhaus figure above.
Besides these two explicit assumptions, policy analysts who advocate free trade often make three more implicit assumptions, as enumerated by Fletcher (2011). These are also flawed.
The first implicit assumption is that comparative advantage results in short-run efficiencies that cause long-term growth and development. The problem with this idea in practice is that comparative advantage is a static theory. The inside joke among economists is that each country should do what it is best at doing, and what underdeveloped countries are best at is underdevelopment.
The evidence is strongly contrary to this assumption. Indeed, no country has successfully developed under free trade. In his book Kicking Away the Ladder, Ha-Joon Chang reviews the development history of every developed country and shows that every one of them used significant tariffs as part of its development strategy. Joe Studwell, in How Asia Works, shows that all of the Asian Tiger countries used trade protection with government industrial policy to develop. So free trade is not a development strategy. It is a static policy that can impede development.
The second implicit assumption that undergirds free trade theory is that freely-floating currencies will keep trade balanced, limiting imports and ensuring that benefits exceed losses. This is not true: U.S. trade has not been balanced for many decades, per the Federal Reserve Bank of St. Louis.
After the U.S. liberalized capital markets in the 1980s, trade deficits were supported by capital movements into the United States. Foreigners have used their dollars to purchase U.S. securities and real estate, which does not increase U.S. productivity because it generates no new capital formation (at least directly).
The third implicit assumption is that the U.S. provides adequate compensation for job losses caused by international trade. On the contrary, Lori Kletzer (2001) analyzed the U.S. policy response to trade-induced job losses and found it to be woefully inadequate. By contrast, countries that have strong labor support policies (like strong social safety nets) are generally much better able to garner the benefit from international trade without suffering social and political costs from it.
In the absence of these five assumptions, the free trade argument is completely undermined.
And if these five weren’t enough, there is another, more basic assumption underlying the free trade argument that needs to be debunked—namely, the assumption that social surplus areas of the sort used in the graphic presentation of the free trade argument are a correct measure of welfare. For more on that, see our papers with Darren Bush.
Our position is not that free trade is never the correct policy. Comparative advantage exists; even permanent comparative advantage exists. But analysis of free trade policies should occur in a real-world framework, not one that makes important assumptions which do not hold, even approximately, in the real world.
Representative Ben Cline, apparently an ardent supporter of antitrust laws, has introduced a bill eliminating the Federal Trade Commission (FTC), the independent federal agency that enforces the antitrust laws. Elon Musk, possibly the head of DOGE although it isn’t quite clear, is on board with the bill.
Some have argued that the elimination of the FTC as an independent agency has already happened. And unlike former Representative John Mica’s past proposals, this one would not turn the FTC building into a museum.
The One Agency Act proposes transferring all antitrust matters to the DOJ, including staff (for now) and budget (for now). Cline, who represents Virginia’s Sixth Congressional district, explained the rationale for the bill in his press release:
For far too long, our antitrust enforcement has been plagued by bureaucratic infighting and delays that hinder competition…These inefficiencies have allowed sophisticated entities to manipulate the system to their advantage, escaping accountability for their anti-competitive actions. It’s time we address these issues head-on. We need to streamline and reinforce our antitrust enforcement within the Justice Department. The Department is more directly accountable to the American people and is structured to deliver the decisive enforcement necessary to protect consumers and ensure a fair marketplace.
For true believers of government efficiency, however, the bill doesn’t go far enough. Sure, there are obvious “efficiencies” from reducing enforcement agencies from two to one, as well as the predictable eventual cutting of enforcement budgets. But the problem for reformers like Rep. Cline is that there are still other enforcers of the antitrust laws, beginning with the states.
If the goal is really to consolidate enforcement into a single body, Congress needs to eliminate the states from parens patriae federal antitrust enforcement. Sometimes, states forget the principle of federalism under which, if the federal enforcement agencies choose not to prosecute an action, the state antitrust agencies should bow out as well. (At least, I’m sure there is a body of literature somewhere that says this.) While there is some coordination between the states via the National Association of Attorneys General, it would be even more efficient to reduce that number to one, given the massive economies of scale associated with a single provider of antitrust enforcement. Let the states litigate state antitrust laws in state court, until the Supreme Court rules 6-3 that state antitrust laws are preempted by federal antitrust laws.
But even that maneuver leaves too many antitrust enforcers. Private plaintiffs are uniquely situated to know what’s happening in the market. They have infinite resources compared to federal and state governments, and treble damage recoveries are plentiful. Standing is never an issue, even for direct purchasers. At least these are things I’m told.
For members of the efficiency cult, the optimal number of antitrust enforcers is really zero. If it is true that Type I errors are a bigger worry than Type II errors, that most mergers produce efficiencies, that the rule of reason ought to dominate Section 1 behavior, and that Section 2 cases should be rare, then maybe it is best to be done with the whole thing. Why force companies, for example, to file HSR forms when the vast bulk of mergers are efficient? Why subject defendants to rule of reason when plaintiffs mostly lose?
Why employ law firms to defend obviously beneficial activities? After all, according to FTC Chairman Ferguson, the ABA is a left-wing political organization. Why fund it? And don’t get me started on economists, billing $1,000+ an hour and increasing the costs associated with consummating an efficient merger. Think of all the cost savings from getting rid of an entire class of people who, by their own admission, think the antitrust laws are hurting their clients. Rep. Cline and his ilk are not the first to suggest the neutering of antitrust. Of course, others are more adept at killing it slowly, while some seek a quick death for it.
That’s all Congress can do to make society better and promote consumer welfare, or total surplus, or trading partners or price or output or abundance, or whatever the hell they are doing.
But the President can do more. Canada and Greenland have antitrust enforcement agencies, for example. We need to stop that nonsense. Maybe a merger?
It would be a start.
This is either a job application for Project 2029 or an April Fool’s message.
Last October, the WNBA’s players opted out of their collective bargaining agreement. After very little news for months, stars like Napheesa Collier, Angel Reese, and DiJonai Carrington all indicated recently that the players will hold out if the WNBA doesn’t start paying the players better. The statements suggest a labor dispute might be coming to the WNBA. But who is actually participating in the dispute?
This may seem like an odd question to ask. In the NBA, any labor dispute is clearly between the NBA players and the owners of the NBA teams. That’s because the owners of the teams in the NBA also collectively own the league. The WNBA, by contrast, doesn’t quite work this way.
The WNBA was created by David Stern and the NBA in 1996. At the time, the American Basketball League existed and employed some of the major stars in women’s basketball. But by 1998, the WNBA had driven the ABL out of business and a monopoly in North American professional basketball was established. Until 2002, the NBA owned all the teams in this monopoly (i.e., every team in the NBA and WNBA).
After 2002, the NBA brought in some independent owners for the WNBA. If the WNBA functioned like the NBA, the owners of the teams would also own the league. But the WNBA is quite different. Today, the owners of the WNBA teams technically only own 42% of the league. The NBA itself owns another 42% while investors in a $75 million capital infusion in 2022 own the last 16%. Because six of the fourteen WNBA franchises are also owned by people who own NBA teams (and NBA owners also participated in the 2022 capital raise), the owners of the NBA still own more than 60% of the WNBA.
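A rough sketch shows why the figures above imply a stake of more than 60 percent. It assumes the team owners’ 42% is split evenly across the fourteen franchises and sets aside the NBA owners’ (unknown) share of the 2022 capital raise, which only pushes the total higher.

```python
# Rough estimate of the NBA owners' effective stake in the WNBA, using the shares cited above.
# Assumes the WNBA team owners' collective 42% is spread evenly across 14 franchises.

nba_league_stake = 0.42          # the NBA's direct stake in the WNBA
wnba_team_owners_stake = 0.42    # stake held collectively by WNBA team owners
capital_raise_stake = 0.16       # 2022 investors (NBA owners also participated; exact share unknown)

franchises = 14
franchises_owned_by_nba_owners = 6

# Portion of the team owners' stake attributable to NBA owners.
nba_owners_via_teams = wnba_team_owners_stake * (franchises_owned_by_nba_owners / franchises)

effective_nba_owner_stake = nba_league_stake + nba_owners_via_teams
print(f"NBA owners' stake before counting the capital raise: {effective_nba_owner_stake:.0%}")
# Any NBA-owner share of the 16% capital raise pushes this above 60%.
```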
The reality of the NBA’s ownership of the WNBA was made clear by Suzanne Abair, CEO of the Atlanta Dream, in Slaying the Trolls: “If [all] the WNBA owners say they want to do something and the NBA says no, the answer is no.” This means that the potential labor dispute in the WNBA is effectively between the WNBA players and the NBA owners. And because the NBA owners are making all the choices in professional basketball, the responsibility for the poor pay of WNBA players today lands squarely at the feet of the NBA owners who control professional basketball in North America.
Choosing to pay women less
A simple comparison illustrates how badly the NBA is paying the women it employs. In 1971, Walt Frazier signed a 5-year contract with the New York Knicks that was later revealed to be worth $300,000 per year. In 2024, the highest-paid players in the WNBA were paid only $241,984. These numbers are not adjusted for inflation. Frazier actually saw more dollars five decades ago than WNBA players are seeing now. If you adjust for inflation, Frazier was paid nearly ten times more than a WNBA player was paid this last season.
If we look at league revenues, however, the NBA in the early 1970s and the WNBA today look quite similar. When the NBA was paying Walt Frazier $300,000 per year, the entire league had only about $30 million in revenue. If we adjust that figure for inflation, the NBA in the early 1970s reported about $200 million in revenue in today’s dollars. According to Bloomberg, in 2023 the WNBA reportedly earned about the same amount. But the WNBA is only paying about 10% of its revenues to its players (the “revenue share”). In contrast, the NBA in the early 1970s was likely paying a revenue share close to 50%.
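Here is the back-of-the-envelope math behind the “nearly ten times” comparison, assuming an inflation multiplier of roughly 7.7 between 1971 and 2024 (a CPI-based approximation on my part):

```python
# Rough check of the salary comparison above.
CPI_FACTOR_1971_TO_2024 = 7.7    # assumed inflation multiplier, roughly CPI-based

frazier_salary_1971 = 300_000
wnba_top_salary_2024 = 241_984

frazier_in_2024_dollars = frazier_salary_1971 * CPI_FACTOR_1971_TO_2024
print(f"Frazier's 1971 salary in 2024 dollars: ~${frazier_in_2024_dollars:,.0f}")
print(f"Ratio to the top 2024 WNBA salary: ~{frazier_in_2024_dollars / wnba_top_salary_2024:.1f}x")
```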
The NBA has never paid the men it employs as badly as it is paying women today. Even in the 1950s, when the NBA only had about 10% of the revenues the WNBA reports today, NBA players were paid a revenue share of 40%. So WNBA players have every reason to be upset.
Choosing the revenues in professional basketball
Napheesa Collier made it clear what she wants the NBA to do to fix this problem:
We’re not asking for the same salaries as the men, we’re asking for the same revenue shares. That’s where the big difference is. We get such a small percentage of revenue share right now that affects our salary. We’re asking for a bigger cut of that, like more equitable to what the men’s revenue share is. It wouldn’t get us anywhere close to their salaries, we’re not asking for the same salaries, we’re asking for the same cut of the pie of what is made in our league.
This seems like a simple request. Unfortunately, the revenue picture in the WNBA going forward is hardly simple to understand.
This past summer the NBA signed a new 11-year media deal worth $76 billion. In announcing the deal, the NBA revealed that $2.2 billion of this money would go to the WNBA. Yet this amount was reportedly not determined by the free market. As Kurt Badenhausen reported at Sportico at the time: “the Washington Post reported that the new media deals would not assign a specific figure to the WNBA rights but would be determined by the NBA instead.”
At the time, Cheryl Miller questioned the value the NBA assigned to the WNBA:
I’m not great with numbers, low-ball (offer). That’s a low-ball. You’re saying how much? Not enough. Not even close. Now, I’m not trying to inflate it a whole lot — ($2 billion) is nice, ($8 billion) would be better.
It is important to emphasize, as Miller states, that the NBA is “saying” how much the deal is worth. And as Miller argues, what the NBA is choosing to give the WNBA seems very low. Indeed, my own analysis of television ratings indicates that maybe $10 billion is closer to the true value of the WNBA media rights.
If the NBA is low-balling the WNBA by $6 billion or even $8 billion across 11 years, then the WNBA is transferring at least $500 million per year to the NBA. And that means part of the pool of revenue the WNBA players generate might be used to pay NBA players and NBA owners in the future.
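The per-year figure follows from spreading the estimated shortfall over the 11-year deal; a quick sketch using the $6 billion and $8 billion estimates above:

```python
# Spreading the estimated media-rights shortfall over the 11-year deal.
deal_years = 11
for shortfall in (6_000_000_000, 8_000_000_000):
    per_year = shortfall / deal_years
    print(f"${shortfall/1e9:.0f} billion shortfall -> ~${per_year/1e6:.0f} million per year")
```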
This very much complicates the story. WNBA players can’t just ask for a higher percentage of revenues. They also need to ask how much of the revenues they are generating is being used to subsidize the NBA.
Forcing the NBA to make different choices
Of course, this is not the story the NBA tells. The story the NBA has often told is that it subsidizes the WNBA, and that the WNBA isn’t profitable.
For those who have studied the history of labor disputes in the NBA, this is a very familiar story. The NBA argued it wasn’t profitable when it campaigned for a merger with the ABA in the early 1970s. In 1983, the NBA argued it wasn’t profitable when it got the players to accept a cap on payrolls. And in 2011, the NBA returned to this argument when it got the players to accept a pay cut.
Given this history, it is not surprising that just three days before the WNBA players opted out of their collective bargaining agreement with the league, the New York Post reported that, according to NBA sources, the WNBA lost $40 million this past season. This was not the first time the NBA made such a claim. But in the past the claim was just that the WNBA lost $10 million per year.
Just as we saw throughout NBA history, these assertions of WNBA losses were not supported by any evidence. And such a claim in 2024 seems especially ridiculous. After all, the WNBA reported that the league achieved records with respect to television ratings, attendance, merchandise sales, and social media engagement. Given this growth, and given the players' paltry salaries, one wouldn't expect any losses. The NBA argued that not only were there losses, but that the losses were four times higher than what it had asserted in the past.
Again, such assertions are not surprising. The NBA has used this strategy to take money from its players for decades. And traditional media outlets, like the New York Post and New York Times, seem quite happy to help the NBA spread its stories about financial losses.
But the WNBA players have something NBA players never had in the past. The WNBA players have access to social media. They can now speak directly to their fans and tell them that the NBA is choosing to pay women of the WNBA a lower wage than the NBA paid men fifty years ago, and a massively lower revenue share compared to the NBA today. And the players can also tell their fans that the NBA owners have chosen to have the WNBA subsidize the NBA in the future.
Will this lead the players to victory? Perhaps not. But if you’re a fan of the WNBA, it’s pretty clear which side you should support in this dispute. The NBA owners are exploiting players by claiming losses, just as they’ve done many times in the past. It is time to frame this labor dispute as just another choice made by the NBA.
For years, American tech titans have insisted that AI will be the defining advancement of the 21st century, and that we need to give them billions in public funds for the United States to maintain its spot as the world's foremost technological developer. And many pundits and politicians leapt to champion that position. They all got caught with their pants down.
It turns out that you can develop cutting edge AI with orders of magnitude less money and energy than we've been told. The Chinese firm DeepSeek managed to create a program that draws even with or beats just about everyone else for under $6 million. Their reported hardware setup also consumes less than ten percent as much power as OpenAI's hardware (on which ChatGPT-4 was trained), along with using only about eight percent as much memory bandwidth. And DeepSeek did all of this using hardware that was specifically designed to prevent Chinese firms from being able to compete with the American giants.
DeepSeek was able to beat Silicon Valley with a couple of thousand (second-tier) GPUs and $5.5 million. Meta AI was publicly aiming for 600,000 (top-tier) GPUs. Sam Altman said he needed $7 trillion to achieve his ideal program. President Trump announced a partnership between SoftBank, OpenAI, Oracle, and MGX called "Stargate." The project aims to spend $500 billion on new AI infrastructure, including $100 billion to be disbursed "immediately."
The name is a bit ironic; the whole Stargate sci-fi franchise is premised on protecting people from being exploited by a ruling class that controls superior technology. Practically every villain in every show and movie is an extractive overlord who rules with tech superiority, a literal hostile AI that turned against its creators, or both. It's doubly ironic because every entry in the franchise also has a theme of doing more with less. But hey, that's pretty on-brand for guys who are ostensibly inspired by the (famously lefty) Star Trek series yet rant and rave against DEI.
On a Spending Spree
According to Intelligent CIO, the United States invested some $200 billion more in AI than China from 2019 to 2023. In fact, U.S. investment almost doubled (188 percent) what the next three leading countries did combined.
Data from Intelligent CIO, “USA leading the charge on AI investment”
With that kind of funding edge, American AI titans should be leagues ahead, virtually untouchable. So why the panic in January of 2025 when DeepSeek shocked the entire industry, triggering a loss in valuation of over $500 billion in chip-maker Nvidia?
To answer that, we need to understand the political economy of AI that its proponents have been peddling. Artificial intelligence, the argument goes, is slated to be one of the most transformative technologies in human history, perhaps simply the most transformative. AI has been promised as the solution to just about every worldly problem, not to mention a fair few metaphysical ones too.
Climate change? AI will be able to solve it. Poverty? AI will usher in an unprecedented wave of productivity and growth that will create more than enough spoils for everyone. The issue of limited resources itself? AI is the key to moving to a post-scarcity economy (like Star Trek, but without those pesky ideals of acceptance and diversity).
This may sound like a ludicrous exaggeration, but a lot of players in Silicon Valley genuinely hold this belief. See, for instance, Marc Andreessen's "techno-optimist" manifesto, which declares, with a totally straight face, that "We believe that there is no material problem—whether created by nature or by technology—that cannot be solved with more technology."
Or, take a look at former Google CEO Eric Schmidt, who said that we should abandon climate mitigation policies and double down on AI data centers so that we can wait for an algorithmic solution.
One doesn’t need to be a luddite to question this absolute faith in technology. What happens when we’ve let the Earth burn and AI finally gives us the answer? What happens if it’s 42? What if AI simply advises us to cut emissions, like scientists have been urging for decades?
For decades, the cult of a technological cure-all has festered in the frontal lobe of the American political class. Dreams of Mars colonies and interstellar civilizations and a sci-fi future have permeated our public consciousness. And the panacea peddlers have used their capture of our imagination to entrench their self-serving extractive institutions, funneling more and more resources to their gated future. The American AI giants were caught off guard because they fundamentally believed that they were entitled to their hoard without challenge.
The government and public have been led into siphoning enormous amounts of money and attention into an AI arms race that was supposed to be an easy lay-up. Just for a little Chinese company to block the shot in their face.
Even accepting that developing cutting edge AI domestically is important—which is an entirely separate debate—the entire economic paradigm under which the United States has been building is clearly undermined by DeepSeek.
The Political Economy of AI
At the core of the American model of AI development is devotion to the idea of scalability: If you want to train an algorithm to be orders of magnitude more powerful, the trick is to scale up its footprint, usually also by orders of magnitude. That's why, despite OpenAI training ChatGPT-4 on 25,000 last-generation processors, Meta AI felt the need to spend 2024 setting up 600,000 cutting-edge processors. If they want to beat OpenAI, they need to outgun them, right?
Well, no. It turns out that this isn’t really the case. Or even if it is partially true, meaningful innovation can occur without this type of scaling.
To preserve Silicon Valley's dominance, the federal government actually banned the export of the current top-of-the-line processor, the NVIDIA H100 GPU. In response, NVIDIA developed an inferior line of chips specifically to export to Chinese firms without giving them a chance to threaten American AI firms. That chip, the H800, is still a meaningful upgrade over the older A100 (what OpenAI used for ChatGPT-4). But even so, the H800 has only 60 percent of the memory bandwidth (how much data can be read/written in a given time interval), half the max power draw, and only 76 percent as many trillions of operations per second (TOPs). See the chart below for a more detailed breakdown.
| A100 | H100 | H800 |
Memory Bandwidth | 1.935 TB/s | 3.35 TB/s | 2 TB/s |
TOPs | 624 | 3958 | 3026 |
TFLOPs | 312 | 3958 | 3026 |
Max Power | 300 W | 700 W | 350 W |
Sources: Stats for the H100 and A100 are from NVIDIA’s website. Stats for the H800 are from Lenovo.
Like TOPs, TFLOPs (trillions of floating point operations per second) is also a measure of how many computations can be performed in a second. The main difference is that TOPs count operations on integers, while TFLOPs count operations on numbers containing decimals (floating points). One of the clearest improvements in the current generation of NVIDIA chips over the last generation is much greater efficiency in processing fractional data.
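For readers who want to check the comparison between the export-compliant H800 and the restricted H100, here is a minimal sketch that computes the ratios directly from the spec table above. The figures are the ones listed in that table; this is illustrative arithmetic, not a benchmark.

```python
# Per-chip specs as listed in the table above (NVIDIA / Lenovo figures).
specs = {
    "A100": {"bandwidth_tb_s": 1.935, "tops": 624,  "max_power_w": 300},
    "H100": {"bandwidth_tb_s": 3.35,  "tops": 3958, "max_power_w": 700},
    "H800": {"bandwidth_tb_s": 2.0,   "tops": 3026, "max_power_w": 350},
}

# How the export-compliant H800 compares to the H100 it replaces.
for key, label in [("bandwidth_tb_s", "memory bandwidth"),
                   ("tops", "TOPs"),
                   ("max_power_w", "max power draw")]:
    ratio = specs["H800"][key] / specs["H100"][key]
    print(f"H800 has {ratio:.0%} of the H100's {label}")
# Roughly 60% of the bandwidth, 76% of the TOPs, and 50% of the power draw,
# matching the percentages cited in the text.
```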
This all gets a bit abstract in a hurry. The next table shows what these chip specifications mean in terms of the total computing power and energy consumption of Meta AI's target 2024 buildout, OpenAI's training setup for ChatGPT-4, and DeepSeek's training setup for its V3 model.
| Meta AI | OpenAI (ChatGPT-4) | DeepSeek (V3) |
Memory Bandwidth | 2,010,000 TB/s | 48,375 TB/s | 4096 TB/s |
TOPs | 2,374,800,000 | 15,600,000 | 6,197,248 |
TFLOPs | 2,374,800,000 | 7,800,000 | 6,197,248 |
Max Power | 420,000,000 W | 7,500,000 W | 716,800 W |
24-hr Energy Use | 10,080,000 kWh | 180,000 kWh | 17,203 kWh |
Equivalent Yearly Homes' Power (per day) | 934 Homes | 16 Homes | 1 Home |
1-yr Energy Use | 3,679,200,000 kWh | 65,700,000 kWh | 6,279,095 kWh |
Equivalent Yearly Homes’ Power (per year) | 340,950 Homes | 6,088 Homes | 581 Homes |
Sources: Stats for the H100 and A100 are from NVIDIA’s website. Stats for the H800 are from Lenovo. Energy consumption for the average American household is from the Energy Information Administration. The quantity and type of GPUs is based on reporting from Drop Site News.
Meta’s target hardware setup for 2024 uses the same electricity as almost 341,000 average American homes in the same year—as much energy consumption in a day as 93 houses do yearly. Thus, a single company’s AI hardware consumes energy at a rate comparable to a city the size of New Orleans or Orlando. Note that this is just the power drain of the chips themselves, not including anything else in data centers like climate control, lighting, or security systems. Meta’s GPUs alone already consume more than a fourth of the yearly power consumption of a number of larger cities like Nashville, Boston, D.C., and Denver.
Data centers already represent three percent of total US power consumption and are projected to more than double to eight percent by 2030.
The response to DeepSeek in Silicon Valley has been woefully lacking in any form of introspection. Many in the AI industry have been defensive, accusing DeepSeek of illicitly possessing H100 chips and of training on copyrighted data from American firms. If those are the industry's best defenses, we should seriously reconsider the political economy of the American AI industry. Let's assume that DeepSeek actually has triple the reported number of GPUs and all of them are H100s. That would still only be one percent of what Meta AI has. And as far as copyright goes, AI firms have so far operated under the assumption that their models must train on internet data, regardless of copyright. Disrespect for IP protections is literally baked into the business model; it hardly seems notable that DeepSeek would do the same.
Analysts have also contested the $6 million figure, with one research outlet’s report suggesting the real cost for the V3 model could be as high as $1.3 billion. One outlet that covered the report called that figure “staggering.” They should take a look at what U.S. tech firms pay. Again, the Stargate project alone is looking to spend half a trillion dollars. And American cloud computing companies are expected to spend $250 billion on AI hardware in 2025 alone.
That same report also notes that DeepSeek has 50,000 GPUs. Yet the researchers don’t actually dispute that only 2000 were used to train DeepSeek’s model.
Is it Worth it?
As Energy Innovation senior fellow Eric Gimon noted to Bloomberg, this moment resembles a heavy overinvestment in fiber optic cables during the dotcom bubble of the early 2000s. Both instances were rooted in business models that presumed scaling up hardware to be the key component of technological advancement. But engineers figured out how to efficiently transmit more data through the cables, just like DeepSeek figured out how to more efficiently train an AI model.
Given this development, it's worth reconsidering whether the public ought to continue to subsidize the AI industry, which is clearly a massive resource drain and, as it turns out, can be out-innovated by far leaner rivals like DeepSeek. Oddly enough, the current policy agenda around AI is massively interventionist, even as the same companies reject any sort of regulation in other tech markets.
It sure seems like an excellent moment to question the industry’s calls to relax environmental review to build data centers and the propping up of fossil fuel plants, specifically to power those data centers.
Richard Nixon’s Attorney General John Mitchell famously declared: “Watch what we do, not what we say.” When that was done to him, he wound up in prison. A similar lesson applies to understanding the courts.
Academics can debate whether cartels can contribute positively to consumer welfare under some (unlikely) circumstances. Indeed, Penn law professor Herbert Hovenkamp has made the argument that the Trans-Missouri and Joint Traffic cartels were of that sort and a similar proof exists to justify the sewer pipe cartel found illegal in Addyston Pipe. Yet I suspect that no one in the antitrust community thinks that the courts should invite litigation on the merits of cartels. A defendant in a criminal cartelistic market allocation case recently got a trial judge to open that door, but the Tenth Circuit slammed it shut again.
My starting point is that antitrust law, especially Section 1, is not concerned with ultimate consequences. It sets rules focused on the kinds of conduct that will be allowed. The well-known analogy is how nations set rules of the road. Arguably it could be more efficient to drive on the other side of the street at some times, but it must not be done!
In a recent essay titled “Against Efforts to Simplify Antitrust,” Iowa law professor Sean Sullivan tracks the words courts use and not what they do. He is correct that if you look at what courts say about antitrust law doctrine it is complex and confusing. The rhetoric of courts is one of consequentialism (i.e., whether the result is likely to be substantively good or bad), and that opens the door to an infinite array of claims. Simplifying such concerns only invites further confusion.
My view of the complexity problem is that one should not look at what courts say but what they do, given the facts that they find. If we look at what courts do in applying Sherman 1, with the exception of one type of agreement—to which I will return—they condemn absolutely (per se) what Taft, Bork, Steve Ross and I call “naked restraints,” ones that only function to create, allocate or exploit the market. Such restraints pre-empt the function of the market and confer control of that public activity on private parties. This is the case even when the restraint is “vertical” as the recent McDonald’s decision reaffirmed. All such restraints are always illegal (again one type of case may be an exception but rarely is). Despite academic commentary, almost no lawyer defends a cartel on its merits. The defense is always that there was no “agreement” or “conspiracy,” and the cases focus on that element. When the agreement is “tacit,” there is no remedy. Indeed, that is why plaintiffs ought to focus on what the defendants did agree to do and show that that agreement is itself anticompetitive and remediable. But what is central here is that the underlying framework is clear—a purely anticompetitive agreement is illegal regardless of potential contribution to “consumer welfare.” More generally, what I claim is that the “per se” cases fall in a functional category that differentiates them from other cases involving agreements among competitors, e.g., CBS v BMI.
The aforementioned exception comes when there is market failure because of the lack of standards or other necessary regulation of market conduct. As Lande and Marvel pointed out in their article titled "Three Types of Collusion: Fixing Prices, Rivals, and Rules," most such regulatory agreements create social costs and are not justified. A good example is the Cal Dental case (the FTC failed to call the economist who would have documented these effects). Contrary to the Indiana Federation of Dentists, the Court in Cal Dental assumed that there was a legitimate role for dentists in self-regulation. Indeed, this might have been plausible if there were no state regulation. But as in Indiana, California has a full set of regulations including those governing advertising. This crucial fact was not, as far as I can tell, brought to the Court's attention. Indeed, Cal Dental is part of a group of cases, which if you look at what courts do, not what they say, stand for the proposition that some private regulatory agreements can be lawful under the Sherman Act. In an article titled "The Per Se Legality of Some Naked Restraints: A (Re)Conceptualization of the Antitrust Analysis of Cartelistic Organizations," Bette Roth and I have identified the criteria that we think courts are using in fact. I would also observe that most of the time the courts reject the claim of a right to regulate competition.
As Taft pointed out in Addyston Pipe, it is different when the restraint is a functional incident to a primary transaction or venture involving the parties to the restraint. Again, when such ancillarity is apparent in a case, courts are usually reluctant to intervene without some compelling reason. Hence, the usual "rule of reason" case involves an arguably ancillary restraint which the court treats as presumptively legal. Yes, it can get difficult to determine what rebuts that presumption. But when you look at what courts do and not what they say, there has to be either evidence of substantial market power or evidence that the claim is pretextual before the court will examine the "reasonableness" of the restraint.
In some cases, however, the presumption is reversed. These are the "quick look" cases. The restraint looks pretty close to being naked, but there is at least a tenuous claim that it is ancillary to some transaction or venture. The advocate for such a restraint has the burden of showing the necessity for the restraint in terms of that transaction or venture. Cal Dental, properly understood on its facts, did not involve a classic quick look. It instead involved the question of whether the association had the right to regulate advertising competition among dentists and, if so, whether the regulations were "reasonable."
Thus, there is a framework, external to the language of the case law, that explains and identifies the legal issues and standards with limited complexity. Some famous cases are often misunderstood in terms of their actual facts or outcome. For example, Topco is famous as a "per se" decision, but in fact the Supreme Court after remand upheld a modified territorial restraint of Topco members. The restraint was responsive to the claim by Topco of free riding on "house brands" if a competing member entered the territory of an existing member. Of course, this is nonsense because these are house brands! In operation, the decree lasted ten years and was never invoked even as Topco's members increasingly competed with each other. Harry First and I wrote up this history. The initial absolute territorial restraint was unreasonable after only a quick look, but a greatly slimmed down restraint assuming that there was a risk of free riding could be justified. The government took only five minutes to put in its case at the initial trial and refused to add anything on remand. This left the implausible claim of a significant risk of free riding unrebutted.
My point here is that simplification works if you start with a coherent framework based on an understanding of what the law is supposed to do. In the case of Section 1 it aims, in my view, to limit the use of contracts and agreements to those that facilitate legitimate transactions and ventures. That frames the central issue and tells the parties and the court when further detail and assessment is necessary and what the proper focus of that assessment should be. Looked at this way, antitrust is a lot less complex in its legal dimension if one focuses on what courts do and not what they say. What creates complexity is trying to parse lawyers' and judges' confused and confusing invocations of selective quotations from earlier court decisions. Both judges and lawyers have a deep concern not to appear to have an original thought. This approach to law obscures the relatively coherent and intelligible framework being employed. Let me add that factual issues can still be complex and require substantial inquiry, especially if the challenge focuses on an arguably ancillary restraint.
With respect to Sullivan’s other example, merger law and the structural presumption, I would agree that here there is a problem of determining what the framework should be. A merger is by definition an agreement in restraint of trade, as it eliminates the freedom of the acquired party to operate freely in the market. Merger law, therefore, should be understood as a form of presumptive illegality of these restraints that are ancillary to what can be legitimate transactions. Much of the resulting case law is an unnecessarily complex and confused system based on the highly questionable premise that mergers among large corporations are likely to have good results. A growing body of empirical work shows that is rare indeed. Hence, thought of in error terms, a broad prohibition on mergers among (1) large firms engaged in related or directly competitive product or service lines, or (2) the combination of large firms operating at different levels of a market system, e.g., the food system, is unlikely to result in the loss of any significant efficiency or other social value. A presumption based on a basic review of the size and approximate market position of the firms should suffice. Below some boundary (see the various ones used from 1968 to the present in the Merger Guidelines), there is still a real possibility that the merger might have adverse effects on competition. However, the presumption is that mergers below that threshold are less likely to have adverse effect and the burden goes to the government or other challenger to prove that harm to competition is possible.
I am more than 50 years from doing a merger investigation for the government, but what I found then is that it is not hard to get a rough sense of the options for markets and the relative position of the firms. Mergers such as U.S. Foods/Sysco, Staples/Office Depot, and TurboTax/TaxAct ought to have been decided easily without the need for great elaboration. But quick decisions based on presumptions invite reviewing courts, which cling to the myth of the efficient big merger, to allow such transactions. Hence, if a court is to have a basis to reject the merger, the challenger must produce lots of information and the judge needs to write voluminously about the transaction.
Professor Sullivan is correct in the sense that the incentives of the legal system are to complicate everything and extend the debate. Efforts to simplify will always invite complicating responses. Real reform starts with a better sense of what courts are doing (not just what they are saying) and identifying from that the analytic framework being used to test the legality of the conduct or merger (note that monopoly is another can of doctrinal worms that could be simplified by a focus on the implicit and changing frameworks being applied). Real simplification can then come from either pointing up the errors in the underlying framework or from showing that the framework, while desirable, has been unduly complicated by judicial language that should be avoided.
Peter C. Carstensen is the Fred W. & Vi Miller Chair in Law Emeritus at the University of Wisconsin Law School.
FTC Chairman Andrew Ferguson, freshly appointed by President Trump, made what seemed to be a stunning announcement. On X, he posted that the FTC and DOJ would continue to use the 2023 Merger Guidelines, as conceived by his predecessor, Lina Khan. This appeared to be a stunning victory for New Brandeisians, one that upset the usual group of people who get upset any time there might be a whiff of antitrust enforcement anywhere.
New Brandeisians similarly rejoiced, glad that the work that they put into the Guidelines would continue to be recognized and utilized.
On the other hand, to quote singer-songwriter Steve Earle, I was “just staring at the screen, with an uneasy feeling in my chest, I’m wondering what it means.”
Then the follow-up posts, at least to me, added some cause for concern:
That is a fair point. It is destabilizing to an agency to play political football with the Guidelines, although that has been done before.
On the other hand, there has been some consistency throughout. The drive toward weighing efficiencies claims against merger-related harms is one example. A 1997 update to the Guidelines solidified those efficiency claims, and the 2010 Guidelines reinforced them. To the dismay of some progressives, the 2023 Guidelines did not abandon them. Indeed, there is some consistency in the current Guidelines, restoring the original HHI thresholds and keeping the efficiency rebuttal, against my wishes.
Chairman Ferguson is also right that the FTC has limited resources—a perpetual problem for antitrust enforcement agencies. And the Chair has shown his eagerness on that front, proudly proclaiming the end of the FTC's DEI programs in response to President Trump's executive order. Though it's not clear how much money that will save.
And of course, the Chair is correct in the following post:
The FTC and DOJ have used the same Guidelines used by previous administrations. There is a reason for that: None of those administrations essentially altered the nature of merger review after 1980.
My concern is a bit deeper. Because even short of retraction, it is always possible to simply abandon or ignore the Guidelines. One recalls the abandonment in the 1980s of the Non-Horizontal Merger Guidelines, as well as the abandonment of the potential competition doctrine (brought back to life by the FTC’s challenge of Meta’s acquisition of Within). For a period of time, the only good merger case to bring at the enforcement agencies was a horizontal one.
But another reason one might not care about what the Guidelines say, and this may give some comfort to the anti-enforcement crowd, is that perhaps Chair Ferguson does not think the Guidelines are all that important. His post states, “Courts will not rely on guidelines that are transparently partisan either.”
Chairman Ferguson in his “pitch document” for the position of FTC Chair stated he planned on ending “Lina Khan’s war on mergers.” He also stated that her investigations were “politically motivated.” In the same document, he asserts that most mergers “benefit Americans.” Thus, it is hard for me to see that retaining the Guidelines—which his memo to staff suggests is just a reiteration of caselaw—is a huge win.
New Brandeisians may take comfort in Chairman Ferguson’s harshness toward “Big Tech” and his retention of the 2023 Merger Guidelines (for now). But I want to know if it is for the right reasons. Does his decision express a true admiration for antitrust enforcement or some deeper political calculus? For example, after the ABA issued a statement suggesting that the rule of law be followed, Chairman Ferguson prohibited FTC political appointees from speaking at ABA events:
While the ABA is definitely on the side of Big Tech, and the ABA Antitrust Section did in fact publicly oppose the American Innovation and Choice Online Act (AICOA)—a bill designed to regulate self-preferencing by dominant platforms such as Amazon outside of antitrust laws—the ABA is hardly a radical left-wing organization, particularly the ABA Antitrust Section. So, it is reasonable to see this move as a punishment for the ABA's message about the rule of law, not its support of Big Tech.
Also, are any other organizations beholden to the economic interests of parties with business before the FTC? Apparently it's still totally fine for political appointees to speak at Chamber of Commerce, Federalist Society, or CPAC events.
Chairman Ferguson has also stated that President Trump should be able to fire FTC Commissioners at will.
This license to fire from the Chair, combined with the fact that Big Tech has supported President Trump in significant ways, suggests that New Brandeisians should not be prematurely signaling enthusiastic support for the Chair.
To wit, corporations and CEOs in tech, AI, and cryptocurrency have donated millions to President Trump, including Amazon, Meta, Google, Microsoft, Uber, Ripple (crypto), Uber's CEO, Meta's CEO, Apple's CEO, and OpenAI's CEO. Thus, if the Chairman is calling the lawyers of President Trump's corporate donors leftists, it might not be anything other than a performative gesture. And calling someone leftist is a performative gesture we've seen before.
Which raises some other concerns. It is possible to get the right answer for the wrong reason.
Is the Chairman truly pro-enforcement, and therefore against concentrations of economic power? If so, why does he want to halt the "war on mergers"? Why does he seem worried only about Big Tech's economic power?
For example, the Chairman seemed annoyed when Commissioner Bedoya called for an investigation into egg prices, noting the industry’s soaring profits. This blowback reinforces the notion that the Chair doesn’t object to the exercise of power outside of Big Tech.
The motivation for the Chairman's recent actions matters. And we will not know for sure until we see whether the Chairman is prone to full hard-core enforcement, cheap settlements, or no enforcement at all.
Until then, it is mostly just tweets.
Imagine two business partners, Sue and Steve, form a company. The company finds success and is acquired by a larger competitor. After the sale has been consummated but before the payment has arrived, Sue seeks to dissolve the partnership, asserting that Steve does not deserve any of the proceeds from the sale. While the fair split of the proceeds might not be 50/50—perhaps Sue brought key assets to the venture or wrote the code that undergirds the company's intellectual property—the notion that Steve deserves nothing seems patently unfair. After all, Steve was there at the formation of the company and made positive contributions to the company's revenues and, ultimately, its sale.
Now swap “Steve” with “employees” in the same hypothetical. Rather than share in any of the proceeds from the sale, Sue’s company fires its employees, depriving them of the chance to partake in any of the upside from the sale. Although the relationship between the partners is not materially different in the two examples, we seem programmed as a society to accept the horrible fate of zero upside for workers. And that’s wrong.
Before the success (or failure) of a firm is realized, the steady stream of paychecks to the workers is a risk borne uniquely by the employer; the worker will continue to collect such payments so long as their employer is profitable or at least heading in that direction. But when profitability is no longer achievable, that bargain falls apart. In the employer-employee relationship, workers bear the downside risk that their employer underperforms. Hence, when a company experiences failure, it is reasonable for that company to downsize its workforce to keep the doors open.
By contrast, when a company experiences success, downsizing its workforce appears counterproductive and tantamount to the misappropriation of the workers' contributions. And it bears a striking similarity to Sue's patently unfair offer in the hypothetical above. Precisely quantifying workers' value added (as opposed to owners') is no easy feat, but we can safely infer it is not zero. If workers understood that they also bore risk in the good state of the world—that is, when their employer was overperforming—then they likely would not have signed up for the original bargain. For example, they would have demanded additional compensation in the form of equity. This is basic economics: higher risk requires higher returns.
Since the start of the new year, the New York Times business section (which I follow quasi-religiously) has chronicled several episodes in which a successful firm has made massive layoffs of its workforce. To wit:
And so it goes. In this late stage of capitalism, workers bear the risk when employers underperform and when employers overperform. If such opportunistic behavior by employers bothers the Times, it is not showing its feelings. By amplifying this news, however, the Times is unwittingly communicating these sacrifices to the firms' investor overlords, who typically reward companies for slashing labor costs with higher stock prices.
Not only is firing workers during a boom unfair to workers, it is also a breach of the social compact. Workers should share in the upside when employers are profitable; instead, they are cut loose. This is no novel proposition after all. Look at the NBA’s collective bargaining agreement—the players share equally in the revenue increases and bear equal risk of the shortfall. Not so in much of the rest of corporate America, where the exploitative impulse allows employers to privatize the benefits of their innovations in the event of success, but socialize the dislocation costs regardless of the outcome, success or failure. Many displaced employees will likely collect unemployment insurance, paid for by taxpayers.
Heads employers win, tails employers win. This is not a fair bargain. And as a society, we should demand a different set of rules on behalf of our workers.
Back in 2011, ESPN and the NCAA agreed to a $34 million per year media deal that gave ESPN the right to broadcast championships in 29 different college sports. The list of sports included every single college sport played by women. As time went by, it became increasingly clear this media deal dramatically undervalued the rights to women’s sports. After all, in 2024 dollars, that 2011 deal would only be worth $47.6 million, or just $1.64 million per sport (equal to $47.6 million divided by 29 sports). A report commissioned by the NCAA itself in 2021 argued that the media rights to women’s college basketball alone were worth between $81 million and $112 million.
In 2023, when the 2011 media rights deal was finally expiring, the NCAA had an opportunity to collect far more money. And in January of 2024, the NCAA proudly announced that this mission had been accomplished! NCAA President Charlie Baker told the Associated Press: “Yes, it’s a bundle, but it’s a bigger bundle that will be much better.”
Yes, that’s the quote. And yes, the NCAA agreed to a bigger and better bundle that is going to be much better!! Problem solved!
Not exactly.
When we delve into the numbers, however, we do see the agreement is technically bigger. Previously, the NCAA had a 14-year deal that paid $34 million per year, or $476 million across the entire agreement. The new deal is worth $920 million over eight years, or $115 million per year. Yet the new deal also covers 40 sports (up from 29 previously). So, it might look like the NCAA is now getting $2.875 million per sport (equal to $115 million divided by 40 sports). Or as Baker said, “bigger and better!”.
But the math doesn’t quite work as Baker’s quote suggests. Remember, the report given to the NCAA in 2021 said that women’s college basketball is worth between $81 and $112 million. The NCAA and ESPN ultimately didn’t agree with that value. Baker and the NCAA did hire a media consultant (Endeavor’s IMG and WME Sports) that “estimated about 57% of the value of the deal — or $65 million annually — is tied to the women’s March Madness tournament.”
That isn’t quite $81 million per year. But the people at Endeavor said they were pretty sure that the 2021 report overestimated the value of women’s college basketball. If we take Endeavor at their word (they didn’t show their math!), we learn something odd about all the other sports played by women. Remember, back in 2011, the NCAA sold the media rights to 29 sports for $34 million per year. Once again, in 2024 dollars, that worked out to $1.64 million per sport. If we believe Endeavor, then the right to women’s college basketball sold for $65 million and the rights to 39 sports that are not women’s college basketball were sold for $50 million ($115 million less $65 million). That means, all the other sports were valued at $1.28 million each in 2024 (equal to $5o million divided by 39).
And that means, according to the NCAA and its media consultants, the value of women’s volleyball, women’s gymnastics, and softball all went down from 2011 to 2024!
One has to wonder how that could be possible. After all…
To put all these numbers in perspective, the NHL averaged about 500,000 viewers per regular season game in 2023-24. And the Stanley Cup playoffs in 2024 averaged 1.8 million viewers per game. For these ratings, Disney (parent company of ESPN) and Turner (parent company of TNT) agreed to pay $625 million per year to the NHL.
Remember, ESPN got all of women’s college sports—and much of men’s college sports (except for football and men’s basketball)—for just $115 million per year. How could the NHL package be worth five times what we see for women’s college sports? And how could the rights to men’s college basketball be worth $1.1 billion per year, while the rights to women’s college basketball are only valued at $65 million? After all, the women’s basketball final in 2024 actually attracted nearly four million more viewers than the men’s final.
And once again, how did the value of women’s volleyball, women’s gymnastics, and softball actually go down?
All of this suggests that the NCAA left quite a bit of money on the table. For people who have only heard the story about markets primarily told in ECON 101, this must seem impossible. It reminds one of a very old joke told by economists:
Two economists are walking down a street and see a $20 bill lying on the sidewalk. The first economist says, “Look at that $20 bill.” The second says, “That can’t really be a $20 bill lying there, because if it were, someone would have picked it up already.”
This isn’t exactly funny (economists aren’t known for their ability to tell jokes!). But this story does capture a fundamental idea for many economists. Decision-makers tend to be rational, and markets tend to be efficient. Therefore, money is not left on the table (or the sidewalk!).
This view isn’t just prevalent among economists. At least, a story that likely started with economists tends to be believed by people everywhere. If you tell someone that a leader in business made a mistake that costs millions, you will immediately be asked: “How is that possible?”
There is a very simple answer to that question. Human beings don't always try their hardest and can make mistakes. And markets, which can at times force people to try harder and correct their mistakes, are often not very efficient.
This is especially true when markets are not competitive. As Adam Smith observed back in 1776: "Monopoly… is a great enemy to good management."
As economists have known for decades, the NCAA is a monopolistic cartel. One of the many problems with monopolies, as Adam Smith understood, is that the people who lead monopolies don’t have to be good managers.
This appears to be the story with how the NCAA sold the media rights to women's college sports. At the very end of the article detailing the NCAA's media rights deal was this sentence: "The deal was also struck within ESPN's exclusive negotiating window and never brought to the open market."
And there’s our answer.
Charlie Baker and the NCAA didn’t shop the rights to women’s college sports. Markets can be efficient when there is competition. But if you take away the competition, the power of markets vanishes.
In contrast to the NCAA, the NHL shopped their rights to multiple companies and got multiple offers. Baker and the NCAA didn’t get a very good deal because they only bothered to negotiate with one company (ESPN), leaving out potential bidders such as Turner, Amazon, and Netflix. Yes, the NCAA did get more for women’s college basketball. But it doesn’t look like they got as much as they could have. And one doesn’t have to be a math major to see that the NCAA managed to get less for women’s volleyball, women’s gymnastics, and softball than they were getting before. Apparently, no one with the NCAA managed to take a few moments to break out a calculator to see that this happened.
How is that possible? Once again, monopolies are the enemy of good management. If a small farmer in a competitive market makes a serious mistake, there is a good chance the farmer goes out of business. Competition can be a very harsh teacher.
But Charlie Baker and the NCAA are not small farmers. The NCAA isn't going to go out of business because it failed to negotiate a very good deal for women's sports. The NCAA will continue to exist and likely continue to tell us that women's college sports don't generate much revenue. Of course, that isn't true. Women's sports do, in fact, generate substantial revenue. But right now they are doing this for ESPN. As Lindsey Darvin at Forbes recently reported, by January, advertising for the broadcast of the women's March Madness had already sold out. Advertisers know there are going to be millions of viewers for the women's college basketball championship, and they definitely are willing to pay ESPN to address that audience.
But the women in college sports aren’t going to see all that money. Charlie Baker decided to leave it on the table and prove Adam Smith was right!
The shooting of its CEO has flung UnitedHealth Group (“UHG”) into the American zeitgeist, and there’s been no shortage of heated opinions on what to make of it. With the tragedy nearly two months behind us, perhaps we can now reflect, dispassionately, on the real diagnosis here: UHG has been monopolizing and “monopsonizing” American health care. Agreeing with that diagnosis would be Eric Bricker, M.D., who educates extensively about health care finance on his YouTube channel, AHealthcareZ. With its current market cap at nearly $500 billion—close to that of the rest of the top ten health care companies in America combined—Bricker concludes, “UnitedHealth Group essentially is health care in America.”
Indeed, UHG has gone well beyond its roots in health insurance to bill itself now as “a health care and well-being company.” UHG is the Amazon of American health care—like Amazon, it should be viewed as a multi-sided platform in the health care marketplace, where it dominates as operator, participant, and controller of the “pipes” through which much of health care flows. How so? And how to interpret this from an antitrust perspective? Let us count the ways.
UHG: The Operator
Let’s start with UHG’s roots as a health insurance company, UnitedHealthcare (“UHC”). UHC is in effect a financial middleman that operates a transactional network connecting suppliers with purchasers in the health care marketplace. The suppliers are physicians, hospitals, pharmacies, pharmaceutical companies, and the like. In America, the purchasers are largely the government (via Medicare and Medicaid) and employers, who sponsor health insurance for most of those not on Medicare or Medicaid.
As an intermediary, UHC benefits from what economists call “network effects”—the more suppliers and purchasers utilize its network, the more valuable its network becomes. After a series of horizontal mergers with other insurance companies over several decades, UHC now has the largest share (14%) of the highly concentrated commercial health insurance market. Its share is even greater (28%) of the also highly concentrated Medicare Advantage market, the market of private Medicare plans now accounting for over half of the Medicare market overall. UHC makes twice as much in this space as it does in employer-sponsored health insurance. Even in traditional Medicare, UHC dominates as AARP’s exclusive Medicare Supplement plan provider.
But UHC isn’t the only network-effect-exploiting middleman in UHG’s arsenal. Its other main subsidiary is Optum. Optum itself has three business branches: OptumRx, OptumHealth, and OptumInsight. Of the three branches, OptumRx is the cash cow: it is UHG’s pharmacy benefits manager (“PBM”). PBMs have been in the crosshairs of antitrust advocates for years now, and a whole antitrust-related post could be written on this subtopic alone. Suffice it to say here, OptumRx is the third largest of the three PBMs that control 80% of all prescriptions administered in America. And Bricker illustrates well how a PBM like OptumRx sits right in between purchasers and suppliers in prescription drug administration.
The trouble occurs when OptumRx serves two masters: (1) the employer/government who wants the PBM to negotiate the lowest price possible for a given drug; and (2) the drug manufacturer who pays the PBM various "fees," aka kickbacks, for preferred placement on the PBM's drug formulary—kickbacks that increase with increasing drug price. OptumRx also requires that specialty medications be filled through its own pharmacy, Optum Specialty Pharmacy. As a recent FTC study shows, those specialty medications are a growing profit center for OptumRx, with the markup on some of them exceeding 1,000 percent. Such conflicts of interest are endemic to the other major PBMs as well. When it comes to interacting with the powerful, concentrated PBMs, the conflicts of interest and restricted choices make for awfully poor quality. (Ask any physician who's spent hours on the phone trying to get prior authorization for the PBM to cover a prescription, and you will get an earful of Kafkaesque misery.)
At any rate, UHG plays multiple sides of its multi-sided platform in other unique ways. In 2017, Optum acquired The Advisory Board Company and is now the third largest health care consulting firm in America. In this capacity, UHG now consults hospitals on how to get paid more—while its affiliate, UHC, negotiates with those very hospitals to get paid less. With its acquisition of Change Healthcare in 2022 (more on this below), UHG brought Change's InterQual into its fold. InterQual is one of only two systems in America that control utilization management of hospital beds: how many paid "bed days" should be assigned to a hospitalized patient with a given diagnosis before the insurance payment is cut off. Conflict of interest strikes again, in a market that Bricker estimates at $400 billion per year in health care spend. That's a huge market to have such concentration of economic power.
UHG: The Participant
We’re not done with UHG’s non-horizontal mergers. In the last decade, UHG has gone on a vertical-integration buying spree, specifically to occupy the health care marketplace not just as a platform middleman but also as a participant. As UHG’s participant arm, OptumHealth has entered the home health care space with its acquisition of the nation’s third largest home health provider (and also a large hospice provider), LHC Group, a merger that passed through initial scrutiny by the FTC. And OptumHealth now employs or is affiliated with the largest number of physicians in the country—90,000 and counting, or a tenth of all physicians in America.
UHG argues that its acquisition of physician practices aligns with so-called “value-based care,” whereby a health care entity bears risk through capitated payments from, say, the government as in Medicare Advantage plans; the entity then makes profits based not on volume of care but quality. But quality improvement may be more rhetoric than reality, as surfaced by local investigative reports of problems post-merger:
These investigative columns have uncovered the healthcare company’s oppressive physician employment contract; a disastrous phone system; urgent care upheaval; alleged double billing; copay confusion; a scathing internal survey; data privacy breaches; attorney general scrutiny; suspect COVID-19 testing charges; predatory marketing tactics; Medicare Advantage-related profiteering concerns; state lobbying efforts; a disconcerting doctor shortage; the troubling mix of healthcare with insurance services; the unethical banning of unwell patients; and the denial of patient medical records.
That’s a hairy list.
In addition, Bricker presents a “fable” that illustrates the risk of vertical foreclosure. An insurance carrier buys a physician practice, which formerly used Vendor A for a particular patient service that charged $300 per patient per day. After the acquisition, the insurance carrier replaces Vendor A with Vendor I, which the carrier owns—and charges the patient $800 per day. Not only that, the insurance carrier and physician practice had agreed on an earnout in which the practice would earn payments based on future profits of the practice post-merger. Having forced the practice to use the more expensive Vendor I, the carrier decreases practice profits and therefore the earnout. Double win for the insurance carrier. Double loss for the physicians and the employers/other billed insurance carriers financing the health care costs, as those costs rise. Hmm…is this fable the real story of UHG?
Texas and many other states forbid the corporate practice of medicine. Yet UHG's quiet but aggressive gobbling up of physician practices skirts around the prohibition. And while the OGs of the practices do well in the sellout, the rest may just have to deal with decreased earnouts, pay cuts, increased patient loads, layoffs, and onerous noncompetes—in short, to use Cory Doctorow's word, the "enshittification" of health care. No wonder physicians are burning out in droves, as these vertical integrations curtail their power.
The curtailing of physician power turns into a classic case of monopsony power. At least one health care organization has filed a lawsuit against UHG in California, alleging that, among other things, UHG’s control of the local primary care physician market unlawfully restricted physicians from working for competing networks and taking their patients with them. And as UHG’s monopsony power (along with that of the other big carriers) to push take-it-or-leave-it insurance contracts with independent physicians has grown, many of those otherwise independent physicians have banded together to set up “management service organizations,” in an attempt to increase countervailing power and negotiate better contracts. It’s an arms race to determine who will get a bigger share of the health care pie. The net effect? Increasing prices and decreasing quality for those employers and their workers who seek health care.
UHG: The Pipes
UHG increasingly controls not just the operation and participants of American health care, but also its transmission lines. In 2022, UHG made a bid to acquire Change Healthcare, a company that electronically processed billing claims and remittances between myriad health insurance carriers and the vast majority of hospitals and doctors in America. Change also ran a quarter of another pipe in health care: the “switch” software connecting pharmacies with plan information from all the PBMs, as well as processing the coupons pharmaceutical companies can issue directly to the patient for prescriptions filled at the pharmacy. Around the time of the proposed acquisition, Change had only one percent of the revenue of already gargantuan UHG. What Change had, nevertheless, was the valuable data in all those billing claims and remittances: patient IDs, provider IDs, diagnosis codes, procedure codes, and billed and allowed amounts—for ALL carriers, no less. That data could give UHG an advantage, for example, in quoting lower prices on commercial plans for fully insured employers with healthier employees, targeting lower-risk Medicare Advantage pools, or carving out a few expensive outlier physicians from the insurance network.
The DOJ tried to block the UHG-Change merger but failed. In its defense, UHG pointed to longstanding strict firewalls between Optum’s data analytics and UHC’s insurance underwriting that prevented access and use of sensitive claims information from competitor carriers. That and divestiture of one of Change’s claims edit products, a horizontal competitor to Optum, were enough to convince the district court to approve the merger.
But not all has been well. The February 2024 ransomware attack against Change left thousands of medical practices, hospitals, and pharmacies without incoming cash flow once claims processing shut down. At least one large clinic in Oregon, already in talks to merge with UHG, had to apply for and ultimately get emergency approval for its buyout after running out of cash. How convenient for UHG: as one headline aptly put it, “UnitedHealth Exploits an ‘Emergency’ It Created.”
In any case, will UHG’s so-called firewalls hold up over time? Are the pipes of the health care infrastructure UHG now controls “essential facilities” that should invoke that discarded stepchild of antitrust doctrine? At the very least, UHG has foreclosed any defense that there can be no intra-enterprise conspiracy here. As one researcher lauded, the secret to UHG’s power is that it has set up Optum as a fully autonomous, separate business with its own processes, resources, and profit streams, distinct from the insurance business. That sounds like a disunity of economic interest—which means any collusion, express or tacit, between the Optum and UHC subsidiaries of UHG would implicate Section 1 of the Sherman Act.
Where Do We Go From Here?
The DOJ did not appeal the district court’s judgment on the UHG-Change merger. But it appears the DOJ wasn’t done with UHG. In October 2023, the DOJ reopened an antitrust investigation into UHG’s business practices. And in November 2024, the DOJ along with Maryland, Illinois, New Jersey, and New York sued under a horizontal merger theory to block UHG’s proposed acquisition of Amedisys, the country’s largest home health and hospice provider. It remains to be seen what the antitrust stances of the DOJ and FTC will now be with the upcoming change in administration.
Whatever that change will bring, UHG is the Amazon warrior of the health care marketplace in America. As health care's ever-expanding operator, participant, and pipes, UHG reigns supreme over the exploding Medicare Advantage market. As UHG and the other big carriers continue to siphon Medicare Advantage volume away from traditional participants like hospitals, Bricker predicts those hospitals will have their go-to response: demand higher unit prices from the carriers on the commercial side. Who will subsidize those higher prices? The American employer and worker. And who gets hurt the most from the concentration of economic power in health care? Patients who can least afford it.
Sadly, all the charged rhetoric surrounding the UHG CEO shooting has distracted attention away from the real diagnosis here. What ails the American health care system is structural. It has everything to do with antitrust. And the American health care system is increasingly the UnitedHealth Group system.
With the cultural shift toward populism—whether conservative or progressive in bent—let’s hope that we can unite together and make our health care system less United.
Venu Julapalli is a practicing gastroenterologist and recent graduate of the University of Houston Law Center.