
Antitrust restructuring of major corporations is on the table in a way it has not been since the Microsoft case in the late 1990s. Indeed, the historic moment may be comparable to the breakups of Standard Oil in the 1910s and AT&T in the 1980s, when courts reorganized those companies and freed the market from their control. Today, judges face a similar opportunity to rein in rapacious monopolists.

Amid rising public pressure to challenge concentrated corporate power, the federal government has, since 2020, filed antitrust lawsuits against Google, Amazon, Meta, and Apple, as well as dominant firms in the agriculture, debit payment, and rental pricing software industries. These cases aim to break monopolistic control of markets, and not merely to stop unfair practices. The outcome of this litigation wave will determine whether antitrust remains a meaningful check on concentrated private power or operates as regulatory theater.

Antitrust enforcers stand poised to secure favorable judgments in lawsuits that affect multiple sectors of the economy. These lawsuits could bolster farmers’ ability to repair their equipment, reduce the costs retailers incur on e-commerce platforms, increase the revenue software developers earn from their smartphone applications, and reduce the interchange fees businesses must pay to credit and debit card companies.

The poster child for the burgeoning possibilities of antitrust remedies involves the lawsuits against Google. In August 2024, Judge Amit Mehta ruled that Google was liable for monopolizing the search market. He found that Google illegally paid Apple and other manufacturers billions of dollars every year to be the default search provider in web browsers on desktops and smartphones, resulting in the foreclosure of critical distribution channels to competitors. In April 2025, Judge Leonie Brinkema held that Google was liable for monopolizing the ad-tech market, which produces the revenue stream that underpins much of our modern media system. In her decision, she found Google used its dominant control over digital advertising to lock in news publishers, impose supra-competitive take rates on Google’s exchange, and exclude competing digital advertising providers.

Other lawsuits against Google, such as the widely publicized suit brought by game developer Epic Games, have likewise ended in findings that Google is liable for monopolization. Given this legal onslaught, the odds favor the corporation undergoing some form of corporate restructuring. Due to Google’s immense size and scale, the result would fundamentally alter how the public uses and accesses the internet. Breaking up Google—by divesting Chrome or its digital advertising business—would strip the corporation of its control over access to information and online revenue. The shift would mean more competition, viable privacy-oriented alternatives, and greater power for journalists, creators, and users over how information is distributed and monetized.

A few federal judges and Gail Slater, the Assistant Attorney General for the Antitrust Division of the U.S. Department of Justice, will control the scope of the remedies to be imposed on Google. Regardless of any obstacles the DOJ staff may face in determining which remedies they should pursue, one thing is certain: federal judges are vested with all the authority they need to impose the government’s demands, and they are obligated to impose sweeping remedies on antitrust violators like Google.

Flexing the Structural Relief Muscle

Remedies convert violations of legal rights into actionable consequences. In his leading casebook, renowned remedies scholar Douglas Laycock succinctly asserted that “remedies give meaning to obligations imposed by the rest of the substantive law.” In other words, remedies are how democratic institutions prove their legitimacy: equipping the law with real force to answer public calls for action, rather than serving as a meaningless political gesture. Without effective remedies, the law merely functions as a speed limit sign with no police or cameras to enforce it.

The antitrust laws are, in the words of Senator John Sherman, namesake of the Sherman Act, “remedial statute[s].” To accompany the sweeping prohibitions on “restraints of trade” and monopolization, lawmakers diligently buttressed these proscriptions with a panoply of robust remedial provisions.

The antitrust laws allow harmed private parties to obtain treble damages and attorneys’ fees (a novel feature in American law at the time of their enactment). Lawmakers authorized federal, state, and private enforcers to initiate lawsuits, eventually establishing two separate agencies to advance the cause. Further supporting these profound legal tools were the equity provisions that empowered enforcers to seek and, critically, courts to impose structural changes to a business’s operations. The purpose of this vast and deep remedial landscape was to facilitate the “high purpose of enforcing the antitrust law[s].”

At the federal level, the thinking surrounding the purpose and necessary goals of antitrust remedies has been lost for some time. With the notable exception of the antitrust litigation against Microsoft in the late 1990s, market restructuring has simply not been seriously contemplated by enforcers since the 1970s, when the DOJ was litigating its antitrust lawsuit against telecommunications giant AT&T for willfully stifling competition in, among other markets, long-distance services. According to historian Steve Coll’s book The Deal of the Century, the prospect of breaking up AT&T and imposing other remedies permeated the government’s legal strategy.

After the breakup of AT&T, however, in concert with the purposeful decline in federal antitrust enforcement, the intellectual and institutional muscles supporting ambitious remedies quickly atrophied. Decades of underenforcement drained both the doctrinal imagination and the human talent needed to design and implement structural relief.

For the federal government, a new opportunity to implement robust structural remedies almost presented itself in its lawsuit against Microsoft in the late 1990s. A structural breakup was interrupted, however, due to improper judicial conduct and changes in political administrations. Enforcers ultimately abandoned a breakup in favor of paltry restrictions on Microsoft’s business practices. Subsequent academic literature criticized the handling of the lawsuit for the lack of critical thinking regarding the remedies that enforcers wanted. Not since the antitrust lawsuit against Microsoft has another opportunity of similar magnitude presented itself to federal enforcers.

A New Opportunity Arises

Now, more than a quarter century later, the public has another chance to witness the full thrust and potential of the antitrust laws. Critically, enforcers and judges need to be reminded that restructuring businesses to restore competitive conditions and prohibiting dominant corporations, like Google, from engaging in unlawful behavior have always been at the heart of what the antitrust laws can and must do.

In one sense, the ability to restructure the economy provides the clearest visible indicator to the public that justice has been served. For too long, the public has witnessed instance after instance of corporations engaging in often blatant lawbreaking and walking away with no more than token penalties, accompanied by the boilerplate legal phrase “this is not an admission of guilt” on the settlement form.

It’s no secret that the public’s confidence in our political system was shattered after the 2008 financial crisis, when only one low-level bank manager was jailed, and the biggest financial institutions only got bigger. Meanwhile, President Obama refused to use his executive authority to prevent millions of Americans from losing their homes and livelihoods. The precipitous collapse of white-collar crime prosecution after 2012 only intensified this perception of corporate impunity, a trend that continues today. It should go without saying that a well-functioning democracy of the kind the antitrust laws were meant to buttress requires punishing wrongdoers.

Structural remedies also bring clarity to the purpose of the antitrust laws. They ensure that corporations are adequately incentivized to remain subservient to the public and to adhere to established norms concerning what constitutes lawful means of operating in the marketplace. Remedies, in this sense, serve dual purposes: they deter future violations and, when applied to offenders, reinforce institutional legitimacy by ensuring meaningful consequences rather than merely symbolic fixes.

None of this was accidental. Congress wrote the antitrust laws with ambition. In both the statute’s text and its legislative history, Congress codified deeply held moral and ethical norms, grounded in principles of fair competition, non-domination, and democratic control of the economy. To facilitate these principles, Congress gave the public the tools for broad economic reordering in the event of a violation. It was the Supreme Court’s ideological shift, which began in the late 1970s and was subsequently adopted by the Reagan administration in 1981, that neutered the institutional will to enforce and interpret the laws as Congress intended. Nevertheless, once liability is established, structural change is not merely justified by law—it is mandated by it. Indeed, the Supreme Court has been uncharacteristically clear that liability obligates, not just authorizes, the courts to impose structural change.

In a decision from 1944, the Supreme Court stated, “The Court has quite consistently recognized…[d]issolution[s]…will be ordered where the creation of the combination is itself the violation.” In another decision, the Court opined about the absurdity that would arise if weak remedies were imposed on antitrust lawbreakers. “Such a course,” the Court stated, “would make enforcement of the Act a futile thing[.]” In another decision, the Supreme Court stated that “[c]ourts are authorized, indeed required, to decree relief effective to redress the violations, whatever the adverse effect of such a decree on private interests.” The jurisprudence is replete with many more judicial directives commanding the lower courts to impose sweeping remedies to effectuate Congress’s legislative command.

Not only is there a duty to impose structural remedies, but the Supreme Court has been straightforward that in all but the most wholly unwarranted situations, a district court judge—like Judge Mehta or Judge Brinkema presiding over their respective lawsuits against Google—is afforded broad discretion on what remedy to impose both to “avoid a recurrence of the violation and to eliminate its consequences.” As long as the remedy is a “reasonable method of eliminating the consequences of the illegal conduct,” judges operate with expansive discretion and face virtually no doctrinal constraints on what can be ordered.

If the desired outcomes are realized, a breakup of Google could fundamentally reorganize the structure of the internet and our experience with it. Requiring Google to spin off its digital advertising platform could enable journalists and content creators to diversify their revenue sources through new competitors, giving them greater autonomy over their income. Divestiture could also enable them to have more control over the distribution of their work products and reduce the constant risk of censorship and algorithmic manipulation that Google has deployed to maintain its monopoly over search and advertising. Moreover, a spinoff could also erode the surveillance advertising model, making privacy-friendly alternatives more viable competitors. For the public, more competition in search could expand options for finding and presenting information on the internet. Likewise, a divestiture of Google’s Chrome browser could open new pathways to access the internet and lessen dependence on a single dominant provider.

Corporate Allies Spring into Action

In an attempt to get ahead of the litigation game, executives and ideological friends in the legal academy have churned out scholarship and opinion pieces designed to deter enforcers and courts from imposing remedies deemed “too harsh” on Google’s operations. In a recent paper on structural remedies, Professor Herbert Hovenkamp, a leading establishment antitrust scholar (and a Big Tech sympathizer), provided a hierarchical schematic outlining how remedies should be considered and administered. One of his points: “Even with market dominance established, alternatives to structural relief are often superior, and simple injunctions are often best; for any problem, they should be the first place to look.” The International Center for Law and Economics, a member of Google’s “army of paid allies,” submitted an amicus brief to the district court overseeing Google’s antitrust lawsuit, erroneously stating that “structural remedies are disfavored in Section 2 cases[.]”

Naturally, too, Google’s business executives have ardently defended the company’s business practices. In April 2025, Google’s CEO Sundar Pichai testified that any breakup of Google would be “so far-reaching” that it would be a “de facto divestiture” of its search engine. Pichai also decried the forced sharing of the data that underpins Google’s search engine as a remedy that would leave the company with no value. It is revealing to hear the highest-ranking corporate executive at the company admit that Google’s success is dependent on a select few unlawful practices, rather than its business acumen, and that Google is apparently incapable of deploying lawful methods of competition to succeed in the marketplace. Since the filing of both federal lawsuits, Pichai has also embarked on a marketing tour to tout the company’s operations, defend the benefits of its business practices, and detail the potential unintended consequences of the government’s lawsuits. Such alarmism is a standard defensive tactic, deployed to influence judges and sap public support for real solutions.

But from the earliest days of antitrust law, the Supreme Court has consistently affirmed that breakups, divestitures, and other corporate restructuring remedies—though often described as “harsh,” “severe,” or inconvenient by violators—are time-tested, necessary, and appropriate for restoring competitive market conditions. In a forthright statement, the Court stated that antitrust litigation would be “a futile exercise if the [plaintiff] proves a violation but fails to secure a remedy adequate to redress it.” In fact, the Court has lamented that prohibitions on specific conduct—rather than breakups or other corporate reorganizations—are “cumbersome,” delay relief, and position the court to operate in a manner for which it is “ill suited.”

Rising to Meet the Moment

The contrast between the vigorous enforcement of the antitrust laws in the post-World War II era and the drastic decline that began in the late 1970s is clear evidence of the changing “political judgment” (as Professors Andrew Gavil and Harry First call it) concerning what remedies should be imposed. Judges today are far different from their historical counterparts, who viewed antitrust as a facilitator of economic liberty, a bulwark against oligarchy, and fundamental to protecting our democracy. To those judges, punishing antitrust violators was not just a legal formality but also a moral imperative.

Even though many judges have not considered these issues in decades—or, in some cases, ever in their careers—during this profoundly important moment in American history, judges should be cognizant of what the jurisprudence plainly mandates them to do. If the rule of law retains any meaning, it demands that courts decisively address the harms the government has been litigating for a half-decade.

The question now is not whether courts can impose structural remedies; it is clear they can. It is whether they will rise to meet the moment. The remedies that judges will impose on Google and the other alleged monopolists in the government’s lawsuits will be a defining test of judicial integrity and democratic accountability to the rule of law. A failure to act calls into question the very legitimacy of our legal system to hold the powerful accountable. As the jurisprudence makes clear, anything less than structural relief results in the public “[winning] a lawsuit and [losing] a cause.”

Daniel A. Hanley is Senior Legal Analyst at the Open Markets Institute.

On December 3, 2024, Chris Salinas officially entered a nightmare that would make Freddy Krueger proud—a nightmare in the medical industry known as prior authorization. Even I, as his gastroenterologist, didn’t know at the time that this one would become my biggest nightmare yet. Chris has given me permission to share the details of his experience, including his medical ailments.

Chris has ulcerative colitis, a chronic autoimmune disease of the colon that can cause flares of bloody diarrhea, abdominal pain, and fatigue, among many other symptoms. I have been treating Chris and his ulcerative colitis for over 15 years. Over those years, for various reasons, he had ultimately failed multiple medications for his illness. By December of last year, the next one we wanted to try was Entyvio, a drug under patent approved by the FDA for ulcerative colitis in 2014. Entyvio is considered in the industry a “specialty pharmacy” drug. In truth, nobody in the industry knows exactly what that means. In practice, what it means is big money, the money that pays for all those ads you see on TV for Ozempic, Wegovy, Skyrizi, and the like (rule of thumb: if the brand has a “z” or “v” or “y” in the name, it’s likely a moneymaking specialty drug). And in practice, what it meant for Chris and me is that I needed to get prior authorization for approval of Entyvio.

Just the phrase “prior authorization” sends a chill down every physician’s spine. On its face, prior authorization has a functional purpose: to control the utilization of drugs that can be quite costly for a health plan to cover. Yet imagine your worst call climbing up the giant sequoia of customer service phone-trees. A call for a prior authorization is worse.

You think I’m exaggerating? Every year the American Medical Association conducts a nationwide survey of 1,000 practicing physicians on prior authorization. In 2024, 40 percent of physicians had staff who worked exclusively on prior authorizations, and physicians and their staff spent 13 hours weekly on them; 75 percent reported that denials of prior authorizations had increased; 80 percent didn’t always appeal the denials, due to past failures or time constraints. And almost 90 percent of physicians reported burning out from the process. I know physicians who closed down their practices and retired early solely because of prior authorization.

According to the survey, the impact on patient care is no less nightmarish. Nearly 100 percent of physicians felt that prior authorization negatively affected patient outcomes; 23 percent noted that it led to a patient’s hospitalization; 18 percent said it led to a life-threatening event; and 8 percent said it led to permanent dysfunction or death. Fortunately, Chris wasn’t at death’s door. But even with my philosophical approach that, most of the time in medicine, less is more, I felt that getting on Entyvio was Chris’s best shot at improving his quality of life, free of the extended flare of ulcerative colitis he was enduring.

So last December, I called Express Scripts, Inc. (“ESI”) to get the prior authorization for Entyvio. ESI is the pharmacy benefit manager (“PBM”) of Chris’s employer-based health plan. For those who haven’t studied the FTC’s damning 2024 interim report on the industry (“FTC report”), PBMs serve as the middlemen between the pharmacies that dispense drugs, the manufacturers that make them, and the government/employers/insurance plans that pay for them. I gave ESI all the required clinical information for approval of both the initial intermittent intravenous infusions for the first six weeks and the subsequent biweekly maintenance injections Chris would be giving to himself subcutaneously. As Chris had by then failed multiple other therapies, it was a relatively easy call for ESI to grant the prior authorization for Entyvio. I then called in both the intravenous and subcutaneous Entyvio prescriptions to Accredo, the administering specialty pharmacy, which assured me that we were good to go.

We were not good to go. At first, Accredo told me that Chris could get the infusions at home through a service set up by Accredo. Then it told me it couldn’t. Then it told me it would find a local infusion center to administer the drug. Then it told me I had to find one. At every twist and turn, I had to initiate the call to Accredo to find out why the infusions hadn’t started. So did Chris, who said about his Accredo calls: “There were many circular conversations where it just went nowhere. Every time they would tell me, you have to give us a week and then call us back; and then when I call them back, it’s just like starting the process over again. And then again, and again, and again.”

The breakdown in the prior authorization process that Chris and I were weathering is an endemic breakdown in healthcare quality. That breakdown in quality is the result of the incredible horizontal concentration and vertical integration that the last few decades have wreaked on the healthcare industry. This chart from the same FTC report tells the tale.

ESI appears in the second column under The Cigna Group, as the associated PBM. The 23 percent below its name indicates that ESI controls 23 percent of all prescriptions filled in the United States.

On horizontal concentration overall, the three biggest PBMs—CVS Caremark, ESI, and Optum Rx—manage nearly 80 percent of all prescriptions filled in the United States; in terms of the standard Herfindahl-Hirschman Index (“HHI”) measurement of horizontal concentration, the mean HHIs were pushing 4,000 and climbing in state and local geographic markets.
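
For readers who have not worked with the HHI, the index is simply the sum of the squared market shares of every firm in the market, expressed in percentage points. A minimal sketch, using purely hypothetical shares (the FTC report does not publish this exact breakdown), shows how quickly a handful of dominant firms push the index toward the levels the report describes:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with shares expressed in percentage points (0-100)."""
    return sum(s ** 2 for s in shares_pct)

# Hypothetical shares for an illustrative state or local PBM market --
# not the FTC's actual figures, just a demonstration of the arithmetic.
local_market = [55, 25, 10, 5, 5]      # five firms, shares in percent
print(hhi(local_market))               # 3800 -- "pushing 4,000"

# An assumed national split consistent with the text: ESI at 23 percent
# and the top three PBMs together near 80 percent (other shares invented).
national_top3 = [34, 23, 22]
print(hhi(national_top3))              # 2169 from the top three alone
```

Under the 2023 Merger Guidelines, any market with an HHI above 1,800 is considered highly concentrated; an HHI near 4,000 corresponds to a market with the equivalent of only about two and a half equal-sized competitors.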

On vertical integration, the three biggest PBMs are owned by three of the five biggest health insurance companies in America—CVS Caremark under Aetna, ESI under Cigna, and Optum Rx under United. The vertical integration doesn’t stop there: the healthcare conglomerates that own the biggest PBMs also increasingly own private labelers that manufacture the drugs, the providers who prescribe them, and the pharmacies that dispense them. This includes Accredo, the specialty pharmacy owned by Cigna—which owns ESI, the PBM!

Any reasonable person thinking through the inevitable conflicts of interest should beware: your head may eventually explode—for which you’ll probably need a prior authorization. The FTC report catalogs many of the adverse effects of horizontal concentration and vertical integration involving PBMs: (1) excluding generic drugs from formularies in exchange for higher pay-to-play “rebates” from the manufacturers; (2) recasting group purchasing organizations into rebate aggregators, often headquartered offshore, that still enjoy safe harbor from anti-kickback law; (3) steering patients exclusively to the conglomerates’ own PBMs and specialty pharmacies while crowding out independent pharmacies, especially when high-profit specialty drugs are involved; (4) turning contracts with independent pharmacies that have little bargaining position effectively into opaque adhesion contracts with “clawbacks” that make it possible for the pharmacies even to lose money, in the end, from a sale; and (5) abusing prior authorization and other utilization management tools to preference the conglomerates’ financial interests over the patients’ best interests.

The FTC report was criticized by a dissenting Commissioner (Holyoak) for being prematurely released without “rational, evidence-based research” to show that horizontal concentration and vertical integration of PBMs raised consumer prices. Accordingly, in 2025, the FTC expanded on its 2024 report by analyzing additional data received from PBMs, showing that significant markups on numerous specialty generic drugs—some exceeding 1,000 percent—generated huge revenues for PBMs and their affiliated specialty pharmacies. Those markups, it followed, cost government and commercial plan sponsors significantly more money, along with the subscribers/patients who shared the increasing costs.

Likely backroom political gamesmanship notwithstanding, the FTC should be praised and pushed to continue focusing on the effect of PBM consolidation on consumer drug prices. But with the focus on consumer prices, neither the 2024 interim report, nor its dissent, nor the 2025 update fully captures why horizontal concentration and vertical integration of PBMs are so bad for healthcare. Take it from an on-the-ground, practicing physician who has been at bedside, for over a quarter-century, observing policymakers afflicted with an evidence-enslaved (rather than evidence-informed) econometrics delirium rotting the core of what matters in healthcare.

It’s not just the price effects—it’s also the quality! Horizontal concentration and vertical integration of PBMs, and healthcare at large, are destroying quality in healthcare. Surely the FTC is aware of these non-price effects, as the over 1,200 public comments received by the FTC on PBMs’ business practices likely revealed. The 2023 Merger Guidelines published jointly by the FTC and the DOJ emphasized the same, adding a “T” to the end of the “SSNIP” of the Hypothetical Monopolist Test: “A SSNIPT may entail worsening terms along any dimension of competition, including price (SSNIP), but also other terms (broadly defined) such as quality, service, capacity investment, choice of product variety or features, or innovative effort.”

Yes, SSNIPTs may be harder to quantify than SSNIPs. But when it comes to PBMs, SSNIPTs galore smack you in the face.
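
To see why the “T” resists quantification, compare the tidy arithmetic available for a plain SSNIP. The standard critical-loss formula tells you exactly how many sales a hypothetical monopolist can shed before a price increase stops paying; a minimal sketch with invented numbers (mine, not the agencies’) follows:

```python
def critical_loss(price_increase_pct, margin_pct):
    """Critical loss for a SSNIP: the share of unit sales a hypothetical
    monopolist can lose before a price increase of X% becomes
    unprofitable, given a gross margin of M%. Formula: X / (X + M)."""
    x, m = price_increase_pct, margin_pct
    return 100 * x / (x + m)

# A 5% SSNIP at an assumed 45% margin is profitable only if the
# monopolist would lose less than 10% of sales -- illustrative only.
print(f"{critical_loss(5, 45):.1f}%")   # prints 10.0%
```

No analogous closed-form test exists for a small but significant worsening of quality, which is exactly why quality harms so often escape the econometrics.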

Many phone calls later, and over a month after ESI had granted the prior authorization of Entyvio, Chris finally received his first of three infusions. It cost him around $300. He was then told the second infusion would cost him $1,800. That triggered another set of phone calls involving Accredo, with Chris noting, “They kept telling me that my insurance company denied [the Entyvio], and I said no, my insurance company did not deny it . . . I was actually getting ready to write an $1,800 check.” Fortunately, as Chris explained, the pharmacy and PBM then found the co-pay assistance program information Chris had sent, which they had apparently misplaced. The price of his second and third infusions? Five dollars each. 

Chris’s ulcerative colitis symptoms responded well to the Entyvio infusions. But the healthcare nightmare wasn’t over. Chris was supposed to transition to the subcutaneous Entyvio shots in late April, eight weeks after his last infusion. When he called Accredo in March to confirm, he was told that Accredo didn’t have the prescription for the shots. I then called Accredo and was told the opposite. Chris then called and learned that Accredo had two accounts in his name, one of which was dormant (supposedly from the change of calendar year) and where the prescription was hiding. Accredo assured him that it would merge the accounts, and all would be good. Chris called again a couple of days later, when Accredo told him it was waiting on approval from his insurance company—even though the Entyvio had already been approved. More phone calls the following week, and the prescription was lost again because the accounts had not in fact been merged. Late April came and went, with no continuation of the drug.

Chris estimates that between March and May, he made over 20 phone calls to Accredo. During the lapse in treatment, his cramps, diarrhea, and urgency returned. “In the meantime, I’m having to deal with my diagnosis . . . which is not fun, right? I’m having to kind of manage my travel that I do for work around that situation, which is difficult. And it’s a lot of stress . . . being on airplanes and all that kind of stuff.” It was a lot of stress for me as well, knowing that the longer Chris was off Entyvio, the more potential he would have to develop antibodies to the drug that would render it ineffective when resumed.

It all came to a head on May 21, a month after Chris was supposed to start the Entyvio shots. I spoke to Paul at Accredo, then Jeanine at ESI, then Mark, Jeanine’s supervisor, while I made Paul stay on the line. ESI informed me for the first time that a fax had allegedly been sent to me on January 3 denying the prior authorization for the subcutaneous Entyvio. When asked whether it had sent the denial by regular mail, ESI said no. The reason for the denial? According to ESI, I had not answered some clinical criteria question—a question I had certainly answered when the infusions and injections were initially approved back in December. When I insisted on clearing up this nonsense once and for all over the phone, ESI informed me that a policy change beginning in 2025 had eliminated approvals over the phone; they could only be done via fax, the very technology that had failed us in communicating the purported denial. All this with ESI, a PBM in the business of administering healthcare, knowing full well the impact this was having on Chris’s health.

I finally demanded to speak to a peer, for a peer-to-peer review. In 2024 and 2025, that no longer means a gastroenterologist, or even a medical doctor, but often a nurse or pharmacist. No offense to nurses or pharmacists; they are sacred to healthcare. But when it comes to prior authorization for a specialty drug like Entyvio, they are not my peers. Nonetheless, the peer here was a pharmacist named Stefan—the first person at ESI I felt wasn’t carbon-based AI. He acknowledged the injustice of the situation but said my only recourse was to fax clinical information again to seek approval.

And then, magically out of the blue . . . Stefan said he found some “override” to approve the subcutaneous Entyvio. Just like that, a personal-record-breaking two hours and 45 minutes into the call, the prior authorization was over.

I give you all these gory details (believe it or not, with many left out) not to be melodramatic, but because when it comes to going through the prior authorization gauntlet with these highly concentrated, vertically integrated PBMs, this is now the norm, not the exception. As chronicled in the public comment by the American Economic Liberties Project, past lapses in enforcement by the FTC and DOJ created these consolidated beasts. And if you download the appendices of that comment, you will read one similarly abominable experience after another, sampled from anti-PBM Facebook groups with names like “DOWN WITH Express Scripts and Accredo!” On the Better Business Bureau website, ESI has a 1.03/5-star rating, with 1,555 complaints in the last three years. Other PBMs fare no better. In other words, horrible quality is a feature, not a bug, of the biggest PBMs and the healthcare conglomerates who own them—and own you, should you be dependent on them. In the words of former FTC chair Lina Khan, these PBMs have become “too big to care.”

Cory Doctorow says it best: “Enshittification is a choice.” He was talking about Google search. But he may as well have been talking about too-big-to-care healthcare. The lobbying arm of the big insurance companies recently took notice of their “enshittification” and vowed to simplify prior authorization. But insurance companies have vowed that before, in 2018 and again in 2023.

I’m a GI doctor. I know good shit from bull. And now Chris does as well. I asked him how he would clean it up. “I have to go through [ESI and Accredo],” he lamented, “because of the plan that I’m on with my company, right? But if I had the choice because of customer service, I’d never deal with them again . . . I think that people aren’t looked at anymore as patients; they’re looked at as a business. There’s no personal side to it.”

Chris wants choice in healthcare. The only way he’ll get choice—and not the choice PBMs have made over and over again—is to break these behemoths up. Only then will these companies have skin in the game again and have to compete for customer loyalties. And only then will we renew the “care” in healthcare.

Venu Julapalli is a practicing gastroenterologist and recent graduate of the University of Houston Law Center.

There is much discussion about what Hal Singer has dubbed “Gangster Antitrust,” the extraction of payments, bribes, or other concessions to allow passage of an otherwise anticompetitive merger. Gangster Antitrust can also take the form of conditioning the approval of a procompetitive merger on a seemingly unrelated remedy that advances the political interests of the administration. “Nice merger—be a shame if anything happened to it!”

David Dayen of the American Prospect correctly wrote that there is a law that is supposed to prevent such skullduggery: the Tunney Act, which requires that consent decrees entered by the Department of Justice (DOJ) be in the “public interest.” The Tunney Act of 1974 was drafted to prevent judicial “rubber stamping” of consent decrees. As explained here, the Tunney Act and its 2004 amendment, which prohibited continued judicial rubber stamping, have yielded only more vigorous rubber stamping.

I’ve written on the Tunney Act twice before, once with John J. Flynn, who happened to assist in its drafting, about the misuse of the Tunney Act in the Microsoft cases to compel the district court to accept a weakened settlement. I wrote a second time after the D.C. Circuit continued its activist, blatant disregard of the 2004 Tunney Act amendment.

Unfortunately, the Tunney Act’s purpose has been stifled by D.C. Circuit caselaw, and even the Act’s most fundamental purpose, rooting out corruption, has been neutered.

A brief history of the Act

The Tunney Act, named after Senator John V. Tunney, emerged from scandal surrounding backroom dealings to settle a DOJ merger challenge. Those dealings first became a major issue during hearings on Richard Kleindienst’s nomination to be attorney general, where Senator Tunney expressed outrage at such closed-door discussions.

In 1969, the DOJ sued to prevent ITT’s acquisition of three companies under Section 7 of the Clayton Act. The DOJ lost two of the three suits. In 1971, the DOJ and ITT agreed to a settlement of the remaining suit. ITT was allowed to retain Hartford Fire Insurance Company but was required to divest several Hartford subsidiaries. The DOJ made no public statement as to the underlying reasons for the settlement. Instead, as was common practice at the time, only the proposed decree was made public.

Two significant events occurred that made people suspicious. First, President Nixon nominated Richard Kleindienst to be attorney general. Kleindienst had been involved in the ITT litigation in his capacity as deputy attorney general, and questions arose concerning his participation in the settlement of the case. Second, ITT offered to help finance the 1972 Republican National Convention. While no quid pro quo was proven, the appearance of impropriety sparked significant debate. (If you want to hear President Nixon ordering a DOJ official to back off the merger, you can listen here.)

Moreover, Kleindienst’s confirmation hearings revealed to the public for the first time the underlying rationale for the DOJ settlement with ITT: Kleindienst asserted that one reason for the settlement was DOJ fear that divestiture would cause ITT’s stock price to fall, causing hardship to shareholders. Another DOJ concern was apparently that the plummeting stock price would ripple throughout the U.S. economy.

All of this seems tame by today’s standards. But at the time, it was a massive scandal. The Supreme Court had traditionally deferred to the DOJ with respect to consent decrees.

How the Act lost its teeth

Despite the Tunney Act’s prohibition against rubber-stamping, with rare exception, courts have continued to serve as rubber stamps, and D.C. Circuit caselaw has played an important role in the rubber stamping. The basic standard laid out by the D.C. Circuit appears in the first Microsoft case, in which Judge Sporkin rejected the DOJ’s mealy-mouthed remedies (a rejection that eventually led to Microsoft II). The D.C. Circuit wrote:

A decree, even entered as a pretrial settlement, is a judicial act, and therefore the district judge is not obliged to accept one that, on its face and even after government explanation, appears to make a mockery of judicial power. Short of that eventuality, the Tunney Act cannot be interpreted as an authorization for a district judge to assume the role of Attorney General.

Subsequent cases in the D.C. Circuit cling to this standard to assure that courts don’t bother with the “public interest” determination.

Congress reacted, and in 2004 changed the Tunney Act to compel a public interest determination. The legislative history expressly and in detail decried the D.C. Circuit’s caselaw (and cited my work with John Flynn, thank you very much).

The D.C. Circuit and its district courts flat-out ignored the amendment, choosing to resurrect the “mockery of judicial function” standard. As the D.C. Circuit explained in a 2016 Speedy Trial Act case:

As we have since explained, we “construed the public interest inquiry” under the Tunney Act “narrowly” in “part because of the constitutional questions that would be raised if courts were to subject the government’s exercise of its prosecutorial discretion to non-deferential review.” Mass. Sch. of Law at Andover, Inc. v. United States, 118 F.3d 776, 783 (D.C.Cir.1997); see Swift v. United States, 318 F.3d 250, 253 (D.C.Cir.2003). The upshot is that the “public interest” language in the Tunney Act, like the “leave of court” authority in Rule 48(a), confers no new power in the courts to scrutinize and countermand the prosecution’s exercise of its traditional authority over charging and enforcement decisions.

The basis of the Court’s decision was a misguided notion that refusing to enter a consent decree, an inherently judicial act, trampled the DOJ’s prosecutorial discretion under the separation of powers. It did not consider that forcing a consent decree down the throat of the court also presents separation-of-powers problems. Nor did the Court explain why it is permissible for courts to reject criminal plea bargains without separation-of-powers problems, yet they must accept consent decrees for the rich and powerful. And while not all of the cases are in the D.C. Circuit, the vast majority are, and other circuits rely on D.C. Circuit caselaw and experience. Only one Tunney Act consent decree rejection. Ever.

So, here we are.

The question arises, then: what exactly would it take to make a mockery of the judicial function?

We don’t know, quite frankly. There does not appear to be much, if anything, out there to suggest what a mockery of the judicial function sufficient to justify rejecting a consent decree under the Tunney Act would look like.

Prior deals under Tunney Act review have been rubber stamped

Nothing raises eyebrows with the courts when it comes to the Tunney Act. Consider a couple of examples.

In 2013, the DOJ brought a sweeping complaint against American Airlines’ merger with US Airways. But politics played a role, according to ProPublica: “People were upset. The displeasure in the room was palpable,” said one attorney who worked on the case. “The staff was building a really good case and was almost entirely left out of the settlement decision.” One of the reasons they might have been upset is that President Barack Obama’s former Chief of Staff was now Mayor of Chicago and advocating for the merger at the White House.

In another airline merger, an attorney representing the merging parties became a Deputy Assistant Attorney General (DAAG) after the merger won DOJ approval in August of the same year. Sometimes the revolving door in antitrust spins just that fast.

Even if the courts did awaken to such questions, there is little interest in doing anything about it. One might claim that Judge Leon did a heroic Tunney Act review in CVS-Aetna, but I do not think that the D.C. Circuit precedent left him in a good position to do anything other than accept the decree.

In other circuits, it is possible (but not likely) for a court to reject a consent decree. For a rare (and non-merger) exception, see U.S. v. SG Interests I, Ltd., 2012 WL 6196131 (unpublished opinion rejecting entry of consent decree in a Sherman Act Section 1 collusive bidding case as settling the case for nothing more than “nuisance value”). 

Parties have also been known to close deals even before the Tunney Act review has been completed. Judge Leon complained of this practice in CVS-Aetna, but again, the D.C. Circuit caselaw leaves little in the way of judicial action.

Thus, I imagine that courts will continue to do what they have always done—ostrich-like abdication of their powers.  

The Tunney Act won’t save civilization, democracy, or even antitrust

Is there a problem with Paramount making a major settlement with Trump and firing Colbert and then having its merger with Skydance approved? We’ll never know, because the courts will only review the complaint, the competitive impact statement, and the proposed final judgment. And rubber stamp.

The HPE-Juniper deal also raises serious questions related to the role of lobbying, and whether the DOJ’s acquiescence has less to do with separation of powers and prosecutorial discretion than with Gangster Antitrust. As the Wall Street Journal reported, “Hewlett Packard Enterprise made commitments, not disclosed in court papers, that called for the company to create new jobs at a facility in the U.S., according to people familiar with the matter.” This, if true, ought to be sufficient to reject the consent decree. But I doubt it will be. While SCOTUS is hard-core killing Chevron and administrative law, it seems totally fine with the extreme level of deference the DOJ gets under the bastardized interpretation of the Tunney Act.

As David Dayen pointed out, there’s a friendly district court judge in the HPE-Juniper matter, who is a former labor lawyer. And both the HPE-Juniper and the UnitedHealth Group-Amedisys matters are outside the D.C. Circuit, which is a reason for hope. Yet other cases have had friendly judges and there are still no cases rejecting a consent decree. And the reason for that is the D.C. Circuit caselaw, regardless of circuit.

How about American Express? According to the Wall Street Journal,

American Express GBT hired Brian Ballard—a longtime Trump backer, who raised $50 million for his 2024 election—to lobby the Justice Department on antitrust issues for the company, according to lobbying disclosure forms. The Justice Department last week dropped a lawsuit it had filed seeking to block American Express GBT’s acquisition of a competitor, CWT Holdings.

This raises another point. The Tunney Act is only involved when we’re dealing with consent decrees with the DOJ. There is zero transparency with respect to merger investigations that have been dropped due to Gangster Antitrust. Decades ago, there was a push for greater transparency about when the DOJ was investigating a matter and its reasoning for closing a matter without more. That went nowhere, and we are living with the consequences of that as well. And, as administrative law falls for independent agencies like the FTC, there isn’t much to suggest that courts will get in the way of settlements at DOJ’s sister agency, either.

The future looks grim. Sure, Congress reformed the Tunney Act once already. How’d that turn out? And now, it seems unlikely that Congress (in its current sycophantic posture to the Executive Branch) would dare attempt to correct the unbridled power of the Executive Branch to sell out on the cheap or engage in Gangster Antitrust.

The wealthiest man in the world was President Trump’s largest campaign contributor, thirteen billionaires were selected for positions in the administration, and the fourth wealthiest man in the world announced that the third largest newspaper in the country would no longer publish any opinion pieces critical of free markets. As if this weren’t an already alarming indication of the dangerous connection between economic might and socio-political power, next year, the Supreme Court will hear National Republican Senatorial Committee, et al. v. Federal Election Commission, et al., a case with the potential to erode some of the last remaining campaign finance limits. If the Court embraces the petitioners’ argument that broad categories of political spending should be freed from expenditure limits, it could intensify the transformation of American democracy catalyzed by Citizens United v. FEC and related decisions.

This looming decision arrives amid a broader, heated debate about the relationship between economic concentration and political power. Neo-Brandeisians, concerned with the corrosive effects that “bigness” of dominant firms and concentrated markets may have on democracy, increasingly find themselves at odds with “abundance” liberals and traditional antitrust centrists, who argue that dominant firm size, in and of itself, may not be as large a threat to democracy as alleged.

Yet this debate takes place atop some flawed empirical foundations. At the core is an assumption that lobbying expenditure data provides a reliable proxy for measuring concentration of political power. In a new working paper, I challenge that assumption, arguing instead that the apparent lack of correlation between rising economic concentration and lobbying market concentration obscures the rise of a more diffuse, opaque, and powerful influence ecosystem, enabled and financed by the wealth created through economic concentration. In short, what we see in lobbying data is not the absence of political capture, but its concealment.

Beyond Lobbying: A Complex Architecture of Political Influence

Quantitative analyses often treat political influence as a transactional marketplace where dollars spent on lobbying translate into policy influence. But this market analogy is conceptually flawed and empirically misleading. Lobbying, as captured by the Lobbying Disclosure Act, represents only a narrow slice of the influence economy. A growing share of influence is exercised through “dark money” groups, strategic litigation, media ownership, academic funding, “astroturf” campaigns, and campaign contributions by ultra-wealthy individuals.

These alternative channels of influence are not only substitutable with traditional lobbying; they are often more effective and less transparent. For instance, wealthy actors can achieve their policy goals by funding academic research that shifts public discourse, or by supporting litigation strategies that circumvent Congress altogether. These efforts rarely show up in lobbying disclosures, making them functionally invisible to traditional metrics.

The Empirical Illusion of Stable Lobbying Markets

Studies observing low or relatively stable concentration in lobbying expenditure patterns suggest that economic concentration does not lead to disproportionate political power. For example, a recent study by Nolan McCarty and Sepehr Shahshahani, comparing two decades of Lobbying Disclosure Act expense reports to economic concentration and revenue, found that increasing corporate revenue does not lead to a disproportionate growth in the corporation’s lobbying spend, and that an industry’s market concentration does not lead to a corresponding concentration in the industry’s lobbying “market.” This suggests there is little relationship between economic concentration and concentration of lobbying power. The reliability of that conclusion rests on the assumption that measurement error in the lobbying data, though the data are incomplete, is at least stable over time. If the magnitude of underreporting or misclassification is consistent year over year, then trends in lobbying concentration should still be informative about broader political economy dynamics, even if the data are imperfect.

This assumption has intuitive appeal. Longitudinal trends in a flawed dataset can, under some conditions, reveal meaningful shifts in behavior or structure. But this defense falters upon closer inspection, because it ignores the dynamic nature of political influence and the fungibility of influence-seeking behavior across channels. In practice, actors in political influence markets routinely engage in cross-channel substitution—dynamically shifting resources between lobbying, campaign finance, judicial advocacy, and public relations—in response to legal, political, and reputational shifts. These substitutions are not random; they are deliberate adaptations to maximize influence under changing constraints.

For example, following the 2007 Honest Leadership and Open Government Act (HLOGA), which imposed stricter disclosure requirements on lobbyists, many influence professionals rebranded themselves as “strategic consultants,” continuing their work without triggering Lobbying Disclosure Act reporting thresholds. Even more significantly, after Citizens United lifted restrictions on independent political spending, wealthy individuals moved significant resources into super PACs and dark money vehicles in ways that do not appear in lobbying disclosures. These shifts render the assumption of time-invariant measurement error implausible. As the legal and regulatory landscape changes, so too does the composition and visibility of political influence-seeking behavior.
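
A stylized simulation can make the point concrete. The sketch below is my own illustration with invented numbers, not the paper’s data: wealth concentrates over time among a top decile of influence-seekers, and after a Citizens United-style shock those actors shift most of their spending into channels that never appear in lobbying disclosures. Concentration measured from disclosed lobbying alone then looks flat, even as concentration of total influence spending keeps rising:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 hypothetical influence-seeking actors with heavily skewed budgets.
base = rng.lognormal(mean=0.0, sigma=1.5, size=200)
top = base >= np.quantile(base, 0.90)        # the wealthiest decile

def hhi(spend):
    """Concentration of spending, HHI-style: sum of squared % shares."""
    shares = 100 * spend / spend.sum()
    return float((shares ** 2).sum())

for year in (0, 5, 10, 15, 20):
    # Top-decile budgets grow faster, so true influence spending
    # concentrates over time...
    budgets = base * np.where(top, 1.12, 1.05) ** year
    # ...and after a disclosure-law shock at year 10, the top decile
    # routes 60% of spending through undisclosed channels (super PACs,
    # dark money, media, litigation) instead of reported lobbying.
    hidden = np.where(top, 0.60, 0.05) if year >= 10 else 0.0
    disclosed = budgets * (1 - hidden)
    print(f"year {year:2d}: HHI(total)={hhi(budgets):6.0f}  "
          f"HHI(disclosed)={hhi(disclosed):6.0f}")
```

In runs like this, the disclosed series plateaus or dips after the shock while the total series keeps climbing, which is precisely the signature the lobbying data would show if measurement error were time-varying rather than stable.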

Dynamic Substitution and the Post-Citizens United Shift

The 2010 Citizens United decision and related rulings drastically altered the strategic calculus of political influence. They enabled unlimited independent expenditures, allowing ultra-wealthy individuals to create expansive political infrastructures outside traditional corporate channels. These donors now routinely bypass firms’ rent-seeking allocations and operate instead through super PACs, nonprofit advocacy groups, and partisan media.

This shift from firm-based lobbying to capital-based influence is critical. A capital-holder with a diverse portfolio of shares across multiple industries may find it more efficient to invest directly in ideological advocacy and policy environments that favor their overall economic position than to work through individual firms’ budget allocations for political rent-seeking. This strategy is especially potent when coordinated across issue advocacy, electoral influence, and thought leadership.

Notably, while aggregate corporate lobbying plateaued post-2010, independent expenditures by individuals and non-corporate entities skyrocketed. Consider the graph below, constructed from data obtained from opensecrets.org. Using federal election years, when campaign-related influence expenditures are likely highest, I show that the exponential growth in lobbying prior to its sudden plateau may be accounted for by the explosion of alternative influence channels utilized by new forms of influence-purchasing organizations made possible by Citizens United and its progeny. This divergence suggests a strategic reallocation of political investments, not a reduction in influence-seeking. It is not that political capture has stalled; it has evolved beyond ostensibly observable lobbying to opaque new influence channels, and beyond the firm as the principal influence-market actor.

Note: All data from OpenSecrets.org

Market Concentration, Shareholder Wealth, and the Feedback Loop of Influence

The link between economic concentration and political capture is most visible in the distribution of extraordinary shareholder wealth. Dominant firms—particularly in sectors with significant barriers to entry like tech, finance, and pharmaceuticals—generate supra-competitive profits. These profits flow disproportionately to a small group of shareholders, often billionaires with significant stakes in multiple dominant firms. For example, just five companies account for over 50% of the Nasdaq index. Collectively, their market capitalization exceeds $16 trillion, and their profit margins range from 10% to over 50%.

Moreover, the top 1% of households now control over 50% of U.S. corporate equities, the top 10% own nearly 90%, and dominant large-cap firms drive the returns from that equity. This extreme skew ensures that shareholder value maximization, often invoked as the corporation’s principal purpose, effectively channels wealth to a small elite. To wit, Elon Musk just secured a $29 billion payment from Tesla despite the electric vehicle company’s sliding sales, a slide due in part to Musk’s politics, which are antithetical to many electric vehicle consumers. This elite, in turn, finances the ideological and institutional infrastructure that resists regulatory or redistributive reforms, reinforcing both economic and political concentration.

For example, Elon Musk, Jeff Bezos, Mark Zuckerberg, and the Koch brothers derive the vast majority of their wealth from dominant firms in their respective sectors. Through a mixture of campaign contributions, media control, and think tank funding, these individuals have become central players in shaping the policy landscape, far beyond what traditional corporate lobbying would suggest. The over $290 million spent by Elon Musk on influencing the 2024 election included direct campaign contributions as well as $240 million funneled into Musk’s own America PAC and $50 million toward political ads through the Citizens for Sanity PAC. Musk has leveraged his acquisition of X (formerly Twitter) to support conservative and ultraright political forces worldwide, with the Associated Press finding that Musk’s engagement can result in political hopefuls gaining millions of views and tens of thousands of new followers on his platform. When Jeff Bezos acquired The Washington Post in 2013, he pledged journalistic independence; in early 2025, however, he directed that the Opinion section “write every day in support and defense of” free markets. In conjunction with Meta, Mark Zuckerberg has donated hundreds of millions of dollars to colleges and universities like MIT and UC Berkeley, raising concerns about how such largesse may influence research agendas. And the billionaire Koch brothers fund a network of 91 think tanks and organizations, including the American Enterprise Institute and the Heritage Foundation, that often promote “pro-business” viewpoints.

Reassessing Political Influence in a Multi-Channel Ecosystem

Researchers must move beyond single-channel metrics like lobbying data and adopt frameworks that capture the full architecture of political influence. This includes recognizing the strategic complementarities and dynamic substitution between campaign contributions, dark money issue advocacy, judicial influence, media ecosystems, and think tank networks. Influence is now wielded across a portfolio of channels, often in coordinated fashion and backed by extraordinary wealth.

Just as antitrust scholars recognize the importance of cumulative advantage and non-price effects in assessing market power, political economists must account for the structural and cumulative nature of influence. Political power is not merely purchased in discrete transactions; it is cultivated over time, embedded in relationships, and reinforced through systemic advantages in access, ideology, and information.

From Misdiagnosis to Reform

The forthcoming Supreme Court decision in NRSC v. FEC threatens to further erode transparency in an already opaque political economy. To understand and confront the risks this poses, we must discard the illusion that political power can be adequately assessed through lobbying data alone. Political capture in the post-Citizens United era is no longer primarily the domain of corporations, which were only ever a proxy for the profit interests of their owners—it is the domain of capital.

Abundance liberals risk sleepwalking into this moment. By focusing only on the consumer-welfare concerns of prices, output, and innovation, they miss the broader political implications of concentration. They treat economic and political power as separate, when in fact they are intertwined in a dangerous feedback loop.

Economic concentration produces wealth inequality; that wealth finances multi-channel influence; that influence protects the structures that maintain concentration. This isn’t a conspiracy; it’s a rational response to a regulatory environment that allows wealth to become political power. Addressing this risk does not mean we have to choose between bigness and democracy, a false dichotomy suggested by some of the debate between Neo-Brandeisians and supply-side abundance liberals.

Policy tools exist that can break the cycle of capture by the wealthy and minimize the democratic harms associated with concentration, while ensuring that economies of scale actually deliver abundance. We need, among other things:

The debate between Neo-Brandeisians, abundance liberals, and consumer welfare centrists is not merely theoretical. It reflects competing visions for the future of American democracy. If policymakers and scholars continue to underestimate the evolving architecture of political power and ignore the direct evidence of political capture by concentration-enabled capital, they risk providing rhetorical support for even further degradations to democratic responsiveness. The real threat of “bigness” isn’t just economic inefficiency. It’s the quiet capture of democracy itself.

Randy Kim is a municipal government attorney whose research and advocacy interests include economic justice, labor rights, the political dimensions of concentrated economic power, and their intersections. Opinions expressed herein are the author’s own and do not reflect the positions or opinions of his employers.

Railroad mergers haven’t happened in a while, and that’s a good thing. During the Reagan era, the country witnessed a rapid consolidation of its railroad industry. In the two decades following the 1980 Staggers Rail Act, the number of Class 1 freight railroads in the country fell from 39 to seven. The new millennium saw an industry dominated by four major railways that collectively controlled around 90 percent of the total domestic rail operating revenues.

In light of this rapid consolidation, regulators feared an eventual transcontinental rail duopoly. The Surface Transportation Board (STB) stymied railroad mergers in 2000 and issued new merger guidelines the following year, requiring that future deals “enhance” competition. With the exception of Canadian Pacific-Kansas City Southern (CPKC), blessed with a waiver to be considered under the old merger guidelines, no Class 1 mergers have happened since the 2001 STB merger guidelines were released. That merger hiatus is set to be interrupted: In July, Union Pacific (UP) announced its intention to acquire Norfolk Southern (Norfolk) in an $85 billion deal.

The Economist touted the benefits of a UP-Norfolk tie-up, suggesting that the merger could lead to the “big four” railways becoming a “bigger two.” The magazine noted that a UP-Norfolk merger would all but guarantee that BNSF and CSX combine as well. This potential rail duopoly presents a series of issues. Moving an already consolidated industry towards a duopoly means that shippers seeking to send traffic on overlapping routes will have fewer choices and will likely face higher prices. And railroad workers will have fewer employment options within an industry dominated by two firms. The STB will have to weigh the promised efficiencies against the harms to workers and shippers when considering this merger application.

Faster Speeds Are Not a Merger-Based Efficiency

Proponents of the UP-Norfolk merger argue that the arrangement will lead to faster trains, fewer delays, and better reliability. One purported benefit is that trains won’t need to interchange if they travel with the same railroad company along their entire route.

Here is The Economist’s merger-efficiency rationale, presumably helpfully shared by an industry lobbyist:

Avoiding interchanges between networks would mean faster trains and fewer delays. According to Oliver Wyman, a consultancy, the share of intermodal goods in America that travel by rail on journeys longer than 1,500 miles increases from 39% to 65% when served by a direct line. 

There’s one problem with that rationale: It doesn’t depend on the merger. Instead, such benefits could be achieved in many cases via contract without the associated merger harms. (And that stat from Oliver Wyman is not impressive: The causation might run the other way, in the sense that demand for transport on a route might cause a railroad to acquire or invest in a direct line.)

The STB’s merger guidelines require prospective merging parties to answer the question of “whether the particular merger benefits upon which they are relying could be achieved by means short of merger.” If the claimed benefits from the UP-Norfolk tie-up could be achieved absent a merger, then it follows that those benefits should be given no weight in the STB’s adjudication.

An examination of the rail industry today shows that railroads can already streamline their connections between networks. Nothing prevents two railroads from contracting to facilitate deliveries between separate lines. Indeed, Jim Vena, chief executive of Union Pacific, has acknowledged that UP has a track-sharing arrangement with BNSF in the Northwest. Track-sharing arrangements like the UP-BNSF agreement allow a railroad to move freight across the rail lines owned by another railroad.

As part of the UP and Southern Pacific (SP) merger in the mid-1990s, UP granted BNSF trackage rights over more than 3,800 miles of track. A press release announcing the agreement stated that “BNSF will be able to serve every shipper that is served jointly by UP and SP today.” The agreement “guarantees strong rail competition for the Gulf Coast petrochemical belt, U.S.-Mexico border points, the Intermountain West, California, and along the Pacific Coast.” If UP can survive and thrive with the BNSF trackage-rights agreement, what’s stopping it from creating one with Norfolk?

A 2001 STB report analyzing the UP-SP merger stated that “BNSF has competed vigorously for the traffic opened up to it by the BNSF Agreement,” and that it had “become an effective competitive replacement for the competition that would otherwise have been lost or reduced when UP and SP merged.” The report also cites enhanced competition for shippers who previously had only one rail carrier option. A potential UP-Norfolk track-sharing agreement could allow UP to service customers in Ohio and Kentucky without needing a merger.

Track-sharing agreements can also reduce the number of interchanges necessary to deliver rail cars to customers. A railroad company with a track-sharing agreement can allow another company’s trains to move along its tracks to service customers across two different rail lines without needing to swap engines. If UP and Norfolk made such a trackage agreement, a shipper in Minnesota could ship their product to Pennsylvania without needing to swap train engines on the way, obviating the need for a merger.

The merging parties might argue that expanding these agreements will place a strain on their train dispatching systems. When two companies make a track-sharing agreement, the “host” railroad generally handles train dispatching on its own line (see the “operations” section of this Southern Pacific trackage rights agreement as an example). The host railroad has an incentive to put the tenant railroad’s trains at the back of the dispatching queue. (Anyone who has ridden Amtrak knows the experience of waiting for a freight train to pass ahead of you.) Hence, track-sharing agreements might lead to sub-optimal scheduling that would be avoided if all of the trains were owned by the host railroad. But rather than make investments in their train-dispatching systems to address these issues, railroads would prefer to merge.

It may be the case that some coast-to-coast routes cannot be streamlined through contracts, and that there are merger-specific efficiencies. In any event, the modest gains in efficiency will have to be carefully weighed against the anticompetitive effects felt by shippers and workers.

Past Mergers Have Caused Service Disruptions

Another weakness of the purported merger efficiencies is that they are undermined by the experience of prior mergers. Past mergers could not guarantee service improvements in the medium term. These operational failures directly contradict the recent promises that further consolidation of the railroad industry will improve speed and efficiency.

The aforementioned railroad merger, CPKC, was completed in 2023. During May and June of 2025, the combined firm experienced widespread service disruptions after merging the two legacy IT systems. The CPKC rail network suffered elevated delays, slower average velocity, and decreased on-time performance. In a report to the STB describing the situation, CPKC explained, “Unfortunately, despite intensive efforts by CPKC over more than two years to prepare for a smooth transition, the Day N IT systems cut-over encountered unexpected difficulties.”

These issues aren’t isolated to the CPKC merger. When the Class 1 railroad Conrail was split between Norfolk Southern and CSX in 1999, both Norfolk and CSX experienced service disruptions. Shippers also experienced service disruptions in 1997 from the UP and Southern Pacific merger. For these major mergers, degraded performance in the medium term seems to be the rule rather than the exception. The STB’s 2001 merger guidelines recognize this history and place less weight on merger efficiencies that could take years to be realized.

Shippers Could Face Higher Prices on Overlapping Routes

Most of the business press coverage of the merger has ignored or hand-waved away the overlapping routes between UP and Norfolk. The New York Times has this to say on the subject:

Because Union Pacific and Norfolk Southern do not operate in each other’s regions, the tie-up would not reduce choice between railroads in those areas. Still, the companies together accounted for 43 percent of all rail freight movements last year, according to an analysis of regulatory carload data by Jason Miller, a professor of supply chain management at Michigan State University. (emphasis added)

Yet the map in the same New York Times story (inserted below) shows that overlapping UP-Norfolk routes exist throughout Missouri and Illinois. Shippers who need to send products from, say, Kansas City or St. Louis through Chicago will have one less option.

Note: Norfolk Southern lines are shown in orange. Union Pacific lines are shown in maroon.

The eventual rail duopoly that would result from this merger would give many shippers either one or two carriers to choose from. Some shippers may become “captive shippers” who are beholden to a single railroad (aka monopoly) for their shipping needs. These captive shippers would likely face higher prices and worse service quality.

Recently, regulators have attempted to use reciprocal switching to solve the issue of captive shippers. When a shipper only has access to a single Class 1 railroad, a reciprocal switching agreement allows another outside railroad to compete for that shipper’s contract. If the outside railroad wins the contract, the incumbent railroad facilitates the freight pickup in exchange for a fee. These agreements enhance competition for shipping contracts while also compensating the incumbent railroad for using their tracks. It’s a win for both parties.

Yet the STB has only mandated these reciprocal switching agreements when a Class 1 railroad failed to meet performance standards regarding consistency and reliability. The competition standard for captive shippers should not be whether they receive a minimum level of service quality. When considering the possibility of creating a rail duopoly in this country, the STB should contemplate expanding its use of reciprocal switching agreements to enhance competition. But this reform is not a panacea for shippers facing a railroad monopoly from a merger. A recent Seventh Circuit decision overturned the STB’s 2024 reciprocal switching rule, finding it “inconsistent with the Board’s statutory authority.” This decision shows that ensuring competition cannot be achieved solely through rulemaking.

Workers Are Right to Be Skeptical of This Deal

Per The Economist, unions might “lie on the tracks” when it comes to the UP-Norfolk merger. The resulting rail duopoly from this merger would reduce the number of prospective rail employers, and prevent the bidding up of rail worker wages by rival railroads. The likelihood of layoffs and lower wages has already caused the Transport Workers Union to oppose the merger. The Sheet Metal, Air, Rail and Transportation Workers (SMART) union has also announced its opposition.

Despite the promises from UP’s and Norfolk’s management that they will preserve union jobs, the latest Class 1 merger should make rail workers skeptical. The recent CPKC post-merger service disruptions necessitated the temporary loaning of rail crews to address personnel shortages on its system. Yet one local SMART union alleges that the loaned crews reduced the number of yard jobs available for union employees. The local union alleges that CPKC took advantage of the service crisis to make these job changes. If that post-merger scenario is any guide, then the rail workers’ unions are justified in their opposition.

The Business Journalism Regarding the Deal Is Unbalanced

Despite the skepticism toward the deal voiced by rail workers and shippers (the Freight Rail Customer Alliance, representing over 3,500 businesses, has criticized the proposed merger), business journalists seem enthusiastic about it. The New York Times and The Economist both trumpeted the deal, even citing the same industry analyst, Tony Hatch, in support of claimed efficiencies. Hatch’s support for the deal was widely shared throughout the business press, as he was also quoted by NPR, PBS, and the Bloomberg Podcast. Yet no comparably skeptical take on the claimed merger efficiencies received wide circulation in the media.

This is not to say that consulting a favorable industry expert is itself a problem. Yet instances like PBS gathering quotes from two analysts touting the benefits of the deal leave readers with an unbalanced view. Our humble suggestion is that a business journalist, whether explaining a price hike or evaluating a merger, upon receiving a statement from an industry insider, should pick up the phone and seek an alternative opinion from a consumer or labor advocate (or, as a last resort, an economist not working for the merging parties).

The Surface Transportation Board Should Examine This Merger with a Skeptical Eye

The STB should ignore the glowing cheers from the business press and consider the harms to shippers and workers. This deal would likely create a transcontinental rail duopoly in this country, which would lead to even less choice for rail customers. The enhanced market power of the rail duopoly could lead to higher prices for consumers and less bargaining power for workers. Meanwhile, many of the promised service efficiencies could be gained without this merger. In light of the real costs and elusive benefits from this deal, the STB should examine the Union Pacific merger application with a skeptical eye and ensure that freight rail competition is maintained.

In a live discussion on Substack in June with Derek Thompson, neoliberal pundit Noah Smith called Dan Wang’s Breakneck a “companion volume” to Thompson’s and Klein’s recent bestseller Abundance. Wang is slated to speak at the upcoming Abundance 2025 conference, which is headlined by Klein and Thompson. Given the furious fight between the abundance faction and progressives for the soul of the Democratic party, I figured I should read it.

With Breakneck: China’s Quest to Engineer the Future (W.W. Norton 2025), the Hoover Institution’s Dan Wang makes a powerful authorial debut, deftly tracing the promise and pitfalls of China’s “engineering state” model of governance, which Wang presents as a foil to the American “lawyerly” society. In many ways, Breakneck is an ethnography of a government. And the portions of the book that lean into that frame are generally the best (largely chapters 3, 5, and 6). 

The book is fascinating and surprisingly fun—with Wang’s piercing dry wit interspersed at a near perfect frequency. It’s also frustrating. While the character of the engineering state is superbly developed, the points where its American “lawyerly” counterpart is brought into the mix are more tenuous. Wang leans heavily on a reiteration of Paul Sabin’s characterization of the American Left, from his book Public Citizens: The Attack on Big Government and the Remaking of American Liberalism (W.W. Norton 2021), with Wang’s explanation feeling paper thin next to the detailed portrait he paints of the engineering state. 

Some of this is entirely understandable; Breakneck is, first and foremost, a book about China, not about the United States. The comparatively flimsy account of the lawyerly state, however, is elevated from a footnote in the earlier chapters to a central point of framing in the book’s conclusion.

While an overall terrific read, the book is somewhat uneven, great in most places but merely good in others. Perhaps the greatest criticism of the book is, in a way, a compliment to Wang as a writer: it should be at least 50 pages longer. 

About Abundance?

Wang makes no secret that he is sympathetic to the abundance movement, though he seems to indicate that he may not count himself as a part of it. He writes on page 50:

Under banners like “abundance agenda,” “supply side progressivism,” and “progress studies,” various movements are trying to loosen American supply constraints. These are excellent ideas that I hope are broadly adopted.

All of those movements are really better understood as constituents of the broader abundance movement. After all, people within the movement identify them as such, and they share a common network of funders, conferences, and prominent personalities.

Wang, though, is not wedded to the abundance “lens,” as Ezra Klein and Derek Thompson term it, and instead is extremely clear in calling attention to some of the downsides such an approach entails when not balanced against other concerns.

In Wang’s parlance, the abundance movement seeks to make the United States less lawyerly and more engineering. And while Breakneck is supportive of that aim, it also cautions against overdoing it and promotes seeking to strike an appropriate balance between lawyers and engineers. 

This is one reason why even many critics of abundance may enjoy the book. One of the more frequent criticisms, both of Abundance and of the movement sharing its name, is that it fails to engage with the hard tradeoffs that can accompany the quest for efficiency. Proponents will laud China’s infrastructure without considering the limitations on political freedoms or repression of ethnic minorities that enable its monumental building. Or, as Klein and Thompson do, they will lament the old times when Americans built things like the transcontinental railroad, with no acknowledgement of the genocide, racialized exploitation, or corruption that enabled it.

Wang largely avoids this pitfall by confronting those questions head-on. He acknowledges, repeatedly, the environmental damage, persecution of ethnic minorities, and disastrous social engineering projects that came with high-speed rail, monumental bridges, and dams. The question raised by this acknowledgment, though, and one that could have been answered more substantially, is how possible it is to have an engineering society that steers around the worst of these harms.

First Kill All the Lawyers?

The single biggest flaw in Breakneck is its chronicling of the role and influence of lawyers in the United States. The topline argument here is simple enough: we used to be more of a nation of engineers before becoming extremely lawyerly in the back half of the twentieth century. 

In particular, Wang points to Paul Sabin’s Public Citizens and blames this development substantially on the New Left, spearheaded by figures like Ralph Nader who continually sued the government to enact progressive ends. Yet Breakneck barely addresses the necessity of the lawyerly society to preserve and defend the public from powerful interests or corrupt political regimes. Red tape has its downsides, but it also can be an impediment to the worst impulses of those in power. Particularly in the age of Trumpian power-tripping and influence peddling, with the Supreme Court playing dead (at times quite convincingly), many people would probably like it if lawyers were able to slow things down more at this particular point.

At one point Wang proxies the level of lawyerliness with the number of lawyers per capita the United States has (p.14). Even by that metric, Israel, the Dominican Republic, Italy, and Brazil are more lawyerly. Do they all struggle to build more than the United States?

There is also an issue of how one counts the lawyers. In the European continental civil law tradition, many nations separate “lawyers” from “magistrates”; in France, for instance, prosecutors and judges are excluded from the lawyer count. In the United States, by contrast, everyone who practices law is a lawyer, whether they are a defense attorney, a prosecutor, or a judge. So when Ezra Klein and Derek Thompson make a similar point to Wang and cite the United States as having four times the lawyers per capita as France (Abundance pp. 92-93), it is, at best, misleading.

Additionally, there are other metrics that would indicate the United States has become less lawyerly. If you look at presidents before and after the turn of the 20th century, the latter group has a much lower share of lawyers and includes the only two engineers (Hoover and Carter). Of the first eight presidents, six were lawyers. Of the eight most recent, only three went to law school. (Wang’s exploration specifically cites how many recent presidents were lawyers on page 4.)

The greatest builder of all American presidents, Franklin Delano Roosevelt, was also a lawyer. The transcontinental railroad, one of our nation’s most defining megaprojects, was begun during the administration of Abraham Lincoln, famously a lawyer.

Wang quotes Alexis de Tocqueville calling lawyers the “American aristocracy” all the way back in 1833 (p.13). Breakneck’s conclusion would have been strengthened by an exploration of when America transitioned from an engineering society to a lawyerly one and how to determine the threshold where legalism becomes overburdening.

While there is some truth to the argument that the United States is a procedure-obsessed nation, Wang’s underexploration of this point means the book concludes on what is probably the weakest link in his analysis.

Embracing Engineers

The limits of Wang’s description of the lawyerly state are stark because of how they stand in contrast to the detailed and nuanced articulation of the engineering state. To wit, Breakneck’s third chapter is a superb crash course on the cast of engineers who made China into the nation it is. Wang strikes an excellent balance here, keeping the conversation approachable to readers without much knowledge of modern Chinese history while still getting into the weeds enough to provide valuable commentary and insights. 

In the third chapter, Wang outlines a way of thinking about technology that sharply diverges from its American counterpart. He argues for what is in some ways a very worker-centered model of technology, noting that “Viewing technology as people and process knowledge isn’t only more accurate; it also empowers our sense of agency to control the technologies we are producing” (p.75). 

Reasoning from this model, Wang goes on to criticize the “elite consensus” that viewed American manufacturing as dispensable, and the dismissal of unions and heterodox economists who warned against deindustrialization (p.76, 78). Wang also points to financialization and corporate consolidation as major drivers in America’s transformation into a country that doesn’t build anymore (p.77). This is another key point where Wang aligns with many critics of abundance.

It is also at this point, however, that Wang begins a pattern of praising Tesla, calling it “America’s great hope in auto manufacturing” (p.77). While he does rebuke Musk’s shredding of the federal government towards the end, there are several other instances where Tesla is cited as an exemplar of American industry. That Elon Musk is betting the company on a robo-taxi gambit and has directed Tesla away from its successful sedan models and towards the disastrous Cybertruck is largely undiscussed. Also undiscussed are longstanding arguments that Tesla is actually a case in point of financialization driving manufacturing rationales, and the company’s reliance on selling regulatory credits as a critical source of income.

The chapter also introduces the critique of America’s emphasis on software and intangible technology at the expense of what Xi Jinping calls the “real economy,” or making physical things. “Too many people,” Wang contends, “have argued away the strategic importance of manufacturing” (p.91). 

After establishing the importance of being able to build and transform the real world, the next two chapters explore the downsides of the engineering state that have accompanied China’s building prowess. The fourth chapter discusses how the cadre of engineers in the Deng Xiaoping-era tried to engage in social engineering, and the chaos that such policies sowed. Interspersed with stories of forced sterilization, infanticide, and coercive abortion, Wang’s telling of the one-child policy is chilling. As he explains, “the one-child policy could only have been implemented in the engineering state. While the state possessed a bureaucracy to enforce controls of such extraordinary scale, there wasn’t a sufficiently developed civil society to fight for legal protection against it” (p.118, emphasis in original).

The hardline practice of affixing social policy to clear numerical benchmarks is demonstrated again in the next chapter’s telling of Wang’s firsthand account of the zero-covid policy. This account is part of the dire warning embedded in Breakneck about what happens when a country is not lawyerly enough.

Talking Tradeoffs

Wang argues that China and the United States would both benefit if they were a little more like one another. China should invite some of the lawyerliness in order to protect and serve its citizens, while we should do away with some procedural obsession to be able to build more. How exactly, and how much, to do that are left mostly open.

This is probably the point on which I most wish Wang had elaborated. I would like to know more about how he envisions the United States becoming more engineering-focused without sacrificing the procedural protections provided by lawyerliness.

One point completely absent from the book is how important the rule of law is at preventing authoritarianism. Particularly in light of January 6th, 2021, the more recent attempted coup in Brazil, an attempted coup in South Korea, and redistricting arms races at home, the ability of legal bulwarks to defend democracy seems extremely topical. And yet Wang does not offer any thoughts on whether lawyerliness is important to stave off democratic backsliding.

With just eight pages left in the book, Wang declares that “For various American ideals to be fully realized, the country will need to recover its ethos of building, which I believe will solve most of its economic problems and many of its political problems too.” This is reflective of the last chapter, which tries to parlay the complex comparative tapestry Wang so intricately wove into a series of straightforward, broad conclusions. Those conclusions aren’t wrong, but are dependent on the context from which they are taken. Thus, the more sweeping general conclusions feel somewhat hollow coming after six chapters of richly developed contextualization. 

The Bottom Line

Breakneck will be of great value to anyone interested in modern China, geopolitics, infrastructure, or industrial policy. Wang’s perspective is invaluable, grounding debates on how to have a society that builds with his own lived experiences, interweaving historical analysis and personal reflection, and fleshing out the Chinese “engineering state” model, which is often invoked but rarely interrogated in contemporary American discussions of industrial policy. 

There is no shortage of debatable points the book makes, but there is absolutely no denying that it is an excellent read. Breakneck is handily one of the best-written popular nonfiction books in recent years. 

Delta president Glen Hauenstein told investors in July that AI-based pricing is currently used on about 3 percent of its domestic network, and that the company aimed to expand AI pricing to 20 percent of its network by the end of the year. This is bad news for flyers, and given the particular way Delta is accessing the technology, is particularly bad for competition.

Airlines have been using “dynamic pricing” for decades, which entails setting fares based on common (as opposed to individualized) factors like demand, timing, real-time supply, and pricing by competitors. A spokesperson for Delta insists the new technology is merely “streamlining” its dynamic pricing model. 

Personalized pricing, made possible via surveillance and AI, is distinct from dynamic pricing, in that the former allows a firm to condition pricing on the circumstances of the customer. Hence, two people shopping for airfares at the same time might see different prices based on things like travel purpose (business or leisure), income estimates, browsing behavior, ticket purchase history, website used, or type of device used.

To implement the new technology, Delta is working with Fetcherr, an Israel-based GenAI pricing startup whose clients include other airlines like Virgin Atlantic and WestJet, to power the pricing changes. Alas, the three carriers share overlapping routes. From London, Virgin Atlantic flies to several U.S. destinations, including Atlanta, Boston, Miami, Las Vegas, Los Angeles, New York, Orlando, and Washington D.C. Delta also operates flights from London to many of those same U.S. cities, including Atlanta, Boston, Los Angeles, and New York. WestJet has expanded its network into the United States, including to such destinations as Anchorage, Atlanta, Minneapolis, Raleigh, and Salt Lake City. (Some of these routes are in partnership with Delta.) Economists and antitrust authorities recognize that there could be anticompetitive effects if common pricing algorithms lead to collusion. Check out the DOJ’s antitrust case against RealPage, in which landlords are alleged to have turned over their pricing decisions to a common algorithm.

During the company’s second-quarter earnings call, Delta CEO Ed Bastian noted, “While we’re still in the test phase, results are encouraging.” Hauenstein called the AI a “super analyst” and said the results have been “amazingly favorable unit revenues.” These boasts, aimed at investors as opposed to consumers, mean that AI-based pricing is raising profits—else the results would be ambiguous or discouraging. Those extra profits are likely coming off the backs of consumers. And as we will soon see, rising unit revenues mean that AI-based pricing is not leading to price reductions on average, contra the predictions of price-discrimination defenders.

Price Discrimination Is Bad for Consumers, Even When Implemented Unilaterally

Economic textbooks are filled with passages claiming that the welfare effects of price discrimination are ambiguous. It’s worth revisiting the key assumption that permits such an innocuous characterization—namely, an increase in output. As we will see shortly, this assumption isn’t easily satisfied in the airline industry.

Consumer welfare, or “surplus,” is the area underneath the demand curve bounded from below by the price. For a particular customer, surplus is the difference between her willingness to pay (WTP) and the price. Importantly, when it comes to first-degree price discrimination—charging each consumer her WTP—all consumer surplus is transferred to the producer, meaning consumers receive no benefit from the transaction beyond the good itself.

Let’s start with the basics. The figure below shows what happens when a firm facing a downward-sloping demand curve—an indicator of market power—is constrained to charging a single, uniform price to all comers. The profit-maximizing uniform price, P*, is found by locating the intersection of the marginal revenue and marginal cost curves, and then looking up to the demand curve for the corresponding price.

Even at the profit-maximizing uniform price, P*, the firm with market power leaves some consumer surplus on the table, equal to the area of the triangle, ABP*. This failure to extract all consumer surplus motivates many anticompetitive restraints that we observe in the real world, such as bundled loyalty discounts. Another way to extract that surplus is, if possible, to charge each consumer along the demand curve between A and B her WTP. And that’s where AI-based personalized pricing comes in. Consumers along that portion of the demand curve are clearly worse off relative to a uniform pricing standard. The only consumer who is indifferent between the two regimes is the one whose WTP is just equal to P*, situated at point B of the demand curve.
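To make the figure’s logic concrete, here is a minimal worked example, assuming linear demand and constant marginal cost; the numbers are illustrative and are not drawn from the figure:

```latex
% Illustrative example (hypothetical numbers): linear demand, constant MC
\begin{align*}
\text{Demand: } & P = 100 - Q, \qquad MC = 20 \\
\text{Marginal revenue: } & MR = 100 - 2Q \\
MR = MC: \; & 100 - 2Q = 20 \;\Rightarrow\; Q^{*} = 40,\; P^{*} = 60 \\
\text{Consumer surplus at } P^{*}\text{: } & \tfrac{1}{2}(100 - 60)(40) = 800 \\
\text{Consumer surplus under first-degree discrimination: } & 0
\end{align*}
```

Under perfect discrimination, the firm charges each buyer her WTP all the way down the demand curve, so every dollar of that 800 in surplus moves to the producer.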

Defenders of price discrimination are quick to point out that the price-discriminating firm can reduce its price, relative to P*, to customers on the demand curve from B to C, bringing fresh consumers into the market (who were previously priced out at P*) and expanding output. After all, they claim, there is incremental profit to be had there, equal to the difference between the WTP (of admittedly low-value consumers) and the firm’s marginal cost. There are at least two practical problems, however, with this theoretical argument as applied to airlines.

First, this argument presumes that airline capacity can be easily expanded. But an airline can only enhance output in a handful of costly ways. An airline can add more planes, which are not cheap, or more seats per plane, decreasing the quality of the experience for all passengers. An airline could also add more flights per day, but this too is costly because the airline must secure additional gate access from airports.

Second, as noted above, the customers between B and C along the demand curve are the low-valuation types, who are not coveted by legacy carriers like Delta or United. These low-valuation and budget-conscious customers tend to fly (if at all) on low-cost carriers and ultra-low-cost carriers like Southwest and Spirit, respectively. Serving these customers, as opposed to extracting greater surplus from high-valuation customers, is likely less attractive to Delta, especially if doing so would compromise the quality of service for existing customers (through, for example, cramming more seats on a plane), or would put downward pressure on prices of other items that are sold on a uniform basis (e.g., in-flight WiFi or alcoholic drinks).

Even if you don’t accept these practical arguments, it bears repeating that under first-degree price discrimination, there is no consumer surplus, even at the expanded output. So expanded output here is nothing to cheer about, unless you are an investor in the airlines or work as an airline lobbyist or consultant. And if there’s any doubt about the price effects of AI-based pricing, recall the boast from Delta’s executive—unit revenues are rising, which can’t happen if Delta is using the technology to drop prices on average.

Price Discrimination Is Even Worse for Consumers When Implemented Jointly with Rivals

If this weren’t bad enough, there’s a knock-on effect from AI-based personalized pricing, especially if the technology vendor is also supplying the same pricing assistance to Delta’s rivals. Recall that Delta uses a pricing vendor that is also advising airlines whose routes overlap with Delta’s. In that case, the common pricing algorithm can facilitate collusion that would otherwise not be possible. We can return to our figure to see how collusion can make consumers even worse off relative to discriminatory pricing.

Relative to the original demand curve (Demand 1), the demand when prices are set via a common pricing algorithm (Demand 2) is less elastic, meaning that an increase in price does not generate as large a reduction in quantity. In lay terms, the demand curve is steeper. This rotation of the demand curve, made possible by weakening an outside substitute via collusion, causes the uniform profit-maximizing price to rise above P* to P**. And this higher price opens the possibility of additional surplus extraction via price discrimination, equal to the area DAE, for the highest-value customers.
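A minimal numeric sketch of this rotation, using hypothetical linear demand curves and a constant marginal cost (none of these values come from the article), shows why a steeper demand curve through the same point yields a higher uniform price:

```python
# Hypothetical numbers: uniform monopoly pricing before and after a demand
# rotation (the rotation stands in for collusion weakening outside substitutes).

def uniform_price(intercept, slope, mc):
    """For linear demand P = intercept - slope*Q, MR = intercept - 2*slope*Q.
    Setting MR = MC gives Q = (intercept - mc) / (2 * slope)."""
    q = (intercept - mc) / (2 * slope)
    return intercept - slope * q, q

MC = 20.0

# Demand 1: P = 100 - Q
p1, q1 = uniform_price(100, 1, MC)

# Demand 2: rotate around the old optimum (Q = 40, P = 60) to slope 2,
# i.e., P = 140 - 2Q -- steeper (less elastic) but through the same point.
p2, q2 = uniform_price(140, 2, MC)

print(f"P*  = {p1:.0f} at Q*  = {q1:.0f}")   # P*  = 60 at Q*  = 40
print(f"P** = {p2:.0f} at Q** = {q2:.0f}")   # P** = 80 at Q** = 30
```

The rotated curve passes through the old optimum, yet the profit-maximizing uniform price rises from 60 to 80, mirroring the move from P* to P** in the figure.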

Where Do We Go from Here?

At this point, we have two different policy choices. The first is to pursue an antitrust case against Delta and Fetcherr. The problem with antitrust—and I make this argument against my own economic interests as an antitrust economist—is that such a case against Delta would not be resolved for years. The DOJ’s case against RealPage was filed nearly a year ago (August 2024), and we’ve seen little progress. In complex litigation, the defendants need time to produce voluminous data and records in response to subpoenas, the plaintiffs’ economists will have to understand those data and build econometric models that will be subjected to massive scrutiny by even more economists, there will be hearings, motions for summary judgment and to exclude testimony, and then a trial.

The second intervention is to ban, via regulation at either the city or federal level, the use of common pricing algorithms for airlines or more broadly. Similar bans have been imposed by cities against RealPage and Airbnb, which also has been accused of employing a common algorithm (and is the subject of a forthcoming piece). Senators Ruben Gallego of Arizona, Mark Warner of Virginia, and Richard Blumenthal of Connecticut sent a letter to Delta on July 22 correctly asserting the harms from Delta’s AI-based pricing, which will “likely mean fare price increases up to each individual consumer’s personal ‘pain point’ at a time when American families are already struggling with rising costs.” A Senate hearing could be in order. But Delta won’t back off from this approach unless and until it perceives the threat of regulation to be credible.

Of the two options, I prefer the latter. With luck, Congress will too!

Back in February, Rob Manfred, the commissioner of Major League Baseball (MLB), sang a tune that is truly a classic in the history of labor relations in baseball. According to ESPN, Manfred noted that fans are sending emails expressing concern over the sport’s lack of a salary cap, purportedly spurred by an offseason spending spree by the Los Angeles Dodgers, a team that has won its division eleven times in twelve years. Manfred insisted that:

This is an issue that we need to be vigilant on. We need to pay attention to it and need to determine whether there are things that can be done to allay those kinds of concerns and make sure we have a competitive and healthy game going forward.

The NBA adopted a cap on payrolls (i.e., a salary cap) in 1983. Soon after, caps were instituted in the NFL, NHL, and WNBA. Despite the persistent efforts of baseball owners in the last years of the 20th century, MLB players have consistently resisted the establishment of any cap on payroll.

Back in the 20th century, this conflict over salary controls led to a number of player strikes and owner lockouts. The last of these labor disputes began during the 1994 season. This strike led to the cancellation of the 1994 World Series and the postponement of the start of the 1995 season. Despite inflicting these losses, the strike didn’t lead to any cap on team payrolls.

For the most part, calls for a cap seem to have subsided in the 21st century. But in 2024, the Los Angeles Dodgers, with a payroll of $265.9 million, won the World Series. In the offseason, the Dodgers added about $65 million more to their payroll, and now lead all of baseball in spending on players (in 2024, they ranked third). Because many believe that spending and wins are highly correlated in baseball, it might have appeared that the Dodgers were trying to buy another title. And apparently this led some fans to email Rob Manfred.

We don’t know how many emails Manfred actually got calling for a salary cap. We do know that it is a myth that teams can buy championships in baseball. Back in 2006, we devoted an entire chapter in The Wages of Wins to the question “Can You Buy the Fan’s Love?” The chapter details all the reasons we thought baseball teams can’t simply buy wins and championships. For now, I’ll simply repeat the observation that from 1988 to 2006, only 18.1% of the variation in a team’s winning percentage could be explained by that team’s relative payroll (i.e., team payroll divided by the average payroll that season). That leaves roughly 82% of the variation in winning percentage to be explained by factors other than what teams spent on players. In simple words, teams cannot simply buy wins!

This analysis was repeated for 2011 to 2024. Across these 14 seasons, only 13% of the variation in winning percentage could be explained by relative payroll.
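For readers who want to replicate this kind of calculation, here is a minimal sketch; the DataFrame and its column names are hypothetical stand-ins rather than the original study’s data:

```python
import pandas as pd

# Hypothetical data: one row per team-season with payroll and winning percentage.
df = pd.DataFrame({
    "season":  [2011, 2011, 2012, 2012],    # ...all teams, all seasons
    "payroll": [197e6, 36e6, 198e6, 55e6],  # team payroll in dollars
    "win_pct": [0.599, 0.395, 0.586, 0.340],
})

# Relative payroll: team payroll divided by the average payroll that season.
df["rel_payroll"] = df["payroll"] / df.groupby("season")["payroll"].transform("mean")

# For a simple one-variable regression, the share of variance explained
# (R-squared) is the squared correlation.
r_squared = df["rel_payroll"].corr(df["win_pct"]) ** 2
print(f"Share of win_pct variance explained by relative payroll: {r_squared:.1%}")
```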

So if spending can’t fully explain wins, can it explain championships? Turns out buying a title is even harder. From 2011 to 2024, none of the ten teams with the highest relative payroll even made it to the World Series. Yes, the highest spending teams in baseball didn’t even get a chance to lose in the World Series!

At the All-Star break in 2025, we seem to be seeing the same story. The top team in baseball in terms of winning percentage is the Detroit Tigers. The Dodgers rank second, but essentially are not much better than five or six other teams. Some of those teams, like the New York Mets, also have a very high payroll. The Milwaukee Brewers, with an impressive record of 16 games over .500, pay their players less than the Tigers.

How can the Tigers and Brewers compete with the Dodgers and Mets? It turns out that baseball effectively has two different labor markets. The Dodgers and Mets generally find their best players in the free agent market. To be in free agency, a player must complete six years of MLB service. Once a player’s career reaches that point and they are without a contract, they can sell their services to the highest bidder.

Back in 2016, the Detroit Tigers played in that market. But when the Tigers’ owner Mike Ilitch passed away in 2017, his son (Christopher) decided the Tigers would get out of the free agent market and try to find their best players in the draft. Hence, most of the Tigers today have less than six years of service. As the Tigers have shown this year, such players can be quite good. And relative to the players on the Dodgers, they are also quite cheap.

Of course, the Dodgers have also built a competitive team. And maybe the Dodgers do win the title in 2025. But at the All-Star break it seems clear that title is not guaranteed. So, why won’t all that spending ensure a Dodger repeat?

Let’s start with the obvious reason. Baseball is a game where you hit a round ball with a round stick. There is no hitter in baseball that a pitcher can’t get out. And there is no pitcher in baseball that a hitter can’t hit. The game simply has a large random element. In addition to random variation in performance, there is also no way to predict injuries. The injury issue is especially relevant in the free agent market, as many players are on the downside of their career after six years of playing.

Beyond the randomness of performance is the simple fact that the difference in playing talent has shrunk considerably across time. We can see this if we look at the level of competitive balance in baseball. As noted in The Wages of Wins, competitive balance improved dramatically in the second half of the 20th century as the talent pool got much bigger. Specifically, as teams started employing African Americans and then players from all over the world, the supply of very talented players increased. Consequently, more teams had access to very good players.

One can see this simply by looking at how often teams win more than two-thirds of their games. Since 1901 this has happened just 30 times. Of these, only three instances occurred in the 21st century, and six occurred in the second half of the 20th century. That means that prior to 1950, this happened 21 times (30 minus three minus six). Once upon a time, it was truly possible to build a baseball team that dominated the game. This happens when your team has lots of great players and other teams… well, they don’t!

Of course, that is just dominance in the regular season. As the Seattle Mariners learned in 2001, dominating the regular season doesn’t guarantee post-season happiness. After winning 116 games in 2001—tied for the most wins in baseball history—the Mariners were eliminated in the American League Championship Series.

At that time, eight teams participated in the playoffs. Today that number has grown to twelve. Because playoff teams are often not much different from one another, the odds of any given playoff team winning the World Series are probably less than 10% (with twelve roughly matched teams, each starts with about a one-in-twelve chance). And that is true regardless of how much money you spend. Player performance from week to week is simply not that predictable. If your star hitters or pitchers (or both) have a bad week in October, your fans will end the season sad.

All of this means the Dodgers simply can’t buy a title. So, why do owners want a salary cap? The spending by teams like the Dodgers does bid up the cost of free agents. If the league could cap spending, players would generally be cheaper. And that would transfer millions of dollars back to the owners.

Yes, none of this is about improving competitive balance and making the game better. In fact, as we noted in The Wages of Wins, there isn’t even much evidence fans truly want competitive balance. Extensive studies of consumer demand and competitive balance tell that story. And every baseball fan learned that lesson when the Texas Rangers played the Arizona Diamondbacks in the 2023 World Series. Fans who wanted small-market teams (i.e., teams not on the coasts) to be competitive got what they wanted that year. But it turns out, few other people cared to watch.

In the end, the call for a salary cap has nothing to do with making the game more popular. Owners have consistently called for a cap on pay for the obvious reason they want to pay their workers less. And gullible fans (and members of the media) are often quite happy to help them achieve their dream.

But if baseball does achieve a cap on pay after 2026, you are not going to see balance in baseball improve dramatically. And you won’t see more fans in the stands or watching on television. What you will see is more owners counting more dollars.

Once again, we said all this twenty years ago in The Wages of Wins. Yes, sometimes it is fun to hear the classics!

Since the launch of ChatGPT back in November of 2022, what was once a concept confined to sci-fi novels has now certifiably hit the mainstream. The highly visible advances in artificial intelligence (AI) over the past few years have been either awe-inspiring or dread-inducing, depending on your perspective, your occupation, and maybe how much Nvidia stock you owned before 2023. Many white-collar workers now fear that they may face the same job-displacing effects of automation that have plagued their blue-collar peers over the past several decades.

Nevertheless, at least one powerful constituency is absolutely thrilled with the rise of AI and is betting big on its success: Big Tech. Microsoft, currently the second most valuable company in the world with a mind-boggling $3.7 trillion market cap, is a leading AI zealot. This fiscal year alone, Microsoft plans to invest over $80 billion in AI-related projects.

As one of its big sales pitches to investors and consumers, Microsoft argues that AI has prompted massive efficiency gains internally, including the elimination of a staggering 36,000 workers since 2023. Microsoft CEO Satya Nadella estimated that as much as 30 percent of the company’s code is now written by AI. Mr. Nadella, of course, has a lot riding on convincing shareholders and consumers that AI is a big deal. So, to what extent this claim is legitimate, or pure marketing fantasy, is uncertain. A recent working paper authored by Microsoft researchers and academics analyzes the productivity increases in (non-terminated) software developers who use AI tools. The authors find that developers using AI tools saw an average 26 percent increase in their productivity. If such experimental results generalize to the broader labor market, AI will certainly have a dramatic impact. But despite the evident benefits to companies from this productivity boon, whether workers themselves stand to gain remains uncertain.

A Look into Software Developers’ Compensation

AI models capable of assisting with writing and coding tasks have existed for a couple of years now. Taking Mr. Nadella’s statements at face value, such models enjoy widespread utilization by developers and coders working for Big Tech. As such, if workers—and not just their employers—stand to benefit from AI, then worker compensation should reflect at least some evidence of these productivity gains.

Simple economic models of the labor market suggest that a technology that boosts the marginal productivity of labor will cause a concomitant increase in worker pay. After all, in competitive labor markets, workers should capture 100 percent of their marginal revenue product (MRP), which increases with productivity. But such an outcome rests upon a strong and often-violated assumption that the relevant labor market is perfectly competitive. When an employer has buying power, it can drive a wedge between the worker’s MRP and her wage. In lay terms, this means the employer can appropriate value created by the worker without sharing in the gains, the Pigouvian definition of exploitation. Thus, the extent to which workers benefit from this AI-induced productivity remains unclear. (In addition, a monopsony reduces employment relative to a competitive labor market; Microsoft’s mass firings since its acquisition of Activision in 2023 are also consistent with the exercise of monopsony power.)
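A minimal sketch of the standard textbook monopsony condition captures this wedge. An employer facing an upward-sloping labor supply curve hires where the marginal cost of labor equals MRP, which implies a wage marked down below MRP:

```latex
% epsilon_s: the wage elasticity of labor supply facing the employer
w \;=\; \frac{MRP}{1 + \frac{1}{\varepsilon_s}} \;<\; MRP
```

The less elastic the labor supply the employer faces (that is, the fewer credible outside options workers have), the larger the markdown of wages below MRP.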

While a recent article in The Economist highlights how the AI boom has led to some “superstar coders” seeing their “pay [] going ballistic,” this subset of workers represents a tiny sliver of the total labor market of developers. In that same article, The Economist also produced a graph showing a dramatic slowdown in hiring—job postings for software developers have dropped by more than two-thirds since the beginning of 2022. To understand how AI is affecting workers, we need to look at the labor market at large. Unfortunately, our analysis suggests that software developers have not yet benefited (and may never fully benefit) from their increase in productivity.

Figure 1 below takes the broadest look at how all software developers and computer programmers in the United States have (or have not) benefited from the rise of AI. The results are not pretty: While the inflation that took off in 2022 has hit all workers hard, eroding much of their nominal wage increases, both computer programmers and software developers are faring much worse than the average worker. Per the BLS, the median wage of computer programmers decreased by 5.89 percent between 2022 and 2024.

Figure 1: Real Wages Are Flat for Most Workers, But Have Declined for Programmers and Developers

Source: Bureau of Labor Statistics’ Occupational Employment and Wage Statistics Annual Report; CPI sourced from FRED. Notes: We transformed the nominal data using CPI to be in 2024 dollars. Hence, this chart shows the real change in wages between 2022 and 2024 (i.e., accounting for inflation). 2024 is the most recent data release, and the 2024 data are not inclusive of data from Colorado.
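As a minimal sketch of the deflation step described in the notes (the wage figures below are hypothetical placeholders, and the actual CPI annual averages should be taken from FRED):

```python
# Convert a 2022 nominal wage into 2024 dollars and compute the real change.
cpi_2022, cpi_2024 = 292.7, 313.7   # placeholder CPI annual averages

wage_2022_nominal = 95_000.0        # hypothetical 2022 median wage
wage_2024_nominal = 97_000.0        # hypothetical 2024 median wage

# Express the 2022 wage in 2024 dollars, then compare.
wage_2022_real = wage_2022_nominal * (cpi_2024 / cpi_2022)
real_change = (wage_2024_nominal - wage_2022_real) / wage_2022_real
print(f"Real wage change, 2022-2024: {real_change:+.2%}")
```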

Not even the top ten percent of software developers, including the “superstar coders” as dubbed by The Economist, appear to be thriving. Figure 1 also shows that the highest-paid computer programmers (90th percentile) saw their real wages fall by 4.11 percent.

Workers for Big Tech fared no better. Indeed, the percentage change in the median compensation for software engineers employed by Big Tech effectively mirrors that reported in Figure 1—the median software engineer saw a 2.22 percent decrease in their real wages from 2022 to 2024 per data from Levels.fyi.

Figure 2: Software Engineers Working at Big Tech Also Have Not Seen a Dramatic Rise in Wages

Source: Levels.fyi 2024 and 2023 year-end reports; CPI sourced from FRED. Notes: Levels.fyi collects self-reported data “for the top paying tech companies and locations.” Total compensation is inclusive of base salaries, stock grants, and bonuses. Note that Levels.fyi’s trend table has slightly different median compensation estimates than the box charts that we source our data from; it is unclear what causes this discrepancy. We transformed the nominal data using CPI to be in 2024 dollars. Hence, this chart shows the real change in total compensation between 2022 and 2024 (i.e., accounting for inflation).

At the very least, we see evidence that software engineering managers (depicted in yellow) have seen their compensation rise (by 2.61 percent), though nowhere near their supposed AI-powered productivity increase.

Microsoft-specific wage data were not easily accessible. The Economist reported that the median pay for software developers at “tech giants including Alphabet, Microsoft and, until recently, Meta” was close to $300,000. Lucky for us, however, Microsoft sponsors thousands of H-1B visas, which provide a source of publicly available salary data. Using these data, we can get a sense of how Microsoft software engineer compensation has evolved over time. Because they are beholden to their American employer, H-1B visa-holders likely earn wages below those of their American counterparts. Nevertheless, the trajectory of wages of H-1B workers should roughly track the trajectory of wages of their American peers.

Figure 3: H-1B Data Suggest That Microsoft Software Engineers’ Real Wages Stagnated in the 2020s

Source: Data is from H1B Grader.com, which states that “salaries data is extracted from the H1B Labor Condition Applications (LCAs) filed with the US Department of Labor by [the] Microsoft Corporation.” Notes: We combined various positions’ pay information to produce this average salary measure. Positions that were consolidated had titles indicating they were roles in software engineering or development. We explicitly excluded IT-specific roles.

While H-1B software engineers working at Microsoft saw real wage increases during the 2010s, by the 2020s, real wages appear to have stagnated. This trajectory likely reflects the trend for all Microsoft developers, including domestic workers.

While these figures are by no means perfect, if workers truly reaped benefits from their AI-boosted productivity in a significant way, the above charts should have reflected such an outcome. Unfortunately, from what we can see, wages have not captured much of AI’s productivity impact. This lends credence to the hypothesis of monopsony exploitation restraining wage growth—in other words, Microsoft (the employer) is appropriating the productivity gains of its workers, presumably because the workers do not have credible outside employment options to which they could turn easily in response to a wage cut.

Software Developers Face an Uncertain Future

Unfortunately, not only do software developers not receive boosts in their compensation commensurate with their productivity increases, but many also now risk losing their jobs. As noted above, Microsoft has shed 36,000 jobs since 2023.

The cause of these mass layoffs does not appear to lie with any underperformance on Microsoft’s part. On the contrary, Microsoft’s gross profits have continued to rise over the past few years, as seen in Figure 4 below.

Figure 4: Microsoft Has Seen Significant Profit Growth in the Past Five Years

Source: MacroTrends.net.

Microsoft stock has also performed tremendously since the release of ChatGPT. If anyone is benefiting from the increased productivity of its workers, it appears to be Microsoft itself. (To be fair, given that Big Tech workers’ compensation packages often include stock, they too benefit from the AI rally even if the compensation figures reviewed above may not reflect such increases.) The combination of layoffs and no real impact on pay at least suggests that AI will function as a substitute for, rather than a complement to, human labor.

Figure 5: Microsoft Stock Has Performed Well in the Age of AI

Source: Data retrieved using the getsymbols package in Stata, sourced from Yahoo! Finance. Notes: As is standard, we used the adjusted closing stock price. Data is from Jan. 2, 2020 to July 11, 2025. The closing price is indexed such that November 30, 2022 equals 100 (notable for being the date OpenAI first publicly released a demo of ChatGPT, which would go on to reach a million users in less than a week).
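The indexing itself is simple to replicate. A minimal sketch in Python (pandas), assuming a hypothetical msft.csv of adjusted closing prices with columns date and adj_close:

```python
import pandas as pd

# A sketch of the indexing used in Figure 5, assuming a hypothetical
# "msft.csv" of adjusted closing prices with columns: date, adj_close.
px = pd.read_csv("msft.csv", parse_dates=["date"]).set_index("date")["adj_close"]

base = px.loc["2022-11-30"]   # the date of ChatGPT's public demo release
indexed = 100 * px / base     # the base date equals 100 by construction
print(indexed.loc["2022-11-30"])  # sanity check: prints 100.0
```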

AI Fits a Trend of Growing Productivity and Wage Stagnation

Whether AI will truly revolutionize the workplace and make many human workers “go the way of the horses” remains to be seen. From what we have analyzed, however, even if AI does not replace human labor, workers should not put too much hope in reaping the rewards of their increased productivity. AI continues a trend that started back in the 1980s: the divergence between workers’ productivity growth and their wages. Without a significant policy intervention in labor markets, such as a federal job guarantee or unionization to countervail monopsony power, AI may be a technology that continues to exacerbate the inequality of the 21st century.

Last September, then-vice-presidential candidate JD Vance proclaimed in a Pennsylvania supermarket, “Eggs, when Kamala Harris took office, were short of $1.50 a dozen. Now a dozen eggs will cost you around $4.” The implication was clear—the Biden Administration’s policies allegedly caused egg prices to skyrocket. While Vance was mocked at the time for the contradiction between his statement and the dozen eggs on sale for $2.99 behind him, to the chagrin of his critics, we now know that inflationary conditions, regardless of their cause, may have been a key factor that brought right-wing populism back to the White House.

Despite the contemporaneous criticism of Vance’s statement, his critique highlighted a key vulnerability for Democrats. According to the Bureau of Labor Statistics (BLS), the average retail price of a dozen eggs was $1.46 in January 2021, when former President Biden took office. Egg prices then experienced two separate spikes: first from late 2022 to early 2023, and again from late 2023 to early 2025. In the latter episode, retail prices reached $3.82 in September 2024 (up 85 percent from the previous September) and then continued to soar to an all-time high of $6.23 in March 2025 (more than double the price from the previous March).

The mainstream media, egged on by egg industry lobbyists, pointed to one main culprit—the bird flu. Certainly, the mass culling of hens to prevent viral spread decreased egg supply, putting upward pressure on prices. In February of this year, USDA Chief Economist Seth Meyer stated that the United States had about 291 million egg-laying birds, compared to a normal flock size of 320 to 325 million (roughly a nine percent decline). From an economic perspective, it is unremarkable that a sudden and economically significant supply-side shock would cause a price increase for an inelastic good. Yet bird flu may be just one part of the story.
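To see why a modest supply shock can move prices so much, consider a back-of-the-envelope calculation. Assuming, purely for illustration, a price elasticity of demand for eggs of −0.15 (eggs are a staple with few substitutes; published estimates vary), the roughly nine percent flock decline would imply:

\[
\%\Delta P \approx \frac{\%\Delta Q}{|\varepsilon|} = \frac{9\%}{0.15} = 60\%.
\]

A 60 percent increase is substantial, but under this illustrative elasticity it falls well short of the doubling of retail prices observed by March 2025, leaving room for the other explanations explored below.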

There May Be Some Rotten Eggs

In a January 2025 letter to the Trump Administration, Senator Elizabeth Warren called for increased efforts by the Department of Justice (DOJ) and Federal Trade Commission (FTC) to investigate and curtail anticompetitive activity in the agricultural sector, pointing, in part, to the behavior of Cal-Maine, the nation’s largest egg producer. Democrats in Congress continued their advocacy for investigations through February. Then in February and March, reports by Farm Action and Food & Water Watch used empirical analysis to cast doubt on the story that bird flu alone caused the explosion in egg prices. These reports provide evidence that bird flu’s impact on total egg production has been relatively minor.

For instance, Farm Action found that monthly egg production since 2021 (the year before the bird flu epidemic) was down only three to five percent on average. These reports specifically highlight Cal-Maine, which supplies roughly 21 percent of domestic egg consumption, for its conduct in actively consolidating the egg industry. This period of elevated prices has been hugely beneficial to egg producers, with major firms like Cal-Maine seeing their profits increase three- to eight-fold.

One potential cause of the inflated prices may have something to do with the conduct of chicken hatcheries—that is, the firms that supply egg companies their chickens. Antitrust attorney Basel Musharbash explains that demand for replacement chickens typically increases following “Fowl Plagues.” For this crisis, however, hatcheries appear to have reduced the quantity of hens supplied to egg producers. And no, this does not seem to be a consequence of the bird flu affecting the hatcheries themselves: only 123,000 breeder hens have been culled since 2022, representing merely three to four percent of the U.S. breeder flock at any given time. As Musharbash explains, this quantity decrease is likely a strategic decision by Erich Wesjohann Group and Hendrix Genetics, the two companies that form the duopoly controlling the production of new egg-laying hens. This lack of competition may lead to higher prices that are passed through to consumers.

In contrast to the hatcheries that supply them chickens, the egg industry itself is far from concentrated by traditional antitrust standards. This industry structure suggests that, absent price coordination, egg prices should reflect competitive levels, or something approaching marginal cost. Using data from Egg Industry’s Top Egg Company Survey, we can provide a rough estimate of the Herfindahl-Hirschman Index (HHI) for the industry. Based on the end-of-year egg-laying flock sizes of the 52 largest U.S. egg producers, and assuming no overlapping ownership interests, the HHI for the egg industry in 2024 was approximately 480. This measure is well below the 1,000 threshold that the DOJ and FTC view as indicating a concentrated industry. HHI does not always tell the whole story, however, and with the top five egg producers representing nearly half the industry, the conditions are ripe for collusion.
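For readers unfamiliar with the measure, the HHI is simply the sum of squared market shares (expressed in percentage points). A minimal sketch, using hypothetical shares rather than the survey figures:

```python
# A back-of-the-envelope HHI calculation in the spirit of the estimate above.
# HHI is the sum of squared market shares (in percentage points). The shares
# below are hypothetical placeholders, not the Egg Industry survey figures.
shares = [14, 9, 6, 5] + [2] * 33  # sums to 100 percent across 37 firms
hhi = sum(s**2 for s in shares)
print(hhi)  # 470 -- in the same ballpark as the ~480 estimate in the text
```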

After all, if something suspect is occurring with prices, it would not be the first time for the egg industry. Back in 2023, a jury held Cal-Maine and other egg producers liable for participating in a price-fixing scheme running from 1999 to 2008, forcing the producers to pay $53 million in damages. Defendants in that case used a trade organization, United Egg Producers, to run a hub-and-spoke conspiracy to set egg prices among major egg-producing chains in America. Once companies get used to colluding among themselves, it is often a hard habit to break. In addition to lingering coordinated conduct, there could also be nefarious unilateral conduct: Cal-Maine was sued by the state of Texas for allegedly price gouging during the Covid-19 pandemic. This history is rarely mentioned in media coverage explaining egg prices, even when factors other than bird flu are cited. (Examples of other cited factors include fuel and feed costs, oftentimes without noting that both have decreased or remained stable in the past year.)

Behold the Bully Pulpit

On March 6, Capitol Forum broke a major story that the DOJ was actively investigating several egg producers, including Cal-Maine, for leading a potential price-fixing conspiracy. As Capitol Forum elaborated earlier this week, the investigation appears to center on Expana (formerly Urner Barry), which produces the egg industry’s primary pricing index. Farm Action found that almost all egg prices are based on Expana’s indices. Indeed, when Cal-Maine’s CEO asserts that the company has little control over prices and instead sets prices based on a “benchmark price for eggs,” he is likely referring to an Expana index. Benchmarking companies, such as Expana, have increasingly come under the spotlight for how they can facilitate collusion. For instance, the benchmarking firm Agri Stats allegedly facilitated collusion among poultry processing companies (the case settled for $169 million).

As one of us touched on in a piece last month, the pricing impact of this DOJ inquiry was potentially significant. The figure below shows how news of the investigation corresponded to a dramatic collapse in wholesale egg prices. On March 5, the average cost of a dozen large white eggs was $8.12. Just two weeks after the March 6 Capitol Forum story, on March 19, those same eggs cost $3.03—a 62.7 percent decrease.
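As a quick check on the arithmetic, the decline is the standard percent change:

\[
\frac{8.12 - 3.03}{8.12} \approx 0.627,
\]

that is, a 62.7 percent drop in two weeks.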

Source: USDA Weekly Combined Regional Shell Egg Report. Data from the Biden Administration available here. Note: Caged large white, Grade A eggs account for roughly half of total egg production in the United States.

While retail egg prices lag behind their wholesale benchmark, they too started to tick down shortly after the Capitol Forum report.

Source: BLS retrieved from FRED.

That there is an inflection at roughly the same time as the announced DOJ investigation does not, by itself, prove a causal impact on prices. One prominent confounding variable is that the threat of bird flu also diminished around the same time, with only 2.1 million birds affected in March compared to 23 million in January and about 13 million in February. This diminished threat undoubtedly weakened supply-side pressures on prices. According to the libertarian Cato Institute, which struggles to conceive of any problem being caused by bad actors, the diminished threat of bird flu may explain most or all of the price decline. Simultaneously, the United States also sought to increase its egg imports to push down prices.

Despite these other factors, it is striking how precisely news of the DOJ investigation coincides with the drop in wholesale prices. Hence, it is reasonable to infer that the initiation of this DOJ investigation may have altered the pricing behavior of major egg producers. After all, the first rule of any conspiracy is to stop conspiring while under the microscope of an investigation.

Indeed, the history of DOJ investigations contributing to price declines makes this potential causality more plausible. For instance, during the FDR administration, a massive increase in antitrust enforcement meant that the mere launching of an antitrust investigation by the DOJ corresponded with an 18 to 33 percent reduction in prices in the industry under investigation. Famously, JFK’s use of the bully pulpit also reversed massive steel price hikes in 1962, though we note that JFK’s approach is not without its critics.

Lessons for Enforcement

As one of us wrote last month, pursuing antitrust claims in court might by itself be insufficient to combat the degree of price gouging, common pricing algorithms, and surveillance pricing that we have witnessed recently. Furthermore, just as bad actors took advantage of the Covid-19 pandemic to artificially inflate their prices, the economic instability generally (and tariffs in particular) inflicted by the Trump Administration may provide similar cover for further price gouging. Yet the DOJ inquiry into egg prices demonstrates that there may be another way forward. Criminal activity thrives in the dark. Just as street lamps deter nighttime crime, proactive DOJ investigations can highlight, and therefore deter, anticompetitive activity. In our work on myriad price-fixing cases, we have often seen firsthand how scrutiny by authorities is the straw that breaks the cartel’s back.

This finding suggests more investigations are needed. Yet we need to deviate from our haphazard system in which issues like egg prices are investigated because of heavy news coverage while less politically flashy topics, like the explosion in the price of car insurance, are left by the wayside. According to Einer Elhauge of Harvard Law School, FDR had particular success in his antitrust crusade of the 1940s by making enforcement far more “systematic and focused.” The signal of potential anticompetitive activity that rapidly exploding prices send should be front of mind for our antitrust authorities.

To make enforcement more proactive, we reiterate the call for the DOJ and FTC to adopt formal rules outlining automatic investigation criteria in the wake of rapidly increasing prices. For instance, the DOJ could automatically investigate firms in industries where inflation exceeded some multiple (say two to three times) of the general CPI, particularly if an increase in gross profit margins accompanied this price inflation. (Note that Cal-Maine now earns margins of 70 to 145 percent on a dozen eggs.) Such a rule would not only likely catch more cartels in the act, but it would also deter companies from engaging in this behavior in the first place. Of course, the DOJ and FTC need sufficient funding (and staffing!) to rigorously enforce this proposed rule. We note, and strongly advise against, the Trump Administration’s consideration of the disastrous idea of shrinking the DOJ Antitrust Division, including possibly closing field offices focused on the agricultural sector.
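To illustrate how mechanical such a screen could be, here is a minimal sketch under assumed inputs. The file name, columns, and thresholds are all hypothetical placeholders, not a proposal for specific parameter values:

```python
import pandas as pd

# A minimal sketch of the proposed automatic-investigation screen, under
# assumed inputs: "industry_prices.csv" is a hypothetical file with columns
# industry, inflation (year-over-year price growth), and margin_change.
MULTIPLE = 2.5       # flag industries whose inflation exceeds 2.5x overall CPI
OVERALL_CPI = 0.03   # hypothetical 3 percent economy-wide inflation

df = pd.read_csv("industry_prices.csv")
flagged = df[(df["inflation"] > MULTIPLE * OVERALL_CPI) & (df["margin_change"] > 0)]
print(flagged["industry"].tolist())  # candidate industries for automatic review
```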

Whatever the details of a specific rule, this much is clear: It is time we expand our toolkit to tamp down inflationary pressures arising from novel forms of coordinated pricing. The historical evidence and our recent experience with egg prices demonstrate that automatic DOJ investigations may be one way to get more serious about tackling inflation.

Disclosure: Hal Singer served as an economic expert on behalf of plaintiffs in two cases concerning Agri Stats: Pork Antitrust Litigation, No. 0:18-cv-01776 (D. Minn.) and Broiler Chicken Growing Antitrust Litigation (No. II), No. 6:20-MD-02977-RJS-CMR (E.D. Okla. Aug. 19, 2021).

In May, Heatmap’s Robinson Meyer and Matthew Zeitlin wrote an article about House Republicans’ plan to weaken environmental review to accelerate the construction of new infrastructure. The subject line of the email promoting the piece read, “Permitting Reform Is Back, Baby,” a rather nonchalant way to describe the latest legislative plan to gut the National Environmental Policy Act (NEPA). The proposal, part of the GOP’s budget reconciliation package, seeks to allow developers to pay a fee in exchange for an expedited environmental assessment or impact study that would be exempt from judicial review. Other provisions in the budget reconciliation bill would enable oil and gas companies building pipelines and export terminals to pay for favorable national interest determinations from the Department of Energy and expedited permitting from the Federal Energy Regulatory Commission. 

On May 21, Meyer and Thomas Hochman of the Foundation for American Innovation—a right-wing mouthpiece for the “abundance agenda”—discussed in a webinar the legislation’s pay-to-play NEPA provisions. Yet both commentators failed to acknowledge the context in which debates and developments related to “permitting reform” are taking place. To properly understand what’s happening, one must consider how tech- and petro-capitalists are now invoking society’s “need” for data centers to rationalize an irrational increase in fossil energy production.

According to proponents of the so-called abundance agenda, regulations are a major obstacle to building all sorts of infrastructure, including socially beneficial goods like affordable housing, mass transit, and clean energy. Meanwhile, NEPA has long been villainized by the fossil fuel industry and its allies, who lament how environmental review processes can delay, and occasionally thwart, dirty energy production. Abundance advocates misleadingly cast NEPA as the main barrier to the growth of renewables, even though an interconnection backlog at regional power grids dominated by private, profit-maximizing utilities is a far greater problem. 

A shared disdain for NEPA goes a long way toward explaining why some conservative commentators have been so complimentary of nominally liberal abundance advocates. American Enterprise Institute senior fellow James Pethokoukis, for example, recently urged “pro-growth conservatives and supply-side liberals” (e.g., Abundance co-authors Ezra Klein and Derek Thompson) to team up. He sees, correctly, that the corporate-backed abundance agenda’s deregulatory impulse dovetails with many of the right’s (often corporate-backed) goals. 

The admiration is mutual, as evidenced by neoliberal Democrat and prominent abundance champion Matt Yglesias’s early praise for Interior Secretary Doug Burgum, who proceeded to derail offshore wind projects and embrace coal. What’s more, when Open Philanthropy, a Democratic-leaning “effective altruism” organization founded by Facebook billionaire Dustin Moskovitz, announced its $120 million Abundance and Growth Fund, it cited three Republicans—Burgum, Energy Secretary Chris Wright, and President Donald Trump—as positive embodiments of abundance-enhancing deregulation. This announcement, two months into Trump’s second term, ignored the Trump administration’s extreme actions to benefit oil, gas, and coal interests. 

During the Biden administration, the United States became the world’s largest producer of oil and exporter of liquefied methane gas. Despite this development, Trump has made clear that one of his main objectives is to further increase hydrocarbon production, expand liquefied methane gas exports, and revive the moribund coal sector. Echoing rhetoric used by Klein and Thompson, the Energy Secretary said in April that the Trump administration “will replace energy scarcity with energy abundance” by “prioritizing infrastructure development and cutting regulatory red tape.”

Yet abundant renewable energy does not appear to be a priority. A recent analysis found that more than $14 billion in clean energy projects have been canceled or delayed in the United States so far this year, with more investments in jeopardy due to the GOP’s proposed rollback of the Inflation Reduction Act.

But when it comes to fossil fuels, Trump officials are barreling full speed ahead—reversing regulations, further opening industry access to public lands, and criminalizing dissident activism. Ironically, Trump’s “drill, baby, drill” edict and incoherent tariffs have earned the ire of oil executives, who typically prefer to strategically limit supply to boost prices and profits. More importantly, accelerating the construction of even more fossil fuel infrastructure is completely at odds with the scientific and moral imperative to decarbonize society as quickly as possible.

Why would Trump, who received nearly $100 million from fossil fuel interests during the 2024 election cycle, encourage unlimited dirty energy production even though it could hurt the oil industry’s bottom line, and will surely exacerbate the deadly impacts of the climate crisis? One key factor to consider is the nascent surge in the construction of energy-hungry data centers, the infrastructural backbone of both artificial intelligence (AI) and cryptocurrency.

In short, the heavily subsidized AI boom, and the concomitant buildout of land-, water-, and electricity-intensive data centers, is creating the impression that the United States “needs” to significantly increase energy supply (clean and dirty alike) to satisfy an unprecedented surge in demand. This narrative persists even though the DeepSeek model developed by Chinese graduate students proved that even if one values AI highly, it does not necessarily require a massive increase in energy use. 

Hours after he was inaugurated, Trump declared a “national energy emergency,” implying in Abundance-like fashion that overregulation is creating energy “scarcity.” Three days later, Trump issued an executive order to remove “barriers to American AI innovation.” This order rescinded a Biden-era directive aimed at the “safe, secure, and trustworthy” development and use of AI. It bears noting, however, that Trump has built on Biden’s eleventh-hour executive order to fast-track the construction of AI data centers on federal land.

In addition to AI, cryptocurrency mining is also a major source of rising electricity demand, and Trump has gone to great lengths to boost that industry as well. He claims that digital assets will “unleash an explosion of economic growth.” For himself, maybe; the Trump family has already reaped billions through memecoin corruption.

Trump’s unbridling of AI, crypto, and dirty energy supply must be understood as a singular, inseparable process. In effect, Big Tech has thrown Big Oil & Gas a lifeline by fabricating speculative justifications for fossil fuel expansion. 

In April, during a House committee hearing on AI’s energy and transmission “needs,” former Google CEO Eric Schmidt claimed that “demand for our industry will go from 3% to 99% of total generation.” He told lawmakers that “we need the energy in all forms, renewable, non-renewable, whatever. It needs to be there, and it needs to be there quickly.”

And if it isn’t? The implicit message is that humanity will be deprived of ostensibly life-enhancing technological advancements. Yet since the 1960s, the United States managed to expand average life expectancy by ten years (from 69 to 79) without this technology. There is, evidently, no appreciation of the fact that if fossil fuel combustion isn’t curtailed, humanity will be deprived of life-sustaining ecological conditions.

The explicit warning is that if the United States doesn’t win the “AI race,” then China will, and that would be bad. Here’s Alexandr Wang, founder and CEO of Scale AI, during the same hearing: “If we fall behind the Chinese Communist Party, this technology will enable the CCP as well as other authoritarian regimes to utilize the technology to, over time, effectively take over the world.” But if the energy required to win the “AI race” ensures the degradation of life on earth, what would China be taking over? 

Remarkably, Scale AI’s CEO failed to apply the logic of his cautionary tale about authoritarian abuses of AI to Trump’s fascist government and its corporate allies. Sinophobia, now en vogue across much of the political spectrum, appears to have prevented greater recognition of the dangers of entrusting AI policy to Silicon Valley’s far-right billionaires, the members of Congress they’ve bought, and the Trump administration. 

For his part, Interior Secretary Burgum describes the stakes this way: “The U.S. is in an AI arms race with China. The only way we win is with more electricity.” Meanwhile, upon announcing a May 8 Senate committee hearing, Sen. Ted Cruz (R-TX) said that “the way to beat China in the AI race is to outrace them in innovation, not saddle AI developers with European-style regulations.”

In the wake of that hearing, the Koch-affiliated Abundance Institute reiterated its demands for federal lawmakers to preempt state-level regulation of the AI industry and accelerate energy permitting. (The GOP’s budget reconciliation bill would do both.) In so doing, the institute simultaneously confirmed two things about the abundance movement: (1) its anti-democratic nature; and (2) the centrality of expanding gas-powered data centers. The term “supply-side liberals” is an oxymoron. 

Notwithstanding oil producers’ complaints about Trump’s maximalist approach, other fossil fuel players who bankrolled Trump’s campaign, especially those in the fracked gas industry, are poised to capitalize on the AI- and crypto-fueled growth in energy-hungry data centers. For example, Energy Transfer—the company behind the Dakota Access Pipeline—has already received requests to supply 70 new data centers with methane gas, according to a recent investigation. That represents a 75 percent increase since Trump took office, a big return on Energy Transfer’s $5 million investment in Trump’s Make America Great Again Super PAC. Moreover, Trump recently signed executive orders to expand the use of coal, which he has characterized as a good option for off-grid backup power.

The rapid growth of data centers is deepening reliance on fossil fuels and jeopardizing our already-delinquent transition to renewables (not to mention stressing water supplies in drought-stricken areas and harming ecosystems). Existing energy injustices are being intensified, and we are likely to see a further increase in electric bills as utilities pass costs onto ratepayers. Ultimately, the data center boom threatens to make life more expensive for working people in general: AI-induced mass unemployment could suppress wages, and any increase in greenhouse gas pollution means more frequent and severe extreme weather, whose shocks devastate communities and disrupt supply chains.

While the left has long been adamant about the need to discipline (fossil) capital, self-described “supply-side liberals” have contended that streamlining environmental review would automatically lead to better outcomes because it would enable cheaper renewables to outcompete fossil fuels. Amid Trump’s coal, oil, and gas-friendly deregulatory blitz, however, it’s clearer than ever that if clean energy is to replace dirty energy, and not just complement it, we must take steps to eliminate polluter handouts and phase out fossil fuel production.

If that means Big Tech’s data centers can’t be built and powered as quickly as Big Tech and its abundance-aligned lobbyists would like, so be it. We must put our energy resources to good use, including the electrification of our built environment and transportation systems. We are not obligated to destroy our one livable planet just so that a few eugenicist tech billionaires can force-feed us alienating and dehumanizing AI garbage designed to further exploit us and enrich themselves and their shareholders.

Kenny Stancil is a senior researcher at the Revolving Door Project.

The Inflation Reduction Act’s failure to garner votes for Democrats has generated significant handwringing in political circles. Although targeted toward benefiting red states, the IRA failed to produce meaningful impact before the 2024 presidential election. Voters presumably care about one type of spending—the type that results in an immediate reward. Delayed gratification through subsidizing the conversion of coal-powered energy towards cleaner technologies, for example, cannot muster a political groundswell. Many policies can appeal to voters on different levels, but one surefire solution is to give the voter a good-paying job.

Matthew Zeitlin has weighed in on how the political theory undergirding the IRA broke down, and Brian Callaci noted that “centrist Democrats jettisoned the stuff that would have kicked in immediately, been visible to public.” In my opinion, the process of converting government spending authorized by the IRA into jobs takes too long. That’s because government agencies must contract with private entities pursuant to onerous rules (out of an aBuNdAnCe of caution). And upon securing their contracts, contractors must solicit job applicants, and finally hire. The government could cut out the middleman by employing the workforce directly, as it did in the highly successful New Deal programs: the Civilian Conservation Corps and the Works Progress Administration (WPA).

Per the Biden White House, in the first two years since passage of the IRA, clean energy projects created a meager 330,000 jobs. That’s hardly enough votes to swing an election. An analysis by the Political Economy Research Institute at the University of Massachusetts Amherst estimated that, before it was pared by the Trump administration, the IRA’s climate, energy, and environmental investments would create more than 9 million jobs over the next decade. The problem from a political perspective is that jobs created under the next administration don’t count for much, and perversely could benefit your political opponent.

It’s the Jobs, Stupid

In light of DOGE’s wicked winnowing of the federal workforce in the name of “efficiency,” the lodestar for the next Democratic campaign should be the immediate replacement of lost government jobs and the creation of new government jobs, not government spending. A spending program is just a clumsy vehicle for job creation. Aside from garnering votes, a jobs program would build human capital that workers could deploy in their future work in the private or public sector. Critically, a jobs program would shift the balance of power in the labor market toward workers, allowing them to capture a larger wage share.

The U.S. economy creates jobs, but not all jobs are equal. Many jobs do not offer a pathway toward career and income advancement. With apologies to Uber drivers, who suffer mightily under their employer’s flooding of the market with replacements, we would never dream of our children becoming independent ride-hailing operators. And the prospects for recent graduates in particular are dim. The chief economic opportunity officer at LinkedIn explained how artificial intelligence (AI) is threatening entry-level jobs. To wit, the unemployment rate for college grads has risen 30 percent since September 2022, compared to an 18 percent rise for all workers.

The solution to this labor market problem, which AI has materially worsened, lies in a massive federal jobs program. Such a program would provide entry-level positions with opportunities for continued development in the public sector or advancement in the private sector—that is, the very opposite of what Elon and the tech bros tried to achieve with DOGE. The notion of finding “inefficiency” among government jobs is at best a thinly veiled attempt at demonizing government workers and setting neighbor against neighbor. Government workers fresh out of college or nearing retirement might lack the skills (for different reasons) to seamlessly transition into a new position. Should they be tossed overboard?

As even The Economist admits, much if not most government spending on basic research will lead nowhere or never be commercialized; but that doesn’t mean the investment in supporting scientists in the interim was a waste of taxpayer funding. We kept a bunch of scientists gainfully employed along the way, fine-tuning their research skills. This by itself is a worthy investment. Government-funded research is an investment in the public welfare. Hence, Trump’s attacks on universities generally (and Harvard in particular) are an attack on the public welfare.

On a personal note, I was hired by the Securities and Exchange Commission (SEC) while finishing up my dissertation. At that entry-level job, I learned how to code in SAS and, along with a colleague at the SEC, I published my first paper. I took those skills with me into the private sector, advanced as a consultant, and even sold a firm to a publicly traded consultancy. Would Elon (who ironically has nursed from the teat of government for decades) have approved of that public investment in me? Who knows. Was it a waste of taxpayer money? Certainly not based on what I’ve paid in taxes over my lifetime or in the staff that I’ve been able to keep employed. That limited government investment has paid dividends many multiples over what I earned at my first SEC job.

The Benefits of a Jobs Program Would Be Significant

Public sector workers account for roughly 15 percent of all employment in the United States. By contrast, the comparable share is 30 percent in Norway. Citing work from the CBO, Gregory Acs of the Urban Institute explains that a WPA-style jobs program would create 6.5 million publicly funded jobs. He notes that the WPA was up and running in just four months, and only six months after its creation, the WPA employed about 2.7 million Americans. A 2018 paper by the Center on Budget and Policy Priorities called for the provision of universal job coverage for all adult Americans, including health insurance for all full-time workers in the program. Among the benefits of such a plan would be (1) the elimination of involuntary unemployment, (2) the establishment of a “de facto floor in the labor market, greatly increasing the bargaining position of workers throughout the economy,” and (3) increased employment, and therefore expenditures and tax revenues, for local and state governments.

Several studies have documented the benefits of public sector employment, in terms of its effects on both wages and overall employment.

An alternative to a federal job is a federal wage subsidy, such as the earned income tax credit, in which the government gives a tax break to workers whose incomes are below a certain threshold. A recent paper by Maxime Gravoueille (2025) finds that local labor markets in France more exposed to an increase in wage subsidies realized faster growth in hours worked and slower growth in average hourly wages. Unlike a federal job (or job offer), a wage subsidy cannot alter the bargaining position of a worker vis-à-vis her employer.

Another, weaker alternative to a federal job is a federal training program. Training displaces the worker’s income while she is being trained, and there is no guarantee of a job (let alone a superior job) at the end of the training. In 2019, the Council of Economic Advisers under the first Trump administration sought to evaluate the benefits of federal training programs. It concluded that the evidence is mixed, with the “positive effects of training in the [Department of Labor’s Workforce Innovation and Opportunity Act] Adult program … only found in smaller scale, non-random studies.”

Spread the Wealth from Federal Jobs

Federal jobs have been centralized in or near Washington, DC. That’s great for DC-area homeowners (like myself), but there is no reason to concentrate the jobs and associated benefits here. Better to spread the jobs across the country, so that each region can benefit from the federal jobs program. By maintaining a local presence, the federal government can engender a broader realization of what it can contribute and effectively rebut the mindless “starve the beast” echo chamber. Claiming that the federal government doesn’t understand local issues becomes far less convincing when one’s neighbors work for the Bureau of Land Management or the Census Bureau.

Imagine what would happen to wages if there were a massive new employer in every region of the country. Recent grads could be hired directly out of school, acquire on-the-job skills (e.g., programming) and experience, and then enjoy the option of staying in the federal job or transitioning to the private sector with a job in hand. Such optionality would profoundly shift the balance of power toward workers, as private-sector employers would be forced to share a larger portion of the worker’s marginal revenue product, driving up the labor share. In the absence of a government job guarantee, a worker’s best outside option is often welfare or Uber.

Pure self-interest motivates Elon and other tech bros’ desire to defund the government generally and federal jobs in particular. These large employers want a desperate workforce that they can exploit to “drive shareholder value.” Competing against the government for skilled programmers or scientists cuts into the tech bros’ profits. In response to massive spending cuts at research universities, The Economist reports that the number of applications for overseas jobs from American scientists in the first three months of 2025 increased by a third compared to the same period in 2024. The lack of outside options for these scientists, or the prohibitive transfer costs of taking an overseas position, means they would be more willing to take a wage cut at a private sector employer.

Not convinced? Remember the time before antitrust litigation forced the NCAA to loosen its collusive grip on athlete labor and implement the transfer portal. Unsurprisingly, very few athletes sought to evade the restriction by playing overseas directly out of high school. Doing so entailed significant costs for younger athletes, costs that time and family considerations only amplify for more experienced workers. The removal of that restraint has now allowed competition to flourish and labor to benefit. Of course, this exact sort of competition casts a pall of fear over the “shareholder value” crowd, aghast at the prospect of having to pay workers a fair wage. After all, just over a decade ago, the Silicon Valley tech giants settled the In Re High Tech no-poach litigation, which accused them of agreeing not to compete for each other’s workers.

In summary, a federal jobs program would generate enormous social, political and economic benefits. A federal worker is more likely to vote for the party responsible for creating her job. The Democrats’ notion of getting voters excited about clean energy was a pipe dream. Democrats can pursue policies that support the environment, but that issue isn’t sufficiently potent to drive votes. It’s time to shift messages from government spending to government jobs.