It might seem paradoxical to think that a company that has fallen behind other firms on a major new innovation could at the same time be an anticompetitive leviathan that wields unshakeable monopoly power.
But such is the case for Apple, after it threw in the towel on its failed homegrown AI project, known as Apple Intelligence, in favor of a deal to license Google’s Gemini AI to run under the hood of Apple’s own services.
The failure of Apple Intelligence is a stain on the company, much more profound than some of its obscure and inconsequential failures like the “AirPower” wireless charging mat or a social network for iTunes called Ping. To be fair, other tech behemoths also have failed spectacularly, as evidenced by Meta’s slip into (and prompt retreat from) the Metaverse.
But Apple’s failure to keep pace with an epochal technology like AI is the company’s worst stumble since the Steve Jobs era; it has fallen breathtakingly far behind the state of the art. The struggle has been plainly on display for years in Siri, Apple’s once innovative but now embarrassingly obsolete assistant. This episode is best understood as a market failure. Apple built its notorious walled garden to avoid having to compete on products like Siri (or app sales, integrated cloud storage, messaging and video calling, and so on), so trapped consumers were forced to accept a substandard assistant even as the surrounding tech industry raced ahead. As a result, Apple was deaf to the warning chimes of competition, unaware of just how far behind it was.
Hey Siri, can you send a message?
In 2011, Siri launched as a native voice assistant users could talk to on the iPhone 4S, taking the world by storm. But in broken, monopolistic markets like the one for distributing apps and software on the iOS operating system, users are beholden to Apple and have no way to install any fully integrated AI-powered assistant other than Siri. In the absence of any competitive pressure on Apple to innovate its AI technology, Siri did not advance much in the 15 years after its initial release.
Siri’s underlying technology, known as “open agent architecture,” was originally pioneered in 1993 at SRI International and was developed into an independent app by 2008. The original Siri app caught the attention of Steve Jobs, who loved the vision of an assistant that could search the internet and autonomously handle tasks for users without them needing to open individual apps or browsers.
Unlike modern AI systems, Siri was rule-based. Instead of rebuilding its architecture from scratch, Apple simply patched the original, brittle code. Anyone who has struggled mightily to get Siri to understand plain English, whether intuiting from context that you said “and” rather than “in,” or doing something maddeningly simple like navigating somewhere with a coffee shop added along the route, knows Apple’s technology has hardly advanced over time.
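To see why rule-based systems are so brittle, consider a toy intent matcher. This is purely illustrative, not Siri’s actual “open agent architecture”: each hand-written template fires only on phrasings its author anticipated, and everything else falls through.

```python
import re

# Hypothetical toy rules: each intent fires only if the utterance
# matches a hand-written template.
RULES = [
    (re.compile(r"navigate to (?P<place>.+)", re.I), "navigate"),
    (re.compile(r"send a message to (?P<person>.+)", re.I), "message"),
]

def parse(utterance):
    """Match an utterance against the rule list, first hit wins."""
    for pattern, intent in RULES:
        m = pattern.match(utterance)
        if m:
            return intent, m.groupdict()
    # Any phrasing the rule authors didn't anticipate is simply not understood.
    return "unknown", {}
```

Here `parse("Navigate to Boston")` is recognized, but the equivalent request `parse("Take me to Boston")` returns `unknown`, because no rule anticipated that wording. A system like this improves only as fast as engineers write new rules.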
True to its penchant for closed ecosystems, Apple waited five whole years before opening Siri to developers through APIs that let it perform tasks with other apps.
Modern AI tools like Claude and ChatGPT, on the other hand, are built on an architecture called the Transformer, first introduced by Google researchers in 2017. The key innovation is a mechanism called self-attention: the model learns to weigh every word in a sentence against every other word, capturing the relationships between words to derive context and meaning. Saying “I saw a bat in the cave” means something very different from “I swung a bat at the game,” and the model figures out which you mean by examining the surrounding context dynamically.
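The core of self-attention can be sketched in a few lines of numpy. This is a minimal single-head version with made-up toy dimensions; real models stack many such layers with many heads, but the mechanism is the same: every token is scored against every other token, and those scores become weights for mixing information across the sentence.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: turns raw scores into weights summing to 1.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Every token's query is scored against every other token's key.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Each row of weights says how much that token attends to every other token.
    weights = softmax(scores, axis=-1)
    # Output: each token becomes a weighted blend of all tokens' values.
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                   # 5 tokens, 8-dim embeddings (toy sizes)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
```

The `weights` matrix is exactly the “weigh every word against every other word” step: row *i* tells you how much token *i* draws on each other token when computing its contextualized representation.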
But such models only become truly powerful when scaled up massively. They are trained on enormous text corpora, with hundreds of billions of parameters (numerical weights that encode knowledge), an effort spanning years. Training involves a deceptively simple objective: predict the next token (roughly, the next word or word-fragment). Do this billions of times across trillions of examples, with a big enough network and enough compute, and the strikingly capable behavior today’s AI is known for emerges.
The result is a general-purpose reasoning engine.
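The “predict the next token” objective is simple enough to demonstrate with the crudest possible model: counting which word follows which in a toy corpus. Real models replace these counts with a neural network trained over trillions of tokens, but the prediction task itself is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the web-scale text real models train on.
corpus = "i saw a bat in the cave . i swung a bat at the game .".split()

# Count which token follows which: a bigram next-token model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most frequently observed next token."""
    return follows[token].most_common(1)[0][0]
```

After counting, `predict_next("a")` returns `"bat"`, because “bat” follows “a” in both training sentences. The gap between this and a frontier model is scale and architecture, not the objective.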
Apple never innovated Siri beyond its original, ancient architecture. With no other firm able to sell users a competing service that would challenge Siri’s centrality given Apple’s walled garden, the company felt no competitive pressures to improve it. Apple’s infamously sticky ecosystem with high switching costs makes iOS users unlikely to switch to Android for more innovative features, even if they wanted to.
Getting lost in your own walled garden
But by 2023, as it became clear AI would be the biggest technological innovation since the internet, Apple had little choice but to change course. Internally fearing its products would become “dumb bricks,” it had to do something. The problem was leading AI firms like Anthropic and OpenAI had already invested many years and hundreds of billions of dollars developing their large language models. Long walled off from the innovative forces of competition, Apple was starting from scratch.
Realizing the company was now years behind, Apple aggressively shifted resources to AI—even canceling its highly anticipated Apple Car project to reassign engineers and capital.
Apple launched a prominent ad campaign in 2024 promoting the upcoming iPhone 16 around Apple Intelligence, touting a dramatic AI-driven revamp of Siri as the cornerstone of the iPhone’s appeal in the emerging AI economy: exactly the overhaul consumers had long been waiting for. When Apple Intelligence launched, however, it delivered only disappointing rewriting tools, notification summaries, and minor cosmetic tweaks to the same poorly performing Siri of old, with no major overhaul. Facing controversy and legal challenges alleging its marketing was misleading, Apple quietly pared down its website and wound down marketing of its AI offerings, having failed to deliver. The promised Siri overhaul was delayed, with Apple acknowledging development was taking longer than anticipated.
Apple AI chief John Giannandrea told employees as late as 2022 that he didn’t believe new AI tools like ChatGPT would have staying power, and senior executives reacted with little alarm to ChatGPT’s launch, a posture that would prove catastrophically costly.
Apple weighed building both small and large language models, dubbed “Mini Mouse” and “Mighty Mouse,” then scrapped both in favor of a single large cloud-based model, before shifting gears yet again; the indecision and organizational failures frustrated engineers and drove some to leave the company. Apple, quite simply, was not used to having to compete.
Internally, Apple’s AI/ML group was mockingly nicknamed “AIMLess” by its own employees, while workers referred to Siri as a “hot potato” constantly being passed between teams without meaningful improvement. The dysfunction ran straight to the top of the product: Siri lead Robby Walker spent his energy on “small wins” like reducing Siri response wait times, and devoted over two years to the project of removing the word “hey” from “Hey Siri,” while fundamental AI capability gaps went unaddressed.
Internal assessments concluded Apple Intelligence lagged behind OpenAI’s ChatGPT by roughly 25 percent in accuracy and that ChatGPT could answer approximately 30 percent more questions—and internal data showed Apple remains years behind its competition.
As a company whose business model is to wall its ecosystem off from competition, Apple also walled itself off from the competitive forces that punish bad products and let innovation flourish organically, forces that would have exposed Siri’s stagnation much earlier.
Documents surfaced in the Epic v. Apple antitrust litigation revealed that as early as 2013, Apple executives decided not to bring iMessage to Android because it would lower the barrier for users leaving the platform. Users who hate Apple’s “dumb brick” limitations and dearth of state-of-the-art AI cannot simply swap in Google Gemini or ChatGPT as a system-level default. Consumers and often their households, families, and friends are hostages to Apple’s ecosystem lock-in.
In a genuinely competitive market, Apple would have faced hard revenue consequences for its stagnation years earlier, and been compelled to invest in research and development to create innovative products instead of sitting back and becoming a toll booth that skims off App Store revenue. The Apple Intelligence disaster is the story of a decade of anticompetitive insulation that let Apple take its customers for granted, degrade its own products without losing business, and then face ugly, expensive choices only when the technological gaps became too disastrous and conspicuous.
A company operating under genuine competitive pressure doesn’t spend a decade doing nothing about an embarrassing AI assistant that can’t understand basic grammar, answer basic questions, or do anything even mildly complicated. Apple’s walled garden insulated it from innovative pressures just long enough for the gap between it and the state of the art to become a canyon.
Moreover, rather than owning a world-class AI capability built in-house, Apple now finds itself writing checks to Google to the tune of roughly $1 billion a year to license Google’s AI model, having failed to build its own. The irony, of course, writes itself: Google already pays an estimated $20 billion per year to Apple to be the default search engine on iPhones, a comical display of industry rent-seeking and monopoly maintenance.
For years, iPhone users have been locked into an assistant that couldn’t handle multi-step requests, routinely misheard commands, and lagged years behind its rivals, not because the engineering was impossible, but because the business model made tolerating a broken product a perfectly rational choice for Apple’s executives. The result is that consumers who paid a premium for Apple hardware were trapped with mediocre technology: first a running joke, then a broken promise, then a marketing scandal, and finally a product that outsourced its intelligence completely.
This is what a decade of an anticompetitive walled garden looks like when the bills finally come due.
Tom Blakely is a Boston-based attorney and federal judicial law clerk who served in the United States Department of Justice, the Massachusetts Office of the Senate Counsel, and practiced at an international law firm. He frequently writes about numerous legal topics. Tom serves on the board of Boston College Law School where he hosted the Just Law Podcast. Tom is an avid sports fan, enjoys the outdoors and splits his time between Cape Cod and Washington, DC.