AskwhoCasts AI · 22m

AI's biggest critic has lost the plot - By Kelsey Piper

TL;DR

  • Kelsey Piper says Ed Zitron’s AI-bubble thesis stopped keeping up with reality — in 2024, skepticism about whether AI would keep improving or ever be monetized made sense, but by 2026 costs had fallen dramatically, enterprise adoption had surged, and the old argument no longer matched the market.

  • The biggest broken prediction is capability and cost — Piper points out that GPT-4-level intelligence is now roughly 1/1000th the cost of launch-era GPT-4, while model progress from 2024 to 2026 has been faster than from 2022 to 2024.

  • Zitron’s case has shifted from economics to implied fraud — instead of arguing mainly that companies aren’t using AI, he now leans on claims that OpenAI and Anthropic may be lying about revenue, with OpenAI’s reported $2 billion per month becoming a central target.

  • Piper’s core complaint is that skepticism without model-specific analysis becomes useless — she contrasts Zitron’s sweeping “none of this is real” posture with more concrete work like Epoch AI’s analysis that GPT-5 serving may be profitable even if training costs still aren’t fully recouped.

  • The real bear case, in Piper’s telling, is competition and capital intensity, not that AI has no value — OpenAI and Anthropic may be in a race where model launches come too fast to fully earn back training costs, and one or both could still fail despite having real customers.

  • Her closing point is not ‘stop being skeptical’ but ‘be skeptical better’ — AI today clearly delivers economic value in coding, enterprise tools, transcription, and personal tasks, so criticism should focus on the weakest links, such as buildout assumptions, margins, and market size, rather than denying obvious usage.

The Breakdown

A debate invite, then straight into the target

The video opens with a plug for a live May 13 debate in San Francisco between Kelsey Piper and a skeptic over whether AI is actually changing science or just powering “a very expensive illusion.” Then Piper turns to Ed Zitron, one of the most prominent AI-bubble critics, saying she’s glad someone is making the case — she just wishes it were being made better.

The 2024 bubble case was reasonable at the time

Piper revisits the original dot-com-style framing: AI could be transformative and still leave many of today’s companies overvalued or dead, just like Pets.com during the internet boom. In 2024, that looked plausible because GPT-4 was exciting more for what it hinted at than for clear economic utility, and a lot depended on whether bigger models would keep getting better.

What Zitron predicted — and what actually happened

She zeroes in on Zitron’s repeated claim that generative AI had already peaked or was close, quoting his 2024 line that it “cannot do much more than it is currently doing.” Piper says that prediction aged badly: from 2024 to 2026, progress accelerated, costs collapsed, and GPT-4-level capability became about one-thousandth as expensive as it was at launch.

Adoption is no longer hypothetical

Piper says a 2026 bubble argument has to start from the fact that AI is already being used: about 30% of Fortune 500 companies have enterprise deals with a leading AI startup, and more than half of Americans use chatbots weekly or more. That means the old “businesses aren’t really using it” line no longer works, so the burden shifts to harder questions like whether current usage can justify the giant capital buildout.

From weak economics to FTX-style fraud claims

Her sharpest criticism is that Zitron has increasingly replaced revenue and usage analysis with insinuations that OpenAI and Anthropic are basically lying. She highlights his reaction to OpenAI’s reported $2 billion monthly revenue and huge user scale, where he analogizes the company to FTX and speculates — without proof — that free tokens may be counted as revenue.

What a stronger skeptical case would look like

Piper contrasts that with Epoch AI’s more grounded analysis of GPT-5 economics: OpenAI may have made money serving the model, but not enough to recoup training costs before moving on to the next one. That’s a real bear thesis, she says — not that AI has no value, but that brutal competition, expensive training runs, and rapid release cycles could make this a winner-take-all game or even a game where the loser goes bankrupt.
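The shape of that bear case can be sketched with simple arithmetic: serving a model at a positive margin still loses money overall if the next release arrives before the margin has paid back the training run. A minimal sketch, with all dollar figures hypothetical placeholders rather than numbers from the video or Epoch AI:

```python
# Illustrative "profitable serving, unrecouped training" arithmetic.
# Every figure below is a made-up placeholder, not data from the source.
training_cost = 1_000_000_000        # hypothetical one-time training spend
monthly_serving_profit = 50_000_000  # hypothetical inference revenue minus compute
months_before_superseded = 12        # hypothetical gap until the next model ships

# Profit earned from serving before the model is replaced
recouped = monthly_serving_profit * months_before_superseded
shortfall = training_cost - recouped

print(f"recouped {recouped / training_cost:.0%} of training cost; "
      f"shortfall ${shortfall / 1e6:.0f}M")
```

Under these placeholder numbers the company earns back only part of the training bill before it must spend again, which is exactly the competitive treadmill Piper describes: real revenue, real margins on serving, and still a path to bankruptcy if release cycles outpace payback.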

AI is doing real work, especially in coding

The video gets concrete when Piper lists tasks she actually gives Claude: reproducing academic papers, putting coding projects online, checking missing school library books, identifying a local soccer club, and making robot-themed handwriting exercises for a kindergartener. She says coding is the big one — with 1.9 million Americans in software roles in 2024 and median software-engineer pay around $130,000, even modest productivity gains matter economically.
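The economic claim here is back-of-envelope arithmetic: a large wage bill times even a small productivity gain is a big number. A quick sketch using the two figures quoted in the video (1.9 million workers, roughly $130,000 median pay) and a productivity-gain percentage that is purely an assumption of this example, not a claim from the source:

```python
# Back-of-envelope value of coding productivity gains.
# Worker count and median pay are the figures quoted in the video;
# the 5% productivity gain is a hypothetical placeholder.
workers = 1_900_000
median_pay = 130_000
wage_bill = workers * median_pay          # total annual wage bill

productivity_gain = 0.05                  # hypothetical, purely illustrative
implied_value = wage_bill * productivity_gain

print(f"annual wage bill: ${wage_bill / 1e9:.0f}B")
print(f"implied value of a {productivity_gain:.0%} gain: "
      f"${implied_value / 1e9:.2f}B per year")
```

Even at a few percent, the implied value runs into the billions of dollars per year, which is why Piper treats coding as the strongest single piece of evidence that the technology is doing real economic work.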

The closing shot: not less skepticism, better skepticism

Piper ends by saying the AI world absolutely deserves scrutiny because these companies oversell, contradict themselves, and are led by people she doesn’t consider especially trustworthy. But skepticism that denies obvious usage, obvious value, and every piece of financial data at once collapses into “perhaps nothing we see is real,” which she sees as less analysis than reflexive disbelief.