What is going on with AI?
TL;DR
AI infrastructure is historically massive, not just hype — Shapiro says the data center buildout is the second-largest inflation-adjusted mega-project as a share of GDP after the Marshall Plan, and unlike Apollo or the interstate highway system, this one is being funded privately.
Calling data centers a 'bubble' misses the asset underneath — he argues that data centers are capital assets with 50-plus-year lifespans and that GPUs depreciate but remain usable, resellable, and amortizable, so the economics look more like railroads or internet infrastructure than tulips.
The internet is his model for what happens next — even if some AI infrastructure investors fail, Shapiro says the buildout still leaves durable capacity behind, much like the dot-com era overbuilt the internet and society kept benefiting from that capacity from 2003 to 2012.
His real target is slow academic analysis, not just Cal Newport — Shapiro claims commentators like Newport and economists such as Daron Acemoglu rely on studies that are often 2-3 years behind, based on older models like GPT-4 or even GPT-3.5-era ChatGPT, while AI capabilities keep moving.
'95% of AI pilots fail' is not the gotcha critics think it is — his point is that most tech pilots fail by design, and treating that MIT-style finding as proof that AI is a 'nothing burger' shows a misunderstanding of how enterprise technology adoption actually works.
He trusts frontline anecdotes over lagging studies in fast-moving AI — citing his own workflow, Shapiro says he is routinely 10x more productive because he can run 10 parallel AI conversations, and he thinks academia is missing those power-user gains because students and faculty are often discouraged from using AI openly.
The Breakdown
The question behind the video: why does AI feel so contradictory?
Shapiro opens by saying his audience keeps asking the same thing: how can AI be both overhyped and world-changing at the same time? He frames the confusion around two buckets — the very concrete story about data centers and the more narrative-driven story shaped by public intellectuals like Cal Newport.
Data centers as a private mega-project, not a disposable fad
His big claim is that the AI data center buildout is enormous by historical standards: second only to the Marshall Plan as a share of GDP, and uniquely private compared with state-led efforts like the Manhattan Project, Apollo, or the interstate highway system. That scale makes people reach for the word “bubble,” but he says that’s too simplistic when the spending is creating durable infrastructure.
Why he thinks the bubble analogy breaks down
Shapiro compares AI infrastructure less to tulips and more to railroads and the internet. Railroads took 15 to 20 years to justify their costs, and the dot-com internet overbuild left behind useful infrastructure even after the companies that built it died; for him, the same logic applies to data centers and GPUs, which depreciate but do not suddenly become worthless after two years.
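To make the depreciation point concrete, here is a minimal straight-line amortization sketch in Python. Every number in it (purchase price, useful life, resale value) is a hypothetical assumption for illustration, not a figure from the video; the only point is that an asset can lose book value every year while still doing useful work and retaining resale value.

```python
# Back-of-the-envelope sketch of the depreciation argument.
# All numbers are hypothetical assumptions, not figures from the video.
purchase_price = 30_000      # assumed cost of one accelerator, in dollars
useful_life_years = 5        # assumed amortization window
resale_value = 5_000         # assumed residual / secondary-market value

# Straight-line amortization over the useful life, net of resale value.
annual_depreciation = (purchase_price - resale_value) / useful_life_years

for year in range(1, useful_life_years + 1):
    book_value = purchase_price - annual_depreciation * year
    print(f"Year {year}: book value ~ ${book_value:,.0f}")

# The card loses book value each year, but it keeps running workloads
# and retains resale value; that is the distinction between
# "depreciating" and "worthless after two years."
```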
The zoning fight is real, but it’s local, not existential
He briefly corrects a point from an earlier video, acknowledging that newer high-performance data centers can be noisier and produce more exhaust than the older facilities he worked around. But he insists that this is what zoning and permitting are for: setting decibel and environmental limits locally, not treating every complaint as evidence that AI infrastructure requires a state or federal moratorium.
The Cal Newport problem: sounding authoritative without industry context
From there the video shifts into a full-on critique of academic-style commentators. Shapiro says Newport writes in a polished, convincing register but lacks lived experience in the office and tech environments he critiques, because he went straight from being a student into academia.
Why academia keeps missing AI’s speed
Shapiro grants that studying systems from the outside can reveal blind spots, but says that method breaks down in AI because the field moves too fast. By the time a paper is funded, run, reviewed, and published, he says it’s often 2 to 3 years late, using cheaper or older models instead of current frontier systems.
His issue with the studies everyone keeps citing
This is where he gets animated: studies saying "95% of AI pilots fail" or "AI makes engineers slower" are, in his view, badly framed. His example is research that puts engineers in codebases they already know well, then hands them unfamiliar AI tools and treats the resulting slowdown as a verdict on AI itself, a setup he calls effectively misleading even when dressed up in academic rigor.
Anecdotes, power users, and the people the papers don’t measure
He closes on a personal note, arguing that he is regularly 10x more productive with AI because he can juggle 10 parallel conversations across research, learning, and building. That's why he leans on Bezos's line that when the anecdote disagrees with the data, you go with the anecdote, and says that YouTube, TikTok, and practitioners in the trenches can currently be more up to date on AI than MIT or Ivy League papers, especially when students and professors are still afraid to use the tools openly.