We need to talk
TL;DR
David Shapiro treats the alleged Molotov attack on Sam Altman’s house as a serious warning sign, not an isolated internet freakout — he says the suspect appears linked to Pause AI / Stop AI circles and argues this may fit the pattern of “stochastic terrorism,” even while stressing that facts are still incomplete.
His core point is blunt: violence will not slow AI down — setting aside the legal and moral objections, he says attacks like this only make anti-AI critics look “crazy and unhinged,” which hands the pro-AI side an easy thought-stopping label.
He draws a sharp distinction between illegal violence and legitimate resistance — union action, internal passive resistance, and workers or Gen Z employees slowing AI rollouts are not the same thing as threats, arson, or calls to “firebomb data centers.”
Shapiro thinks today’s fear is just the beginning because the biggest shocks haven’t even hit yet — mass AI layoffs haven’t arrived, generative AI is only barely entering military use, and today’s systems haven’t yet revealed the full impact of the capabilities they already have.
His accelerationist view is pragmatic, not triumphant — he argues AI progress is the default because of U.S.–China competition and market incentives, so the more realistic response is adaptation rather than believing the whole system can be stopped.
He frames AI as a general-purpose technology like electricity, with consequences nobody can fully map ahead of time — just as electricity took years to yield light bulbs, motors, radios, and chips, AI will keep spawning new uses, which makes fear understandable but also makes blanket resistance less viable.
The Breakdown
The Sam Altman incident forces a more sober conversation
Shapiro opens carefully, emphasizing the gravity of reports that someone was arrested and charged with attempted murder after allegedly throwing a Molotov cocktail at Sam Altman’s house. He avoids overclaiming on details, but says the incident appears connected to Pause AI / Stop AI circles and argues it’s time to talk seriously about the anger and fear building around AI.
Why he uses the phrase “stochastic terrorism”
He says he’s been a critic of effective altruism and LessWrong-style thinking, but this video is not about settling scores with individuals. The issue, for him, is that some people in anti-AI spaces have openly said things like “firebomb data centers” or “go to jail to stop AI,” and he argues that rhetoric like that can become a real-world accelerant.
Violence won’t stop AI — it will probably speed the backlash
Shapiro’s practical argument is that violence achieves “absolutely nothing” if the goal is slowing AI. In his telling, it backfires by letting critics of AI get dismissed not just as doomers but as extremists, which makes serious public concern easier to wave away.
Not all resistance is the same, and he wants that distinction preserved
He’s very deliberate here: anti-AI doomers advocating illegal acts are not the same as workers, union members, or younger employees resisting AI rollouts through lawful means. He mentions passive resistance, workplace slowdowns (“sabotage” in the labor-organizing sense of dragging your feet, not destroying property), and union tools, while drawing a hard line around property destruction, threats, and death threats.
We haven’t even seen the real social shock yet
Part of why he made the video now is that he thinks commentators are converging on a darker reality: the anger is already intense before the biggest disruptions arrive. He points out that large-scale AI layoffs haven’t really happened yet, military integration is still early, and current models haven’t exhausted what they can do in today’s form.
From silly protests to very real fear
He recalls the small protests outside OpenAI and Anthropic — people dressed like wizards, meme signs like “you wouldn’t download the Torment Nexus,” and a generally unserious vibe. But he says it would be a mistake to stop at mockery, because behind the cosplay are artists, copywriters, freelancers, and other workers who are genuinely watching work dry up.
His optimism includes admitting the mess
Shapiro says he’s often accused of wearing rose-colored glasses because he believes problems are solvable, but he insists optimism without realism is irresponsible. He ties this to his broader “post-labor economics” vision: getting to a world where livelihood, dignity, and civic leverage are no longer chained to jobs, while admitting that shrugging and saying “the only way out is through” feels like a copout.
AI is electricity all over again — and adaptation matters more than fantasy control
His closing framework is that generative AI is a general-purpose technology, like electricity: first the obvious uses, then years of second- and third-order applications nobody foresaw. Because incentives from U.S.–China rivalry and market competition make acceleration the default, he says the honest path is not pretending AI can simply be halted, but finding better ways to adapt, coordinate, and face the fear without turning to violence.