AskwhoCasts AI · 37m

Political Violence Is Never Acceptable

TL;DR

  • The core message is absolute: political violence is never acceptable — the speaker opens by condemning the attempted Molotov attack on Sam Altman’s home and any threat of violence as immoral, illegal, and strategically disastrous, with “period, ever, no exceptions.”

  • This isn’t just an AI problem; it’s part of a broader rise in public tolerance for violence — he points to incidents involving Sam Altman, threats against Gary Marcus, and a wider pattern Santi Ruiz flagged across both Republicans and Democrats, with polls suggesting fewer people now clearly reject political violence.

  • He draws a hard line between truthful existential-risk rhetoric and inflammatory labeling — saying “if anyone builds superintelligence, everyone dies” can be legitimate cause-and-effect speech, but calling people “murderers,” “mass murderers,” or saying “the labs brought this on themselves” is exactly the kind of language he wants cut out.

  • Both sides are accused of bad rhetoric, but not equally of the same thing — he criticizes some AI doom-adjacent activists for edging too close to dangerous framing, while also blasting accelerationists, officials, and commentators who try to equate speaking plainly about extinction risk with incitement or “stochastic terrorism.”

  • The press gets one of the harshest rebukes — he calls the San Francisco Standard’s decision to publish Sam Altman’s home address and exterior photo after the attacks “nuts,” saying the reporter, editor, and paper should be ashamed.

  • Sam Altman gets sympathy for the attacks but scrutiny for his framing — the speaker says Altman’s personal post partly misses the mark by implying critical journalism may have made him less safe, while also noting the irony that Altman now says AI is too consequential for a few labs to decide alone even as OpenAI resists meaningful democratic control.

The Breakdown

Two attacks on Sam Altman force the issue

The video starts in full moral clarity: political violence is never acceptable, full stop. That comes in response to at least one confirmed attack on Sam Altman’s home — a 20-year-old allegedly threw a Molotov cocktail — and a second incident two days later that may or may not have been related, with two other suspects arrested for negligent discharge.

The scary part isn’t just the attack — it’s the audience reaction

The host zooms out fast: this isn’t only about AI, and it isn’t only about one CEO. He points to threats against Gary Marcus, rising political violence across the US, and especially chilling public reactions online, comparing Instagram comments cheering the attack on Altman to the reaction after the UnitedHealthcare CEO assassination.

Most AI x-risk voices passed the test, but a few are playing too close to the line

He goes out of his way to say most major “AI might kill everyone” voices have consistently condemned violence for years. But he also says “vast majority” is not “all,” and warns that some rhetoric — even if sincerely felt — brings “far more heat than light,” especially language like calling people murderers, saying the labs “brought this on themselves,” or implying violence is inevitable.

The rhetoric rule: say the danger plainly, but don’t moralize people into targets

This is the most nuanced part of the talk. He argues it is entirely legitimate to say things like “creating minds more powerful than humans is an existential threat,” “Mythos is a warning shot,” or even “if anyone builds it, everyone dies,” as long as that reflects your actual belief; what crosses the line is shifting from causal claims to personalized condemnation like “Sam Altman is a mass murderer.”

PauseAI, Discord spillover, and the problem of rhetorical wake-up calls

The speaker says the first suspect appears to have posted 34 times in the PauseAI Global Discord, while stressing that public servers are not responsible for every user’s actions. Still, if your slogans get echoed by someone who turns violent, he says that’s a wake-up call to reexamine your messaging, and he specifically criticizes some PauseAI and StopAI phrasing as needlessly inflammatory.

The censorship move gets its own full takedown

Then he turns on people using the attacks to argue that discussing extinction risk itself is dangerous. He quotes examples from critics, accelerationists, and even someone tied to the White House, arguing they’re trying to smuggle in censorship by claiming that saying “AI could kill everyone” is itself a form of incitement; his answer is basically: that’s not responsibility, that’s a trap.

‘Stochastic terrorism’ and the media pile-up

Borrowing a profane but memorable quote from Tennibrris, he says “stochastic terrorism” is often an unfalsifiable smear used to pin unrelated violence on speech people dislike. The angriest segment is reserved for the San Francisco Standard, which reported on the second incident while including Altman’s home address and an exterior photo — a move he calls unbelievably reckless.

Sympathy for Altman, but not a pass on his postmortem

He closes by saying Altman deserves sympathy and safety, but still critiques Altman’s suggestion that a New Yorker profile may have contributed to the danger, arguing the piece was fair and fact-based. From there he picks apart Altman’s broader AI reflections — especially the tension between saying superintelligence is too consequential for a few labs to control while still acting as if OpenAI should keep pushing ahead — before ending where he began: condemn violence, reject censorship, and keep telling the truth without turning people into targets.