AskwhoCasts AI · 47m

OpenAI #16: A History and a Proposal

TL;DR

  • The New Yorker piece lands one core verdict: Sam Altman is not trustworthy — Zvi Mowshowitz says the 18,000-word Ronan Farrow/Andrew Marantz profile mostly confirms prior reporting, with fresh details like Altman running a post-firing “government in exile” from his $27 million San Francisco mansion while taking 12+ hours of calls a day.

  • The strongest throughline is pattern, not one smoking gun — from Ilya Sutskever’s 70 pages of disappearing-message evidence to accounts from Dario Amodei, Daniela Amodei, and former board members, Zvi argues the article shows repeated deception, contradiction, and what one source called an almost “sociopathic” indifference to consequences.

  • OpenAI’s safety commitments look performative rather than operational — the much-publicized Superalignment team was promised 20% of compute but reportedly got just 1–2% on older chips, then was dissolved, which Zvi calls a classic bait-and-switch used for retention rather than a real plan for existential safety.

  • OpenAI’s new policy proposal treats superintelligence like a jobs-and-redistribution issue — Zvi says the company’s “industrial policy for the intelligence age” is a PR document full of familiar asks like worker voice, AI education subsidies, public wealth funds, and grid expansion, while barely grappling with loss of control over smarter-than-human systems.

  • Altman’s public rhetoric on AI risk has flipped with business incentives — Zvi contrasts Altman’s earlier warnings about AGI dictatorship and extinction with his later “gentle singularity” framing, where the alignment problem gets softened into something more like Instagram addiction than “lights out for all of us.”

  • The TBPN acquisition is, in Zvi’s view, the death of real editorial independence — even with contract language saying OpenAI won’t influence programming, he says a media property reporting to chief lobbyist Chris Lehane is functionally “state media,” and the credibility damage is the whole story.

The Breakdown

Anthropic’s cyber bombshell gets one sentence, then it’s all OpenAI

Zvi opens by noting the actual biggest news of the day: Anthropic partnering with top cybersecurity firms to patch “thousands of zero-day exploits” allegedly found by Claude Mythos. But he parks that for later and pivots into three OpenAI stories: the New Yorker profile, OpenAI’s policy proposal, and OpenAI buying TBPN.

The New Yorker profile: not a trust question, a trust autopsy

His framing is blunt: the headline asks whether Sam Altman can be trusted, and the answer is “no.” Zvi says the 18,000-word piece is less about resolving that question than about litigating the long history of suspicious incidents around Altman, usually fairly, if sometimes with a bit of “make it look suspicious” journalistic energy.

The board coup, the exile war room, and the memos we’ll never see

On Altman’s firing, Zvi says the article matches prior reporting but adds vivid color: Altman flew back to his $27 million home, set up a “government in exile,” and broke up the war-room vibe with 6 p.m. Negronis while still taking calls for 12 hours a day. We also get confirmation that Ilya Sutskever compiled 70 pages of Slack evidence against Altman and Brockman, then disastrously sent it as disappearing messages because he feared Altman.

Dario Amodei, Daniela Amodei, and the Microsoft clause that broke trust

Zvi highlights one especially important anecdote in the split that led to Anthropic: Dario Amodei pushed to preserve the “merge and assist” clause, Altman agreed, and then a Microsoft veto provision over mergers appeared in the docs anyway. When Amodei read the text aloud, Altman allegedly denied it existed until another colleague confirmed it; later, in a separate confrontation, Daniela Amodei snapped back “You just said that” after Altman denied making an accusation moments earlier.

Safety as recruiting pitch, not governing principle

A major theme is that OpenAI’s early “we are the good guys” posture was central to recruiting and capital formation: people took pay cuts because they believed the nonprofit, anti-dictatorship, safety-first story. Zvi says the article shows that this posture became increasingly instrumental, culminating in Superalignment being announced with a 20% compute pledge worth potentially over $1 billion, only for staff to say the real number was closer to 1–2% on old hardware.

Altman’s vibe shift: from extinction warnings to “we’ll all get better stuff”

Zvi zeroes in on Altman’s changing rhetoric about existential risk. The old line was that AGI could kill everyone or create an AGI dictatorship; the new “gentle singularity” tone is buoyant and consumer-friendly, with alignment reframed less as species-level peril than as an annoyance in the same family as Instagram doomscrolling.

The policy proposal is all committee language and no confrontation with superintelligence

When he turns to OpenAI’s “industrial policy for the intelligence age,” Zvi’s verdict is savage: it reads like a third-tier politician’s milquetoast speech. He says the document mostly focuses on redistribution, worker benefits, education subsidies, public wealth funds, and market-shaping tools, while treating the possibility of losing control over smarter-than-human systems with a hand-wave like “people will be harmed” and vague containment playbooks.

TBPN gets bought, and Zvi says the game is already over

The final segment is about OpenAI acquiring TBPN, with contractual promises of editorial independence and Sam Altman saying he doesn’t expect softer coverage. Zvi doesn’t buy it for a second: if the media property reports to Chris Lehane, OpenAI’s chief lobbyist, then whatever the paperwork says, the outlet’s credibility is already shot — “entertaining and well-executed state media can be useful, but it is what it is.”