Open source is dead now?
TL;DR
Cal.com closed its core codebase over AI-driven security fears — Theo says the team now believes source visibility has become “exposure,” even as they launch a smaller MIT-licensed Cal DIY project for hobbyists.
Theo’s core argument is that AI collapsed the domain-knowledge barrier for exploits — where attackers once needed to be strong at both security and a stack like full-stack TypeScript, models now supply the codebase understanding, leaving humans to contribute only modest security knowledge.
Project Glass Wing and Anthropic’s Mythos are the scary proof points — Theo highlights Mythos finding a 27-year-old OpenBSD vulnerability and cites a 32-step network-attack benchmark where Mythos was the first model to fully complete the takeover in 3 of 10 runs.
Closing source may only buy a little time, not real safety — Theo argues models are already getting better at deobfuscation and reverse engineering, so hiding code only temporarily raises the attacker difficulty from roughly “1/10” back to maybe “4/10.”
The new security model looks like proof-of-work economics — he leans on Simon Willison's and Drew's framing that defenders now have to spend more tokens hardening software than attackers spend trying to break it, with Anthropic's 100-million-token runs costing about $12,500 each.
Theo still lands on ‘fight for open source,’ but with more hardening and funding — he says shared projects can pool defensive effort better than isolated closed implementations, while warning that maintainers who dismiss AI-generated reports as “CVE slop” are leaving doors open for real attackers.
The Breakdown
The Cal.com announcement that genuinely rattled him
Theo opens from a personal place: he’s become more pro-open-source lately, and Cal.com flipping from one of the best open full-stack TypeScript examples into closed source feels like a gut punch. He knows the team, had been talking to them behind the scenes, and admits he even hoped his recent pro-open-source videos might pressure them not to do this — “and I have failed.”
Why Cal says AI changed the rules
He reads Cal’s statement in full: open source helped build the company, but AI now makes code “scanned, mapped, and exploited at near zero cost,” so transparency becomes exposure. They’re not abandoning builders entirely — they’re releasing Cal DIY under MIT — but the core business code is now closed in the name of customer protection.
The two-skill model of hacking just broke
Theo’s big explanatory frame is simple: meaningful exploitation used to require both deep security knowledge and deep domain knowledge of a specific codebase or stack. He asks his chat to rate their security-research skill from 0 to 10, gets mostly numbers below 3, and uses that moment to show the old bottleneck: TypeScript people weren’t usually elite security people, and vice versa.
AI is now the domain expert for you
That’s the scary change: models don’t need to be world-class hackers if they already understand the codebase better than the attacker does. Theo says where you used to need to be “a seven out of 10 on both sides,” now you can be near zero on domain knowledge and maybe a four on security, then brute-force the rest with agents and token budget.
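Theo's frame can be sketched as a toy gating model. This is my own illustration, not math from the video; the thresholds are lifted from his informal 0-to-10 numbers ("seven out of 10 on both sides" before, roughly a four in security skill after):

```python
# Toy model (illustrative only): an exploit attempt is viable when the
# attacker's weakest relevant skill clears a threshold, on Theo's 0-10 scale.

PRE_AI_THRESHOLD = 7  # Theo's "seven out of 10 on both sides"

def can_exploit(security_skill: float, domain_skill: float) -> bool:
    """Pre-AI framing: the weaker of the two skills gates the attack."""
    return min(security_skill, domain_skill) >= PRE_AI_THRESHOLD

def can_exploit_with_ai(security_skill: float, domain_skill: float,
                        ai_domain_skill: float = 9.0) -> bool:
    """Post-AI framing: the model supplies the codebase understanding,
    so only the human's (much lower) security bar still matters."""
    effective_domain = max(domain_skill, ai_domain_skill)
    return min(security_skill, effective_domain) >= 4  # Theo's "maybe a four"

# A strong TypeScript dev with weak security skills:
print(can_exploit(security_skill=3, domain_skill=8))          # False: blocked
print(can_exploit_with_ai(security_skill=4, domain_skill=0))  # True: AI fills the gap
```

The point of the `min()` is the old bottleneck: being elite on one axis never compensated for the other, which is exactly the barrier Theo says AI removed.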
Mythos, OpenBSD, and the brute-force-every-file strategy
To show this isn’t hypothetical, he points to Anthropic’s Mythos preview finding a 27-year-old OpenBSD bug — which he calls terrifying precisely because OpenBSD is so carefully maintained. He describes Anthropic’s method as almost insultingly straightforward: start an agent from every file in the source tree and have it trace outward for CVEs and exploit paths; not “trained to hack,” just a coding model applied at absurd scale.
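The "start from every file" strategy Theo describes can be sketched in a few lines. The agent interface here is entirely hypothetical — Anthropic has not published Mythos internals — but it shows why the approach is "insultingly straightforward": it is just a loop over the source tree with a big token budget behind it.

```python
# Sketch of the scan strategy as Theo describes it (hypothetical interface;
# not Anthropic's actual Mythos code): launch one analysis agent per source
# file and let each trace outward looking for exploitable paths.
from pathlib import Path
from typing import Callable

def scan_repo(repo_root: str,
              launch_agent: Callable[[Path], list[dict]]) -> list[dict]:
    """Run `launch_agent` once per file and pool the reported findings.

    `launch_agent` is a stand-in for the expensive part: an agent that
    reads one file, follows its call paths, and returns finding dicts.
    """
    findings: list[dict] = []
    for path in sorted(Path(repo_root).rglob("*")):
        if path.is_file():
            findings.extend(launch_agent(path))
    return findings

# Usage with a dummy agent that flags nothing:
results = scan_repo(".", launch_agent=lambda p: [])
```

Nothing here is "trained to hack" — the sophistication lives inside the agent call, and the outer strategy is brute-force coverage of every entry point.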
Why closing source only buys time
Theo’s first real objection to Cal’s move is that it’s temporary. Closed source makes decompilation and reverse engineering harder than reading raw source, sure, but only for now; he thinks the advantage shrinks as models improve, especially since frontend clients, endpoints, and infinite agent time still leak plenty to work with.
The new security economy is tokens versus tokens
From there he brings in Tanner Linsley, Peter from OpenCode/OpenAI, and especially Simon Willison summarizing Drew's argument: cybersecurity now looks like proof of work. Theo walks through the economics using Anthropic's benchmark — 100 million tokens per run, about $12,500 per Mythos attempt, with no clear diminishing returns — and says defenders increasingly win not by being clever, but by outspending attackers on hardening.
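The back-of-envelope math behind that framing, using only the figures quoted above (the derived per-million-token price and the ten-attempt defender scenario are my extrapolations, not numbers from the video):

```python
# Figures Theo quotes from Anthropic's benchmark:
TOKENS_PER_RUN = 100_000_000   # ~100M tokens per Mythos attempt
COST_PER_RUN = 12_500.0        # ~$12,500 per attempt

# Implied blended price (derived, not stated in the video):
price_per_million = COST_PER_RUN / (TOKENS_PER_RUN / 1_000_000)
print(f"${price_per_million:.2f} per million tokens")  # $125.00

# Proof-of-work framing: the defender "wins" by outspending attackers on
# hardening. E.g., to match ten attack attempts (illustrative scenario):
attack_attempts = 10
defender_floor = attack_attempts * COST_PER_RUN
print(f"Defender hardening floor: ${defender_floor:,.0f}")  # $125,000
```

The asymmetry Theo emphasizes is that an attacker needs one successful run, while a defender has to match the whole attack budget — which is also his argument for pooling that spend across everyone sharing an open dependency.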
Open source is still worth defending — but maintainers have to adapt
He ends in a conflicted but firm place: he understands why Cal is scared, yet still thinks open source is the better long-term path because defensive spending can pool across companies using the same dependency. His warning is for maintainers too: if projects like FFmpeg wave away AI-assisted reports as “CVE slop,” attackers will exploit that gap, and the people trying to help harden the commons will stop being heard.