China just won the AI race and nobody noticed
TL;DR
The real US weakness isn’t talent — it’s the business model for open-source AI — Matthew Berman argues American labs can build great open models, but once they give away the weights, rivals can sell inference at better margins because they didn’t pay the R&D bill.
China’s edge comes from state-backed strategy, not magic — he says the CCP can subsidize companies and use a classic catch-up move: give away a very good model for cheap or free, crush margins, and make it hard for US firms to compete.
Most enterprises don’t need frontier intelligence, so cheap Chinese open models are dangerously attractive — if DeepSeek is good enough for “spreadsheets, coding, and making a schedule,” then pricey proprietary models from OpenAI or Anthropic lose on cost and control for 99% of business use cases.
Nvidia may be the only US company with a clean open-source AI business model — Berman points to Nvidia’s $26 billion open-source AI push and says it can afford to give models away because every cloud or startup serving them still buys Nvidia chips.
Building the US economy on Chinese open models could shift power over chips, standards, and even culture — his concern is not just inference revenue, but that China could shape optimization targets, hardware demand, and subtle behavioral defaults embedded in widely used models.
The biggest counterargument is ‘none of this matters if AGI arrives first’ — in Berman’s framing, Anthropic CEO Dario Amodei and his team are betting on a straight shot to AGI via a recursive coding flywheel, but Berman says that outcome is uncertain and the US still needs an open-source strategy now.
The Breakdown
The stark opening: there’s no middle ground
Berman opens with maximum urgency: either the US is “screwed” or it “wins everything” in AI. He frames the whole issue as bigger than model releases or Twitter discourse — roughly 40% of US stock-market value is concentrated in seven tech companies whose fate is tightly linked to AI going right.
Why open source matters — and why it breaks in America
He gives the clean definition: open-source AI means releasing the recipe and usually the weights, so anyone can download, fine-tune, and run it. The upside is obvious — better security hardening, more efficiency work from the community, more control for users — but the catch is brutal: the lab pays to train the model, then everyone else gets to monetize it with higher margins.
China’s playbook: subsidize, go cheap, kill margins
Berman says China’s success with open models like Qwen and DeepSeek comes from how the CCP picks winners and subsidizes strategically important companies. His point is simple and sharp: if you’re behind in a tech race, one of the best moves is to give away something really good for free, because you don’t need to be the best if you can make the leader’s economics collapse.
Why US companies may choose Chinese models anyway
This is where he brings it down to enterprise reality: most businesses are not doing frontier math or scientific discovery. They’re doing coding, spreadsheets, scheduling, and internal workflows — so if DeepSeek is almost as good at 99.9% of tasks for a fraction of the price, with local deployment and more control, Berman thinks the choice becomes obvious.
The shaky US lineup, and why Nvidia looks like the exception
He walks through the field almost like a scorecard: Meta was loudly pro-open-source with Llama, then backed off; OpenAI’s GPT-OSS exists, but feels like a side quest; Anthropic has “zero” open-source strategy; Google’s Gemma is good, but aimed more at local use than company-scale frontier systems. Nvidia, though, might be the “white knight,” because spending billions on open models still drives demand for the chips underneath everything.
Why Chinese open source is not harmless just because it’s open
Berman pushes back on the easy objection — that US firms can just self-host Chinese models and avoid risk. His concern is structural: if US enterprise standardizes on Chinese models, China gains leverage over AI standards, chip optimization, and possibly future hardware demand, especially as its models evolve around domestic chips in response to US export controls on Nvidia hardware.
The AGI counterargument — and why he doesn’t buy it as a complete answer
He gives the strongest opposing case fairly: Anthropic and Dario are aiming for a straight shot to AGI, where the first lab to hit recursive self-improvement effectively ends the game. He points to Anthropic’s coding flywheel and its reported $30 billion ARR, but says that’s still just one possible future, and the US could get strategically weakened long before any hard takeoff arrives.
His fixes: subsidies, procurement, vertical models, standards
Berman ends in policy-and-strategy mode. He suggests federal grants or compute quotas for open-source labs, treating open source as national infrastructure, getting AMD and Intel to copy Nvidia’s hardware-funded model, focusing startups on vertical models for legal, biotech, code, and defense, and creating standards so small companies aren’t rebuilding everything from scratch. The trigger for the whole video, he says, was DeepSeek releasing a strong new model that made all of this feel immediate.