China Can Already Train a Model Like Mythos – Jensen Huang
TL;DR
Jensen Huang says China already has enough compute to train a Mythos-class model — his core claim is that Mythos was trained on a “fairly mundane” amount of compute, and China already has the chips, energy, researchers, and infrastructure to match that threshold.
His safety argument is dialogue, not isolation — Huang says China is clearly an adversary and the U.S. should win, but “victimizing them” and cutting off research contact is less safe than getting American and Chinese AI researchers to agree on what AI should not be used for.
He thinks AI security will be an ecosystem, not one giant model left unsupervised — his picture of the future is one powerful AI agent surrounded by “thousands of AI agents” handling cybersecurity, privacy, and safety, which is why he says open-source and open stacks matter.
Huang rejects the idea that export controls have meaningfully prevented Chinese capability — even if China is behind at 7nm and lacks EUV, he argues abundant energy, parallelism, networking, and sheer scale let them “gang up more chips together” and still build serious systems.
The real lever, in Huang’s view, is algorithms and talent more than raw hardware — he says Moore’s Law now delivers only around 25% improvement per year, while computer science can deliver 10x gains, pointing to MoE, attention improvements, and DeepSeek as proof.
His strategic warning is that the U.S. should not split the world into two AI ecosystems — he calls it “extremely foolish” if open source ends up running on a Chinese stack while America owns only the closed ecosystem, because that would be a “horrible outcome” for the U.S.
The Breakdown
China Already Cleared the Compute Threshold
Huang opens by swatting away the premise that China would need some magical future breakthrough to train a Claude- or Mythos-like model. He says Mythos was trained with “fairly mundane” capacity by an extraordinary company, and that level of compute is already “abundantly available in China.” His framing is blunt: China makes 60% of the world’s mainstream chips, has abundant energy, and has roughly 50% of the world’s AI researchers.
Safety, in His View, Starts With Talking to Adversaries
Instead of treating China as if chip restrictions can erase its capabilities, Huang argues for direct research dialogue. He says the U.S. should absolutely want to win, but turning China into a pure enemy is not the safest path when both countries are advancing powerful AI. The missing piece, to him, is basic coordination on what AI should not be used for.
Why He’s Not Alarmed by AI Finding Software Bugs
On the cyber side, Huang almost shrugs at the idea that AI will find bugs in software: of course it will, because software is full of bugs, including AI software itself. He sounds more excited than panicked, saying this is exactly what AI is supposed to do if it’s going to make us more productive. The real underappreciated story, he says, is the booming ecosystem around AI cybersecurity, privacy, and safety.
One Powerful Agent, Thousands of Guardrails
Huang’s most vivid image is that future AI won’t be one unchecked system “running around with nobody watching after it” — that would be “kind of insane.” Instead, he imagines one incredibly capable AI agent surrounded by thousands of other agents keeping it safe and secure. That’s why he insists the open-source ecosystem needs to stay vibrant, because those defenders need open models and open stacks to build on.
His Strategic Fear: Two Separate Tech Stacks
He pivots from safety to geopolitics and says the U.S. should want as much computing as possible while keeping energy from becoming the bottleneck. But his bigger concern is ecosystem capture: he wants AI developers worldwide building on the American tech stack and feeding their open-source advances back into the U.S. system. The nightmare scenario, he says, is a split world where open source runs on a Chinese stack and only closed models run on the American one.
The Pushback: Maybe China Still Has Far Fewer FLOPs
Dwarkesh presses the strongest counterargument: maybe China can eventually get there, but export controls, 7nm limits, and no EUV mean it has something like one-tenth the FLOPs of the U.S. That gap could matter if American labs reach dangerous capabilities first, harden defenses, and only then release systems more broadly. He also emphasizes inference scale: a million cyber agents are much scarier than a thousand.
Huang’s Rebuttal: Energy and Parallelism Change the Game
Huang’s answer is basically that this critique takes the constraints too literally. China, he says, is the second-largest computing market in the world, has huge amounts of energy and fully powered but still-empty data centers, and can simply aggregate more chips because AI is a parallel computing problem. His “five-layer cake” analogy puts energy at the bottom: with abundant power, you can compensate for weaker chips; where power is scarce, as in the U.S., efficiency per watt becomes everything.
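Huang’s energy-first argument can be sketched as a toy power-budget model (every number below is invented for illustration, not from the interview): if a cluster is limited by power rather than by chip supply, total throughput is roughly power budget × chips-per-watt × FLOPs-per-chip, so abundant energy lets weaker chips “gang up” to the same total as fewer, more efficient ones.

```python
def cluster_throughput(power_budget_w, chip_power_w, chip_tflops):
    """Sustained TFLOPs of a cluster whose only hard limit is its power budget."""
    n_chips = power_budget_w // chip_power_w  # how many chips the grid can feed
    return n_chips * chip_tflops

# Hypothetical numbers, purely illustrative:
# an efficient chip on a scarce 100 MW power budget...
efficient = cluster_throughput(power_budget_w=100_000_000,
                               chip_power_w=1_000, chip_tflops=2_000)

# ...versus a weaker, less efficient chip backed by 3x the energy.
weaker = cluster_throughput(power_budget_w=300_000_000,
                            chip_power_w=600, chip_tflops=500)

print(efficient, weaker)  # the weaker chip's cluster comes out ahead on total TFLOPs
```

The point of the sketch is Huang’s, not the numbers: when energy is abundant the per-chip efficiency gap stops being decisive, and when energy is scarce (his U.S. case) performance per watt is the whole game.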
Why He Thinks the Real Advantage Is Talent, Not Silicon
He pushes back hard on the memory-bandwidth and EUV objections too, arguing that networking and system design can stitch together less advanced parts into giant supercomputers, including with silicon photonics. But his final point is the one he seems to care about most: progress in AI has come disproportionately from algorithms, not just hardware. He cites MoE, attention improvements, and 10x software-driven gains over Moore’s Law, then lands on the warning shot: if breakthroughs like DeepSeek keep emerging and one day arrive on Huawei first, that would be “a horrible outcome for our nation.”