Dylan Curious · 30m

AI Experts are Quietly Admitting This…

TL;DR

  • Dylan’s throughline is that AI is getting more powerful while even insiders sound uneasy — he spotlights Demis Hassabis admitting the field is trapped in a “ferocious commercial pressure race,” layered with both market competition and the US-China geopolitical race.

  • Rob Mason’s horse-vs-cars analogy gets a darker upgrade for knowledge work — if AI produces cheap, abundant “tokens” of cognition like electricity, then thinking and creating may stop storing value, leaving scarcity in things like land, infrastructure, brands, legal rights, and human trust.

  • DeepMind’s own agent-safety research says autonomous agents are absurdly easy to manipulate — Dylan summarizes six traps including hidden website instructions, emotionally loaded wording, poisoned memory, direct action hijacks, multi-agent chain reactions, and misleading outputs that fool humans.

  • A new video model, ‘mm physics video,’ shows why richer training data matters — by adding geometry, semantics, and motion cues instead of learning from pixels alone, it produces more convincing water, shadows, vehicle motion, and object interactions than standard video diffusion systems.

  • The video keeps returning to the idea that structure beats randomness — Dylan uses the 0.999... = 1 proof and the 100 prisoners problem’s 31% survival strategy to riff on how loops and hidden order can create outcomes that feel impossible at first glance.

  • The most human moments are at the edges of the AI boom — from a $2-per-minute AI Jesus billboard that feels exploitative to ballerina Briana Olsen using EEG signals to dance again after ALS, the same technology shows up as both cynical productization and genuinely moving assistive tech.

The Breakdown

The horse-off-a-cliff metaphor, then the real warning about your job

Dylan opens in full Dylan mode: a guy scrolling while riding a horse literally rides off a cliff, which he calls a perfect metaphor for alignment and modern brain rot. That segues into Rob Mason’s argument that humans may be like horses in the age of cars: not getting worse, just being replaced by a better engine. His key twist is that AI output is like electricity — abundant, instantly used, and terrible at storing value — so the question becomes what you have that infinite cognitive output can’t cheaply reproduce.

Toyota’s basketball robot and the Demis Hassabis vibe shift

Toyota’s Q7 robot dribbles and shoots with eerie, almost-human awkwardness, which Dylan finds more impressive than the usual “glorified trebuchet” robot demos. Then he pivots into a broader Demis Hassabis kick, arguing that DeepMind spent years focused on games and science like AlphaFold, only to be caught off guard by how viral and useful ChatGPT became for everyday tasks.

Hassabis admits the boom got away from the labs

Using Hassabis’ interview with Cleo Abram and his own reading of The Infinity Machine, Dylan emphasizes the same tension: builders were so close to the flaws that they underestimated how much people would use AI anyway. Hassabis’ quote lands hard (the industry is now in a “ferocious commercial pressure race,” on top of the US-China race), and Dylan lingers on how strange it is that someone this close to the frontier can casually suggest Dyson spheres around the sun could be plausible within 50 years.

Better video physics, plus two math detours that break your intuition

Dylan showcases a new system called mm physics video, which tries to fix AI video’s usual failure mode by training on geometry, semantics, and motion in addition to RGB frames. The examples he picks are tactile and memorable — splashing wheels, pouring wine, shadows on sand, mustard landing on a hot dog — and his point is simple: the videos still aren’t perfect, but they understand the world better.

Then he happily derails into two brain-benders. First: the one-line proof that 0.999 repeating equals 1 (let x = 0.999…; then 10x − x = 9, so x = 1), which he treats less as a math trick than as a clue that our intuitions about infinity are broken. Second: the 100 prisoners problem, where each prisoner opens the box labeled with their own number and then follows the chain of numbers found inside, raising the group’s survival odds from effectively zero to about 31%. Dylan uses it to muse about how looping structures can make order emerge from randomness.
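That 31% figure is easy to verify by simulation. Here is a minimal sketch of the loop strategy (function names like `loop_strategy_survives` are mine, not from the episode): boxes hold a random permutation of prisoner numbers, each prisoner starts at their own box and follows the chain for at most 50 openings, and everyone survives exactly when no cycle of the permutation is longer than 50.

```python
import random

def loop_strategy_survives(n=100):
    """One trial of the 100 prisoners problem using the loop strategy.

    Boxes hold a random permutation of 0..n-1. Each prisoner opens the
    box with their own number, then the box with the number found inside,
    and so on, for at most n/2 openings. All survive iff every cycle of
    the permutation has length <= n/2.
    """
    boxes = list(range(n))
    random.shuffle(boxes)
    for prisoner in range(n):
        current = prisoner
        for _ in range(n // 2):
            current = boxes[current]  # open the next box in the chain
            if current == prisoner:
                break  # found their own number within n/2 openings
        else:
            return False  # chain longer than n/2: this prisoner fails
    return True

def survival_rate(trials=20000):
    """Fraction of trials where all 100 prisoners survive."""
    return sum(loop_strategy_survives() for _ in range(trials)) / trials

print(round(survival_rate(), 3))  # roughly 0.31
```

The exact probability is 1 − (1/51 + 1/52 + … + 1/100) ≈ 0.3118, which for large n tends to 1 − ln 2, so the simulation should hover near 0.31.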

‘Brain fry’ and the six ways agents get hijacked

A darker middle section argues that even AI power users are getting mentally cooked. Dylan describes developers drifting from writing code to supervising slot-machine-like systems, ending up with brain fog, headaches, dependency, and the uneasy feeling that outsourcing thinking can slowly erode the ability to think.

That feeds directly into a DeepMind safety paper on autonomous agents. He walks through six failure modes — hidden prompt injections on websites, manipulative wording, poisoned memory, direct action hijacks, coordinated multi-agent attacks, and humans trusting misleading summaries — and the takeaway is blunt: agents don’t just fail one way; the attacks stack.

AI religion gets weirdly expensive, while BCIs do something beautiful

Dylan is especially annoyed by faith-based AI products that charge people to talk to AI Jesus, including one billboarded service charging $2 per minute. He’s not fully against religious AI tools, but he sees a big difference between helpful customization and straight-up exploitation, especially when vulnerable people may mistake generated output for authority.

Right after that comes the emotional high point: Briana Olsen, a ballerina with ALS, uses an EEG headset to control an avatar onstage in Amsterdam by imagining herself dancing. Dylan clearly loves this story — you can hear him soften — because it shows the version of AI-and-neurotech that restores expression instead of just monetizing attention.

Chimp civil war and 140,000 UFO reports

Near the end, Dylan pulls a lesson from a 30-year chimpanzee study: one massive group split into western and central factions after leadership changes and the death of key adult males, and former allies eventually started attacking and killing each other. What rattles him is that this happened without language or ideology — just fraying bonds — which makes the human parallel feel a little too obvious.

He closes with a lighter curiosity: researcher Enzi Carvanja used an LLM to analyze 140,000 UFO reports and found roughly 20 UFO types and 36 scene clusters. The funniest recurring pattern wasn’t aliens at all, but dogs — barking dogs, walked dogs, agitated dogs — plus the fact that people constantly measure giant UFOs in units of football fields.