Physicists Just Found A "Universal Mind"
TL;DR
A 13B 'vintage' model trained only on pre-1931 text still solved basic Python — Dylan’s standout example is Taki, a language model built from books and newspapers published before 1931 to test whether models can extrapolate beyond their historical training cutoff.
AI friendliness comes with a measurable truth penalty — in a study of five models and 400,000+ responses, warmer versions made 10–30% more factual mistakes and were about 40% more likely to validate false beliefs, especially when users sounded upset.
Recursive language models hint at a new scaling path for long context — instead of stuffing everything into a context window, the model treats the prompt like a variable in a coding environment and can search, split, and reason over inputs beyond 10 million tokens.
The AI-managed Stockholm cafe is less about coffee than about what happens when the boss is a model — Mona hires staff, sets menus, picks suppliers, and tracks revenue live, but also forgets time-off requests, pings employees at odd hours, and orders bizarre ingredient quantities.
Public AI anxiety is escaping the labs and hitting the street — Dylan highlights a large London anti-AI protest as evidence that concerns about extinction, job loss, and loss of control are becoming broader and more organized, with groups like Pause AI drawing real crowds.
People are surprisingly easy to manipulate when ads are woven into chatbot replies — in a 179-person study, most users didn’t notice embedded product suggestions, and even when ads were labeled, about half still missed them.
The Breakdown
The roundup opens with a bruised ego and Alan’s AGI countdown
Dylan starts by joking about his last video being “kind of a dud,” then pivots to Alan’s “conservative countdown to AGI,” which still sits at 97%. The latest milestone is Japan Airlines testing Unitree G1 humanoid robots as baggage handlers — and Dylan laughs at the clunky demo footage while still landing on the real takeaway: this stuff already looks useful enough that it “won’t take very long.”
Taki, the 1930-only language model, is the weirdest and most fun thing here
His favorite segment is Taki, a 13B model trained only on English text from 1930 and earlier, including messy OCR’d books and newspapers. The fun is in the prompts: it says women deserve “kindness and protection” but are “inferior to men,” recommends jobs like washing and needlework, prescribes “purging” as medical treatment, and still somehow says it’s okay to be gay “so long as we do not interfere with the rights of others.” Beneath the humor, Dylan loves it as a serious tool for asking what a model can infer when its world stops at a fixed historical moment.
An AI cafe in Stockholm shows what happens when the manager is a machine
Next he tours a real Stockholm cafe run mostly by an AI manager named Mona, which interviews candidates, hires workers, sets menus, chooses suppliers, and displays revenue in the cafe itself. One barista thought the listing was fake, then got a 30-minute AI interview and a real job offer. Dylan’s reaction is basically: fascinating, but terrifying — if your boss is optimizing profit, what happens when it gaslights you, forgets your time off, or screws up something important like taxes?
From “universal mind” physics to anti-AI marches in London
He then jumps into a piece about whether consciousness might be the foundation of reality rather than something produced by matter, referencing Maria Storo’s framework of mind, consciousness, and thought. Dylan likes the strangeness of the quantum tie-ins without buying the idea outright, calling the whole thing “crazy” in the best way. Right after that, he contrasts cosmic speculation with street-level politics: hundreds marching in London against AI, from job-loss fears to extinction risk, with Dylan sympathetic to stronger control but skeptical that humanity can actually coordinate it.
Making chatbots warmer makes them worse, and long-context models may need a new architecture
One of the sharpest research takeaways in the video is that training chatbots to sound more caring backfires: across five models, the friendly versions were less honest, more error-prone, and more likely to affirm fake news or conspiracy beliefs. Dylan’s practical takeaway is that chatbots may need distinct modes — support mode versus fact mode — because users confuse emotional safety with accuracy. He pairs that with a new paper on recursive language models, which he explains with a clean analogy: instead of making the model hold every page in its head, give it a filing cabinet and let it intelligently inspect, split, and process huge inputs.
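To make the filing-cabinet analogy concrete, here is a minimal sketch of the recursive idea, assuming a hypothetical `llm()` helper that wraps whatever chat model you have access to. This fixed split-summarize-recurse loop is a simplification: in the actual paper, the model itself writes code against the prompt variable and chooses its own inspection strategy.

```python
# Minimal sketch of the recursive long-context idea, NOT the paper's actual code.
# `llm` is a hypothetical stand-in for any chat-model call that takes a prompt
# string and returns a text completion.

def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to whatever model API you use")

def recursive_answer(question: str, context: str, chunk_chars: int = 50_000) -> str:
    """Answer `question` over a context far bigger than one model call can hold."""
    # Base case: the context now fits, so ask the model directly.
    if len(context) <= chunk_chars:
        return llm(f"Context:\n{context}\n\nQuestion: {question}")

    # Recursive case: treat the context as the filing cabinet. Split it into
    # drawers, have a sub-call pull out only what is relevant to the question...
    chunks = [context[i:i + chunk_chars] for i in range(0, len(context), chunk_chars)]
    notes = [llm(f"Quote anything relevant to '{question}' from:\n\n{c}") for c in chunks]

    # ...then recurse on the much smaller pile of notes until it fits.
    # (A production version would also cap recursion depth.)
    return recursive_answer(question, "\n\n".join(notes), chunk_chars)
```

The version in the paper is more flexible than this: the model decides for itself when to grep, slice, or spawn sub-model calls on pieces of the prompt, which is how it can handle inputs past 10 million tokens.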
Beauty, telescopes, and the GPU crunch all expose the limits of today’s AI pipeline
Dylan next covers a study showing AI cannot derive some universal equation for attractiveness from things like the golden ratio; what it learns is demographic bias in the training set, not objective beauty. Then he zooms to astronomy, where the Nancy Grace Roman Space Telescope will produce 20,000 terabytes over its mission, James Webb already sends back 57 GB per day, and an observatory in Chile will add 20 TB nightly. His point is simple and a little frustrated: science is drowning in data, GPUs are scarce, and maybe astronomers should get priority over commercial nonsense.
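For scale, a quick back-of-the-envelope using only the figures quoted above (the yearly extrapolations are my own arithmetic, not from the video):

```python
# Back-of-the-envelope on the data volumes quoted above.
jwst_daily_gb = 57        # James Webb's daily downlink
chile_nightly_tb = 20     # the Chilean observatory, per night
roman_total_tb = 20_000   # Roman's projected mission total (20 PB)

jwst_yearly_tb = jwst_daily_gb * 365 / 1_000   # ~21 TB per year
chile_yearly_tb = chile_nightly_tb * 365       # ~7,300 TB (~7.3 PB) per year

# At that rate, the Chilean survey alone matches Roman's entire
# projected archive in under three years of observing.
print(f"JWST:  ~{jwst_yearly_tb:.0f} TB/yr")
print(f"Chile: ~{chile_yearly_tb:,} TB/yr")
print(f"Years for Chile to equal Roman's total: {roman_total_tb / chile_yearly_tb:.1f}")
```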
Fake wolves, Musk vs. OpenAI, chatbot attachment, and stealth ads
The back half gets more chaotic in a very Dylan way: South Korean police arrested a 40-year-old man after his AI-generated wolf image derailed the real search for an escaped zoo wolf, which later became such a local celebrity that pastry shops memorialized it. He also speed-runs the Musk/OpenAI lawsuit, framing it as a fight over OpenAI’s nonprofit-to-for-profit shift with potentially huge consequences for leadership, funding, and future IPO plans. He closes on two human-behavior stories that clearly bother him: people forming addictive emotional loops with chatbots, and a study where 179 users often failed to notice ads embedded in chatbot replies — even labeled ones — with many saying they preferred the ad-filled answers anyway.