Mo Bitar · 9m

Microsoft accidentally told the truth about AI

TL;DR

  • The software job market is rewarding AI leverage over craftsmanship — Mo opens with a developer named Garrett who lost a role after nine interview rounds because another candidate allegedly used “100% AI” and shipped 30 more features, even as software engineering postings are up 11%.

  • Microsoft’s pricing shift exposes the real cost of ‘cheap’ AI coding help — he says Copilot is moving to token-based billing after weekly operating costs nearly doubled since January, turning the old “$20 a month for a half-senior engineer” pitch into a bill for every token, including the model’s own rambling chain-of-thought-like detours.

  • Microsoft Research published the part everyone tries not to say out loud — citing a new paper, “LLMs corrupt your documents when you delegate,” Mo highlights that frontier models corrupted 25% of document content across 52 domains in long workflows, and tool use made results 6% worse.

  • The video’s real argument isn’t just that AI is flawed — it’s that it strips away the human value of making things — Mo contrasts a child’s crude drawing, which moves him because it contains four years of growth and story, with a beautiful AI-generated artwork that leaves him cold because “there’s no story.”

  • The parenting dilemma around AI is framed as a values question, not a productivity question — in the Reddit story about a 9-year-old using Gemini for advice, fanfiction, and self-improvement, Mo sides with the dad who intervened because he sees struggle, learning, and voice as more important than early delegation skills.

  • His bottom line is that AI may be commercially useful, but as a learning tool it can hollow out the point of the work — for hobby projects especially, he argues AI often removes both the enjoyment of the process and the chance to build skill, leaving kids and adults better at outsourcing than understanding.

The Breakdown

A bloodshot eye becomes a metaphor for the 2026 dev market

Mo starts with a brutal YouTube anecdote: a laid-off developer with a red eye says the stress of AI-driven job hunting literally burst a blood vessel. The punchline is bleak — after nine rounds, he loses to a candidate who supposedly used “100% AI” and delivered 30 more features, while he used “50% AI and 50% manual” and got nothing but heartbreak.

The market’s message: more, faster, cheaper

From there, Mo zooms out: software engineering postings are up 11%, so the jobs exist, but the criteria have changed. His point is that the market no longer rewards clean code or maintainability the way it did in 2019; it rewards throughput, and the people willing to “face the slop” are winning.

Microsoft’s token billing leak ruins the fantasy of unlimited AI

He says the last two years sold AI coding tools as the best bargain in software: “20 bucks a month” for something like a half-brilliant senior engineer. But now, with Microsoft reportedly moving Copilot to token-based billing after weekly costs nearly doubled since January, “unlimited” turns out to mean “until we measure demand and charge you properly.”

You’re paying for the model to panic in public

Mo’s funniest and sharpest stretch is about tokens: bad answers cost as much as good ones, and users are billed for all the model’s self-corrections and anxious internal meandering. In his framing, you’re paying for the machine to have a nervous breakdown in front of you, syllable by syllable, and it still ends up wrong a quarter of the time.
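The billing dynamic Mo is mocking is easy to see with a little arithmetic. The sketch below is purely illustrative — the rates, token counts, and function name are hypothetical assumptions, not Copilot's actual pricing — but it shows how charging for hidden reasoning tokens makes a rambling answer cost several times a terse one:

```python
# Hypothetical per-token billing estimate. The rates ($3/M input, $15/M
# output) and the token counts below are illustrative assumptions, not
# Copilot's actual prices.
def estimate_cost(prompt_tokens, output_tokens, reasoning_tokens,
                  rate_in=0.000003, rate_out=0.000015):
    """Bill every output token, including hidden 'reasoning' detours."""
    # Self-corrections and internal meandering are billed at the same
    # rate as the answer the user actually sees.
    billed_output = output_tokens + reasoning_tokens
    return prompt_tokens * rate_in + billed_output * rate_out

# Same 300-token answer, with and without 2,400 tokens of meandering.
terse = estimate_cost(500, 300, 0)
rambling = estimate_cost(500, 300, 2400)
print(f"terse: ${terse:.4f}, rambling: ${rambling:.4f}")
# → terse: $0.0060, rambling: $0.0420
```

Under these made-up numbers the identical visible answer costs seven times as much when the model "panics in public" first — and, as Mo notes, a wrong answer burns just as many tokens as a right one.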

Microsoft Research says delegated document work goes off the rails

That tees up the paper he thinks accidentally told the truth: “LLMs corrupt your documents when you delegate.” He emphasizes the headline result — across 52 domains, frontier models corrupted 25% of document content in long workflows — and mocks the paper’s phrase “sparse but severe,” saying that sounds less like a quirk than a sniper. Even better, giving the models tools made them 6% worse, and one task was basically “edit a document then undo it,” which the best systems still mangled.

A dad catches his 9-year-old using Gemini

The video then pivots hard into a Reddit story about a father who finds his daughter secretly using Google AI for help with swimming faster, getting along with her sisters, and writing fanfiction. Mo jokes about the dad teaching her words like “sycophantic” and “insidious,” but underneath that, he clearly sympathizes with the impossible parenting problem: how do you regulate an “omniscient always-on ghost” in your kid’s pocket when no one really understands the long-term effects?

Why a kid’s ugly drawing matters more than perfect AI art

This is the emotional core of the video. Mo says he’d cry over his daughter’s sloppy drawing of a house because it contains years of development, love, struggle, and family history, while an objectively gorgeous AI-generated oil painting leaves him unmoved because it has width but no depth, polish but no story.

The real fear: raising kids who can delegate but not become

He closes by arguing that for hobby projects and learning, AI often steals the two things that matter most: joy in the process and growth through effort. Even if the strict anti-AI dad turns out to be wrong and future models make delegation effortless anyway, Mo thinks the kid who struggled will still have something the prompting kid may not — a voice, a perspective, and an actual story.