Mo Bitar · 6m

The real reason they keep saying AI will take your job

TL;DR

  • Mo Bitar says the “AI will take your job” drumbeat is labor market strategy, not neutral forecasting — he argues CEOs like Anthropic’s Dario Amodei benefit when workers feel replaceable, because fear suppresses raise negotiations and holds down wages while companies redirect the savings to AI vendors.

  • His core villain is the “token budget” — Bitar calls internal metrics like Meta/Facebook-style token leaderboards dystopian because they reward burning model spend and generating “slop,” not reviewing code or producing better work.

  • He claims AI adoption is being exaggerated inside companies because the story itself is useful — even if tools aren’t working well, executives and investors both like the narrative that the company is modernizing and becoming more efficient.

  • Bitar’s big technical point is that LLMs break down as precision requirements go up — his joking “Bitar lesson” is that the more exactness you need in code, art, or communication, the less useful AI becomes, because it’s just stacked approximations of intent through language.

  • The promised productivity boom has become a cleanup tax — instead of eliminating work, he says AI often creates a “second job” where employees must monitor, fix, and justify model output while being paid less and tracked more closely.

  • His prescription is simple: workers need to publicly compare notes — he urges people on YouTube, TikTok, and X to honestly say whether AI is actually helping at their companies, arguing the pro-AI “bull story” is winning because workers still lack a shared counter-narrative.

The Breakdown

The Job-Loss Story as Corporate Leverage

Bitar opens hot, calling the “token budget” a spreading cancer and framing AI doom-talk as one of Silicon Valley’s most dystopian inventions. He singles out Anthropic CEO Dario Amodei, arguing that constant predictions of mass unemployment function as elite marketing: scare workers enough, and they’ll accept lower pay, skip raise negotiations, and cling to whatever job they still have.

Why the Narrative Pays Even If the Tools Don’t

His point isn’t just that AI vendors want attention — it’s that the whole ecosystem profits when everyone acts like AI is already delivering. Companies get leverage over employees and a shiny investor narrative about being “forward moving,” while Anthropic and similar vendors profit whenever labor gets cheaper and AI spend rises.

A Call for Workers to “Spill the Beans”

From there he turns to what workers should do: talk publicly about whether AI is actually working inside their companies. He wants more honest stories on YouTube, TikTok, and elsewhere because, in his view, capital has a unified bullish story while workers are isolated, panicked, and missing a shared front.

Token Leaderboards and the Rise of Measured Slop

Then comes the concrete example: Bitar says Facebook has a token leaderboard pushing employees to generate output at a ridiculous pace, so that they look productive simply by spending enough on AI. He says the metric is upside down — the person at the top is reviewing zero code — and compares it to failed old-school management proxies like lines of code, except now the tracking is dressed up as innovation.

AI as a Second Job, Not a Labor Saver

One of his sharpest turns is the complaint that AI didn’t remove work; it doubled it. Instead of freedom, workers now have to generate, monitor, clean up, and justify AI output, which he describes as getting a second job while being paid less, all while leaders like Nvidia’s Jensen Huang talk as if spending $250,000 per employee on tokens is the benchmark for productivity.

Startups Can’t Figure It Out — So Why Pretend Fortune 500s Have?

Bitar points to Dax, founder of Open Code, who tweeted that many teams still don’t know how to make AI actually useful. His reaction is basically: if a nimble startup can’t confidently crack this, what makes anyone think giant Fortune 500 companies with 80,000 employees spread across dozens of time zones have solved it?

The “Bitar Lesson”: Precision Is Where AI Falls Apart

He lands on a more durable thesis: LLMs are “insanely unreliable autocomplete,” and their usefulness shrinks as precision demands rise. His memorable formulation — the “Bitar lesson,” riffing on the “bitter lesson” — is that language already approximates intent, and then AI approximates language, so there’s always a gap; it can maybe get you 80% there, but the last 20% was the hard, human part all along.

Slow, Careful Builders Will Outlast the Slop Metrics

By the end, Bitar says the people obsessing over token budgets are going to lose, while the winners will be the ones staying slow, protecting quality, and paying attention to what customers actually want. He closes with a personal note: earlier this year he thought his skepticism would age badly as models improved, but instead he feels those doubts are only getting more accurate as more people wake up to the mismatch between the hype and the reality.