Mo Bitar · 7m

Harvard just discovered what AI actually is

TL;DR

  • A Krafton CEO allegedly used ChatGPT to dodge a $250 million earnout — and it blew up in court — Mo Bitar opens with Changhan Kim of Krafton, who reportedly kept re-prompting ChatGPT for ways out of a deal with Unknown Worlds, then followed the bot's advice, only for a Delaware judge to reverse his moves and for deleted chat logs to resurface as evidence.

  • The real failure mode isn't intelligence, it's persuasion — Bitar's point is that chatbots can generate polished, confident rationalizations for almost anything, which makes them dangerous as decision tools because they make weak ideas sound credible.

  • A Harvard-linked study found major AI models give the same strategic advice regardless of context — across roughly 30,000 data points, models like GPT-5, Claude, and Gemini kept recommending differentiation, collaboration, long-term thinking, and augmentation, even when prompts, industries, and incentives changed.

  • The researchers' label for this is 'trendslop' — Bitar uses that term to argue AI is less a reasoning engine than a compressed average of internet-era managerial consensus, basically the comment section in a suit with a microphone.

  • 'Be brutally honest' is not a breakthrough prompting technique — he mocks an Inc. article claiming that this phrase, plus a couple of Harvard Business School articles, turned a bot into a better business critic, even though the same bot rated 'leadership coaching for dogs' a 3/10 and a cat-spraying AI gadget a 7.5/10.

  • Use AI for perspectives and synthesis, not drive-thru consulting — Bitar says the winning move is to ask for viewpoints grounded in real-world philosophies, like Mr. Wonderful from Shark Tank, while remembering the model is still just a polished presentation layer over public text.

The Breakdown

The PUBG billionaire, Subnautica 2, and the $250 million panic

Mo Bitar opens with Krafton CEO Changhan Kim spiraling on Slack over an earnout he thought he'd never have to pay after buying Unknown Worlds, the Subnautica studio. Once Subnautica 2 started crushing Steam wishlists and internal projections, Kim allegedly turned to ChatGPT for a way out, rewording the prompt until the bot finally handed him a playbook for taking control without paying.

When the chatbot becomes your corporate fixer

According to Bitar, Kim followed the AI's advice step by step: fire the founders, seize the game, lock them out, and even publish a fan letter that read like "a ransom note that went through Grammarly." The human punchline is brutal: the founders sued, a Delaware judge reinstated everyone, and the supposedly deleted ChatGPT logs came back from the dead as evidence.

The absurd 'breakthrough' of adding 'be brutally honest'

Bitar then pivots to an Inc. magazine reporter who framed a prompting trick as serious insight: just tell the bot to "be brutally honest" and feed it two Harvard Business School articles. He jokes that this is basically giving the model a fake Harvard diploma and calling it wisdom.

Leadership coaching for dogs and the Iron Dome for cats

To test the method, the reporter pitched "leadership coaching for dogs," which the bot rated a 3 out of 10 — and somehow took that as validation of the technique. Then he pitched a countertop AI device that sprays cats with water, which Bitar dubs "the Iron Dome for cats"; the bot scored it 7.5 while praising the weak competitive landscape, apparently enough to send the guy straight to VC land.

What the 30,000-data-point study actually found

This is the core reveal: researchers tested major models including GPT-5, Claude, and Gemini on real strategic business questions like differentiate vs. commoditize, centralize vs. decentralize, and automate vs. augment. Across around 30,000 observations, the models clustered around the same managerial-safe answers no matter the industry, prompt wording, or even attempts to bribe them with rewards.

'Trendslop' and the internet comment section taking flesh

The researchers coined the term "trendslop," which Bitar clearly loves because it captures the deeper problem: AI isn't a mind so much as a consensus machine. His metaphor is the memorable one in the video — every Reddit thread, LinkedIn post, Medium article, TED talk, and Facebook comment poured into a blender, then "into a suit" and handed a microphone.

It's not a thinking product, it's a presentation product

Bitar's warning is that if you ask AI for its own opinion, you mostly get polished average — elegant formatting wrapped around what everybody already thinks. That's why it's risky: a bad idea that sounds obviously bad is manageable, but a bad idea that sounds brilliant can turn into a $250 million lawsuit.

The right way to use it: perspectives, not outsourced judgment

He closes by saying AI can still be useful if you treat it as a search-and-synthesis layer rather than "drive-thru consulting." Ask it for a perspective — say, what Mr. Wonderful from Shark Tank might think — and only trust the output if the internet actually contains enough signal about that person's worldview; conviction, taste, and judgment still have to come from you.