Matthew Berman · 19m

I messed up...

TL;DR

  • An $800 Vercel bill exposed the hidden cost of vibe coding — Matthew Berman blindly accepted AI-recommended defaults, ended up on Vercel’s Turbo build machine at 12 cents per build minute, and was paying for duplicate concurrent builds after deploying dozens of times a day.

  • A few simple fixes cut costs from hundreds per week to a couple of dollars — switching from the Turbo to the Elastic build machine, disabling concurrent builds of near-identical deploys, shortening build times from 3-4 minutes to about 1 minute, and moving builds to GitHub hooks brought the bill down dramatically.

  • AI coding is accelerating shipping while reducing human oversight — Berman says tools like Cursor, Codex, and Claude Code are clearly moving toward chat-first interfaces where code review is secondary, reflecting a broader trend of shipping without reading the code.

  • The real risk isn’t just bad code, it’s misunderstood systems and services — beyond not reviewing implementation, he admits he also stopped evaluating platform risk, uptime, pricing, support, and fit when AI kept recommending tools like Vercel, Resend, Fly.io, and Railway.

  • Generative AI optimization is now shaping winners in dev tools — Berman argues GEO, the AI-era cousin of SEO, matters because services repeatedly recommended by coding agents are compounding growth, citing Resend’s jump from 1 million to 2 million users in four months.

  • His bigger warning is that natural language may be the last human-friendly abstraction — if AI eventually writes code in formats optimized for machines rather than humans, developers could be relying on systems they can no longer truly inspect or understand.

The Breakdown

The $800 Jump Scare From Vercel

Berman opens with the gut punch: after one of his best months of vibe coding and shipping multiple products, Vercel hit him with an $800 bill after just two weeks. He says the AI coding assistant told him to deploy there, he clicked deploy, and never looked closely at the services, configs, or pricing until the bill forced him to.

The Two Default Settings That Burned Money Fast

The first culprit was Vercel’s default Turbo build machine, which he says charged him 12 cents per build minute, versus the Elastic machine starting at roughly three-tenths of a cent per minute. The second was concurrent builds: because he was pushing tiny fixes constantly, he’d have multiple nearly identical deploys running at the same time, and he was billed for every one of them.
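The billing math above can be sketched in a few lines. The per-minute rates come from the episode; the deploy count and the duplicate-build multiplier are illustrative assumptions, not Berman's actual usage:

```python
# Back-of-the-envelope sketch of the build-cost math described above.
# Rates are from the episode ($0.12/min Turbo, ~$0.003/min Elastic);
# deploys_per_day and concurrency_factor are illustrative guesses.

def weekly_build_cost(rate_per_min, mins_per_build, deploys_per_day,
                      concurrency_factor=1.0, days=7):
    """Cost = rate x build length x deploys, inflated by duplicate
    concurrent builds that all get billed."""
    return (rate_per_min * mins_per_build * deploys_per_day
            * concurrency_factor * days)

# Turbo defaults, 3.5-minute builds, dozens of pushes a day,
# with near-duplicate builds roughly doubling billed minutes:
turbo = weekly_build_cost(0.12, 3.5, 40, concurrency_factor=2.0)

# After the fixes: Elastic machine, ~1-minute builds, no duplicates:
elastic = weekly_build_cost(0.003, 1.0, 40)

print(f"Turbo-style week:   ${turbo:.2f}")
print(f"Elastic-style week: ${elastic:.2f}")
```

Under these assumptions the weekly bill falls from roughly $235 to under a dollar, which lines up with his "hundreds per week to a couple dollars" framing.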

Twitter Roasts, Theo’s Reply, and the Obvious-in-Hindsight Fixes

After posting about the surprise bill on X, Theo jumped in with a blunt “WTF is wrong with your build process,” which Berman takes in stride because the advice was useful. That thread helped him see the deeper issue: absurdly slow builds, often 3 to 4 minutes each. After tuning, builds dropped to around a minute or less, and with GitHub hooks handling builds while Vercel handled only deploys, the cost fell to a couple of dollars.
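One concrete lever for the "GitHub hooks for builds, Vercel for deploys" setup, assuming Vercel's current `vercel.json` schema, is turning off automatic deploy-on-push so CI decides when a build actually runs (the branch name here is illustrative):

```json
{
  "git": {
    "deploymentEnabled": {
      "main": false
    }
  }
}
```

With auto-deployments disabled, a GitHub Actions hook can run the build once per meaningful change and trigger a single Vercel deploy, instead of Vercel spinning up a billed build for every tiny push.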

The Sponsor Break, Then the Real Broader Question

After a short sponsor segment for Recall 2.0, he widens the frame beyond Vercel. Around four or five months ago, he says, something changed with AI coding — he cites Opus 4.5, and people like Anthropic’s Boris Cherny and OpenClaw founder Peter Steinberger saying they’ve largely stopped writing code by hand, or even reading it.

AI Isn’t Just Writing the Code — It’s Choosing the Stack

Berman says the automation doesn’t stop at implementation: AI also keeps recommending the same infrastructure choices like Vercel, Resend, Fly.io, and Railway, and he’s found himself accepting those suggestions without weighing plan fit, support quality, uptime, or dependency risk. These are decisions he says he scrutinized carefully when he started software companies in the past, but vibe coding has made him sloppier about them.

GEO Is Becoming a Real Distribution Advantage

He argues that GEO — showing up in generative AI recommendations — is now as important as SEO for dev tools. That helps explain why the same companies keep surfacing in coding agents and why they’re growing so quickly, with Resend’s founder recently posting that the company went from 1 million to 2 million users in just four months.

Why “Just Review It” Stops Making Sense at AI Speed

The core argument lands here: it’s physically impossible to review all AI-generated code once output scales beyond human bandwidth. He says even reviewing functionality in natural language doesn’t fully solve it, because specs become giant essays, implementation can drift from what you thought you asked for, and features can appear that make you think, “I don’t remember ever asking for that.”

Coding Tools Are Quietly Training You Not to Look at Code

He traces the interface shift from IDEs centered on files and tab completion to Cursor, Codex, and Claude Code experiences where chat is the main surface and code is hidden behind clicks. In some cases, the tools now emphasize the rendered product in a browser instead of the implementation, which he says makes the industry direction pretty clear: not reading the code is becoming a feature, not a bug.

The Bigger Fear: AI-Written Code Humans Can’t Really Parse

Berman ends on a more philosophical note: today’s languages like Python and Ruby were designed to be readable for humans, but AI doesn’t need that constraint. If models eventually write code in representations optimized for machines, humans may only get a natural-language explanation of systems they can’t actually inspect — which is why he circles back to the Vercel bill and says fundamentals still matter, especially if you’re building production systems.