The Friction Is Your Judgment — Armin Ronacher & Cristina Poncela Cubeiro, Earendil
TL;DR
AI speed has turned into organizational pressure — Cristina Poncela Cubeiro says the early “fun” productivity gains from copilots quickly became a new baseline, so teams now feel pushed to ship more code faster without the time to review or think.
The bottleneck has flipped from writing code to judging it — Armin Ronacher argues every engineer now has “a multitude of producing power compared to their reviewing power,” which leads to 5,000-line PRs, rubber-stamped reviews, and more non-engineers like marketers or ex-CEOs shipping code into systems they don’t own.
Agents optimize for progress, not prudence — Ronacher’s example is config-loading code that silently falls back to defaults: it helps the model keep moving, but creates failure modes a human would usually avoid because people feel the risk and agents don’t.
Libraries are a much better fit for agent coding than products — Earendil found agents perform best on tightly scoped library work with clear APIs, while product code breaks down because UI, API responses, permissions, billing, and feature flags exceed the model’s global understanding.
An “agent-legible” codebase needs hard constraints, not vibes — Their practical tactics include lint rules against bare catch-alls, a single SQL query interface, a single UI primitives library, no dynamic imports, unique function names, and TypeScript’s erasable-syntax-only mode to reduce ambiguity.
Friction is where human judgment belongs — Their core message is not anti-AI; it’s that reviews should deliberately force humans to stop on migrations, permissions, and dependency choices, because “without friction there’s no steering.”
The Breakdown
The accidental slogan that set up the whole talk
Ronacher opens with a perfect irony: a security-incident forum post got an auto-generated social card carrying the company tagline “ship without friction.” His point lands immediately — after watching AI-assisted engineering up close, he and Cristina want to argue for adding some friction back in.
Two builders, two generations of AI engineering
Ronacher frames himself as a 20-year open-source developer, creator of Flask, who left Sentry in April and then fell “deep into a hole” of AI coding before starting Earendil in October. Cristina Poncela Cubeiro introduces herself as a “native AI engineer,” someone who learned software engineering through these tools rather than adding them later, which gives the conversation a useful old-world/new-world tension.
The psychological trap: output feels like progress
Cristina says the first big problem is not technical but mental: even when everyone tells you to slow down and think, it’s “just one more prompt.” The tools become addictive because they produce so much visible output that engineers start mistaking velocity for effectiveness, even as they lose the time to ask basic design questions, like whether this is the best implementation at all.
How AI changes team dynamics, not just coding speed
Ronacher says the really sneaky shift happens at team scale: code creation used to be supply-constrained, but now every engineer can generate far more than they can responsibly review. That imbalance gets worse as more people outside classic engineering — marketing folks, former CEOs, other adjacent roles — start shipping code, while responsibility still sits with the engineering org.
Why agent-written code feels wrong in a human way
His engineering critique is sharp: agents are rewarded for making progress, so they write code that runs and unblocks itself, even if it creates hidden failure conditions. His config-file example is memorable — an agent happily falls back to defaults, while a human engineer would often rather fail loudly than discover two hours later that the database has been filled with bad records.
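The contrast can be sketched in a few lines. This is a minimal illustration of the failure mode, not code from the talk; the names (`AppConfig`, `loadConfigForgiving`, `loadConfigStrict`) are hypothetical.

```typescript
interface AppConfig {
  dbUrl: string;
  batchSize: number;
}

const DEFAULTS: AppConfig = { dbUrl: "postgres://localhost/dev", batchSize: 100 };

// Agent-style: optimize for progress. A missing or corrupt config file
// silently becomes the defaults, so the program keeps running -- possibly
// writing bad records for hours before anyone notices.
function loadConfigForgiving(raw: string | undefined): AppConfig {
  try {
    return { ...DEFAULTS, ...JSON.parse(raw ?? "") };
  } catch {
    return DEFAULTS; // silent fallback: "progress"
  }
}

// Human-style: fail loudly at startup, so a bad deploy stops immediately
// instead of corrupting data downstream.
function loadConfigStrict(raw: string | undefined): AppConfig {
  if (raw === undefined) {
    throw new Error("config missing: refusing to start on defaults");
  }
  const parsed = JSON.parse(raw); // throws on corrupt input
  if (typeof parsed.dbUrl !== "string" || typeof parsed.batchSize !== "number") {
    throw new Error("config invalid: dbUrl and batchSize are required");
  }
  return parsed as AppConfig;
}
```

Both versions type-check and both “work” in the demo; only the strict one surfaces the bad deploy before it fills the database — which is exactly why an agent rewarded for unblocking itself tends to write the first.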
Libraries good, products bad
Cristina says Earendil has seen a strong pattern: agents do much better on libraries than products. Libraries usually have a clear problem, a constrained API surface, and a simple core; product code is the opposite, with intertwined UI, API, permissions, billing, and feature-flag logic that no model can reliably hold in context, so it looks reasonable locally and “a bit demented” globally.
Designing codebases so agents can actually read them
Their answer is an “agent-legible codebase,” meaning the codebase itself becomes infrastructure you intentionally shape for machine comprehension. That includes modularization, reducing hidden magic, following common patterns, and enforcing mechanical constraints like one SQL interface, one UI primitives library, no dynamic imports, unique function names, and TypeScript’s erasable-syntax-only mode so the model isn’t juggling multiple truths.
The review system that puts judgment back in
Earendil built a PR review extension that separates issues the agent should auto-fix from callouts that should wake the human up. Database migrations, permission changes, and new dependencies are exactly where Ronacher wants the human brain to “reactivate,” because those are the spots where speed feels great right up until you regret trusting it.
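The triage logic behind that split can be sketched as follows. Earendil’s actual extension wasn’t shown, so the finding categories and names here are assumptions based on the examples Ronacher gave.

```typescript
// Hypothetical PR-review triage: route findings either to the agent
// for auto-fixing or to a human for deliberate review.
type Finding = { file: string; kind: string };

// The kinds Ronacher names as places where the human brain should
// "reactivate" (assumed labels, not from a real tool).
const HUMAN_REVIEW_KINDS = new Set([
  "db-migration",      // schema changes
  "permission-change", // auth / access control
  "new-dependency",    // additions to the dependency tree
]);

function triage(findings: Finding[]): { autoFix: Finding[]; wakeHuman: Finding[] } {
  const autoFix: Finding[] = [];
  const wakeHuman: Finding[] = [];
  for (const f of findings) {
    (HUMAN_REVIEW_KINDS.has(f.kind) ? wakeHuman : autoFix).push(f);
  }
  return { autoFix, wakeHuman };
}
```

The design point is the hard allowlist: high-stakes categories are never auto-fixed, no matter how confident the agent is — the friction is the feature.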
Friction isn’t the enemy — it’s how you steer
The talk ends on the main metaphor: software teams usually talk about removing friction, and sometimes that’s right, but some friction is intentional and healthy, just like SLOs force you to ask whether a system deserves a certain reliability level. With agents generating months of technical debt in days, Ronacher’s punchline is simple and sticky: friction is where your judgment lives, and without friction there’s no steering.