AI Engineer · 44m

State of the Claw — Peter Steinberger

TL;DR

  • OpenClaw went from zero to one of GitHub’s biggest software projects in five months — Peter Steinberger says it’s nearing 2,000 contributors, ~30,000 commits, and ~30,000 PRs, with growth so steep a friend called it “stripper pole growth.”

  • Security became the main tax of success, not a side issue — OpenClaw has received 1,142 advisories so far, about 16.6 per day, including 99 marked critical, and Steinberger says much of it is AI-generated “slop” that still requires human review.

  • A lot of the scariest OpenClaw security headlines were technically valid but practically misleading — Steinberger cites CVSS 10 issues and the Belgium RCE panic as cases where users had to ignore the project’s recommended local/private-network setup or even modify code to create the dangerous condition.

  • OpenAI joining doesn’t mean OpenClaw is becoming ‘ClosedClaw’ — Steinberger says OpenAI now sees open source as strategically important, but he’s intentionally building a multi-company ecosystem with contributors from Nvidia, Microsoft, Tencent, ByteDance, Red Hat, Salesforce, Slack, and others so no single company controls it.

  • His core product philosophy is local control plus agent power, even if that makes the system messier — the whole point, he says, is having your data under your control and letting an agent click through the web like a human, rather than waiting for corporate APIs and permissions like Gmail integrations.

  • The real moat in AI engineering is still taste and system design — Steinberger argues that even with 5-10 coding agents running in parallel, humans still have to supply direction, constraints, and the instinct to say no, because otherwise AI just pushes products into incoherent feature sprawl.

The Breakdown

Five Months In: OpenClaw’s Absurd Growth Curve

Steinberger opens with a flex backed by numbers: OpenClaw is only five months old and already one of the biggest software projects on GitHub. He says the graph didn’t even look like a hockey stick — it was a straight vertical line, joking that a friend called it “stripper pole growth” — and that kind of velocity brings its own chaos.

Running a Foundation Is ‘Company on Hard Mode’

He explains that after deciding he didn’t want to do the startup thing again, he joined OpenAI while also helping create the OpenClaw Foundation. That means two jobs at once, with the extra headache that foundations depend heavily on volunteers you can’t simply direct, so he’s been obsessed with improving the project’s “bus factor.”

The Security Flood: 1,142 Advisories and a Lot of Noise

The heart of the talk is security: OpenClaw has been hammered with 1,142 advisories, around 16.6 per day, which he compares to the Linux kernel’s 8 or 9. His blunt rule of thumb is memorable: the louder someone screams that an issue is critical, the more likely it is “slop,” especially now that AI tools can generate weird exploit chains at scale.
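The quoted figures can be sanity-checked with simple arithmetic. A minimal sketch, using only the numbers from the talk; the ~69-day window is inferred from the stated totals, not something Steinberger gives explicitly:

```python
# Back-of-the-envelope check on the advisory figures quoted in the talk.
# The totals come from the digest; days_observed is an assumption implied
# by the ~16.6/day figure, and linux_per_day is the midpoint of "8 or 9".
advisories_total = 1142   # advisories received so far
critical_total = 99       # advisories marked critical
days_observed = 69        # assumed reporting window (inferred, not stated)
linux_per_day = 8.5       # Linux kernel comparison point from the talk

per_day = advisories_total / days_observed
critical_share = critical_total / advisories_total

print(round(per_day, 1))                   # ~16.6 advisories per day
print(round(critical_share * 100, 1))      # ~8.7% marked critical
print(round(per_day / linux_per_day, 1))   # ~1.9x the Linux kernel's rate
```

Even on these rough assumptions, the rate is roughly double the Linux kernel's, while fewer than one in ten reports is marked critical — consistent with his point that volume, not severity, is the real burden.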

Why ‘Critical’ Didn’t Always Mean Dangerous in Practice

Steinberger walks through examples where OpenClaw got scary headlines for edge cases that required fighting the recommended setup. One CVSS 10 issue involved an unshipped iPhone sync app and a permission model nobody was even using; another RCE warning that panicked Belgium depended on exposing a gateway token in ways the default local/private-network install explicitly avoids.

Researchers, Vendors, and the Incentive to Make OpenClaw Look Reckless

He’s clearly frustrated by what he sees as fearmongering from both companies and academia. He calls out the “Agents of Chaos” paper for detailing OpenClaw’s architecture while skipping the security page, and says the researchers even ran the system in sudo mode for “maximum power” — then left that part out because it weakened the story.

Surviving the Slop Required Real Company Help

At first he tried to triage security reports himself and calls that impossible. What actually helped was getting companies involved: Nvidia supplied people who effectively work full-time on hardening the codebase, while Microsoft, Red Hat, Tencent, ByteDance, Slack, Telegram, Salesforce, and others now support pieces of the ecosystem.

OpenAI, ‘ClosedClaw,’ and Why He’s Building a Switzerland

In the AMA, Steinberger tackles the obvious question directly: is OpenClaw becoming ClosedClaw now that he works at OpenAI? He says no — OpenAI understands the strategic value of getting more people to actually use AI, and he’s deliberately building the foundation as neutral “Switzerland” so the project stays open, model-agnostic, and not captured by one company.

How He Actually Builds: Six Agents, Iteration, and Taste

On workflow, he says the viral screenshot of him running many coding sessions was real: at times nearly 10 at once, now more like 5 or 6 as tooling gets faster. But he rejects the pure “dark factory” idea where AI just ships code unattended, arguing that good software isn’t a straight-line waterfall process — it’s iterative, full of detours, and still bottlenecked on human taste, system design, and the discipline to say no.

The Future: Ubiquitous Personal Agents, Memory, and Dreaming

His ideal form factor isn’t just a phone app but a Star Trek-style ambient agent you can talk to in any room, with iPads or glasses as nearby display surfaces. He also wants to spend more time on “dreaming,” a memory-reconciliation system inspired by how humans sleep, consolidate experiences, and turn some short-term context into longer-term knowledge.