Alcreon
Matthew Berman · 13m

Anthropic banned OpenClaw...

TL;DR

  • Anthropic gave OpenClaw users less than 24 hours to stop using Claude subscriptions through third-party harnesses — the April 4 policy explicitly named OpenClaw and shifted those users onto paid “extra usage,” with automatic refunds offered for people who cancel.

  • This looks like a capacity crisis, not a random policy whim — Matthew ties the crackdown to Anthropic’s GPU crunch, citing peak-hour throttling from 5 to 11 Pacific, doubled off-peak usage as a carrot, and Claude uptime slipping to 98.77% on claude.ai.

  • Model switching is basically frictionless if your stack is set up right — he says he swapped OpenClaw from Claude to GPT-5.4 in about three minutes, with Jack Dorsey even agreeing that there’s “literally zero switching cost” if prompts are already optimized per model.

  • The real problem is Anthropic’s policy ambiguity — Boris Cherny said there were “no changes to agents SDK at this time,” but Matthew argues weeks later the rules are still muddy enough that it’s not worth the risk of staying on Claude inside OpenClaw.

  • Anthropic’s pain comes alongside explosive growth — he cites a report that Claude’s $200 subscription can represent roughly $2,000 of Anthropic credits and says the company’s revenue run rate jumped from $9 billion at the end of 2025 to $30 billion now.

  • His takeaway is strategic, not emotional: build multi-model systems — use frontier models for planning and coding, but offload tasks like classification, extraction, and summarization to open-source models such as Gemma 4 and Qwen 3.5 so policy changes don’t break your product.

The Breakdown

Anthropic drops the hammer on OpenClaw

Matthew opens mid-vacation because, in his words, Anthropic “dropped a bomb” Friday at 4:00 p.m. The email gave users under 24 hours’ notice that using Claude subscriptions through third-party harnesses — specifically naming OpenClaw — was now against the rules and would require paid extra usage instead of normal plan limits.

Why Anthropic is squeezing usage so hard

He frames this as Anthropic dealing with two things at once: demand is exploding, and capacity is not keeping up. First came the carrot — 2x usage outside peak hours and all weekend — then the stick: faster burn through 5-hour session limits during weekday peak hours from 5 to 11 Pacific, a change he suspects hit the heaviest agentic users hardest.
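The carrot-and-stick split he describes can be sketched as a tiny helper. Two assumptions not in the episode: that "5 to 11 Pacific" means 5:00 to 11:00 a.m., and that the off-peak bonus is a flat 2x multiplier applied per request.

```python
from datetime import datetime

# Peak window cited in the episode: weekdays, 5 to 11 Pacific
# (a.m. assumed here; the episode does not spell it out).
PEAK_START_HOUR, PEAK_END_HOUR = 5, 11

def usage_multiplier(ts: datetime) -> float:
    """2x usage off-peak and all weekend; normal burn during weekday peak hours."""
    is_weekend = ts.weekday() >= 5  # Saturday=5, Sunday=6
    in_peak_window = PEAK_START_HOUR <= ts.hour < PEAK_END_HOUR
    if is_weekend or not in_peak_window:
        return 2.0
    return 1.0
```

Timestamps here are naive datetimes assumed to already be in Pacific time; a real implementation would convert with `zoneinfo` first.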

The quota pain is real, and the switch was easy

Matthew says users were already seeing quotas vanish overnight, so the ban landed on top of existing frustration. Still, when the email arrived, he switched his OpenClaw setup from Claude to GPT-5.4 through the Codex API in about three minutes, arguing there is effectively “zero switching cost” if you’ve already structured your prompt files per model.

Prompt engineering, not loyalty, is the real moat

His practical tip is that every model wants different prompting: an Opus 4.6 prompt should not look like a GPT-5.4 prompt, even for the same task. That setup makes vendor changes painless, and he underscores the point by showing Jack Dorsey agreeing with his “zero switching cost” take.
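That per-model prompt setup can be sketched roughly as follows. Everything here is illustrative: the prompt strings, the provider mapping, and `build_request` are invented for this sketch, not taken from Matthew's actual config (he only says his prompts live in per-model files).

```python
# Hypothetical per-model system prompts; a real setup would load these from
# separate prompt files tuned for each model, as Matthew describes.
PROMPTS = {
    "opus-4.6": "You are a careful planner. Think step by step before acting.",
    "gpt-5.4": "Plan briefly, then act. Prefer concise tool calls.",
}

PROVIDERS = {
    "opus-4.6": "anthropic",
    "gpt-5.4": "openai",
}

def build_request(model: str, task: str) -> dict:
    """Assemble a provider-agnostic request; switching vendors changes one key."""
    return {
        "provider": PROVIDERS[model],
        "model": model,
        "messages": [
            {"role": "system", "content": PROMPTS[model]},
            {"role": "user", "content": task},
        ],
    }
```

With this shape, his three-minute switch amounts to changing the model key at one call site; the tuned system prompt comes along automatically.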

Anthropic’s policy is still weirdly unclear

The messiest part is whether Anthropic’s agents SDK is still allowed inside the OpenClaw ecosystem. Matthew points to Boris Cherny saying “No changes to agents SDK at this time” and “working on improving clarity more,” but says the lack of a clean answer makes staying on Claude feel risky, especially when OpenAI quotas have been far more forgiving.

The GPU crunch shows up in the status page and the revenue chart

He pulls up Claude’s status page — lots of red, 98.77% uptime for claude.ai — and says anything below 99% is basically unusable at this level. At the same time, Anthropic is reportedly rocketing from a $9 billion revenue run rate at the end of 2025 to $30 billion now, plus locking in TPU support from Google, which makes the capacity squeeze feel like a side effect of hypergrowth, not stagnation.
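To make the uptime number concrete, the implied downtime at 98.77% over a 30-day month works out like this:

```python
uptime = 0.9877  # claude.ai figure from the status page he shows
minutes_per_month = 30 * 24 * 60  # 43,200

downtime_minutes = (1 - uptime) * minutes_per_month
print(f"{downtime_minutes:.0f} minutes")     # about 531 minutes
print(f"{downtime_minutes / 60:.1f} hours")  # roughly 8.9 hours of downtime
```

Nearly nine hours of outage a month is why, for agentic workloads that run continuously, he calls anything below 99% basically unusable.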

Even prompts are getting flagged now

Matthew highlights a report from Peter Steinberger showing Claude refusing usage when the system prompt said it was “running inside OpenClaw,” which made it look like Anthropic was effectively banning prompt text. Boris replied that this was “not intentional” and likely an overactive abuse classifier, but Matthew’s reaction is basically: how are users supposed to build on this when the policy keeps shifting and the enforcement is murky?

The bigger lesson: go multi-model now

He ends on strategy instead of outrage. OpenAI is openly courting OpenClaw users, Peter Steinberger says OpenClaw now makes GPT-5.4 feel more like Claude, and Matthew’s advice is simple: don’t depend on one model provider — use frontier models where they shine, and push work like classification, extraction, and summarization onto open-source models like Gemma 4 and Qwen 3.5.
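His multi-model advice reduces to a routing table. The sketch below uses the model names he mentions but an assumed task taxonomy; nothing here is a specific implementation from the episode.

```python
# Frontier model reserved for open-ended work (planning, coding);
# cheap, well-specified tasks go to open-source models.
FRONTIER_MODEL = "gpt-5.4"
OPEN_MODEL_BY_TASK = {
    "classification": "gemma-4",
    "extraction": "qwen-3.5",
    "summarization": "gemma-4",
}

def route(task_type: str) -> str:
    """Pick a model per task so one provider's policy shift can't break the product."""
    return OPEN_MODEL_BY_TASK.get(task_type, FRONTIER_MODEL)
```

The payoff is exactly the scenario this episode covers: if a provider changes its rules overnight, only the frontier slot needs swapping, and the bulk of the pipeline keeps running on models no vendor can take away.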