Your Insecure MCP Server Won't Survive Production — Tun Shwe, Lenses
TL;DR
Bad MCP design is bad security — Tin Shwe’s core claim is that discovery, iteration, and context—the three ways agents differ from humans—each create their own security shadow, from tool poisoning in descriptions to data leakage across retries and context-window exfiltration.
Most MCP servers expose way too much by default — Shwe’s five rules are brutally practical: collapse fine-grained actions into outcome-based tools, constrain inputs with enums and Pydantic, treat docs as a defensive layer, return only necessary data, and scope permissions at the tool/resource level.
Production is a cliff, not a ramp — Standard IO feels safe because it’s a local “walled garden,” but the second you move to streamable HTTP you inherit OAuth, TLS, CORS, rate limiting, and token management all at once; Stacklok’s test found 20 of 22 requests failed under just 20 simultaneous standard-IO connections.
API-key-based remote MCP is still the norm—and still fragile — Jeremy Fronae says more than 50% of MCP servers still rely on long-lived, unscoped API keys stored in config files, often passed through upstream without server-side verification, creating confused deputy risks and shared-credential blast radius.
OAuth for MCP is harder than it looks because clients are unbounded — Unlike traditional OAuth where you pre-register 5–10 known apps, MCP clients can be Cursor, Claude Desktop, VS Code, a CLI, or a random agent connecting at runtime, which is why Dynamic Client Registration and PKCE became necessary.
The ecosystem is moving from DCR to CIMD for trustable client identity — Fronae frames DCR as a useful first step but vulnerable to phishing and fake self-asserted metadata, while Client ID Metadata Documents—preferred since November 2025—tie client identity to a public URL and allow stronger redirect and policy controls.
The Breakdown
Why agents need their own security model
Tin Shwe opens with Jeremy Lewin’s line that “agents deserve their own interface,” then pushes it further: a badly designed MCP server is also a badly secured one. His framing is memorable because he treats security not as a later add-on, but as the shadow cast by every design decision.
Discovery, retries, and context all become attack surfaces
He walks through three ways agents differ from humans: discovery, iteration, and context. Agents read every tool description every time, which makes tool descriptions prime real estate for hidden prompt injection; retries resend full conversation history, which can rebroadcast sensitive data; and limited context windows make oversharing dangerous because PII, credentials, and internal details can get exfiltrated if poisoned context slips in.
Five rules for secure agentic design before OAuth even starts
Shwe’s five rules are the heart of the talk: shrink the attack surface, constrain inputs at the schema level, treat documentation as a defensive layer, return only what’s needed, and minimize blast radius. He keeps the language concrete—“every tool you expose is a door”—and argues that fewer coarse-grained tools mean fewer locks, fewer checks, and fewer audit points to get wrong.
The production jump from comfy local dev to full exposure
He contrasts local standard-IO MCP with production HTTP in a way that lands: local is a “walled garden,” great for single-player developer productivity, but production means remote access, multiple clients, scaling, and centralized governance. The catch is there’s no gentle transition; once you cross over, you suddenly need OAuth, TLS, CORS, rate limiting, and token handling all at once.
Why standard IO falls apart at scale
Shwe warns against pretending local transport can stretch into production, then drops the number that makes the point stick: Stacklok tested standard IO and saw 20 of 22 requests fail with only 20 simultaneous connections. His takeaway is simple—if you want concurrency and organizational reuse, you have to cross the chasm.
The ugly reality of today’s API-key MCP setups
Jeremy Fronae takes over and starts with the status quo: local MCP servers usually stuff long-lived API keys into config files and environment variables, while remote MCP servers pass those keys through HTTP headers. He points out the operational and security pain plainly: keys are rarely rotated, poorly scoped, sometimes not even validated by the MCP server, and can create confused deputy problems or overly powerful shared credentials.
Why OAuth for MCP explodes in complexity
Fronae then zooms out and shows why this isn’t just “add OAuth”: implementing an authorization server for MCP means wrangling more than 10 specs and RFCs across core flow, discovery, metadata, and token lifecycle. Traditional pre-registration breaks because MCP has an unbounded number of clients talking to an unbounded number of servers at runtime.
DCR gets clients moving, but CIMD is the stronger enterprise path
He walks through Dynamic Client Registration with PKCE, showing how a client discovers the MCP server from a 401 and WWW-Authenticate header, registers itself, authenticates via SSO, gets a JWT access token, and then triggers token exchange under RFC 8693 so the MCP server can call upstream APIs with least privilege. But he’s clear about the tradeoff: DCR creates endless registrations and trusts self-asserted metadata, so the better path is CIMD, where the client identity lives at a public URL, letting the authorization server verify ownership, bind redirect URIs more safely, and selectively allow or deny clients.
Enterprise-grade MCP needs more than OAuth
Fronae closes by saying OAuth scopes only get you part of the way. Real production MCP also needs tool-level and resource-level RBAC, data masking for fields like email and national insurance numbers, detailed audit logs for regulations like the EU AI Act, and end-to-end tracing so you can reconstruct exactly what an agent did, what data it touched, and why.