What is a Software Moat in 2026? - Freestyle Friday (4/24/2026)
TL;DR
Classic software moats are evaporating — Joe argues that feature velocity, polished UX, and fast engineering teams used to look defensible, but coding agents now let everyone ship quickly, exposing those moats as mostly friction-based illusions.
Thin wrappers over Claude or GPT are not a business moat — he says he can spot these products “within about 10 seconds,” and warns that model providers like OpenAI and Anthropic can absorb the workflow and telemetry you hand them and eventually compete with you directly.
Prompt tricks and generic copilots are already commoditizing — “we integrated AI into X” is, in his view, too boring to sustain a company unless there’s something much deeper underneath than prompts and orchestration.
Real defensibility still lives where friction is high — Joe points to mission-critical systems like DuckDB, Postgres, SAP, and Oracle as examples of software you won’t “vibe code” away because they’re embedded in workflows and act as systems of record with data flywheels.
Training your own model is a moat, but an expensive one — citing Josh Wills of Datalogic, he says custom model training starts around $10 million and could easily be $20 million+, which makes it a risky bet for startups but potentially worth it if you own proprietary workflows and data.
The next battleground is business model design, not just product features — Joe expects go-to-market and pricing to shift from seat-based SaaS to things like price per action or token cost plus margin, because tokens are becoming the new core unit of cost.
The Breakdown
Jet-lagged in Salt Lake, asking the real moat question
Joe opens from Salt Lake City, fresh off Asia travel and a conversation with a VC friend about what a moat even means in 2026. His main point: the old software story — great features, great UX, a team that ships fast — may never have been a real moat so much as a temporary advantage created by engineering friction.
Invert the question: what is not a moat anymore?
Channeling Charlie Munger’s “invert, invert, invert,” Joe says it’s easier to identify what has clearly stopped being defensible. His early list is blunt: thin wrappers over foundation models, prompt-engineering tricks, generic copilots, static BI dashboards, and feature velocity itself.
The wrapper problem: if it’s just Claude with a harness, that’s fragile
Joe says companies pitch him products all the time, and often it becomes obvious almost immediately that they’re just wrappers around Claude or GPT. Even if you built a clever harness, he warns that public model providers get telemetry and business intelligence from your usage — and with every model release, the “special workflow” you thought you owned may just disappear.
Why most startups probably can’t buy a moat by training a model
He doesn’t dismiss model-building entirely, but he makes the economics feel painfully real. Referencing a recent conversation with Josh Wills, now at Datalogic, Joe says training a serious model starts at roughly $10 million and can hit $20 million or more — which is a lot of money to potentially light on fire if you don’t truly have the data, workflows, and capability.
The real moat is still friction — just in different places
Joe’s replacement framework is simple: moats exist where friction still exists. That could mean boring real-world businesses like plumbing or, in software, tools that are deeply embedded and mission-critical rather than flashy and easy to clone.
DuckDB, Postgres, SAP: software you’re not going to vibe code away
This is where he gets concrete. He cites DuckDB — co-created by his friend Hannes Mühleisen — plus Postgres, SAP, and Oracle as examples of durable systems because they sit inside workflows, hardware, systems of record, and data flywheels that improve operations over time.
Your own name might be a moat — at least for now
Joe gets personal here: the business he’s building leans on the one moat he feels confident about today, which is “me” — his brand, reputation, and the things he’s built. AI can replicate a lot, he says, but not yet the trust and accumulated credibility tied to a human identity, even if scaling that kind of moat is still an open problem.
The bigger shift is pricing, GTM, and accepting that nobody really knows
By the end, the discussion widens from product defensibility to monetization. Joe expects seat-based SaaS pricing to fade toward price-per-action or token-cost-plus-margin models, because tokens are replacing labor as a core cost driver. And while he sounds energized by the chance to invent new pricing models, he’s clear that this is still an inflection point where “nobody knows” and everyone is guessing.
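To make the token-cost-plus-margin idea concrete, here is a minimal sketch in Python. All the numbers (per-million-token rates, margin) are invented for illustration and are not from the episode; the point is only that the unit of cost becomes tokens consumed rather than seats sold.

```python
def price_per_request(input_tokens: int, output_tokens: int,
                      input_cost_per_m: float = 3.00,    # assumed $/1M input tokens
                      output_cost_per_m: float = 15.00,  # assumed $/1M output tokens
                      margin: float = 0.40) -> float:
    """Charge the customer the raw token cost plus a fixed percentage margin.

    Hypothetical example of token-cost-plus-margin pricing: the provider's
    per-token rates are passed through, and the business earns `margin` on top.
    """
    cost = (input_tokens / 1_000_000) * input_cost_per_m \
         + (output_tokens / 1_000_000) * output_cost_per_m
    return round(cost * (1 + margin), 6)

# Unlike seat-based SaaS, the price tracks usage, not headcount:
# a request that consumes 50k input and 10k output tokens costs the
# customer the underlying token cost plus 40%.
print(price_per_request(50_000, 10_000))
```

This is the simplest possible version; a real implementation would also have to handle model-specific rates, volume discounts, and the price-per-action variant Joe mentions, where a whole workflow (not a single request) is the billable unit.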