AI Engineer · 52m

AI Didn’t Kill the Web, It Moved in! — Olivier Leplus (AWS) & Yohan Lasorsa (Microsoft)

TL;DR

  • AI is now touching the entire web app lifecycle, not just code generation — Yohan Lasorsa (Microsoft) and Olivier Leplus (AWS) walk through AI-assisted coding, Chrome DevTools debugging, on-device browser APIs, and even “agent-ready” web apps in one end-to-end session.

  • The quality gap with coding agents is increasingly a workflow problem, not a model problem — Yohan shows lightweight text-based “skills” plus an agents.md file orchestrating GitHub CLI, Playwright, a public tunnel, and Telegram so an agent can implement a GitHub issue, record a video, and send a phone-testable URL automatically.

  • Chrome DevTools is turning into an agent surface — Olivier uses the Chrome DevTools MCP server to let an agent launch the app, inspect pages, run traces under varying network throttling (no throttling, fast 3G, and slower presets), and return actionable metrics like LCP, CLS, critical path latency, and render-blocking savings.

  • Browser-native AI is getting practical, but it’s still wildly experimental — with Chrome flags enabled, they demo AI inside DevTools for CORS and 400 error explanations, plus local Web AI APIs that download a roughly 4 GB model once and power summarization, proofreading, and multimodal prompting fully on-device.

  • The most futuristic shift is that websites may need to expose tools for agents, not just interfaces for humans — they argue Web MCP could do for agentic browsers what responsive design did for mobile, showing both a hand-written addToCart tool and a way to convert an existing review form into an agent-callable tool with auto-submit.

  • llms.txt and llms-full.txt are framed as the bridge between today’s web and tomorrow’s agentic web — using Angular’s docs as the example, Yohan shows how a markdown map or full single-file corpus can steer agents to current documentation instead of stale training data from months or years ago.

The Breakdown

From “can AI code this?” to “can your workflow guide the agent?”

Yohan opens with the real premise: in the last six months, better models plus better integrations changed the game for web developers. His point is blunt — if you’re still getting poor results from coding agents, it’s often a skills issue, not because AI “can’t do it.”

Skills, agents.md, and a surprisingly practical e-commerce demo

Using a sample shop called Seine, he asks an agent to “implement the first open issue,” which is a GitHub issue for adding a contact page. The interesting part is what happens behind the scenes: the agent pulls the issue via GitHub CLI, uses repo-local skills stored in .agent/skills, records a Playwright video, opens a public tunnel, and sends the URL to Yohan’s phone via Telegram — all described in agents.md.
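None of that orchestration requires anything exotic — it can all live in a plain markdown file the agent reads. A hypothetical agents.md fragment in the spirit of the demo (the steps, paths, and environment variable name here are illustrative, not Yohan’s actual file) might look like:

```markdown
# agents.md (illustrative sketch)

## Workflow: implement a GitHub issue
1. Fetch the first open issue with `gh issue list --state open --limit 1`.
2. Implement the change, following the relevant skills in `.agent/skills/`.
3. Record a Playwright video walkthrough of the new page.
4. Start the dev server and expose it through a public tunnel.
5. Send the tunnel URL and the video to the reviewer via Telegram
   (bot token expected in the `TELEGRAM_BOT_TOKEN` environment variable).
```

The Telegram hiccup in the demo is exactly the failure mode this style implies: the workflow is only as reliable as the credentials and tools each step assumes.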

When the demo wobbles, the point gets more believable

The Telegram step briefly fails because the agent can’t find a token, which gets a laugh and the very relatable admission that agents are “not always working, especially during demos.” Then it recovers: Yohan gets the notification on his smartphone, opens the generated contact page, and shows the recorded preview — the whole thing feels less like magic and more like an actual repeatable review workflow.

Chrome DevTools, but callable by an agent

Olivier picks up from there with the Chrome DevTools MCP server, framing it as the missing bridge between amazing browser tooling and AI agents. He shows an agent launching the app, opening Chrome, taking screenshots, and then running performance traces under different network conditions so it can report metrics like LCP, CLS, critical path latency, and render-blocking opportunities.
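Wiring this up is typically a one-line entry in the MCP client’s configuration — the exact file location depends on which client you use, but the shape generally looks like this:

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```

Once registered, the agent gets tools for navigation, screenshots, and performance tracing instead of having to scrape the page blind.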

DevTools itself is starting to answer back

Yohan then shows AI built directly into Chrome DevTools — hidden behind the AI innovation settings for now. In the console and network tabs, he clicks “debug with AI” on a CORS error and a 400 request, and gets contextual explanations and suggested fixes without copy-pasting logs into ChatGPT.

CSS debugging gets less miserable

The most web-developer-specific moment is in Elements: he selects an h1, asks AI to turn the text into a gradient matching the site’s color variables, and watches the style update live. The killer detail is “apply to workspace,” which can push those DevTools edits back to source code — so you don’t make the perfect tweak in the browser and then immediately lose it.
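Whatever the AI generates for that request will boil down to the standard CSS gradient-text trick: paint a gradient as the background, clip it to the glyph shapes, and make the text color transparent. The custom property names below are assumptions, not the demo site’s actual variables:

```css
/* Gradient text: gradient background, clipped to the glyphs. */
h1 {
  background: linear-gradient(
    90deg,
    var(--brand-start, #6d28d9),
    var(--brand-end, #0ea5e9)
  );
  background-clip: text;
  -webkit-background-clip: text; /* still needed in some browsers */
  color: transparent;
}
```

Note the ordering: the `background` shorthand resets `background-clip`, so the clip declarations must come after it.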

On-device browser AI: summary, proofreading, and image-to-review generation

Next they move into emerging Web AI APIs under W3C discussion: summarizer, proofreader, writer/rewriter, and a general prompt API. Olivier shows a review summarizer and a proofreader running locally in the browser, noting that the first download is around 4 GB, but the model is then reused across sites unless Chrome needs the storage back.
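The Summarizer API shape below follows Chrome’s current experimental surface (a global `Summarizer` object with `availability()` and `create()`), but since the whole thing is behind flags and still changing, this sketch feature-detects and falls back to a trivial first-sentence summary elsewhere:

```javascript
// Summarize a review with Chrome's experimental on-device Summarizer API
// when present; fall back to a naive first-sentence summary elsewhere.
async function summarizeReview(text) {
  if (typeof Summarizer !== "undefined" &&
      (await Summarizer.availability()) === "available") {
    // The first call on a fresh profile may trigger the ~4 GB model download.
    const summarizer = await Summarizer.create({ type: "tldr", length: "short" });
    return summarizer.summarize(text);
  }
  // Fallback: just the first sentence.
  return text.split(". ")[0].replace(/\.?$/, ".");
}
```

The important property for web developers is in the happy path: the text never leaves the device, which is what makes these APIs interesting for reviews and other user content.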

The web wasn’t replaced by agents — it may become their tool layer

The closing section is the most forward-looking: Yohan introduces llms.txt and llms-full.txt as machine-friendly maps of site content, using Angular’s docs as the example. Then Olivier demos highly experimental Web MCP, where a page can register tools like addToCart, or even upgrade an existing HTML review form into an agent-callable tool with generated schema and auto-submit — his analogy is that this may become as important as responsive design was when mobile took over.
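The llms.txt side of this is far simpler: it is just markdown served at a well-known path. A minimal sketch in the convention’s usual shape (the site name and URLs here are invented, not Angular’s actual file) looks like:

```markdown
# Seine Shop

> Documentation map for LLM agents. Prefer these pages over training data.

## Docs
- [Getting started](https://example.com/docs/start.md): install and first run
- [Checkout API](https://example.com/docs/checkout.md): cart and payment flows

## Optional
- [Changelog](https://example.com/changelog.md)
```

The llms-full.txt variant inlines the entire corpus into one file, trading size for the guarantee that the agent sees current docs rather than whatever was in its training cut-off.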
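Web MCP is still a draft and its API names are in flux, so treat this sketch as a shape, not a spec: the tool object itself is plain JavaScript, while the registration call (`navigator.modelContext.registerTool`, taken from the draft explainer) is guarded because no stable browser exposes it yet. The `addToCart` name and cart shape are illustrative:

```javascript
// Illustrative WebMCP-style tool: expose addToCart to an agentic browser.
function makeAddToCartTool(cart) {
  return {
    name: "addToCart",
    description: "Add a product to the shopping cart.",
    inputSchema: {
      type: "object",
      properties: {
        productId: { type: "string" },
        quantity: { type: "number" },
      },
      required: ["productId"],
    },
    async execute({ productId, quantity = 1 }) {
      cart.push({ productId, quantity });
      return { content: [{ type: "text", text: `Added ${quantity} x ${productId}` }] };
    },
  };
}

const cart = [];
const tool = makeAddToCartTool(cart);
// Register only where an agent-facing surface actually exists (draft API).
if (typeof navigator !== "undefined" && navigator.modelContext?.registerTool) {
  navigator.modelContext.registerTool(tool);
}
```

The form-upgrade demo is the same idea in reverse: instead of hand-writing the schema, it is derived from existing form fields, which is what makes the “responsive design moment” analogy plausible for ordinary sites.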