The Fundamentals You Need to Know About AI Coding (Microsoft Trainer)
TL;DR
Start with what an LLM actually is: a probabilistic next-word predictor, not black magic. Rob Bos says engineers get misled by AI marketing, and his go-to teaching trick is asking a model for a random number from 1 to 100 and reliably getting “42”, showing how these models pattern-match rather than reason like humans.
Most teams are optimizing the wrong bottleneck: engineers only code about 2 hours a day — citing a 2019 survey of 3,000 engineers, Bos argues leadership obsesses over faster code generation while ignoring the other six hours spent in meetings, architecture, requirements, and review.
AI only helps if your DevOps foundation is solid — Bos’s core claim is that agentic coding magnifies whatever already exists, so weak testing, poor CI/CD, and messy review processes just mean you ship bad code, bugs, and security issues faster.
The role of the engineer is shifting from typing code to translating business intent and validating outcomes — Bos is blunt that “your role has never been creating code,” and says the durable skill is describing what should be built, exploring options, and checking whether the result actually delivers value.
Getting good with AI coding is a real skill curve, not a weekend hack — Bos tells skeptical engineers it takes about 6 months of daily hands-on use before the new workflow clicks, including prompt/context engineering, picking the right model, and learning when agent mode beats manual edits.
Environmental cost is the blind spot almost nobody surfaces clearly — after instrumenting his own usage, Bos found he burned roughly 30 million tokens in 30 days, which he estimates at about 70 grams of CO2 and 600 liters of water, and he wants vendors to expose impact per million tokens the way they expose pricing.
The Breakdown
Rob Bos’s first principle: stop believing the AI magic trick
Bos opens with a warning: too many engineers “go off the cliff” by buying the sales pitch that AI is black magic and coming for their jobs. His antidote is simple — understand the basics of generative AI as next-word prediction with semantic patterns, so you stop expecting truth and start expecting probabilities, hallucinations, and useful-but-imperfect output.
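A toy sketch of what "next-word prediction" means in practice, and why Bos's 1-to-100 demo keeps landing on "42": the model samples from learned token probabilities, not a uniform distribution. The probability table below is invented for illustration; it is not real model data.

```python
import random

# Invented, illustrative token probabilities: "42" dominates because it is
# overrepresented in training text, not because the model "chose randomly".
token_probs = {"42": 0.55, "7": 0.15, "37": 0.12, "17": 0.10, "69": 0.08}

def sample_next_token(probs: dict, seed=None) -> str:
    """Sample one token from a weighted distribution, like an LLM's decoder."""
    rng = random.Random(seed)
    return rng.choices(list(probs), weights=list(probs.values()))[0]

counts = {t: 0 for t in token_probs}
for i in range(1000):
    counts[sample_next_token(token_probs, seed=i)] += 1

print(counts)  # "42" wins far more often than the 1-in-100 uniform odds
```

The point of the sketch is the mindset shift Bos teaches: expect a probability distribution over plausible outputs, not a truthful answer.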
From line completion to agent mode, but only after the basics
In training, he deliberately starts with tiny wins like line completion, adding unit tests, or generating docs, because jumping straight to full agent demos feels like “smoke and mirrors.” Then he ramps people into bigger workflows — like making coordinated MVC changes across interfaces, classes, and frontend in one go — and jokes that he now “lives in agent mode” because he’s a lazy engineer.
The productivity debate misses where engineers actually spend time
When companies ask whether tools like GitHub Copilot improve productivity, Bos pushes back with a DevOps question: how fast are you going now? He cites a 2019 report of 3,000 engineers showing that after meetings, architecture discussions, requirements work, and coordination, most developers only get about 2 hours of actual coding done per day — so speeding up coding alone optimizes the smallest slice of the job.
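Bos's argument is Amdahl's law applied to the workday: if coding is only ~2 of an 8-hour day, even a dramatic coding speedup barely moves overall throughput. A quick back-of-the-envelope check, using the 2-hours figure from the survey he cites (the speedup factors are illustrative assumptions):

```python
def overall_speedup(coding_fraction: float, coding_speedup: float) -> float:
    """Amdahl's law: speed up only the coding slice of the day."""
    return 1 / ((1 - coding_fraction) + coding_fraction / coding_speedup)

coding_fraction = 2 / 8  # ~2 hours of actual coding in an 8-hour day

for s in (2, 5, 100):
    print(f"{s}x faster coding -> {overall_speedup(coding_fraction, s):.2f}x overall")
# Even infinitely fast coding caps out at 1 / (1 - 0.25) ≈ 1.33x overall.
```

Doubling coding speed yields roughly a 1.14x overall gain; that is why Bos points leadership at the other six hours instead.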
AI is a magnifier, not a fix, for broken software processes
Bos’s main operational point is that AI behaves like DevOps did: it exposes your existing dysfunctions at higher speed. If you can generate more code but your review process, testing, security checks, and deployment discipline can’t absorb it, you just create more bugs and more pressure downstream instead of more value in production.
Why DevOps maturity matters more than the model wars
He keeps coming back to the same foundation: tests, local validation, CI from day one, and visibility into what changed. One vivid example is his use of AI to generate before-and-after screenshots plus code deltas in automated testing, so he can literally see whether a button moved 10 pixels too far to the right instead of trusting that the model got it right.
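A minimal sketch of the screenshot-diff idea Bos describes, not his actual tooling: compare a "before" and "after" render pixel by pixel so a button shifted 10 pixels shows up as a concrete changed region instead of a hunch. This uses Pillow, with drawn rectangles standing in for real screenshots.

```python
from PIL import Image, ImageChops, ImageDraw

def draw_ui(button_x: int) -> Image.Image:
    """Render a fake 200x100 screen with a 40x20 'button' at button_x."""
    img = Image.new("RGB", (200, 100), "white")
    ImageDraw.Draw(img).rectangle([button_x, 40, button_x + 40, 60], fill="blue")
    return img

before = draw_ui(button_x=50)
after = draw_ui(button_x=60)   # the button drifted 10 pixels to the right

# Nonzero pixels in the difference image mark exactly what changed.
diff = ImageChops.difference(before, after)
bbox = diff.getbbox()          # bounding box of changed pixels, or None

print("changed region:", bbox)
# A non-None bbox flags the visual regression; identical renders give None.
```

In a CI pipeline, a non-None bounding box would fail the check and attach both images to the review, which is the "literally see the change" workflow he describes.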
Skeptics usually haven’t seen the new workflow yet
Bos says many of the loudest skeptics tried AI “a couple of years back,” got garbage, and wrote it off. But in his trainings, even people who open with “you’ve got two hours to convince me” often end up saying “holy crap” once they see what better models, larger context windows, and more deliberate prompt/context engineering can do.
The engineer’s value is moving up a level
The most direct moment in the conversation is Bos saying the engineer’s job was never to produce code — it was always to translate business requirements into user value. That shift shows up in how he now works: asking the model for implementation options, exploring approaches before diving into a fix, and focusing less on handcrafted syntax and more on whether the system works, can be validated, and can be maintained.
The overlooked cost: tokens, CO2, water, and hidden infrastructure tradeoffs
Near the end, Bos zooms out to environmental impact, arguing that vendors abstract away the real cost behind pricing like “premium request units.” After building a VS Code extension to inspect his own usage, he found he’d consumed around 30 million tokens in 30 days; the carbon estimate was abstract, but the attached 600 liters of water was the number that genuinely shocked him and made him want token-level sustainability reporting from providers.
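The per-million-token reporting Bos wants from vendors is simple division over the rough numbers he quotes (~30 million tokens in 30 days, ~70 grams of CO2, ~600 liters of water). A sketch of that normalization; the figures are his estimates, not vendor-published data:

```python
# Bos's rough 30-day figures (estimates from his own instrumentation).
TOKENS = 30_000_000
CO2_GRAMS = 70.0
WATER_LITERS = 600.0

def per_million(total: float, tokens: int = TOKENS) -> float:
    """Normalize a total impact figure to 'per one million tokens'."""
    return total / (tokens / 1_000_000)

print(f"CO2:   {per_million(CO2_GRAMS):.2f} g / 1M tokens")
print(f"Water: {per_million(WATER_LITERS):.1f} L / 1M tokens")
# At this usage: ~2.33 g CO2 and ~20 L of water per million tokens.
```

Reported this way, sustainability sits next to pricing per million tokens, which is exactly the comparison Bos argues providers should make possible.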