A recent episode of go podcast() featuring Dominic St. Pierre and his new co-host Morten Wittesen is worth paying attention to, not because it breaks new ground in the AI debate, but because it unintentionally captures a fault line running through software engineering right now.
Both speakers are experienced developers. Both are thoughtful. Both are skeptical of simplistic "AI replaces programmers" narratives. And yet they are clearly living in two different futures.
Dominic’s position is emotionally coherent and widely shared. AI-augmented coding feels dangerous. It erodes hard-won skills. It produces code no one wants to read. It invites juniors to outsource understanding before they have any. It threatens maintainability, learning, and professional identity. These are not straw-man concerns. They are real, and many teams are already feeling the consequences.
Morten begins from a similar skepticism—but then, almost without noticing, describes a radically different reality in practice. He uses AI extensively. Not to "vibe code" blindly, but to scaffold, replicate, explore, and accelerate once a sound structure exists. He wraps generation in tests. He relies on deterministic feedback. He delegates repetition, not judgment. In other words, he is already operating inside a different operating model.
This tension is the story.
Dominic is reacting to AI as if it were a replacement for thinking. Morten is using it as a multiplier for execution under constraints. The disagreement is not about quality, ethics, or learning. It is about where the bottleneck now lives.
For decades, software engineering was bottlenecked on typing. Writing code was expensive. Review was tractable because volume was low. Learning required friction because feedback loops were slow. In that world, effort correlated reasonably well with understanding.
That world is gone.
AI makes execution cheap. Not perfect, but cheap enough that generation is no longer the limiting factor. When code is cheap, reading code stops being the primary quality gate. That does not mean quality disappears. It means the gate moves.
The scarce resource is no longer keystrokes. It is judgment.
Judgment means knowing what to build, how to decompose it, which parts may be delegated, and—critically—how to prove that the result is correct. Dominic’s discomfort comes from sensing that the old proxies for judgment are failing. "I read the diff and felt okay about it" no longer scales when diffs are thousands of lines long and mostly machine-generated. His fear of unmaintainable sludge is justified if nothing replaces that gate.
Morten, by contrast, has quietly replaced it.
When he talks about golden file tests, deterministic outputs, scaffolding once and replicating many times, or using the compiler and LSP diagnostics as a constant feedback loop, he is describing verification as the center of gravity. He does not trust the model. He constrains it. He does not rely on taste. He relies on evidence.
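The episode stays at the level of principle, but the golden file pattern is easy to make concrete. Below is a minimal Go sketch; the trivial render function stands in for whatever generator (human or machine) produced the output, and the package, names, and testdata path are illustrative, not from the podcast.

```go
// report_test.go
package report

import (
	"bytes"
	"flag"
	"fmt"
	"os"
	"path/filepath"
	"testing"
)

// -update rewrites the golden files instead of comparing against them.
var update = flag.Bool("update", false, "rewrite golden files with current output")

// render stands in for any deterministic generator whose output we want to pin.
func render(name string, count int) []byte {
	return []byte(fmt.Sprintf("report for %s: %d items\n", name, count))
}

func TestRenderGolden(t *testing.T) {
	got := render("orders", 3)

	golden := filepath.Join("testdata", "orders.golden")
	if *update {
		if err := os.MkdirAll(filepath.Dir(golden), 0o755); err != nil {
			t.Fatal(err)
		}
		if err := os.WriteFile(golden, got, 0o644); err != nil {
			t.Fatal(err)
		}
	}

	want, err := os.ReadFile(golden)
	if err != nil {
		t.Fatal(err)
	}
	if !bytes.Equal(got, want) {
		t.Errorf("output diverged from %s:\n got: %q\nwant: %q", golden, got, want)
	}
}
```

The workflow is the point: output is compared byte for byte against a reviewed artifact, and regeneration via -update is a deliberate, diffable act rather than silent drift.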
This is the critical distinction missing from most public AI debates: the difference between outsourcing execution and outsourcing responsibility.
AI-generated code reviewed by AI is a closed epistemic loop. Everyone in the podcast correctly rejects that. But rejecting AI review does not require rejecting AI execution. It requires independent verification. Tests, harnesses, invariants, contracts, observability—these are not "DevOps tools" or "process overhead." They are how judgment is made executable.
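To make "judgment made executable" concrete, here is one form it can take: a runtime invariant. The ledger below is a hypothetical example of mine, not something from the episode. The idea is that the correctness contract is re-verified after every mutation, so delegated code that breaks it fails loudly instead of merely compiling.

```go
package ledger

import "fmt"

// Ledger tracks per-account balances plus a cached running total.
type Ledger struct {
	balances map[string]int64
	total    int64
}

func New() *Ledger {
	return &Ledger{balances: make(map[string]int64)}
}

// Post applies a signed amount to an account, then re-checks the contract.
func (l *Ledger) Post(account string, amount int64) error {
	l.balances[account] += amount
	l.total += amount
	return l.checkInvariants()
}

// checkInvariants makes the correctness contract executable: the cached
// total must always equal the sum of balances. Any change that violates
// this fails immediately, regardless of who (or what) wrote the code.
func (l *Ledger) checkInvariants() error {
	var sum int64
	for _, b := range l.balances {
		sum += b
	}
	if sum != l.total {
		return fmt.Errorf("invariant violated: total %d != sum of balances %d", l.total, sum)
	}
	return nil
}
```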
Seen through this lens, the junior developer problem also sharpens. The issue is not that juniors use AI. It is that they lack a verification posture. They cannot yet tell what must be constrained, what can be delegated, or when something is wrong but compiles. That was already true before AI. AI simply removes the illusion that typing equals competence.
The uncomfortable truth is that AI does not make people lazy. It makes lack of judgment visible. Seniors feel this as loss because part of their identity was built around being the fastest typist or the one who remembered every API detail. That identity was already under erosion from IDEs, search engines, and Stack Overflow. AI just completes the arc.
The real skill, the one that compounds, is the ability to design systems that can falsify themselves. To embed feedback loops so tight that errors surface immediately. To make correctness cheaper than speculation. That is not anti-learning. It is accelerated learning.
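One concrete shape of a self-falsifying design is a property-based test. The sketch below is illustrative, using Go's standard testing/quick package on a toy reverse function: the claim is stated as a property, and the machine hunts for counterexamples instead of waiting for a reader to spot the bug.

```go
// prop_test.go
package prop

import (
	"bytes"
	"testing"
	"testing/quick"
)

// reverse returns a new slice with the elements of s in reverse order.
func reverse(s []byte) []byte {
	out := make([]byte, len(s))
	for i, b := range s {
		out[len(s)-1-i] = b
	}
	return out
}

// A falsifiable claim: reversing twice is the identity. testing/quick
// searches random inputs for a counterexample, so a wrong implementation
// is refuted by evidence rather than missed by taste.
func TestReverseRoundTrip(t *testing.T) {
	property := func(s []byte) bool {
		return bytes.Equal(reverse(reverse(s)), s)
	}
	if err := quick.Check(property, nil); err != nil {
		t.Error(err)
	}
}
```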
The podcast ends with a hope that education will return to "human-led" models, with coaching and communities. That may well be true. But the competitive advantage will not come from human warmth alone. It will come from teaching people how to operate in a world where execution is abundant and coherence is scarce.
We are not watching the death of software engineering. We are watching a coordination shift. Code is no longer the product. Verified behavior is.
Those who cling to reading as the primary quality gate will feel overwhelmed. Those who move verification into the inner loop will feel superhuman.
That is not hype. It is already visible—in this very conversation.