I have been listening to two different kinds of AI anxiety lately.

One is the layoff narrative: companies push AI adoption, measure usage, talk up productivity gains, then cut staff and tell the market the future has arrived. The other is the craftsman’s lament: the models write ugly code, drift off intent, require too much supervision, and leave behind a mess that costs more to clean up than the original work was worth.

Sabrina Ramonov’s video sits mostly in the first camp. A recent go podcast() episode sits more in the second. On the surface they sound different. Underneath, they are circling the same thing.

The real story is not that AI has cleanly replaced software developers. Nor is it that AI is fake, useless, or only good for demos and hype. The real story is that AI is collapsing the cost of execution faster than most companies can collapse the cost of coordination.

That is the shift that matters.

Sabrina’s argument is strongest where she points out that executive AI narratives often run ahead of actual capability. Workers are told to use the tools, measured on their usage, pushed to internalize the new world, and then later hear the same story used to justify layoffs. I think that critique lands. There is clearly something real there. But I also think parts of that story become too theatrical, too binary, too eager to force everything into a simple moral frame where either AI fully replaces people or it is useless.

That is not how engineering works. It never has been.

No serious engineering discipline relies on getting everything right on the first try. Humans do not work that way. Machines do not work that way. Large projects do not work that way. The entire history of engineering is iteration, failure, correction, verification, refinement. The strange thing about some AI critiques is that they demand from machines a magical standard that no one has ever demanded from real engineering teams. Either it replaces the whole job by itself, flawlessly, or it has failed. That framing misses the entire picture.

The go podcast() conversation misses it in a different way. There the complaint is less political and more artisanal. The model writes code that feels wrong. It goes off and "cooks" too long. It produces things they would not have done themselves. They start wondering whether the models are getting worse, whether vendors are hiding better internal systems, whether the public is being given something degraded while the elite keep the real leverage for themselves.

I find that line of thinking telling, because my own experience has been almost the opposite.

I have used OpenAI Codex heavily to build a very large Go codebase, and I do not recognize most of the failure modes they describe, at least not in the way they describe them. Maybe that was closer to reality back in May 2025. In AI time that is ancient history. Today, with the right steering, I find Codex surprisingly good at writing Go (gpt-5 up through gpt-5.3-codex and gpt-5.4). Often it writes code very close to how I would have written it myself. Not always. Not perfectly. Not autonomously. But well enough that the leverage is absolutely real.

That does not mean the model is "better." It means the fit between the model, the language, the problem, and my way of working is good enough to create real compounding value.

That distinction matters, because these systems do not create value in the abstract. They create leverage inside a specific development posture.

I think my own AI-augmented essay says this better than most of the current AI discourse does. The core shift is that execution is getting cheaper while coordination is becoming more visible as the dominant cost. Once a senior engineer plus agents can execute architecture, implementation, refactoring, testing, and operational work at a pace that used to require a whole team, the constraint moves. It does not disappear. It moves into coherence, integration, prioritization, verification, and organizational design.

That is exactly what I keep seeing.

I have never worked well by treating architecture diagrams as the real design artifact. For me, the real architecture is in the code, in the flow, in the invariants, in the behavior of the system under change. Diagrams come later, if at all. They are illustrations. They are not the living system. The living system is in the implementation and the proofs around it.

That turns out to matter a lot when you work with AI.

One of my biggest strengths in this mode is that I work iteratively and evidence-first. I encode invariants in unit tests. I prefer robust, broad, end-to-end integration tests. I want evidence that the system is still itself after each meaningful change. That has saved one of my large Go projects multiple times already. It has made it possible to refactor subsystems, add new subsystems, push through major performance improvements, and still stitch together a coherent whole at the end. Not without difficulty, of course. But compared to what? Building a massive coordination kernel, document store, query language, messaging queue, and log store while also holding down a full-time job would simply not have happened for me in a non-AI world.

That is the key. AI does not remove difficulty. It changes the shape of the difficulty.

The hard part is no longer "can I produce enough implementation?" The hard part is "can I preserve intent while implementation accelerates?" Can I keep the vision coherent? Can I steer the system back when a branch drifts too far? Can I establish current state, desired state, and the diff between them clearly enough that the agent can help close the gap? Can I prove the system still works after sweeping changes?

That is where the leverage lives.

I also have another trait that in the old world was often treated as a weakness: I dislike heavy upfront requirements. I do spec things up front, but not in the old industrial sense where everything must be frozen and correct before iteration begins. AI does need specification, but not the old kind. It needs intent, direction, constraints, and enough shape to start moving. Then I let it rip through a first iteration and take it from there. I keep coherence against the vision. I work in manageable loops, not too short and not too long. Long enough to make real progress, short enough that I can still hold the system in my head while context-switching across several other efforts. Days later I may need to ask what exactly changed, but the agent can usually answer that, and the tests tell me whether the behavior still holds.

This is why I think many AI discussions are not really about the models alone. They are about different kinds of engineers.

Some engineers want the tool to behave like a very careful artisan junior who follows explicit design taste from the start and rarely colors outside the lines. Some want to remain the direct author of every meaningful move. Some are repelled by the idea of iterative steering and feel that needing to correct the model is itself proof of failure. I understand that instinct, but I do not share it. I like writing code, but I also understand that I cannot do everything alone. So I delegate. The real skill is knowing what to delegate, how to delegate it, and how to recover when it drifts. That is the work now.

This is also why I have little patience for nostalgia about code review. In almost every organization I have worked in, code review was already mostly a ritual. Rubber stamps, polite comments, surface-level style nits, and very little actual verification. AI did not kill code review. It exposed how weak it already was. What matters is not ceremonial review. What matters is outcome-oriented testing, encoded invariants, integration coverage, and fast feedback tied to reality. If you want production quality without heroics, that is where the truth has always lived.

And this takes me back to Sabrina, because her strongest point is not really about model quality at all. It is about organizational fragility.

Many companies are still structured around a software operating model from a slower era. Functional silos. Matrix reporting. Too many handoffs. Too many meetings. Meetings about AI. AI-generated meeting notes that create even more inventory. Governance theater. Review theater. Planning theater. A whole machinery designed around the assumption that execution itself is scarce and expensive.

That assumption is now breaking.

When execution gets cheaper, the absurdity of the surrounding machinery becomes much more visible. AI lets smaller, senior, low-coordination units move faster. But most enterprises are not built to metabolize that speed. So the immediate result is often not more flow. It is more inventory. More code. More drafts. More tickets. More half-processed ideas. More things piling up at whatever the real constraint is.

And once inventory starts piling up at the constraint, management has two broad choices.

The hard choice is to redesign the system around the new reality. Smaller teams. Harder contracts. Narrower interfaces. Less ceremony. More direct ownership. Better automated verification. Fewer handoffs. Fewer dependencies. A model where the fastest people support the constraint instead of drowning in coordination overhead.

The easy choice is to cut incoming flow and tell a story about productivity.

That is why I think the popular "AI is replacing developers" narrative is too shallow. In many cases, AI is not replacing developers nearly as much as it is exposing that the enterprise operating model was already obsolete. The old model could survive as long as execution was the expensive part. Once execution becomes cheaper, coordination becomes the visible tax. Once a few strong people with agents can outproduce the old pipeline, the old pipeline starts looking ridiculous.

This is the coordination shift.

It is not fundamentally a story about model intelligence. It is a story about what kinds of organizations can still function when execution cost drops faster than coordination cost. Small teams gain leverage. Large matrixed orgs accumulate inventory. That is why so many large firms now look confused. They are not just struggling with AI adoption. They are struggling with the death of assumptions they built their entire internal machinery around.

And this is where I agree with Sabrina more than I disagree. Small-team organizations are the future of AI-native work, at least for now. Not because corporations disappear overnight. Not because every company becomes a solopreneur shop. But because the unit of effective action has changed. A technically strong person with good tools, strong taste, encoded invariants, and tight loops can now do work that used to require a coordinated group. Large enterprises can still exist, but many of them are going to discover that their dominant challenge is no longer execution. It is structure.

The irony is that they keep responding as if the bottleneck were still labor.

That is the part Theory of Constraints makes painfully obvious. They are not elevating the constraint. They are not making their fastest hikers support the bottleneck, to borrow Goldratt's hiking analogy. In many cases, they are firing them because the trail system was designed for a slower world and management cannot or will not redesign it fast enough.

That is not an AI failure.

It is an operating-model failure revealed by AI.

And I think that is the real story underneath both the layoff panic and the artisanal disappointment. AI is not a magic replacement for engineering judgment. It is not a toy. It is not a button that outputs finished systems. It is a force multiplier for people who can hold intent, verify outcomes, and steer iteratively without losing coherence. In organizations built for that, it creates enormous leverage. In organizations built around handoffs, rituals, and diluted ownership, it mostly reveals the rot faster.

So when I hear that AI is "replacing developers," I do not really hear a labor story. I hear an organizational one.

The companies that win from here will not be the ones that merely buy more AI. They will be the ones that can actually reorganize around it.

They will understand that execution is no longer the scarce resource it used to be.

Coherence is. Verification is. Boundaries are. Intent is.

That is the shift.

And most enterprises are nowhere near ready for it.