In November 2025, I published 10×: The Coordination Shift -- Software Engineering in the Centaur Era. The essay argued that AI does not merely make software work faster. It changes the unit economics of execution. When drafting, scaffolding, refactoring, testing, exploration, and broad solution search become cheap, the bottleneck moves. The scarce work is no longer only implementation. It becomes intent, judgment, verification, integration, governance, ownership, and coordination.
That was the thesis.
Microsoft’s 2026 Work Trend Index now gives that thesis a large corporate measurement surface. The report does not use my vocabulary. It does not talk about the coordination shift. It does not name centaur units as an organizational form. It does, however, measure the same pressure: people are learning to operate with AI faster than their organizations can absorb what they can now do.
Microsoft calls part of this state "blocked agency."
The phrase is useful because it names the transitional pain. Workers have developed real AI capability, yet sit inside organizations that cannot absorb it. They can move faster. They can explore more options. They can produce stronger first drafts, prototypes, summaries, analyses, and implementation paths. They can collapse distance between idea and execution.
The organization around them still expects work to move through the old channels.
That is where the coordination shift becomes visible.
The usual interpretation is soft: companies need better AI training, clearer guidance, more leadership alignment, better incentives, and more psychological safety. All of that may be true. None of it is enough.
Blocked agency is not mainly a training problem. It is not mainly a tooling problem. It is not even mainly a culture problem.
It is an organizational model problem.
The coordination shift, measured from the outside
The coordination shift says that AI makes execution cheaper, faster, and more abundant. That sounds like a productivity story. It is not.
Cheap execution creates a new problem: the organization must decide what to do with more options, more drafts, more prototypes, more partial answers, more local experiments, and more machine-generated work than its old coordination system was designed to absorb.
In the old world, execution was expensive enough to act as a throttle. Work moved slowly because people could only produce so much. Coordination was still hard, but the volume of produced work was naturally limited.
AI weakens that throttle.
Once execution accelerates, every downstream weakness becomes more visible. Unclear ownership becomes more costly. Slow decision-making becomes more obvious. Review queues grow. Architecture boundaries matter more. Governance by meeting becomes absurd. Functional handoffs stop looking like coordination and start looking like drag.
That is the coordination shift.
Microsoft’s report says much the same thing in enterprise language. It frames the AI era not as a tool-adoption phase, but as the emergence of a new operating model. It describes operating models through workflows, roles, decision rights, governance, and the everyday architecture of execution. It says work is increasingly organized across people, agents, and the systems connecting them, rather than only around people, processes, and applications.
That is not a side observation. It is the report’s central implication.
Microsoft has measured the pain of the coordination shift without naming it as such.
The report’s introduction is even more direct: people are using AI and agents to expand what they can do, while agents take on more execution. The problem is that most organizations are not keeping up. People are often ready. The systems around them are not. The constraint is the gap between what employees can now do and what organizations are built to support.
That is the coordination shift in one sentence.
When ownership looks like trespass
AI changes the shape of capable work.
A person who knows how to work well with AI does not merely "use a tool." They begin to operate differently. They can move from intent to outline, from outline to prototype, from prototype to test, from test to evidence, from evidence to revision, from revision to delivery. The loop tightens.
In a vertically responsible unit, this is simply ownership.
In a functional or matrix organization, the same behavior can look like trespass.
The person has crossed a boundary. They did not wait for the right function. They did not route the work through the expected queue. They did not respect the handoff. They did not keep discovery separate from delivery. They did not remain inside the role as defined by the model.
The problem is not that they used AI. The problem is that AI made visible a form of ownership the organization was not built to permit.
Functional and matrix organizations often say they want initiative, accountability, and innovation. In practice, they frequently reward lane discipline. A person may be praised for being proactive in the abstract, then punished for acting across the boundaries required to make proactivity real.
AI makes this contradiction harder to hide.
When execution was slow and expensive, the old model could survive on handoffs, queues, meetings, reviews, and translation layers. Those mechanisms were costly, but familiar. They also preserved authority. Each function owned its slice. Each manager defended a boundary. Each decision had somewhere to wait.
When execution becomes cheap, those waiting points change character. They stop looking like coordination and start looking like drag.
Microsoft names the symptom
Microsoft does not frame blocked agency as a critique of functional or matrix organizations. It uses safer enterprise language: operating models, leadership, culture, manager support, talent practices, governance, workflows, decision rights, and learning systems.
That does not make the diagnosis soft.
The report says the problem is not only whether people have the right skills. It is whether the organization is built to unlock them.
That sentence matters because it moves the argument out of individual competence and into organizational compatibility. If people are ready, yet the system around them is not, the issue is not adoption in the usual sense. The issue is that the operating model cannot absorb the capability now emerging inside it.
It is tempting to treat this as a maturity gap. The company needs enablement. It needs adoption programs. It needs champions. It needs better communication.
Sometimes that is true. More often, the language softens the diagnosis.
A company that blocks agency is not merely immature. It may be built around the wrong operating model. Its structure may assume that work should be divided, routed, and controlled in ways that AI now makes increasingly expensive. Its managers may be incentivized to protect function-level performance rather than outcome-level progress. Its governance may rely on approval rather than executable guardrails. Its transformation program may be real in vocabulary and fake in topology.
That is the depressing part.
Most organizations already know how to talk about transformation. Boards talk about it. CEOs talk about it. Consultants sell it. HR departments brand it. Technology functions package it into programs. The language is everywhere.
Real transformation is rarer because it attacks the settlement underneath the language.
It changes who owns outcomes. It changes who can say no. It changes how budgets move. It changes what managers are for. It changes what governance means. It changes the difference between risk management and permission management. It exposes the cost of coordination roles that do not improve the work.
That is why so many reorganizations become lipstick on a pig. The labels change. The topology remains.
Microsoft’s frontier worker and the centaur unit
Microsoft does not use the term centaur unit. The report is not making the same structural claim as the coordination-shift thesis.
It does, however, describe a recognizably centaur-like mode of work.
The report says effective AI users define intent, set the quality bar, design how work gets done across humans and AI, and remain responsible for how outputs are used. Routine execution, research, and synthesis are delegated to agents; the human role moves toward direction, judgment, and responsibility.
Microsoft’s "Frontier Professionals" are also described as users who apply agents to complex or multi-step work, routinely redesign workflows around what AI can do well, and participate in repeatable AI-enabled practices that can scale beyond individual use.
The coordination-shift thesis names this pattern more directly.
In 10×: The Coordination Shift, the centaur is described as a hybrid of human judgment and machine-driven execution, emerging as a new unit of production. The essay assumes a model where the human provides framing, judgment, and accountability, while AI systems provide search, execution, and rapid exploration. It then argues that a senior engineer augmented by such tools can often execute end-to-end work -- architecture, implementation, testing, and operations -- that previously required a coordinated team.
The later Centaur Manifest makes the unit-scale version explicit: a centaur unit is a small team, often two to four humans, operating with AI augmentation. The unit remains centaur so long as intent is explicit, constraints are enforced, evidence governs direction, and AI remains an accelerator of execution rather than an authority. Human judgment remains responsible for intent, constraints, and proof.
That is the bridge.
Microsoft measures the worker pattern. The coordination-shift thesis names the production unit.
One is cautious enterprise analysis. The other is a stronger operating-model claim. They should not be collapsed into each other, but they clearly rhyme. Microsoft says advanced workers are learning to direct, evaluate, redesign, and govern human-agent workflows. The centaur-unit model says that once this becomes practical, the unit of effective action changes.
That difference matters because the organizational consequence is not merely better productivity. It is a conflict between two models of work.
The old model protects itself
Organizations do not only coordinate work. They defend a theory of work.
A functional organization assumes that expertise should be grouped by discipline. A matrix organization assumes that work can be divided across competing lines of authority and reconciled through management. Both models were built for a world where specialization, control, and coordination overhead seemed like reasonable prices to pay for scale.
AI pressures that settlement.
A capable centaur worker, or a small centaur unit, can now perform much more of the loop locally. Not all of it. Not without constraints. Not without verification. Not without human judgment. Enough of it, however, changes the operating assumption.
The relevant unit of work becomes less "my assigned task" and more "the outcome we are trying to produce."
That is a different social contract.
In the old model, punishment often follows boundary violation. You moved too far. You bypassed someone. You created something another function believes it should own. You made a decision that belonged elsewhere. You exposed a dependency that was politically convenient to leave vague.
In the centaur model, correction follows another logic. You failed the intent. You violated a constraint. You weakened the proof. You increased operational risk. You created incoherence. You did not learn from the signal.
Those are not the same thing.
One accountability regime protects the org chart. The other protects the outcome.
That distinction matters because many organizations will try to adopt AI without changing which accountability regime they run. They will ask people to become more capable while preserving the model that contains capability. They will ask for speed while preserving permission chains. They will ask for ownership while preserving horizontal fragmentation. They will ask for transformation while keeping the old theory of work intact.
The result is predictable: frontier workers become blocked workers.
AI speed with matrix control is incoherent
Many companies will try to have both worlds.
They will want AI speed and matrix control. They will want frontier workers and functional lanes. They will want vertical execution and horizontal accountability. They will want local initiative and centralized permission. They will want agents to accelerate work while preserving the same handoffs, committees, review boards, and functional ownership boundaries that made the work slow.
This combination is incoherent.
The point is not that every organization must abolish every function or eliminate every specialist group. Large systems need expertise. They need standards. They need platforms. They need governance. They need shared infrastructure. They need legal, security, finance, and operational competence.
The question is where ownership lives.
If ownership remains fragmented across functions, the centaur pattern has nowhere to land. AI will create more drafts, more options, more prototypes, more partial analyses, and more local experiments. The system will then struggle to absorb them. Work-in-progress expands. Review queues grow. Decision latency becomes more visible. Integration becomes harder. Managers ask why productivity gains do not appear at the enterprise level.
The answer is simple: the organization accelerated production without changing absorption.
That is the coordination shift. The bottleneck moved. The organization did not.
What real compatibility looks like
An AI-compatible organization does not merely permit AI use. It changes the structure around capable work.
It gives small units real responsibility for outcomes, not just tasks. It makes intent explicit enough that people and agents can act without constant translation. It treats constraints as first-class, not as late-stage compliance theater. It pushes governance into tests, policies, observability, and guardrails. It reduces handoffs where ownership should be vertical. It keeps batch size small. It makes learning visible. It judges progress by evidence, not activity.
This is not a call for chaos. It is the opposite.
The old organization often confuses control with safety. It assumes more approval creates more responsibility. In practice, too much approval can destroy responsibility by spreading it across so many boundaries that no one truly owns the result.
A centaur-compatible organization needs strong constraints, clear accountability, and fast feedback. It needs fewer vague permissions and more explicit proof. It needs less theater around alignment and more durable artifacts of intent, decision, verification, and learning.
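What "governance as executable guardrails" means can be made concrete with a small sketch. The following Python is an illustration only -- the rule names, thresholds, and fields are hypothetical examples of policy-as-code, not anything prescribed by the report or the essay:

```python
# A minimal policy-as-code sketch: governance expressed as explicit,
# fast, testable checks that run in CI, rather than as an approval chain.
# All rule names and thresholds below are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class ChangeSet:
    """A unit of work a small team wants to ship."""
    lines_changed: int
    test_coverage: float           # 0.0-1.0 on the touched code
    touches_protected_paths: bool  # e.g. billing, auth
    has_rollback_plan: bool

@dataclass
class Verdict:
    allowed: bool
    violations: list = field(default_factory=list)

def evaluate(change: ChangeSet) -> Verdict:
    """Apply constraints as code: each rule is explicit and enforceable."""
    violations = []
    if change.lines_changed > 500:
        violations.append("batch too large: keep changes under 500 lines")
    if change.test_coverage < 0.8:
        violations.append("insufficient proof: coverage below 0.8 on touched code")
    if change.touches_protected_paths and not change.has_rollback_plan:
        violations.append("protected path touched without a rollback plan")
    return Verdict(allowed=not violations, violations=violations)

# A small change with evidence passes without waiting on anyone.
ok = evaluate(ChangeSet(120, 0.92, False, False))
print(ok.allowed)  # True

# A risky change is blocked by the guardrail, not by a committee.
blocked = evaluate(ChangeSet(800, 0.5, True, False))
print(blocked.allowed)          # False
print(len(blocked.violations))  # 3
```

The point of the sketch is the shape, not the rules: the constraint is written down, it runs in seconds, and it produces a verdict plus reasons. Responsibility stays with the person who ships, not with a queue of approvers.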
Microsoft’s report points in the same direction when it describes how Frontier Firms treat agent signals. As agents take on more work, they generate evidence about what worked, what failed, and where outcomes drifted. In many organizations, those signals remain local or spread slowly. Frontier Firms capture them, encode them into shared routines, and preserve accountability and control.
That is a crucial distinction.
The frontier is not merely a person using AI well. It is the organization learning from AI-mediated work fast enough to change how future work is done.
The frontier worker does not need unlimited freedom. They need a model in which ownership is legitimate.
A note on trajectory
This last part is not evidence in the formal sense. It is a statement about pattern recognition.
I saw the potential in DevOps early. I argued, inside a consultancy environment, that this was where the company should build a profile. A year or so later, a DevOps consultancy boom appeared in Gothenburg. The prediction was not precise in the way a forecast table is precise. It was directional, which is how useful predictions often arrive: clear enough to act on, vague enough to be dismissed by people waiting for proof.
In September 2025 (more than half a year ago), I started drafting 10×: The Coordination Shift. The essay was released in November, after more iteration than originally planned. It was not pulled out of nowhere. The evidence was already visible in fragments: DevOps, agile, platform thinking, Team Topologies, Lean Startup, outcome-based work, mission command (or Auftragstaktik, the old military logic of giving capable units clear intent rather than constant instruction).
My own bias was already in that direction. I have long preferred vertically responsible teams over horizontal coordination theater. I have long believed that real ownership requires intent, constraints, autonomy, proof, and consequence. AI simply made the implication impossible to ignore.
At first, my thesis was even more severe than the published version. I suspected that team frameworks themselves -- Scrum, Kanban, even my own OBAF -- were being deprecated by the new production unit. If one capable operator with AI agents, or a very small human unit with agents, can own the full stack from intent to verification, then much of the old team-coordination apparatus becomes unnecessary. Coordination inside the unit is compressed. What remains is not ceremony, but contracts: explicit boundaries between centaur units, integration points, guardrails, and proof obligations.
That is still where I think this goes.
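The "contracts, not ceremony" idea can also be sketched in code. The example below is hypothetical -- the unit names, proof obligations, and latency budget are invented for illustration -- but it shows the shape of an inter-unit contract: an explicit, machine-checkable agreement at an integration point, instead of a standing coordination meeting:

```python
# A hypothetical sketch of a contract between two small units: the
# boundary is an explicit agreement with guardrails and proof
# obligations. All names and numbers below are illustrative only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    """What one unit promises to another at an integration point."""
    provider: str
    consumer: str
    interface_version: str
    latency_budget_ms: int   # operational guardrail
    required_proofs: tuple   # evidence the provider must publish

def is_satisfied(contract: Contract, published_proofs: set,
                 observed_latency_ms: int) -> bool:
    """The contract holds when the promised evidence exists and the
    operational guardrail is met -- no meeting required."""
    proofs_ok = all(p in published_proofs for p in contract.required_proofs)
    latency_ok = observed_latency_ms <= contract.latency_budget_ms
    return proofs_ok and latency_ok

billing_api = Contract(
    provider="payments-unit",
    consumer="storefront-unit",
    interface_version="2.1",
    latency_budget_ms=200,
    required_proofs=("contract-tests-green", "load-test-report"),
)

# Evidence published and guardrail met: the boundary holds.
print(is_satisfied(billing_api,
                   {"contract-tests-green", "load-test-report"}, 150))  # True

# Missing proof: the boundary is broken, and the contract says so.
print(is_satisfied(billing_api, {"contract-tests-green"}, 150))         # False
```

Coordination between units then becomes a question of whether the contract is satisfied, which a machine can answer continuously, rather than a question of who attended which meeting.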
Microsoft’s report does not prove every part of the coordination-shift thesis. It does something more useful for the present moment: it shows where we are in the story. It measures the transition state. It shows workers learning to operate in the new mode while organizations remain built for the old one. It shows blocked agency as a real condition, not merely a private frustration.
That makes the direction much harder to dismiss.
The next serious conversation should be about vertically integrated small units. Not AI champions. Not prompt training. Not another transformation office. Small units with real outcome ownership, explicit constraints, automated proof, and clean contracts to other units.
That is the organizational shape AI keeps pointing toward.
I have been early on this kind of shift before. I do not say that as proof. I say it because some patterns become visible before they become respectable. This one is now becoming visible in the data.
The organizations that understand it early will not merely adapt to AI. They will prepare for the phase after adoption: the phase where the operating model itself becomes the advantage.
The hard conclusion
Microsoft’s report is useful because it gives corporate language to a real transition. It says workers are ready, organizations are not. It identifies blocked agency as a measurable state.
The coordination-shift interpretation is harsher.
Workers are not simply ahead of their organizations. They are often ahead in a way the organization is structurally designed to resist.
That is why blocked agency is not an adoption problem. Adoption language implies the organization can remain basically the same while people learn new tools. Blocked agency says something more serious: the organization’s model of work is incompatible with the capability now emerging inside it.
AI did not create that incompatibility. It exposed it.
The firms that respond with training alone will get better tool users trapped in the same lanes. The firms that respond with transformation theater will get new vocabulary wrapped around old bottlenecks. The firms that respond structurally will redesign around the new unit of production: small, vertically responsible human-AI units governed by intent, constraints, proof, and learning.
That is where the frontier is.
Not in the tool, but in the organizational model that can absorb what the tool makes possible.