This post is a commentary on the February 9, 2026 Harvard Business Review article, AI Doesn’t Reduce Work—It Intensifies It, which reports on an eight-month field study of AI adoption inside a U.S. technology company. The researchers found that instead of shrinking workloads, AI expanded task scope, increased multitasking, accelerated pace, and contributed to burnout. The article frames this as a complex and somewhat surprising outcome of generative AI. It shouldn’t be surprising. What the study captures, without explicitly naming it, is Goodhart’s Law...

"When a measure becomes a target, it ceases to be a good measure"

...and the classic Theory of Constraints local optimization trap playing out at machine speed...

"An hour lost at a bottleneck is an hour lost out of the entire system. An hour saved at a non-bottleneck is worthless."

Old System Laws, New Acceleration

There is a pattern emerging in AI-augmented organizations that feels new but is, in fact, deeply familiar. Productivity spikes. Output increases. Backlogs shrink—at least temporarily. Individuals appear dramatically more capable. It feels like a breakthrough. And yet, value does not increase proportionally. Quality wobbles. Coordination becomes heavier. People feel busier, not freer. Fatigue rises.

This is not an AI problem. It is the local optimization trap, reintroduced under conditions of extreme execution acceleration. The uncomfortable truth is that the principles articulated decades ago by the Theory of Constraints, the Toyota Production System, the Agile Manifesto, and Extreme Programming still apply. In fact, they apply more rigorously than ever.

When Execution Becomes Cheap

Generative AI collapses the cost of starting work. It reduces the friction of drafting, exploring, iterating, and switching contexts. The blank page is no longer intimidating. The first version is no longer expensive. That shift changes behavior immediately.

People start more tasks. They attempt adjacent responsibilities. They revive previously deferred work. They parallelize threads because it feels possible. They generate more alternatives because it is cheap to do so. Nothing in this process requires top-down pressure. The incentive is embedded in the capability itself. When action becomes easy, action increases.

But systems do not become more efficient simply because more work is initiated. Efficiency is always relative to the constraint. If the bottleneck in your system is integration, decision quality, stakeholder alignment, domain clarity, or verification, then accelerating artifact production upstream does not improve throughput of value. It increases load on the real constraint.

That is textbook Theory of Constraints. AI does not remove the bottleneck. It makes it easier to overwhelm it.
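
To make this concrete, here is a minimal sketch with entirely illustrative numbers (the stages and rates are assumptions for the example, not figures from the study). Picture a two-stage pipeline in which AI-accelerated drafting feeds a fixed-capacity verification stage. Doubling the drafting rate does not raise throughput at all; it only grows the queue at the constraint.

```python
# Minimal two-stage pipeline sketch (all rates are illustrative assumptions).
# Stage 1: drafting, which AI accelerates. Stage 2: verification, the constraint.

def simulate(draft_rate, verify_rate, hours=100):
    """Return (items_completed, items_queued) after `hours` of work.

    Each hour, drafting adds `draft_rate` items to the verification queue,
    and verification finishes at most `verify_rate` items from it.
    """
    queue = completed = 0.0
    for _ in range(hours):
        queue += draft_rate                 # upstream pushes work in
        done = min(queue, verify_rate)      # the constraint caps what gets finished
        queue -= done
        completed += done
    return completed, queue

# Before AI: drafting 4 items/hour; verification absorbs up to 5/hour.
print(simulate(draft_rate=4, verify_rate=5))   # (400.0, 0.0)   -> throughput 4/hour

# After AI: drafting doubles; verification capacity is unchanged.
print(simulate(draft_rate=8, verify_rate=5))   # (500.0, 300.0) -> throughput capped at 5/hour,
                                               #    plus 300 items piled up at the bottleneck
```

Throughput rises only until it hits the verification rate; everything beyond that accumulates as inventory at the constraint, which is exactly the load increase described above.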

Why It Looks Like Success

The reason this failure mode is so seductive is that it produces visible gains. Velocity metrics improve. Ticket throughput rises. Response times drop. More pull requests are merged. More drafts exist. More experiments are attempted. The dashboards glow. But these are output measures. They are not outcome measures.

The gap between output and outcome is where Goodhart’s Law operates. When a proxy becomes easier to optimize, it detaches from what it was meant to represent. AI dramatically lowers the cost of optimizing proxies.

  • If responsiveness is rewarded, AI increases responsiveness.
  • If artifact count is rewarded, AI increases artifact count.
  • If visible activity is rewarded, AI increases visible activity.

None of these guarantees improved outcomes.

What often follows is subtle but predictable: increased review burden, informal correction work, cross-domain entanglement, rising context switching, and reduced recovery time. The system becomes busier without becoming more effective. This is not efficiency. It is accelerated local optimization.
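
A toy model makes the divergence visible. Assume, purely for illustration (none of these numbers come from the article), that review attention is a fixed budget, that a fraction of generated artifacts contain defects, and that each defect slipping past review burns future capacity on rework. The proxy, artifact count, keeps climbing while the net outcome falls:

```python
# Toy Goodhart sketch: the proxy is artifact count, the outcome is validated
# work net of rework. Every number below is an illustrative assumption.

def net_outcome(artifacts, review_hours=40.0, defect_rate=0.3,
                full_review_hours=0.5, rework_cost=3.0):
    """Net validated outcome once defects escape a fixed review budget."""
    attention = review_hours / artifacts                 # review hours per artifact
    catch_rate = min(1.0, attention / full_review_hours) # scrutiny degrades with volume
    escaped = artifacts * defect_rate * (1 - catch_rate)
    # Each escaped defect consumes future capacity worth `rework_cost` artifacts.
    return (artifacts - escaped) - escaped * rework_cost

for n in (60, 200, 400):
    print(n, round(net_outcome(n), 1))
# 60  60.0  -> every artifact gets a full review
# 200 56.0  -> the proxy tripled; the net outcome fell
# 400 16.0  -> the proxy is soaring while the system drowns in rework
```

In this model the dashboard tracking artifact count improves monotonically; the system it is supposed to represent does not.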

The Coordination Shift

The central shift of the AI era is not speed. It is the migration of the constraint. Execution is no longer the dominant cost. The dominant costs are now...

  • Coherence
  • Verification
  • Alignment
  • Integration
  • Sustainable pace

When execution becomes cheap, organizations must elevate their governance layer. If they do not, the system optimizes the easiest layer available: artifact production.

That is where many teams are currently stuck. They are running a 2026 capability engine with a 2019 measurement system. The result is predictable: throughput of work units increases, but throughput of validated outcomes does not scale proportionally.

Agility was never about speed alone. It was about reducing cycle time of validated learning. AI increases the rate at which options can be generated; it does not increase the rate at which meaningful decisions can be made. If decision quality and integration capacity are not strengthened, acceleration simply compresses error cycles and amplifies rework.

Continuity and Evolution

The foundational lessons still stand:

  • Theory of Constraints reminds us to optimize the bottleneck or optimize nothing.
  • The Toyota Production System teaches that limiting work in progress protects flow and surfaces problems early (quantified in the sketch after this list).
  • The Agile Manifesto prioritizes working outcomes and customer collaboration over artifact accumulation.
  • Extreme Programming emphasizes tight feedback loops and sustainable pace.
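
The work-in-progress point in particular can be quantified. Little’s Law, a standard queueing result (not one the HBR article cites), says that average cycle time equals work in progress divided by throughput. Because throughput is pinned at the constraint, every extra in-flight item that cheap AI generation encourages directly lengthens the feedback loop that Extreme Programming tells us to keep tight. A quick sketch, reusing the illustrative constraint rate from the earlier pipeline example:

```python
# Little's Law: average cycle time = WIP / throughput.
# The 5 items/hour figure reuses the illustrative constraint rate from the
# earlier pipeline sketch; it is an assumption, not data from the study.

def avg_cycle_time_hours(wip, throughput_per_hour=5):
    """Average time an item spends in the system, by Little's Law."""
    return wip / throughput_per_hour

print(avg_cycle_time_hours(10))    # 2.0 hours  -> feedback arrives within the day
print(avg_cycle_time_hours(300))   # 60.0 hours -> about a week and a half of workdays
```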

AI does not invalidate these principles; it makes neglecting them more expensive. But the AI era does require extensions.

This is where frameworks like OBAF (Outcome-Based Agile Framework) and the Centaur Manifest come in, not as replacements for older thinking but as evolutions of it.

OBAF sharpens the definition of agility for an AI-augmented environment. When generation becomes trivial, outcome definition must become explicit. OBAF formalizes what many teams only implied: that work should be structured around validated outcomes, measurable impact, and constraint awareness rather than artifact throughput. It operationalizes the idea that agility without outcome coupling degenerates into motion.

The Centaur Manifest extends the systems thinking tradition into human–AI collaboration. It reframes the role of the professional from primary executor to orchestrator, verifier, and governor. In a world where AI generates options cheaply, the human role shifts toward intent clarity, boundary setting, verification discipline, and integration stewardship.

Together, these adaptations recognize a structural truth:

When machines accelerate execution, humans must elevate coordination.

That is not a philosophical preference. It is a systemic necessity.

The Hard Conclusion

AI is a throughput amplifier. It amplifies whatever your system structurally rewards. If you reward visible output, AI will amplify visible output. If you reward validated outcomes, AI will amplify validated outcomes. The tool is neutral with respect to system design. It will intensify whatever incentive landscape it enters.

The local optimization trap did not disappear in the AI era. It simply became easier to enter and harder to notice. Organizations that recognize this will redesign their metrics, governance, and definitions of success around outcomes and constraint management. Organizations that do not will experience impressive bursts of activity followed by fatigue, quality drift, and silent coordination collapse.

The laws of systems did not change. We just made the engine faster.