This piece sharpens the argument I made in The Local Optimization Trap. It is a direct commentary on The Tech Report's interview "AI productivity bubble: ‘There is a reckoning coming for employers’ | Natasha Bernal" (YouTube), in which Bernal discusses emerging findings highlighted by Harvard Business Review about early AI adoption in the workplace.

First rule of shadow AI: You do not talk about shadow AI.
Second rule of shadow AI: You DO NOT TALK ABOUT SHADOW AI.

Shadow AI isn’t a rebellious subculture. It’s a predictable adaptation to incentives. When leaders treat "AI usage" as a proxy for productivity, they teach employees a simple lesson: don’t bring nuance to a KPI fight. People will route work through AI, quietly launder the output, and keep their heads down—because transparency invites new controls, new expectations, and zero credit for the additional verification burden. The organization gets a prettier dashboard. The business gets slower, noisier, and more fragile.

The uncomfortable truth is that this is not primarily an "AI problem." It’s a management-accounting problem that AI exposes with embarrassing clarity. AI doesn’t just speed things up; it amplifies whatever system you already have. If your system confuses output with outcomes, AI will help you produce more output. If your system rewards visible activity over coherent delivery, AI will help you generate more visible activity. If your system cannot tell the difference between momentum and progress, AI will help you sprint in circles.

The modern workplace has a recurring fantasy: that a new tool will "save time," and the saved time will turn into leisure, creativity, or at least a smaller to-do list. It almost never does. Historically, time-saving tools lower the activation energy of work and expand the feasible scope of what can be demanded. The to-do list doesn’t shrink; it inflates. The day doesn’t get shorter; the expectations get taller. AI is simply the newest and most potent version of that pattern, because it reduces the friction of starting and creates the illusion of continuity: there is always another prompt, another refinement, another "quick" improvement. The natural pauses—the moments where you would stop, think, walk away, reprioritize—get squeezed out.

This is why early adopters often don’t look "freed." They look overloaded. The first cohort to integrate AI deeply tends to discover an ugly dynamic: AI makes more things possible, so they do more things. They take on broader scope, higher complexity, and a more continuous workday. And then something subtle happens: the workload doesn’t contract back to what was manageable before. It never does. The local system adapts around the new pace, and suddenly the baseline has shifted. The employee is now responsible not only for doing the work, but for maintaining an elevated throughput—while also absorbing the new cost that the hype rarely budgets for: verification.

Verification is the hidden tax of generative AI. In many domains, AI doesn’t replace work; it moves it. It converts production work into editorial work, and editorial work is a different cognitive animal. It’s vigilance. It’s skepticism. It’s resolving ambiguity. It’s catching plausible errors. It’s deciding when to trust and when to re-derive. That’s not relaxing. In many cases it’s more tiring than doing the first draft yourself, because you’re constantly fighting the "looks right" effect.
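
To see the tax in numbers, here is a toy break-even model. Every figure in it is an illustrative assumption, not a measurement; the point is that the sign of the "time saved" depends on verification and rework terms that rarely appear on a dashboard.

```python
# Toy break-even model for the verification tax.
# All numbers are illustrative assumptions, not measurements.

draft_minutes_saved = 30   # what AI shaves off producing the first draft
verification_minutes = 20  # vigilance spent on the plausible-looking output
error_rate = 0.15          # assumed chance a "looks right" error survives review
rework_minutes = 90        # assumed cost of catching that error downstream

expected_tax = verification_minutes + error_rate * rework_minutes
net_saved = draft_minutes_saved - expected_tax

print(f"expected verification + rework tax: {expected_tax:.1f} min")  # 33.5
print(f"net time saved per task: {net_saved:.1f} min")                # -3.5
```

With these entirely debatable inputs, the "saved" half hour is already underwater before anyone updates a dashboard.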

And here is where organizations start lying to themselves.

Leaders see a demo and infer a production pipeline. They see speed and infer savings. They hear "hours saved" and assume those hours will be harvested. So they build dashboards. They demand adoption. They tie "AI usage" to performance. They celebrate tool-touch as if tool-touch were delivery. And in doing so, they create the most predictable behavioral response in the world: performative productivity.

People do not behave according to what leaders say they value. They behave according to what leaders measure and reward. If compensation, promotion, or even informal status begins to depend on visible AI usage, people will "use AI" the way they used every previous metric regime: in the easiest way that satisfies the rubric. They’ll run emails through a model. They’ll generate drafts they could have written faster. They’ll add AI steps to work that didn’t need them. They’ll optimize for "AI-ness" in the artifact. Some will do it openly; many will do it quietly. Shadow AI thrives under mandatory AI because it’s safer to comply in appearance than to argue in principle.

This is Goodhart’s Law with a GPU: when a measure becomes a target, it stops being a good measure. "AI adoption" becomes the target; therefore it stops measuring meaningful transformation. The dashboard turns green while delivery quality drifts. Rework increases. Coordination costs balloon. Trust erodes. People learn to treat truth as optional—because truth is rarely rewarded in a KPI culture.
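
The dynamic is crude enough to simulate. The sketch below is a deliberately simple model, and every coefficient in it is an assumption: effort aimed at the dashboard inflates the proxy cheaply, while effort aimed at the work creates value but barely registers on the proxy. Reward the proxy, and effort migrates.

```python
# A deliberately crude Goodhart's Law model. All coefficients are assumptions:
# "proxy" effort inflates the measured score cheaply; "real" effort delivers
# value but only incidentally shows up on the dashboard.

effort_on_proxy = 0.1  # fraction of effort aimed at the metric, not the work

for quarter in range(8):
    dashboard = 50 * effort_on_proxy + 10 * (1 - effort_on_proxy)  # "AI adoption"
    value = 100 * (1 - effort_on_proxy)                            # what ships
    print(f"Q{quarter}: dashboard={dashboard:4.1f}  delivered value={value:5.1f}")
    # Each quarter, people shift effort toward what is measured and rewarded.
    effort_on_proxy = min(0.9, effort_on_proxy + 0.1)
```

The dashboard climbs every quarter; delivered value falls every quarter. No one lied, exactly. Everyone adapted.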

This is also where the "coordination shift" bites hardest. AI cheapens execution. That means execution is no longer the bottleneck you should be optimizing. The bottleneck becomes coherence: priority, intent, decision rights, and a shared definition of "done." Without those, AI doesn’t create throughput; it creates inventory. More tickets in flight. More half-finished work. More parallel initiatives. More "almost ready" artifacts waiting for review, integration, or sign-off. The organization becomes busy in the same way a traffic jam is busy.
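
Little's Law makes the jam quantitative: average cycle time equals work-in-progress divided by throughput. If AI makes starting cheap while review and integration still gate finishing, WIP grows against a fixed throughput, and every item in the system waits longer. A minimal calculation, with illustrative numbers:

```python
# Little's Law: average cycle time = WIP / throughput.
# The numbers are illustrative; the law holds for any stable system.

throughput_per_week = 5  # review/integration caps *finishing*, not starting

for wip in (10, 20, 40, 80):
    cycle_time = wip / throughput_per_week
    print(f"WIP={wip:3d} items -> avg cycle time = {cycle_time:4.1f} weeks")
```

Doubling the ease of starting doubles nothing downstream except the queue.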

If you’ve ever studied Theory of Constraints, this should feel painfully familiar. Local optimization increases global dysfunction. Accelerating non-bottlenecks increases work-in-progress and hides the true constraint. AI is a turbocharger bolted onto that exact failure mode. It makes it easier to start more things, easier to generate intermediate artifacts, easier to flood the system with "progress." It does not magically create decision clarity. It does not choose. It does not reconcile conflicting priorities. It does not protect focus. It does not reduce organizational entropy. In fact, in weak governance environments, it amplifies entropy—because it makes it cheap to produce convincing noise.

At this point, many leaders reach for surveillance. They try to "measure output" more tightly. They try to detect AI usage. They try to score "successful use." They introduce another layer of algorithmic judgment and call it objectivity. But an employee arguing with an algorithm about whether they "used AI correctly" is not a productivity strategy. It’s a morale-destruction strategy. And it invites a second-order gaming loop: people stop optimizing for the business and start optimizing for the classifier.

The endgame is grimly straightforward. If a smaller group can use AI to operate at sustained 100% capacity, organizations will try to do more with fewer people. That can look like efficiency until the human system breaks: burnout, attrition, quality failures, sick leave, and a growing tail of hidden operational debt. The first wave is already visible in many workplaces: the most conscientious high performers are the first to absorb the verification load, the first to extend into evenings, and the first to discover that "possible" is not the same as "sustainable."
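
Queueing theory says the quiet part out loud: sustained 100% utilization is not efficiency, it is divergence. In the textbook M/M/1 model, expected turnaround is 1/(μ − λ), where λ is the arrival rate and μ the service rate; as utilization λ/μ approaches 1, delay grows without bound. The rates below are assumptions, but the blow-up near full capacity is the general result.

```python
# M/M/1 queue: expected time in system W = 1 / (mu - lam),
# with utilization rho = lam / mu. Rates are illustrative.

mu = 1.0  # tasks completed per day when the person is actually free

for rho in (0.5, 0.7, 0.9, 0.95, 0.99):
    lam = rho * mu
    turnaround = 1 / (mu - lam)
    print(f"utilization={rho:4.0%} -> avg turnaround = {turnaround:6.1f} days")
```

Between 90% and 99% utilization, turnaround grows tenfold. "Doing more with fewer people" is a plan to live on the vertical part of that curve.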

And then comes the reckoning: not a philosophical one, but an accounting one. When the organization "wins" the dashboard—adoption up, output up, cycle time superficially down—but loses the business through churn, defects, misalignment, and talent loss, leaders will ask what went wrong. They will blame the tools. They will blame the workers. They will blame the pace of change. They will say the technology "wasn’t ready." They will rarely blame the only thing that actually deserves it: the decision to treat metrics as reality.

So what do you do instead?

You stop treating AI as the strategy. AI is not the strategy. AI is a force multiplier. The strategy is governance: clear decision rights, bounded work intake, explicit tradeoffs, and a definition of done that includes verification and integration. If you want real productivity, you must make it impossible for "more output" to masquerade as "more value." You must limit work-in-progress the way serious systems do, because unlimited WIP is how you manufacture chaos. You must design for coherence, because coherence is now the scarce resource.
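
Bounded work intake is not abstract. Here is a minimal sketch of what it can mean mechanically; the class name, the cap, and the item names are all illustrative, not a prescription. The two rules that matter: starting is gated by finishing, and "done" includes verification.

```python
# Minimal WIP-limited intake sketch. Names and the cap are illustrative.
# Rule 1: nothing new starts while the system is at its WIP limit.
# Rule 2: "done" means verified; raw output does not free capacity.

class WorkIntake:
    def __init__(self, wip_limit: int):
        self.wip_limit = wip_limit
        self.in_flight: set[str] = set()

    def start(self, item: str) -> bool:
        if len(self.in_flight) >= self.wip_limit:
            return False  # refuse intake: finish something first
        self.in_flight.add(item)
        return True

    def ship(self, item: str, verified: bool) -> bool:
        if not verified or item not in self.in_flight:
            return False  # unverified output stays in flight
        self.in_flight.remove(item)
        return True

board = WorkIntake(wip_limit=2)
assert board.start("feature-a")
assert board.start("feature-b")
assert not board.start("feature-c")                 # intake is bounded
assert not board.ship("feature-a", verified=False)  # output alone isn't done
assert board.ship("feature-a", verified=True)
assert board.start("feature-c")                     # delivery frees capacity
```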

And if you’re an individual contributor operating inside a KPI fog, recognize the uncomfortable incentives. Shadow AI exists because people are trying to survive inside measurement regimes that confuse theater with outcomes. You can’t fix that alone. What you can do is insist—quietly, professionally, relentlessly—on work that ships, not work that scores.

That’s the core point. AI will not save you from bad management. It will not save you from metric traps. It will not save you from organizations that treat human cognition as an infinite resource. In those environments, AI doesn’t reduce work. It intensifies it.

And that is exactly how you win the dashboard and lose the business.