A great deal of public AI discourse still begins in the same place: jobs. Will junior developers disappear? Will designers be replaced? Should young people avoid knowledge work and learn a trade instead? That conversation is still everywhere, and it matters because it shapes how people interpret what is happening around them. But it is no longer the world I am trying to understand.

At this point, the more useful question is not whether AI can do meaningful work. It can. Nor is it whether some categories of task are under pressure. They are. The more useful question is why so many people still reach for labor-market anxiety first, even when the deeper change is happening elsewhere.

My hypothesis is that this happens because jobs are the easiest visible surface of a much larger structural shift. It is easier to ask whether junior developers will disappear than to ask what happens when a technology that can move at machine speed lands inside organizations that still think at human-bureaucratic speed. It is easier to speculate about replacement than to confront the harder possibility: that the models may work far better than many people think, while the institutions using them remain organized in ways that prevent real transformation.

That is where the real story begins.

The Chasm

For most of my career, I have been interested in automation in the deeper sense: not prettier interfaces, not a faster way to produce artifacts for people to click on, but systems that actually move work forward with minimal human intervention. Real automation. The kind that changes the shape of a value chain. The kind that removes waiting, handoffs, repetition, and theater. The kind that makes the system itself behave differently.

From that perspective, much of what is currently described as "AI transformation" looks surprisingly shallow. It is not fake. It can be useful. It can save time. But it often sits on top of the same old bottlenecks. The browser is still there. The approval chain is still there. The meeting culture is still there. The domain confusion is still there. The organization still has no idea where value actually gets created. The only difference is that now it can generate inventory much faster.

And inventory is not flow.

That distinction matters. Faster execution by itself does not create value. It creates output. If that output cannot be validated quickly against reality, it becomes stock in a warehouse. More code waiting to be integrated. More prototypes waiting to be reviewed. More decks waiting to be approved. More plausible-looking systems waiting for someone, somewhere, to decide whether any of it matters. A company can drown in generated potential while still moving no faster where it counts.
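The arithmetic behind this is simple enough to sketch. Here is a toy model (all rates are hypothetical, chosen only for illustration) of what happens when generation accelerates while validation capacity stays fixed:

```python
# Toy model of "inventory is not flow": when generation speeds up but
# validation capacity stays fixed, work-in-progress grows without bound
# while validated throughput does not improve at all.

def simulate(gen_per_week: int, validate_per_week: int, weeks: int):
    """Return (backlog, validated) after `weeks` of steady work."""
    backlog = 0      # artifacts generated but not yet validated
    validated = 0    # artifacts that actually cleared validation
    for _ in range(weeks):
        backlog += gen_per_week
        done = min(backlog, validate_per_week)
        backlog -= done
        validated += done
    return backlog, validated

# Before AI: generation roughly matches validation capacity.
print(simulate(gen_per_week=5, validate_per_week=5, weeks=20))   # (0, 100)

# After AI: generation triples, validation unchanged. Validated
# throughput is identical; only the warehouse grows.
print(simulate(gen_per_week=15, validate_per_week=5, weeks=20))  # (200, 100)
```

The second run ships exactly as much validated work as the first. Everything the acceleration bought sits in the backlog.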

This is why I find many mainstream AI discussions unsatisfying. They are too focused on substitution and not focused enough on system design. They ask whether AI replaces junior tasks, or junior jobs, or creative workers, or research assistants. They do not ask what happens to an organization when execution accelerates while coordination, validation, and decision-making remain unchanged. They do not ask whether the company knows how to pivot once reality pushes back. They do not ask whether the system is designed for flow or for local optimization. They do not ask whether the bottleneck has been identified correctly at all.

That omission is not trivial. It is the whole game.

The deepest risk in the AI era is not that the models fail. It is that they succeed technically while organizations fail structurally.

A great many firms will likely adopt AI in exactly the wrong way. They will wrap it around existing process. They will use it to accelerate local tasks without redesigning end-to-end work. They will produce more human-facing UI instead of more machine-native execution. They will celebrate faster prototyping while leaving the real constraints untouched. They will say they are automating, when in fact they are only making the front-end of the same old system look more impressive.

The result will be familiar to anyone who has worked in a badly designed organization. Two weeks to build something that could ship. Eighteen weeks waiting for decisions. Endless semantic discussions. Meetings about alignment. Over-complication mistaken for seriousness. A system that can produce artifacts quickly but cannot turn those artifacts into validated movement.

AI does not solve that problem. It magnifies it.

If execution gets cheaper while judgment stays scarce, the relative cost of bad judgment rises. If generation becomes effortless while validation remains slow, unvalidated output multiplies. If engineers can build ten times more while the organization still cannot decide, prioritize, or absorb change, then the system becomes more chaotic, not more productive. It starts to feel as though everything is happening faster even though nothing important is actually moving.

This is why I am skeptical of the current managerial fantasy that value now materializes from prompts. It is a category error. A prompt can produce an artifact. It cannot, by itself, produce validated value inside a messy sociotechnical system. It cannot resolve conflicting incentives. It cannot repair bad topology. It cannot make a company suddenly understand its own bottlenecks. It cannot force an organization to confront the fact that most of its delay was never in code generation to begin with.

That last point is especially important in software.

There is a great deal of talk right now about AI replacing developers, or junior engineers, or the need to learn programming. I think this is badly framed. What is disappearing first are not developers but certain classes of development task: scaffolding, templating, boilerplate, routine synthesis, glue work that was already highly legible and well represented in public corpora. Junior tasks are at risk far sooner than junior engineers. Those are not the same thing.

Senior Advice To The Junior Engineer

A junior engineer is not defined by a frozen list of low-level tasks. A junior engineer is a learning trajectory. A person with appetite, curiosity, and the willingness to improve can move toward the new bottlenecks. The old apprenticeship path may be changing, but that does not imply that technical careers are becoming a bad bet. Quite the opposite. If code generation is no longer the primary bottleneck, then the growth path shifts toward what still resists automation: backend integration, validation, correctness, narrow end-to-end ownership, and eventually stronger judgment about architecture and constraints.

From a senior engineer’s perspective, that distinction matters a great deal. The old path into engineering often began with boilerplate, templating, glue work, and other forms of low-risk repetition. Much of that is now under pressure. That does not mean the engineer is under pressure in the same way. It means the growth path is changing.

If I were advising a junior engineer today, I would not tell them to abandon the field. I would tell them to stop identifying their future with the old junior task set.

The right response, in my view, is to move deliberately toward the bottlenecks that remain hard to fake. Learn systems integration. Learn how to make software behave correctly across boundaries, not just inside a single file or framework. Learn validation, testing, and correctness deeply enough that you can drive AI toward self-verifying behavior rather than passive code emission. Learn to build and own a small end-to-end system that runs like something real, even if it is only a hobby-scale product. Learn what breaks in production, what must be observed, what must be retried, what must be proved, and what cannot be hand-waved away.
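What "driving AI toward self-verifying behavior" can mean in practice: the engineer owns the invariants, and generated code is accepted only after it survives them. A minimal, hypothetical sketch, with `candidate_sort` standing in for AI-emitted code:

```python
# Sketch of a self-verifying acceptance gate: generated code is not
# trusted as emitted; it must pass explicit invariants first.
# `candidate_sort` is a stand-in for an AI-generated implementation.
# The harness and the invariants are the part the engineer owns.

def candidate_sort(xs):
    # Stand-in for a generated implementation.
    return sorted(xs)

def verify_sort(fn, cases):
    """Accept `fn` only if every output is ordered and a permutation of its input."""
    for xs in cases:
        out = fn(list(xs))
        assert all(a <= b for a, b in zip(out, out[1:])), "not ordered"
        assert sorted(xs) == sorted(out), "elements changed"
    return fn  # only reachable if every invariant held

checked_sort = verify_sort(candidate_sort, [[], [3, 1, 2], [5, 5, 1], [-1, 0]])
print(checked_sort([2, 1]))  # [1, 2]
```

The point is not the sorting; it is the shape of the loop. Generation is cheap, so the scarce skill moves to specifying what "correct" means and wiring the check in front of acceptance.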

I would not tell most junior engineers to chase domain mastery outside technology too early. There are exceptions, but most people do not have the cognitive bandwidth to become deeply capable in both a non-technical domain and modern engineering at the same time. First, become strong in the technical domain itself. Build narrow end-to-end ownership. Learn what good systems look like. Learn how real automation works. Once that is anchored, broader seniority can follow.

This is why the popular contrast between "junior engineer" and "carpenter" is so unhelpful. It assumes that knowledge work is collapsing into irrelevance while physical trades remain somehow timeless and stable. But that is romantic nonsense. Modern trades are highly structured, industrialized, and constrained in their own ways, and many are far removed from the artisanal fantasy people project onto them. Meanwhile, AI is making many forms of technical building more accessible and more creative by removing some of the least interesting forms of drudgery. The question is not whether someone should flee tech. The question is whether they can reorient themselves toward where value still has to be earned.

That same pattern appears outside software as well.

The Artist In The Era Of Infinite Plausible Visuals

In creative work, I do not see a simple replacement story either. I see a baseline-raising story. Execution becomes cheaper. Access expands. More people can create things that look superficially professional. That does not eliminate the need for taste. It increases it. When everyone can generate plausible visuals, tracks, layouts, and concepts, the scarce thing is not the artifact. It is judgment. What stands out? What fits the audience? What iterates well? What should be discarded? The creative professional becomes less like a pair of hands waiting for instructions and more like a director, curator, translator of stakeholder intent, and arbiter of quality.

In other words, the pattern repeats. AI lowers the cost of production and raises the importance of curation, validation, and fit.

Systems Thinking > Management

That is why I think the popular focus on management as the central AI skill is also somewhat misleading. There is a useful insight buried in it, but the phrase itself keeps people inside the wrong frame. If by "management" one means command-and-control, task allocation, resource supervision, or generic business oversight, then no, that is not the heart of the matter. The deeper capability is systems thinking. Architecture. Value-flow awareness. The ability to see the whole, identify the true constraint, and keep execution coupled tightly to evidence, so that the system can pivot when reality demands it.

Management, insofar as it remains useful, sits underneath that. It is subordinate to systems thinking, not the other way around.

This is one reason I do not think managers should be deciding what can be automated. That is work for senior engineers, architects, and domain experts. They understand the real boundaries, the hidden state, the operational risks, and the difference between a local convenience and a meaningful redesign of the system. The manager’s role, if there is one worth defending, is not to pretend to be a mini-expert in every domain. That is just Taylorism in modern clothes. The manager’s role is to remove limiting factors, open paths across silos, and make it possible for the people closest to the work to move where the bottleneck actually is.

The best version of this is not command. It is closer to mission command: clear intent, local judgment, collaboration in execution, and freedom to move toward the problem rather than remain trapped inside formal boundaries. If an engineer can see the real problem and has a credible path to improve it, the system should help them move there rather than force them back into their lane.

This, I suspect, is where the real divide of the AI era will emerge.

Not between people who "use AI" and people who do not. Not between humans and machines. Not even primarily between juniors and seniors.

The real divide will be between organizations that redesign around flow and organizations that keep generating inventory.

Some firms will bolt AI onto existing structures and call it transformation. They will get shallow but economically meaningful gains: better search, faster drafts, compressed support work, nicer interfaces, some headcount leverage, some margin improvements. That may be enough to reward them for a while. Markets do not require philosophical coherence. Mediocre adoption can still produce real returns.

But the deeper value will likely emerge elsewhere.

It will emerge in companies that are forced, by necessity or by design, to organize differently. Smaller teams. Strong domain experts. Engineers who understand systems rather than just syntax. APIs and tools built for agents, not just browsers built for humans. Workflows where automation is not an afterthought but part of the operating model. Feedback loops that are short enough to keep execution attached to reality. Organizations that know where their constraints are, elevate them deliberately, and refuse to confuse output with value.

Those firms will not merely use AI. They will be legible to AI.

That is a very different thing.

I do not know how quickly this world arrives. I do not know which incumbents will adapt and which will spend the next decade mistaking local efficiency for transformation. I do not know whether the market will overprice or underprice the transition along the way. There may well be bubbles, crashes, and fireworks. That is not the part that interests me most.

Lights-Out Automation

What interests me is that, for the first time in a long time, the possibility of real automation feels materially closer. Not the fantasy version where everything disappears into a prompt, but the harder and more interesting version where systems begin to interact with systems, where agents consume interfaces designed for them, where the browser ceases to be the universal bottleneck, and where the old human UI layer is no longer assumed to be the center of gravity.

If that future comes, it will not arrive because management discovered prompting. It will arrive because a minority of organizations learned how to redesign themselves around flow, feedback, constraints, and machine-speed collaboration.

Everyone else may keep saying they are transforming, but transformation is not what you call it when the demos look better.

Transformation is what you call it when the bottleneck moves.