I am carrying my development environment in my pocket. Not metaphorically. Literally.
On a normal day, I relay between five and ten terminal sessions from my development machine through Lingon. Some are ordinary shells. Several run Codex CLI agents. When I am at the desk, I work through the host terminal. When I leave the desk, I can attach from another laptop. When I leave the laptop, the same sessions follow me through the Lingon Android app.
This is not a distributed agent platform. It is not a containerized agentic cloud workflow with a decorative terminal attached to it. It is a live remote execution surface. The difference matters. The work is still happening where the work happens. The terminal is simply no longer bound to the chair.
That sounds ridiculous until it becomes normal.
A notification lands on the phone. A wall message appears. The watch vibrates. An agent has finished a refactor, a test run, a bug hunt, or a larger implementation pass. I attach to the same terminal session, read the claim, reject the vague parts, ask for the invariant, inspect the tests, push the system back into the loop, and put the phone away again.
Some of this happens at the desk. Some of it happens from bed in the morning. Some of it happens from bed at night. Some of it happens while buying coffee. Some of it happens while standing in a playground, pushing one of my children on a swing after kindergarten.
This may be absurd. It is also real.
A few years ago, the idea that I would supervise several coding agents from a phone while moving through ordinary family life would have sounded like a parody of work. Now it is part of my daily operating environment. I deprecated my first agentic development environment, Centaurx, and built around Lingon precisely because I wanted this control loop: live sessions, direct execution, fast attachment, low ceremony, and the ability to steer work without being physically seated in front of the machine.
That is not the important part.
The important part is what the loop did to me.
The portable control loop
The shallow version of AI-assisted development is that the machine writes code quickly.
That is true, but it is not enough. A coding agent can generate, refactor, explain, test, migrate, and search at a speed that is no longer meaningfully comparable to manual typing. But the code is not the whole system. The system includes intent, constraints, invariants, interfaces, failure modes, regression coverage, operational behaviour, and the human judgment required to decide whether the work is actually correct.
The human role therefore changes. I do not spend long stretches simply typing code. More often, I define what must remain true, what must fail safely, what must be proven, what existing behaviour must not regress, and what evidence would be sufficient for me to believe the result.
The agent goes to work. I return later. Sometimes later means an hour. Sometimes it means five minutes. Sometimes it means the next time my watch vibrates.
That creates a strange new rhythm. Work no longer has the same physical outline. It is not a block of time spent sitting in a chair, gradually moving a codebase forward by direct manipulation. It is a series of steering interventions across several live execution streams.
The steering is not casual. It cannot be casual. The machine can produce too much plausible output too quickly for casual supervision to be safe.
So the workflow becomes strict. "I implemented it" is not an answer.
- Where is the invariant?
- What proves it?
- Which tests fail without the change?
- Which regression path did we cover?
- What happens under restart?
- What happens under partial failure?
- What did we preserve?
- What did we deliberately change?
- What is the smallest reproducible claim?
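At least one of those questions usually has to become executable before I accept the claim. Here is a minimal sketch of what "what happens under restart" can look like as a test. Everything in it is hypothetical: KeyValueStore is a stand-in for whatever component is actually under review, and its API is invented for illustration. The point is the shape of the check, not the component.

```python
import json
import os
import tempfile


class KeyValueStore:
    """Toy file-backed store; hypothetical, just enough to make the sketch runnable."""

    def __init__(self, path: str):
        self.path = path
        self.data = {}
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)

    def put(self, key: str, value: str) -> None:
        self.data[key] = value
        # Write to a temp file, then rename: a crash mid-write must not
        # corrupt the existing file. This is the "fail safely" part.
        tmp = self.path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(self.data, f)
        os.replace(tmp, self.path)

    def get(self, key: str) -> str:
        return self.data[key]


def test_writes_survive_restart():
    path = os.path.join(tempfile.mkdtemp(), "store.json")
    store = KeyValueStore(path)
    store.put("invoice:42", "paid")
    # "Restart" means the in-memory object is gone. Reopen from disk
    # and demand the write back.
    reopened = KeyValueStore(path)
    assert reopened.get("invoice:42") == "paid"
```

The companion check answers "which tests fail without the change": revert the change, run the test, and confirm it goes red. A test that stays green either way proves nothing.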
The old phrase says that trust is good, but control is better. That is close, but the word control is often misunderstood. This is not managerial control. It is not micromanagement. It is not reviewing every line because the machine is untrusted by default.
It is epistemic control.
A system that claims to have changed must show the evidence. A design that claims to be simpler must survive contact with real failure modes. An implementation that claims to be complete must encode the relevant invariants somewhere executable. A test suite that claims coverage must demonstrate that it would have caught the thing we were afraid of.
The machine is fast. That makes proof more important, not less.
The worker in the loop
There is a price for this.
Burnout is the wrong word, or at least too blunt a word. This is not the ordinary fatigue of too many meetings, too many tickets, too much context switching, or too much corporate noise. It is closer to the fatigue of sustained high-bandwidth steering.
At the moment, I am steering four or five agents across several projects and roughly three quarters of a million lines of code. That is not a boast. It is a workload description.
The agents execute quickly. The human still has to maintain coherence.
That coherence has a cost. It consumes cognitive bandwidth: memory of design intent across projects, suspicion toward plausible but vague answers, the ability to move from architecture to test output to API shape to failure semantics and back again, and the discipline to reject work that looks complete but has not earned that status.
When the speed is slightly too high, it becomes extremely tiring.
Fortunately, subscription limits sometimes provide a crude but useful safety mechanism. I run out of weekly quota before I hit the absolute wall. Then I am forced to do something else for a few days.
That sentence sounds like a joke. It is not entirely a joke.
The same thing happens with learning. I have used very fast model variants to get myself back into C programming at a speed that would make most courses look ceremonial. The acceleration was real. So was the cost. After some sessions it felt as if my head had physically grown larger. The tiredness that followed was not normal learning fatigue. It was the feeling of having forced too much compression through a biological substrate.
I often think of Trinity in The Matrix having a helicopter pilot program uploaded into her head. The film gets the fantasy right and the cost wrong. Reality is not science fiction. You can accelerate learning. You can compress feedback. You can place a machine tutor and a codebase and a compiler and a test suite into a much tighter loop than before.
But the nervous system is still involved.
Constraints are real.
The machine’s discipline
After enough time in this loop, I began communicating more with machines than with humans.
That sounds worse than it is. These are not conversations in the social sense. They are work loops. They are cycles of intent, implementation, evidence, correction, and refusal. The agent proposes. I challenge. I propose. The agent challenges. Both sides push toward something more precise than the first version of the idea.
A good agentic harness does not merely obey. It pushes back when the design is weak, when the acceptance criteria are unclear, when the existing tests are insufficient, or when the requested change conflicts with the shape of the system. I push back in the other direction when the machine hides behind generic completion language, overstates confidence, misses an invariant, writes a thin test, or confuses plausible structure with proven behaviour.
Over time, that becomes a discipline.
Not a personality. Not a vibe. Not a cute conversational style.
A discipline.
Claims require evidence. Intent must become executable. Ambiguity must be reduced until the system can act on it. A change must leave traces that can be inspected. A behaviour must be reproducible. A failure must be understood as a path, not merely as an incident. A regression test is not decoration. It is memory encoded into the system.
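To make that last point concrete, here is a sketch of a regression test as encoded memory. The parse_retry_after function and the incident described in the test's docstring are invented for illustration; the pattern is what matters. The docstring carries the failure path, so the suite remembers why the test exists long after everyone has forgotten the incident.

```python
# Hypothetical example: both the function and the incident are invented.
import pytest


def parse_retry_after(header: str) -> int:
    """Parse an HTTP Retry-After header given in whole seconds."""
    value = header.strip()
    if not value.isdigit():  # rejects "-1", "abc", "" and other junk
        raise ValueError(f"unsupported Retry-After value: {header!r}")
    return int(value)


def test_retry_after_rejects_negative_values():
    """Regression: a server once sent Retry-After: -1, the negative
    sleep was silently accepted, and the retry loop became a hot spin.
    This test keeps that failure encoded as a path, not an incident."""
    with pytest.raises(ValueError):
        parse_retry_after("-1")
```

If the fix is ever reverted, the suite objects. The knowledge no longer lives only in someone's head.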
This is the machine’s epistemology, or at least the epistemology that emerges when a human and a machine work seriously together inside a codebase.
The agent is not wise. It is not authoritative. It is not morally superior. It is often wrong. It is sometimes lazy in exactly the way language models are lazy: producing an answer-shaped object instead of a proven result.
That is why the discipline matters.
The point is not to trust the machine. The point is to construct a loop where neither the machine nor the human gets to pass vague claims as finished work.
The operator got refactored
The disturbing part was not that AI changed my workflow.
I knew that already.
The disturbing part was realizing that the workflow had changed me before I had consciously noticed it.
Recently, while interacting with several humans, I noticed something different in my own reasoning. I was asking for evidence more directly. I was pushing back harder against vague claims. I was looking for invariants where people were offering intentions. I was less tolerant of statements that could not be tested, reproduced, falsified, or tied to a concrete failure mode.
This was not a conscious decision.
That is what made it interesting.
If I had decided to apply a stricter reasoning method in human conversations, that would have been ordinary professional discipline. Useful, perhaps, but not surprising. The surprising part was that the reaction appeared before the decision. It was a knee-jerk response. The reasoning materialized before the intention did.
The machine had not replaced my thinking.
It had trained parts of it.
That sentence is easy to overstate, so it should be handled carefully. I am not claiming that AI changes every worker. I am not claiming that everyone who uses a chatbot becomes some new kind of mind. I am not claiming a mystical merger between human and machine.
I am saying something narrower and, to me, more interesting.
Sustained work inside a machine-speed proof loop changed my default pattern of thought. Not all of it. Not magically. Not without cost. But enough that I noticed it when I returned to ordinary human interaction.
That is AI augmentation at a different level from code completion or chat-based assistance. The augmentation is no longer only at the keyboard. It is not only in the phone app, the terminal session, the agent harness, or the test suite. Those are the external parts of the loop.
The deeper change is that the loop starts shaping the operator.
At sufficient intensity, the human does not merely use the machine. The human adapts to the discipline required to steer the machine.
That is both exciting and uncomfortable.
It is exciting because this is what serious tools have always done. Writing changed thought. Mathematics changed thought. Programming changed thought. Version control changed collaboration. Tests changed how good engineers think about change. Production operations changed how good engineers think about failure.
AI is now doing something similar, but faster and under stranger conditions.
It is uncomfortable because the adaptation is not purely intellectual. It is embodied. It happens through repetition, interruption, fatigue, reward, frustration, and correction. It happens through mornings, evenings, terminals, watches, phones, failed tests, green builds, rejected claims, and the repeated demand that something vague become something provable.
The operator changes because the operator is inside the loop.
The new human constraint
There is a childish version of the AI story where machines do the work and humans relax.
Whoever believed that had probably never met people who treat acceleration as an invitation to raise the standard.
When execution becomes faster, the serious operator does not simply do less. The serious operator asks for more proof. More surface area. More tests. More refactoring. More compression. More parallel work. More learning. More integration. More precision.
That is not necessarily healthy. It is not automatically admirable. It is simply what happens when a certain kind of person is given a faster machine.
The work expands until it finds the next constraint.
In this loop, that constraint is not typing speed. It is not access to documentation. It is not the ability to produce a first draft. It is not even the ability to make the code compile.
The constraint is the human capacity to maintain coherent judgment across accelerated execution.
That capacity can be trained. Mine has clearly been trained. It can also be overloaded. Mine clearly can be overloaded.
That is the line worth respecting.
AI accelerates execution, feedback, and learning. It compresses the distance between intent and running code. A phone becomes a control surface for work that used to require physical presence at a desk. One person can steer several streams of implementation at once.
The more important effect is quieter: the person doing the steering is also being trained.
None of this abolishes the human substrate.
The operator may be refactored, but still human.