I recently listened to a long interview with Dominic Williams, architect of the Internet Computer (ICP). It is one of the clearest articulations of the project’s intent: a sovereign, self-writing cloud where applications are mathematically guaranteed to be tamperproof, unstoppable, and safe to update continuously—even when written and modified by AI rather than by human teams.
Taken on its own terms, the vision is coherent. ICP is explicitly designed as an integrity-first system. Applications run inside a protocol-governed execution environment, replicated across independent operators, so that no single party can alter logic, corrupt state, or silently introduce backdoors. If an AI writes bad code, the platform guarantees that the code will either execute deterministically against correct data or be rejected during upgrade.
As a protocol claim, this largely holds. As a security claim in the real world, it breaks in a very specific—and very predictable—way.
The break starts with a simple physical reality: computation requires plaintext. You can encrypt data at rest. You can encrypt it in transit. You can encrypt it on disk, across the network, and between replicas. But the moment instructions execute, the data they operate on must exist in decrypted form somewhere in memory. This is not a flaw in ICP. It is a property of all general-purpose computers.
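To make that concrete, here is a minimal Python sketch using the cryptography library's Fernet purely as a stand-in for whatever protects data at rest: before the program can branch on a value, that value has to be decrypted into ordinary process memory.

```python
# Minimal illustration: to compute on a value, it must first be decrypted into
# process memory. Fernet stands in for any at-rest / in-transit encryption scheme.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
box = Fernet(key)
ciphertext = box.encrypt(b"balance=1000")    # safe to store or transmit

plaintext = box.decrypt(ciphertext)          # now plaintext bytes exist in RAM
balance = int(plaintext.split(b"=")[1])
print(balance >= 500)                        # the comparison runs on decrypted data
```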
ICP executes application logic as WebAssembly inside a constrained virtual execution environment, often itself hosted inside a VM and increasingly inside a trusted execution environment such as AMD SEV-SNP. None of this changes the core fact: at execution time, decrypted state exists in RAM.
Modern hardware provides many mitigations against tampering. On x86-64 and arm64 you have no-execute pages (NX on x86, XN on Arm), W^X policies, privilege separation, hardened page tables, and controlled syscall surfaces. TEEs add encrypted memory, integrity checking, and remote attestation. These mechanisms are effective at preventing accidental corruption, remote exploitation, and unauthorized modification of execution.
They do not prevent observation.
A sufficiently privileged operator—or anyone with physical access to the machine—can still extract memory contents once data has been decrypted for execution. This does not require modifying the running program, altering outputs, or breaking consensus. Memory can be dumped via hypervisor access, firmware compromise, DMA attacks, debug interfaces, or side-channel techniques. In a VM or container setting, the host already sits above the guest in the trust hierarchy. Crucially, this kind of observation can be passive. It does not alter execution, and it leaves little or no trace visible to the application or to consensus peers.
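To make the passivity concrete, here is a small Linux-only sketch in which a process scans its own address space via /proc/self/maps and /proc/self/mem and finds its own secret sitting in plaintext. Anything with equivalent read access from outside, such as a debugger or a sufficiently privileged host, observes the same bytes without the program ever noticing.

```python
# Linux-only sketch: once a secret is in memory, its plaintext is discoverable
# in ordinary RAM. The process scans its *own* memory here; a privileged
# observer can do the equivalent read-only, without disturbing execution.
import re

secret = b"decrypted-signing-key-0123456789"   # stand-in for key material in memory
copies = 0

with open("/proc/self/maps") as maps, open("/proc/self/mem", "rb", 0) as mem:
    for line in maps:
        m = re.match(r"([0-9a-f]+)-([0-9a-f]+) ([rwxps-]{4})", line)
        if not m or not m.group(3).startswith("r"):
            continue                            # skip unreadable regions
        start, end = int(m.group(1), 16), int(m.group(2), 16)
        try:
            mem.seek(start)
            copies += mem.read(end - start).count(secret)
        except (OSError, ValueError, OverflowError):
            continue                            # some special regions refuse reads

print(f"plaintext copies of the secret visible in RAM: {copies}")
```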
This is where the Internet Computer’s security narrative quietly shifts.
When Dominic says that “you can’t steal digital assets that are inside of it,” he is speaking from a Byzantine fault tolerant, ledger-centric worldview. In that worldview, stealing an asset means forging an invalid state transition: transferring tokens without authorization, corrupting balances, or violating protocol rules. And in that narrow sense, he is mostly right. A compromised node cannot unilaterally mutate replicated state in a way that consensus will accept.
But that is not how real adversaries operate.
In an adversarial, intelligence-driven model, you do not need to change the ledger. You only need to act first.
If a node operator can observe decrypted state or intent—private keys loaded into memory, pending transactions, internal control flows, escrow conditions, governance votes, or upcoming state transitions—they gain near-real-time intelligence. With that intelligence, they can submit their own fully valid transactions ahead of the legitimate ones. The system will accept them. Consensus will agree. Everything will appear correct and lawful from the protocol’s point of view.
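A toy model shows how little is required; none of the names below are ICP APIs, and the "ledger" is just an ordered list. The observer's transaction is as valid as the user's; it merely arrives with earlier knowledge and gets ordered first.

```python
# Toy front-running model: nothing here forges state or breaks a rule. The
# observer simply acts on early knowledge and gets ordered first.
from dataclasses import dataclass

@dataclass
class Tx:
    sender: str
    action: str         # e.g. "buy scarce_asset"
    valid: bool = True  # both transactions pass every protocol rule

def execute(ordered_txs: list[Tx]) -> str:
    """Whoever's valid 'buy' lands first gets the asset; later buys fail benignly."""
    for tx in ordered_txs:
        if tx.valid and tx.action == "buy scarce_asset":
            return tx.sender
    return "nobody"

user_tx = Tx(sender="user", action="buy scarce_asset")

# The operator observes user_tx in plaintext before it is ordered...
observed_intent = user_tx.action
operator_tx = Tx(sender="operator", action=observed_intent)

# ...and simply gets ordered first. Consensus happily accepts this ordering.
print(execute([operator_tx, user_tx]))   # -> "operator"
print(execute([user_tx]))                # -> "user" (the outcome without observation)
```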
Nothing was forged. No rule was broken. No Byzantine behavior occurred.
The outcome was still decisively altered.
This is classic OODA-loop dominance: observe, orient, decide, act—faster than the opponent, and without revealing that you had the information in the first place. In military and intelligence contexts, this is not a corner case; it is the primary mechanism of victory. The most devastating advantage is not the ability to violate rules, but the ability to exploit timing asymmetry invisibly.
Byzantine fault tolerance does not model this class of attack at all.
BFT models adversaries who behave arbitrarily within the protocol: lying, equivocating, halting, or colluding. Its guarantees, however, are about agreement and liveness, not about what a replica learns simply by hosting the computation. Early knowledge is treated as benign. Timing is treated as noise. Confidentiality is assumed to be either irrelevant or handled elsewhere.
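A toy quorum model (my own illustration, not ICP's consensus protocol) shows the blind spot: with n = 3f + 1 replicas and a 2f + 1 quorum, a replica that follows every rule while copying the plaintext it hosts is, from the protocol's perspective, indistinguishable from a correct one.

```python
# Toy quorum model of the BFT blind spot (illustrative only, not ICP's protocol):
# safety is about agreement among n = 3f + 1 replicas, so a replica that follows
# the rules exactly while recording everything it sees never shows up as faulty.
from collections import Counter

def decide(votes: list[str], f: int) -> str | None:
    """Return the value backed by a quorum of 2f + 1 votes, if any."""
    value, count = Counter(votes).most_common(1)[0]
    return value if count >= 2 * f + 1 else None

f = 1
n = 3 * f + 1                       # 4 replicas tolerate 1 Byzantine fault
pending = "transfer 100 tokens to Alice"

leaked = []                         # what an honest-but-curious replica copied out
votes = []
for replica in range(n):
    votes.append(pending)           # every replica behaves correctly
    if replica == 0:                # ...one also exfiltrates the plaintext it hosts
        leaked.append(pending)

print(decide(votes, f))             # agreement holds: the protocol sees no fault
print(leaked)                       # the information advantage is simply outside the model
```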
That abstraction is reasonable if your only goal is agreement under fault. It is dangerously incomplete if your system has economic value, strategic decision-making, or incentives attached to timing.
This is why the common retort—“they can’t change the ledger”—misses the point entirely. You do not need to change the ledger if you can foresee legitimate actions and preempt them with your own legitimate actions. From the system’s perspective, the outcome is correct. From the user’s perspective, assets are gone, votes flipped, auctions lost, or positions liquidated.
ICP’s own recommended mitigation implicitly acknowledges this: do not store secrets on the platform. Store commitments, hashes, or coordination logic on-chain, and keep real data elsewhere. This advice is technically sound—and architecturally fatal to the broader claim.
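Concretely, the guidance boils down to something like the salted-commitment sketch below; the function names and record fields are illustrative, not an ICP API. The chain holds a digest, and everything sensitive, including what is needed to verify it, lives in a conventional store.

```python
# Sketch of the "commitments on-chain, data elsewhere" pattern. Only a salted
# digest would live on the platform; the record, the salt, and the access rules
# live in a conventional system you still have to secure and govern.
import hashlib, json, os

def commit(record: dict, salt: bytes | None = None) -> tuple[str, bytes]:
    salt = salt or os.urandom(16)
    payload = salt + json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest(), salt

def verify(record: dict, salt: bytes, on_chain_digest: str) -> bool:
    return commit(record, salt)[0] == on_chain_digest

record = {"order_id": "A-17", "price_limit": 105.0}   # never touches the chain
digest, salt = commit(record)                          # the digest is all the chain sees
print(verify(record, salt, digest))                    # True: integrity without disclosure
```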
Once real data lives off-platform, the promise that “the program is the database” collapses. You now have a split system: a protocol-governed execution layer and a conventional data layer governed by contracts, access controls, audits, and liability. You have reintroduced a distributed data problem, plus cryptographic coupling, plus new failure modes, without eliminating the original trust issues.
For EU-based companies, this is usually the end of the discussion. GDPR requires clarity around data location, access, erasure, and incident response. Employment law adds constraints around employee data. The AI Act introduces new governance obligations. Even if personal data never touches ICP directly, metadata and derived state may still fall under regulation. Explaining this architecture to auditors, legal teams, and works councils is not merely difficult; it is often disqualifying.
There is also an inversion of trust that is rarely acknowledged. ICP reduces reliance on a single cloud provider, but it expands the number of infrastructure operators who could potentially observe plaintext state. Instead of trusting a small number of partners under contract and audit, you trust a protocol-curated network of node providers across jurisdictions. Governance may reduce the risk of coordinated sabotage, but it does nothing to shrink the set of parties who must be trusted not to observe sensitive information. From a risk-management perspective, this is usually the wrong trade.
The constrained execution model further narrows the use case. WebAssembly is essential for deterministic, replicated execution, but it limits libraries, tooling, observability, and operational practices. Debugging becomes forensic rather than interactive. This is acceptable if your primary requirement is censorship resistance or autonomous execution. It is unnecessary friction if your primary requirement is running a regulated business.
Compare this with Temporal. Temporal also enforces deterministic state transitions and durable execution, but it lets you choose your trust boundary. You can run it where you trust the metal, satisfy auditors, and reason clearly about who can see memory and when. For most enterprises, that trade-off is straightforward.
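For contrast, here is a minimal sketch using Temporal's Python SDK (temporalio); the workflow and activity names are mine, not a prescribed pattern. The orchestration logic is deterministic and durably recorded, while anything that touches secrets or external systems runs in activities on workers you place inside your own trust boundary.

```python
# Minimal Temporal sketch (Python SDK, temporalio); names are illustrative.
# Deterministic orchestration lives in the workflow; anything touching secrets
# or external systems lives in activities, on workers inside your own trust boundary.
from datetime import timedelta
from temporalio import activity, workflow

@activity.defn
async def settle_trade(order_id: str) -> str:
    # I/O, key material, external calls: executed on infrastructure you choose and audit.
    return f"settled {order_id}"

@workflow.defn
class SettlementWorkflow:
    @workflow.run
    async def run(self, order_id: str) -> str:
        # Deterministic, durably recorded state transitions, replayable on failure.
        return await workflow.execute_activity(
            settle_trade,
            order_id,
            start_to_close_timeout=timedelta(seconds=30),
        )
```

The guarantees are weaker on paper, but the memory holding plaintext belongs to machines you chose, contracted for, and can audit.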
Seen without hype, the Internet Computer is an integrity-maximizing platform built around a protocol-centric threat model that explicitly excludes intelligence advantage. That makes it academically interesting and occasionally useful. It also makes it unsuitable for most business-critical systems, where secrecy, timing, and accountability matter more than formal guarantees about rule-following.
The system is not wrong. It is simply optimized for adversaries who break rules—while real adversaries win by never needing to.