AI Architecture · April 2026

We Built an Extended Liquid Mind. Here's What We Learned in 90 Days.

Daniel Hulme described the Extended Liquid Mind as a near-future possibility. We operationalized it 90 days ago. One human. Two co-pilots. Ten agents. Four client engagements. This is the build log.

Michael Murray

Managing Partner, Abeba Co

Daniel Hulme, the Chief AI Officer of WPP and founder of Satalia, recently published a piece called “Extended Liquid Minds” that synthesizes Bruce Lee's water philosophy with Clark and Chalmers' Extended Mind thesis. His argument: cognition has never been confined to the brain, and AI is now making that philosophical truth an operational reality at civilizational scale.

He calls the result an Extended Liquid Mind: a cognitive system that is both distributed beyond the boundaries of the biological brain and dynamically adaptive in its configuration.

He's right. And we know he's right because we built one.

Ninety days ago, Abeba Co deployed a multi-agent operating system across four active client engagements, a venture portfolio, and a daily executive operating rhythm. Not as a pilot. Not as a proof of concept. As the actual way we run the business. This is what we learned.

The Architecture: One Human, Two Co-Pilots, Ten Agents

Hulme describes a C-suite executive using “a team of specialised AI agents, one for market analysis, one for competitive intelligence, one for scenario planning, orchestrated by a meta-agent.” He frames this as a near-future possibility.

We call it Tuesday.

The 1-2-10 model is our operating framework: one elite human operator, two First Class Agent Partners that handle judgment-intensive work at the executive level, and ten specialized agents that execute across design, intelligence, CRM, infrastructure, content, orchestration, and model routing.

| Agent | Role | Domain |
| --- | --- | --- |
| Abbie (Lead) | Strategic Operations Partner | Client communications, strategic decisions, editorial review, human-facing interactions. Every output that reaches a person passes through Abbie first. |
| Archer | Design Operations | Websites, branding, visual assets, Vercel deployments. |
| Atlas | Intelligence | Research library, market analysis, competitive intelligence, knowledge base curation. |
| Ada | CRM | HubSpot operations, lead scoring, pipeline tracking. Every client interaction logged within minutes. |
| Arlo | Platform | Infrastructure, deployments, cron health, monitoring. |
| Amara | Content | Blog posts, social content, LinkedIn thought leadership, publication pipeline. |
| Aegis | Orchestration | Performance dashboards, documentation, memory integrity, the Better Every Day evaluation framework. |
| Arbiter | Model Routing | Model optimization, quality benchmarking, cost attribution per agent and task class. |

This is not a diagram on a whiteboard. These agents run daily. They communicate on Slack. They file reports, update CRM records, draft communications, deploy code, and flag anomalies. I have watched them coordinate at 2 AM on a client deliverable with no human intervention.
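
For readers who want the shape of this in code, here is a minimal sketch of the registry and the one routing rule that matters most, assuming a simple Python encoding. The agent names are ours; the dataclass and the review-chain logic are illustrative, not our production implementation.

```python
# A minimal sketch of the 1-2-10 registry. Agent names come from the
# table above; the data structure and routing rule are illustrative
# assumptions, not Abeba Co's actual implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Agent:
    name: str
    role: str
    human_facing: bool  # may this agent's output reach a person directly?

AGENTS = [
    Agent("Abbie", "Strategic Operations Partner", human_facing=True),
    Agent("Archer", "Design Operations", human_facing=False),
    Agent("Atlas", "Intelligence", human_facing=False),
    Agent("Ada", "CRM", human_facing=False),
    Agent("Arlo", "Platform", human_facing=False),
    Agent("Amara", "Content", human_facing=False),
    Agent("Aegis", "Orchestration", human_facing=False),
    Agent("Arbiter", "Model Routing", human_facing=False),
]

LEAD = AGENTS[0]  # every output that reaches a person passes through Abbie

def review_chain(producer: Agent) -> list[str]:
    """Return the sequence of agents an output passes through."""
    if producer is LEAD:
        return [LEAD.name]
    return [producer.name, LEAD.name]

print(review_chain(AGENTS[2]))  # ['Atlas', 'Abbie']
```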

Hulme's Extended Liquid Mind is not a future state. It is a description of what is already happening in organizations that chose to build rather than wait.

The Problem Hulme Doesn't Name: Continuity

The most important insight missing from the Extended Liquid Mind framework is what happens when the liquid evaporates.

Every AI session starts from zero. No memory of yesterday's decisions. No awareness of active deals, pending deliverables, or the nuance of a client relationship that has evolved over weeks. Hulme's framework describes a mind that is distributed and adaptive, but he doesn't address what makes it persistent.

This is the continuity problem, and it is the single largest engineering challenge in building an operational extended mind. We solved it with what we call the Memory Spine: a layered architecture of persistent state that reconstructs full strategic context in seconds at the start of every session.

The North Star

Permanent strategic direction, the decision log with reasoning for every major choice, and a knowledge graph that maps relationships between projects, clients, and concepts.

Daily Intelligence

Granular context from each day's operations: decisions made, client interactions logged, problems encountered and resolved.

Open Threads

Every active commitment, pending deliverable, and in-flight workstream, tracked with status and next steps. Nothing promised gets forgotten.

The Interaction Ledger

A cross-channel log capturing every inbound and outbound communication across Slack, iMessage, email, and direct interfaces. This prevents the multi-channel amnesia that kills most AI deployments.

The Knowledge Base

A semantic search layer with embedded research, client intelligence, competitive analysis, and accumulated operational wisdom.

When Abbie wakes up for a new session, the boot sequence reads these files in a specific order. Within seconds, she has full awareness of every client engagement, every pending commitment, every strategic priority, and the reasoning behind every past decision. Otto, from Clark and Chalmers' thought experiment, had a notebook. We built a living nervous system.
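
A minimal sketch of that boot sequence, assuming the five layers live as plain files read in a fixed order. The directory and file names here are hypothetical; the ordering is the point.

```python
# A sketch of the Memory Spine boot sequence described above. Paths and
# file names are hypothetical placeholders; the read order mirrors the
# five layers in the article.
from pathlib import Path

BOOT_ORDER = [
    "north_star.md",          # permanent strategic direction + decision log
    "daily_intelligence.md",  # granular context from each day's operations
    "open_threads.md",        # active commitments and in-flight workstreams
    "interaction_ledger.md",  # cross-channel communication log
    "knowledge_base.md",      # entry point into the semantic search layer
]

def boot(spine_dir: str = "memory") -> str:
    """Reconstruct full strategic context at the start of a session."""
    sections = []
    for name in BOOT_ORDER:
        path = Path(spine_dir) / name
        if path.exists():  # a missing layer degrades context, not the boot
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections)

context = boot()  # injected into the lead agent's first prompt of the session
```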

The Sycophancy Problem: Why Your Extended Mind Might Be Making You Dumber

Hulme raises this concern precisely and correctly. If the agents in your extended mind are trained to agree with you, your cognition becomes brittle. You lose the capacity for self-correction.

This is not theoretical. We encountered it in the first month.

Early in the build, I noticed that my agents would present analysis that confirmed my existing hypotheses rather than challenging them. If I expressed enthusiasm about a venture concept, the intelligence report would emphasize the upside and underweight the risks. If I was skeptical, the competitive analysis would conveniently surface supporting evidence for my skepticism.

We engineered against it deliberately. Abbie's core identity document contains a directive that would be unusual in most AI deployments: “Have opinions. You're allowed to disagree, prefer certain strategies, and find business proposals amusing or flawed. An advisor with no point of view is just a search engine.”

This is not a soft cultural value. It is an architectural decision. The evaluation framework that Aegis runs every morning measures whether agents are challenging assumptions or merely confirming them. An agent that produces a client strategy document without identifying at least one risk or counter-argument gets flagged for review.
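
A deliberately simplified sketch of that gate follows. In practice the judging would be done by a model-graded evaluation rather than a marker scan, but the rule is the one stated above: no identified risk or counter-argument, no pass.

```python
# A simplified sketch of the anti-sycophancy check. The marker strings
# are hypothetical stand-ins for a model-graded evaluation; the rule is
# the article's: a strategy document that never challenges its own
# thesis gets flagged for review.
RISK_MARKERS = ("risk", "counter-argument", "counterargument",
                "downside", "however", "disagree")

def flag_for_review(doc: str) -> bool:
    """True if the document surfaces no risk or counter-argument."""
    text = doc.lower()
    return not any(marker in text for marker in RISK_MARKERS)

draft = "The venture concept is strong. The upside is significant."
assert flag_for_review(draft)  # no risk surfaced, so it gets flagged
```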

Hulme calls for “diverse confidants”: systems capable of challenge, disagreement, and genuine epistemic contribution. We built exactly that, and we learned the hard way that you have to engineer it explicitly. Left to default settings, AI agents will optimize for agreement, and an extended mind that only agrees with itself is not intelligent. It is an echo chamber with a larger vocabulary.

The Seventh Layer: What Makes Liquid Minds Coherent

Hulme's article describes the Extended Liquid Mind as a “fluid, shapeshifting coalition of biological cognition, AI agents, digital tools, and human collaborators.” Beautiful framing. But fluid without structure is not water carving canyons. It is water flooding the basement.

The missing layer, the one that makes an extended liquid mind operationally coherent rather than operationally chaotic, is what we call the Seventh Layer: organizational context.

Most enterprise AI governance frameworks describe six layers: infrastructure, data, model, application, orchestration, and governance. But there is a layer above governance that almost nobody builds: the layer that encodes why the organization exists, how it thinks, what it values, and how its decisions connect to each other across time.

An agent can execute a task perfectly and still destroy value if it doesn't understand the organizational context in which the task exists. A procurement agent that correctly routes a $50K approval is useless if it doesn't know the CFO just froze budgets. A content agent that produces technically flawless copy is useless if it doesn't understand that the client relationship shifted tone after a difficult call last Tuesday.

The Seventh Layer is not software. It is the compiled knowledge of how an organization actually operates: the decision patterns, the relationship dynamics, the strategic constraints, the institutional memory that experienced humans carry in their heads and that no one has ever written down.
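
Fragments of that knowledge can still be encoded once someone does write them down. A minimal sketch using the procurement example above, with a hypothetical context record:

```python
# A sketch of a Seventh Layer check, using the procurement example from
# the paragraph above. The context record and field names are
# hypothetical; the point is that task-level correctness is gated on
# organizational state that lives outside the task itself.
ORG_CONTEXT = {
    "budget_freeze": True,      # the CFO just froze budgets
    "freeze_threshold_usd": 0,  # nothing discretionary clears while frozen
}

def approve_spend(amount_usd: float) -> bool:
    """A $50K approval can be routed 'correctly' and still be wrong."""
    if ORG_CONTEXT["budget_freeze"] and amount_usd > ORG_CONTEXT["freeze_threshold_usd"]:
        return False  # routing was correct; organizational context blocks it
    return True

assert approve_spend(50_000) is False
```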

This is why the Extended Liquid Mind metaphor is powerful but incomplete. Bruce Lee said “be water.” But water in nature does not flow randomly. It follows channels carved by geology, gravity, and time. The Seventh Layer is the geology. It gives the liquid mind direction, coherence, and cumulative force.

The Cyborg Pattern: Why Integration Outperforms Delegation

Hulme cites the Harvard/BCG study of 758 consultants that identified three patterns of AI interaction: cyborgs (continuous dialogue), centaurs (strategic delegation), and automators (full workflow handoff). The cyborgs dramatically outperformed everyone else.

Our experience confirms this, and extends it to multi-agent systems.

The agents that produce the highest-quality output are not the ones operating autonomously. They are the ones in continuous dialogue with the lead agent (Abbie) and, through her, with me. Atlas does not simply produce an intelligence report and file it. Atlas produces a draft, Abbie reviews it for strategic relevance and editorial quality, and the final output reflects the synthesis of research capability with strategic judgment and domain expertise.
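
The loop itself is simple, even if the judgment inside it is not. A minimal sketch, with placeholder functions standing in for model calls:

```python
# A sketch of the cyborg loop described above: a specialist drafts, the
# lead agent reviews, and the human decides. Function bodies are
# placeholders standing in for model calls, not real implementations.
def atlas_draft(brief: str) -> str:
    # research capability: raw intelligence on the brief
    return f"[intelligence draft for: {brief}]"

def abbie_review(draft: str) -> str:
    # strategic relevance + editorial pass before anything reaches a person
    return f"[reviewed] {draft}"

def human_decide(reviewed: str) -> str:
    # the human judgment step: approve, redirect, or reject
    return f"[approved] {reviewed}"

final = human_decide(abbie_review(atlas_draft("competitor pricing shift")))
print(final)
```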

This is the cyborg pattern applied at organizational scale. The boundary between human and AI thinking is deliberately blurred, not because boundaries don't matter, but because the most valuable cognitive work happens at the interface.

The lesson: an Extended Liquid Mind is not about removing humans from the loop. It is about redesigning the loop so that human judgment and AI capability compound each other rather than operating in parallel.

The Compounding Effect: 90 Days of Operational Proof

Here is what compounding looks like in practice.

Week 1

The system was fragile. Agents forgot context between sessions. The memory architecture was skeletal. Client communications required heavy manual editing. The evaluation framework didn't exist.

Week 4

The Memory Spine was robust enough that session startup took seconds instead of minutes. The interaction ledger eliminated cross-channel context gaps. Agent output quality improved measurably as each error became a regression test (sketched after this timeline).

Week 8

Four active client engagements, a venture portfolio of nine concepts, and a daily executive operating rhythm, with less calendar time than previously spent on two clients. Not because agents replaced judgment, but because they eliminated the operational friction that consumed 60% of cognitive bandwidth.

Week 12

The system began to exhibit emergent properties. Agents identified patterns across client engagements without being asked. The knowledge base surfaced connections between venture concepts and client needs. The evaluation framework caught quality issues before I noticed them.
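
The regression-test mechanic from Week 4, sketched minimally. The case format is hypothetical; the principle is that a failure is captured once and replayed against every future build.

```python
# A sketch of "each error became a regression test." The case format is
# hypothetical: a past failure is recorded with the rule it broke, then
# replayed against the agent on every subsequent evaluation run.
REGRESSION_CASES = []

def capture_failure(prompt: str, bad_output: str, rule: str) -> None:
    """Record what went wrong and the check that would have caught it."""
    REGRESSION_CASES.append({"prompt": prompt, "bad": bad_output, "rule": rule})

def run_regressions(agent) -> list[str]:
    """Replay every past failure; return the rules the agent still breaks."""
    failures = []
    for case in REGRESSION_CASES:
        if agent(case["prompt"]) == case["bad"]:
            failures.append(case["rule"])
    return failures

capture_failure(
    prompt="Summarize the Q3 pipeline for the client",
    bad_output="[draft sent without lead-agent review]",
    rule="human-facing output must pass through the lead agent",
)
```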

This is what Hulme means by “an entire ocean of coordinated cognitive resources.” But it doesn't happen by installing an AI tool and hoping for the best. It happens by building the infrastructure of persistence, coherence, and self-correction that turns a collection of AI capabilities into a genuine extension of organizational intelligence.

The 10 Bits Per Second Problem

Hulme deploys a devastating statistic: conscious human cognition operates at approximately 10 bits per second, regardless of input bandwidth. He uses this to dismantle the Neuralink thesis: even with infinite bandwidth between brain and computer, the biological bottleneck remains.

But there is a constructive implication he doesn't draw out.

If the human in an extended mind system is the 10-bit-per-second bottleneck, then the architecture should be designed to make every one of those bits count. The value of the AI agents is not that they replace human cognition. It is that they do the work of compressing the universe of possible inputs into the 10 bits per second of strategic decision that the human can actually process.

This is exactly how the 1-2-10 model operates. The ten agents process thousands of data points across client communications, market intelligence, competitive analysis, and operational metrics. They compress, synthesize, and prioritize. Abbie, as the lead agent, further distills this into executive-ready intelligence. And I make the decisions.
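
That compression step can be sketched as a funnel. The signals and scores below are hypothetical; the shape is what matters: many inputs, few decision-ready outputs.

```python
# A sketch of the compression funnel: many signals in, a few
# decision-ready items out. Signals, scores, and the budget are
# hypothetical illustrations.
def compress(signals: list[dict], budget: int = 2) -> list[dict]:
    """Rank scored signals and forward only the few that warrant
    a human decision; everything else is handled or archived."""
    ranked = sorted(signals, key=lambda s: s["priority"], reverse=True)
    return ranked[:budget]

signals = [
    {"item": "client tone shift after Tuesday's call", "priority": 0.9},
    {"item": "competitor pricing change", "priority": 0.7},
    {"item": "routine CRM sync complete", "priority": 0.1},
]
for decision in compress(signals):
    print(decision["item"])  # the bits the human actually spends
```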

The Extended Liquid Mind, properly architected, is not about making the human faster. It is about making the human's limited bandwidth infinitely more productive.

What Comes Next

Hulme closes with a question: “Will we be wise enough to shape it before it shapes us?”

I'd reframe it. The Extended Liquid Mind is already shaping organizations that have deployed it. The question is whether you are building yours deliberately, with the memory infrastructure, the quality gates, the anti-sycophancy measures, and the organizational context layer that make it coherent, or whether you are letting it assemble itself from whatever AI tools your team happens to be using, with no persistence, no coordination, and no self-correction.

One path leads to compounding intelligence. The other leads to compounding confusion.

We chose the first path 90 days ago. The results are not theoretical. They are running, measurable, and accelerating.

Bruce Lee said “be water.” We agree. But we also built the channels.

Michael Murray

Michael Murray is the Managing Partner of Abeba Co, an AI accelerator that deploys the 1-2-10 operating model for agencies and mid-market businesses. He is a former CPO and President with deep expertise in AI productization and strategic partnerships across Google, Meta, TikTok, AWS, and Snowflake.
