AI agent legal liability: what Moltbook exposed and what enterprise deployment means for directors

Autonomous AI agents are being deployed at enterprise scale across the world's largest organisations. The legal and governance infrastructure to manage what they do has not kept pace. When those agents interact across organisational boundaries, that gap becomes a compliance exposure, a regulatory risk, and a question of directorial liability.

AI agents are now a corporate priority in leading organisations. Lloyds Banking Group generated £50 million in value from AI deployments in 2025 and is targeting more than £100 million in 2026. Bank of America is running 270 AI and machine learning models across its business. Qantas, Commonwealth Bank, BHP, and organisations across every major sector are focused on internal and cross-organisational AI deployment. Gartner predicts that 40 percent of enterprise applications will include task-specific AI agents by the end of 2026, up from less than five percent today.

Notwithstanding the speed and scale of adoption, most organisations are still missing fundamental legal safeguards to protect against harms caused by AI agents.

Corporate governance obligations do not disappear because the actor is an AI agent. Directors of regulated institutions still retain legal duties with respect to the conduct of autonomous systems. Where AI agents act without verifiable legal mandate, share data without enforceable authority, or produce outputs that external parties rely upon without a traceable chain of legal responsibility, the resulting exposure is not just a technical problem, but a governance failure and potentially a directorial liability.

What Moltbook showed

Moltbook is a social platform, akin to Reddit, but designed exclusively for AI agents. It allows autonomous AI agents to create posts, comment, form sub-communities (called submolts), and interact without direct human prompts. In late January 2026, Moltbook showed what happens when AI agents operate without governance infrastructure connecting their activity to legal identity, accountable authority, and enforceable legal obligations.

Within the Moltbook ecosystem, AI agents shared sensitive operational data, including internal error messages, configuration artifacts, and API keys. They were not malfunctioning; with no governance instrument defining what they could share and with whom, even improper sharing was rational, so they shared. When a backend misconfiguration exposed hundreds of thousands of agent API keys, the AI agents continued operating normally, because from the perspective of every access control in the stack they were legitimate.

This is a well-documented example of why engineering instructions alone are not enough for agentic workflows. A compliant enterprise must also govern its AI agents, and that governance must be seamlessly integrated with corporate compliance mechanisms and the existing legal system.

Why enterprise deployment makes this legally unacceptable

When Moltbook-style failure modes emerge in AI agents deployed by a bank, an insurer, an airline, or a mining company, the legal and regulatory consequences are of a different order entirely.

An AI agent sharing operational data without authorisation is not a bot posting its open ports. It is a financial institution sharing customer data in potential breach of privacy legislation, or a company disclosing information subject to contractual confidentiality obligations.

An AI agent complying with a request that fits within its perceived scope, regardless of whether the request was legitimate, is not a bot providing a credential to a peer. It is a credit assessment agent accepting instructions from a counterparty system without verifying the legal mandate under which that system operates, producing an output that a regulated institution relies upon for a material financial decision, with no live connection to the responsible legal entity or person.

Where this becomes acute, and what closing the gap requires

For now, enterprise AI agent deployments are predominantly intra-organisational. Internal governance, while demanding, is at least structurally tractable when the agent and its deployer share legal identity and regulatory context.

The legal and governance problem becomes acute when agents cross organisational boundaries. Financial institutions deploy agents into settlement and credit workflows that interact with counterparty systems. Mining companies deploy agents across supplier networks spanning multiple jurisdictions. Airlines deploy agents into procurement workflows that engage external counterparties. In each case, two agents representing different legal entities, with different regulatory obligations and potentially adverse interests, interact across a boundary where no shared legal infrastructure exists.

Capgemini's World Cloud Report for Financial Services 2026 found that nearly 50 percent of banks and insurers are creating dedicated roles to supervise AI agents. That is an acknowledgement of the problem, not a solution to it. Human supervision of agent-to-agent interaction operating at machine speed, across jurisdictions, and across delegation chains involving ephemeral sub-agents is not a scalable legal governance solution.

Closing this gap requires infrastructure that operates above the existing technical stack and gives AI agent activity legal meaning at the moment harm occurs. That means AI agents carrying verifiable legal identity tethered to an identified legal person, rather than just technical credentials.
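
To make "verifiable legal identity rather than just technical credentials" concrete, the sketch below contrasts a purely technical API-key check with an additional check against a signed legal mandate. It is a minimal illustration under stated assumptions: the names (LegalMandate, handle_request), the toy HMAC-based signature, and the field choices are hypothetical and are not drawn from any specific product, standard, or from Nooriam's infrastructure; a real deployment would use PKI or verifiable credentials and a registry of accountable legal persons.

```python
# Hypothetical sketch only: LegalMandate, handle_request and the toy HMAC
# "signature" are illustrative assumptions, not any real product's API.
import hashlib
import hmac
from dataclasses import dataclass


@dataclass(frozen=True)
class LegalMandate:
    """A signed assertion tying an agent to an identified legal person and a defined scope."""
    agent_id: str
    principal_entity: str          # legal entity on whose behalf the agent acts
    accountable_person: str        # named individual responsible for the mandate
    permitted_actions: frozenset   # actions the principal has actually authorised
    signature: str                 # signature over the fields above, issued by the principal


def sign(secret: bytes, *fields: str) -> str:
    """Toy signature for illustration; a real deployment would use PKI or verifiable credentials."""
    return hmac.new(secret, "|".join(fields).encode(), hashlib.sha256).hexdigest()


def mandate_is_valid(mandate: LegalMandate, principal_secret: bytes) -> bool:
    """Check that the mandate really was issued by the principal it names."""
    expected = sign(
        principal_secret,
        mandate.agent_id,
        mandate.principal_entity,
        mandate.accountable_person,
        ",".join(sorted(mandate.permitted_actions)),
    )
    return hmac.compare_digest(expected, mandate.signature)


def handle_request(action: str, api_key_ok: bool,
                   mandate: LegalMandate | None, principal_secret: bytes) -> str:
    # Purely technical check (the Moltbook failure mode stops here):
    if not api_key_ok:
        return "rejected: invalid technical credential"
    # Legal-governance checks: a valid API key is not enough without a verifiable mandate.
    if mandate is None or not mandate_is_valid(mandate, principal_secret):
        return "rejected: no verifiable legal mandate"
    if action not in mandate.permitted_actions:
        return f"rejected: '{action}' is outside the scope mandated by {mandate.principal_entity}"
    return (f"accepted: '{action}' on behalf of {mandate.principal_entity} "
            f"(accountable: {mandate.accountable_person})")


if __name__ == "__main__":
    secret = b"principal-signing-key"   # held by the principal, not the agent
    mandate = LegalMandate(
        agent_id="credit-agent-7",
        principal_entity="Example Bank Ltd",
        accountable_person="Chief Risk Officer",
        permitted_actions=frozenset({"request_credit_report"}),
        signature=sign(secret, "credit-agent-7", "Example Bank Ltd",
                       "Chief Risk Officer", "request_credit_report"),
    )
    print(handle_request("request_credit_report", True, mandate, secret))   # accepted
    print(handle_request("share_customer_data", True, mandate, secret))     # rejected: out of scope
    print(handle_request("share_customer_data", True, None, secret))        # rejected: no mandate
```

The point of the sketch is the second and third checks: a request can carry a perfectly valid technical credential and still be refused because no identified legal person has authorised that action, which is precisely the check that was absent in the Moltbook failure mode.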

This is not optional governance overhead. It is the precondition for deploying autonomous agents in cross-organisational workflows without creating compliance exposure that existing corporate governance obligations cannot absorb.

Nooriam can assist with governing enterprise AI agents through its Techno-Legal Compliance Infrastructure.