RESEARCH · WORKING PAPER

The Legal Attribution Gap: Legal Identity, Rights and Liability in Cross-Organisation Multi-Agent Systems

A framework for giving cross-organisational AI agents legal identity, data rights, and enforceable liability pathways before harm and value attribution become unmanageable.


ABSTRACT

The deployment of autonomous AI Agent systems across organisational boundaries is outpacing the development of the governance infrastructure it requires. While the technical protocols governing agent authentication and communication, principally MCP and A2A, are maturing rapidly, protocol-layer maturity does not produce legal legibility at the governance layer. Legal legibility, the capacity of a legal system to recognise, interpret, and regulate activities by mapping them onto existing legal concepts of personality, duty, rights, and liability, cannot be generated by technical protocols designed to solve for correctness, interoperability, and authentication. The resulting gap is bidirectional: it concerns not only the attribution of liability when an AI Agent causes harm, but also the allocation of the rights, including data rights and economic rights, generated by agent activity in the field. We characterise this as the legal attribution gap and argue that it represents a material and largely unaddressed source of organisational risk in cross-organisational multi-agent deployments.

Building on an established taxonomy of six critical failure modes in multi-agent systems, we introduce a Techno-Legal Infrastructure Framework (TLIF) comprising four interlocking components: an Independent Legal Registry providing authoritative identification and authentication of registered AI Agent legal identity; Legally Authenticated Data (LAD), giving data assets verifiable legal identity and chain of custody; Legally Authenticated AI Systems (LAS), giving AI Agents verifiable legal identity and live legal status, with foundational governance instruments cryptographically tethered to the LAS object; and Smart Legal Contracts (SLCs), operating legal agreements expressed in natural language and executable code that perform contractual obligations in real time, including to scope, gate, and legally terminate AI Agents' access to live data streams throughout a workflow. The TLIF not only maintains a continuous, legally admissible record of performance and conduct, but also performs legal revocation and deprovisioning as executable contractual acts, propagating termination through the delegation chain at machine speed.

We argue that this techno-legal infrastructure is not optional governance overhead but a precondition for the responsible, risk-managed, and economically coherent deployment of autonomous AI Agent systems at enterprise scale, and that without it neither the legal liability nor the economic value produced by agentic AI systems can be properly attributed, governed, or enforced.
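To make the gating and revocation mechanics concrete, the following is a minimal illustrative sketch of the abstract's core loop: an SLC consults a registry of live legal status before granting data-stream access, records each act in an auditable log, and propagates termination through a delegation chain. All class, method, and field names here are hypothetical assumptions for exposition; they are not drawn from any published TLIF specification.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: names and structures are illustrative
# assumptions, not a specified TLIF implementation.

@dataclass
class LegalRegistry:
    """Stand-in for an Independent Legal Registry: agent ID -> live legal status."""
    status: dict = field(default_factory=dict)  # e.g. {"agent-a": "active"}

    def is_active(self, agent_id: str) -> bool:
        return self.status.get(agent_id) == "active"

    def revoke(self, agent_id: str) -> None:
        self.status[agent_id] = "revoked"

@dataclass
class SmartLegalContract:
    """Stand-in for an SLC gating a data stream and executing termination."""
    registry: LegalRegistry
    delegations: dict = field(default_factory=dict)  # delegator -> [delegatees]
    audit_log: list = field(default_factory=list)    # continuous record of conduct

    def grant_access(self, agent_id: str, stream: str) -> bool:
        # Access is gated on live legal status; every decision is logged.
        allowed = self.registry.is_active(agent_id)
        self.audit_log.append(("access", agent_id, stream, allowed))
        return allowed

    def terminate(self, agent_id: str) -> None:
        # Executable contractual act: revoke the agent, then propagate
        # termination recursively through its delegation chain.
        self.registry.revoke(agent_id)
        self.audit_log.append(("terminate", agent_id))
        for delegatee in self.delegations.get(agent_id, []):
            self.terminate(delegatee)
```

In this toy model, terminating a delegating agent immediately revokes every downstream delegatee, so a subsequent `grant_access` call for any agent in the chain is refused and the refusal is logged, mirroring the abstract's claim that revocation propagates at machine speed while leaving an admissible record.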