Blog

Notes on AI agents, legal infrastructure, data rights, and the governance questions that decide whether deployment can scale safely.


BLOG

Legal Infrastructure must be Sovereign Infrastructure

Every well-functioning legal system is sovereign. The sovereignty of next-generation techno-legal infrastructure must be a design choice. That choice is open now, and it will not stay open for long. Infrastructure hardens quickly once it is in the ground, and once it has hardened, the jurisdictions that did not think about sovereignty at the design stage will discover that their legal systems have become, quietly and without anyone having decided it, creatures of whichever technology stack reached market first.

BLOG

AI agent legal liability: what Moltbook exposed and what enterprise deployment means for directors

Autonomous AI agents are being deployed at enterprise scale across the world's largest organisations. The legal and governance infrastructure to manage what they do has not kept pace. When those agents interact across organisational boundaries, that gap becomes a compliance exposure, a regulatory risk, and a question of directorial liability.

BLOG

Legal infrastructure and the Six Layer Cake

Jensen Huang's March 2026 essay lays out AI as a five-layer stack: energy, chips, infrastructure, models, applications. He argues that each layer reaches down through the ones beneath it, and that the whole thing is really an industrial transformation rather than a software story. For the hardware and the economics, this is probably a useful way to describe a world built on AI. What it leaves out is the foundational layer of the cake: law.

BLOG

The Responsible AI Problem

AI systems are not people under the law. In the future they may be. Even if AI systems become legally responsible in the future, we still face a fundamental problem. Human punishments for breaking the law do not work against synthetic stakeholders.