The Responsible AI Problem
AI systems are not people under the law. In the future they may be. But even if AI systems become legally responsible, we will still face a fundamental problem: human punishments for breaking the law do not work against synthetic stakeholders.
The question of who is responsible for the acts of AI systems is currently being side-stepped by enterprises and tech vendors, who rely on the (correct) legal position that all AI outputs, be they harmful or rights-accruing, ultimately flow back to a legal person. As far as the law is concerned, a human is always in the loop. On this view, AI systems create no new problems, even if identifying the responsible legal person takes some digging.
This view has served us to date, but it will not serve us much longer, because the span of control of an AI system increasingly cannot be meaningfully moderated by a human. As we know, agents beget agents, which beget a spawn of further agents, which beget...
A partial solution
All legal punishments assume that the law will be enforced against a legal "person". In light of this, one response has been to suggest that we grant certain AI systems legal status as "persons", recognising them within the legal system much as the law recognises corporations as legal persons.
This is, at best, a partial solution, because even when dealing with non-human legal persons, legal punishments assume that enforcement will ultimately impact a human (e.g. a director of a corporation). In every respect, the law assumes the existence of human feelings, human desires and human bodies.
So even if AI systems become legally recognised, the fundamental problem remains: the legal system is designed around humans, and the law always looks for a person to hold accountable.
An AI system does not care if you put it in prison, shame it, repossess its vehicle or prohibit it from forming certain relationships.
Recognising certain AI systems as legal persons is a step in the right direction, but more is needed.
The solution
Meaningful enforcement against AI requires a legal solution that moves at the same speed, and with the same span of control, as AI itself. Nooriam provides embedded legal guardrails that can enforce against AI systems on-device. The law, built in.