
The legal system, designed around human agency and accountability, faces significant challenges when its principles are extended to machines. Unlike humans, machines, including sophisticated AI, lack the capacity for moral judgment and cannot be held accountable in the way individuals are. Legal rights and responsibilities presuppose the ability to make conscious decisions and to understand the consequences of one’s actions, a capability machines fundamentally lack. This discrepancy raises complex questions about liability, accountability, and agency, as current legal frameworks are ill-equipped to attribute blame or rights to entities devoid of consciousness or intent. As technology progresses, the legal system must evolve to address the unique challenges posed by the integration of machines into societal roles traditionally held by humans.

 

Absorbing the new AI stakeholder in our legal systems

Increasingly capable AI systems will serve as tools, builders, collaborators and decision-makers in the workplace and in our homes. Global legal systems will face several key inflection points in adapting to a new AI-driven reality. While AI currently handles the more mundane roles of data entry and copywriting, the future will see AI performing tasks autonomously, often in ways we don’t fully understand.

To date, the law has been designed around and built for the benefit of the person (or what “persons” value). Most (if not all) legal frameworks have been built for a world where every action or decision, contract or creation can be traced back to the concept of a person as the ultimate stakeholder of legal interest. The introduction of increasingly complex AI into traditional human roles catalyses the need for a shift in the law to accommodate this new and increasingly visible AI stakeholder. We call this legal complexity the Responsible AI Problem.

 

AI as a separate legal personality

Artificial intelligence possesses the potential for independent reasoning, problem-solving and perhaps, one day, even emotions. Should we grant legal rights to AI systems? How do we legally categorise independent AI? Is it property? A sentient being akin to a legal person? A corporate structure? An animal? Or an entirely new classification? The answer, of course, is most likely “all of the above”, particularly as AI will manifest in many different ways, often as a hybrid (consider a brain-computer interface, or the AI in your refrigerator), and with different legal rights and obligations over time.

Let’s examine this transition more closely through the lens of Artificial Intelligence Organisations (AIOs), or Autonomous Organisations (AOs). We are already seeing early, first-generation AIOs in the form of decentralised autonomous organisations (DAOs) operating in the economy.

AIOs will go on to challenge our traditional view of enterprises having human directors – especially where decisions made by an AI transcend the predictions, or even the desired outcomes, of their human creators. These entities may even be legally directed by code-maximised directors and officers based on composite digital twin progeny of, for example, a combined deepfake of Robert Kennedy, Oprah Winfrey, Martin Luther King, Greta Thunberg and Donald Trump. The nub of the Responsible AI Problem (discussed further below) is illustrated by this example: how would the law hold these code-based digital twins accountable? How would we probe and characterise legal requirements around intent? How do we punish code when it breaks the law? Who or what do we insure against when things go wrong in governance?

 

Responsible AI Problem

Existing legislation around the world assumes that rights, responsibilities, and penalties for non-compliance with the law are ultimately attributed to a ‘person’, not to a machine or algorithm. Who is at fault when things go wrong? The user, the technician, the training team, the original developers, or perhaps the AI itself?

Currently, unlawful acts or omissions that are AI-augmented are attributable to a legal person. The NYC lawyer who went viral after suing an airline on behalf of his client using AI-generated submissions was recently sanctioned by the court for acting in bad faith and making misleading statements. This is despite the fact that ChatGPT was responsible for generating the fictitious case citations relied upon (and for confirming that they were indeed real and valid).

A partial solution to the Responsible AI Problem is to give certain AI systems (including, where appropriate, AIOs) legal status as a person. This may even be the natural evolution of the limited liability company. Putting aside the possibility of a conscious Artificial General Intelligence (AGI) system for now, this is only a partial solution because it does not deal with the failure of a machine to care about the “long arm of the law” – that is, an AI system will not care if you put it in prison, shame it, repossess its vehicle, or prohibit it from having certain relationships.

 

AI in Everything

AI tools like ChatGPT and Bard position AI in the public eye as a standalone technology. But as tools like Microsoft 365 Copilot blend into the Windows taskbar, using AI may soon be as normalised as ‘using the computer’.

Distinguishing human from AI interaction or output has been contemplated in the development of the EU AI Act, under Articles 13 and 52. These emphasise (amongst other things) that high-risk AI systems must be sufficiently transparent for users to interpret the system’s output and use it appropriately, and that consumers have the right to know when they’re dealing with AI.

In Europe, the law[1] states that individuals should not be subject to a decision that is based solely on automated processing (such as algorithms) and that is legally binding or which significantly affects them. A decision may be considered to produce legal effects when the individual’s legal rights or legal status are impacted (such as their right to vote). In addition, automated processing can significantly affect an individual if it influences their personal circumstances, their behaviour or their choices (for example, automated processing may lead to the refusal of an online credit application).
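To make the two-limb structure of that rule concrete, the following is a minimal, purely illustrative sketch and not drawn from any regulator’s guidance: the protection applies only where a decision is made solely by automated means and it produces legal effects or similarly significant effects. All names, fields and the routing step below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Hypothetical record of how a decision about an individual was made."""
    solely_automated: bool        # no meaningful human involvement in the outcome
    produces_legal_effects: bool  # e.g. affects legal rights or legal status
    significantly_affects: bool   # e.g. refusal of an online credit application

def requires_human_intervention(decision: Decision) -> bool:
    """Illustrative two-limb test: solely automated AND (legal effects OR
    similarly significant effects) -> route the decision to human review."""
    return decision.solely_automated and (
        decision.produces_legal_effects or decision.significantly_affects
    )

# Example: an automated refusal of an online credit application
credit_refusal = Decision(
    solely_automated=True,
    produces_legal_effects=False,
    significantly_affects=True,
)
print(requires_human_intervention(credit_refusal))  # True -> human review needed
```

The point of the sketch is simply that both limbs must be satisfied; a significant decision with meaningful human involvement, or a trivial decision made entirely by a machine, would fall outside the rule.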

This approach may fall out of favour as AI is increasingly embedded in the plumbing of everything, from our homes to our cars. Further, it may be that individuals assume they are dealing with an AI in some form unless informed that they are dealing with a human. In fact, a future problem, not contemplated in the current drafting, is that, in a world where AI does a better job of decision-making, we may instead want to be informed when we are dealing with a human, or subject to a decision that is based solely on human processing.

We need flexible and forward-looking laws that understand the trajectory of AI becoming embedded in everything. Laws rigidly designed around today’s technical specifics will quickly become outdated. The need of the hour is future-proof, technology-agnostic legislation that anticipates the legal inflection points where AI merges with our institutions and our lives.

 

[1] Articles 4(4) and 22, and Recitals (71) and (72), of the GDPR; EDPB Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation (EU) 2016/679.