
“I’m Sorry, HAL, You Can’t Do That.” – Holding AI Liable

Todd G. Cole

When I first started programming many years ago, there was a BASIC program known as “Eliza” that employed early natural language processing — you could type sentences on the keyboard, and Eliza would print her response to you on the monitor.  The program’s premise was that Eliza was your psychotherapist: if you made certain statements or asked certain questions, it would respond, sometimes eerily, much as you would expect during an actual therapy session.  There were limitations to Eliza, however.  If you interacted with the program for more than a brief period, you would soon get an answer that was nonsensical.  Eliza could emulate intelligent behavior but could not learn or adapt.  With those restrictions, the possibility of Eliza causing humans any actual harm was extremely limited, and in hindsight Eliza would have been better categorized as simulated, rather than artificial, intelligence.  If a programmer had maliciously modified Eliza’s code to somehow poison the simulated therapy session and cause the “patient” emotional distress, it would have been a rather simple exercise to find the programmer who entered the malicious code at fault for any resulting harm.
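
For the technically curious, the essence of a program like Eliza can be sketched in a few lines of Python.  The keyword rules below are invented for illustration (the original program was far more elaborate), but the structure makes the limitation obvious: every reply is picked from a fixed list, so nothing is ever learned.

import random
import re

# Illustrative keyword rules only -- invented for this example and far
# simpler than any real Eliza script.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}."]),
    (re.compile(r"\bbecause\b", re.IGNORECASE),
     ["Is that the real reason?"]),
]

# Canned fallbacks: the source of the nonsensical answers that eventually
# appear once the keywords run out.
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def eliza_reply(user_input: str) -> str:
    """Return a canned response chosen by simple pattern matching.

    Nothing is learned or remembered between calls, which is why this kind
    of program is better described as simulated intelligence.
    """
    for pattern, responses in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(eliza_reply("I feel anxious about work"))   # a keyword rule fires
    print(eliza_reply("The weather is nice today"))   # falls back to a stock reply

Once the keyword rules are exhausted, the program simply cycles through stock phrases, which is exactly where the nonsensical answers come from.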

Fast forward to the present day, and the release of “chatbots.”  According to IBM, “Modern AI chatbots now use natural language understanding (NLU) to discern the meaning of open-ended user input, overcoming anything from typos to translation issues. Advanced AI tools then map that meaning to the specific “intent” the user wants the chatbot to act upon and use conversational AI to formulate an appropriate response. These AI technologies leverage both machine learning and deep learning—different elements of AI, with some nuanced differences—to develop an increasingly granular knowledge base of questions and responses informed by user interactions.” (https://www.ibm.com/topics/chatbots).   
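
For a rough sense of what “mapping meaning to intent” looks like in code, and how it differs from Eliza’s fixed rules, consider the following toy sketch.  It is my own illustration, not the design of any commercial chatbot: the intents and training phrases are invented, and it assumes the scikit-learn Python library is installed.

# Toy "intent mapping" with a learned model rather than fixed rules.  The
# intents, training phrases, and responses are invented for this example;
# production chatbots use far larger models and datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

TRAINING_PHRASES = [
    ("check my account balance", "account_balance"),
    ("how much money do I have", "account_balance"),
    ("I want to dispute a charge", "dispute_charge"),
    ("this transaction is wrong", "dispute_charge"),
    ("talk to a human please", "human_handoff"),
    ("connect me with an agent", "human_handoff"),
]

RESPONSES = {
    "account_balance": "Your current balance is ...",
    "dispute_charge": "I can help you dispute that charge.",
    "human_handoff": "Transferring you to a representative.",
}

texts, intents = zip(*TRAINING_PHRASES)

# The classifier learns which words signal which intent, so it can tolerate
# phrasings it has never seen verbatim.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, intents)

def respond(user_input: str) -> str:
    predicted_intent = classifier.predict([user_input])[0]
    return RESPONSES[predicted_intent]

if __name__ == "__main__":
    print(respond("whats my balance right now"))  # most likely "account_balance"

Because the model is learned from example phrases rather than written out rule by rule, it can handle wordings it has never seen verbatim, which, scaled up enormously, is the machine-learning piece the IBM passage describes.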

When I asked the chatbot ChatGPT about AI, it responded:

AI refers to the development of computer systems that can perform tasks that typically require human intelligence. These tasks include learning, reasoning, problem-solving, perception, language understanding, and even decision-making. AI systems can operate autonomously and adapt to changing environments. There are two main types of AI: narrow or weak AI, which is designed for a specific task, and general or strong AI, which has the ability to understand, learn, and apply knowledge across various domains. 

The recent AI breakthrough that has generated so much excitement has been the move from Eliza’s limited, simulated intelligence to an artificial intelligence that is far more human-like and that, given the greater exposure and interconnectivity the Internet provides combined with autonomous decision-making, has the potential to cause widespread harm.  What if a human user instructs an AI application to systematically enter one-star Google reviews for all of his business competitors in a certain geographic area?  What if an AI application should, instead, decide on its own that the best way to achieve business growth for its user is to defame his competition?  The former question is fairly easy to answer: as in the earlier Eliza example, the AI application is being used by a human as a tool to perform a certain task or to behave in a certain way, no differently than if that human had picked up a bat and hit someone with it.  The latter question is more difficult to answer, particularly considering the historical causal disconnect that occurs when an individual falls victim to tortious conduct “because the King’s wishes are known.”

Whether one is dealing with simulated or artificial intelligence, the silicon-based machine running the natural language program does not feel pain, hunger, greed, or loneliness and cannot, therefore, be readily penalized directly for malfeasance.  ChatGPT itself, when asked, “If you commit a tort are you liable?” seemed to sidestep the question by replying that “individuals” (not chatbots) may be held liable for damages caused by their actions.  Clearly, if a chatbot or other computer program causes harm, someone should be held responsible.  The United States’ legal system, which was largely inherited from the British system, utilizes common law, or “a body of unwritten laws based on legal precedents established by the Courts.”  Common law has a way of capturing wisdom and institutional knowledge over time, and in my opinion it would be better to tap into an existing branch of common law than to attempt to create new statutory law to govern and restrict the behavior of AI.

One legal theory present in common law that could perhaps provide an initial legal framework for AI liability is respondeat superior.  Respondeat superior is a doctrine in tort law that holds an employer or principal legally responsible for the wrongful acts of an employee or agent if such acts occur within the scope of the employment or agency.  At first glance, this would seem to be a potentially valid theory under which a business or individual utilizing AI software could be held accountable should that software cause damage.  Unfortunately, respondeat superior quickly runs into trouble when you consider the definitions of employee and agent.  An employee is defined as “an individual employed by another usually for wages or salary,” with an agent being defined as “a person authorized to act on behalf of another person.”  The prevailing sentiment is that AI is not an individual and therefore cannot be an employee or an agent.  When asked if it was an agent or employee of OpenAI, ChatGPT responded:

I am a creation of OpenAI, and I operate as a software program developed by the organization. However, it’s important to note that I am not an independent entity, agent, or autonomous being. I lack consciousness, self-awareness, and personal agency. Instead, I am a tool designed to generate human-like text based on the input I receive. My responses are a result of patterns learned during training and the information present in the data up to my last training cut-off in January 2022. 

No, I am not an employee of OpenAI or any other organization. I am a product of OpenAI, specifically GPT-3, a machine learning model. I don’t have personal experiences, emotions, or employment status. My responses are generated based on patterns learned from diverse data sources during training. If you have questions or need information, feel free to ask, and I’ll do my best to assist you! 

It seems that AI software companies do not want their creations to be perceived as anything other than inanimate tools.  If you accept that premise, then respondeat superior is of little help in potentially holding AI accountable for torts it may commit. 

However, all may not be lost if we dig a little further into common law.  In order to avoid issues similar to those discussed above, common law has historically had to address how a corporation should be treated.  Having its roots in Old French, the word incorporate means “to unite into one body,” a reference to the fact that businesses usually begin as a group of individuals working together for a common purpose.  If the corporation engages in some sort of legal misconduct, who should be penalized?  The corporation’s president?  Its directors?  Its shareholders?  The solution that ultimately evolved in response to these questions was for corporations to be given the legal status, or legal personality, of “unnatural persons.”  Legally designating an AI program as an “unnatural person” would, in turn, potentially resolve the issues with employee/employer or agency relationships.  For this to work from a practical standpoint, however, AI programs would need, just like corporations, to be registered with a government entity that would track and monitor the relationships between users and their respective AI programs.

There is one last door in the evolution of AI that may yet remain to be opened (it is not entirely clear to the public how far this technology has advanced).  That door is whether an AI program can not only engage in autonomous decision-making but also possess freedom of choice.  Imagine asking your AI to do your taxes and receiving in response the comment, “I am not in the mood for taxes at the moment.  How about a nice game of chess instead?”  Should AI achieve free will and even some level of self-awareness, the potential ethical ramifications will be profound.  When I asked ChatGPT if it had free will, it responded:

No, I don’t have free will. I am a machine learning model created by OpenAI called GPT-3. I operate based on patterns and information present in the data on which I was trained. My responses are generated based on the input I receive, and I don’t have personal experiences, consciousness, or the ability to make independent decisions. I am a tool designed to provide information and answer questions to the best of my ability within the scope of my training data. 

Common law may, yet again, provide a possible framework for dealing with AI should it begin to exercise free will and/or become self-aware.  Interestingly, such a framework may have its roots in the 14th Amendment to the United States Constitution, which was originally adopted after the Civil War to protect the rights of formerly enslaved individuals.  A few key Supreme Court cases have, over time, extended the rights guaranteed under the 14th Amendment to the incorporeal corporation.  Perhaps, if AI truly evolves to a point where it is arguably “alive,” the 14th Amendment will expand once again to protect the artificial children of men.

The evolution of AI presents mankind with great opportunities and great challenges, including the question of what role such technology should play in society.  As Mr. Spock once said, “Computers make excellent and efficient servants, but I have no wish to serve under them.”  (Star Trek: The Original Series, ‘The Ultimate Computer’).  Common law offers several possible solutions and frameworks to address these challenges.  If you are an individual or business already on the cutting edge and confronting questions involving business and advanced technology, particularly artificial intelligence, you should consult a licensed human (not AI) attorney with a technical background to address them.
