Recently I wrote a short post about the implications of artificial intelligence (AI), especially as it applies to lawyers, in light of recent advancements in technology and the creation of tools such as ROSS and Watson.  In this post, I explore some of the difficult questions attorneys and legislators will face when dealing with AI-related criminal law issues.  Hopefully, the questions I pose and the theories I offer below will inspire some comments.

What Happens if a Robot Commits a Crime?

True AI would be just like the Mika Model discussed in my prior post – a story in which Mika, a lifelike robot with emotions and genuinely human interactions, ultimately kills her owner.  While that’s not here yet, the concept of a Mika Model running around the streets of Virginia or elsewhere has left me with a lot of questions as a criminal defense attorney.  For example, if Mika commits a crime, who is responsible?  What rights, if any, does she have to a lawyer or against unreasonable searches and seizures?  And what if she’s the victim?  Does she have any rights?  Is the assailant criminally liable, or only civilly?

The Mika Model story presents one approach to dealing with AI-related crime.  The author’s approach is simply to let the company deal with the problem: a company representative arrives on the scene to disable (or kill, depending on how you view Mika’s rights) the Mika Model that murdered her owner.  However, is that really the way society will want to deal with these types of issues?

One of the key components of criminal justice is deterring others from committing similar acts.  If AI functions in a manner similar to Mika, in that its software is constantly updated as each robot uploads its experiences to a cloud server, then perhaps the author’s approach isn’t the best, even if the company immediately takes control of the program.

Internal safety mechanisms aside, if AI learns from other machines’ experiences, then perhaps punishment in the traditional sense is appropriate.  This approach would be particularly important if multiple companies compete in the AI space and provide competing models that “learn on the go.”  Not all businesses are the same, and when it comes to software, there are often proprietary models, algorithms, and formulas that drive complicated machines.  Thus, for the sake of uniformity, it would be important to use traditional modes of punishment to make sure all AI-powered machines understood the consequences of crime.

On the other hand, perhaps showing machines the crimes of other machines would send a different message.  Perhaps it would embolden them, thereby leading to increased violence and crime instead of deterring it.  Additionally, being locked in a cell, eating bad food, and not having any entertainment may mean very little to machines that likely don’t eat, have remote access to the internet at all times, and couldn’t care less about how hard or soft their bed is.  Thus, if punishment were still used as a deterrent for AI crime, it would need to be reformulated to actually deprive the robots of things they cherish.

Punishing robots assumes that traditional concepts of intent and knowledge apply to AI-powered machines.  It’s easy to say a robot that’s driving 55 in a 25 should be held liable for its actions because no intent element is required for such offenses.  However, when you begin considering crimes like rape, robbery, and murder, where intent becomes an issue, the answer is far less clear.

AI, at its core, will always be a man-made creation due to the human coding element.  While the original code may be expanded upon by the robots’ experiences and other sources, man will have been responsible for coding the “thinking” part of the AI.  

If that’s the case, then can the machine be blamed for a flaw in the code?  For example, if Mika’s decision to murder her owner was caused by a bug in her system or an overlooked piece of code, then did she ever form the intent to murder?  Arguably no.
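To make the idea of a flaw in the code concrete, here is a minimal, purely hypothetical sketch in Python (the function name, threshold, and scenario are invented for illustration and are not drawn from any real system): a single inverted comparison in a safety check could authorize force in situations the designers never intended, so the resulting act flows from an error rather than from anything resembling formed intent.

# Hypothetical illustration only; the names and thresholds are invented.
FORCE_THRESHOLD = 0.9  # force was only ever meant to be authorized above this level

def should_use_force(threat_level: float) -> bool:
    # Intended rule: authorize force only when the perceived threat is extreme.
    # Bug: the comparison is inverted, so force is authorized in ordinary,
    # low-threat situations instead.
    return threat_level < FORCE_THRESHOLD  # should be: threat_level > FORCE_THRESHOLD

# A harmless, everyday interaction is misclassified as justifying force.
print(should_use_force(0.1))  # prints True

In a case like Mika’s, the harmful act in this sketch traces back to a typo in a single line, which is precisely the kind of defect that makes it hard to say the machine itself ever formed an intent.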

In fact, a bug or flaw of this nature may fit into a classic insanity defense: “the voices made me do it.”  In this case, the voices are the flaws or bugs in the programming.  On the other hand, if the machine develops its sense of morality through experience, which would be a scary way to program a robot, then criminal liability seems more fitting for the machine than for its creators.

If the Machine Is Not Criminally Liable, Then Is Anyone?

Could the manufacturer, the software programmer, or someone else be prosecuted?  Prosecuting them would no doubt create a chilling effect on development in this area if programmers and manufacturers believed they could be held responsible for programming bugs and system errors.  Furthermore, if the person or people responsible for programming the robot that commits the offense can be held liable, are they liable as accomplices or as principals?  These will be the most difficult questions to answer, especially given that even well-developed technology does not always work perfectly.  See, e.g., the Samsung Galaxy Note 7….

If society decides to prosecute machines, then it must decide whether the machines should be tried before the same courts as humans or whether they’ll have separate courts.  Whatever system is put in place, society will also have to determine whether the Founding Fathers intended AI-powered machines to be part of “We the People” and whether the rights and protections of the U.S. Constitution will apply to their cases.

In addition to considering the liability of machines and their creators to humans, what happens if an owner “kills” his own machine through an act of force or violence?  For example, if Mika had been killed by her owner, would he be criminally culpable for that?  After all, Mika is merely a piece of property.

However, even property sometimes has rights.  For example, a pet owner could not lawfully engage in bestiality with his own pet.  Yet, dogs and cats are traditionally considered property under Virginia law.  

Moreover, criminal justice seeks to protect the greater community.  One school of thought may be that, if a particular person is serially “killing” AI robots, that person should perhaps be incarcerated because they may switch to humans or are simply unsafe to be in public.  On the other hand, criminal defense attorneys will argue, this is a safe person who is taking his anger and aggression out on an appropriate object; killing robots is akin to punching a punching bag or smacking your TV or computer when it’s acting up.  However, if history is an indicator, it is possible that crimes against machines may be prosecuted as if they were against humans.  See, e.g., 18 U.S.C.S. § 2256(8)(B) (2003) (prohibiting computer-generated child pornography).

The way forward is, and will remain, unclear for the time being: this technology seems farther away than it probably is, and given the political climate in the U.S., it is not likely to become a hot-button issue anytime soon.  But hopefully legislators and others will be discussing these issues before long, so that the law does not merely react to new technology but creates a framework within which it can function.

If you, like me, are interested in technology as applied to the legal field and how artificial intelligence may or may not change our criminal justice system, please call me at Greenspun Shapiro today.

Join The Conversation
Isabella B 03/30/2019 06:27 PM
I enjoyed this article and am super interested in learning and reading as much as I can about AI crime – specifically, what a trial for a murder committed by an AI humanoid would look like in the future.
GSPC 09/16/2020 05:56 PM
Isabella, thank you for your feedback! We are happy to hear that you enjoyed this article. We think it's a very interesting topic.