
When most attorneys think about AI, the question that usually comes to mind is “Will AI take my job?” With headlines such as “Lawyers could be replaced by artificial intelligence” and “Are Robots Going to Take Our Legal Jobs?” splashed across newspapers, it is no surprise that artificial intelligence evokes fear in many attorneys.  However, the AI shift in the practice of law will not eliminate all jobs; it will lead to new opportunities.  Specifically, the rise of AI will create a new, viable area of practice, with complexities spanning civil and criminal law.  Because of these complexities, legislatures and courts will have to create new laws and evolve old ones.


In 1956, John McCarthy coined the term and defined AI as “the science and engineering of making intelligent machines.”  Since then, the definition of AI has varied depending on the source.  Despite the lack of a concrete definition, most definitions share a common focus: the ability of a machine to replicate the intelligence of a human being.  For this reason, AI is often evaluated on a scale from weak to strong.  Strong AI, or true intelligence, describes systems built to replicate the intelligence of a human being; its core characteristic is the ability to make judgments and decisions without outside influence, in a manner similar to how a human would behave.  Weak, or narrow, AI focuses on creating a system that works without incorporating the logic or reasoning of a human being.  Between strong and weak AI are systems that use human logic and reasoning as a model, but not as the system’s end goal.

The most common AI available today falls under the category of weak AI and can be seen in technology like Siri or Alexa.  Strong AI does not yet exist; experts in the industry have struggled to replicate human intelligence.  As they gain a better understanding of the human brain, they come closer to developing strong AI.


Although strong AI does not currently exist, the intersection of AI and the law is developing from theories and hypotheticals to real court cases.  Below are just a few examples of how AI has already begun to interact with the legal industry and a brief discussion on current laws.


The AI Criminal

In 2014, a group of artists in Switzerland created an AI system, Random Darknet Shopper, for an art exhibit.  The exhibit focused on the dark web, a hidden, unindexed part of the internet offering access to over 16,000 items for purchase, both legal and illegal.  The artists built Random Darknet Shopper as an automated online bot that would purchase items on the dark web to be placed in the exhibit.  Once set loose, the AI purchased legal items like Nike trainers and illegal items like ecstasy pills.  Eventually, the illegal items were seized by the government, and the prosecutor had to decide whether to charge the AI or the artists.  In the end, the Swiss prosecutor pressed no charges against either, reasoning that the items were not being used in a crime and that the public should have the right to view them in such an exhibit.

Current Laws and the Future for the AI Criminal

Courts and prosecutors in the United States have not yet had to evaluate criminal liability when AI is involved.  Under current laws, AI cannot be prosecuted for two reasons.  First, AI is not considered a human being, and under most criminal statutes, such as Ohio Revised Code 2901.21, the actor must be a “human.”  Second, even if “human” were redefined to encompass AI, AI cannot satisfy the actus reus (the physical element of the crime) or the mens rea (the requisite mental state of the actor).  Generally, for actus reus to be met, the actor’s conduct must be voluntary.  Given AI’s current limitations, AI cannot act voluntarily beyond the confines of the algorithm its creator designed.  Furthermore, AI at this stage lacks the cognitive ability to form the requisite mental state required under most state laws.

For now, courts and prosecutors must focus on holding the AI creator and the AI user accountable for the AI’s actions.  In doing so, they can still apply the traditional model of criminal liability and treat the AI as a tool of its creator or user.  However, as AI becomes more intelligent, that traditional model may need to change to encompass AI that acts beyond the intentions of its users or creator.

The AI Court: A Trade Secret

In 2013, Eric Loomis was arrested by officers in Wisconsin on charges that he was the driver in a drive-by shooting.  Mr. Loomis took a plea deal that convicted him of two minor crimes, and he received a six-year prison sentence after the court found he was a risk to the community.  In making its decision, the court relied in part on a score from COMPAS, a weak AI system that assesses whether a person is predicted to recidivate by comparing his or her data and information with those of people in similar data groups.

Mr. Loomis appealed the court’s decision to use COMPAS all the way to the Supreme Court of Wisconsin.  In State v. Loomis, 881 N.W.2d 749 (Wis. 2016), Mr. Loomis argued that using COMPAS violated his due process rights, asserting that he was unable to verify the accuracy of his sentencing because COMPAS’s algorithm is held as proprietary information.  The Supreme Court of Wisconsin disagreed, holding that courts could rely on COMPAS as part of the risk assessment.  The court was satisfied that Mr. Loomis could verify the accuracy of COMPAS’s report because COMPAS provided a brief sheet explaining how the questions were evaluated.

Current Laws and the Future for the AI Court: A Trade Secret

COMPAS was considered proprietary information by the Wisconsin Supreme Court.  It is just one example of courts evaluating AI technology and determining whether there is a need to reveal the AI’s proprietary information or algorithm.  Under current laws, AI may find some protection under patent law; however, patent law is not a complete defense because some of the underlying technology used in AI, such as data compilations, may not be patentable.  AI can, however, readily be protected under federal and state law as a trade secret.  In general, AI meets the requirements of a trade secret because the AI is an economically valuable secret; for an AI system to be protected as a trade secret, reasonable measures must be taken to ensure that secrecy is maintained.

General counsel for companies working on AI must ensure that reasonable measures are put in place to protect the AI as a trade secret.  Those seeking admission or discovery of an AI algorithm, in turn, need to inform the courts of the importance of understanding how the algorithm works and of having the underlying data.

The AI Employer

In 2014, Amazon began building an AI system that specialized in reviewing resumes.  The system was built using natural language processing, which allowed the AI to understand language as humans speak it.  The AI was also fed ten years’ worth of resumes to help it recognize key patterns that would weed out weak candidates.  After only a year, however, Amazon stopped using the AI: the system had trained itself to weed out women and prefer men, regardless of qualifications.  The flaw lay in how the system was trained to spot patterns.  Specifically, the resumes the AI was given to rely upon in seeking good candidates heavily favored men.
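The failure mode Amazon encountered can be illustrated with a toy sketch.  The data, terms, and scoring scheme below are invented for illustration only (this is not Amazon’s actual system): a scorer that learns word weights from historical hiring decisions absorbs whatever bias those decisions contain, so a gender-correlated term drags a score down even when the candidates’ skills are identical.

```python
# Toy illustration of learned hiring bias (hypothetical data, not
# Amazon's system): word weights are learned from past hire/no-hire
# decisions, so historical bias becomes part of the model.
from collections import defaultdict

# Hypothetical training history: (words on the resume, was hired?).
# The "hired" resumes skew male, as in the biased historical data.
history = [
    ({"python", "chess club"}, True),
    ({"java", "chess club"}, True),
    ({"python", "women's chess club"}, False),
    ({"java", "women's chess club"}, False),
]

def learn_weights(history):
    """Each word gains weight when it appears on a hired resume,
    loses weight when it appears on a rejected one."""
    weights = defaultdict(float)
    for words, hired in history:
        for word in words:
            weights[word] += 1.0 if hired else -1.0
    return weights

def score(resume_words, weights):
    """Sum the learned weights of the words on a resume."""
    return sum(weights[word] for word in resume_words)

weights = learn_weights(history)
# Identical skills; the only difference is a gender-correlated term.
print(score({"python", "chess club"}, weights))          # positive score
print(score({"python", "women's chess club"}, weights))  # lower, negative score
```

The point of the sketch is that no rule ever mentions gender: the model simply learns that a gender-correlated phrase predicts past rejection, which is exactly the kind of pattern federal anti-discrimination law prohibits an employer from acting on.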

Current Laws and the Future for the AI Employer

Under federal law, an employer cannot discriminate in the hiring process; consideration of a person’s race, religion, gender, and similar characteristics is not allowed.  Because AI systems like Amazon’s can make decisions that clearly violate the law, the general counsel of companies using AI systems need to be on alert, evaluating the technology and ensuring that the results are legal and non-discriminatory.


The complexities of AI are already reaching our courts, and AI is sure to present more legal issues in the future, including liability for autonomous-vehicle accidents, ownership and licensing of works of art and literature created by AI, and the constitutionality of using digital AI avatars as lie detectors.  As AI technology continues to advance and reach into every aspect of our lives, it will continue to infiltrate the law.  As AI becomes intertwined with the law, attorneys on both the civil and criminal sides will need to be prepared to handle the issues AI creates.  Attorneys should not fear this technology; they should embrace it and the new opportunities that lie ahead.


Republished by permission of the Cleveland Metropolitan Bar Association and the author.


Alexandra (Ali) Nienaber is an associate at Koehler Fitzgerald LLC, where she works on commercial litigation and Out-of-Network defense.  She is a graduate of Michigan State University College of Law.  Before joining Koehler Fitzgerald LLC, Ali was the Legal Innovation Counsel for Michigan State University College of Law’s LegalRnD program.  She has been a CMBA member since 2017.  She can be reached at (216) 539-9370, anienaber@koehler.law, or @AliNNienaber.