Humans Are More Complex than AI

The reality of human experience is that nothing is ever as simple as it seems; we are complicated. With this complexity comes a disconnect between the law and artificial intelligence ("AI"). The legal system is built on moral responsibility and subjective judgment. AI, by contrast, operates mechanically and without any ethical obligations. As AI develops rapidly, confusion arises about how and when it can and should be used in the law.

While AI can seem trustworthy and reliable, using it for legal matters carries significant risks. Treating AI as a substitute for the unique capabilities human lawyers provide can create operational and ethical failures. These problems include: fabricated legal information ("hallucinations"); confidentiality risks; bias; rigid, mechanical operation; and unclear accountability for errors.

Fabricated Court Decisions 

One of the most dangerous risks for all practitioners, but especially for those acting pro se (i.e., on one's own behalf), is AI's tendency to produce hallucinations. Generative AI tools, such as ChatGPT and Google Gemini, have repeatedly produced fabricated, non-existent court decisions, invented quotes, and inaccurate analyses of the law.

In other words, the analysis of the law could simply be incorrect or, more dangerously, the content itself could be completely fabricated. While AI-generated citations may look trustworthy, they often are not, and submitting them to a court carries serious consequences. For instance, if you filed a motion that included fabricated cases, you could face fines, sanctions, and/or outright dismissal of your claim.

AI Does Not Understand Confidentiality  

Additionally, unlike human attorneys, AI is not bound by ethical and professional rules. Lawyers are required to protect client communications and maintain strict confidentiality. Generative AI, by contrast, often uses the information entered into it to train and improve its systems. Putting your information into an AI tool is dangerous on both a personal and a legal level: if you enter confidential and private case details, a court could treat that information as voluntarily disclosed and strip it of its legal protections. By simply using an AI tool, you risk exposing your sensitive information to the public.

How to Avoid AI Problems 

The overwhelming consensus is that AI is not going anywhere. Although there are certain dangers associated with AI, there are also benefits and conveniences. Because of the significant problems it poses in the legal field, however, the best approach is to use AI as a tool, NOT as a substitute for a lawyer.

To protect yourself and your interests:

  1. Verify, verify, and verify some more: You must always verify any information generated by AI. AI has absolutely no moral or ethical obligation to protect you.  

  2. Hire a human attorney: Attorneys can think creatively and solve problems in ways AI cannot yet manage. Remember, attorneys are trained professionals, and using one will ensure your documents are drafted properly, your information is protected, and the law is applied correctly.

AI cannot provide the nuanced understanding, interpretation, and application of law, or the legal protections, that human attorneys are required, by law, to give. As such, please be extremely careful when using AI in legal matters.

Written By: Taryn Janes

Madison Staples

Director of Marketing and Communications
