Ethical Dilemmas in Law: Steering Clear of Generative AI's Fake Case Citation Trap

Lawyers face many ethical and professional dilemmas as law firms adopt advanced technologies. Below, we examine one area of particular concern: lawyers submitting court filings that contain fake case citations generated by AI tools.

No lawyer is eager to face a reprimand from a judge or see their name in the news for using fake citations. But we all see story after story of how generative AI can deliver a competitive edge. How can you avoid being fooled by counterfeit citations while still tapping into AI's many benefits?

Learn what fake case citations look like. 

Generative AI (GAI) tools like ChatGPT are known to "hallucinate," i.e., they fabricate information that sounds plausible but isn't real. 

They offer citations for entirely made-up cases with astounding confidence. 

GAI may format fake citations properly and even refer to real cases. But their accuracy is far from guaranteed. GAI tools can misrepresent fundamental legal concepts or factual details of legal cases. They may rely on legal doctrines or principles that are incorrect or no longer relevant. Researchers recently concluded:

  • Legal hallucinations occur 69% of the time with ChatGPT 3.5 and 88% with Llama 2 when these models are asked specific, verifiable questions about random federal court cases.
  • LLMs (the large language models GAI tools rely on) often fail to correct a user's incorrect legal assumptions in a contra-factual question setup.
  • LLMs cannot always predict or do not always know when they are producing legal hallucinations.  

It’s important to distinguish between fact and fiction wherever it appears. Otherwise, you risk giving a false impression to the court.

What’s the harm of a fake citation?

Using public-facing GAI tools like ChatGPT can jeopardize your credibility and lead to court sanctions. Just ask Steven A. Schwartz and the law firm of Levidow, Levidow & Oberman P.C. Mr. Schwartz was involved in one of the first, most highly publicized instances of a lawyer caught using a fake AI citation. In the judge’s opinion and sanctions order, the court wrote of the harms that flow from the submission of fake opinions:   

  • The opposing party wastes time and money in exposing the deception. 
  • The Court’s time is taken from other important endeavors. 
  • The client may be deprived of arguments based on authentic judicial precedents. 
  • There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. 
  • It promotes cynicism about the legal profession and the American judicial system.
  • And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.

Clearly, trusting AI-generated citations without question is a recipe for disaster. However, GAI tools for lawyers are here to stay, making it imperative for lawyers to learn how to use GAI tools responsibly.  

Three ways to avoid hallucinated citations in court filings

Here are three ways to ensure the citations you rely on are accurate and credible:

Double-check every citation.

Never rely solely on the information a GAI tool presents. Manually verify each citation using a trusted and established resource that offers reliable access to court-accepted case law, e.g., legal databases like LexisNexis, Westlaw, Casetext, and Fastcase. 

There’s no substitute for your critical thinking, legal knowledge, and fact-checking capabilities (or those of a junior associate, paralegal, or other assistant).

Take notes on your legal research sessions. 

Document your legal research process, noting the AI tools you use and the steps you take to verify information. These notes will be incredibly helpful if anyone ever questions the accuracy of your citations.

Use tools that prioritize legal accuracy.

Popular legal research platforms like LexisNexis and Westlaw incorporate generative AI features to enhance your search experience. These features are designed specifically to prioritize legal accuracy. That makes them far less likely to fabricate citations, and they may even identify relevant cases and legal arguments you’ve missed.

Also, investigate emerging specialized legal AI tools like Harvey.ai and SaulLM-7B. Tools explicitly trained on legal data are less prone to generating fake citations. As new legal AI tools mature, they may become more useful than traditional legal research platforms.

Much remains to be seen! Subscribe to PractiPulse™ to stay informed about how generative AI is evolving in the legal field. Learn its strengths and weaknesses to maximize its benefits while avoiding pitfalls.
