US | Incident Report | Professional Guidance
2025-09-03

LACBA newsletter ethics column warns of sanctions tied to AI-hallucinated citations

AI Model: Generative AI (examples cited: ChatGPT) | Tags: Legal Ethics, Professional Responsibility, Legal Tech, Hallucinations, Guidance

I. Executive Summary

A Los Angeles County Bar Association (LACBA) ethics column authored by John W. Amberg highlights professional-responsibility risks as lawyers incorporate generative AI into legal drafting. The piece emphasizes that AI is not a substitute for lawyer judgment, warns that filing unverified AI output can lead to embarrassment and court sanctions, and urges lawyers to answer truthfully if questioned about their use of AI.

II. Key Facts

  • The column was published in LACBA’s news/publications stream under the title “Artificial Intelligence Goes to Court.”
  • It focuses on the ethical and legal risks that arise when generative AI produces false or hallucinated case citations.
  • Practical guidance: do not file anything the lawyer has not personally researched and verified, and answer truthfully if asked about AI use.
  • The column frames these practices as risk management to avoid client harm, court sanctions, and reputational damage.

III. Regulatory & Ethical Implications

Bar-association guidance is converging on a consistent baseline: the duties of competence and supervision extend to verifying AI-assisted work product. Although non-binding, such guidance is increasingly cited in attorney training and may inform negligence and disciplinary standards as courts and bar associations codify expectations for AI use.

IV. Media Coverage & Sources