FR | Incident Report
Hallucination
2025-12-18
French court flags “AI hallucination” risk in written submissions with unverifiable citations
AI Model: Unspecified generative AI tool
I. Executive Summary
A decision of the Pôle social of the Tribunal judiciaire de Périgueux found that legal authorities cited in a party's written submissions appeared untraceable or incorrect, and the court explicitly warned of "hallucinations" potentially originating from generative AI tools or search engines. The case has been reported in the European legal and tech press as an early French example of a court identifying AI-linked citation-reliability issues in litigation materials.
II. Key Facts
- Decision date: 18 December 2025 (Tribunal judiciaire de Périgueux, Pôle social).
- The court indicated that some cited "jurisprudence" was untraceable or erroneous and cautioned the party (and counsel) to verify sources.
- Reporting framed the issue as potential generative-AI "hallucinations" contaminating court submissions.
III. Regulatory & Ethical Implications
The decision signals increasing European judicial scrutiny of citation integrity and may accelerate court and Bar guidance requiring verification workflows ("human-in-the-loop") for AI-assisted legal research and drafting. It also raises potential exposure to adverse procedural consequences where unreliable authorities are filed.
IV. Media Coverage & Sources
No external sources linked.