CA | Incident Report
Hallucination
2026-01-13
Ontario tribunal decision spotlights Grok-assisted filings with hallucinated authorities
AI Model: Grok (xAI)
I. Executive Summary
Canadian legal trade press reported on a Law Society Tribunal matter in which a lawyer whose licence was suspended acknowledged using generative AI, specifically Grok, to research and draft motion materials. The tribunal noted that the materials contained non-existent or misleading authorities and reflected procedural misunderstandings, and the lawyer admitted to insufficient verification. The decision and the reporting around it highlight how AI-assisted errors can become relevant in professional regulation proceedings, including on questions of costs and process integrity.
II. Key Facts
- Reported publicly on January 13, 2026, by Canadian Lawyer.
- Matter involved the Law Society Tribunal (Hearing Division) and motion materials containing non-existent or misleading authorities.
- The applicant admitted relying on Grok and not sufficiently verifying citations, hyperlinks, and the application of tribunal rules.
- The tribunal denied the bias/recusal motion and indicated that AI-related errors could be considered at later stages (e.g., costs and the motion to vary/remove the suspension).
III. Regulatory & Ethical Implications
This incident reinforces that “reasonable verification” expectations apply in disciplinary and motion practice, not only in merits briefs. It also signals that regulators may treat AI hallucinations as a process-integrity issue carrying potential costs and conduct consequences, even absent an intent to mislead.