Jurisdiction: Italy (IT)
Regulatory framework status: in force
Effective: 2025-10-08
Policy impact: moderate

CSM recommendations on AI use in the administration of justice

Recommendations on the use of artificial intelligence in the administration of justice — Plenary resolution of 8 October 2025

I. Regulatory Summary

Non-binding recommendations adopted by Italy’s High Council of the Judiciary (CSM) by plenary resolution of 8 October 2025 to orient the use of generative and predictive AI in the administration of justice. During the transitional phase (up to August 2026), they advise excluding any non-authorised AI use in strict judicial activity and limiting permissible uses to secure, traceable tools within the justice domain, subject to mandatory human review and data-protection safeguards. The recommendations also anticipate the EU AI Act’s high-risk regime for justice-related AI and call for preparatory governance, including tool authorisation, traceability, controlled experimentation, and periodic audits.

II. Full Description

## Context and purpose

This plenary resolution adopts recommendations on the use of AI in the administration of justice. It highlights the benefits of generative and predictive AI for study, analysis, document management, and workflow organisation, while stressing risks to fundamental rights, data protection, confidentiality, transparency, and decision-making responsibility.

## Legal and regulatory references

The document references, among others, the CEPEJ European Ethical Charter on AI in judicial systems (3 December 2018), the EU AI Act (Regulation (EU) 2024/1689), the GDPR (Regulation (EU) 2016/679), and other EU instruments on data and cybersecurity. It also discusses Italy’s legislative developments, including a delegation bill on AI and the role assigned to the Ministry of Justice in regulating AI uses related to judicial services.

## Scope and approach

The recommendations distinguish between:

- **Strict judicial activity**: assisting a judicial authority in the research and interpretation of facts and law and in applying the law to concrete facts.
- **Limited procedural / administrative / organisational support**: tasks that do not materially influence the judicial decision-making outcome.

During the transitional phase referenced in the document (up to **August 2026**), the recommendations advise excluding any non-authorised AI use in strict judicial activity. Permissible uses are framed as those carried out through secure, traceable tools within the ‘justice domain’ and/or authorised by the Ministry of Justice, with human review.

## Permissible use cases (illustrative)

The document lists examples such as:

- doctrinal research support;
- summarising decisions and writings for internal classification;
- organisational analytics and reporting;
- document comparison;
- generating standardised drafts for low-complexity matters (subject to judge adaptation);
- linguistic/stylistic review;
- assisted translation (always with human verification);
- scheduling support.

## Core safeguards

- **Human oversight**: every AI output must be reviewed; users must be able to replicate conclusions independently.
- **Confidentiality and data minimisation**: do not input sensitive/confidential data into general-purpose AI tools; consider re-identification risks.
- **Bias and reliability**: address hallucinations, sycophancy, and bias; ensure fairness and representativeness where datasets are relevant.
- **Security and profiling prevention**: protect the justice network and prevent profiling (including risks from RAG over justice archives).

## Governance and next steps

The resolution encourages coordinated institutional governance with the Ministry of Justice, including authorisation and traceability criteria for tools, protocols for data protection and cybersecurity, a joint regulatory sandbox for controlled experimentation, and periodic audits of AI impacts.

## Key dates

- Adoption (plenary resolution): 8 October 2025.
- Transitional period referenced: up to August 2026 (no specific day stated).

III. Scope & Application

Non-binding recommendations adopted by Italy’s High Council of the Judiciary to orient the use of AI (including generative and predictive systems) within the administration of justice. They apply primarily to judges and judicial offices, distinguishing between (i) strict judicial decision-making tasks (fact/law assessment and application of the law) and (ii) administrative, organisational, or limited procedural support tasks. They advise excluding any non-authorised AI use in strict judicial activity during the transitional phase and limiting permissible uses to secure, traceable tools provided or authorised within the justice domain, with mandatory human review and strong data-protection safeguards. They anticipate the EU AI Act’s high-risk regime for justice-related AI and recommend preparatory governance (tool authorisation, traceability, audits, and a controlled sandbox).

IV. Policy Impact Assessment

Judges and judicial offices should implement tighter governance over AI tool usage, including a whitelist of authorised tools and user training. Operational workflows need a consistent human-review step and data-minimisation controls to prevent leakage of sensitive information. The recommendations also push institutions to prepare for the EU AI Act’s high-risk compliance model in the justice sector, including experimentation in controlled environments and periodic audits.

Primary focus: procedural rules on AI use in courts