IT
Italy | Regulatory Framework Status: draft
Effective: N/A
moderate

AgID draft guidelines — adoption of AI systems in the public administration (consultation)

Draft guidelines for the adoption of Artificial Intelligence in the Public Administration — Public consultation (Version 1.0, 14 February 2025)

I. Regulatory Summary

Requires public administrations to formalise AI governance: human oversight for critical decisions, security-by-design with periodic checks, risk management procedures, and GDPR compliance (including DPIAs and allocation of controller/processor roles). Implementation typically entails internal policies, DPIA and risk-assessment templates, staff training, and documentation/recordkeeping.

II. Full Description

## Nature of the instrument

Draft AgID guidelines released for public consultation (Version 1.0, dated 14 February 2025).

## Scope

- Target audience: public administrations under Article 2(2) of the Digital Administration Code (CAD).
- Object: modalities for adopting AI systems, with emphasis on legal compliance and organisational impact.

## Key operational requirements

- Human oversight for critical decisions.
- Robust security measures and periodic security checks/updates.
- Internal risk assessment and risk management procedures.
- Data protection compliance: allocation of controller/processor roles and DPIA requirements; consultation with the Italian data protection authority (GPDP) where required.
- Governance options: establishment of mechanisms such as an AI Ethics Committee.

## Annexed tools

The draft includes annexes ("tools") such as an AI impact assessment model and a template for an AI ethics code / code of conduct aligned with the EU AI Act.
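The annexed tools are described as templates rather than as data formats. Purely as an illustration of how an administration might keep the resulting documentation machine-readable, the sketch below models a register entry for an AI system using the items the draft emphasises (human oversight, DPIA, risk assessment, periodic security checks, ethics review); the field names and structure are hypothetical and are not taken from the AgID annexes.

```python
# Illustrative only: a hypothetical, machine-readable register entry for an AI
# system adopted by a public administration. Field names are assumptions and do
# not reproduce the AgID annex templates.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional


@dataclass
class AISystemRegisterEntry:
    system_name: str
    purpose: str                          # intended use within the administration
    human_oversight_owner: str            # role accountable for critical decisions
    dpia_completed: bool                  # GDPR data protection impact assessment
    dpia_date: Optional[date] = None
    risk_assessment_completed: bool = False
    last_security_review: Optional[date] = None
    ethics_committee_reviewed: bool = False
    open_issues: List[str] = field(default_factory=list)


# Hypothetical example entry.
entry = AISystemRegisterEntry(
    system_name="document-triage-assistant",
    purpose="pre-sorting of incoming citizen requests",
    human_oversight_owner="Head of Protocol Office",
    dpia_completed=True,
    dpia_date=date(2025, 3, 1),
    risk_assessment_completed=True,
    last_security_review=date(2025, 2, 20),
)
print(entry)
```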

III. Scope & Application

Draft AgID guidelines addressed to the public administrations identified in Article 2(2) of the Italian Digital Administration Code (CAD). They provide operational requirements and recommendations for adopting AI systems, focusing on legal compliance and organisational impact across the AI lifecycle. The document uses normative drafting terms (e.g., MUST/SHALL/SHOULD) and includes annexed tools, such as an AI impact assessment model, an AI ethics code template, risk management elements, and references aligned to the EU AI Act.

IV. Policy Impact Assessment

Adopting administrations will need to formalise AI governance: designating human oversight for critical decisions, applying security-by-design with periodic checks, operating risk management procedures, and ensuring GDPR compliance (including DPIAs and allocation of controller/processor roles). In practice this entails internal policies, DPIA and risk-assessment templates, staff training, and documentation/recordkeeping.
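As a purely illustrative sketch of what formalising AI governance could mean operationally, the hypothetical check below flags missing items before an AI system is put into service, mirroring the requirements summarised above (designated human oversight, DPIA, risk assessment, recent security review). The rules, field names, and the 180-day review interval are assumptions, not provisions of the draft.

```python
# Hypothetical pre-deployment gate: lists governance gaps before an AI system
# goes live. The checks mirror the draft's themes (oversight, DPIA, risk
# management, periodic security reviews), but the rules themselves are assumed.
from datetime import date, timedelta
from typing import List, Optional


def governance_gaps(
    human_oversight_owner: Optional[str],
    dpia_completed: bool,
    risk_assessment_completed: bool,
    last_security_review: Optional[date],
    max_review_age_days: int = 180,  # assumed interval, not from the draft
) -> List[str]:
    gaps: List[str] = []
    if not human_oversight_owner:
        gaps.append("no designated owner for human oversight of critical decisions")
    if not dpia_completed:
        gaps.append("DPIA not completed")
    if not risk_assessment_completed:
        gaps.append("internal risk assessment missing")
    if last_security_review is None or (
        date.today() - last_security_review > timedelta(days=max_review_age_days)
    ):
        gaps.append("security review missing or older than the assumed interval")
    return gaps


for gap in governance_gaps(
    human_oversight_owner="Head of Protocol Office",
    dpia_completed=True,
    risk_assessment_completed=False,
    last_security_review=date(2025, 2, 20),
):
    print("blocker:", gap)
```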

Primary Focus: public_sector_ai_governance