Jurisdiction: Italy (IT)
Regulatory Framework Status: in force
Effective: 2026-01-08
Priority: high

Italian DPA warning on AI deepfake content-generation services (Provv. 789)

Warning to users of AI-based digital audio/video content generation services capable of manipulating reality (deepfakes)

I. Regulatory Summary

Italy's data protection authority (the Garante) has issued a formal warning (Provvedimento n. 789, adopted 18 December 2025 and published in the Official Gazette on 8 January 2026) concerning AI-based services that generate or manipulate digital audio/video content (deepfakes) from real persons' images or voices. The warning stresses the need for appropriate legal bases, consent where required, and safeguards against non-consensual and unlawful uses.

II. Full Description

Italy's data protection authority (the Garante) adopted a Delibera of 18 December 2025 (Provvedimento n. 789), published in the Italian Official Gazette on 8 January 2026, issuing a formal warning regarding AI-based services capable of generating or manipulating digital audio/video content (deepfakes) from real persons' images or voices. The warning emphasises risks to fundamental rights and highlights the need for appropriate legal bases (including consent where required) and safeguards to prevent non-consensual and unlawful uses.

Sources:
- Gazzetta Ufficiale (Italy), in Italian: https://www.gazzettaufficiale.it/atto/serie_generale/caricaDettaglioAtto/originario?atto.codiceRedazionale=26A00005&atto.dataPubblicazioneGazzetta=2026-01-08&elenco30giorni=false
- Reuters, in English: https://www.reuters.com/legal/litigation/italys-privacy-watchdog-warns-grok-over-deepfake-ai-content-2026-01-08/
- ANSA, in Italian: https://www.ansa.it/canale_tecnologia/notizie/web_social/2026/01/08/dal-garante-privacy-avvertimento-a-grok-e-altri-servizi-rischio-violazione-diritti_e87ad862-65fb-49ac-b7e4-eac9c6371443.html

III. Scope & Application

Italy’s data protection authority issued a formal warning concerning AI services that generate or manipulate digital audio/video content (deepfakes) from real people’s voices or images. The warning highlights risks to fundamental rights and stresses the need for consent and safeguards, noting that certain uses (including non-consensual sexualised imagery) may constitute serious privacy violations and may also amount to criminal offences under applicable law.

IV. Policy Impact Assessment

Law firms and advisors supporting AI tool providers, platforms, or clients using AI image/audio tools should strengthen consent and lawful-basis analysis, introduce tool-approval and data-handling controls, and implement misuse safeguards and incident response processes for deepfake-related risks.

Primary Focus: data protection / deepfakes