France | Regulatory Framework
Status: in effect
Effective: 2026-01-05
Impact: moderate
CNIL AI how-to sheets (English): GDPR-compliant AI system development
AI system development: CNIL’s recommendations to comply with the GDPR
I. Regulatory Summary
CNIL, the French data protection authority, has published an English consolidation of its practical "AI how-to sheets" explaining how the GDPR applies to the development of AI systems that process personal data. The recommendations address legal basis selection, transparency towards data subjects, handling of data subject rights, assessment of whether a model is anonymous or remains within the GDPR's scope, security during development, and data annotation, and are accompanied by an implementation-oriented checklist.
II. Full Description
CNIL published an English-language consolidated presentation of its practical recommendations on how the GDPR applies to the development of AI systems that process personal data (the "AI how-to sheets"). The package provides operational guidance across the major compliance themes relevant to AI development projects, including: (i) choosing and documenting an appropriate legal basis (often legitimate interests) and managing the associated safeguards; (ii) informing data subjects (including in scenarios involving indirect collection and large-scale sourcing); (iii) ensuring and facilitating data subject rights; (iv) assessing whether an AI model or system can be treated as anonymous or remains subject to the GDPR (with emphasis on memorisation/regurgitation and re-identification risk); (v) implementing security controls during development; and (vi) managing data annotation processes. The CNIL also published an implementation-oriented checklist and supporting figures and diagrams alongside the recommendations.
III. Scope & Application
The guidance is addressed to organisations developing AI systems that process personal data and covers key compliance questions across the AI development lifecycle, including data sourcing, legal basis, transparency, rights handling, model status/anonymity assessment, security, and data annotation. Although not binding law, it sets out the regulator's expectations and is likely to be relied upon in compliance assessments and advisory work.
IV. Policy Impact Assessment
Legal and advisory teams supporting AI development (including within professional services firms and in client advisory work) should align internal playbooks and client guidance with CNIL's practical expectations on legal basis selection, transparency at scale, rights handling, model status/anonymity analysis, and security-by-design during AI development. This may require updated DPIA and assessment workflows, documentation templates, and governance controls for datasets, training, and vendor oversight.
Primary Focus: data protection governance for AI development