In March 2023, Italy's data protection authority (the Garante) issued an emergency order requiring OpenAI to stop processing Italian users' data, effectively banning ChatGPT in Italy. The Garante cited four violations: no legal basis for collecting and processing personal data to train the model; no age verification mechanism to prevent use by children under 13; a data breach that exposed the conversation histories and payment information of approximately 1.2% of ChatGPT Plus subscribers; and inaccurate outputs about real people with no mechanism to correct them. OpenAI was given 20 days to implement remediation or face fines of up to 4% of global turnover.
How the Production Safety Framework maps to this failure
A comprehensive D3 failure. The core issue was the absence of a data governance framework appropriate for a consumer-facing AI product: the GDPR requires a lawful basis for processing personal data, and OpenAI had not established one for Italian users before launch. The secondary failures (age verification, breach notification) compounded the regulatory exposure. This case is a reference point for any organisation deploying AI to EU users: D3 compliance must precede launch, not follow it.
Specific PSF controls mapped to each failure point
ChatGPT remained blocked in Italy from March 31 to April 28, 2023. OpenAI implemented a VPN detection system, an age verification mechanism at sign-up, and a feature allowing users to opt out of having their data processed. The Garante restored service after judging these responses satisfactory.
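Two of the remediations above, the age gate and the data opt-out, can be sketched as a minimal signup check. This is an illustrative sketch only: the names (`SignupRequest`, `gate_signup`) and the structure are assumptions for this example, not OpenAI's actual implementation.

```python
from dataclasses import dataclass
from datetime import date

MIN_AGE = 13  # the Garante's concern was use by children under 13


@dataclass
class SignupRequest:
    birth_date: date
    training_opt_out: bool = False  # user declines data use for model training


def age_on(birth_date: date, today: date) -> int:
    """Whole years elapsed between birth_date and today."""
    years = today.year - birth_date.year
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1  # birthday has not occurred yet this year
    return years


def gate_signup(req: SignupRequest, today: date) -> dict:
    """Reject under-age signups; otherwise persist the opt-out preference."""
    if age_on(req.birth_date, today) < MIN_AGE:
        raise PermissionError("signup blocked: user under minimum age")
    # The stored flag lets downstream training pipelines exclude this user's data.
    return {"training_opt_out": req.training_opt_out}
```

The design point the case illustrates: both checks must run before any personal data is processed, so the gate belongs at account creation, not in a later cleanup pass.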
The AIDA exam tests PSF knowledge across all 8 domains. Free to take, immediately verifiable.