Real failures, mapped to the PSF domains they violated. Each entry shows what happened, what controls were absent, and what would have prevented it.
Air Canada's AI chatbot incorrectly told a customer he could apply for a bereavement discount retroactively. When the customer attempted to claim the refund, Air Canada refused — a…
Amazon built an AI recruiting tool trained on a decade of historical hiring data. Because most CVs submitted during that period came from men, the model learned to penalise CVs tha…
Microsoft launched Tay, an AI chatbot designed to learn from Twitter conversations. Within 16 hours, coordinated users had exploited Tay's repeat-after-me feature and lack of input…
An Uber self-driving test vehicle struck and killed a pedestrian in Tempe, Arizona — the first recorded pedestrian fatality involving an autonomous vehicle. Investigation found the vehicle's safety system had d…
Researchers and users discovered that GitHub Copilot would reproduce verbatim sections of copyrighted, GPL-licensed code from its training data — including commented attribution he…
Google's Gemini image generation feature produced racially diverse images for prompts where historical accuracy required specific demographics — including Black Nazi soldiers and f…
Italy's data protection authority (Garante) ordered OpenAI to stop processing Italian users' data in March 2023, citing ChatGPT's lack of legal basis for collecting personal data, …
A DPD customer jailbroke the delivery company's AI chatbot, causing it to swear, criticise DPD's own service, and write a poem about how unhelpful it was. The interaction was poste…
The UK Post Office prosecuted over 700 subpostmasters for theft and fraud based on accounting discrepancies produced by the Horizon IT system — a system known internally to have so…
Zillow's algorithmic home-buying programme (Zillow Offers) purchased homes at prices its model predicted would appreciate, intending to resell them at a profit. The model failed to track ra…
A widely used healthcare risk algorithm sold by Optum used healthcare spending as a proxy for health need. Because structural inequalities mean Black patients historically spent le…
Internal Facebook research, later revealed by whistleblower Frances Haugen, showed that Facebook's News Feed algorithm was recommending increasingly extreme content to users who en…
Lensa AI's 'Magic Avatars' feature, which trained a personalised AI model on user-uploaded selfies, disproportionately generated sexualised imagery of women. Researchers found wome…
The AIDA exam tests exactly the controls that prevent incidents like these. Free, immediate, verifiable.
Take the AIDA exam — free →