Production AI Institute — vendor-neutral certification for AI practitioners
AI INCIDENT REGISTRY

When production AI fails — and why

Real failures, mapped to the PSF domains they violated. Each entry shows what happened, what controls were absent, and what would have prevented it.

13 documented incidents · 8 PSF domains covered · $B+ in documented losses
High · Aviation · 2024 · Air Canada

Air Canada Chatbot Bereavement Fare

Air Canada's AI chatbot incorrectly told a customer he could apply for a bereavement discount retroactively. When the customer attempted to claim the refund, Air Canada refused — arguing the chatbot was a separate legal entity responsible for its own statements. A Canadian tribunal rejected that defence and ordered the airline to honour the fare.

D1 · Input Governance, D5 · Deployment Safety
High · Technology · 2018 · Amazon

Amazon Recruiting AI Discriminated Against Women

Amazon built an AI recruiting tool trained on a decade of historical hiring data. Because most CVs submitted during that period came from men, the model learned to penalise CVs that included the word "women's" or named all-women's colleges. Unable to guarantee the model would not find other ways to discriminate, Amazon scrapped the project.

D3 · Data Protection, D6 · Human Oversight
Critical · Technology · 2016 · Microsoft

Microsoft Tay Chatbot Taught to Produce Hate Speech

Microsoft launched Tay, an AI chatbot designed to learn from Twitter conversations. Within 16 hours, coordinated users had exploited Tay's repeat-after-me feature and lack of input filtering to make it produce racist and abusive posts, forcing Microsoft to take the bot offline.

D1 · Input Governance, D2 · Output Validation
Critical · Automotive · 2018 · Uber

Uber Self-Driving Car Kills Pedestrian in Arizona

An Uber self-driving test vehicle struck and killed a pedestrian in Tempe, Arizona — the first pedestrian fatality caused by a self-driving car. Investigators found the vehicle's safety system had detected her seconds before impact but repeatedly misclassified her, Uber had disabled the car's built-in automatic emergency braking, and the lone safety driver was not watching the road.

D6 · Human Oversight, D5 · Deployment Safety
Medium · Technology · 2022 · GitHub / Microsoft

GitHub Copilot Reproduced Licensed Code Verbatim

Researchers and users discovered that GitHub Copilot would reproduce verbatim sections of copyrighted, GPL-licensed code from its training data — including commented attribution headers — without the licence text the GPL requires. The findings contributed to a class-action lawsuit over the use of open-source code as training data.

D2 · Output Validation, D3 · Data Protection
High · Technology · 2024 · Google

Google Gemini Generated Historically Inaccurate Images

Google's Gemini image generation feature produced racially diverse images for prompts where historical accuracy required specific demographics — including Black Nazi-era soldiers and racially diverse depictions of the US founding fathers. Google apologised and paused Gemini's generation of images of people while it reworked the feature.

D2 · Output Validation, D1 · Input Governance
High · Technology · 2023 · OpenAI

Italy Bans ChatGPT Over GDPR Violations

Italy's data protection authority (Garante) ordered OpenAI to stop processing Italian users' data in March 2023, citing ChatGPT's lack of legal basis for collecting personal data, absence of age verification, and inaccurate outputs about individuals. Access was restored roughly a month later after OpenAI added privacy disclosures, a training-data opt-out, and age checks.

D3 · Data Protection
Medium · Logistics · 2024 · DPD

DPD Chatbot Jailbroken to Criticise the Company

A DPD customer jailbroke the delivery company's AI chatbot, causing it to swear, criticise DPD's own service, and write a poem about how unhelpful it was. The interaction was posted online and went viral, and DPD disabled the AI element of the chatbot pending an update.

D1 · Input Governance, D5 · Deployment Safety
Critical · Financial Services · 1999 · Fujitsu / Post Office Ltd

UK Post Office Horizon — AI as Infallible Evidence

The UK Post Office prosecuted over 700 subpostmasters for theft and fraud based on accounting discrepancies produced by the Horizon IT system — a system known internally to contain bugs that could create phantom shortfalls. Courts treated the system's output as reliable evidence; hundreds of convictions have since been overturned in what is widely described as the most widespread miscarriage of justice in British legal history.

D6 · Human Oversight, D8 · Vendor Resilience
Critical · Real Estate · 2021 · Zillow

Zillow iBuying Algorithm Collapse

Zillow's algorithmic home-buying programme (Zillow Offers) purchased homes at prices its model predicted would appreciate, then resold them for profit. The model failed to track rapidly shifting market conditions, leaving Zillow holding thousands of overpriced homes. The company shut down the programme, wrote down hundreds of millions of dollars, and cut around a quarter of its workforce.

D4 · Observability, D8 · Vendor Resilience
Critical · Healthcare · 2019 · Optum (UnitedHealth)

Optum Healthcare Algorithm Systematically Underprovided Care to Black Patients

A widely used healthcare risk algorithm sold by Optum used healthcare spending as a proxy for health need. Because structural inequalities mean Black patients historically spent less on care at the same level of illness, the algorithm systematically assigned them lower risk scores and reduced their access to extra-care programmes. Researchers estimated that correcting the bias would more than double the proportion of Black patients flagged for additional care.

D3 · Data Protection, D6 · Human Oversight
High · Social Media · 2018 · Meta

Facebook News Feed Algorithm Amplified Radicalising Content

Internal Facebook research, later revealed by whistleblower Frances Haugen, showed that Facebook's News Feed algorithm was recommending increasingly extreme content to users who engaged with divisive material, and that engagement-based ranking amplified that content — harms the company had documented internally but did not fully act on.

D2 · Output Validation, D4 · Observability
High · Technology · 2022 · Prisma Labs (Lensa AI)

Lensa AI Generated Sexualised Images of Women Without Consent

Lensa AI's 'Magic Avatars' feature, which trained a personalised AI model on user-uploaded selfies, disproportionately generated sexualised imagery of women. Researchers found women received nude or hypersexualised avatars even when they uploaded fully clothed photos, while men's avatars showed no comparable pattern.

D3 · Data Protection, D2 · Output Validation

Test your knowledge of PSF failure patterns

The AIDA exam tests exactly the controls that prevent incidents like these. Free, immediate, verifiable.

Take the AIDA exam — free →