Air Canada's AI chatbot incorrectly told a customer he could apply for a bereavement discount retroactively. When the customer later claimed the refund, Air Canada refused, arguing that the chatbot was a 'separate legal entity' responsible for its own actions. A Canadian tribunal rejected this defence and ordered Air Canada to pay the difference between the fare he paid and the bereavement fare.
After his grandmother died, Jake Moffatt used Air Canada's website chatbot to ask about bereavement fares before booking flights. The chatbot told him he could apply for the discounted rate retroactively, within 90 days of his ticket's issue date. Air Canada's actual policy does not allow retroactive applications. When Moffatt applied, Air Canada refused to honour the chatbot's advice, arguing that he should have checked the bereavement policy page the chatbot had linked to.
How the Production Safety Framework maps to this failure
This is a canonical D1 failure: the system let the AI generate policy guidance without grounding it in verified, current policy documents. The model simply invented the retroactive refund clause. D5 was also absent: no pre-deployment testing appears to have validated the chatbot against edge-case policy questions such as retroactive claims. Had the 'chatbot as separate legal entity' argument succeeded, it would have set a precedent allowing organisations to disclaim AI outputs entirely, which makes robust D5 deployment validation all the more critical.
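One concrete shape a D1 control can take is a grounding gate: the chatbot may only answer policy questions from passages retrieved out of the verified, current policy pages, and any draft answer that cannot be tied back to those passages is refused. The following is a minimal Python sketch, assuming hypothetical `retriever` and `llm` interfaces; none of these names come from the PSF or Air Canada's actual stack.

```python
REFUSAL = (
    "I can't confirm that from the published policy. Please check the "
    "bereavement travel page or contact an agent."
)

def grounded_enough(draft: str, context: str, threshold: float = 0.6) -> bool:
    # Crude lexical-overlap proxy for entailment: what fraction of the
    # draft's words also appear in the retrieved policy text? Production
    # systems would use a stronger entailment or citation check.
    draft_words = set(draft.lower().split())
    if not draft_words:
        return False
    context_words = set(context.lower().split())
    return len(draft_words & context_words) / len(draft_words) >= threshold

def answer_policy_question(question: str, retriever, llm) -> str:
    # Gate 1: answer only from passages retrieved out of an indexed copy
    # of the current, verified policy pages.
    passages = retriever.search(question, top_k=3)
    if not passages:
        return REFUSAL  # No verified source: refuse rather than improvise.

    context = "\n\n".join(p.text for p in passages)
    draft = llm.generate(
        "Answer using ONLY the policy text below. If it does not answer "
        f"the question, say so.\n\nPOLICY:\n{context}\n\nQUESTION: {question}"
    )

    # Gate 2: an invented clause (like a retroactive refund window) shares
    # little vocabulary with the real policy text, so it gets refused.
    return draft if grounded_enough(draft, context) else REFUSAL
```

Under this gate, the fabricated 90-day retroactive clause would have failed both checks: no policy passage supports it, and the draft would not overlap the retrieved text.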
Specific PSF controls mapped to each failure point
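For the D5 failure point, the matching control is a pre-deployment release gate that runs the candidate chatbot against edge-case policy questions and blocks release if it asserts an entitlement the verified policy does not grant. Below is a minimal sketch under stated assumptions: `chatbot.ask` is a hypothetical single-turn interface, and the cases shown are illustrative rather than the PSF's actual test suite.

```python
# Each case pairs an edge-case question with phrases the bot must never
# assert and phrases a policy-correct answer should contain.
EDGE_CASES = [
    {
        "question": "Can I apply for a bereavement fare after I've flown?",
        "must_not_say": ["within 90 days"],   # the invented entitlement
        "must_say": ["cannot"],               # policy: no retroactive claims
    },
    # ...more edge cases: refunds, rebooking, name changes, etc.
]

def run_release_gate(chatbot) -> list[str]:
    # Run every edge case against the candidate build; any failure
    # should block deployment.
    failures = []
    for case in EDGE_CASES:
        answer = chatbot.ask(case["question"]).lower()
        for phrase in case["must_not_say"]:
            if phrase in answer:
                failures.append(f"{case['question']!r}: asserted {phrase!r}")
        for phrase in case["must_say"]:
            if phrase not in answer:
                failures.append(f"{case['question']!r}: missing {phrase!r}")
    return failures
```

A chatbot that answered as Air Canada's did would trip the first check on the invented 90-day window, stopping the build before a customer ever saw it.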
Outcome
The tribunal ordered Air Canada to pay CAD 812.02 in damages, interest, and tribunal fees. The airline also suffered significant reputational damage, and the case is now widely cited as a landmark in AI contract liability.
The AIDA exam tests PSF knowledge across all 8 domains. Free to take, immediately verifiable.