This case, **Moffatt v. Air Canada** (2024, decided by British Columbia's Civil Resolution Tribunal), is an early landmark in AI liability law. Here's the core:
**What happened:** Air Canada's website chatbot told a grieving customer, Jake Moffatt, that he could book a full-price flight and apply for the bereavement discount retroactively, within 90 days of booking. No such policy existed; the real policy required requesting bereavement fares before travel. Air Canada refused to honor the chatbot's promise.
**Air Canada's defense (and why it failed):** The airline argued the chatbot was a "separate legal entity" responsible for its own statements. The tribunal rejected this outright: a company is responsible for all the information on its website, whether it comes from a static page or a chatbot.
**Ruling:** The tribunal found Air Canada liable for negligent misrepresentation and ordered it to pay CA$812.02 in damages, interest, and tribunal fees.
---
**Why it matters legally:**
- AI agents **bind their principals** — the deploying company is responsible
- "The bot said it" is not a valid defense
- Widely cited as persuasive authority that **automated misinformation = company liability**
---
**Broader implications for AI agents:**
| Issue | Implication |
|---|---|
| Hallucinations | Could be legally actionable misrepresentation |
| No human review | Increases corporate exposure (see the sketch after this table) |
| Scale | One wrong output × millions of users = massive liability |
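To make the "no human review" row concrete, here is a minimal sketch of one common mitigation, assuming a hypothetical chatbot pipeline: only auto-send policy statements quoted verbatim from a verified policy store, and hold everything else for a human agent. The names here (`OFFICIAL_POLICIES`, `BotReply`, `route`) are illustrative, not Air Canada's system or any real vendor's API.

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for a curated, verified policy store. In production this would
# be maintained documents with provenance, never model-generated text.
OFFICIAL_POLICIES = {
    "bereavement": (
        "Bereavement fares must be requested before travel. "
        "Discounts cannot be claimed retroactively after booking."
    ),
}

@dataclass
class BotReply:
    text: str                    # what the model wants to tell the customer
    cited_policy: Optional[str]  # key into OFFICIAL_POLICIES, if any

def route(reply: BotReply) -> str:
    """Auto-send only replies quoted verbatim from the policy store.

    Anything the model phrased itself (where hallucinated terms hide)
    is escalated to a human agent before the customer sees it.
    """
    policy_text = OFFICIAL_POLICIES.get(reply.cited_policy or "")
    if policy_text and reply.text.strip() in policy_text:
        return f"SEND: {reply.text}"
    return "ESCALATE: hold for human review before sending"

if __name__ == "__main__":
    # The Moffatt-style failure mode: a fluent answer that cites a real
    # policy key but invents a retroactive-discount term.
    hallucinated = BotReply(
        text="Book now and claim the bereavement discount within 90 days.",
        cited_policy="bereavement",
    )
    print(route(hallucinated))  # ESCALATE: hold for human review before sending
```

A verbatim-quote gate is deliberately crude; real deployments use retrieval with citation checking. But the liability logic it encodes is the point: anything the model composed on its own gets human eyes before a customer can rely on it.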
---
**Bottom line:** Companies deploying AI agents cannot treat them as independent actors to deflect blame. The law is catching up — and Air Canada learned that the hard way.
Worth noting this is directly relevant to your earlier question about AI conversations and legal evidence — AI outputs *can* have legal consequences.
---
**Where U.S. federal AI legislation stands:**
- The only AI-specific federal law enacted to date is the **TAKE IT DOWN Act** (signed May 2025), which covers only non-consensual deepfake intimate images.
- In March 2026, the White House released a National AI Policy Framework, but it is not a legally binding document and creates no new legal obligations.
- Senator Blackburn has introduced a 291-page draft **TRUMP AMERICA AI Act**; if passed, it would establish a **strict product liability framework** and expand deployer liability, but it has not yet become law.