Where the PSF defines what safe AI deployment looks like at the system level, PAI-8 defines what safe AI governance looks like at the organisational level. Eight controls. Four maturity levels. The governance standard boards and regulators are converging on.
Each control addresses a distinct category of organisational AI risk. An assessment scores each control from L0 to L3 against the maturity model.
1. Formal accountability for AI risk at board and executive level, with documented policies, decision gates, and a register of all AI use cases.
2. Formal AI risk assessment before any new system deployment, with risk tiering, third-party AI inclusion, and regular reassessment as systems change.
3. Provenance, consent, and lifecycle management for all data used to train or operate AI systems, including vector databases and fine-tuning datasets.
4. Pre-deployment evaluation against production-representative benchmarks, including bias testing and independent review for high-risk use cases.
5. Defined autonomy limits, operational override mechanisms, escalation paths, and documented human intervention capability for all AI systems.
6. AI-specific incident classification taxonomy, severity tiers for AI harms, response runbooks, and post-incident review processes.
7. Decision-level logging for AI outputs, log retention aligned to challenge windows, and explainability artefacts for high-risk decisions.
8. Inventory of all third-party AI dependencies, vendor risk assessment, contractual AI continuity protections, and tested fallback capability.
Each of the eight controls is scored L0–L3. A PAI-8 assessment produces a per-control maturity score and an overall governance posture rating.
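The roll-up from per-control scores to an overall rating can be sketched in code. The control identifiers below and the weakest-link aggregation rule (overall rating capped by the lowest-scoring control) are illustrative assumptions, not definitions taken from the standard:

```python
# Illustrative sketch of a PAI-8-style scoring roll-up.
# The control names and the weakest-link rule (overall = minimum
# control score) are assumptions for illustration only.

CONTROLS = [
    "governance_accountability",
    "risk_assessment",
    "data_management",
    "pre_deployment_evaluation",
    "human_oversight",
    "incident_response",
    "logging_explainability",
    "third_party_dependencies",
]

def overall_posture(scores: dict[str, int]) -> int:
    """Return an overall governance posture rating (L0-L3).

    Assumes a weakest-link model: the overall rating is the
    minimum maturity level across all eight controls.
    """
    missing = [c for c in CONTROLS if c not in scores]
    if missing:
        raise ValueError(f"missing scores for: {missing}")
    for control, level in scores.items():
        if not 0 <= level <= 3:
            raise ValueError(f"{control}: level {level} outside L0-L3")
    return min(scores[c] for c in CONTROLS)

example = {c: 3 for c in CONTROLS}
example["incident_response"] = 1  # one weak control caps the rating
print(f"overall posture: L{overall_posture(example)}")
```

Under this model, a single weak control drags the whole posture down, which mirrors how a per-control assessment surfaces the specific area needing remediation rather than averaging it away.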
The two frameworks address different layers of AI safety. Neither replaces the other — a complete AI safety posture requires both.
Certify as an AI auditor, commission an independent assessment, or study the standard before the exam.