Production AI Institute · Independent certification for production AI practice
Research & Publications

Research on production AI safety

PAI publishes original research on production AI deployment safety — incident analysis, regulatory mapping, framework evolution, and practitioner survey data. All publications are freely available.

Publications

Report · Forthcoming
H2 2026

State of Production AI Safety 2026

A survey of production AI deployment practices across practitioner organisations. Covers incident rates, guardrail adoption, human oversight patterns, and PSF compliance self-assessment. To be published H2 2026.

Deployment · Safety · Survey
Position paper
2025

The EU AI Act and the Production Safety Framework: A Practitioner's Guide

Maps PSF domains to EU AI Act obligations for high-risk AI system deployers. Covers conformity assessment requirements, technical documentation standards, and human oversight obligations under Article 14.

EU AI Act · Compliance · Regulation
Analysis
2025

Incident Patterns in Production LLM Deployments

Analysis of 47 documented production incidents involving large language models. Identifies common failure modes, root causes across PSF domains, and intervention patterns. Anonymised case data from PAI community members.

Incidents · LLM · Root cause
Position paper
2024

Human Oversight in High-Stakes AI: What 'Meaningful' Means in Practice

Examines what constitutes meaningful human oversight — as distinct from rubber-stamping — in high-stakes AI-assisted decisions. Includes design patterns for effective human checkpoints and common failure modes.

Human oversight · PSF Domain 05 · Design patterns
Framework note
2024

PSF v1.0 Rationale and Development History

Documents the reasoning behind each PSF domain, the alternatives considered, and how practitioner feedback shaped the framework during the public comment period.

PSF · Framework development

Research areas

Incident research

PAI maintains an anonymised incident registry with contributions from certified practitioners and CAI-recognised organisations. This data informs framework evolution and is the basis for our annual incident pattern analysis.

Framework development

The PSF is a living standard. Research findings directly inform version updates. Domain weightings, assessment criteria, and coverage areas are reviewed annually against practitioner-reported incident data.

Regulatory mapping

As AI regulation matures globally — EU AI Act, UK AI Safety Institute, US executive orders — PAI publishes guidance mapping PSF compliance to regulatory obligations, so practitioners do not need to do this mapping themselves.

Practitioner surveys

Annual surveys of the PAI practitioner community capture deployment patterns, tool adoption, organisational maturity, and self-assessed PSF compliance. Findings are published openly.

Contribute to the research programme

PAI's incident registry is built on anonymised case data contributed by certified practitioners and CAI-recognised organisations. If you have experienced a production AI incident and are willing to share the details under anonymisation, we welcome your contribution. All contributors are acknowledged in published analysis.

Submit an incident report | Read the PSF