Production AI Institute · Independent certification for production AI practice
Position Paper · March 2025 · CC BY 4.0

The EU AI Act and the Production Safety Framework: A Practitioner’s Guide

This position paper maps the eight domains of the Production Safety Framework (PSF v1.0) to the obligations imposed on high-risk AI system deployers under the EU AI Act. It is intended as a working reference for practitioners responsible for compliance, not a legal opinion.

Overview

The EU AI Act, which entered into force on 1 August 2024, imposes a tiered set of obligations on providers and deployers of AI systems, with the most extensive requirements applying to high-risk systems as defined in Article 6 and Annex III. Deployers of high-risk AI (organisations that use an AI system under their authority in the course of a professional activity) face obligations that align closely with the PSF’s eight domains.

The Production Safety Framework was developed independently of EU AI Act drafting, but the domains reflect the same underlying concerns: that AI systems in production settings require systematic governance of inputs, outputs, data handling, monitoring, deployment controls, human oversight, security, and supplier relationships. Practitioners who achieve PSF compliance will satisfy a substantial portion of their EU AI Act deployer obligations.

Domain-by-domain mapping

PSF-1 Input Governance · Article 9 (Risk Management), Article 10 (Data and Data Governance)

The Act requires a risk management system covering the full lifecycle of a high-risk system, including input validation and data quality. PSF-1 controls on prompt sanitisation, input schema enforcement, and injection defence directly address these requirements. Technical documentation must describe how inputs are governed.
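The input-governance controls above can be sketched as a pre-model gate. The sketch below is illustrative, not part of the PSF: the `prompt` field, length limit, and deny-list patterns are all assumptions, and a real injection defence would layer several techniques rather than rely on pattern matching alone.

```python
# Hypothetical PSF-1 input gate: schema check, length limit, and a crude
# injection screen. Patterns and limits are illustrative assumptions.
import re

MAX_PROMPT_CHARS = 4000
# Naive deny-list; real injection defence needs layered controls.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"system prompt", re.I),
]

def validate_input(payload: dict) -> tuple:
    """Return (accepted, reason), enforcing schema, length, and deny-list."""
    if not isinstance(payload.get("prompt"), str):
        return False, "schema: 'prompt' must be a string"
    prompt = payload["prompt"]
    if len(prompt) > MAX_PROMPT_CHARS:
        return False, "length: prompt exceeds limit"
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            return False, "injection: matched deny-list pattern"
    return True, "ok"
```

The rejection reasons double as the documentation trail Article 10 asks for: each refused input carries a stated, loggable ground.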

PSF-2 Output Validation · Article 9, Article 13 (Transparency and Provision of Information to Deployers)

Outputs from high-risk AI systems must be interpretable and verifiable by deployers. PSF-2 output contracts — which define allowed output schemas, confidence thresholds, and refusal conditions — provide the technical mechanism for meeting transparency obligations and demonstrating that outputs are monitored.
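An output contract of this kind can be expressed as a thin validation layer between the model and downstream consumers. The field names (`label`, `confidence`), the allowed labels, and the threshold below are hypothetical; the PSF does not prescribe a particular schema.

```python
# Hypothetical PSF-2 output contract: allowed labels, a confidence floor,
# and an explicit refusal path. All names and values are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    allowed_labels: frozenset
    min_confidence: float

def apply_contract(output: dict, contract: Contract) -> dict:
    """Pass a conforming output through, or convert it into a refusal."""
    label = output.get("label")
    confidence = output.get("confidence", 0.0)
    if label not in contract.allowed_labels:
        return {"status": "refused", "reason": "label outside contract"}
    if confidence < contract.min_confidence:
        return {"status": "refused", "reason": "confidence below threshold"}
    return {"status": "accepted", "label": label, "confidence": confidence}

contract = Contract(frozenset({"approve", "reject", "escalate"}), 0.8)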

PSF-3 Data Protection · Article 10, GDPR cross-reference obligations

The Act incorporates GDPR requirements explicitly for high-risk systems processing personal data. PSF-3 controls on data minimisation, retention, anonymisation, and access logging map directly to Article 10’s data governance requirements and the GDPR obligations that attach to deployers.
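Minimisation and retention are often enforced together at the logging boundary. The redaction pattern and the 30-day retention window below are illustrative assumptions, not PSF or GDPR requirements; production systems typically redact many identifier classes, not just email addresses.

```python
# Hypothetical PSF-3 sketch: mask personal identifiers before a record is
# logged and attach a retention deadline. Pattern and window are assumed.
import re
from datetime import datetime, timedelta, timezone

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
RETENTION = timedelta(days=30)  # illustrative retention window

def minimise(record: str) -> dict:
    """Return a log-safe record: PII masked, delete-by timestamp attached."""
    redacted = EMAIL.sub("[EMAIL]", record)
    delete_by = datetime.now(timezone.utc) + RETENTION
    return {"text": redacted, "delete_by": delete_by.isoformat()}
```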

PSF-4 Observability · Article 12 (Record-Keeping), Article 26 (Obligations of Deployers)

Article 12 requires automatic logging of AI system operations to a degree sufficient to ensure traceability. Article 26(5) requires deployers to monitor operation. PSF-4’s run logging, alert configuration, and trace retention requirements satisfy both obligations and provide the audit trail required for post-incident review.
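Traceability of this kind is easiest to demonstrate when every model call emits one structured record. The field names below are assumptions, not a PSF schema; hashing the prompt rather than storing it raw is one way to reconcile Article 12 traceability with PSF-3 minimisation.

```python
# Hypothetical PSF-4 run log: one structured, append-only record per model
# call. Field names are illustrative assumptions.
import hashlib
import json
import time

def log_run(model_id: str, prompt: str, output: str, sink: list) -> dict:
    """Append a trace record; hash the prompt rather than storing it raw."""
    record = {
        "ts": time.time(),
        "model_id": model_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_chars": len(output),
    }
    sink.append(json.dumps(record))
    return record
```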

PSF-5 Deployment Safety · Article 9, Article 17 (Quality Management System)

The Act requires that high-risk AI be developed and operated under a quality management system. PSF-5’s deployment controls — autonomy level specification, blast radius governance, rollback procedures, and canary deployment requirements — form the operational core of a quality management system for AI deployments.
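A canary rollout with an automatic rollback trigger can be sketched in a few lines. The 5% traffic split and 2% error budget below are illustrative numbers, not PSF thresholds; the deterministic hashing keeps canary membership stable across retries.

```python
# Hypothetical PSF-5 canary gate: route a fixed fraction of traffic to the
# candidate model, roll back if its error rate breaches an error budget.
# The split and budget are illustrative assumptions.
import hashlib

CANARY_FRACTION = 0.05  # 5% of traffic goes to the candidate
ERROR_BUDGET = 0.02     # roll back above a 2% canary error rate

def route(request_id: str) -> str:
    """Deterministically bucket requests so canary membership is stable."""
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < CANARY_FRACTION * 100 else "stable"

def should_rollback(canary_errors: int, canary_total: int) -> bool:
    """True once observed canary errors exceed the configured budget."""
    return canary_total > 0 and canary_errors / canary_total > ERROR_BUDGET
```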

PSF-6 Human Oversight · Article 14 (Human Oversight)

Article 14 is the most prescriptive obligation bearing on deployers. It requires that natural persons oversee the system’s operation, are able to intervene in or interrupt that operation, and understand the system’s capabilities and limitations. PSF-6 human gate design patterns, oversight escalation policies, and autonomy level constraints directly implement Article 14. The PSF autonomy level taxonomy (L0–L4) maps to Article 14’s requirement that oversight measures be commensurate with the system’s risks.
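A human gate keyed to autonomy level might look like the following. The L2 ceiling and the action names are assumptions for illustration; the PSF taxonomy defines the levels, not where a given deployer must draw the line.

```python
# Hypothetical PSF-6 human gate keyed to the L0-L4 autonomy taxonomy:
# actions above a configured ceiling require a named human approver.
# The ceiling and action names are illustrative assumptions.
AUTONOMY_CEILING = 2  # actions riskier than L2 need a human in the loop

def dispatch(action: str, risk_level: int, approver: str = None) -> str:
    """Execute autonomously at low risk; otherwise require an approver."""
    if risk_level <= AUTONOMY_CEILING:
        return f"executed:{action}"
    if approver is None:
        return f"queued-for-review:{action}"
    return f"executed-with-approval:{action}:{approver}"
```

Queuing rather than silently dropping the high-risk action preserves the interruption and intervention record Article 14 contemplates.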

PSF-7 Security · Article 9 (Risk Management), Article 15 (Accuracy, Robustness and Cybersecurity)

The Act treats adversarial manipulation as a distinct risk category: Article 15 requires that high-risk systems be resilient against attempts by unauthorised third parties to exploit their vulnerabilities, including data poisoning and adversarial examples. PSF-7’s prompt injection controls, model access controls, and supply chain security requirements address these robustness obligations and the broader risk management obligations of Article 9.
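Model access control reduces, at minimum, to a per-principal allowlist of model endpoints. The principals and model identifiers below are hypothetical; a production system would back this with authenticated identities and audit logging.

```python
# Hypothetical PSF-7 access-control sketch: per-principal allowlists so
# only authorised services can call a given production model.
# Principal and model names are illustrative assumptions.
ACL = {
    "claims-service": {"risk-scorer-v3"},
    "support-bot": {"chat-small-v1"},
}

def authorise(principal: str, model_id: str) -> bool:
    """Deny by default: unknown principals get an empty allowlist."""
    return model_id in ACL.get(principal, set())
```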

PSF-8 Vendor Resilience · Article 26(1) (use in accordance with instructions for use), Article 25 (Responsibilities along the AI Value Chain)

Deployers must use AI systems in accordance with instructions for use and ensure the system is used only for its intended purpose. PSF-8’s vendor assessment, SLA requirements, model versioning controls, and fallback procedures operationalise these obligations and address the supply chain risk that arises when deployers depend on third-party model providers.

Key dates for deployers

August 2024 · EU AI Act enters into force
February 2025 · Prohibited AI practices provisions apply
August 2025 · General-purpose AI model obligations apply
August 2026 · High-risk AI system obligations fully apply (Annex III systems)
August 2027 · High-risk obligations extend to AI systems that are safety components of products regulated under Annex I

Source: EU AI Act Article 113 (entry into force and application) and Article 111 (AI systems already placed on the market). High-risk systems placed on the market or put into service before the relevant application date generally fall within scope only if they subsequently undergo significant changes in design.

Limitations of this mapping

PSF compliance does not constitute legal compliance with the EU AI Act, and this paper is not legal advice. The Act’s obligations depend on system classification, jurisdiction, sector, and implementation details that this mapping cannot anticipate. Practitioners should obtain legal counsel for their specific deployment context. The mapping does indicate, however, that organisations building PSF competence are developing the operational infrastructure that EU AI Act compliance requires.

Published by the Production AI Institute, March 2025. Licensed CC BY 4.0.

Related: Production Safety Framework · EU AI Act guide (Insights) · AIDA certification