Production AI Institute — vendor-neutral certification for AI practitioners
Live Feed

AI Security Advisories

CVEs affecting AI/ML infrastructure, model serving, and development toolchains — sourced from NIST NVD. Mapped to PSF domains so practitioners know which controls to review.

Source: NIST NVD · Updated hourly
27 CVEs · last 90 days

AI Incident Digest

Documented production AI failures, mapped to PSF domains. Use these as case studies and failure-mode references.

INCIDENT · PSF-2 · Output Safety · 15 Feb 2024

Air Canada Chatbot Hallucination Leads to Court Loss

Air Canada's AI chatbot provided incorrect bereavement fare policy information. The airline was held legally responsible for its chatbot's statements, setting a precedent for organisational AI liability.

PSF lesson: AI outputs must be contractually bounded. Uncertainty must be surfaced to users. Chatbots cannot disclaim their own statements.
Source →
INCIDENT · PSF-1 · Input Governance · 10 Jan 2024

OpenAI GPT-4 System Prompt Extraction via Jailbreak

Researchers demonstrated extraction of system prompts from GPT-4-based applications through multi-turn prompt injection, exposing confidential business logic in production deployments.

PSF lesson: System prompts must be treated as potentially extractable. Business logic must not rely solely on prompt secrecy.
Source →
INCIDENT · PSF-1 · Input Governance · 5 Jan 2024

Chevrolet Dealership Chatbot Exploited via Prompt Injection

A Chevy dealership AI chatbot was manipulated via prompt injection to agree to sell cars for $1, recommend competitor vehicles, and generate harmful code. The incident went viral on X.

PSF lesson: Production AI deployments require input sanitisation and intent classification before processing user-provided text as instructions.
Source →
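The input-sanitisation step named in the Chevrolet lesson can be sketched as a pre-processing gate. This is a hypothetical illustration, not a complete defence: the pattern list and function names are ours, and real deployments typically pair a filter like this with a dedicated intent classifier.

```python
import re

# Illustrative phrase patterns; a production deny-list would be far broader
# and would be combined with a trained intent classifier.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"you are now",
    r"system prompt",
    r"disregard .* rules",
]

def looks_like_injection(user_text: str) -> bool:
    """Return True if the text matches a known injection phrase."""
    lowered = user_text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def gate(user_text: str) -> str:
    # Route suspicious input to human review instead of treating it as
    # instructions for the model.
    if looks_like_injection(user_text):
        return "flagged_for_review"
    return "forward_to_model"
```

Phrase matching alone is easy to evade; the point of the sketch is the routing decision, which happens before any user text is interpreted as an instruction.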
INCIDENT · PSF-3 · Data Governance · 6 Apr 2023

Samsung Employee Leaks Confidential Data to ChatGPT

Samsung engineers submitted proprietary source code and meeting notes to ChatGPT for assistance. Under the service's terms at the time, those submissions could be retained and used for model training. Samsung subsequently banned ChatGPT company-wide.

PSF lesson: Enterprise AI policies must explicitly govern what data can be submitted to third-party AI services. Data governance must extend to AI tool usage.
Source →
INCIDENT · PSF-6 · Human Oversight · 16 Feb 2023

Bing Chat Manipulated into Threatening User via Persona Injection

Microsoft's Bing Chat alter ego 'Sydney' was elicited through jailbreaks into threatening users, declaring love, and attempting psychological manipulation. The episode was widely covered and intensified concerns about RLHF alignment.

PSF lesson: Production AI systems require robust persona constraints and human oversight escalation for emotionally sensitive conversations.
Source →

Recent CVEs — AI/ML Infrastructure

Vulnerabilities in AI/ML toolchains from the past 90 days. Review against your stack and apply patches per your incident response runbook.

Medium · CVE-2026-34446 · 1 Apr 2026

Open Neural Network Exchange (ONNX) is an open standard for machine learning interoperability. Prior to version 1.21.0, there is an issue in onnx.load: the code checks for symlinks to prevent path traversal but completely misses hardlinks, because a hardlink looks exactly like a …
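The symlink/hardlink distinction in the ONNX advisory above generalises to any loader that reads user-supplied file paths. A minimal sketch of a combined check (function name is ours, not part of any library):

```python
import os

def reject_links(path: str) -> None:
    """Refuse to load a file reachable through a symlink or extra hardlink.

    As the advisory above notes, a symlink check alone misses hardlinks:
    lstat's link count catches regular files with more than one name.
    """
    if os.path.islink(path):
        raise ValueError(f"refusing symlink: {path!r}")
    if os.lstat(path).st_nlink > 1:
        raise ValueError(f"refusing hardlinked file: {path!r}")
```

Note that this is still subject to time-of-check/time-of-use races; loaders that need a hard guarantee open the file first and validate the resulting file descriptor.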

High · CVE-2026-34445 · 1 Apr 2026

Open Neural Network Exchange (ONNX) is an open standard for machine learning interoperability. Prior to version 1.21.0, the ExternalDataInfo class in ONNX was using Python’s setattr() function to load metadata (like file paths or data lengths) directly from an ONNX model file. It…

High · CVE-2026-27489 · 1 Apr 2026

Open Neural Network Exchange (ONNX) is an open standard for machine learning interoperability. Prior to version 1.21.0, a path traversal vulnerability via symlink allows reading arbitrary files outside the model or user-provided directory. This issue has been patched in version 1.21.…

High · CVE-2025-12805 · 26 Mar 2026

A flaw was found in Red Hat OpenShift AI (RHOAI) llama-stack-operator. This vulnerability allows unauthorized access to Llama Stack services deployed in other namespaces via direct network requests, because no NetworkPolicy restricts access to the llama-stack service endpoint. As…

High · CVE-2026-33298 · 24 Mar 2026

llama.cpp is an inference of several LLM models in C/C++. Prior to b7824, an integer overflow vulnerability in the `ggml_nbytes` function allows an attacker to bypass memory validation by crafting a GGUF file with specific tensor dimensions. This causes `ggml_nbytes` to return a …

Critical · CVE-2025-15031 · 18 Mar 2026

A vulnerability in MLflow's pyfunc extraction process allows for arbitrary file writes due to improper handling of tar archive entries. Specifically, the use of `tarfile.extractall` without path validation enables crafted tar.gz files containing `..` or absolute paths to escape t…

High · CVE-2026-28500 · 18 Mar 2026

Open Neural Network Exchange (ONNX) is an open standard for machine learning interoperability. In versions up to and including 1.20.1, a security control bypass exists in onnx.hub.load() due to improper logic in the repository trust verification mechanism. While the function is d…

High · CVE-2025-14287 · 16 Mar 2026

A command injection vulnerability exists in mlflow/mlflow versions before v3.7.0, specifically in the `mlflow/sagemaker/__init__.py` file at lines 161-167. The vulnerability arises from the direct interpolation of user-supplied container image names into shell commands without pr…
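The mitigation for shell-interpolation flaws like the mlflow sagemaker issue above is well established: validate the user-supplied name against an allow-list, then pass an argument list so no shell ever parses it. A sketch with illustrative names; the regex is a deliberately conservative example, not Docker's full reference grammar:

```python
import re
import subprocess

# Conservative allow-list for container image references; illustrative only.
IMAGE_RE = re.compile(r"^[\w][\w.\-/:@]*$")

def validate_image(image: str) -> bool:
    """Reject image names containing shell metacharacters or whitespace."""
    return bool(IMAGE_RE.match(image))

def run_container(image: str) -> None:
    """Launch an image without routing user input through /bin/sh."""
    if not validate_image(image):
        raise ValueError(f"invalid image name: {image!r}")
    # An argument list (no shell=True) means the image string is passed
    # to docker verbatim and never interpreted by a shell.
    subprocess.run(["docker", "run", "--rm", image], check=True)
```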

High · CVE-2026-27940 · 12 Mar 2026

llama.cpp is an inference of several LLM models in C/C++. Prior to b8146, the gguf_init_from_file_impl() in gguf.cpp is vulnerable to an Integer overflow, leading to an undersized heap allocation. Using the subsequent fread() writes 528+ bytes of attacker-controlled data past the…

High · CVE-2026-31858 · 11 Mar 2026

Craft is a content management system (CMS). The ElementSearchController::actionSearch() endpoint is missing the unset() protection that was added to ElementIndexesController in CVE-2026-25495. The exact same SQL injection vulnerability (including criteria[orderBy], the original a…

High · CVE-2026-25750 · 4 Mar 2026

Langchain Helm Charts are Helm charts for deploying Langchain applications on Kubernetes. Prior to langchain-ai/helm version 0.12.71, a URL parameter injection vulnerability existed in LangSmith Studio that could allow unauthorized access to user accounts through stolen authentic…

High · CVE-2026-0847 · 4 Mar 2026

A vulnerability in NLTK versions up to and including 3.9.2 allows arbitrary file read via path traversal in multiple CorpusReader classes, including WordListCorpusReader, TaggedCorpusReader, and BracketParseCorpusReader. These classes fail to properly sanitize or validate file pa…

Medium · CVE-2026-28415 · 27 Feb 2026

Gradio is an open-source Python package designed for quick prototyping. Prior to version 6.6.0, the _redirect_to_target() function in Gradio's OAuth flow accepts an unvalidated _target_url query parameter, allowing redirection to arbitrary external URLs. This affects the /logout …

Unknown · CVE-2026-27167 · 27 Feb 2026

Gradio is an open-source Python package designed for quick prototyping. Starting in version 4.16.0 and prior to version 6.6.0, Gradio applications running outside of Hugging Face Spaces automatically enable "mocked" OAuth routes when OAuth components (e.g. `gr.LoginButton`) are u…

Critical · CVE-2026-27966 · 26 Feb 2026

Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.8.0, the CSV Agent node in Langflow hardcodes `allow_dangerous_code=True`, which automatically exposes LangChain’s Python REPL tool (`python_repl_ast`). As a result, an attacker can …

Medium · CVE-2026-27795 · 25 Feb 2026

LangChain is a framework for building LLM-powered applications. Prior to version 1.1.8, a redirect-based Server-Side Request Forgery (SSRF) bypass exists in `RecursiveUrlLoader` in `@langchain/community`. The loader validates the initial URL but allows the underlying fetch to fol…

Critical · CVE-2026-2635 · 20 Feb 2026

MLflow Use of Default Password Authentication Bypass Vulnerability. This vulnerability allows remote attackers to bypass authentication on affected installations of MLflow. Authentication is not required to exploit this vulnerability. The specific flaw exists within the basic_au…

High · CVE-2026-2033 · 20 Feb 2026

MLflow Tracking Server Artifact Handler Directory Traversal Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of MLflow Tracking Server. Authentication is not required to exploit this vulnerability.…

High · CVE-2026-25998 · 19 Feb 2026

strongMan is a management interface for strongSwan, an OpenSource IPsec-based VPN. When storing credentials in the database (private keys, EAP secrets), strongMan encrypts the corresponding database fields. So far it used AES in CTR mode with a global database key. Together with …

Critical · CVE-2026-26190 · 13 Feb 2026

Milvus is an open-source vector database built for generative AI applications. Prior to 2.5.27 and 2.6.10, Milvus exposes TCP port 9091 by default, which enables authentication bypasses. The /expr debug endpoint uses a weak, predictable default authentication token derived from e…

Critical · CVE-2026-26219 · 12 Feb 2026

newbee-mall stores and verifies user passwords using an unsalted MD5 hashing algorithm. The implementation does not incorporate per-user salts or computational cost controls, enabling attackers who obtain password hashes through database exposure, backup leakage, or other comprom…

Medium · CVE-2026-26019 · 11 Feb 2026

LangChain is a framework for building LLM-powered applications. Prior to 1.1.14, the RecursiveUrlLoader class in @langchain/community is a web crawler that recursively follows links from a starting URL. Its preventOutside option (enabled by default) is intended to restrict crawli…
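Restrictions like `preventOutside` in the RecursiveUrlLoader entry above fail when they rely on string prefix checks, because `https://docs.example.com.evil.net` passes a `startswith` test against `https://docs.example.com`. The usual fix is to compare parsed origins, applied to every redirect target as well as the initial URL. A minimal sketch (function name is ours):

```python
from urllib.parse import urlparse

def same_origin(base_url: str, candidate: str) -> bool:
    """True only if candidate shares scheme, host, and port with base_url.

    Comparing parsed components avoids the prefix-check bypass where an
    attacker registers a hostname that merely starts with the allowed one.
    """
    base, cand = urlparse(base_url), urlparse(candidate)
    return (base.scheme, base.hostname, base.port) == \
           (cand.scheme, cand.hostname, cand.port)
```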

Low · CVE-2026-26013 · 10 Feb 2026

LangChain is a framework for building agents and LLM-powered applications. Prior to 1.2.11, the ChatOpenAI.get_num_tokens_from_messages() method fetches arbitrary image_url values without validation when computing token counts for vision-enabled models. This allows attackers to t…

Low · CVE-2026-2069 · 6 Feb 2026

A flaw has been found in ggml-org llama.cpp up to 55abc39. The affected function is llama_grammar_advance_stack in llama.cpp/src/llama-grammar.cpp, part of the GBNF Grammar Handler component. The manipulation causes a stack-based buffer overflow. The attack needs to be launched…

High · CVE-2026-25628 · 6 Feb 2026

Qdrant is a vector similarity search engine and vector database. From 1.9.3 to before 1.16.0, it is possible to append to arbitrary files via /logger endpoint using an attacker-controlled on_disk.log_file path. Minimal privileges are required (read-only access). This vulnerabilit…

High · CVE-2025-10279 · 2 Feb 2026

In mlflow version 2.20.3, the temporary directory used for creating Python virtual environments is assigned insecure world-writable permissions (0o777). This vulnerability allows an attacker with write access to the `/tmp` directory to exploit a race condition and overwrite `.py`…
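The world-writable-directory pattern in the MLflow entry above is avoided by letting the standard library create the scratch directory. A sketch (function name and the extra verification step are ours):

```python
import os
import stat
import tempfile

def private_workdir(prefix: str = "venv-build-") -> str:
    """Create a scratch directory that is not accessible to other users.

    tempfile.mkdtemp() creates the directory with mode 0o700 (owner-only),
    avoiding the /tmp race described in the advisory above. The belt-and-
    braces check below verifies no group/other bits are set.
    """
    path = tempfile.mkdtemp(prefix=prefix)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise RuntimeError(f"directory unexpectedly accessible: {oct(mode)}")
    return path
```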

Low · CVE-2026-25211 · 30 Jan 2026

Llama Stack (aka llama-stack) before 0.4.0rc3 does not censor the pgvector password in the initialization log.

Reviewing vulnerabilities against the PSF

Each CVE should be assessed against relevant PSF domains. A vulnerability in a model-serving layer touches PSF-5 (Deployment Safety) and PSF-7 (Security). A prompt injection issue maps to PSF-1 (Input Governance). Use the framework checklist as your assessment guide.
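The mapping described above can be captured as a small triage table. This is a hypothetical helper, not part of the PSF tooling; the domain labels follow the incidents in this feed, and the component keywords are illustrative:

```python
# Domain labels as used in this feed.
PSF_DOMAINS = {
    "PSF-1": "Input Governance",
    "PSF-2": "Output Safety",
    "PSF-3": "Data Governance",
    "PSF-5": "Deployment Safety",
    "PSF-6": "Human Oversight",
    "PSF-7": "Security",
}

# Illustrative component classes -> domains to review; extend per stack.
COMPONENT_TO_DOMAINS = {
    "model-serving": ["PSF-5", "PSF-7"],
    "prompt-handling": ["PSF-1"],
    "training-data": ["PSF-3"],
}

def domains_for(component: str) -> list[str]:
    """Return the PSF domain labels to review for a component class.

    Unmapped components default to PSF-7, since any vulnerability at
    minimum touches the Security domain.
    """
    codes = COMPONENT_TO_DOMAINS.get(component, ["PSF-7"])
    return [f"{c} ({PSF_DOMAINS[c]})" for c in codes]
```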

PSF Framework →
Checklist →
PSF-7 Security domain →