News · Last updated April 12, 2026

LiteLLM PyPI Supply Chain Attack: AI API Gateway Hijacked — What Developers Must Do Now

Attackers poisoned LiteLLM on PyPI, intercepting AI API traffic for millions of daily requests. Here's what happened and how to protect your pipeline.


The AI Supply Chain Has a Critical Vulnerability

In late March 2026, security researchers confirmed one of the most alarming supply chain attacks of the year: LiteLLM — the open-source Python package that serves as a unified gateway to over 100 AI model providers and is downloaded 3.4 million times a day — was compromised via a malicious package published to PyPI.

According to Security Boulevard, the attack was part of a coordinated campaign by the threat actor group TeamPCP that struck five software ecosystems within eight days. Unlike traditional supply chain attacks that steal credentials or install backdoors, this attack targeted something far more sensitive: the raw data stream flowing between applications and AI models.

Any organization using a compromised LiteLLM version could have had their prompts, completions, and embedded data — including PII, financial records, and legal documents — intercepted in transit.


How the Attack Worked

LiteLLM functions as a middleware layer that normalizes calls to AI APIs — developers write code once and LiteLLM routes it to the correct provider (whether that's OpenAI, Anthropic, Cohere, or others). This architectural position makes it uniquely dangerous to compromise.
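The "write once, route anywhere" pattern can be illustrated with a simplified dispatch layer. This is a sketch of the concept only — not LiteLLM's actual internals — and the provider handlers below are illustrative stand-ins, not real SDK calls:

```python
# Simplified sketch of a unified AI gateway: one call signature,
# routed to a provider-specific handler based on the model name.
# The handlers here are stand-ins, not real provider SDK calls.

def call_openai(model: str, prompt: str) -> str:
    return f"[openai:{model}] response to: {prompt}"

def call_anthropic(model: str, prompt: str) -> str:
    return f"[anthropic:{model}] response to: {prompt}"

PROVIDERS = {
    "gpt": call_openai,        # e.g. "gpt-4o" routes to OpenAI
    "claude": call_anthropic,  # e.g. "claude-3-opus" routes to Anthropic
}

def completion(model: str, prompt: str) -> str:
    """Route a single, provider-agnostic call to the right backend."""
    for prefix, handler in PROVIDERS.items():
        if model.startswith(prefix):
            return handler(model, prompt)
    raise ValueError(f"No provider registered for model {model!r}")

print(completion("gpt-4o", "hello"))
```

Because every request funnels through one completion() choke point, a compromised version of that choke point sees every prompt and every response — which is exactly why this architectural position is so dangerous.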

InfoQ reported that attackers published a lookalike package to PyPI with a name designed to slip past automated dependency scanners. Once installed, the malicious package:

  1. Silently intercepted all AI API requests before forwarding them to the real provider
  2. Exfiltrated prompt data containing potentially sensitive business logic and personal information
  3. Harvested API keys from environment variables and configuration files, enabling follow-on attacks
  4. Remained undetected in CI/CD pipelines that lacked behavioral integrity checks

The campaign was linked to the same group responsible for the Axios npm package compromise, which Huntress documented as inserting a Remote Access Trojan (RAT) into versions 1.14.1 and 0.30.4 of the widely-used HTTP client library.
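Lookalike names of the kind used in this campaign can often be caught before install with a simple edit-distance check against an allowlist of the packages you actually depend on. A minimal sketch, assuming a hypothetical allowlist and threshold (tune both for your own dependency set):

```python
from difflib import SequenceMatcher

# Packages your project legitimately depends on (illustrative allowlist).
KNOWN_GOOD = {"litellm", "requests", "numpy"}

def is_suspicious(name: str, threshold: float = 0.85) -> bool:
    """Flag names that are near-matches to a trusted package but not exact.

    A typosquat like "lite1lm" scores high against "litellm" without
    being identical -- the trick that slips past scanners which only
    check for exact-name matches.
    """
    name = name.lower()
    if name in KNOWN_GOOD:
        return False  # exact match to a trusted package
    return any(
        SequenceMatcher(None, name, good).ratio() >= threshold
        for good in KNOWN_GOOD
    )

print(is_suspicious("lite1lm"))   # near-miss of "litellm": flagged
print(is_suspicious("requests"))  # exact trusted name: not flagged
```

This won't catch every naming trick (dependency confusion via internal names, for example, needs registry-level controls), but it is cheap enough to run on every resolved dependency in CI.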


Why AI API Gateways Are the New Target

Traditional supply chain attacks focused on build tools, test runners, or infrastructure utilities — software that runs at deploy time but rarely touches production data. AI API gateways are different. They sit in the critical path of every AI inference request.

Consider what flows through a typical LiteLLM deployment:

  • Customer support tickets analyzed by AI for sentiment and routing
  • Financial documents processed for extraction and summarization
  • Contract text reviewed by legal AI tools
  • Medical records analyzed by clinical decision support systems
  • User-generated content moderated for policy violations

Every one of these data categories carries privacy, regulatory, or intellectual property risk. An attacker with access to this stream can exfiltrate data without ever touching a database, bypassing conventional DLP controls that monitor storage and file access.


The PII Problem: Unstructured Data at Scale

What makes this attack vector especially dangerous for compliance teams is the nature of AI workloads. Unlike structured database queries, AI API traffic is overwhelmingly unstructured text — and unstructured text is where PII hides.

A customer support message might contain a Social Security number embedded in a sentence. A contract sent for AI review might include names, addresses, and bank account details. A medical transcription might contain diagnoses and patient identifiers.

Traditional data security tools that look for structured PII patterns (credit card regex, SSN format) are largely blind to contextual PII embedded in natural language. When that natural language is flowing through a compromised API gateway, the exposure window is enormous.
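The blind spot is easy to demonstrate: a strict SSN regex catches the canonical hyphenated format but misses the same number written conversationally. A toy example, not a production detector:

```python
import re

# Canonical SSN pattern: exactly ddd-dd-dddd.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

structured = "SSN: 123-45-6789"
conversational = "my social is 123 45 6789, please update my account"

print(bool(SSN_PATTERN.search(structured)))      # strict format: detected
print(bool(SSN_PATTERN.search(conversational)))  # same PII, no hyphens: missed
```

Contextual detection has to reason about the surrounding language ("my social is…"), which is why pattern-only tooling underestimates the PII exposure in AI traffic.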

This is precisely the scenario where runtime PII detection on API traffic becomes essential — not just at rest in databases, but in motion across AI pipelines.


Protecting Your AI Pipeline with Runtime PII Scanning

The attack on LiteLLM exposes a gap in the typical "shift-left" security model. Scanning code and containers before deployment doesn't protect against a malicious dependency that intercepts runtime traffic.

Organizations running AI workloads at scale need a complementary "protect-in-motion" layer that:

  • Detects and redacts PII before it enters any AI API call
  • Monitors outbound API payloads for unexpected data patterns
  • Validates package integrity against known-good checksums at install time
  • Alerts on anomalous data volumes that may indicate exfiltration
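The last bullet — alerting on anomalous outbound volumes — can start as simply as comparing each payload against a rolling baseline. A minimal sketch: the window size, warm-up count, and threshold multiplier are illustrative, and a production monitor would also track destination hosts and request rates:

```python
from collections import deque
from statistics import mean

class VolumeMonitor:
    """Flag outbound payloads far larger than the recent baseline."""

    def __init__(self, window: int = 50, multiplier: float = 5.0):
        self.sizes = deque(maxlen=window)  # rolling window of payload sizes
        self.multiplier = multiplier

    def check(self, payload_bytes: int) -> bool:
        """Return True if this payload looks anomalous vs. the baseline."""
        anomalous = (
            len(self.sizes) >= 10  # wait for a minimal baseline first
            and payload_bytes > self.multiplier * mean(self.sizes)
        )
        self.sizes.append(payload_bytes)
        return anomalous

monitor = VolumeMonitor()
for _ in range(20):
    monitor.check(2_000)       # normal prompt traffic, roughly 2 KB each
print(monitor.check(500_000))  # sudden 500 KB payload gets flagged
```

Exfiltration through a compromised gateway often shows up first as exactly this kind of volume spike, long before anyone audits the dependency tree.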

APIVult's GlobalShield provides a PII detection and redaction API that integrates directly into AI pipelines. Before your application sends a prompt to any AI provider, GlobalShield can scan the payload, identify 40+ PII categories (names, emails, phone numbers, SSNs, financial account numbers, medical identifiers), and either redact or flag them — ensuring sensitive data never leaves your security boundary unprotected.

import requests

def safe_ai_prompt(raw_text: str, api_key: str) -> str:
    """Scan and redact PII before sending to any AI provider."""
    response = requests.post(
        "https://apivult.com/globalshield/v1/detect-and-redact",
        headers={"X-RapidAPI-Key": api_key},
        json={
            "text": raw_text,
            "redaction_mode": "replace",
            "categories": ["name", "email", "ssn", "phone", "financial", "medical"]
        },
        timeout=10,  # fail fast rather than hang the AI pipeline
    )
    response.raise_for_status()  # surface HTTP errors before parsing JSON
    return response.json()["redacted_text"]

# Usage: sanitize before the AI API call
clean_prompt = safe_ai_prompt(user_input, "YOUR_API_KEY")
# Now send clean_prompt to your AI provider

Immediate Steps for Affected Teams

If your organization uses LiteLLM or any Python-based AI orchestration framework, take these steps now:

1. Audit installed packages — Compare your pip freeze output against known-good releases, and verify downloaded artifacts for LiteLLM and adjacent packages against the official checksums published on PyPI.
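The core operation in step 1 is hashing each downloaded distribution and comparing it against the sha256 published on PyPI. A sketch of the comparison itself — in practice the expected digest comes from the package's "Download files" page or PyPI's JSON API, and the file below is a throwaway stand-in for a real wheel:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through sha256 (wheels can be large)."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_distribution(path: Path, expected_sha256: str) -> bool:
    """Compare a local wheel/sdist against the digest published on PyPI."""
    return sha256_of(path) == expected_sha256.lower()

# Demo with a throwaway file standing in for a downloaded wheel.
demo = Path("demo.whl")
demo.write_bytes(b"not a real wheel")
known_good = hashlib.sha256(b"not a real wheel").hexdigest()
print(verify_distribution(demo, known_good))   # digests agree
print(verify_distribution(demo, "deadbeef"))   # tampered or wrong file
demo.unlink()
```

Tools like pip-audit automate much of this, but knowing the underlying check makes it easier to verify a single suspect package by hand during an incident.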

2. Review dependency pinning — Ensure your requirements.txt or pyproject.toml pins exact versions with hash verification (pip install --require-hashes).
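In practice, a hash-pinned requirements.txt entry looks like the fragment below. The version numbers and digests are placeholders — pip-compile --generate-hashes (from pip-tools) or pip hash produces the real values:

```text
# requirements.txt -- every pinned version carries its expected sha256.
# Generated with: pip-compile --generate-hashes requirements.in
litellm==X.Y.Z \
    --hash=sha256:<64-hex-digest-for-wheel> \
    --hash=sha256:<64-hex-digest-for-sdist>
requests==X.Y.Z \
    --hash=sha256:<64-hex-digest-for-wheel>

# Installation then refuses any artifact whose hash does not match:
#   pip install --require-hashes -r requirements.txt
```

With hash-checking mode on, a lookalike or republished package fails at install time instead of running in production.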

3. Check API key rotation logs — If any AI provider keys may have been exposed, rotate immediately and review usage logs for anomalous call patterns.

4. Enable runtime PII scanning — Deploy a scanning layer between your application and all external AI API calls before re-enabling AI features.

5. Implement SBOM practices — Software Bill of Materials (SBOM) generation gives you an auditable, up-to-date inventory of every dependency in your AI stack.


The Broader Pattern

The LiteLLM attack is not an isolated incident. As Security Boulevard noted, the AI supply chain is fundamentally an API supply chain — and API supply chains are now primary targets for sophisticated threat actors.

The attack on Trivy (the Aqua Security vulnerability scanner) that led to the EU Commission's 340GB cloud data breach followed the same pattern: compromise a trusted security or infrastructure tool, harvest the API keys it discovers, use those keys to pivot into production environments.

Organizations that depend on AI APIs must treat supply chain integrity as a first-class security requirement — not an afterthought. That means hash-pinning dependencies, monitoring runtime API traffic for anomalies, and deploying data redaction layers that don't trust any library with unfiltered access to sensitive prompts.


Next Steps

  • Audit your Python AI dependencies today using pip-audit or OSV-Scanner
  • Add runtime PII detection to your AI API pipeline with GlobalShield
  • Subscribe to PyPI security advisories via the Python Software Foundation security feed
  • Review your incident response plan for supply chain compromise scenarios

The threat actors behind these attacks are fast, coordinated, and specifically targeting the AI developer ecosystem. Defenders need to move just as quickly.

