News · Last updated April 13, 2026

RSA 2026: 46% of Companies Expose Sensitive Data to AI Agents — The API Key Security Crisis

A Keeper Security report presented at RSA Conference 2026 found that nearly half of organizations give AI-powered tools access to their most sensitive data while lacking adequate non-human identity controls.


At RSA Conference 2026 in San Francisco on April 7, password security firm Keeper Security released a report that crystallized a security problem many organizations have been reluctant to confront: nearly half (46%) of companies now give AI-powered tools access to their most sensitive data and critical systems — yet the majority lack adequate controls for managing the non-human identities (NHIs) that govern this access.

The report, covered by HackRead, arrives in a week when the LiteLLM PyPI supply chain compromise demonstrated exactly what happens when AI agent infrastructure is breached: 4TB of data exfiltrated, API keys stolen, and a wave of lawsuits following the downstream incident at AI staffing firm Mercor.

What Are Non-Human Identities?

Non-human identities are the software-based credentials that authenticate automated systems rather than human users. They include:

  • API keys — used by applications, microservices, and AI agents to authenticate against external services
  • Service accounts — machine-level accounts that run automated processes and batch jobs
  • OAuth tokens — authorization grants used by third-party integrations and AI tools
  • AI agent credentials — the access tokens, API keys, and session credentials that agentic AI systems use to interact with data stores, SaaS platforms, and APIs

The Keeper Security report found that organizations are rapidly deploying AI agents with broad data access — but treating those agents' credentials with far less rigor than human user credentials. The result is a sprawling, under-monitored attack surface.

The Scale of the Problem

According to the RSA 2026 report, the NHI security gap has several dimensions:

  • 46% of companies give AI tools access to their most sensitive data — customer records, financial data, PII, and intellectual property — without the same access controls applied to human users
  • API keys routinely lack expiry dates — most organizations cannot tell you how many active API keys exist across their systems, let alone when they were last rotated
  • Audit trails for NHI access are incomplete or absent — when an AI agent queries a database or calls an external API, the action is often logged under a generic service account rather than traced to the specific agent or workflow that initiated it
  • Least-privilege is rarely applied to AI agents — agents are typically granted the broadest access needed for any possible task, rather than the narrowest access required for specific workflows
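
The least-privilege gap in the last bullet can be narrowed by minting short-lived, scope-limited tokens per workflow instead of one broad key per agent. A minimal sketch, assuming an in-memory token store (the function names, scope strings, and 15-minute TTL are illustrative):

```python
import secrets
from datetime import datetime, timedelta, timezone

# In-memory token store for the sketch; production would use a secrets manager.
_TOKENS: dict[str, dict] = {}

def issue_agent_token(agent_id: str, scopes: set[str], ttl_minutes: int = 15) -> str:
    """Mint a short-lived token limited to the scopes an agent actually needs."""
    token = secrets.token_urlsafe(32)
    _TOKENS[token] = {
        "agent_id": agent_id,
        "scopes": scopes,
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    }
    return token

def authorize(token: str, required_scope: str) -> bool:
    """Deny by default: unknown, expired, or out-of-scope tokens all fail."""
    grant = _TOKENS.get(token)
    if grant is None or datetime.now(timezone.utc) >= grant["expires_at"]:
        return False
    return required_scope in grant["scopes"]
```

A summarization agent issued a token with only `{"crm:read"}` simply cannot write to the CRM, no matter what its prompt or a compromised dependency asks it to do.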

This is not a theoretical concern. The LiteLLM supply chain breach reported by InfoQ demonstrated that a single compromised AI runtime library can harvest long-lived API keys from developer environments and cloud configurations, then use those keys to pivot into connected data stores. The initial target was developer tooling; the ultimate loss was 4TB of sensitive production data.

The API Key Lifecycle Gap

API keys are the connective tissue of the modern application stack. Every SaaS integration, every AI tool, every microservice communication involves an API key at some point. Yet most organizations manage API keys through informal, manual processes:

  • Keys are generated by developers and stored in environment variables, .env files, or secrets managers — with inconsistent practices across teams
  • Keys are rarely rotated unless a breach forces rotation
  • Revocation processes are slow — when an API key is compromised, identifying and revoking all affected integrations can take days
  • Keys for AI agents are often shared across multiple workflows, meaning a single compromised key invalidates multiple automated processes simultaneously
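
The no-rotation and slow-revocation problems above are commonly addressed with overlapping rotation: issue a new key, honor the old one for a short grace window so in-flight integrations can switch over, then drop it. A simplified sketch (the class, method names, and grace period are assumptions, not a specific vendor's API):

```python
import secrets
from datetime import datetime, timedelta, timezone

class RotatingKey:
    """Rotate an API key while keeping the previous key valid for a short
    grace window, so integrations can cut over without breaking."""

    def __init__(self, grace_minutes: int = 10):
        self.current = secrets.token_urlsafe(32)
        self.previous: str | None = None
        self.previous_expires: datetime | None = None
        self.grace = timedelta(minutes=grace_minutes)

    def rotate(self) -> str:
        """Mint a new key; the old one stays valid until the grace window ends."""
        self.previous = self.current
        self.previous_expires = datetime.now(timezone.utc) + self.grace
        self.current = secrets.token_urlsafe(32)
        return self.current

    def is_valid(self, key: str) -> bool:
        if key == self.current:
            return True
        return (
            key == self.previous
            and self.previous_expires is not None
            and datetime.now(timezone.utc) < self.previous_expires
        )

    def revoke_all(self) -> None:
        """Immediate kill switch for a suspected compromise."""
        self.current = secrets.token_urlsafe(32)
        self.previous = None
        self.previous_expires = None
```

The `revoke_all` path is the part most organizations lack: a single call that invalidates every outstanding copy of a credential, rather than a days-long hunt through integrations.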

The RSA 2026 report notes that AI adoption has dramatically accelerated the NHI proliferation problem. Each new AI agent, copilot integration, or automated workflow adds new credentials to the organizational footprint — credentials that are rarely inventoried, monitored, or governed with the same rigor as human access.

What the Axios and LiteLLM Attacks Taught Us

The Axios npm supply chain attack (April 2026) and the LiteLLM PyPI compromise share a common pattern: attackers targeted the tools that developers trust, extracted credentials harvested from developer environments, and used those credentials to access production systems.

In both cases, organizations with strong NHI hygiene — short-lived tokens, automated rotation, least-privilege scoping, and real-time anomaly detection on API usage — were substantially less exposed than those relying on long-lived static API keys.

According to Security Boulevard's post-mortem on the LiteLLM breach, the AI supply chain is fundamentally an API supply chain: the credentials that AI tools use are the attack surface, and protecting those credentials requires the same discipline applied to human identity management.

Protecting PII in the AI-Augmented Data Layer

When AI agents have broad access to production data, a secondary risk emerges beyond credential theft: AI tools can inadvertently expose, log, or transmit personally identifiable information and sensitive business data in ways that violate GDPR, CCPA, and the growing wave of US state privacy laws.

An AI tool with read access to a customer database may include raw PII in its outputs, logs, or training feedback loops — creating GDPR Article 32 exposure without any traditional "breach" occurring. This is the PII governance problem that becomes acute when AI agents operate at scale.

Organizations building AI-augmented data workflows need controls that operate at the data layer, not just the identity layer:

  • Automated PII detection that flags sensitive fields before they enter AI processing pipelines
  • Redaction and pseudonymization at the API layer, ensuring AI agents receive anonymized data wherever full PII access is unnecessary
  • Audit logging that ties specific AI agent queries to specific data access events, enabling regulatory reporting on data processing activities
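
As an illustration of the redaction control in the second bullet, even a minimal regex pass can strip common PII patterns before text reaches an agent, so prompts and logs never contain the raw values. This is a sketch only; production systems need far more robust detection than a few regexes, and the patterns and placeholder labels here are illustrative:

```python
import re

# Minimal regex patterns for common PII; a real deployment would use a
# dedicated detection service rather than regexes alone.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before the text is
    handed to an AI agent or written to a log."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text
```

Typed placeholders such as `[REDACTED:email]` preserve enough structure for the agent to reason about the data without ever seeing the value itself.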

GlobalShield from APIVult provides exactly these controls through a simple REST API — enabling development teams to add PII detection and redaction to their AI data pipelines without rebuilding the underlying infrastructure.

What Organizations Should Do

The Keeper Security RSA 2026 report is a call to action for security, compliance, and platform engineering teams:

  1. Inventory all NHIs — catalogue every API key, service account, OAuth token, and AI agent credential in your environment; you cannot protect what you cannot see
  2. Apply least-privilege to AI agents — scope every AI tool's access to the minimum required for its specific function, not the maximum it might ever need
  3. Implement automated key rotation — move away from long-lived static API keys toward short-lived tokens with automated rotation
  4. Add PII detection to AI data pipelines — ensure that AI agents processing production data cannot inadvertently access or leak sensitive personal information
  5. Monitor NHI behavior in real time — establish baselines for normal API usage patterns and alert on anomalies that may indicate compromised credentials
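
Step 5 can start simply: record which endpoints each credential normally calls during a learning window, then flag first-time access as a tripwire for stolen keys. A toy sketch (the class and method names are hypothetical, and a real system would also baseline request rates and timing):

```python
from collections import defaultdict

class NHIBaseline:
    """Track which endpoints each credential normally calls and flag
    first-time access after a learning window ends."""

    def __init__(self):
        self.seen: dict[str, set[str]] = defaultdict(set)
        self.learning = True  # set False once a baseline has been built

    def observe(self, key_id: str, endpoint: str) -> bool:
        """Record a call; return True if it is anomalous, i.e. an endpoint
        this credential has never touched after the learning phase."""
        novel = endpoint not in self.seen[key_id]
        self.seen[key_id].add(endpoint)
        # During the learning window every endpoint is new by definition.
        return novel and not self.learning
```

A compromised AI agent key suddenly calling a billing-export endpoint it has never used is exactly the signal this kind of baseline surfaces within one request.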

The RSA 2026 report is a reminder that AI adoption creates security debt if it outpaces security governance. The organizations that get this right are those that treat AI agent credentials with the same rigor they apply to privileged human access — and add PII controls to every data pipeline an AI agent touches.

Ready to add PII detection and redaction to your AI data pipeline? Explore GlobalShield on APIVult.

Sources