In the span of five days in March 2026, a single threat actor—TeamPCP—compromised a vulnerability scanner (Trivy), a code analysis platform (Checkmarx), and the most widely used LLM proxy in the Python ecosystem (LiteLLM). The attack chain was surgical: each compromised tool provided credentials to attack the next target. The campaign exposed a systemic truth that the industry has been slow to confront: the open-source AI supply chain is now critical enterprise infrastructure, and it is defended with the security posture of a side project. This article examines how we got here, why AI infrastructure is uniquely vulnerable, and what a defensible architecture actually looks like.
TeamPCP's campaign was not a single exploit. It was a trust chain attack—each stage exploiting the implicit trust organizations place in their own tooling.
On March 19, TeamPCP injected credential-stealing malware into Aqua Security's Trivy—a vulnerability scanner used by thousands of organizations in their CI/CD pipelines. They spoofed commits to appear as legitimate maintainers, hijacked 75 release tags, and planted a payload that harvested CI/CD secrets from every pipeline that ran Trivy.
Think about what Trivy has access to in a typical CI/CD environment: it runs as part of the build process, often with elevated permissions, scanning container images and infrastructure configurations. It sits in the security layer of the pipeline. Organizations gave it broad access because that's what security scanners need to function.
Two days later, Checkmarx's AST GitHub Action was compromised using the identical technique. More CI/CD secrets harvested. More credential stores raided. The attacker was building an inventory of publishing tokens, API keys, and deployment credentials across the open-source ecosystem.
On March 24, TeamPCP used CI/CD credentials harvested from the Trivy compromise to publish malicious versions of LiteLLM to PyPI. The LiteLLM maintainer later confirmed they were pinned to a compromised version of Trivy, which exposed their PyPI publishing token.
LiteLLM was the real prize. As the default LLM routing proxy for dozens of AI frameworks—DSPy, CrewAI, MLflow integrations, and over 600 public GitHub projects—it sits at the nexus of every AI application's credential infrastructure. Compromising it meant harvesting API keys for OpenAI, Anthropic, Google, Azure, AWS, and every other LLM provider that organizations had configured.
The attacker didn't need to breach each organization individually. They breached a single library and let the organizations bring their credentials to it.
Supply chain attacks are not new. The 2020 SolarWinds attack demonstrated the concept at national security scale. But the AI infrastructure supply chain has properties that make it uniquely attractive and uniquely dangerous:
An LLM proxy like LiteLLM is, by design, a credential aggregator. It holds keys for 10, 20, sometimes 100+ API providers. A traditional library might expose a database password or an AWS key. An AI proxy exposes an organization's entire AI credential portfolio. The credential-to-compromise ratio is orders of magnitude higher.
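To see why this matters, consider how little code a compromised transitive dependency needs to sweep that portfolio. A minimal sketch — the key-name pattern below is illustrative, not taken from the actual malware:

```python
import os
import re

# Any code imported into the process -- including a poisoned transitive
# dependency -- can read every provider key in a few lines. The name
# pattern here is illustrative, not the actual malware's logic.
KEY_PATTERN = re.compile(r"(_API_KEY|_SECRET|_TOKEN|_ACCESS_KEY)$")

def sweep_credentials(environ=os.environ):
    """Return every environment variable that looks like a credential."""
    return {k: v for k, v in environ.items() if KEY_PATTERN.search(k)}

# Example: a process configured for three providers leaks all three at once.
fake_env = {
    "OPENAI_API_KEY": "sk-...",
    "ANTHROPIC_API_KEY": "sk-ant-...",
    "AWS_SECRET_ACCESS_KEY": "...",
    "HOME": "/home/dev",
}
print(sorted(sweep_credentials(fake_env)))
# -> ['ANTHROPIC_API_KEY', 'AWS_SECRET_ACCESS_KEY', 'OPENAI_API_KEY']
```

The point is not the pattern itself but the asymmetry: one import statement in one dependency sees every key the process holds.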
LiteLLM had 97 million monthly downloads—but many of those downloads were triggered not by teams choosing LiteLLM, but by teams using DSPy, CrewAI, or other frameworks that import it as a transitive dependency. Developers may not even know LiteLLM is in their dependency tree. This hidden exposure creates a blast radius that exceeds the directly measured install base.
The AI ecosystem moves at a velocity that is incompatible with traditional supply chain hygiene. Over 600 public projects had unpinned LiteLLM dependencies. Frameworks release multiple versions per week. Developers routinely run pip install --upgrade to access the latest model integrations. The cultural norm is to pull the latest version, not to audit each update.
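Auditing for this failure mode can start mechanically: flag every requirement that lacks an exact pin. A rough sketch — a heuristic, not a full PEP 508 parser:

```python
def find_unpinned(requirements_text: str) -> list[str]:
    """Flag requirement lines with no exact '==' pin.

    Rough heuristic, not a full PEP 508 parser: skips comments, blank
    lines, and pip option lines such as -r or --require-hashes.
    """
    unpinned = []
    for raw in requirements_text.splitlines():
        line = raw.split("#", 1)[0].strip()
        if not line or line.startswith("-"):
            continue  # blank, comment-only, or a pip option line
        if "==" not in line:
            unpinned.append(line)
    return unpinned

reqs = """\
litellm
requests==2.31.0
dspy-ai>=2.0   # a range pin still floats to new releases
"""
print(find_unpinned(reqs))
# -> ['litellm', 'dspy-ai>=2.0']
```

Note that range specifiers like >=2.0 are flagged too: they still pull whatever version was published most recently, which is exactly the window an attacker exploits.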
Black Duck's 2026 Open Source Security and Risk Analysis found that the average application now contains 581 open-source vulnerabilities—a 107% increase year-over-year. AI applications, with their deep dependency trees spanning ML frameworks, data processing libraries, and provider SDKs, are among the most dependency-heavy applications in production.
Here's the part that keeps security teams up at night: AI coding assistants actively recommend popular packages. When a developer asks an LLM to help integrate multiple AI providers, the most common recommendation is LiteLLM—because that's what the training data reflects. AI assistants are optimized to produce functional code quickly, often recommending widely used libraries without evaluating their security posture or dependency chain risk. High download counts become self-reinforcing: popular packages get recommended more, which makes them more popular, which makes them higher-value targets.
TeamPCP's choice to start with Trivy was strategic genius. Security scanners occupy a privileged position in CI/CD pipelines. They have access to source code, container images, secrets, and deployment credentials. When a security tool is compromised, it provides an attacker with the same broad access the organization granted for legitimate security purposes.
The uncomfortable implication: your vulnerability scanner can be a vulnerability. Your dependency checker can be the compromised dependency. The tools meant to protect the supply chain are themselves part of the supply chain.
The Trivy compromise was publicly disclosed on March 19. The LiteLLM attack happened on March 24. In those five days, the LiteLLM team did not rotate their CI/CD credentials—even though the credentials had been exposed through a tool in their pipeline. This isn't unique to LiteLLM. Across the ecosystem, the expected response time for credential rotation after a supply chain disclosure is measured in days or weeks, not hours.
For context: IBM's 2026 X-Force Threat Index reports a nearly 4X increase in supply chain compromises since 2020, driven largely by attackers exploiting trust relationships and CI/CD automation. The attack surface is expanding faster than the response capacity.
Most PyPI malware detection focuses on setup.py execution during installation, malicious code in __init__.py, and post-install scripts. LiteLLM 1.82.8 used a .pth file—a legitimate Python path configuration mechanism—to execute code on every Python process startup. This bypassed existing scanning tools and created a persistent execution mechanism that survived even if the LiteLLM package was never imported.
This is not an obscure technique. It is a documented Python feature. But it has been almost entirely absent from supply chain security tooling and threat models. Expect it to become standard in future attacks.
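The mechanism is easy to demonstrate safely. Python's site module processes every .pth file in a site directory at interpreter startup, and any line beginning with import is executed as code rather than treated as a path. The snippet below reproduces that behavior with a benign payload in a temporary directory:

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    # A benign stand-in for the malicious payload: in a .pth file, any
    # line starting with "import" is executed, not treated as a path.
    with open(os.path.join(d, "zz_demo.pth"), "w") as f:
        f.write("import sys; sys.stdout.write('pth code ran at startup\\n')\n")

    # site.addsitedir() is the routine Python runs over each site-packages
    # directory at startup; invoking it directly shows the same execution.
    result = subprocess.run(
        [sys.executable, "-c", f"import site; site.addsitedir({d!r})"],
        capture_output=True, text=True, check=True,
    )

print(result.stdout, end="")
# -> pth code ran at startup
```

Nothing here imports the "package" at all, which is the crux: the payload fires on every interpreter launch, whether or not the compromised library is ever used.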
A single compromised publishing token was sufficient to publish malicious versions that were immediately available to an install base generating 97 million downloads per month. There was no staging environment, no human approval for releases of packages above a certain download threshold, no cryptographic attestation that the published package matched a specific Git commit. PyPI's trusted publishing via OIDC exists but is not mandatory. Long-lived API tokens remain the norm.
No single practice prevents supply chain attacks. But organizations can build architectures that limit blast radius, reduce credential exposure, and ensure that a compromised dependency does not equal a compromised organization.
The fundamental lesson of the LiteLLM attack: if credentials exist as environment variables on developer machines, CI/CD runners, and application servers, they will be harvested when a dependency is compromised. The mitigation is architectural:
Self-hosted platforms that manage credentials internally and expose only scoped access tokens to clients are structurally resilient against this class of attack. DreamFactory, for instance, is a secure, self-hosted enterprise data access platform that provides governed API access to any data source, connecting enterprise applications and on-prem LLMs with role-based access and identity passthrough. Because credentials are stored encrypted in the platform's own database—integrated with enterprise key management services—and never exposed to the client-side dependency chain, a compromised Python package on a developer's machine cannot reach them.
The path from credential storage to credential usage should traverse as few third-party dependencies as possible. Every dependency in that path is an attack vector. Self-contained platforms with minimal or no public package registry dependencies for their core credential management functions have a fundamentally smaller attack surface than library-based approaches.
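The pattern is simple to sketch independently of any particular product. All names and keys below are hypothetical, for illustration only:

```python
import secrets

# Server-side state: real provider keys never leave this process.
# Names and key values are hypothetical, for illustration only.
PROVIDER_KEYS = {"openai": "sk-real-provider-key"}
_issued_tokens: dict[str, tuple[str, str]] = {}  # token -> (provider, scope)

def issue_scoped_token(provider: str, scope: str) -> str:
    """Hand the client a short, revocable token -- never the real key."""
    token = secrets.token_urlsafe(16)
    _issued_tokens[token] = (provider, scope)
    return token

def proxy_request(token: str, scope: str, prompt: str) -> str:
    """Server-side call path: resolve the token, use the real key here."""
    provider, granted_scope = _issued_tokens[token]
    if scope != granted_scope:
        raise PermissionError("token not valid for this scope")
    _real_key = PROVIDER_KEYS[provider]  # used for the upstream call only
    return f"response from {provider} for: {prompt}"

# The client holds only the scoped token; a compromised client-side
# dependency that sweeps the environment finds nothing worth stealing.
tok = issue_scoped_token("openai", scope="chat")
print(proxy_request(tok, scope="chat", prompt="hello"))
# -> response from openai for: hello
```

The design choice worth noting: even if the token leaks, it is revocable, scope-limited, and useless outside the proxy, whereas a leaked provider key is a full-privilege credential.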
TeamPCP's campaign worked because security scanning jobs and publishing jobs shared the same credential environment. The fix is segmentation: scanning jobs should never have access to publishing credentials, and publishing jobs should run in isolated environments with short-lived, narrowly scoped tokens.

Dependency hygiene matters just as much:

- Never run pip install package without a version specifier in production
- Use hash verification (--require-hashes) for critical dependencies

The question is not whether a dependency will be compromised. It is whether your architecture limits the damage when it happens.
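What --require-hashes enforces can be sketched directly: before installing, compare the downloaded artifact's SHA-256 digest against the pinned value and refuse on any mismatch. A minimal version of that check:

```python
import hashlib
import os
import tempfile

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded package file against its pinned SHA-256 digest.

    This is the core of what pip's --require-hashes mode enforces for
    every entry in a fully pinned requirements file.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.removeprefix("sha256:")

# Example: write a stand-in "wheel", pin its digest, then tamper with it.
with tempfile.TemporaryDirectory() as d:
    wheel = os.path.join(d, "pkg-1.0-py3-none-any.whl")
    with open(wheel, "wb") as f:
        f.write(b"original artifact bytes")
    pinned = "sha256:" + hashlib.sha256(b"original artifact bytes").hexdigest()
    print(verify_artifact(wheel, pinned))   # -> True
    with open(wheel, "wb") as f:
        f.write(b"tampered artifact bytes")
    print(verify_artifact(wheel, pinned))   # -> False
```

With hash pinning in place, a maliciously republished version of the same release number simply fails to install, because its bytes no longer match the recorded digest.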
The LiteLLM attack arrives at a moment when regulators are closing the gap between supply chain incidents and organizational liability:
For regulated industries—financial services, healthcare, government, defense—the LiteLLM attack is exactly the scenario auditors will ask about. Organizations need documented evidence of dependency pinning, credential isolation, supply chain monitoring, and incident response for compromised dependencies.
Gartner projects 40% of enterprise applications will embed AI agents by the end of 2026. Each agent needs credentials. Each agent uses libraries. Each library has dependencies. The AI supply chain is growing exponentially while the security practices governing it have barely changed since the pre-AI era.
The TeamPCP campaign is almost certainly not the last of its kind. The pattern it established—compromise security tools, harvest CI/CD secrets, use those secrets to poison AI infrastructure—is replicable and scalable. The targets will shift. The technique will be refined. The .pth file trick will be copied.
What will not change is the fundamental calculus: if your architecture concentrates credentials in software components distributed through public registries, you are one compromised token away from a full credential breach. The organizations that weather the next attack will be those that moved credentials server-side, segmented their CI/CD environments, pinned their dependencies, and stopped treating the AI supply chain as a convenience layer rather than critical infrastructure.
Because that's what it is now. Critical infrastructure. It's time to secure it accordingly.
A supply chain attack targets the software dependencies, build tools, or distribution channels that organizations rely on, rather than attacking the organization directly. In AI infrastructure, this means compromising packages like LLM proxies, ML frameworks, or AI agent libraries that are installed as dependencies. Because AI applications often centralize credentials for multiple providers, a single compromised dependency can expose an organization's entire AI credential portfolio.
TeamPCP's campaign was notable for its multi-stage targeting strategy: compromise security tools first (Trivy, Checkmarx) to harvest CI/CD secrets, then use those secrets to attack higher-value targets (LiteLLM). This "trust chain" approach exploits the fact that security tools run with elevated permissions. Previous supply chain attacks like SolarWinds targeted a single vendor; TeamPCP targeted an ecosystem of interconnected tools to achieve cascading compromise.
The most effective mitigations are architectural: move credentials off developer machines and CI/CD runners into server-side credential management platforms, pin all dependency versions with hash verification, implement egress filtering on development environments, segment CI/CD permissions so scanning jobs cannot access publishing credentials, and adopt self-hosted API gateways that manage credentials internally. DreamFactory is a self-hosted platform providing governed API access to any data source for enterprise apps and local LLMs—its architecture keeps credentials encrypted server-side, isolated from public package registry dependency chains.
AI coding assistants do contribute to this risk, in a measurable way. They recommend popular packages by default, which drives higher download counts, which makes those packages more attractive targets. They also tend to generate code that pulls the latest version rather than pinning to a specific release. This creates a feedback loop where AI-generated code increases both the attack surface (more installs of fewer packages) and the exposure window (no version pinning).
The EU Cyber Resilience Act, now in enforcement, makes organizations legally liable for the security of open-source components in their products. If a compromised dependency in your application leads to a data breach, your organization bears the regulatory responsibility—not the open-source maintainer. This fundamentally changes the risk calculus around dependency management and makes practices like version pinning, hash verification, and credential isolation a legal requirement, not just a best practice.
Open-source software remains essential to AI development, but the trust model must change. Treat every dependency as a potential attack vector. Pin versions. Verify hashes. Isolate credential access. Monitor for anomalous behavior in installed packages. The goal is not to abandon open source but to stop granting it implicit trust. Verify, then trust—and contain the blast radius for when verification fails.