Meta halts Mercor collaboration after massive data breach: LiteLLM supply chain attack hits the AI industry
A supply chain attack on the popular open source tool LiteLLM has put the AI industry on high alert. Meta confirmed Friday that it is indefinitely suspending all collaboration with AI training data company Mercor after a coordinated hack exposed potentially billions of dollars' worth of proprietary AI training data.
The attack was carried out by a group identified as TeamPCP, which published malicious versions of LiteLLM to PyPI, Python's official package registry. LiteLLM is a widely used proxy for large language models, and security researchers estimate the tool is installed in 36 percent of all cloud environments globally. The compromised versions contained credential-stealing malware that opened backdoors into Mercor's internal systems.
Mercor, valued at 10 billion dollars, supplies training data to the biggest AI labs including Meta, OpenAI, and Anthropic. The stolen material reportedly includes data selection criteria, labeling protocols, and training strategies, as well as source code and internal databases containing videos and verification workflows.
The hacker group Lapsus$ claims to have stolen over 4 terabytes of Mercor data and is reportedly attempting to auction the material. It remains unclear whether Lapsus$ is directly connected to TeamPCP's supply chain attack or whether these are two separate actors who exploited the same vulnerability.
Meta employees assigned to Mercor projects report work disruptions as the investigation continues. Other AI labs are now assessing their exposure and reconsidering engagements with the company.
The incident highlights how vulnerable AI training infrastructure has become. LiteLLM is deeply integrated into many production systems, often without IT departments having full visibility into every deployment. The attack is a wake-up call for software supply chain security in the AI era.
Mercor confirmed the incident in a brief statement and says it is cooperating with security authorities. Neither Meta, OpenAI, nor Anthropic has commented on the scope of potentially exposed data.
📬 Did you like this?
AI news for leaders. Curated by a CIO who builds it himself. Daily in your inbox.