Meta unveils four new AI chip generations to cut Nvidia dependency
Meta has announced four new generations of its custom MTIA (Meta Training and Inference Accelerator) AI chips, designated 300, 400, 450, and 500. All four generations are slated for deployment across Meta's data center infrastructure by the end of 2027.
The strategy is clear: Meta wants to reduce its reliance on Nvidia and cut costs by controlling the full silicon stack itself. The company is following the same path as Google with TPUs and Amazon with Trainium and Inferentia.
The chips are purpose-built for Meta's AI workloads, from training large language models to real-time inference across platforms serving more than three billion daily users.
The announcement comes as Meta cuts jobs in sales, recruiting, and Reality Labs and reallocates those resources toward AI investment, a clear signal that AI infrastructure is the company's core priority going forward.
For CIOs and technology leaders, it is a reminder that vertical integration of AI hardware is no longer the preserve of the three largest cloud providers. Whoever controls the chips controls cost and capacity in the AI era.
📬 Did you like this one?
AI news for leaders. Curated by a CIO who builds it himself. Daily in your inbox.