Meta unveils four new AI chip generations to cut Nvidia dependency
Meta · AI chips · Hardware · CIO


Joachim Høgby · 26 March 2026 · 3 min read

Meta has announced four new generations of its custom AI chips, dubbed MTIA (Meta Training and Inference Accelerator), with model names 300, 400, 450, and 500. All four generations are set to be deployed across Meta's data center infrastructure by the end of 2027.

The strategy is clear: Meta wants to reduce its reliance on Nvidia and cut costs by controlling the full silicon stack itself. The company is following the same path as Google with TPUs and Amazon with Trainium and Inferentia.

The new chips are designed specifically for Meta's AI workloads, from training large language models to real-time inference across Meta's platforms serving over three billion daily users.

This comes amid a period where Meta is cutting jobs in sales, recruiting, and Reality Labs, reallocating resources toward AI investment. The company is clearly signaling that AI infrastructure is its core priority going forward.

For CIOs and technology leaders, this is a reminder that vertical integration of AI hardware is no longer exclusive to the three largest cloud players. Whoever controls the chips controls the costs and capacity in the AI era.

📬 Liked this one?

AI news for leaders. Curated by a CIO who builds it himself. Daily in your inbox.