Meta Confirms Broadcom as AI Chip Partner: Four Generations of MTIA Over Next Two Years
Meta · AI hardware · CIO · semiconductors

Joachim Høgby
18 March 2026 · 4 min read · Source:

Meta has publicly confirmed Broadcom as its long-term partner for custom AI chip development. The partnership covers Meta's MTIA (Meta Training and Inference Accelerator) series, with the company now unveiling a four-generation roadmap spanning the next two years.

Four Generations of MTIA

According to reports from Digitimes and MarketBeat, Meta has detailed the following ASIC roadmap:

  • MTIA 300 — already in mass production
  • MTIA 400 — in development
  • MTIA 450 — planned
  • MTIA 500 — planned

All four generations are designed by Broadcom, representing one of the largest long-term AI chip partnerships in the industry to date.

Strategic Implications

For Meta, custom silicon is a strategic necessity. Dependence on Nvidia GPUs is expensive, and by building specialized chips for inference (running AI models at scale), the company can dramatically reduce costs at high volumes.

MTIA is primarily aimed at inference — not training. This means Meta remains dependent on Nvidia H100/H200 and Blackwell chips for training its models, but is gradually taking control of the higher-volume, cost-intensive inference side.
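
As a back-of-envelope sketch of why the inference side is worth optimizing first (every figure below is an illustrative assumption, not a number from Meta, Broadcom, or this article):

```python
# Hypothetical volume economics: why inference, not training, dominates
# lifetime compute cost at hyperscaler scale. All figures are assumptions
# chosen for illustration only.

training_cost = 100e6            # one-off training run, USD (assumed)
cost_per_1k_queries_gpu = 0.40   # inference on general-purpose GPUs, USD (assumed)
cost_per_1k_queries_asic = 0.25  # inference on a custom ASIC like MTIA, USD (assumed)
daily_queries = 2e9              # daily inference requests (assumed)
days = 365

gpu_inference = daily_queries / 1000 * cost_per_1k_queries_gpu * days
asic_inference = daily_queries / 1000 * cost_per_1k_queries_asic * days
savings = gpu_inference - asic_inference

# Under these assumptions, a single year of inference already exceeds
# the one-off training cost, so even a modest per-query saving from
# custom silicon compounds into a very large absolute number.
assert gpu_inference > training_cost
print(f"GPU inference/yr:  ${gpu_inference/1e6:.0f}M")
print(f"ASIC inference/yr: ${asic_inference/1e6:.0f}M")
print(f"Annual saving:     ${savings/1e6:.0f}M")
```

The point of the sketch is only the shape of the curve: training is a fixed cost, inference scales with usage, so at Meta's request volumes the recurring side is where custom silicon pays off.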

Context: Meta's Challenging AI Year

The announcement comes amid internal AI struggles. Meta has:

  • Scrapped "Behemoth" (its largest Llama 4 model) due to weak benchmarks
  • Delayed the "Avocado" model to at least May
  • Announced the largest layoffs in the company's history (15,000+)

The Broadcom partnership is one of the few bright spots in Meta's internal AI strategy. It sends a clear signal to the market: Meta is committed to AI infrastructure long-term, regardless of near-term model setbacks.

Implications for Enterprise Leaders

The trend is clear: hyperscalers are building their own chips. AWS (Trainium/Inferentia), Google (TPU), Microsoft (Maia), Apple (Neural Engine) — and now Meta more explicitly than ever.

For organizations consuming AI APIs, this ultimately means inference costs will continue to fall as cloud providers achieve better margins on proprietary silicon. GPT, Claude, and Gemini will get cheaper to call — not because AI gets simpler, but because the underlying hardware is being aggressively optimized.

📬 Enjoyed this?

AI news for leaders. Curated by a CIO who builds this himself. Daily in your inbox.