Intel and Google Strengthen AI Infrastructure Collaboration
Intel and Google today announced a multiyear collaboration to advance the next generation of AI and cloud infrastructure, focusing on expanded deployment of Intel Xeon processors and the development of custom infrastructure processing units.
Expanded Processor Collaboration
The new partnership ensures continued deployment of Intel® Xeon® processors across Google Cloud infrastructure for AI, inference, and general-purpose workloads. This builds upon the existing collaboration between the two technology giants.
A key aspect of the expanded partnership involves co-development of custom ASIC-based infrastructure processing units (IPUs). These IPUs are designed to offload networking, storage, and security functions from host CPUs.
Enhanced Efficiency and Performance
The IPU technology aims to improve utilization, increase efficiency, and enable more predictable performance in hyperscale AI environments. This is particularly important as AI workloads become increasingly complex and resource-intensive.
The collaboration will also see Intel and Google align across multiple generations of Intel Xeon processors to enhance performance, energy efficiency, and total cost of ownership across Google's global infrastructure.
Strategic Significance
Amin Vahdat, SVP and Chief Technologist for AI Infrastructure at Google, stated that CPUs and infrastructure acceleration remain a cornerstone of AI systems.
This partnership marks a significant investment in future AI infrastructure and positions both Intel and Google to meet the growing demand for enterprise-level AI capabilities.