Friday, April 24, 2026 · AI news, filtered for leaders
Google DeepMind releases Gemini Robotics-ER 1.6 for more autonomous robots
CIO · Robotics


Joachim Høgby
April 14, 2026 · 4 min read

Google DeepMind announced Gemini Robotics-ER 1.6 on April 14, a new version of its embodied reasoning stack for robots. This is not just a language-model update. The focus is sharper physical reasoning, including better spatial understanding, stronger multi-view perception, and a new ability to read instruments such as pressure gauges and sight glasses.

That makes the launch notable because it pushes AI value closer to real operations. DeepMind says the model can act as a robot’s high-level reasoning layer, call tools such as Google Search, work with vision-language-action models, and decide whether a task has actually been completed. Together with stronger pointing, counting, and success detection, this is the kind of infrastructure needed for robots in warehouses, industrial environments, and inspection workflows.

The clearest new capability is instrument reading, developed with Boston Dynamics. That is a useful signal for where the market is moving: not toward generic robot demos, but toward systems that can interpret physical environments well enough to do meaningful work under real safety and reliability constraints.

For CIOs, the bigger story is that the next AI layer will not only sit in chat interfaces. It will increasingly become the control layer for physical workflows, and Google is making that strategic move early by offering the model through the Gemini API and Google AI Studio from day one.
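Because the model is available through the Gemini API from day one, a team could prototype the instrument-reading use case with Google's standard `google-genai` Python SDK. A minimal sketch, with two stated assumptions: the model id `gemini-robotics-er-1.6` is a guess (the announcement does not give the exact API identifier), and a `GEMINI_API_KEY` environment variable is configured.

```python
# Hedged sketch: asking an embodied-reasoning model to read a pressure
# gauge from a camera image via the google-genai SDK (pip install google-genai).
# The model id below is an assumption; check Google AI Studio for the real one.
import os


def gauge_prompt(instrument: str, unit: str) -> str:
    """Build a prompt that asks for a strict-JSON reading of an instrument,
    so a robot controller can parse the value without free-text handling."""
    return (
        f"Read the {instrument} in the attached image. "
        f'Respond with JSON only: {{"value": <number>, "unit": "{unit}"}}'
    )


def read_instrument(image_bytes: bytes, instrument: str, unit: str) -> str:
    """Send one image plus the prompt to the model and return its raw reply.
    Only call this with a valid GEMINI_API_KEY in the environment."""
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    resp = client.models.generate_content(
        model="gemini-robotics-er-1.6",  # assumed id, see lead-in
        contents=[
            types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
            gauge_prompt(instrument, unit),
        ],
    )
    return resp.text


if __name__ == "__main__":
    # Show the prompt the robot controller would send.
    print(gauge_prompt("pressure gauge", "bar"))
```

The strict-JSON contract is the design point: a success-detection or control loop needs a machine-parseable value, not prose, which is exactly the gap between a chat interface and a control layer for physical workflows.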

Source: Google DeepMind, "Gemini Robotics-ER 1.6: Powering real-world robotics tasks through enhanced embodied reasoning," published April 14, 2026.

📬 Liked this one?

AI news for leaders. Curated by a CIO who builds it himself. Daily in your inbox.

Related stories

Meta taps AWS Graviton to scale agentic AI · CIO · Infrastructure · 4 min read

DeepSeek opens V4 Preview with 1M context and API compatibility · Breaking · CIO · Open Source · 4 min read