Google DeepMind releases Gemini Robotics-ER 1.6 for more autonomous robots
Google DeepMind announced Gemini Robotics-ER 1.6 on April 14, a new version of its embodied reasoning stack for robots. This is not just a language-model update. The focus is sharper physical reasoning, including better spatial understanding, stronger multi-view perception, and a new ability to read instruments such as pressure gauges and sight glasses.
The launch is notable because it moves AI closer to delivering value in real operations. DeepMind says the model can act as a robot’s high-level reasoning layer: it can call tools such as Google Search, work with vision-language-action models, and judge whether a task has actually been completed. Together with stronger pointing, counting, and success detection, this is the kind of infrastructure robots need in warehouses, industrial environments, and inspection workflows.
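To make that architecture concrete, here is a minimal sketch of the orchestration pattern, assuming the google-genai Python SDK and a placeholder model ID: the embodied-reasoning model proposes the next step from a camera frame and judges when the task is finished, while a lower-level vision-language-action policy does the actual motion. The `vla_execute` and `capture_camera_frame` hooks and the plain-text DONE convention are hypothetical, not part of DeepMind's announcement.

```python
# Minimal sketch of the orchestration pattern: an embodied-reasoning model as
# the high-level planning layer above a low-level VLA control policy.
# Assumptions: the model ID is a placeholder, vla_execute() and
# capture_camera_frame() are hypothetical hooks, and the plain-text "DONE"
# convention is invented here for illustration.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")
MODEL = "gemini-robotics-er-1.6"  # placeholder, not a confirmed model ID


def capture_camera_frame() -> bytes:
    """Hypothetical hook: return the robot's latest camera frame as JPEG bytes."""
    with open("frame.jpg", "rb") as f:
        return f.read()


def vla_execute(instruction: str) -> None:
    """Hypothetical hook: hand a short instruction to a vision-language-action policy."""
    print(f"[VLA] executing: {instruction}")


def run_task(task: str, max_steps: int = 10) -> bool:
    """Ask the reasoning model for the next step, execute it, and let the same
    model judge whether the overall task is complete (success detection)."""
    for _ in range(max_steps):
        frame = types.Part.from_bytes(data=capture_camera_frame(), mime_type="image/jpeg")
        response = client.models.generate_content(
            model=MODEL,
            contents=[
                frame,
                f"Task: {task}. If the task is already complete, reply with the "
                "single word DONE. Otherwise reply with the next short instruction "
                "for the robot.",
            ],
        )
        step = (response.text or "").strip()
        if step.upper() == "DONE":
            return True       # the model judged the task finished
        vla_execute(step)     # delegate the motion to the low-level policy
    return False
```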
The clearest new capability is instrument reading, developed with Boston Dynamics. That is a useful signal for where the market is moving: not toward generic robot demos, but toward systems that can interpret physical environments well enough to do meaningful work under real safety and reliability constraints.
For CIOs, the bigger story is that the next AI layer will not only sit in chat interfaces. It will increasingly become the control layer for physical workflows, and Google is making that strategic move early by offering the model through the Gemini API and Google AI Studio from day one.
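For teams that want a feel for what evaluating this class of model involves, a first experiment against the Gemini API can be as small as the sketch below: send one image of an instrument panel and ask the model to locate and read each gauge. The model ID, the prompt, and the JSON point format (coordinates normalized to 0-1000) are assumptions for illustration; check Google's documentation for the released schema.

```python
# Sketch: a single-image pointing / instrument-reading query via the Gemini API.
# Assumptions: the model ID is a placeholder and the JSON response schema
# (label, point as normalized [y, x], reading) is invented for illustration.
import json

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

with open("panel.jpg", "rb") as f:
    image = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

response = client.models.generate_content(
    model="gemini-robotics-er-1.6",  # placeholder, not a confirmed model ID
    contents=[
        image,
        "Point to every pressure gauge in this image and read its value. "
        'Respond as a JSON list of objects like {"label": "...", '
        '"point": [y, x], "reading": "..."} with coordinates normalized to 0-1000.',
    ],
    # Ask the API to return raw JSON so the response can be parsed directly.
    config=types.GenerateContentConfig(response_mime_type="application/json"),
)

for gauge in json.loads(response.text):
    print(gauge["label"], gauge["point"], gauge["reading"])
```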
Source: Google DeepMind, "Gemini Robotics-ER 1.6: Powering real-world robotics tasks through enhanced embodied reasoning," published April 14, 2026.
📬 Did you like this?
AI news for leaders. Curated by a CIO who builds it himself. Daily in your inbox.