Google DeepMind Launches Project Genie — AI That Generates Interactive Worlds
Google DeepMind has made Project Genie available in beta to Google AI Ultra subscribers in the US. Powered by the Genie 3 model, the system allows users to create, explore, and remix interactive environments from simple text descriptions or images.
What is Project Genie?
Project Genie is a "world model" — an AI model that doesn't just generate static content, but creates fully interactive, playable environments in real time. Users can describe a scene, and Genie builds a world they can navigate and modify as it renders.
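Conceptually, a world model runs an interaction loop: it conditions on a prompt and the user's actions so far, and emits the next frame of the environment. The sketch below is purely illustrative — Project Genie exposes no public API, and every name here (`WorldModel`, `step`, etc.) is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """Toy stand-in for a generative world model.

    A real system would render video frames conditioned on the prompt
    and the full action history; here we only track the trajectory to
    show the shape of the interaction loop.
    """
    prompt: str
    history: list = field(default_factory=list)

    def step(self, action: str) -> str:
        # Record the action and return a placeholder "frame" description.
        self.history.append(action)
        return f"frame {len(self.history)}: '{self.prompt}' after {action}"

# The user describes a scene once, then steers through it action by action.
world = WorldModel(prompt="a foggy coastal village")
for action in ["move_forward", "turn_left", "move_forward"]:
    frame = world.step(action)
```

The key difference from a video generator is that the output at each step depends on user input, not just on the prompt — which is why consistency over several minutes of interaction (a Genie 3 milestone) matters so much.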
The Genie timeline:
- Genie 1 (2024): Generated 2D games from screenshots
- Genie 2 (December 2024): Extended to 3D environments
- Genie 3 (August 2025): Higher resolution, visual consistency over several minutes
- Project Genie (now): Available to end users via AI Ultra
Practical Use Cases
While it's tempting to imagine anyone creating a GTA clone from their couch, the reality is more nuanced. The technology is promising for:
- Training and simulation: Organizations can build interactive training scenarios
- Rapid prototyping: Designers can sketch 3D concepts from plain-language descriptions
- Games and entertainment: Indie developers can iterate faster on world design
- Architecture and visualization: Build interactive room walkthroughs from floor plans
Competition and Context
Google positions Genie as a response to OpenAI's Sora (video) and Meta/Apple's emerging world model projects. Combined with Gemini Robotics (integrated into Boston Dynamics' Atlas), DeepMind is painting a picture of AI that doesn't just understand the world — but simulates and interacts with it.
For CIOs and product leaders, this is an early signal: interactive AI-generated simulation is moving from research demo toward production readiness.
📬 Did you like this?
AI news for leaders. Curated by a CIO who builds with it himself. Daily in your inbox.