News · May 4, 2026 · 3 min read

Google ships Gemini robot models as physical AI governance lags

Google DeepMind released production robotics models while enterprise governance frameworks struggle to catch up with autonomous physical systems.

By Agentic Daily · Verified Source: AI News

Our Take

Google is shipping robot-ready AI models faster than most enterprises can govern them, creating a gap between capability and control in physical systems.

Why it matters

Physical AI systems can cause real-world damage when they fail, but only one-third of organizations have mature governance for autonomous AI systems (per McKinsey research).

Do this week

Security teams: audit your physical AI pilot programs before month-end to identify ungoverned autonomous decision points.

Google ships Gemini Robotics models for production use

Google DeepMind launched Gemini Robotics and Gemini Robotics-ER in March 2025, both built on Gemini 2.0 for direct robot control. Gemini Robotics handles vision-language-action tasks, while Gemini Robotics-ER focuses on spatial reasoning and task planning. The company demonstrated the models folding paper, packing items, and handling objects not seen during training.

The updated Gemini Robotics-ER 1.6, released in April 2026, added spatial logic, task planning, and success detection capabilities. Google made it available through the Gemini API in preview, bringing the models closer to developers building autonomous applications. The company partnered with Apptronik on humanoid robots and listed Agile Robots, Boston Dynamics, and others as trusted testers.

Industrial robot installations reached 542,000 units worldwide in 2024 (per International Federation of Robotics), more than double the annual total of a decade earlier. Market researchers project the Physical AI category will grow from $81.64 billion in 2025 to $960.38 billion by 2033 (per Grand View Research), though definitions of "intelligence" in physical systems vary widely.

Governance frameworks cannot keep pace with physical deployment

Physical AI creates different risks than software-only automation because model outputs become robot movements, machine instructions, or decisions based on sensor data. When these systems fail, they can cause physical damage to equipment, infrastructure, or people. Safety controls must account for both AI model behavior and mechanical system limits.

Only about one-third of organizations report maturity levels of three or higher in agentic AI governance (per McKinsey 2026 research), even as AI systems take on more autonomous functions. The gap widens for physical systems, which require additional controls for collision avoidance, force limits, and contextual safety assessment.

Google DeepMind introduced ASIMOV, a dataset for testing semantic safety in robotics, acknowledging that current frameworks struggle with physical AI governance. Existing standards like NIST AI Risk Management Framework and ISO/IEC 42001 provide general structures but lack specific guidance for autonomous physical systems.

Audit physical AI systems before they reach production

Organizations deploying physical AI need to define clear boundaries: what data systems can access, which tools they can use, what actions require human approval, and how all activity gets logged. These controls become more complex when AI agents can call tools, generate code, or trigger physical actions.
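These boundaries can be sketched as a policy gate between the agent and its tools. The following is a minimal illustrative sketch, not a real framework API; the names (`ActionPolicy`, `authorize`) and the allow/approve/deny verdicts are assumptions for this example.

```python
from dataclasses import dataclass, field

# Hypothetical action-boundary policy for a physical AI agent.
# Every decision is logged, and sensitive actions are routed to a human.

@dataclass
class ActionPolicy:
    allowed_tools: set = field(default_factory=set)      # tools the agent may call
    approval_required: set = field(default_factory=set)  # actions needing human sign-off
    log: list = field(default_factory=list)              # audit log of every decision

    def authorize(self, tool: str, action: str) -> str:
        """Return 'allow', 'needs_approval', or 'deny' and record the decision."""
        if tool not in self.allowed_tools:
            verdict = "deny"
        elif action in self.approval_required:
            verdict = "needs_approval"
        else:
            verdict = "allow"
        self.log.append((tool, action, verdict))
        return verdict

policy = ActionPolicy(
    allowed_tools={"gripper", "camera"},
    approval_required={"apply_force", "move_near_human"},
)
print(policy.authorize("gripper", "pick_item"))    # allow
print(policy.authorize("gripper", "apply_force"))  # needs_approval
print(policy.authorize("welder", "ignite"))        # deny
```

The point of the sketch is that the gate, not the model, decides which outputs become physical actions, and that the log exists regardless of the verdict.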

Focus on three critical areas: escalation paths for when systems encounter unexpected conditions, testing procedures that cover both model behavior and physical safety limits, and audit trails that track decisions from model output to physical action. Industrial settings require additional controls for equipment limits and environmental conditions.
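An audit trail of this kind can be as simple as one structured record per action, linking the model output to the physical command and the safety verdict. This is an illustrative sketch only; the field names are assumptions, not a standard schema.

```python
import json
import time
import uuid

# Illustrative audit record: each physical action is traceable
# back to the model output that triggered it. Field names are
# hypothetical, not drawn from any published framework.

def audit_record(model_output: str, planned_action: str, safety_check: str) -> dict:
    return {
        "trace_id": str(uuid.uuid4()),        # unique ID linking the chain
        "timestamp": time.time(),             # when the decision was made
        "model_output": model_output,         # raw decision from the model
        "planned_action": planned_action,     # mapped physical command
        "safety_check": safety_check,         # e.g. pass / escalate / abort
    }

rec = audit_record("pick the red box", "arm.move(x=0.3, y=0.1)", "pass")
print(json.dumps(rec, indent=2))
```

Records like this cover the third area above directly, and the `safety_check` field gives escalation paths a place to surface in the same trail.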

Google's partnership approach with established robotics companies provides a model for managing deployment risk, but most organizations lack similar testing partnerships. Start with limited pilots that include clear stop conditions and human oversight before expanding autonomous capabilities.

#Gemini #Agents #Enterprise AI #AI Ethics