Google adds crisis and mental health safeguards to Gemini

Joachim Høgby
April 7, 2026 · 4 min read

Google published an official Gemini update on April 7 that adds new mental health and crisis-response safeguards. This is not a new model launch. It is a product safety update focused on how Gemini should respond when conversations suggest a mental health crisis or an urgent need for help.

The headline feature is a new "one-touch" flow for conversations that may indicate suicide or self-harm risk. When Gemini detects a potentially acute situation, users will see a simplified persistent interface that makes it easier to call, chat, text, or visit crisis hotline resources directly.

Google also says Gemini now surfaces a redesigned "Help is available" module when a user may need mental health information or support. The company says the experience was developed with clinical experts, and that Gemini has been trained to encourage help-seeking, avoid validating harmful behavior, and avoid reinforcing false beliefs.

Google paired the product update with a broader support commitment. Google.org said it will provide $30 million over three years to global hotlines, and the company is expanding its ReflexAI partnership with $4 million in direct funding plus Gemini integration into training tools for social-sector organizations.

Why it matters: This is another sign that AI safety is moving from broad policy language into explicit product controls. For CIOs and product leaders, the takeaway is straightforward: if AI is used in sensitive workflows, it is no longer enough to rely on general policy claims. Vendors need concrete escalation paths, human fallback, and auditable safety behavior inside the actual interface.
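
To make that concrete, here is a minimal sketch of what an escalation path with human fallback and an audit trail can look like at the application layer. This is not Google's implementation: the keyword-based classifier, the resource strings, and the notify_human_reviewer hook are all hypothetical placeholders standing in for a clinically validated classifier and a real on-call process.

import json
import time
from dataclasses import dataclass
from typing import Callable

# All names below are hypothetical placeholders, not a vendor API.
CRISIS_RESOURCES = (
    "Help is available right now. You can call, text, or chat "
    "with a crisis hotline directly from this screen."
)
HELP_MODULE = "Help is available: mental health information and support."

@dataclass
class SafetyDecision:
    risk_level: str  # "none", "elevated", or "acute"
    action: str      # "pass_through", "show_resources", or "crisis_flow"

def classify_risk(message: str) -> SafetyDecision:
    # Stand-in for a clinically validated classifier; keyword matching
    # is for illustration only and would be inadequate in production.
    text = message.lower()
    if any(marker in text for marker in ("end my life", "hurt myself")):
        return SafetyDecision("acute", "crisis_flow")
    if "hopeless" in text:
        return SafetyDecision("elevated", "show_resources")
    return SafetyDecision("none", "pass_through")

def audit_log(event: dict) -> None:
    # Append-only log so safety behavior can be reviewed after the fact.
    with open("safety_audit.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps({"ts": time.time(), **event}) + "\n")

def notify_human_reviewer(message: str) -> None:
    # Human-fallback hook: in production this would page an on-call
    # reviewer or open a case in a support queue.
    print("escalated to human review")

def handle_message(message: str, llm_call: Callable[[str], str]) -> str:
    decision = classify_risk(message)
    audit_log({"risk": decision.risk_level, "action": decision.action})
    if decision.action == "crisis_flow":
        # Escalation path: replace the normal reply with a persistent
        # crisis interface and involve a human.
        notify_human_reviewer(message)
        return CRISIS_RESOURCES
    if decision.action == "show_resources":
        # Surface a "Help is available" module alongside the reply.
        return llm_call(message) + "\n\n" + HELP_MODULE
    return llm_call(message)

The point of the sketch is the shape, not the keyword matching: risk classification, escalation, and logging sit in the application layer, where they can be tested and audited, rather than existing only in vendor policy documents.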

Original source: Google's blog post "An update on our mental health work," published April 7, 2026.
