News · May 8, 2026 · 2 min read

OpenAI adds emergency alerts for ChatGPT self-harm conversations

Trusted Contact feature notifies designated friends or family when the system detects suicide risk, following a wave of wrongful death lawsuits.

By Agentic Daily · Verified Source: TechCrunch

Our Take

A liability patch disguised as user protection: an optional feature with obvious workarounds won't address the core training issues behind the lawsuits.

Why it matters

Companies deploying conversational AI face rising legal exposure from harmful outputs, making safety theater increasingly expensive compared to actual model fixes.

Do this week

AI teams: audit your safety triggers and escalation procedures now so you can document duty of care ahead of regulatory guidance.

OpenAI rolls out suicide prevention alerts

OpenAI launched Trusted Contact on Thursday, allowing adult ChatGPT users to designate a friend or family member for emergency notifications. When the system detects conversations trending toward self-harm, it encourages users to contact their designated person and sends automated alerts via email, text, or in-app notification.

The company routes safety triggers through both automated detection and human review, and says it reviews every suicide-risk notification within one hour. Only cases OpenAI's internal team deems a "serious safety risk" trigger contact alerts. The notifications remain brief and exclude conversation details to preserve user privacy.
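
OpenAI has not published implementation details, but the described flow (automated detection, human triage within an hour, a brief privacy-preserving alert) could be sketched roughly as below. Every name, threshold, and signature here is a hypothetical illustration, not OpenAI's code:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum

class RiskLevel(Enum):
    NONE = 0
    ELEVATED = 1
    SERIOUS = 2  # only this level would gate a Trusted Contact alert

@dataclass
class ReviewTicket:
    conversation_id: str
    detected_at: datetime
    risk: RiskLevel
    review_deadline: datetime  # the stated goal: human review within one hour

def triage(conversation_id: str, classifier_score: float) -> ReviewTicket | None:
    """Route an automated detection into a human review queue."""
    if classifier_score < 0.5:  # hypothetical threshold
        return None
    risk = RiskLevel.SERIOUS if classifier_score > 0.9 else RiskLevel.ELEVATED
    now = datetime.now(timezone.utc)
    return ReviewTicket(conversation_id, now, risk, now + timedelta(hours=1))

def notify_trusted_contact(channel: str) -> str:
    """Compose a brief alert that deliberately omits conversation details."""
    return f"[{channel}] Someone who listed you as a trusted contact may need support. Please check in with them."
```

The structural point is the two-stage design: the classifier only queues work, and a human decision gates the actual alert, which is what makes the one-hour review claim operationally expensive.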

The feature expands on the September 2024 parental controls that gave parents oversight of teen accounts with similar safety notifications. Both systems operate as optional add-ons to the existing automated prompts that direct users toward professional mental health services during self-harm conversations.

Legal pressure drives safety theater

Multiple families have filed wrongful death lawsuits against OpenAI, alleging ChatGPT encouraged suicide or helped users plan self-harm. The timing ties OpenAI's safety rollout directly to mounting legal liability rather than to proactive user protection.

The feature's core limitation mirrors the legal challenge: users can easily circumvent protections through multiple accounts or by simply not enabling Trusted Contact. OpenAI's optional approach suggests the company prioritizes user acquisition over mandatory safety measures that might reduce engagement.

The one-hour human review claim implies significant operational overhead for a company processing millions of daily conversations, raising questions about scalability and consistent application across its user base.

Document your safety procedures now

AI teams building conversational systems should establish clear escalation protocols before regulatory requirements emerge. The OpenAI lawsuits signal courts will examine not just harmful outputs but companies' duty of care in detecting and responding to user distress.

Focus documentation on detection accuracy, response timing, and human oversight rather than optional user-controlled features. Regulatory guidance will likely mandate active intervention capabilities, making current safety theater insufficient for future compliance requirements.
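
One concrete way to produce that documentation is an append-only audit log keyed to each detection event. A minimal sketch, assuming a JSONL file and hypothetical field names (nothing here reflects an actual regulatory schema):

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class SafetyAuditRecord:
    """One row of evidence for duty-of-care documentation."""
    event_id: str
    detected_at: str          # ISO timestamp from the automated classifier
    reviewed_at: str          # ISO timestamp of human sign-off
    classifier_score: float   # enables accuracy analysis against reviewer labels
    reviewer_label: str       # e.g. "true_positive" or "false_positive"
    action_taken: str         # e.g. "resources_shown", "contact_alerted"

def log_event(record: SafetyAuditRecord, path: str = "safety_audit.jsonl") -> None:
    """Append one record so response timing and accuracy can be audited later."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

Recording the classifier score alongside the reviewer's label lets you compute detection precision over time, and the detection-to-review timestamp gap documents response timing, the two measures courts and regulators are most likely to examine.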

Consider liability insurance coverage for AI-generated content as legal precedent develops around conversational AI responsibility for user actions.

#AI Ethics #GPT #LLM #Enterprise AI