News · May 5, 2026 · 2 min read

AMA asks Congress to regulate mental health chatbots

Medical group warns AI therapy tools lack crisis detection and transparency as adoption outpaces safety oversight.

By Agentic Daily · Verified Source: HR Executive

Our Take

The medical establishment wants regulation before documented harm spreads, but Congress moves slower than AI deployment cycles.

Why it matters

HR teams increasingly offer AI mental health benefits to employees while regulatory frameworks remain undefined. The liability gap widens as usage scales without clear professional oversight standards.

Do this week

HR leaders: audit your employee assistance programs ahead of January board meetings to identify which mental health tools use AI and which rely on human clinicians.

AMA demands AI mental health guardrails from Congress

The American Medical Association sent letters to congressional AI and digital health caucuses requesting immediate oversight of mental health chatbots. The medical group cited risks including misinformation, privacy breaches, and documented cases of harmful responses to users in crisis.

The AMA's specific demands include mandatory transparency when users interact with AI rather than human clinicians, prohibition of chatbots presenting as licensed professionals, and real-time crisis detection systems that route self-harm risks to human support.

AMA CEO John Whyte warned the technology "could erode patient trust" without consistent safeguards, even as adoption accelerates across healthcare settings.

Regulatory vacuum meets growing workplace deployment

The FDA has begun developing AI medical device frameworks, but no comprehensive approach exists for mental health chatbots specifically. The American Psychological Association has raised parallel concerns about accuracy and bias in AI emotional support tools.

Demand continues growing due to provider shortages and care access barriers documented by the National Institute of Mental Health. AI tools increasingly fill gaps in mental healthcare delivery, particularly in employee benefit programs where HR teams face pressure to expand mental health support options.

The timing creates liability exposure: organizations deploying AI mental health tools lack clear regulatory guidance on safety requirements or professional oversight standards.

Map your AI mental health exposure now

HR teams should immediately audit employee assistance programs and mental health benefits to identify which services use AI versus human clinicians. Many vendors embed AI capabilities without prominent disclosure, creating unknown risk profiles.

The AMA framed its recommendations as "a starting point, not a limit" as technology evolves, signaling more stringent requirements ahead. Organizations using AI mental health tools should document current vendor safeguards and crisis intervention protocols before regulatory requirements crystallize.

The medical group emphasized AI should "complement, not replace" clinical care, but stopped short of defining specific professional oversight requirements for workplace mental health programs.

#AI Ethics · #Healthcare AI · #Enterprise AI