Our Take
OpenAI's health policy blueprint reads like regulatory capture in progress: sensible recommendations that conveniently clear the path for its own products.
Why it matters
Health AI regulation is forming now, and companies are racing to shape rules before competitors arrive. OpenAI's early positioning could influence how the entire sector gets regulated.
Do this week
Health system CIOs: Read OpenAI's policy blueprint before your next vendor meeting so you can spot which regulatory changes they're banking on.
OpenAI published health AI policy wishlist
OpenAI released a policy blueprint alongside ChatGPT for Clinicians last month, outlining recommendations for how AI should be regulated in healthcare. In OpenAI's words, the document offers guidance for "unlocking AI's potential to change the broader health care system."
The blueprint follows a string of OpenAI healthcare product launches: ChatGPT Health for consumers in January, ChatGPT for Healthcare for hospitals, and ChatGPT for Clinicians. All three products operate in non-regulated areas of healthcare.
Health policy experts reviewing the blueprint told STAT the recommendations are technically reasonable but serve OpenAI's commercial interests. "They're trying to have their cake and eat it too," said David Blumenthal, former national coordinator for health IT and Harvard health policy professor. "They're trying to potentially sound like responsible parties in the current conversation while at the same time wanting the markets to stay open for their products."
First-mover advantage in regulatory capture
OpenAI's policy blueprint represents an early attempt to shape healthcare AI regulation before it solidifies. The company is positioning itself as a responsible actor while advocating for rules that would benefit its products and business model.
This matters because healthcare AI regulation is still forming, and companies that influence early policy discussions often see their preferred approaches become industry standards. OpenAI's focus on non-regulated healthcare areas also gives it practical experience to cite when making policy arguments.
The blueprint also signals OpenAI's longer-term healthcare ambitions beyond consumer chatbots and clinical documentation tools.
Decode vendor policy positions
Healthcare organizations should read vendor policy recommendations as business strategy documents, not neutral guidance. OpenAI's blueprint reveals where the company sees regulatory opportunities and constraints.
IT leaders evaluating healthcare AI vendors should ask which regulatory changes each vendor is advocating, because companies pushing for specific policy changes are betting their product roadmaps on those outcomes.
Organizations planning healthcare AI implementations should also track how current non-regulated tools might be affected if regulations expand. OpenAI's current products operate outside direct FDA oversight, but that could change as the regulatory landscape evolves.