Our Take
Performance metrics won't save the next wave of sepsis AI from the same workflow integration problems that killed Epic's first attempt.
Why it matters
Sepsis kills more than 350,000 Americans each year, yet whether hospitals adopt AI to catch it earlier depends less on algorithmic accuracy than on alert fatigue and physician workflow.
Do this week
Health systems: audit current sepsis alert volumes and physician response rates before evaluating any new AI vendor claims.
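As a starting point for that audit, here is a minimal sketch assuming a CSV export of the alert log. The file name and column names (fired_at, clinician_action) are hypothetical placeholders for whatever fields your EHR's alert report actually provides.

```python
# Sketch of the "audit first" step: summarize sepsis alert volume and
# physician response rate from an alert log export. File and column names
# are assumptions; substitute your EHR's actual export fields.
import pandas as pd

alerts = pd.read_csv("sepsis_alert_log.csv", parse_dates=["fired_at"])

# Alert volume per month: how many sepsis alerts clinicians actually see.
monthly_volume = alerts["fired_at"].dt.to_period("M").value_counts().sort_index()

# Response rate: share of alerts followed by a documented clinician action
# (acknowledged, order placed), versus dismissed or timed out.
responded = alerts["clinician_action"].isin(["acknowledged", "order_placed"])
response_rate = responded.mean()

print(monthly_volume)
print(f"Overall physician response rate: {response_rate:.1%}")
```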
Epic's sepsis AI crashed, new models emerge
Epic's sepsis prediction algorithm, adopted by hundreds of hospitals five years ago, failed spectacularly in real-world deployment. Despite promising results on paper, the system generated so many alerts that physicians ignored them or hospitals disabled the technology entirely.
Now multiple new sepsis detection systems are entering the market. Epic has released a retooled version of its algorithm. Startups are testing models in health systems. One team applies large language models to mine clinical notes for sepsis indicators. On Tuesday, Bayesian Health announced FDA clearance for its sepsis flagging device, which grew out of research at Johns Hopkins.
Sepsis, a life-threatening reaction to infection, kills more than 350,000 people in the United States annually (per STAT reporting).
Alert fatigue trumps algorithmic precision
The Epic failure reveals that technical performance alone cannot drive hospital adoption of clinical AI tools. Even accurate algorithms fail when they overwhelm physicians with alerts or disrupt established workflows.
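A rough back-of-the-envelope calculation shows why: at the low sepsis prevalence of a typical ward, even a model with seemingly solid sensitivity and specificity generates far more false alerts than true ones. The numbers below are illustrative assumptions, not any vendor's published figures.

```python
# Illustrative arithmetic (assumed numbers): how sensitivity and specificity
# translate into alert burden at low prevalence.
def alert_burden(n_patients, prevalence, sensitivity, specificity):
    septic = n_patients * prevalence
    non_septic = n_patients - septic
    true_alerts = sensitivity * septic
    false_alerts = (1 - specificity) * non_septic
    total_alerts = true_alerts + false_alerts
    ppv = true_alerts / total_alerts  # chance any given alert is a real case
    return total_alerts, ppv

# 1,000 ward patients, 5% of whom develop sepsis.
total, ppv = alert_burden(n_patients=1000, prevalence=0.05,
                          sensitivity=0.80, specificity=0.80)
print(f"{total:.0f} alerts fired, only {ppv:.0%} point to an actual sepsis case")
# -> 230 alerts fired, only 17% point to an actual sepsis case
```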
This creates a higher bar for new entrants. They must solve not just the prediction problem but the integration challenge that defeated Epic's first attempt. The question is whether five years of additional experience with clinical AI has prepared health systems to deploy sepsis tools more effectively.
Workflow integration beats accuracy metrics
Health system leaders evaluating sepsis AI should prioritize alert volume management and physician workflow integration over vendor-reported performance statistics. The Epic experience shows that even well-funded, widely adopted systems can fail if they create more clinical burden than value.
Demand demonstration of alert tuning capabilities and physician response rate data from current deployments. Any vendor that leads with sensitivity and specificity numbers without addressing alert fatigue learned nothing from Epic's expensive lesson.
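One way to make "alert tuning" concrete in a vendor demo: ask how alert volume and sensitivity move as the alert threshold changes on their retrospective validation data. The sketch below uses synthetic risk scores and labels purely to illustrate the trade-off a vendor should be able to show with real deployment data.

```python
# Sketch of a threshold sweep: at each alert threshold, how many alerts fire
# per 100 patients, and how many true sepsis cases are still caught?
# Scores and labels here are synthetic stand-ins for validation data.
import numpy as np

rng = np.random.default_rng(0)
labels = rng.random(1000) < 0.05                                      # ~5% true sepsis cases
scores = np.clip(0.3 * labels + rng.normal(0.2, 0.15, 1000), 0, 1)   # toy risk scores

for threshold in (0.3, 0.4, 0.5, 0.6):
    alerts = scores >= threshold
    alerts_per_100 = 100 * alerts.mean()
    sensitivity = alerts[labels].mean() if labels.any() else 0.0
    print(f"threshold {threshold:.1f}: {alerts_per_100:5.1f} alerts/100 patients, "
          f"sensitivity {sensitivity:.0%}")
```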