Our Take
One clinician's personal journey offers a template for responsible AI adoption, but it lacks the outcome metrics needed to prove patients actually benefit.
Why it matters
Healthcare AI adoption often happens without clear clinical governance frameworks, making practitioner-led oversight models worth studying before scaling.
Do this week
Healthcare leaders: audit your current AI tools for escalation paths and bias testing protocols before next quarter's deployment decisions.
Physical therapist shifts from AI skeptic to adopter
Steven Griffin, a licensed physical therapist and Senior Manager of Clinical Navigation at TailorCare, documented his transition from AI skeptic to routine user in musculoskeletal (MSK) care. Griffin now uses AI-powered tools for real-time support during patient calls and for pattern identification across conversations.
Griffin's adoption came with specific conditions: human oversight in design and deployment, clear escalation paths, regular audits, population-based bias testing, and privacy protections. He cited concerns about human decision-making being sidelined and the complexity of MSK care, where "two people can present with the same diagnosis and still need a very different next step."
Griffin sees AI tools currently supporting documentation, motion tracking in virtual PT platforms, and decision support for care teams. Nearly 1 in 2 U.S. adults lives with an MSK condition at any given time, according to the article, making the field a significant target for AI deployment.
Template for clinical AI governance emerges
Griffin's framework addresses a gap in healthcare AI deployment: practitioner-led oversight models that balance efficiency gains with clinical accountability. His emphasis on "non-negotiable rules" reflects broader concerns about AI adoption pace in healthcare settings.
The MSK care context matters because outcomes depend heavily on rapport and clinical judgment. Griffin notes that administrative work pulls clinician attention away from patients, creating cognitive load that AI tools might address without replacing human connection in "high-stakes or complex moments."
Payment models increasingly tied to outcomes and total cost of care create pressure for earlier intervention, where AI pattern recognition could surface coaching opportunities Griffin's team might otherwise miss.
Governance framework before deployment
Healthcare organizations can adapt Griffin's oversight requirements: human supervision in initial design, ongoing monitoring to confirm tools work as intended, clear escalation protocols for when AI recommendations need human review, and regular audits that include bias testing across patient populations.
Griffin's experience suggests focusing AI tools on cognitive load reduction rather than clinical decision replacement. Real-time suggestions during patient interactions and communication optimization based on individual learning styles represent areas where AI supports rather than supplants clinical expertise.
The key question shifts from whether AI belongs in healthcare to where it fits without eroding patient trust. Griffin's model requires proving AI helps clinicians "intervene earlier, communicate better, and make the process easier" while maintaining the human connection that builds rapport with patients in pain.