Our Take
The parallel between substance dependence and AI reliance is clinically grounded, but it lacks quantified thresholds for when helpful use becomes harmful.
Why it matters
AI adoption happens gradually through relief-seeking, which makes dependence harder to recognize than social media addiction's external reward loops. Organizations deploying AI tools need frameworks that preserve human cognitive development.
Do this week
Team leads: audit which cognitive tasks your team now defaults to AI for, document the specific discomforts AI relieves, and establish boundaries for tasks that build core competencies.
Addiction psychiatrist maps AI dependence patterns
Dr. Jonathan Avery, vice chair for addiction psychiatry at Weill Cornell Medicine, documented a progression in AI use that mirrors early-stage substance dependence. Students described starting with grammar assistance, then moving to idea clarification, outline generation, and eventually using AI to prepare for conversations and decisions.
The pattern follows a familiar medical sequence: relief before harm. Students reported feeling "uneasy about how much they relied on it" and wanted to reduce usage but "found themselves returning to it anyway." This matches what Avery calls a clinical warning sign: doubting your ability to function without help.
Unlike social media addiction, which exploits external social rewards, AI operates internally by organizing thoughts, resolving uncertainty, and reducing cognitive strain. This addresses the discomfort of blank pages, uncertain decisions, and difficult conversations - moments that are frustrating but essential for developing competence.
Cognitive outsourcing has neurological precedent
Neuroscience shows unused abilities diminish over time. GPS reduced navigation skills; calculators weakened mental math. AI extends this dynamic into judgment, creativity, and communication - more intimate cognitive territory.
The medical framework applies: addiction risk increases when something becomes the primary method for managing discomfort, whether emotional or cognitive. The discomfort AI relieves is subtle but fundamental to skill development. Writing develops knowledge through forced clarity. Decision-making strengthens judgment. Conversation builds emotional awareness.
Avery uses AI daily, including for editing, but distinguishes between augmentation and replacement. The technology provides genuine benefits: accessing unaffordable expertise, helping patients understand medical information, assisting with complex data processing.
Set protective boundaries, not moral rules
Neuroscientist Tim Requarth, cited in Avery's analysis, established personal boundaries: avoiding AI for early drafts, because struggle aids thinking, and limiting use when tired or stressed, when self-monitoring is weaker.
These function as protective measures rather than moral restrictions. In addiction treatment, similar boundaries preserve autonomy without requiring complete elimination. The goal is preventing the gradual shift from optional use to psychological reliance.
History suggests the most powerful technologies become dangerous not through obvious harm, but by working so effectively that we stop noticing what they replace. Recognition requires monitoring when AI shifts from helpful tool to primary cognitive crutch.