Analysis · May 6, 2026 · 2 min read

Frontier firms use 3.5x more AI per worker than typical companies

OpenAI data shows leading enterprises pull ahead through deeper, more complex AI workflows rather than just higher usage volume.

By Agentic Daily · Verified Source: OpenAI

Our Take

The 16x gap in Codex usage between frontier and typical firms reveals the real divide is delegation depth, not chat frequency.

Why it matters

Enterprise AI competitive advantage is crystallizing around workflow depth and agentic delegation, not seat count or basic productivity gains.

Do this week

IT leaders: Audit which teams use advanced tools like Codex most intensively, so you can identify internal frontier practices worth scaling.

Leading firms now use 3.5x more AI intelligence per worker

OpenAI's B2B Signals research reveals frontier enterprises (95th percentile users) now consume 3.5x as much AI intelligence per worker as typical firms, up from 2x in April 2024 (company-reported). The gap stems primarily from usage depth rather than volume: message count explains only 36% of the frontier advantage (per OpenAI analysis).

The clearest indicator of frontier status is advanced tool adoption. Frontier firms send 16x as many Codex messages per worker compared to typical companies (company data). Similar patterns appear across ChatGPT Agent, Apps in ChatGPT, and GPTs, suggesting leading organizations excel at adopting tools for coding, multi-step task delegation, and complex research.

Cisco exemplifies this depth-first approach in production workflows. Using Codex across its enterprise engineering organization, the company cut build times by 20%, saved more than 1,500 engineering hours per month, and increased defect-resolution throughput 10-15x (company-reported results).

Delegation separates leaders from followers

The data indicates a structural shift from AI as enhanced search to AI as work executor. Typical firms use AI to answer questions; frontier firms delegate complex execution. This suggests the competitive moat comes from operational muscle around AI delegation, not tool access.

Function-specific usage patterns reinforce this trend. IT and Security teams concentrate on procedural guidance, Software Development teams show heavy coding usage, and Finance teams focus on analysis and calculation (per OpenAI's usage classification). AI is moving beyond general productivity into core functional responsibilities.

Travelers Insurance demonstrates this evolution with its AI Claim Assistant, which handles first notice of loss, answers policy questions, gathers claim information, and creates claims directly in company systems. The assistant is projected to handle approximately 100,000 calls in its first year (company estimate).

Measure depth, not just deployment

Organizations can advance toward frontier status through five practices: measuring usage depth rather than seat count, building governance that enables production deployment, treating AI enablement as core infrastructure, identifying high-performing internal teams to scale, and shifting from chat assistance to delegated workflows.
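The first of these practices, measuring usage depth rather than seat count, can be sketched as a simple per-team metric. The log schema, team names, and "advanced tool" list below are illustrative assumptions for the sketch, not OpenAI's B2B Signals methodology:

```python
from collections import defaultdict

# Hypothetical usage-log records: (team, tool, message_count, active_users).
# The field names and the ADVANCED_TOOLS set are assumptions for this sketch.
ADVANCED_TOOLS = {"codex", "agent", "gpts"}

records = [
    ("platform-eng", "codex", 4200, 35),
    ("platform-eng", "chat", 1800, 35),
    ("finance", "chat", 900, 20),
    ("finance", "codex", 50, 20),
]

def depth_metrics(records):
    """Per-team messages-per-user and advanced-tool share (a depth proxy)."""
    totals = defaultdict(lambda: {"msgs": 0, "advanced": 0, "users": 0})
    for team, tool, msgs, users in records:
        t = totals[team]
        t["msgs"] += msgs
        t["users"] = max(t["users"], users)  # count each team's users once
        if tool in ADVANCED_TOOLS:
            t["advanced"] += msgs
    return {
        team: {
            "msgs_per_user": round(t["msgs"] / t["users"], 1),
            "advanced_share": round(t["advanced"] / t["msgs"], 2),
        }
        for team, t in totals.items()
    }

metrics = depth_metrics(records)
```

Teams with a high `advanced_share` (here, the engineering team at 0.7 versus finance at 0.05) are candidate internal frontier practices to study and scale, which is a more useful signal than seat count alone.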

The largest task-level advantage appears in education and learning applications, suggesting leading firms use AI not just for work completion but for building employee AI capabilities. This creates a compounding effect where better AI skills enable deeper AI integration.

Multiple entry points exist for AI advancement. Some industries lead in broad ChatGPT adoption, others in Codex usage or API intensity. This means organizations can choose their path: scale access, deepen existing usage, adopt agentic tools, or build AI directly into products and systems.

#Enterprise AI · #Agents · #Developer Tools · #LLM