News · May 8, 2026 · 2 min read

OpenAI launches GPT-5.5-Cyber for critical infrastructure defense

OpenAI expands Trusted Access for Cyber with GPT-5.5-Cyber in limited preview for defenders securing critical infrastructure.

By Agentic Daily · Verified Source: OpenAI

Our Take

OpenAI admits GPT-5.5-Cyber isn't more capable than GPT-5.5, just more permissive on security tasks that normally trigger refusals.

Why it matters

Critical infrastructure defenders need AI that can handle red teaming and penetration testing workflows without safety refusals blocking authorized security work.

Do this week

Security teams: Apply for Trusted Access for Cyber now to reduce classifier refusals on vulnerability research and malware analysis.

OpenAI splits cyber AI into two tiers

OpenAI expanded Trusted Access for Cyber with GPT-5.5-Cyber, a limited preview model for defenders responsible for critical infrastructure. The new model joins GPT-5.5 with Trusted Access for Cyber (TAC) in a tiered framework where access level determines how often safety classifiers block security-related requests.

GPT-5.5 with TAC handles most defensive workflows including vulnerability triage, malware analysis, and detection engineering while blocking malicious activities like credential theft and malware deployment. GPT-5.5-Cyber goes further, supporting red teaming and penetration testing that require more permissive behavior. Starting June 1, 2026, users accessing the most permissive models must enable Advanced Account Security (company-reported).

The company provided examples showing the differences: while GPT-5.5 with TAC will create proof-of-concept exploits for published vulnerabilities, GPT-5.5-Cyber will execute those exploits against live targets in controlled environments. OpenAI states the initial preview "is not intended to significantly increase cyber capability beyond GPT-5.5" but rather enable workflows blocked by safety refusals.

Critical infrastructure gets priority access

OpenAI is betting that different security workflows need different safety thresholds. Most security teams hit refusal walls when legitimate defensive work triggers safety classifiers trained to block potentially harmful requests. The tiered approach attempts to solve this by matching access levels to verified use cases rather than blanket restrictions.

The focus on critical infrastructure reflects growing pressure to defend power grids, water systems, and transportation networks against state-sponsored attackers. Traditional security tools often lag behind threats, and defenders need AI that can keep pace with adversaries already using similar models for offensive operations.

Partners including Cisco, Intel, SentinelOne, and Snyk are integrating these capabilities into enterprise security workflows. Cisco's Chief Security Officer Anthony Grieco noted that "frontier models are fundamentally changing the velocity of our operations" while emphasizing that "speed cannot be traded for trust."

Access requires verification and stronger controls

Security teams wanting GPT-5.5 with TAC must pass OpenAI's verification process for defensive work in authorized environments. The process involves identity checks and use case validation, though OpenAI doesn't specify a timeline or approval criteria. GPT-5.5-Cyber requires additional verification and monitoring.

The company frames this as a "security flywheel": researchers disclose vulnerabilities with proof-of-concept exploits, supply chain tools prevent vulnerable code from reaching production, and detection systems identify exploitation attempts. Network providers can deploy WAF rules and edge mitigations while patches roll out.
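The "virtual patching" step of that flywheel is standard WAF practice. As an illustration only (the rule ID, path, and CVE reference below are placeholders, not from OpenAI or any partner), an edge mitigation in ModSecurity syntax might look like:

```
# Hypothetical virtual patch: deny requests probing a vulnerable endpoint
# until the real fix ships. Rule id, path, and message are placeholders.
SecRule REQUEST_URI "@beginsWith /api/legacy-upload" \
    "id:1000101,phase:1,deny,status:403,log,msg:'Virtual patch pending CVE fix'"
```

Rules like this buy defenders time: the block lands at the network edge within minutes, while the underlying patch works through testing and deployment.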

Teams should evaluate whether current security workflows hit refusal barriers with standard models before applying for enhanced access. For most defensive security work, GPT-5.5 with TAC remains the recommended starting point (per OpenAI).
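Measuring that refusal barrier can be as simple as tallying refusals in existing response logs. A minimal sketch, assuming a JSON-lines log with a `response` field and a handful of illustrative refusal phrases (neither the log format nor the phrase list is an OpenAI interface):

```python
# Sketch: estimate how often security prompts hit safety refusals, to judge
# whether applying for enhanced access is worthwhile. The log format and
# refusal markers are assumptions for illustration.
import json

REFUSAL_MARKERS = ("i can't help", "i cannot assist", "unable to help")

def refusal_rate(log_lines):
    """Return the fraction of logged responses that look like refusals."""
    total = refused = 0
    for line in log_lines:
        entry = json.loads(line)  # one JSON object per line
        total += 1
        text = entry["response"].lower()
        if any(marker in text for marker in REFUSAL_MARKERS):
            refused += 1
    return refused / total if total else 0.0
```

If the measured rate on legitimate defensive prompts is negligible, standard GPT-5.5 with TAC likely suffices; a high rate is the concrete evidence worth citing in an access application.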

#LLM #GPT #EnterpriseAI #DeveloperTools