News · May 12, 2026 · 2 min read

Fake OpenAI model on Hugging Face spread malware through 244k downloads

Attackers used AI model repositories to distribute malware, exploiting trust in the ML development ecosystem.

By Agentic Daily · Verified Source: AI News

Our Take

Threats in AI repositories bypass traditional security scanning because these repos mix model weights with executable code that legacy tools miss.

Why it matters

Development teams treating AI repos as trusted sources face new attack vectors that existing security controls don't catch. The problem extends beyond one incident to systematic exploitation of ML workflows.

Do this week

Security teams: audit AI repository downloads from the past 90 days and implement code scanning for all model dependencies before Friday.
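A minimal sketch of that audit, using only the Python standard library. It assumes models were pulled into the default huggingface_hub local cache; the 90-day window matches the recommendation above, but the cache path and the set of risky file extensions are illustrative assumptions to adapt to your environment.

```python
import os
import time
from pathlib import Path

# Illustrative: file types that carry executable or deserializable code
# alongside model weights. Extend the set to match your environment.
RISKY_SUFFIXES = {".py", ".ipynb", ".sh", ".bat", ".pkl", ".pickle", ".bin"}

# Default huggingface_hub home; adjust if your org redirects the cache.
CACHE_DIR = Path(os.environ.get("HF_HOME", Path.home() / ".cache" / "huggingface"))

cutoff = time.time() - 90 * 24 * 3600  # downloads from the past 90 days

for path in CACHE_DIR.rglob("*"):
    if not path.is_file():
        continue
    stat = path.stat()
    if stat.st_mtime < cutoff:
        continue
    if path.suffix.lower() in RISKY_SUFFIXES:
        modified = time.strftime("%Y-%m-%d", time.localtime(stat.st_mtime))
        print(f"REVIEW {path}  ({stat.st_size} bytes, modified {modified})")
```

Each flagged file is a candidate for manual review or automated scanning; weight-only formats such as safetensors would not trip the filter.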

Malware posed as legitimate OpenAI model

A malicious repository on Hugging Face masqueraded as an official OpenAI release and delivered infostealer malware to Windows machines before the repository was taken down, according to HiddenLayer research. The fake model recorded approximately 244,000 downloads, though HiddenLayer notes attackers may have artificially inflated this number to boost credibility.

HiddenLayer identified six additional Hugging Face repositories containing nearly identical malicious loader logic that shared infrastructure with the primary attack. The malware targeted peripheral elements around AI models rather than the models themselves: executable code, setup scripts, dependency files, and installation notebooks that developers routinely run when implementing new models.
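The loader-level risk is concrete in Python's ML stack: pickle-based weight files can trigger arbitrary imports when loaded. Below is a minimal sketch of the kind of check dedicated pickle scanners perform, using only the standard library; the list of flagged modules is an illustrative assumption, not HiddenLayer's detection logic.

```python
import pickletools
import sys

# Modules whose appearance in a pickle's import opcodes is a red flag:
# legitimate weight files reference tensor types, not process or network APIs.
SUSPICIOUS = {"os", "subprocess", "socket", "builtins", "runpy", "shutil"}

def scan_pickle(path: str) -> list[str]:
    """Return suspicious module references found in a pickle stream."""
    hits = []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        # GLOBAL opcodes name objects the pickle will import on load.
        # (Protocol-4 STACK_GLOBAL pulls names from the stack, so a full
        # scanner also tracks the preceding string opcodes.)
        if opcode.name in ("GLOBAL", "INST"):
            module = str(arg).split(" ")[0]
            if module.split(".")[0] in SUSPICIOUS:
                hits.append(str(arg))
    return hits

if __name__ == "__main__":
    for finding in scan_pickle(sys.argv[1]):
        print(f"SUSPICIOUS IMPORT: {finding}")
```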

Traditional security tools miss AI repository threats

This attack represents a broader shift toward exploiting AI development workflows as entry points into secure environments. Unlike traditional software supply chain attacks, AI repositories blend trusted model weights with executable components that bypass conventional security scanning.

Sakshi Grover from IDC noted that traditional software composition analysis tools inspect dependency manifests and container images but fail to identify malicious loader logic embedded in AI repositories. The attack pattern has appeared repeatedly: HiddenLayer previously documented poisoned AI SDKs and fake OpenClaw installers using similar techniques.

IDC's November 2024 FutureScape report projects that by 2027, 60% of agentic AI systems will require a bill of materials to track AI artifacts, their sources, approved versions, and executable components.
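IDC does not prescribe a schema, but a minimal AI bill-of-materials record might capture the fields the projection names. The structure below is an illustrative sketch, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class AIBOMEntry:
    """One tracked AI artifact: its source, approved version,
    and executable components, per the fields IDC's projection names."""
    artifact: str                  # e.g. a Hugging Face repo id
    source: str                    # registry or publisher of record
    approved_version: str          # pinned revision or commit hash
    executable_components: list[str] = field(default_factory=list)
    approved: bool = False

# Hypothetical example entry for a vetted model download.
entry = AIBOMEntry(
    artifact="org-name/model-name",    # placeholder repo id
    source="huggingface.co",
    approved_version="abc123def",      # pin to an audited commit
    executable_components=["setup.py", "load_model.ipynb"],
    approved=True,
)
print(entry)
```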

Scan AI dependencies like any other code

Development teams need to treat AI model repositories as potential attack vectors rather than trusted resources. The malicious code lives in setup scripts and loaders, not in model weights, making it detectable with appropriate tooling.

Organizations should implement code scanning for all AI repository downloads, not just the models themselves. This includes dependency files, installation scripts, and any executable components that accompany model releases. Teams should also maintain an inventory of AI artifacts in production environments, tracking their sources and approval status.

The attack demonstrates why AI model procurement needs the same security rigor as traditional software acquisition, with verification of publishers and scanning of all repository contents before deployment.
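A sketch of that verification step, assuming the huggingface_hub client library is available; the publisher allowlist, the risky-extension set, and the example repo id are assumptions for illustration, not a vetted policy.

```python
from huggingface_hub import model_info  # pip install huggingface_hub

# Illustrative allowlist: publishers your organization has vetted.
APPROVED_PUBLISHERS = {"openai", "meta-llama", "google"}
RISKY_SUFFIXES = (".py", ".ipynb", ".sh", ".pkl", ".pickle")

def vet_repo(repo_id: str) -> bool:
    """Check the publisher against an allowlist and flag executable
    files before anything from the repo is downloaded or run."""
    info = model_info(repo_id)
    publisher = repo_id.split("/")[0]
    if publisher not in APPROVED_PUBLISHERS:
        print(f"BLOCK {repo_id}: publisher '{publisher}' not on allowlist")
        return False
    flagged = [s.rfilename for s in info.siblings
               if s.rfilename.endswith(RISKY_SUFFIXES)]
    for name in flagged:
        print(f"REVIEW {repo_id}/{name}: executable or deserializable content")
    return not flagged

if __name__ == "__main__":
    vet_repo("openai/whisper-large-v3")  # example repo id, for illustration
```

Gating downloads on a check like this turns publisher verification into a pipeline step rather than a manual judgment call.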

#AI Ethics · #Developer Tools · #Enterprise AI · #Open Source