Analysis · April 9, 2026 · 5 min read

The AI Transparency Crisis: Why Model Makers Are Going Dark

The Foundation Model Transparency Index dropped from 58 to 40 points. Over 90% of frontier models come from private companies, and they are keeping training data, costs, and parameter counts secret.

By Agentic Daily · Verified Source: Stanford HAI

Our Take

A 31% drop in transparency scores should alarm anyone building on top of foundation models. You cannot do proper risk management if you do not know what is in the model you are deploying.

Transparency in Freefall

The Foundation Model Transparency Index's average score fell to 40 points, down from 58 the previous year. Companies are increasingly treating model details (training data, dataset sizes, parameter counts, and compute costs) as trade secrets.

Private Company Dominance

Over 90% of notable frontier models released in 2025 came from private companies rather than academic labs. This concentration means the most powerful AI systems are controlled by entities with limited accountability to the public.

Why It Matters

Less transparency means less ability to audit for bias, verify safety claims, or understand failure modes. As these models are deployed in healthcare, finance, and criminal justice, the stakes of opaque systems continue to rise.

Some positive signals: Meta's Llama 4 Scout remains open-weight, Mistral continues to release open models, and the EU AI Act's transparency requirements will force some disclosure by August 2026. But the overall trajectory is toward less openness, not more.

#Transparency · #AI Ethics · #Open Source · #Regulation