Our Take
A 31% drop in transparency scores should alarm anyone building on top of foundation models. You cannot do proper risk management if you do not know what is in the model you are deploying.
Transparency in Freefall
Average scores on the Foundation Model Transparency Index fell from 58 to 40 points year over year. Companies increasingly treat model details (training data, dataset sizes, parameter counts, and compute costs) as trade secrets.
Private Company Dominance
Over 90% of notable frontier models released in 2025 came from private companies rather than academic labs. This concentration means the most powerful AI systems are controlled by entities with limited public accountability.
Why It Matters
Less transparency means less ability to audit for bias, verify safety claims, or understand failure modes. As these models are deployed in healthcare, finance, and criminal justice, the stakes of opaque systems continue to rise.
Counter-Trends
There are some positive signals: Meta's Llama 4 Scout remains open-weight, Mistral continues to release open models, and the EU AI Act's transparency requirements will force some disclosure by August 2026. But the overall trajectory is toward less openness, not more.