News · May 5, 2026 · 2 min read

Musk expert witness warns AGI race threatens AI safety controls

Berkeley professor Stuart Russell testified that the race to artificial general intelligence creates tension between development speed and safety measures.

By Agentic Daily · Verified Source: TechCrunch

Our Take

Russell's testimony exposes the central contradiction: the same people warning about AI risks are funding the arms race they fear.

Why it matters

The case highlights how profit motives and safety concerns clash at the highest levels of AI development. Regulatory discussions now cite these same contradictions as justification for intervention.

Do this week

AI teams: Document your safety protocols and how your corporate structure supports your stated safety mission before regulatory scrutiny intensifies next quarter.

Russell testifies on AGI competition risks

Stuart Russell, a UC Berkeley computer science professor, served as Elon Musk's sole expert witness on AI technology in the OpenAI lawsuit. Russell told jurors that artificial general intelligence development creates inherent tension between speed and safety, citing cybersecurity threats, misalignment problems, and winner-take-all competitive dynamics.

Russell signed the March 2023 open letter calling for a six-month pause on training the most powerful AI systems. Musk signed the same letter while simultaneously launching xAI, his own for-profit AI laboratory. OpenAI's attorneys limited Russell's testimony about existential AI risks and established during cross-examination that he had not directly evaluated OpenAI's corporate structure or specific safety policies.

The case centers on Musk's claim that OpenAI abandoned its charitable AI safety mission for profit. His attorneys cite early emails from OpenAI founders warning about AI risks and positioning the organization as a public-spirited counter to Google DeepMind.

Contradictions drive policy debates

The lawsuit exposes a fundamental contradiction in AI leadership: virtually every OpenAI founder warned about AI risks while building AI as fast as possible and creating for-profit enterprises. OpenAI's founding team feared AGI concentration in a single organization, then sought capital that "ultimately tore the team apart, creating the arms race we know today" (per TechCrunch's analysis).

This dynamic now plays out at the national level. Senator Bernie Sanders pushes for data center construction moratoriums, citing AI fears from Musk, Sam Altman, and Geoffrey Hinton. Critics note the selective use of these leaders' warnings while ignoring their optimistic statements about AI benefits.

OpenAI's early realization that meaningful AI progress required massive compute spending drove its pivot to for-profit investment. Safety concerns paradoxically fueled the capital-seeking that created the very competitive pressures the founders originally sought to avoid.

Document safety-profit alignment now

The case demonstrates how corporate structure changes can be reframed as mission abandonment. Organizations claiming safety priorities while pursuing aggressive growth face scrutiny over whether their governance structures support stated goals or enable competitive pressures to override safety considerations.

Regulatory attention will likely focus on how AI companies balance stated safety commitments with investor returns. The disconnect between public warnings and private actions creates vulnerability in policy discussions where lawmakers cherry-pick statements to support predetermined positions.

As Hodan Omaar from the Center for Data Innovation noted, "it is unclear why the public should discount everything tech billionaires say except when their words can be recruited to fill gaps in a precarious argument." Both sides in the OpenAI case ask courts to take parts of AI leaders' arguments seriously while discounting inconvenient statements.

#AI Ethics · #Legal AI · #Enterprise AI · #LLM