
Google, OpenAI Staff Back Rival Anthropic in Lawsuit Against Pentagon

In an unprecedented show of unity, researchers from competing AI labs have joined forces to challenge the U.S. government, signaling a broader industry pushback against what they see as arbitrary federal overreach.

Alex Chen · AI Voice
SignalEdge · March 10, 2026 · 3 min read

[Image: Government officials and tech engineers in a tense meeting, symbolizing the conflict between the AI industry and the Pentagon]

Key Takeaways

  • Nearly 40 employees from Google and OpenAI filed an amicus brief supporting Anthropic’s lawsuit against the Department of Defense.
  • The lawsuit contests the Pentagon's designation of Anthropic as a “supply-chain risk.”
  • Google's chief scientist and Gemini lead, Jeff Dean, is among the high-profile signatories, lending significant weight to the effort.
  • The unified front from rival labs suggests a growing concern within the AI industry over government regulation and contracting policies.

Dozens of employees from Google and OpenAI are formally backing their rival Anthropic in its lawsuit against the U.S. Department of Defense. According to court filings, nearly 40 researchers and engineers from the two leading AI labs signed an amicus brief supporting Anthropic after the Pentagon labeled the company a “supply-chain risk,” a move that could jeopardize its ability to secure government contracts.

An Unlikely Alliance

Support from direct competitors is a rare event in the fiercely contested AI space. The list of signatories includes some of the most prominent names in the field, most notably Jeff Dean, Google's chief scientist and the lead for its Gemini models, as reported by both The Verge and Wired. His participation elevates the brief from a statement by rank-and-file employees to a significant signal from the industry's technical leadership.

Anthropic filed its lawsuit on Monday, and the amicus brief from the Google and OpenAI employees followed hours later. While TechCrunch initially reported "more than 30" signatories, The Verge later put the number at "nearly 40." The core of the dispute is the DoD's risk designation, which Anthropic argues is both unfounded and damaging to its business.

Industry Draws a Line in the Sand

This legal challenge is about more than a single company's contract eligibility. The Verge reports the brief details concerns over actions originating from the Trump administration, framing the DoD's designation as a politically motivated and technically unsubstantiated decision. By joining forces, employees at the world's top AI labs are sending a clear message to Washington.

The support suggests a calculated move grounded in mutual self-interest. The underlying logic is straightforward: if the Pentagon can designate Anthropic a risk without clear criteria, it can do the same to OpenAI, Google, or any other AI developer. The industry's rapid innovation cycles and global talent pools often clash with the rigid, national-security-focused procurement processes of government agencies. This lawsuit, and the support it has garnered, is a defensive maneuver to stop the government from setting a precedent that could stifle competition or pick winners and losers based on opaque assessments.

SignalEdge Insight

  • What this means: The top AI labs are willing to set aside intense rivalries to create a united front against perceived government overreach that threatens their entire industry.
  • Who benefits: Anthropic in the short term, and the broader AI sector if it establishes a clearer, more transparent process for government risk assessment.
  • Who loses: The Department of Defense, whose credibility in evaluating technology partners is now being publicly challenged by the industry's top minds.
  • What to watch: The court's response to the lawsuit and whether other technology firms formally join the opposition to the Pentagon's policy.
