Appeals Court Rebuffs Anthropic in AI Battle with Trump Administration

Joachim Høgby
April 9, 2026 · 4 min read

A federal appeals court has refused to block the Pentagon from blacklisting AI laboratory Anthropic, the latest turn in the ongoing legal dispute between the Claude developer and the Trump administration over military AI deployment.

Dual Legal Front

The U.S. Court of Appeals in Washington, D.C., on Wednesday rejected Anthropic's request for an order shielding the San Francisco company from the fallout of the dispute over whether the Pentagon may deploy its Claude chatbot in fully autonomous weapons and in potential surveillance of Americans.

This setback comes even as Anthropic had already prevailed in a separate case on the same issues in San Francisco federal court, where Judge Rita Lin forced President Donald Trump's administration to remove labels tainting the company as a national security risk.

Supply Chain Classification

At the heart of the dispute is the Pentagon's classification of Anthropic as a "supply chain risk" - a designation that could cripple a company locked in a race for AI supremacy against rivals such as ChatGPT maker OpenAI and Google.

The Trump administration has blasted Anthropic as a liberal-leaning company trying to dictate U.S. military policy, while the company asserts the administration is engaging in an "unlawful campaign of retaliation."

Divergent Interpretation

The Washington appeals court conceded that Anthropic would "likely suffer some degree of irreparable harm" if deemed a supply chain risk, but didn't see sufficient reason to issue its own order revoking the Trump administration's actions.

This was partly because "the precise amount of Anthropic's financial harm is not fully clear," according to the court's assessment.

Future Proceedings

Further evidence in the case is scheduled to be presented to the appeals court at a hearing on May 19.

"We're grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful," Anthropic said in a statement.

Broader Implications

Matt Schruers, CEO of the technology trade group Computer & Communications Industry Association, expressed concern that the conflicting court decisions will cloud the business landscape at a pivotal moment for the industry.

"The Pentagon's actions and the DC Circuit's ruling create substantial business uncertainty at a time when U.S. companies are competing with global counterparts to lead in AI," Schruers said.

The case illustrates the growing tension between AI innovation and national security, and how regulation of AI technology in military contexts is becoming an increasingly important policy issue.

