Catenaa, Monday, March 30 - A federal judge has blocked the U.S. Department of Defense from labeling Anthropic a national security risk, ruling the action likely violated constitutional protections and restoring the company’s standing with federal contractors.
U.S. District Judge Rita Lin issued a preliminary injunction after finding the government’s designation lacked statutory support and appeared to punish the company for its policy stance. The ruling halts enforcement of the designation and requires the government to reverse related actions while the case proceeds.
The dispute stems from a $200 million artificial intelligence contract awarded in 2025. Talks collapsed after Anthropic refused to allow its Claude model to be used for mass surveillance of Americans or for lethal autonomous weapons. Defense officials warned the company it would face consequences if it did not remove those restrictions.
Anthropic declined, and federal agencies moved to cut ties. A directive soon followed ordering agencies to stop using the company’s technology, alongside a formal designation labeling it a supply chain risk.
Judge Lin said the government’s actions raised serious First Amendment and due process concerns, noting that the designation had historically been reserved for foreign adversaries, not domestic firms.
The case reflects growing tension between national security priorities and private sector AI governance. Companies like OpenAI and Google have introduced internal policies to limit high-risk uses of advanced models, particularly in surveillance and military applications.
Anthropic has positioned itself as a safety-focused developer, setting strict boundaries on how its systems can be deployed. These policies have become a point of friction as governments seek broader access to advanced AI tools.
The ruling could influence how future government contracts with AI firms are structured. Legal experts say it reinforces the ability of companies to impose usage restrictions without facing punitive classification or exclusion.
The decision may also shape regulatory approaches to AI procurement, especially as governments seek to balance innovation with civil liberties protections. It signals that national security arguments may face judicial scrutiny if they appear tied to retaliation or policy disagreement.
Defense contractors that had paused or ended relationships with Anthropic may now reconsider those decisions following the injunction.
Legal analysts say the ruling sets an early precedent for disputes between AI developers and governments over acceptable use. It suggests courts may act as a check when executive actions extend beyond established legal authority.
Policy specialists note the case highlights the need for clearer frameworks governing AI deployment in defense settings. Without defined standards, conflicts between safety commitments and national security demands are likely to intensify.
Supply chain risk designations have traditionally been applied to foreign entities linked to espionage or cyber threats. Applying one to a domestic AI firm marks a departure that drew scrutiny from legal observers.
Anthropic filed suit earlier this month, arguing that the government’s actions damaged its business and reputation without due process. The company said its refusal to modify safeguards was based on concerns about the safe and ethical use of advanced AI systems.
The court’s order temporarily restores conditions prior to the designation and requires a compliance report within weeks. Further proceedings are expected to determine whether the injunction becomes permanent.
The case arrives as artificial intelligence plays a growing role in defense, intelligence and public sector systems, raising complex questions about oversight, accountability and the limits of government authority.
