A federal judge pressed the government on Tuesday over its public branding of Anthropic as a national security threat, questioning whether the administration’s actions were properly targeted after a dispute about how the company’s artificial intelligence can be used in warfare and domestic surveillance. The San Francisco hearing made clear the case could reshape how the U.S. evaluates and restricts commercial AI — and a ruling is expected later this week.
U.S. District Judge Rita Lin spent roughly 90 minutes probing Justice Department and Pentagon lawyers about the rationale behind the unusually public move to label the fast-growing Silicon Valley startup a security concern. Anthropic says the designation followed its effort to limit military or surveillance uses of its AI models and amounts to unlawful retaliation.
Lin did not issue a decision on Tuesday. Instead she ordered both sides to submit additional evidence by Wednesday and signaled she would issue a ruling before the week ends.
Anthropic has asked the court for an emergency order to lift what it calls a damaging stigma imposed by the administration; the company has separately sought appellate review in Washington, D.C. Anthropic's lawyers say the public statements and government directives have already harmed the company's reputation and business prospects.
Government attorneys defended the designation as a reasonable response to negotiations they describe as fraught, and argued the executive branch deserves wide deference when weighing national security risks. A Justice Department lawyer told the judge the company showed itself to be an “unreliable partner” in recent talks.
The dispute began after Anthropic tried to restrict how its technology might be used — including limits on deployment in combat systems and tools that could surveil Americans. The White House publicly criticized the company on Feb. 27 and directed federal employees to stop using its services; the Pentagon later set a six-month timeline to remove Anthropic’s systems from some government platforms. Those systems are reportedly integrated into classified tools used in the conflict with Iran.
Judge Lin repeatedly questioned whether the government’s actions were narrowly tailored to genuine security threats, noting that public branding of this sort has typically been reserved for companies tied to foreign adversaries. “It’s not my role to resolve the broader policy debate,” she said, “but I must determine whether the administration acted legally in singling out a U.S. firm.”
Anthropic’s counsel argued the company is suffering “irreparable” harm from the public statements and policy directives — damage that, they said, needs judicial relief now to prevent further erosion of contracts and partnerships. Government counsel countered that national security assessments warrant deference and that the Defense Department will continue to manage its operations without undue influence from vendors.
The case is more than a private contract fight. It touches on the larger, unsettled question of how quickly commercial AI can be limited or barred from government uses, and who gets to decide those boundaries.
Key implications
- Federal procurement: How agencies can vet and remove commercial AI from sensitive systems could change if the court limits executive authority to publicly brand vendors.
- Corporate negotiations: Companies that attempt to place ethical or usage constraints on their products may face new political and contractual risks.
- National security policy: The ruling could set precedent on when a domestic firm is treated like an adversary-linked vendor.
- Privacy and civil liberties: Limits on AI use in surveillance systems are at the center of the dispute, with potential consequences for U.S. citizen protections.
- Market and investment signals: Public government pronouncements can quickly affect partnerships, stock valuations and future deals for AI firms.
Government filings in the case have backed away from an across-the-board ban, but the initial public denunciations and the president’s social-media comments have already prompted concern among Anthropic’s customers and partners. Defense officials say the measures are precautionary and aimed at guarding sensitive systems; Anthropic contends they amount to punitive measures tied to policy disagreements.
Observers say the outcome could influence how other AI developers approach demands from regulators and defense customers, and whether companies will feel empowered to impose ethical limits on the deployment of their models.
Judge Lin’s imminent decision will test the balance between executive latitude in national security matters and the courts’ willingness to police potentially damaging public actions against private technology firms. For now, both sides have been ordered to provide more evidence, and stakeholders across industry and government are watching closely.