Security

Trump Halts AI Autonomous Weapons Company Anthropic, Blocks Supply Chain, Google and OpenAI Strike Back

Kim Jong-chan · Security · 2026. 2. 28. 11:54


Anthropic, which assisted in the Venezuela attack, halted the use of "Claude" ahead of the Iran attack, prompting the Trump administration to designate it a "government supply chain risk." In response, 604 Google and OpenAI employees signed a statement opposing the government's action.

The Pentagon–Anthropic dispute stems from a contractual disagreement over the technical details of the AI model and how the military may use it. Thirteen minutes after President Trump ordered all federal agencies, via Truth Social on the 27th, to stop using Anthropic's AI technology, Secretary of Defense Hegseth announced on X that Anthropic had been designated a "supply chain risk to national security," prohibiting any contractor or supplier working with the US military from doing business with the company. The battle between the Pentagon and Anthropic, ostensibly over a $200 million contract for the use of AI in classified systems, came to a head over a Pentagon-imposed deadline of 5:01 p.m. on the 27th for final contract terms.

Secretary Hegseth had announced that if the company failed to reach an agreement by 5:01 p.m. that day, he would invoke the Defense Production Act to force it to make its models available to the military and would designate it a supply chain risk. Anthropic, for its part, demanded assurances from the government over how its AI would be used on the battlefield, insisting that it "not be deployed for mass surveillance of Americans or in autonomous weapons without human intervention."

Hegseth's full statement follows:

This week, Anthropic delivered a masterclass in arrogance and betrayal, a textbook example of how not to do business with the US government or the Department of Defense. Our position has never wavered and never will: the War Department should have full and unrestricted access to Anthropic's models for all legitimate purposes in defense of the Republic.

Instead, @AnthropicAI and its CEO, @DarioAmodei, have chosen duplicity.

Under the hypocritical rhetoric of "effective altruism," they have attempted to coerce the U.S. military into a cowardly act of corporate virtue signaling that prioritizes Silicon Valley ideology over American lives.

Anthropic's flawed altruism will never trump the safety, readiness, or lives of U.S. troops on the battlefield.

Their true objective is clear: to seize veto power over U.S. military operational decisions. That is unacceptable.

As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the fate of our military, not unelected tech executives.

Anthropic's position is fundamentally incompatible with American principles.

Thus, their relationship with the U.S. military and the federal government has been permanently altered.

In accordance with the President's federal directive, I am directing the Department of War to designate Anthropic as a national security supply chain risk, along with directives to cease all use of Anthropic's technology.

Effective immediately, no contractor, supplier, or partner doing business with the U.S. military may engage in any commercial activity with Anthropic.

Anthropic will continue to provide services to the War Department for six months to facilitate a smooth transition to a better, more patriotic service.

America's warfighters will never be held hostage to the ideological whims of Big Tech. This decision is final.

As these remarks demonstrate, the situation has also escalated into a political battle.

The New York Times reported, "The Pentagon wants all contractors to adhere to a single standard that allows the military to use purchased products as it wishes, as long as it complies with the law. But Pentagon officials are also happy to criticize technology companies, particularly those the Trump administration has labeled 'woke.'" For Anthropic, safety became a political issue, and the company's employees enthusiastically backed their CEO's unwavering stance.

In a rare moment of unity across Silicon Valley AI companies, employees at Anthropic's competitors, OpenAI and Google, signed a letter supporting Anthropic's position.

Google's public statement and signatures are as follows:

Title: "The War Department Is Threatening"

The department is threatening to invoke the Defense Production Act to force Anthropic to provide its models to the military and "tailor them to the military's needs," and to classify the company as a "supply chain risk."

All of this is retaliation for Anthropic holding to its red lines, which prohibit the use of its models in domestic mass surveillance and in the autonomous killing of humans without human supervision.

The Pentagon is negotiating with Google and OpenAI to force them to agree to what Anthropic has refused.

They are trying to divide each company, fearing that the other will compromise. This strategy only works if we all don't know each other's positions. This letter serves to build common understanding and solidarity against the pressures of the War Department.

We are employees of Google and OpenAI, the world's leading AI companies.

We urge our leaders to set aside their differences and continue to reject the War Department's current request for permission to conduct domestic mass surveillance and autonomous killings without human supervision.

Signatories:

Google: 520 signers (all current employees)

OpenAI: 84 signers (all current employees)

Each signatory listed their title and role.


In an initial compromise, the Pentagon announced Thursday that it had no intention of using Anthropic's models, which run on classified systems, for either mass surveillance or fully autonomous weapons.

Anthropic rejected the Pentagon's offer, saying the Pentagon's assurance that it would not use the Claude model for these purposes was undermined by the contract's legal language. Anthropic CEO Dario Amodei said in a statement on the 26th, "In narrow cases, we believe that AI can undermine rather than defend democratic values," adding, "Some uses are beyond what is safe and reliable with today's technology."

Dario Amodei's statement reads:

I deeply believe in the existential importance of leveraging AI to defend the United States and other democracies and to defeat authoritarian adversaries.

Therefore, Anthropic has actively worked to deploy models to the War Department and intelligence agencies. We were the first frontier AI company to deploy our models on classified U.S. government networks, the first to deploy to national laboratories, and the first to provide custom models for national security clients.

Claude has been extensively deployed across the War Department and other national security agencies in mission-critical areas such as intelligence analysis, modeling and simulation, operational planning, and cyber operations.

Anthropic has also acted to defend America's AI leadership, even when this ran against the company's short-term interests. We have forfeited hundreds of millions of dollars in revenue by blocking companies linked to the Chinese Communist Party (some designated by the War Department as Chinese military companies) from using our cloud computing, thwarted CCP-sponsored cyberattacks, and advocated for strong export controls on chips to preserve democracies' advantage.

Anthropic understands that military decisions are made by the War Department, not private companies. We have never objected to specific military operations, nor have we sought ad hoc restrictions on the use of our technology.

However, we believe that in limited cases, AI could undermine rather than defend democratic values. Furthermore, some uses fall outside the safe and reliable realm of today's technology. Two use cases have never been included in contracts with the War Department, and we believe they should not be:

Mass domestic surveillance. We support the use of AI for legitimate foreign intelligence and counterintelligence missions. However, the use of these systems for mass domestic surveillance is incompatible with democratic values.

AI-driven mass surveillance poses serious and novel risks to our fundamental freedoms. This surveillance is currently legal because the law hasn't yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans' movements, web browsing, and connections from public sources without a warrant, a practice that intelligence agencies have acknowledged raises privacy concerns and sparked bipartisan opposition in Congress.

Strong AI could automatically assemble this scattered, individually innocuous data into a comprehensive picture of anyone's life, at scale.

Fully autonomous weapons. The partially autonomous weapons used today in Ukraine are essential to the defense of democracy.

Even fully autonomous weapons, which remove humans entirely and automate target selection and engagement, could be crucial to national defense.

However, today's cutting-edge AI systems are not sufficiently trustworthy to power fully autonomous weapons. We will not knowingly provide products that endanger American warfighters and civilians.

We offered to work directly with the War Department to improve the reliability of these systems, but they declined. Moreover, without proper oversight, fully autonomous weapons cannot exercise the critical judgment our highly trained professional soldiers demonstrate every day. They must be deployed with appropriate guardrails, which do not exist today.

To our knowledge, these two exceptions have not hindered the accelerated adoption and use of the model within our military.

The War Department has stated that it will contract only with AI companies that permit any "lawful use" and lift their safeguards in the cases above.

They have threatened to remove us from their systems if we maintain these safeguards.

They also threatened to designate us a "supply chain risk," a designation that has never been applied to American adversaries, let alone to American companies, and threatened to use the Defense Production Act to force the withdrawal of our safeguards.

These two threats are inherently contradictory: one labels us a security risk; the other treats Claude as essential to national security.

In any case, these threats do not alter our position. We cannot, in good conscience, accept their request.

The Department is free to choose the contractor that best aligns with its vision.

However, given the significant value Anthropic's technology provides to our military, we hope they will reconsider.

We strongly prefer to continue serving the Department of Defense and the warfighter while maintaining the two requested safeguards.

If the War Department decides to offboard Anthropic, we will facilitate a seamless transition to another provider, ensuring no disruption to ongoing military planning, operations, or other critical missions.

Our model will remain available for as long as necessary, under the broad terms proposed. We stand ready to continue our work to support America's national security.

For days, Anthropic and the Pentagon had been locked in an escalating battle over how to leverage cutting-edge AI technology and how it could support military operations. The Pentagon demanded unrestricted access to the AI system without the protections Anthropic sought, while Anthropic insisted on safeguards, including restrictions on the surveillance of Americans, and was refused.

Trump's remarks came as the Pentagon and Anthropic continued to negotiate a compromise.

In his announcement, Trump called Anthropic "left-wing lunatics" and said the company "made a mistake trying to coerce the Pentagon." On Anthropic's Claude model, which is essential for precision strikes, the New York Times reported, "While AI applications supporting ground military operations are still in the development stage, these models are actively used for intelligence analysis. Forcing Claude off government computers would be a major blow to National Security Agency (NSA) analysts examining foreign wiretaps, and it could also hinder CIA analysts' ability to find patterns in intelligence reports." The Times continued, "Former officials said CIA officials were anxious to find ways to continue using Claude, which accelerated their work and deepened their analysis."

Even before Trump's remarks, officials warned that any order from the president could force agencies to find other solutions. The Pentagon is poised to pursue Grok, developed by Elon Musk's xAI, for classified systems, but current and former administration officials consider Grok an inferior product. Such AI software has become essential to the US military's preferred phases of war preparation, such as pre-targeting, eavesdropping, and precision strikes, and a transition away from Anthropic would take time and almost certainly cause disruption.

The Pentagon has stated that private contractors cannot legally determine how the tools can be used for national security purposes.

The Pentagon has extended its military contracting principles, which typically exclude weapons manufacturers from determining missile launch locations, to include contracts for AI-powered weapons.

The New York Times reported, "Hegseth, a former Fox News host, has been a vocal critic of policies and companies he deems overly progressive. He actively seeks to integrate AI into war planning and weapons development, echoing President Trump's rhetoric that has made AI expansion a cornerstone of his policy. Anthropic, a five-year-old company with a $380 billion valuation, has staked its reputation on AI safety, has worked with US defense and intelligence agencies while raising concerns about the technology's risks, and is currently the only AI company operating on the Pentagon's classified systems."

OpenAI, the maker of ChatGPT, announced on the 27th that it had reached an agreement with the Pentagon to provide its AI technology for classified systems, just hours after President Trump ordered federal agencies to stop using AI technology developed by rival company Anthropic.
Under the agreement, OpenAI agreed to allow the Pentagon to use its AI systems for lawful purposes.

In a social media post, OpenAI CEO Sam Altman said, "In all our interactions, the Department of War has demonstrated a deep respect for security and a willingness to collaborate to achieve the best possible outcome," using the initials of the Department of War, the Trump administration's preferred name for the Pentagon.

Altman, who visited South Korea to meet with President Lee Jae-myung, posted the following X to announce the contract with the Trump administration:
<Tonight, we agreed to deploy our models on the Department of War's classified networks.
In all our interactions, the Department of War has demonstrated a deep respect for security and a willingness to collaborate to achieve the best possible outcome.
Our mission is to ensure AI safety and the broad distribution of its benefits. Two of the most important safety principles are the prohibition of domestic mass surveillance and human responsibility for the use of force, including autonomous weapons systems.

The Department of War (DoW) agrees with these principles, embodies them in its laws and policies, and incorporates them into its agreements. Furthermore, we will implement technical safeguards to ensure the model performs as intended. We will deploy forward-deployed engineers (FDEs) to support the model, and to ensure its safety we will deploy it only on cloud networks.
We are asking the DoW to provide equal terms to all AI companies, and we believe everyone should accept them.
We strongly hope that the situation will be resolved through a reasonable agreement, rather than through legal or governmental action.
We are committed to serving all of humanity to the best of our ability. The world is complex, confusing, and sometimes dangerous. >
OpenAI stepped in amid the escalating standoff in which President Trump labeled Anthropic a "radical left-wing AI company" and cut it off from the supply chain. On CNBC on the 27th, Altman expressed confidence in Anthropic, saying, "They really care about safety." However, he negotiated with the Department of Defense differently than Anthropic did, agreeing to allow OpenAI's technology to be used for any lawful purpose.
Two people familiar with the discussions, speaking on condition of anonymity, told the New York Times that Altman had publicly supported Anthropic's position that AI should not be used for domestic surveillance or autonomous weapons, but had also been involved in negotiations with the Pentagon over the technology since the 25th. They added that at an artificial intelligence summit in India last week, Altman and Dr. Amodei were captured on video refusing to hold hands during a photo op with Prime Minister Narendra Modi.

The New York Times reported, "Altman and Anthropic CEO Dario Amodei have long been bitter rivals. Dr. Amodei and several of Anthropic's founders previously worked at OpenAI, but left in 2021 due to differences with Altman and others over how to fund, build, and release AI."

Anthropic's Amodei opposed the use of autonomous weapons in the name of "democracy," while Altman, whom President Lee praised, invoked a commitment to "serving all of humanity" in a world that is "complex, confusing, and sometimes dangerous"; the two clashed over these opposing worldviews.

 

The Pentagon negotiated privately with OpenAI while publicly negotiating with Anthropic, and Altman, facing backlash over OpenAI's involvement, invited questions about the deal on X on the 28th.
Many asked how OpenAI could maintain its safety principles while contracting with the Pentagon, and whether its agreement truly protects its AI models from misuse.
Altman said of the deal, "We don't want the power to comment on specific, lawful military actions," but "we do want the ability to use our expertise to design secure systems," even as he agreed to "unrestricted military use."


See <Lee Jae-myung's 2021 pledge to violate the Constitution on direct democracy with AI: 'New Deal 4% high growth', July 14, 2025>
<Lee Jae-myung's AI government 'lines up companies' amidst excessive publicity and investment competition for AI, June 28, 2025>

<Lee Jae-myung's 'AI-powered killer drones and autonomous machine guns in the Ukraine War', July 3, 2025>
<Israel's AI technology to target Hamas, Lee Jae-myung's 'AI to ease polarization,' April 26, 2025>
<Extremist economic vision lacks concrete far-right alternatives, Lee Jae-myung's 'AI to shorten labor without taxes,' March 3, 2025>