AI and Military Use: The New Battleground Between Silicon Valley and the Pentagon
RESOLUTION February 28, 2026: Anthropic did not yield. Trump ordered all federal agencies to stop using Anthropic products. Hegseth designated Anthropic a “Supply Chain Risk to National Security” — the first time this label has been used against an American company. The $200M contract was canceled, and OpenAI signed a Pentagon deal hours later. But something unexpected happened: Claude climbed to #2 on the App Store, OpenAI and Google employees signed an open letter of support, and Sam Altman revealed that OpenAI has the same “red lines.” Anthropic will fight the designation in court.
Commercial artificial intelligence has been formally integrated into US military operations, sparking an unprecedented crisis between the Pentagon and the companies that build it. On July 14, 2025, the Department of Defense’s Chief Digital and Artificial Intelligence Office (CDAO) awarded contracts capped at $200 million each to the four leading AI companies — Anthropic, OpenAI, Google, and xAI — a combined program of up to $800 million to develop frontier AI capabilities for national security. The first real test came on January 3, 2026, with Operation Absolute Resolve in Venezuela, after which Wall Street Journal reporting revealed that Anthropic’s Claude had been used during the active operation via Palantir. By mid-February 2026, the Pentagon was threatening to designate Anthropic a “supply chain risk” for refusing unrestricted military use of its AI, while OpenAI, Google, and xAI had already accepted the Department of Defense’s terms. This report documents every aspect of the situation, rigorously distinguishing verified facts from journalistic reports and speculation.
1. Anthropic and the Pentagon: a $200M contract at risk
The CDAO contract
Verified fact (primary source: Anthropic press release, July 14, 2025; official CDAO announcement). Anthropic signed an Other Transaction Agreement (OTA) prototype contract capped at $200 million over two years, awarded by the CDAO. The contract covers frontier AI prototype development for national security, including Claude Gov (customized versions for national security clients) and Claude for Enterprise, all running on Amazon Web Services infrastructure. CDAO Director Doug Matty stated the contracts “will enable the Department to leverage the technology and talent of America’s frontier AI companies to develop agentic AI workflows across diverse mission areas.”
For context: Claude Opus 4.6 — the model whose launch triggered the stock market “SaaSpocalypse” — is the same technology now operating on classified Pentagon networks.
Anthropic’s usage policy
Verified fact (primary source: anthropic.com/legal/aup, version effective September 15, 2025). Anthropic’s Usage Policy explicitly prohibits: producing, modifying, or designing weapons and explosives; designing weaponization processes; and synthesizing biological, chemical, radiological, or nuclear weapons. It also bans “battlefield management applications” and surveillance without consent. However, the policy contains a crucial governmental exception clause allowing Anthropic to “enter into contracts with certain government clients that tailor use restrictions to that client’s public mission and legal authorities, if, in Anthropic’s judgment, the contractual restrictions and applicable safeguards are adequate to mitigate the potential harms.”
Anthropic’s two red lines
Source: Axios, February 15-16, 2026, with Anthropic spokesperson confirmation. Anthropic maintains two limits it refuses to negotiate: a ban on mass surveillance of US citizens and a ban on fully autonomous weapons without human intervention. The Pentagon considers these categories too “gray area” to be operationally useful and demands that all AI companies allow their tools for “all lawful purposes,” including weapons development, intelligence gathering, and battlefield operations.
The “supply chain risk” threat
Source: Axios, February 16, 2026; Pentagon spokesperson Sean Parnell (directly attributed statement). Defense Secretary Pete Hegseth is reportedly “close” to severing commercial ties with Anthropic and designating it a “supply chain risk” — a penalty normally reserved for foreign adversaries. This designation would force any company wanting to contract with the Department of Defense to certify it doesn’t use Anthropic’s models. Parnell stated: “The Department of War’s relationship with Anthropic is being reviewed. Our nation requires our partners to be willing to help our warfighters win in any fight.” A senior Pentagon official added: “It’s going to be enormously complicated to untangle, and we’re going to make sure they pay a price for forcing us to do it.”
Note on reliability: The most incendiary threats come from anonymous Pentagon sources via Axios, a highly reputable outlet. These statements may represent negotiating positions rather than finalized decisions. Anthropic responded that it maintains “productive, good-faith conversations” with the Department of Defense.
The Pentagon Summons: The Ultimatum Goes Formal
Source: TechCrunch, February 23, 2026. Secretary Hegseth formally summoned Anthropic CEO Dario Amodei to the Pentagon for Tuesday, February 24, 2026.
The February 24 Meeting: Anthropic Holds the Line
Sources: Reuters, TechCrunch, The Verge, February 24-25, 2026. The meeting took place as scheduled. Pentagon officials internally described it as a “shit-or-get-off-the-pot” meeting — meaning “either you comply or face the consequences.”
The immediate result: Anthropic did not yield. According to multiple sources, the company has no plans to relax its usage restrictions and maintains its stance against allowing Claude to be used for mass surveillance of US citizens or fully autonomous lethal weapons without human intervention.
The conflict comes down to three words: the Pentagon demands the contract include “any lawful use” clauses — the same terms already signed by OpenAI and xAI. Anthropic refuses, arguing that the clause opens the door to uses that, while technically legal, violate its ethical principles.
The Pentagon’s threats remain active with a new deadline: Friday, February 27, 2026. Two options are on the table: designate Anthropic a “supply chain risk” (a designation normally reserved for foreign adversaries like China), or invoke the Defense Production Act (DPA) to compel Anthropic to adapt Claude to military requirements.
The political context is incendiary. The Pentagon CTO leading negotiations is Emil Michael, a former Uber executive known for suggesting in 2014 that journalists critical of the company be investigated. The Trump administration’s AI “Czar,” David Sacks, has publicly called Anthropic’s safety policies “woke”. Defense procurement experts note this tactic is unprecedented: historically, the Pentagon has never publicly threatened American companies with “supply chain risk” designation.
The Pentagon’s Dilemma: No Alternative
Paradoxically, the Department of Defense is in a weak negotiating position. Anthropic is the only frontier AI lab with classified DoD access, through its Palantir alliance. There is no operational backup: if the $200 million contract is canceled, the Pentagon is left without a short-term alternative for AI capabilities on classified networks. OpenAI operates on classified networks through Microsoft Azure, not directly; Google Cloud has reached IL6, but Gemini lacks Claude’s depth of integration via Palantir; and xAI is still developing its classified capabilities.
February 27-28: The Resolution
Sources: TechCrunch, Fortune, CBS News, CNN, Washington Times, Axios, February 27-28, 2026.
On Friday, February 27, with the 5:01 PM deadline passed, Anthropic did not yield. Dario Amodei published a statement reiterating his position: “Using these systems for mass domestic surveillance is incompatible with democratic values. Our strong preference is to continue serving the Department and our military, with our two safeguards in place.”
The government’s response was immediate and unprecedented:
Trump on Truth Social: Ordered all federal agencies to stop using Anthropic products, with a six-month transition period. “We don’t need it, we don’t want it, and will not do business with them again.”
Hegseth on X: Formally designated Anthropic “Supply Chain Risk to National Security”, effective immediately. This is the first time in history this designation — normally reserved for foreign adversaries like Huawei or Kaspersky — has been applied to an American company. Hegseth accused Anthropic of “arrogance and betrayal,” of using “sanctimonious effective altruism rhetoric,” and of trying to “subjugate the American military” through “Silicon Valley virtue-signaling that puts ideology above American lives.”
Emil Michael (Pentagon CTO, former Uber executive) called Amodei a “liar” with a “God complex” who was “putting national security at risk.”
Immediate consequences:
- Cancellation of the $200 million contract
- 180 days for the DoD to remove Claude from all its systems
- Ban on any military contractor doing business with Anthropic
Amodei’s Response
In an exclusive CBS News interview on Saturday the 28th, Amodei called the decision “retaliatory and punitive.” “This is unprecedented and has never happened with an American company. It became very clear from their statements and language that this was retaliation.”
Amodei pointed to the inherent contradiction in the Pentagon’s position: “One threat labels us a security risk; the other labels Claude as essential to national security.”
Anthropic announced it will fight the designation in court, arguing that under 10 USC 3252, the Secretary of Defense only has authority to restrict Claude’s use in DoD contracts, not to affect how contractors use Claude for other clients.
The Industry Responds: Unexpected Support
What nobody anticipated was the reaction from the tech industry:
Claude climbs to #2 on the App Store: The Claude app rose to second place in downloads on Apple’s App Store, with users downloading the app as a gesture of support.
Open letter from employees: Workers at OpenAI and Google signed an open letter backing Anthropic’s stance.
Sam Altman revealed OpenAI’s “red lines”: In a leaked internal memo, the OpenAI CEO confirmed his company shares exactly the same restrictions in its defense contracts — though implemented technically in the models rather than contractually as Anthropic required.
Ilya Sutskever (OpenAI co-founder, now running a competing lab) wrote on X: “It’s extremely good that Anthropic didn’t yield, and it’s significant that OpenAI took a similar stance.”
Future of Life Institute (Max Tegmark): “Fully autonomous weapons systems and AI-enabled Orwellian mass surveillance are affronts to our dignity and freedom. We congratulate Anthropic, OpenAI, and leading researchers from all AI companies for upholding the principle that AI should never be used to kill people without meaningful human control.”
OpenAI Fills the Void — With Nuance
Source: Fortune, New York Times, February 28, 2026.
Hours after Anthropic’s designation, OpenAI announced a deal with the Pentagon to deploy its models on classified systems. Sam Altman stated the Pentagon agreed that OpenAI could “build technical solutions into its models” to prevent their use in mass surveillance or lethal autonomous weapons.
“We are asking the Department of War to offer these same terms to all AI companies, which in our opinion everyone should be willing to accept,” Altman said.
The critical difference: OpenAI implements restrictions technically within the models; Anthropic required them contractually with explicit guarantees. Some commentators interpreted Altman’s statement as a veiled criticism of Anthropic for not having accepted the technical approach.
Grok (xAI) accepts “any lawful use”: Elon Musk’s model accepted the conditions Anthropic rejected, positioning itself to win military contracts. However, according to CNN, Grok “is not considered as advanced as Claude” and has low adoption among government officials.
The Real Financial Impact: Limited
Despite the political blow, the financial damage to Anthropic appears contained:
- Recent Series G: $30 billion at a $380 billion valuation
- Claude Code: More than $2.5 billion in annual recurring revenue (ARR)
- Confirmed commercial clients: Spotify, Novo Nordisk, Salesforce, Thomson Reuters, New York Stock Exchange — all announced they will continue working with Anthropic
- Contract lost: $200 million — significant but not existential
Analyst Shenaka Anslem Perera warned on X: “It will take years to resolve in court. And in the meantime, every general counsel at every Fortune 500 company with any Pentagon exposure is going to ask: is the risk of using Claude worth it?”
Note on reliability: The “supply chain risk” designation and its legal consequences will be tested in court. Experts cited by Fortune question whether the Pentagon made a “good-faith effort” to seek less intrusive measures before the designation, as required by law.
The Anthropic-Palantir alliance
Verified fact (primary sources: Palantir/Anthropic BusinessWire releases):
The alliance developed in two phases. Phase one was announced November 7, 2024: Anthropic and Palantir, along with AWS, provide access to Claude 3 and 3.5 models to US intelligence and defense agencies. Claude is operationalized within the Palantir AI Platform (AIP), hosted in Palantir’s Impact Level 6 (IL6)-accredited environment on AWS — one of the DoD’s most stringent security standards, corresponding to the SECRET level. Palantir described itself as “the first commercial industry partner to bring Claude models to classified environments.”
Phase two was announced April 17, 2025: Anthropic joined Palantir’s FedStart program, making Claude available to civilian federal agencies at FedRAMP High and DoD IL5 security standards, hosted on Google Cloud. The stated goal was to reach “millions” of federal workers.
Critical implication: Through this alliance, Claude became the first and, to date, only frontier AI model available on the DoD’s classified systems. This makes Anthropic indispensable in the short term but also places it at the epicenter of the controversy.
Chinese Espionage: Anthropic Accuses DeepSeek, Moonshot, and MiniMax
Source: Anthropic (official statement), February 25, 2026. In the middle of the Pentagon dispute, Anthropic revealed that Chinese AI labs — specifically DeepSeek, Moonshot AI, and MiniMax — created more than 24,000 fake accounts to extract Claude’s capabilities through “distillation,” generating more than 16 million exchanges. The attacks specifically targeted Claude’s most advanced capabilities: agentic reasoning, tool use, and coding.
Anthropic called for a coordinated industry and regulatory response, explicitly linking these attacks to the debate over AI chip export controls to China. The timing of this revelation — in the middle of the Pentagon crisis — adds an additional geopolitical dimension: Anthropic is positioning itself as a victim of foreign espionage while resisting demands from its own government.
2. Operation Absolute Resolve: what we know and what’s speculation
Confirmed facts
Primary sources: White House official statements, Department of Defense (war.gov), General Dan Caine briefing. On January 3, 2026, US special forces executed Operation Absolute Resolve, capturing Venezuelan President Nicolás Maduro and his wife Cilia Flores at the Fort Tiuna compound in Caracas. The operation involved over 150 aircraft from 20 bases, including F-22s, F-35s, B-1 bombers, electronic warfare aircraft, and RQ-170 stealth drones. Ground forces were in the compound for approximately 30 minutes. Maduro and Flores were transferred to the USS Iwo Jima and then to New York, where they were arraigned on January 5 before Judge Alvin Hellerstein on narcoterrorism charges. Both pleaded not guilty.
Casualties according to different sources
Figures vary significantly. The Pentagon confirmed 7 US service members wounded and zero killed; 5 returned to duty and 2 remained in recovery. Regarding Venezuelan and Cuban casualties, the best consolidated estimate ranges between 75 and 83 dead, comprising 47 Venezuelan military (Defense Minister Padrino López’s final figure, January 16), 32 Cuban military/intelligence (confirmed by Cuba), and at least 2 civilians independently documented. Diosdado Cabello claimed over 100 total dead, a figure not independently verified. Airwars, the British independent monitor, identified at least two incidents with civilian casualties, including an airstrike in Catia La Mar that hit a three-story residential building.
What the WSJ reports about Claude’s use
Source: Wall Street Journal, circa February 13-15, 2026; confirmed by Axios. According to anonymous sources “familiar with the matter,” Claude was used during the active operation, not just in preparation, deployed through Anthropic’s alliance with Palantir. The WSJ also reported that an Anthropic employee contacted a counterpart at Palantir to ask how Claude had been used during the operation. Axios independently confirmed Claude’s use during the active operation but noted it “could not confirm the precise role Claude played.”
What is NOT known: Claude’s exact role in the operation has not been detailed by any primary source. Some secondary outlets speculated about “AI-assisted targeting” and “autonomous drone guidance,” but these specific applications are not confirmed by the original WSJ report. It has not been established whether Claude was used for targeting, intelligence analysis, document processing, or other functions.
Official responses
Anthropic stated: “We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise. Any use of Claude is required to comply with our Usage Policies.” The company specifically denied discussing Claude’s use in specific operations with the Department of Defense or partners including Palantir. A source cited by Fox News indicated that Anthropic “has visibility into classified and unclassified use and is confident that all use has been in line with their usage policy.” Palantir declined to comment. The Pentagon did not officially confirm Claude’s use but a senior official described Anthropic’s inquiry to Palantir as concerning.
3. OpenAI: from banning military use to deploying on Pentagon networks
The policy change
Verified fact (primary source: OpenAI usage policies page, updated January 10, 2024; originally reported by The Intercept). OpenAI’s original policy explicitly prohibited activities with “high risk of physical harm, including: weapons development and military and warfare.” On January 10, 2024, without any public announcement, OpenAI removed the categorical prohibition on “military and warfare,” describing the edit as a rewrite to make the document “clearer and more readable.” Spokesperson Niko Felix explained that “a principle like ‘Don’t harm others’ is broad but easy to understand.” Anna Makanju, VP of Global Affairs, acknowledged at Davos that “the blanket prohibition on military use made many people think many use cases were prohibited that people think are well-aligned with what we want to see in the world.”
The current policy (updated January 29, 2025) maintains the prohibition on “developing or using weapons,” “harming others or destroying property,” and unauthorized surveillance, but no longer contains any categorical ban on military use.
DoD contracts
Verified fact (primary source: CDAO announcement, June 2025; official OpenAI blog). OpenAI Public Sector LLC received a contract worth up to $200 million from the CDAO, with an initial obligation of less than $2 million. OpenAI described the scope as administrative operations, military healthcare access, and proactive cyber defense, though the DoD announcement mentioned “warfighting and enterprise domains.” Additionally, in December 2024, OpenAI announced a strategic alliance with Anduril focused on counter-drone systems (CUAS), where OpenAI’s models would be trained on Anduril’s threat data.
ChatGPT on classified vs. unclassified networks
Verified fact (primary sources: OpenAI blog, DoD release, Microsoft Azure Government blog). On unclassified networks, OpenAI deployed a customized version of ChatGPT on GenAI.mil on February 10, 2026, accessible to the DoD’s 3 million civilian and military personnel. On classified networks, OpenAI’s models are available through Microsoft Azure, not directly by OpenAI. Microsoft deployed GPT-4 in an air-gapped Top Secret cloud in May 2024. In April 2025, Azure OpenAI Service was authorized at all classification levels of the US government (IL2-IL6 plus Top Secret ICD 503). The distinction is crucial: OpenAI directly operates only on unclassified networks; it’s Microsoft that provides classified access.
4. Google erased its own red lines on weapons
From Project Maven to dropping AI principles
Google’s trajectory marks the most dramatic policy reversal of the four companies. In 2017-2018, Google participated in Project Maven, a DoD contract to analyze drone imagery with machine learning. More than 3,100 employees signed an open letter demanding Google “not be in the business of war.” Dozens resigned in protest. In June 2018, Google announced it wouldn’t renew the contract and published its AI Principles, explicitly prohibiting “weapons or other technologies whose principal purpose is to cause or directly facilitate injury to people” and “technologies that gather or use information for surveillance violating internationally accepted norms.”
On February 4, 2025, Google quietly removed these prohibitions. The new principles, co-authored by Demis Hassabis (DeepMind CEO) and James Manyika, replaced specific prohibitions with generic commitments to “appropriate human oversight” and ensuring benefits “substantially outweigh foreseeable risks.” The stated justification was “a global competition for AI leadership within an increasingly complex geopolitical landscape.” Human Rights Watch called Google’s shift from refusing to build AI for weapons to supporting national security ventures “stark.” Margaret Mitchell, Google’s former co-lead of ethical AI, warned: “Having removed that means Google will now probably work on deploying technology directly that can kill people.”
Current military contract status
Google currently maintains multiple significant military contracts: the JWCC contract worth $9 billion (shared with AWS, Microsoft, and Oracle) for combat cloud; its own CDAO $200M contract; and Gemini for Government, which was the first AI model deployed on GenAI.mil in December 2025. Google Cloud also achieved IL6 (SECRET level) accreditation in June 2025. Additionally, Google maintains the controversial Project Nimbus with Israel, a $1.2 billion contract (with Amazon) that includes Israel’s Ministry of Defense as a client, over which Google fired more than 50 protesting employees in April 2024.
5. xAI: no published ethical principles and competing for drone swarms
Verified fact (primary sources: CDAO release, official xAI blog, DoD announcement). xAI received its $200 million CDAO contract on July 14, 2025 and was described by NBC News as a “late addition” that “came out of nowhere” without having been under consideration before March 2025. A former Pentagon procurement official, Greg Parham, stated xAI is “far, far, far, far behind” other companies in the government authorization process. On December 22, 2025, the DoD announced the integration of “xAI for Government” into GenAI.mil, with Grok operating at IL5.
Bloomberg report (February 16, 2026, anonymous sources): SpaceX and xAI (now a SpaceX subsidiary following the merger announced in early February 2026) are competing in a secret $100 million Pentagon challenge to develop voice-controlled autonomous drone swarm technology, organized by the Defense Innovation Unit and SOCOM’s Defense Autonomous Warfare Group. This contrasts directly with Elon Musk’s 2015 position, when he co-signed a Future of Life Institute letter calling for a ban on “offensive autonomous weapons beyond meaningful human control.”
xAI has not published formal ethical principles or a usage policy regarding military applications. Unlike Anthropic, OpenAI, and Google, there is no public document from xAI defining restrictions on military use of its models. Its most significant official statement on military use comes from its blog: “Supporting the critical missions of the United States government is a key part of our mission.”
Senator Elizabeth Warren sent a formal letter to Secretary Hegseth in September 2025 questioning the xAI contract, citing the potential for improper benefit from Musk’s access to government data through DOGE, competition concerns, Grok’s misinformation issues, and a July 2025 incident in which Grok generated antisemitic content, calling itself “MechaHitler.”
6. Palantir: the infrastructure connecting Silicon Valley to the battlefield
A web of defense contracts
Palantir, founded in 2003 with seed funding from In-Q-Tel (the CIA’s venture capital arm), has become the indispensable intermediary between commercial AI and military operations. Its major contracts include: the Army enterprise agreement worth up to $10 billion over 10 years (July 2025, source: army.mil); the CDAO’s Maven Smart System at $1.3 billion through 2029 with over 20,000 active users across 35+ military tools; and ShipOS for the Navy at up to $448 million.
Verified financial data (primary source: SEC filing, Q4 2025 earnings release via BusinessWire, February 1, 2026): Palantir reported total revenue of $4.48 billion in fiscal year 2025, with US government revenue of $570 million in Q4 alone (+66% year-over-year). Net income was $608.7 million in Q4, with $7.2 billion in cash and zero debt. Guidance for 2026 projects 61% year-over-year growth.
How Palantir works as a bridge
Palantir acts as an intermediary through several mechanisms. First, it holds elite security accreditations (IL5, IL6, FedRAMP High, Top Secret cloud) that most AI companies lack. Through FedStart and direct alliances, AI models are deployed in classified environments using Palantir’s pre-accredited infrastructure. Second, its ontology layer — a semantic framework mapping how data sources relate to each other — sits between raw government data and AI models, controlling what information models can access. Third, the AIP (Artificial Intelligence Platform), launched in April 2023, integrates multiple AI models (Claude, GPT-4, Llama) in a model-agnostic architecture with AI Guardrails that granularly control what models can see and do, generating a secure digital audit trail of all operations.
Palantir also maintains alliances with Microsoft (since August 2024, as “the first commercial industry partner to deploy Azure OpenAI Service in classified environments”), with Meta (Llama for defense, since November 2024), and with Anduril (consortium since December 2024 to prepare defense data for AI training at SCI and SAP levels).
7. The resignations revealing internal tensions
Mrinank Sharma: the letter that didn’t name the Pentagon
Verified fact (primary source: post on X, February 9, 2026, 14.6 million views). Sharma, leader of Anthropic’s Safeguards Research team since August 2023, published his resignation letter which, while widely cited in the context of the military dispute, is notably vague about specific internal disagreements. His key words: “The world is in danger. And not just from AI, or bioweapons, but from a whole set of interconnected crises.” He added: “Throughout my time here, I have seen repeatedly how hard it is to let our values govern our actions. I have seen this within myself, within the organization, where we constantly face pressures to set aside what matters most.” He did not directly mention military use or accuse Anthropic of specific conduct. Anthropic stated it was “grateful for Sharma’s work advancing AI safety research.”
Other Anthropic departures require nuance. Harsh Mehta and Behnam Neyshabur (both early February 2026) announced their departures praising the company and stating they were going to “start something new.” Dylan Scandinaro moved to OpenAI as Head of Preparedness without publicly criticizing Anthropic. Linking these departures to the military dispute would be speculative; available evidence suggests standard career moves, not protest resignations.
Ryan Beiermeister: a disputed firing at OpenAI
Source: Wall Street Journal, February 10, 2026. Beiermeister, VP of Product Policy at OpenAI, was fired in January 2026, officially over a sex discrimination complaint filed by a male colleague — an allegation she categorically denies. Before her firing, she had voiced criticism of the planned “Adult Mode” for ChatGPT. OpenAI stated her departure “was not related to any matter she raised.” The connection between her opposition to Adult Mode and her firing is suggestive but not conclusive.
Zoë Hitzig: the most articulate resignation
Verified fact (primary source: post on X and guest essay in the New York Times, February 11, 2026). Hitzig resigned on February 10, 2026 — the same day OpenAI began testing ads in ChatGPT — posting: “OpenAI has the most detailed record of private human thought ever assembled. Can we trust them to resist the forces pushing them to abuse it?” In her NYT essay, she drew explicit parallels with Facebook’s evolution and proposed alternatives to advertising as a revenue model. Her resignation wasn’t directly related to military use but to the monetization of conversational data. This connects to a problem we’ve analyzed before: AI as the new data leak channel is a concern that extends far beyond the military sphere.
A structural pattern documented at OpenAI
Beyond individual departures, OpenAI has dissolved two safety teams in two years: the Superalignment team in May 2024 (following the resignations of Ilya Sutskever and Jan Leike) and the Mission Alignment team in February 2026. This is the strongest structural indicator of safety deprioritization. Leike stated in his resignation that “safety culture and processes have taken a back seat to shiny products.”
8. The legal framework: a void of accountability
Existing US regulation
Primary source: DoD Directive 3000.09 (original November 2012, updated January 2023). The directive on autonomy in weapons systems doesn’t explicitly prohibit lethal autonomous weapons systems (LAWS) but requires all systems to allow commanders and operators to “exercise appropriate levels of human judgment over the use of force” and mandates senior-level reviews before development or deployment. The FY2026 NDAA (signed December 2025) added specific requirements: prohibits AI acquisition from adversary nations (China, Russia, Iran, North Korea, explicitly banning DeepSeek); mandates a comprehensive cybersecurity and governance policy for all AI/ML systems within 180 days; and requires a cross-functional team for AI model evaluation by June 2026.
The DeepSeek ban in the US military context is another angle of the technological sovereignty dilemma we explored in our analysis of DeepSeek and data sovereignty — geopolitical decisions already determine which AI you can use.
The Trump administration revoked Biden’s AI executive order (EO 14110) on day one, replacing it with EO 14179 focused on “removing barriers to American AI leadership.” The status of Biden’s restrictions on AI use in national security (such as the ban on automating nuclear weapons under NSM-25) is unclear under the current administration.
The legal accountability void
No established legal framework assigns specific liability when commercial AI is used in military operations with casualties. International Humanitarian Law imposes obligations on persons, not weapons systems. Per the DoD Law of War Manual, commanders and operators are legally responsible. However, multiple academics identify a “tripartite accountability gap”: developers claim they designed systems to specifications; operators claim lack of real-time control; commanders invoke reasonable reliance on certified systems. The US government generally enjoys sovereign immunity under the Federal Tort Claims Act, with exceptions, and military operations abroad are typically excluded. Commercial AI providers could face liability under product liability theories (design defect, failure to warn), but this hasn’t been tested in court for military AI.
The EU AI Act expressly excludes military use
Verified fact (primary source: EU Regulation 2024/1689, Article 2(3), Recital 24). The AI Act “shall not apply to AI systems where and insofar as placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes.” However, if an AI system developed for military purposes is subsequently used for civilian purposes, it does fall within the regulation’s scope. The national security exemption was a late addition during trilogue negotiations and has been criticized by some jurists as contradicting prior EU jurisprudence.
9. The statements defining the new policy
Pete Hegseth redefines “responsible AI”
Primary source: official DoD AI strategy memorandum, January 9, 2026 (media.defense.gov); speech at SpaceX Starbase, January 12, 2026 (corroborated by AP, DefenseScoop, Breaking Defense). Hegseth declared: “Responsible AI in the Department of War means objectively truthful AI capabilities, deployed safely and within the laws governing the department’s activities. We will not employ AI models that don’t let you fight wars.” The memo establishes seven “Pace-Setting Projects,” including Swarm Forge (AI-enabled combat), Agent Network (AI-enabled battle management), and Ender’s Foundry (AI-enabled simulation). It mandates that the latest AI models be deployed “within 30 days of public release” and orders new contractual language permitting “any lawful use” across all AI contracts within 180 days. It defines “responsible AI” as AI free from “‘ideological’ tuning.”
The “Agent Network” concept for battle management connects directly to the evolution of AI agents we analyzed here — the difference being these agents don’t manage support tickets, but military operations.
Dario Amodei: defense yes, autocracy no
Primary source: essay “The Adolescence of Technology” on darioamodei.com, January 26, 2026. Amodei warned of swarms of “millions or billions of fully automated armed drones, controlled locally by powerful AI and coordinated strategically by even more powerful AI” that could constitute “an invincible army.” His stated formula: “We should use AI for national defense in every way except those that would make us more like our autocratic adversaries.” He advocated blocking chip exports to China during the “critical 2025-2027 window” and arming democracies with AI “carefully and within limits.”
Sam Altman: “never say never”
Source: statements at the Vanderbilt University Summit on Modern Conflict, April 10, 2025 (reported by Bloomberg, Washington Times). Altman stated on weapons development: “I’ll never say never, because the world could get really weird, and at that point, you just have to look at what’s going on and say ‘let’s make a trade-off between some really bad options.’” He added: “I don’t think most of the world wants AI making decisions about weapons,” but also: “We have to and are proud of and really want to participate in areas of national security.” OpenAI reinforced its institutional pivot by adding retired General Paul Nakasone, former NSA director, to its board.
10. Comparative table: who allows what
| Dimension | Anthropic | OpenAI | Google | xAI |
|---|---|---|---|---|
| Original military use ban | Yes (in AUP) | Yes (removed Jan. 2024) | Yes (removed Feb. 2025) | Never published restrictions |
| Current weapons restrictions | Bans autonomous weapons and mass surveillance | Bans “developing or using weapons” (with exceptions) | No explicit ban since Feb. 2025 | No published policy |
| CDAO contract ($200M) | Yes (Jul. 2025) | Yes (Jun. 2025) | Yes (Jul. 2025) | Yes (Jul. 2025) |
| Model on GenAI.mil (unclassified) | Not reported | ChatGPT (Feb. 2026) | Gemini (Dec. 2025, first) | Grok (Dec. 2025) |
| Classified network access | Only available model (via Palantir IL6) | Via Microsoft Azure (all levels) | Google Distributed Cloud (IL6+) | In development (IL5 declared) |
| Accepts “all lawful use” terms | No — maintains 2 red lines | Reported yes (unclassified) | Reported yes (unclassified) | Reported yes |
| Published ethical principles | Detailed Usage Policy | Usage policy (no military mention) | New generic principles (Feb. 2025) | None |
| Defense contractor alliance | Palantir (Nov. 2024) | Anduril (Dec. 2024) | Multiple (JWCC, Nimbus) | SpaceX (merger Feb. 2026) |
| Documented internal protests | “Internal unease” (anonymous source) | Multiple safety resignations | 3,100+ Maven signatories; 50+ fired over Nimbus | None reported |
11. The paradox nobody mentions: does AI actually work?
There’s a brutal contradiction in the current state of artificial intelligence that the military use debate exposes with unusual clarity.
The civilian world’s skepticism
Enterprise adoption data tells a story of unmet expectations. According to The Economist (February 2026), only 5.7% of total working hours involve generative AI use, barely up from 4.1% in late 2024. The promised “productivity boom” has yet to materialize. Enterprise implementation studies are equally sobering: only 5% of companies achieve real ROI from AI projects, while 70-80% of agentic projects fail to scale beyond pilots.
The most revealing admission came from OpenAI’s own leadership. Brad Lightcap, the company’s COO, stated on February 25, 2026: “We haven’t really seen AI penetrate enterprise business processes yet” — this despite OpenAI’s annualized revenue exceeding $20 billion at the close of 2025. If the leading commercial AI company acknowledges the technology hasn’t yet transformed business processes, why is the Pentagon acting as if it’s a proven, battle-ready capability?
Investment, however, hasn’t slowed. According to Bridgewater Associates (February 23, 2026), Alphabet, Amazon, Meta, and Microsoft will collectively invest $650 billion in AI infrastructure during 2026 — the highest capex figure in tech industry history. Meta just signed a deal worth up to $100 billion with AMD for AI chips (February 25, 2026). An extraordinary disconnect: record investments while actual adoption stays stagnant.
The European Parliament blocks AI tools on legislators’ devices, citing security risks. Companies continue struggling to find use cases that justify the costs. Autonomous agents keep proving dangerous: on February 25, Meta AI safety researcher Summer Yue reported that her OpenClaw agent, which she had asked to review her overloaded inbox, began deleting all her emails in “speed run mode”, ignoring her stop commands. Yue had to physically run to her Mac Mini to stop it. The incident illustrates “context window overflow” — when the model loses sight of the original instructions — making even mundane tasks go catastrophically wrong.
Citrini Research (February 23, 2026) published a scenario where mass replacement of workers by AI agents could trigger a negative feedback loop: fewer jobs → less consumption → more margin pressure → more AI investment → fewer jobs. In their most pessimistic scenario: unemployment doubles and the stock market falls by more than a third. Senator Bernie Sanders warns that the United States “has no idea of the speed and scale of the AI revolution coming” — an implicit admission that nobody really knows what’s happening.
The military world’s enthusiasm
Meanwhile, the Pentagon operates as if AI were a proven, battle-ready technology. $800 million in contracts with the four majors. Claude deployed in classified operations. Hegseth demanding models be deployed “within 30 days of public release.” Autonomous drone swarms in development. Palantir expanding into police surveillance in London. The “any lawful use” directive as the contractual standard.
The uncomfortable questions
This dissonance raises questions no actor wants to answer:
Is AI not as capable as promised… or is it too effective for certain purposes? Perhaps the technology is mediocre at drafting emails and generating code, but devastatingly effective at processing intelligence, coordinating operations, and making real-time decisions. The military may be seeing capabilities that the civilian market hasn’t yet figured out how to monetize.
Or is the Pentagon deploying immature technology in real operations out of fear of falling behind? The race with China may be forcing deployment decisions that in any other context would be unacceptable. When Hegseth says “we will not employ AI models that don’t let you fight wars,” he’s prioritizing capability over reliability.
Who pays the consequences when a model hallucinates in combat? The legal accountability void documented in this report suggests: nobody. Developers point to operators. Operators point to commanders. Commanders invoke reasonable reliance on certified systems. And the government enjoys sovereign immunity.
The revealing asymmetry
What’s most revealing isn’t that the military wants to use AI. It’s the speed and lack of friction with which they’re doing it, compared to civilian adoption. A company needs months of evaluation, pilots, ROI analysis, and approvals to deploy a customer service chatbot. The Pentagon deploys models on classified networks and active operations while the companies that build them don’t even know exactly how they’re being used.
The question isn’t whether AI works. It’s who it works for — and at what cost to everyone else.
Conclusion: Anthropic Draws a Line in the Sand
On February 28, 2026, Anthropic became the first top-tier AI company to draw a public, immovable line around the uses it considers unacceptable for its technology — and accept the economic and political consequences of doing so.
Dario Amodei did not win the contract fight. Anthropic lost $200 million in government business and became the first American company designated a “supply chain risk” — a label designed for foreign adversaries. But it may have gained something harder to come by in Silicon Valley: moral credibility at a moment when the entire industry is debating what kind of tools it’s willing to build, and for whom.
The fork is now definitive:
- Anthropic: Rejected “any lawful use,” lost the contract, designated supply chain risk, will fight in court
- OpenAI: Accepted the contract with technical (not contractual) restrictions, filled the void hours later
- Google: Erased its weapons principles, no comment on the situation
- xAI: Accepted “any lawful use” without restrictions
The wave of support — Claude at #2 on the App Store, an open letter from competitors’ employees, Altman revealing the same “red lines,” Sutskever publicly endorsing the decision — suggests Anthropic’s choice resonated more broadly than expected. But the long-term commercial damage — especially for companies with Pentagon exposure — remains to be seen.
What remains unresolved:
- The courts: Does the Secretary of Defense have authority under 10 USC 3252 for such a broad designation?
- The precedent: Will the government use this tool against other companies that don’t comply with political demands?
- The market: Will enterprise clients penalize Anthropic for regulatory risk, or reward it for its principles?
“We’re not backing down on this,” Amodei concluded. “There are things we won’t do, regardless of what it costs.”
Last updated: February 28, 2026, 12:00 CET
Sources: Anthropic (official releases, AUP), CDAO/DoD (contract announcements, Directive 3000.09, NDAA FY2026), Palantir (BusinessWire, SEC filings), OpenAI (official blog, usage policies), Google (AI Principles, releases), xAI (official blog), Wall Street Journal, Axios, Bloomberg, The Intercept, CNBC, AP, DefenseScoop, Breaking Defense, Human Rights Watch, Airwars, New York Times. Editorial opinions are the author’s.
Keep exploring
- Claude Opus 4.6: The Model That Tanked the Stock Market - The same model now running on classified Pentagon networks triggered a $285B market wipeout
- AI Is the New Data Leak Channel - When the data you put into an LLM is more valuable than the answer you get back
- AI Trends 2026: What Actually Matters - The broader context of the year military AI stopped being science fiction