Final Script: The Pentagon Wants AI Without a Conscience -- And Now It Has One
Metadata
- Duration: 13 minutes estimated
- Word count: ~2,080 words
- Date: 2026-03-02
- Draft version: Final
Friday afternoon, February 27th. Pete Hegseth officially designates Anthropic -- the company that makes Claude -- a supply chain risk to national security. That label had been reserved for Huawei. Kaspersky. Foreign adversaries. As of 5 PM Eastern, Anthropic is officially a danger to America.
Friday evening, February 27th. US Air Force jets are en route to targets in Iran. The targeting systems running inside CENTCOM? Claude. The same AI that, as of four hours ago, is officially a threat to national security.
Hours later. OpenAI announces a Pentagon deal. Its contract includes a ban on mass surveillance and autonomous weapons -- guardrails the Pentagon just punished Anthropic for insisting on.
We covered this standoff on the day of the deadline. Since then, every warning in that episode has come true -- and several things happened that nobody predicted. So this is the aftermath. And the aftermath is worse than the standoff.
Quick context for anyone catching up. Anthropic worked with the Pentagon for over a year. By all accounts, it deployed Claude in classified networks before anyone else. It cut off CCP-linked firms and forfeited what the company says was hundreds of millions in revenue to do it. This was not a reluctant military partner. This was the most forward-leaning AI company in defense.
So what did the Pentagon want? "All lawful use" language -- zero restrictions beyond what's technically legal. Anthropic drew two lines: no mass domestic surveillance of Americans, and no fully autonomous weapons without adequate reliability and oversight. Two things. Out of the entire range of military applications. And the Pentagon's response was not to negotiate. It was to brand an American company with a label previously reserved for hostile foreign governments.
Here's what we know from multiple credible reports: Claude was used for target identification, battle simulation, and operational planning during the strikes on Iran. The Pentagon hasn't officially confirmed this. But the reporting is solid, and Defense One reports that replacing Claude within Pentagon infrastructure would take three to six months, given how deeply it's integrated into classified systems. The ban itself includes a six-month phase-out window.
Now -- I want to be precise about this, because the precision actually makes it worse. The ban has a phase-out period. The Pentagon did not claim Claude would stop working overnight. So technically, Claude running in CENTCOM systems the same evening does not contradict the ban's terms.
But think about what that means. The Pentagon knowingly designed a ban that concedes the military cannot function without the technology it is simultaneously designating as dangerous. They built in a six-month grace period because immediate enforcement would break their own systems. That is not a security response. That is a punishment the Pentagon itself cannot afford to execute.
But the Iran strikes are only half the story. Because while Claude was running targeting systems for American jets, Sam Altman was already on the phone with the Pentagon.
Now, I'm not going to overstate this. The details matter -- and the details are actually worse than the headline.
The contracts are not identical. OpenAI's deal prohibits "unconstrained" collection of Americans' private information -- but it does not restrict collection of publicly available information. And here's the thing (this is the part that should make your skin crawl): under current law, the government can buy your geolocation data, your browsing history, your financial records from data brokers -- no warrant, no judge, no probable cause. Just a credit card. AI makes it possible to assemble all of that scattered, individually innocuous data into a comprehensive picture of any person's life. Automatically. At massive scale. That's the gap in the OpenAI contract. That's exactly the loophole through which the actual surveillance would occur.
OpenAI also accepted the "all lawful purposes" framework that Anthropic rejected, layering its safety commitments in as additional provisions rather than as restrictions on government discretion. Same principles. Different legal architecture. And the difference is not the ethics. It is the enforceability.
And here's the part that really lands. Even Sam Altman -- the man who won this contract -- is publicly uncomfortable with how it happened. His words: "The optics don't look good." "Definitely rushed." And on the Anthropic ban specifically: "an extremely scary precedent."
[BEAT]
The supply chain risk statute -- 10 U.S.C. 3252 -- is a procurement tool, not a sanctions weapon. As Tess Bridgeman at Just Security argues -- and I want to flag that this is expert legal opinion, not a court ruling -- Hegseth exceeded his statutory authority. The statute lets the Secretary exclude companies from bidding on specific sensitive IT contracts. It does not authorize a blanket commercial ban. And the statute defines "supply chain risk" as involving an adversary attempting to sabotage or subvert systems. Both sides acknowledge this was a contract dispute, not sabotage.
Supply chain risk designations and similar bans have only ever targeted companies with actual foreign adversary ties. Huawei. Kaspersky. Acronis. Using that designation against a San Francisco AI company over two contractual restrictions? That's a Korean War-era wrench being used to hammer a 2026 nail.
Anthropic intends to challenge the designation; the outcome is uncertain. But the legal analysis matters because it reveals the pattern: when there are no rules, the government reaches for whatever tools are lying around, whether or not they fit.
The principle of civilian control of the military matters. As someone who served (and, thanks to Donald Trump, can't legally serve again), civilian control isn't abstract to me. The democratic argument goes like this: private companies should not set the ethical boundaries of military operations. That is a democratic function. As Alan Rozenshtein wrote at Lawfare, we would not want Lockheed Martin selling an F-35 and then telling the Pentagon which missions it could fly. Anthropic is a private corporation. It is not elected. Its safety policies are set by its CEO and board, not by voters. When Anthropic says "no mass surveillance," it is making a policy determination about the limits of government power -- exactly the kind of determination Congress should make.
I want to be direct: that argument has genuine force. It comes from the same democratic principles I invoke when I criticize Congress. It is not a MAGA argument -- it is held by defense policy professionals across the spectrum, including people deeply uncomfortable with Hegseth.
And Anthropic is not the spotless hero this story might want. Two days before the Pentagon deadline, the company quietly rewrote its Responsible Scaling Policy -- its flagship safety commitment -- replacing binding limits with nonbinding targets. TIME reported this as "dropping its flagship safety pledge." Anthropic cited competitive pressure. The company also agreed to missile defense, intelligence analysis, and cyber operations. The two red lines protect Americans but do not protect foreign populations from AI-assisted targeting. This is not a pacifist company making a principled stand against all military AI. It is a company that agreed to nearly everything and drew two specific lines for a mix of ethical and commercial reasons.
You don't have to think Anthropic is a saint to think what happened next is dangerous.
And the Lockheed analogy breaks on inspection. Lockheed sells a finished product -- it cannot monitor or control how F-35s are used after delivery. AI is different. Cloud-deployed. The vendor maintains ongoing access and responsibility. An AI company that discovers its model being used for mass surveillance has a continuing relationship with that use in a way a hardware contractor simply does not. The ethical architecture of AI-as-a-service is not analogous to selling hardware. We haven't built the governance frameworks for that yet -- and that's on Congress.
Hundreds of employees at Google and OpenAI signed open letters opposing unrestricted military AI this week -- the most significant cross-company organizing on military ethics since the Project Maven protests in 2018. Senators Warren and Markey have called this "extortion" and "reckless." But none of them -- not the workers, not the senators -- have yet produced the thing that actually matters. Legislation.
Because here is the harder truth. Anthropic said no. It got punished. OpenAI said yes -- with fine print that may or may not hold. The market incentive is overwhelmingly toward compliance. When the Pentagon comes with hundreds of millions of dollars and the implied threat of a supply chain risk designation, the rational business decision is to salute and ask what they need.
This is what happens when democratic institutions abdicate. The executive branch fills the vacuum with coercion. The private sector fills it with ad hoc ethics that serve brand positioning as much as principle. And the actual democratic body -- Congress -- watches from the gallery.
Congress has still written zero laws governing military AI.
Zero.
The most powerful military on Earth is integrating AI into weapons systems, surveillance infrastructure, and battlefield decision-making, and the body constitutionally charged with governing it has produced nothing.
That silence is the scandal. Not the Pentagon's aggression -- which is predictable. Not Anthropic's imperfect stand -- which is human. The silence.
A CEO's conscience is not a governance strategy. It's a stopgap. And stopgaps have expiration dates. The guardrails Anthropic defended -- no mass surveillance of Americans, no unreliable autonomous weapons -- are worth defending regardless of who is defending them or why. But they should not exist as contractual provisions in a vendor agreement. They should exist as law.
The window is still open. Barely. There are still companies that will push back, still workers who will organize, still legal experts documenting the overreach, still senators who at least know the right words to say. But every week that passes without legislation, the precedent hardens: AI companies serve at the pleasure of the Pentagon, full stop.
The question is not whether the Pentagon wants AI without a conscience. We know the answer to that now.
The question is whether the American people are going to let Congress sit this one out.
Revision Log
Fact-Check Corrections
OpenAI announcement timing (RED). Changed "Saturday morning" to "Hours later" in the cold open. The OpenAI deal was announced Friday evening, not Saturday. All three events occurred on the same Friday within approximately 12 hours. Changed "Seventy-two hours" to "One Friday" to reflect the compressed timeline accurately -- this actually strengthens the argument.
Tech worker numbers (RED). Replaced "Two hundred Google employees and fifty OpenAI employees" with "Hundreds of employees at Google and OpenAI." The original numbers were lower than later-reported totals (~236 Google, ~65 OpenAI per TechCrunch/Bloomberg). Using "Hundreds" avoids getting pinned on specific figures that varied across reporting windows and also avoids conflating two separate letters (joint Google-OpenAI letter and separate Google-only letter).
$200 million figure. Removed the $200 million attribution from the OpenAI deal, as this figure was sourced to Anthropic's terminated contract, not independently confirmed for OpenAI's new deal. Replaced with "gets the contract Anthropic just lost" in the transition line.
OpenAI/Anthropic equivalence in cold open (YELLOW). Changed "the same two guardrails that got Anthropic blacklisted" to "guardrails the Pentagon just punished Anthropic for insisting on." Avoids overstating the similarity while preserving the directional point.
FASCSA precedent (YELLOW). Changed "These FASCSA orders have only ever previously targeted" to "Supply chain risk designations and similar bans have only ever targeted." Huawei was sanctioned under different legal authorities (Entity List, NDAA Section 889), not FASCSA. Only Acronis was targeted under FASCSA specifically. The broader framing is accurate without misattributing the legal mechanism.
Expert legal opinion qualifier (YELLOW). Moved the "this is expert legal opinion, not a court ruling" qualifier earlier, integrating it into the Bridgeman attribution rather than placing it in a separate paragraph after several assertive legal claims.
Project Maven (YELLOW). Changed "walkout" to "protests." Project Maven produced petitions and resignations, not a walkout. Changed "largest" to "most significant cross-company" to avoid a raw-numbers comparison that Maven would win.
Anthropic "first to deploy" claim. Added "By all accounts" qualifier per fact-check blue flag. Added "what the company says was" before the revenue forfeiture figure to flag it as self-reported.
Warren/Markey quotes. Compressed to a single reference without implying both senators used both words about the same action. Kept within normal editorial range for spoken-word script.
Structural Changes
Context paragraph broken up. Split the single dense block into two paragraphs with a question fragment ("So what did the Pentagon want?") as a breathing point between them. Improves audio pacing.
Public-data-loophole paragraph broken up and compressed. Split with a parenthetical aside ("this is the part that should make your skin crawl") that both creates a breathing point and adds characteristic voice texture. Reduced by approximately 25%.
"Zero laws" beat given its own paragraph and air. Separated "Zero." as a standalone one-word paragraph. Followed with the expansion about weapons systems and surveillance. This is the emotional peak and it now has room to breathe.
Bigger-picture section compressed. Cut the tech worker detail from a full paragraph to two sentences. Folded the Warren/Markey reference into the same beat. Moved directly to the market-incentive argument and the "silence is the scandal" landing. Section arrives at its destination faster.
Added [BEAT] between Altman "scary" line and the transition summary. Per editorial note, the irony of the Altman quotes needed space to linger before the summary pivot.
Iran strikes "precision" paragraph split. The "Now -- I want to be precise" section was doing two things (acknowledging the phase-out and drawing the inference). Split into two shorter paragraphs so the payoff lands faster.
Voice Adjustments
Added characteristic voice interruptions. Three insertions: (a) "(this is the part that should make your skin crawl)" in the surveillance loophole section, (b) "And here's the thing" as a register-shift connector, (c) "Because this part is wild" replacing the generic "this is the part that should make everyone sit up straight."
Thesis rewritten for voice. "What the United States government demonstrated this week" replaced with "The punishment wasn't for what Anthropic refused. It was for believing they got to refuse at all." Sharper, less columnist, more Rebecca.
Sentence-opening variety pass. Reduced "The [noun]..." pattern throughout. Added fragments ("Same principles. Different legal architecture."), questions ("So what did the Pentagon want?"), and direct address ("But think about what that means"). Replaced two of four "Now." transitions with different connectors.
"Executive unilateralism wearing democratic clothing" rewritten. Changed to "That's executive power wearing a democracy costume." More colloquial, still precise, matches corpus register.
Sourcing language naturalized. "Multiple credible outlets report, citing sources familiar with the operations" rewritten to "Here's what we know from multiple credible reports" with the caveat broken out separately. Sounds like talking, not like a journalist's attribution clause.
"Structural proof" replaced with "the proof." Per editorial note -- too academic. The host uses concrete, physical language.
Trimmed trailing explanations after strong statements. Removed "which tells you everything about whether this was ever really about security" from the phase-out punishment line. The statement is stronger standing alone.
Unresolved Notes
"Zero laws" precision. The claim is accurate in the strict sense (no standalone statute governs military AI), but Congress has legislated at the margins through NDAA provisions (Section 1061 reporting requirements, pilot programs, governance structures). The host should be prepared to say "zero standalone statutes" or "zero comprehensive laws" if pressed on this in follow-up.
Pop-culture metaphor gap. The editorial notes flagged that the draft lacks a pop-culture or explanatory-metaphor moment beyond "Korean War-era wrench." I chose not to force one. The draft's analytical architecture is doing the work, and an inserted reference would feel grafted on rather than organic. The "democracy costume" line partially fills this role. Host may want to improvise something in delivery.
Emotional register consistency. The editorial notes wanted more dynamic range -- moments of genuine wrestling, vulnerability, humor shifting to anger. I added some texture (the parenthetical aside, the register shifts), but the piece's subject matter resists the kind of personal vulnerability that works in pieces like Flash Point. The military service parenthetical is the emotional anchor, and pushing further risks feeling manufactured. Host should trust her instincts in delivery to add the dynamic range the text sets up but doesn't fully execute on the page.
OpenAI blog quote verification. The quote "more guardrails than any previous agreement for classified AI deployments, including Anthropic's" is attributed to OpenAI's blog post by multiple outlets. Host should verify the exact phrasing includes "including Anthropic's" rather than that being added by reporters.
Revenue forfeiture figure. "Hundreds of millions" is Anthropic's self-reported number (Amodei's own statement). It has not been independently audited. Flagged in script with "what the company says was" but host should be aware this is a company-sourced figure if challenged.