For the Republic
🎙 Episode · 2026-03-02 · ~13-14 minutes (estimated from ~2,050-word script)

The Pentagon Banned an AI -- Then Used It to Bomb Iran


Draft Script: The Pentagon Wants AI Without a Conscience -- And Now It Has One

Metadata

  • Target duration: 13 minutes
  • Word count: ~2,050 words
  • Date: 2026-03-02

Friday afternoon, February 27th. Pete Hegseth officially designates Anthropic -- the company that makes Claude -- a supply chain risk to national security. That label has been reserved for Huawei. Kaspersky. Foreign adversaries. As of 5 PM Eastern, Anthropic is officially a danger to America.

Friday evening, February 27th. US Air Force jets are en route to targets in Iran. The targeting systems running inside CENTCOM? Claude. The same AI that, as of four hours ago, is officially a threat to national security.

Saturday morning. OpenAI announces a $200 million Pentagon deal. Its contract includes a ban on mass surveillance and autonomous weapons -- the same two guardrails that got Anthropic blacklisted.

⬥ ⬥ ⬥
Three facts. Twenty-four hours. And if you can make all three of those true at the same time without your head hurting, you're a better logician than I am.

We covered this standoff on the day of the deadline. Since then, every warning in that episode has come true -- and several things happened that nobody predicted. So this is the aftermath. And the aftermath is worse than the standoff.

Quick context for anyone catching up. Anthropic worked with the Pentagon for over a year. Deployed Claude in classified networks before anyone else. Cut off CCP-linked firms and forfeited hundreds of millions in revenue to do it. This was not a reluctant military partner. This was the most forward-leaning AI company in defense. The Pentagon wanted "all lawful use" language -- zero restrictions beyond what's technically legal. Anthropic drew two lines: no mass domestic surveillance of Americans, and no fully autonomous weapons without adequate reliability and oversight. Two things. Out of the entire range of military applications. And the Pentagon's response was not to negotiate. It was to brand an American company with a label previously reserved for hostile foreign governments.

⬥ ⬥ ⬥
Here's what this week proved. The Anthropic ban was never about safety. Never about capability. Never about supply chain risk. It was a loyalty test. And the proof is sitting right there in the timeline: the Pentagon used the "banned" AI in active combat the same evening, then handed a nearly identical contract to OpenAI the next morning. What the United States government demonstrated this week is that it will punish a company not for what it refuses to do, but for asserting the *right* to refuse at all. And Congress -- the institution that should be writing these rules -- is watching the whole thing happen in silence.
⬥ ⬥ ⬥
Let's start with the Iran strikes, because this is the part that should make everyone sit up straight.

Multiple credible outlets report, citing sources familiar with the operations, that Claude was used for target identification, battle simulation, and operational planning during the strikes on Iran. The Pentagon has not officially confirmed this. But Defense One reports that replacing Claude within Pentagon infrastructure would take three to six months, given how deeply it's integrated into classified systems. The ban itself includes a six-month phase-out window.

Now -- I want to be precise about this, because the precision actually makes it worse. The ban has a phase-out period. The Pentagon did not claim Claude would stop working overnight. So technically, Claude running in CENTCOM systems the same evening does not contradict the ban's terms. But think about what that means. The Pentagon knowingly designed a ban that acknowledges the military cannot function without the technology it is simultaneously designating as dangerous. They built in a six-month grace period because they cannot afford to enforce their own punishment. That is not a security response -- and it tells you everything about whether this was ever really about security.

But the Iran strikes are only half the story. Because while Claude was running targeting systems for American jets, Sam Altman was already on the phone with the Pentagon.

⬥ ⬥ ⬥
OpenAI's deal -- announced hours after the ban -- is the structural proof that the guardrails were never the problem. Walk through OpenAI's three red lines: no mass surveillance, no autonomous weapons, no social credit systems. These are *substantively* the same principles Anthropic was punished for asserting. OpenAI's own blog post calls the deal "more guardrails than any previous agreement for classified AI deployments, including Anthropic's."

But I'm not going to overstate this, because the details matter and the details are where the real problem hides. The contracts are not identical. OpenAI's contract prohibits "unconstrained" collection of Americans' private information -- but it does not restrict collection of publicly available information. Anthropic argued that public data collection at scale is mass surveillance. And they're right. Under current law, the government can buy your geolocation data, your browsing history, your financial records from data brokers -- no warrant, no judge, no probable cause. Just a credit card. AI makes it possible to assemble all of that scattered, individually innocuous data into a comprehensive picture of any person's life. Automatically. At massive scale. That's the gap in the OpenAI contract. That's exactly the loophole through which the actual surveillance would occur.

OpenAI also accepted the "all lawful purposes" framework that Anthropic rejected, layering its safety commitments as additional provisions rather than restrictions on government discretion. So the Pentagon accepted the same principles but a different legal architecture -- one that gives the military more operational latitude. The difference is not the ethics. It's the enforceability.

And here's the part that really lands. Even Sam Altman -- the man who won this contract -- is publicly uncomfortable with how it happened. His words: "The optics don't look good." "Definitely rushed." And on the Anthropic ban specifically: "an extremely scary precedent."

⬥ ⬥ ⬥
When the beneficiary of the ban calls it scary, listen.

So: the company that said no gets branded a national security threat. The company that said yes -- with an asterisk -- gets $200 million. And legal experts are saying the whole thing was illegal in the first place.

The supply chain risk statute -- 10 U.S.C. 3252 -- is a procurement tool, not a sanctions weapon. As Tess Bridgeman at Just Security argues, Hegseth exceeded his statutory authority. The statute lets the Secretary exclude companies from bidding on specific sensitive IT contracts; it does not authorize a blanket commercial ban. And the statute defines "supply chain risk" as involving an adversary attempting to sabotage or subvert systems. Both sides acknowledge this was a contract dispute, not sabotage. These FASCSA orders have only ever targeted companies with demonstrated foreign adversary ties -- Huawei, Kaspersky, Acronis. Using that legal designation against a San Francisco AI company over two contractual restrictions is a procurement wrench being swung like a hammer at a 2026 nail.

Now. This is expert legal opinion, not a court ruling. Anthropic intends to challenge the designation; the outcome is uncertain. But the legal analysis matters because it reveals the pattern: when there are no rules, the government reaches for whatever tools are lying around, whether or not they fit.

⬥ ⬥ ⬥
Now. The obvious pushback on everything I've just said -- and it's a real one -- is that I'm rooting for the wrong team here.

The principle of civilian control of the military matters. I served (and, thanks to Donald Trump, can't legally serve again), so civilian control isn't abstract to me. The democratic argument goes like this: private companies should not set the ethical boundaries of military operations. That is a democratic function. As Alan Rozenshtein wrote at Lawfare, we would not want Lockheed Martin selling an F-35 and then telling the Pentagon which missions it could fly. Anthropic is a private corporation. It is not elected. Its safety policies are set by its CEO and board, not by voters. When Anthropic says "no mass surveillance," it is making a policy determination about the limits of government power -- exactly the kind of determination Congress should make.

I want to be direct: that argument has genuine force. It comes from the same democratic principles I invoke when I criticize Congress. It is not a MAGA argument -- it is held by defense policy professionals across the spectrum, including people deeply uncomfortable with Hegseth.

And Anthropic is not the spotless hero this story might want. Two days before the Pentagon deadline, the company quietly rewrote its Responsible Scaling Policy -- its flagship safety commitment -- replacing binding limits with nonbinding targets. TIME reported this as "dropping its flagship safety pledge." Anthropic cited competitive pressure. The company also agreed to support missile defense, intelligence analysis, and cyber operations. The two red lines protect Americans but do not protect foreign populations from AI-assisted targeting. This is not a pacifist company making a principled stand against all military AI. It is a company that agreed to nearly everything and drew two specific lines for a mix of ethical and commercial reasons.

You don't have to think Anthropic is a saint to think what happened next is dangerous.

⬥ ⬥ ⬥
Here's the pivot, and it needs to land clean. The civilian control argument is correct *in principle*. And that is *exactly* why Congress's silence is unforgivable. You cannot invoke democratic authority to override a company's ethics when the democratic institution responsible for writing the rules has refused to write them. In the absence of legislation, the only guardrails on military AI are whatever individual companies are willing to insist on. That is a terrible system. It is also the only system we have. The Pentagon is not saying "Congress should decide." The Pentagon is saying "*we* decide, and no one gets to disagree." That is not civilian democratic control. It is executive unilateralism wearing democratic clothing.

And the Lockheed analogy breaks on inspection. Lockheed sells a finished product -- it cannot monitor or control how F-35s are used after delivery. AI is a service. Cloud-deployed. The vendor maintains ongoing access and responsibility. An AI company that discovers its model being used for mass surveillance has a continuing relationship with that use in a way a hardware contractor simply does not. The ethical architecture of AI-as-a-service is not analogous to selling hardware. We haven't built the governance frameworks for that yet -- and that's on Congress.

⬥ ⬥ ⬥
Zoom out. This story is not just about one company and one contract.

Two hundred Google employees and fifty OpenAI employees signed open letters opposing unrestricted military AI this week -- the largest tech worker organizing on military ethics since the Project Maven walkout in 2018. The Anthropic standoff reactivated a constituency the defense establishment thought it had neutralized.

But here is the harder truth. Anthropic said no. It got punished. OpenAI said yes -- with fine print that may or may not hold. The market incentive is overwhelmingly toward compliance. When the Pentagon comes with hundreds of millions of dollars and the implied threat of a supply chain risk designation, the rational business decision is to salute and ask what they need. The tech workers writing letters will be ignored. The senators issuing statements -- Warren and Markey calling this "extortion" and "reckless" -- have not introduced legislation. The window in which anyone is willing to push back is closing.

This is what happens when democratic institutions abdicate. The executive branch fills the vacuum with coercion. The private sector fills it with ad hoc ethics that serve brand positioning as much as principle. And the actual democratic body -- Congress -- watches from the gallery. The Anthropic story is a case study in what American governance looks like when the legislative branch has checked out. It is the same dynamic playing out across AI governance, across tech regulation, across emergency powers. The institution designed to write the rules is not writing them, and every other actor in the system is improvising.

Congress has still written zero laws governing military AI. Zero. The most powerful military on Earth is integrating AI into weapons systems, surveillance infrastructure, and battlefield decision-making, and Congress has written zero laws about any of it.

That silence is the scandal. Not the Pentagon's aggression -- which is predictable. Not Anthropic's imperfect stand -- which is human. The silence.

⬥ ⬥ ⬥
Whether Anthropic wins this particular fight almost doesn't matter anymore. What matters is whether Congress shows up before there's nobody left willing to fight at all.

A CEO's conscience is not a governance strategy. It's a stopgap. And stopgaps have expiration dates. The guardrails Anthropic defended -- no mass surveillance of Americans, no unreliable autonomous weapons -- are worth defending regardless of who is defending them or why. But they should not exist as contractual provisions in a vendor agreement. They should exist as law.

The window is still open. Barely. There are still companies that will push back, still workers who will organize, still legal experts documenting the overreach, still senators who at least know the right words to say. But every week that passes without legislation, the precedent hardens: AI companies serve at the pleasure of the Pentagon, full stop.

The question is not whether the Pentagon wants AI without a conscience. We know the answer to that now.

The question is whether the American people are going to let Congress sit this one out.


Writer's Notes

  1. Deviation from spine on the "mundane procurement dispute" counter. The spine suggested addressing this secondary counterargument briefly. I folded its dismissal into the legal overreach beat (the supply chain risk designation is the evidence that this was not routine) rather than giving it a standalone moment. This kept the counterargument section focused on the strongest opposing view -- civilian control -- which earns more from extended engagement.

  2. Personal anchoring. Used the military service parenthetical ("and, thanks to Donald Trump, can't legally serve again") from the previous episode -- it's a corpus signature and does double duty here as sardonic aside and credibility anchor for the civilian-control argument. Kept it to one line as the spine recommended.

  3. Iran strike sourcing. Hedged once, clearly, early in the beat, per the spine and steelman guidance. Then proceeded with confidence. Did not re-hedge on subsequent references.

  4. OpenAI contract precision. Took care not to say "identical." Used "substantively the same principles" and immediately surfaced the public-data loophole. This distinction is central to the argument, not a footnote.

  5. Anthropic complications volunteered proactively. The RSP change, military cooperation scope, and selective nature of the red lines are raised in the counterargument section before any critic could force them. This should read as intellectual honesty, not concession.

  6. The close echoes but does not repeat the previous episode's ending. The original closed with "And bugs get exploited." This one closes with the direct challenge to the audience about Congress. Different emotional register -- less sardonic, more demanding.

  7. Energy map tracking. Cold open is controlled intensity (facts laid out like cards). Context is compressed and calm. Thesis hits with direct heat. Iran strikes are dramatic but precise. OpenAI beat builds analytically with the Altman quotes as ironic punctuation. Legal beat is institutional. Counterargument starts with genuine engagement, builds through Anthropic complications, then the pivot lands hard. Bigger picture escalates from reflective to urgent. Close is quiet conviction -- hand on the table.

  8. Fact-check flags. The "zero laws" claim is accurate in the strict sense (no standalone statute governs military AI) but Congress has nibbled at the edges through NDAA provisions. Host should be aware of this in case of pushback. The Iran strike reporting is credible but unconfirmed -- the hedge is accurate and should stand. Altman quotes are verified across CNBC and TechCrunch.

  9. Word count. Came in at approximately 2,050 words, which is within the 1,800-2,200 target range and should deliver around 13-14 minutes at speaking pace.