For the Republic
Command Center / 🎙 Episode / 2026-03-02 · ~14 minutes (estimated from ~2,080-word final script)

The Pentagon Banned an AI — Then Used It to Bomb Iran

Draft Complete -- Pending Host Review

Steelman

3/10

Steelman Analysis

Our Thesis (Restated)

The Anthropic blacklisting was a loyalty test, not a policy dispute -- proven by the Pentagon using Claude in combat mere hours after banning it and accepting identical guardrails from OpenAI -- and Congress's refusal to write military AI rules is the real scandal.

Primary Counterargument

Private companies should not set the ethical boundaries of military operations. That is a democratic function -- and Anthropic's position, however sympathetic, is structurally dangerous.

This is the argument that should genuinely trouble us, because it comes from the same democratic principles we invoke when we criticize Congress for inaction. The Lawfare analysis by Alan Rozenshtein puts it plainly: "We wouldn't want Lockheed Martin selling the military an F-35 and then telling the Pentagon which missions it could fly." The analogy is imperfect -- an AI usage policy is not the same as post-sale operational control of a weapons platform -- but the principle underneath it is serious. In a democracy, the civilian government decides how military tools are used. The military answers to elected officials, not to the terms of service of its vendors.

Anthropic is a private corporation. It is not elected. It is not accountable to voters. Its safety policies are set by its CEO, its board, and its investors -- not by any democratic process. When Anthropic says "we will not allow Claude to be used for mass surveillance," it is making a policy determination about the limits of government power. That is exactly the kind of determination that, in a healthy republic, Congress would make. Anthropic may be right on the substance -- mass surveillance and autonomous weapons are genuinely dangerous -- but the mechanism by which those limits are imposed matters. A world where the ethical boundaries of military AI are set by whichever company happens to have the best model is not democratic governance. It is corporate governance of the military by another name.

The defense establishment's deeper concern is about precedent and dependency. The United States military is now dependent on a handful of private AI companies for capabilities that are increasingly central to warfighting. If those companies can impose use restrictions as a condition of access, the military's operational freedom is subject to the ethical preferences of Silicon Valley executives -- preferences that may or may not align with national security requirements, and that can change at the discretion of a board of directors. The Pentagon's insistence on "all lawful purposes" is, in this reading, not an attempt to do illegal things. It is an attempt to ensure that the definition of "lawful" is set by the law, not by a vendor contract.

Rear Admiral Lorin Selby's observation captures why defense officials see this as existential: "For most of the post-World War II era, the U.S. government defined the frontier of advanced technology. It set the requirements, funded the foundational research, and industry executed against government-driven specifications." That relationship has inverted. The government is now the customer, not the patron. And a customer who cannot use what it buys without the seller's ongoing ethical approval is a customer with a serious sovereignty problem.

Who Makes This Argument

This is not a MAGA argument. It is a mainstream national security argument held by defense policy professionals across the political spectrum, including many who are deeply uncomfortable with Hegseth specifically. You will find it articulated by Georgetown's Center for Security and Emerging Technology, by career Pentagon procurement officials, by Lawfare contributors, and by centrist defense hawks in both parties. It is also the implicit position of the Lawfare piece that our own thesis relies on -- that piece argues Congress should set the rules, which is another way of saying Anthropic should not.

Why It Has Merit

The argument has genuine force because it correctly identifies that corporate ethics policies are not a substitute for democratic governance. Anthropic's two red lines may be wise policy. But they are Anthropic's policy, not America's policy. They can be changed by Anthropic's board at any time -- as the company demonstrated two days before the deadline when it weakened its own Responsible Scaling Policy from binding commitments to nonbinding targets. A company that weakened its own safety commitments the same week it refused to weaken the Pentagon's is not a reliable long-term guardian of military ethics. The argument also correctly notes that Anthropic's guardrails would not survive Anthropic losing the contract -- if the Pentagon simply gets unconstrained AI from another vendor, the protection evaporates. Only legislation endures.

Where It Falls Short

The argument's fatal flaw is that it presupposes the existence of functioning democratic institutions. You cannot invoke Congress's authority to set military AI rules when Congress has refused to set military AI rules. The civilian control argument is an argument for Congressional action, and in the absence of Congressional action, it becomes an argument for unchecked executive power dressed in democratic clothing. The Pentagon is not saying "Congress should decide." The Pentagon is saying "we decide, and no one gets to disagree." That is not civilian democratic control -- it is executive unilateralism.

Furthermore, the "all lawful purposes" framing is itself the problem. As the Just Security analysis notes, current surveillance law predates AI's ability to aggregate legally collected public data into comprehensive tracking systems. "Lawful" is a floor, not a ceiling. The Fourth Amendment was not written to contemplate an AI that can synthesize every American's publicly available geolocation data, browsing history, and financial records into a surveillance profile without a warrant. Anthropic's red lines are not overriding the law -- they are flagging where the law has not caught up.

And the Lockheed analogy breaks down on inspection. Lockheed sells the Pentagon a finished product. It cannot monitor or control how F-35s are used after delivery. AI is fundamentally different: it is a service, often cloud-deployed, with the vendor maintaining ongoing access and responsibility. An AI company that discovers its model being used for something it considers harmful has a continuing relationship with that use in a way that a traditional defense contractor does not. The ethical architecture of AI-as-a-service is not analogous to selling hardware.

Secondary Counterarguments

The "Loyalty Test" Framing Is Conspiratorial Thinking Applied to a Mundane Procurement Dispute

The simplest counterargument: we are reading too much into this. The Pentagon wanted a contract with fewer restrictions. Anthropic refused. The Pentagon found another vendor willing to accept its terms. This is how procurement works. The timing of the Iran strikes is explained by the six-month phase-out window -- of course Claude was still running in classified systems on the same day the ban was announced; the whole point of a phase-out is that you do not yank operational systems overnight. The OpenAI deal was in discussion before the Anthropic ban; it was announced quickly because the groundwork was already laid, not because it was retaliatory. Calling this a "loyalty test" attributes coordinated strategic malice to a bureaucracy that may simply have been impatient and heavy-handed.

This counterargument has the virtue of parsimony. But it fails to account for the supply chain risk designation itself -- a legal tool designed for adversary nations and previously applied only to companies like Huawei and Kaspersky. Using that tool against a San Francisco AI company over a contract dispute is not mundane procurement. It is, at minimum, a disproportionate escalation that requires an explanation beyond "negotiations broke down." And Hegseth's own rhetoric -- calling Anthropic "sanctimonious" and accusing it of trying to "seize veto power over the operational decisions of the United States military" -- is not the language of a routine procurement disagreement.

The OpenAI Contracts Are Not Actually Identical, and the Difference Matters

Our pitch says the Pentagon "accepted the same guardrails from OpenAI." This is mostly true but slightly overstated, and the gap matters. Anthropic's position was that AI-enabled collection of Americans' publicly available information at scale -- geolocation data, browsing history, financial records purchased from data brokers -- constitutes mass surveillance and should be explicitly prohibited. OpenAI's contract prohibits "unconstrained" collection of Americans' "private" information but does not restrict collection of publicly available information. This is not a trivial distinction. The bulk collection of public data is precisely the surveillance mechanism that civil liberties experts fear most, because it is technically legal under current law and therefore would pass any "all lawful purposes" test. OpenAI also agreed to the "all lawful purposes" framework that Anthropic rejected, while layering its safety commitments as additional contractual provisions rather than restrictions on the government's right to use the technology. The Pentagon may have accepted similar principles but a different legal structure -- one that gives the military more operational latitude.

This matters because if the contracts are meaningfully different, our "loyalty test" thesis weakens. If the Pentagon genuinely preferred OpenAI's contractual structure for reasons of operational flexibility, the Anthropic ban starts looking less like punishment for having guardrails and more like punishment for demanding a specific contractual mechanism that constrained the government's discretion. The distinction between "we agree on principles" and "we agree on enforceable terms" is exactly where procurement disputes live.

Anthropic Is Not the Hero This Story Needs

Two days before the Pentagon deadline, Anthropic quietly rewrote its Responsible Scaling Policy -- the company's flagship safety commitment -- replacing binding limits on training more capable models with "nonbinding but publicly-declared targets." TIME reported this as "dropping its flagship safety pledge." The company cited competitive pressure and an "anti-regulatory political climate." Anthropic also agreed to missile defense work, intelligence analysis, and cyber operations under the Pentagon contract. Its two red lines protect Americans from surveillance and protect all humans from autonomous weapons -- but they do not protect foreign populations from AI-assisted targeting, strike planning, or battle damage assessment. The company cooperated with Palantir to embed Claude in classified networks. Dario Amodei's own statement acknowledged that Anthropic had "never raised objections to particular operations."

This is not a pacifist organization taking a principled stand against military AI. It is a company that agreed to nearly everything the Pentagon asked, drew two specific lines for a mix of ethical and commercial reasons, and weakened its own broader safety commitments the same week. Framing Anthropic as a conscience-driven David standing up to a Pentagon Goliath risks hero-washing a corporation whose ethical commitments are, on close inspection, selective and recently downgraded. The episode risks becoming an advertisement for Anthropic's brand positioning.

Congress's Silence May Reflect Rational Institutional Behavior, Not Abdication

Our thesis frames Congressional inaction as scandalous. But there is a case that Congress is being strategically patient, not negligent. Military AI is evolving so rapidly that legislation written today might be obsolete or counterproductive within a year. The last thing Congress wants is to codify restrictions that lock in 2026 assumptions about AI capability into law that takes years to amend. Several members -- Warren, Markey, Cantwell -- have made public statements and introduced discussion drafts. The Senate Armed Services Committee has signaled interest in hearings. The FY2026 NDAA included AI-related provisions, even if they fell short of comprehensive regulation. It is possible that Congress is waiting to see how the courts resolve Anthropic's legal challenge before legislating, which would be a reasonable sequencing decision. Legislating in the middle of active litigation risks muddying the legal landscape.

This argument is real but ultimately insufficient. Congress has had years -- not weeks -- to address military AI governance. The technology is not so new that basic principles (human-in-the-loop requirements for lethal force, warrants for AI-assisted domestic surveillance) cannot be legislated now. Waiting for perfect information is a luxury that the speed of executive action does not afford.

Our Weak Points

1. The Iran strike sourcing is not rock-solid. The reports of Claude's use in the Iran strikes cite unnamed sources. Multiple outlets corroborate the story, but there is no official confirmation from the Pentagon or Anthropic. If this reporting turns out to be wrong or exaggerated, the most dramatic element of our narrative collapses. We are building our timeline around a claim that has not been officially verified.

2. We are conflating the ban announcement with the ban taking effect. The supply chain risk designation includes a six-month phase-out period. Claude running in classified systems on the same evening the ban was announced is not contradictory -- it is by design. The Pentagon did not claim Claude would stop working immediately. Our framing ("banned Friday afternoon, used Friday evening") is rhetorically powerful but slightly misleading about what the ban actually does on day one. A more precise framing would note that the ban acknowledges the military's dependence on Claude while simultaneously declaring it a national security threat -- which is still absurd, but for different reasons than "they used it after banning it."

3. The OpenAI contract comparison requires more precision than we are giving it. As noted above, the contracts are similar in principle but may differ in legal structure, enforcement mechanism, and the specific treatment of public information collection. If we overstate the identity of the two contracts, informed critics will correctly point out the differences and use that to discredit our broader argument.

4. Anthropic's own behavior undercuts the "conscience" framing. The Responsible Scaling Policy change, the willingness to support offensive military operations, and the Palantir partnership all complicate the narrative of a company standing on principle. We need to be careful not to argue "Anthropic was punished for having a conscience" when the more accurate claim is "Anthropic was punished for retaining two specific contractual restrictions on a product it was otherwise happy to supply for warfighting."

5. Our "loyalty test" thesis requires inferring motive. We are asserting that the Pentagon's purpose was to punish Anthropic for the act of negotiating rather than the substance of the guardrails. This is a plausible inference from the evidence -- especially the OpenAI comparison -- but it is an inference, not a proven fact. The Pentagon would say the substance did matter and that OpenAI's contractual approach was genuinely preferable. We cannot prove they are lying.

Recommended Handling

Address the civilian control argument directly and at length. This is the counterargument that will resonate with our audience -- center-left people who believe in democratic governance. Do not dismiss it. State it with full force, then pivot: "The civilian control argument is correct in principle. And that is exactly why Congress's silence is unforgivable. You cannot invoke democratic authority to override a company's ethics when the democratic institution responsible for writing the rules has refused to write them. In the absence of legislation, the only guardrails on military AI are whatever individual companies are willing to insist on. That is a terrible system. It is also the only system we have."

Be precise about the OpenAI comparison. Do not say "identical contracts." Say "the Pentagon accepted the same principles from OpenAI that it punished Anthropic for asserting -- with the critical difference that OpenAI also agreed to the 'all lawful purposes' framework and defined surveillance more narrowly." Then note that this distinction is exactly the loophole through which mass surveillance would occur.

Acknowledge the Iran strike sourcing limitations. "Multiple credible outlets report, citing sources familiar with the operations, that Claude was used for target identification and battle simulation during the Iran strikes. The Pentagon has not confirmed this. If true -- and we believe the reporting is credible -- it means..." This is stronger than asserting it as established fact.

Do not hero-wash Anthropic. Proactively raise the RSP change and the scope of Anthropic's military cooperation. Frame the argument as being about the guardrails, not the company: "You do not have to think Anthropic is a saint to think that punishing a company for retaining two specific ethical restrictions -- restrictions the Pentagon accepted from its competitor -- is a dangerous precedent."

Raise the "mundane procurement dispute" framing early and dismantle it. Acknowledge it has surface plausibility, then point to the supply chain risk designation as the evidence that this was not routine. A procurement dispute ends with a cancelled contract. This ended with a legal designation previously reserved for foreign adversaries. That escalation is the evidence of intent.

On Congressional silence, acknowledge the complexity but hold the line. Yes, legislating fast-moving technology is hard. But human-in-the-loop requirements for lethal force and warrant requirements for AI-assisted domestic surveillance are not technologically contingent principles -- they are constitutional ones that can be codified now. Congress is not waiting for better information. Congress is avoiding a politically costly fight.