For the Republic
Command Center / 🎙 Episode / 2026-03-02 · ~14 minutes (estimated from the ~2,080-word final script)

The Pentagon Banned an AI — Then Used It to Bomb Iran

Draft Complete — Pending Host Review

Fact Check

7/10

Fact Check Report

Summary

The draft is broadly factually solid. The core narrative -- Anthropic banned, Claude used in Iran strikes that same evening, OpenAI deal announced hours later -- is well-supported by multiple credible outlets. The legal analysis, Altman quotes, and congressional reactions all check out against source material and independent verification. However, there are several claims that need correction or hedging, most notably the timeline of the OpenAI announcement (it was Friday evening, not Saturday morning), the tech worker letter numbers (the script uses source-material figures that are lower than later-reported totals), and the framing of the $200 million figure as applying to OpenAI's deal rather than Anthropic's terminated contract. The "Huawei" FASCSA comparison also needs precision -- Huawei was sanctioned under different legal authorities, not FASCSA specifically.

  • RED flags: 2
  • YELLOW flags: 5
  • BLUE flags: 4

Findings

RED Flags

"Saturday morning. OpenAI announces a $200 million Pentagon deal."

  • Location in script: Cold open, paragraph 3 (line 14)
  • Issue: Two problems. First, the OpenAI deal was announced late Friday evening (February 27), not Saturday morning. Multiple outlets (CNBC, CNN, Fortune, Fox Business, NPR) confirm Altman announced the deal on Friday, hours after the Anthropic ban -- the same day. Fortune's dateline is February 28 for a follow-up piece, and OpenAI's blog post with additional details came Saturday, but the announcement itself was Friday. Second, the $200 million figure is sourced to Anthropic's terminated contract, not OpenAI's new deal. The OpenAI deal value has not been independently confirmed as $200 million. Some outlets describe Pentagon AI contracts as being "valued at up to $200 million each," referring to the general framework, but no specific published figure for OpenAI's classified contract has been confirmed.
  • Evidence: CNBC headline: "OpenAI strikes deal with Pentagon, hours after rival Anthropic was blacklisted by Trump" (dated Feb 27). CNN: "OpenAI strikes deal with Pentagon hours after Trump admin bans Anthropic" (Feb 27). Fox Business: "Sam Altman announced Friday that his company reached an agreement with the Department of War." TechCrunch's detailed piece is dated Feb 28 but covers additional details, not the initial announcement. The source material file (03-openai-pentagon-deal.md) itself lists the date range as "February 27-March 1, 2026" and says the deal was announced "hours after" the ban.
  • Recommended fix: Change "Saturday morning. OpenAI announces a $200 million Pentagon deal" to "Friday evening. OpenAI announces a Pentagon deal" or "Hours later. OpenAI announces a Pentagon deal." This actually makes the timeline MORE damning, not less -- the deal was announced the same day, compressing the contradiction into a single Friday. Drop the $200 million attribution to the OpenAI deal, or reframe it: "The company that said yes -- with an asterisk -- gets the contract Anthropic just lost" or cite the $200 million as the value of what Anthropic forfeited. Also fix the "Seventy-two hours" framing (line 18) since all three events happened within roughly 12 hours, not 72.

"Two hundred Google employees and fifty OpenAI employees signed open letters opposing unrestricted military AI this week"

  • Location in script: Bigger picture section (line 80)
  • Issue: The script cites "Two hundred Google employees and fifty OpenAI employees." These numbers appear drawn from the source material file (06-tech-worker-revolt-congressional-reaction.md), which lists "Nearly 50 OpenAI employees and 175 Google employees" for the joint letter and "Over 100 Google AI employees" for a separate Google-specific letter. However, independent reporting shows the numbers grew significantly. TechCrunch and Axios reported totals of approximately 236 Google and 65 OpenAI employees on the joint letter. The Week reported "more than 200 Google employees" for a separate anti-military-ties letter. Bloomberg reported "more than 300 Google employees and over 60 OpenAI employees." The script's numbers are lower than what most outlets reported by the time of publication.
  • Evidence: TechCrunch (Feb 27): "Employees at Google and OpenAI support Anthropic's Pentagon stand in open letter." Axios (Feb 27): "more than 160 people from Google and more than 40 people from OpenAI" as of 5:30 PM, with numbers growing. Bloomberg reported approximately 300 Google and 60+ OpenAI. The Week: "more than 200 Google employees" on a separate letter.
  • Recommended fix: Use the higher, later-reported numbers: "More than 230 Google employees and over 60 OpenAI employees signed a joint open letter" or, more safely, "Hundreds of Google and OpenAI employees signed open letters." The script also conflates two separate letters into one statement. There was (1) a joint Google-OpenAI letter and (2) a separate Google-only letter about avoiding military ties. Simplify to "Hundreds of employees at Google and OpenAI signed open letters opposing unrestricted military AI" to avoid getting pinned on specific numbers that varied across reporting windows.

YELLOW Flags

"the same two guardrails that got Anthropic blacklisted" (re: OpenAI's contract)

  • Location in script: Cold open, paragraph 3 (line 14)
  • Issue: This framing slightly overstates the similarity. OpenAI's contract includes three red lines (mass surveillance, autonomous weapons, social credit systems), not two. More importantly, the script itself later correctly identifies a meaningful structural difference: OpenAI accepted the "all lawful purposes" framework and layered safety provisions on top, while Anthropic rejected the framework itself. The cold open compresses this into "the same two guardrails" which, while directionally accurate, is imprecise in a way that could be challenged.
  • Context: The script does excellent work later (lines 40-45) explaining the distinction. The issue is only in the cold open where compression creates a slightly misleading equivalence.
  • Recommended fix: Change to "a ban on mass surveillance and autonomous weapons -- guardrails the Pentagon just punished Anthropic for insisting on" or similar. Avoid calling them "the same" since the legal architecture differs meaningfully.

"the largest tech worker organizing on military ethics since the Project Maven walkout in 2018"

  • Location in script: Bigger picture section (line 80)
  • Issue: Two sub-issues. First, Project Maven did not produce a "walkout." It produced an internal petition (4,000+ signatures), an open letter, and about a dozen resignations. The term "walkout" implies a coordinated mass departure that did not occur. Second, calling the 2026 letters "the largest" since Maven is an editorial characterization not sourced to any outlet. The 2026 letters had ~300 signatories total; the Maven petition had 4,000+. By raw numbers, Maven was larger. The 2026 organizing is notable for being cross-company, which is arguably unprecedented, but "largest" is a stretch.
  • Recommended fix: Change "the largest tech worker organizing on military ethics since the Project Maven walkout in 2018" to "the most significant cross-company tech worker organizing on military ethics since the Project Maven protests in 2018." Replace "walkout" with "protests" or "revolt."

"Hegseth exceeded his statutory authority"

  • Location in script: Legal analysis section (line 54)
  • Issue: The script attributes this argument to Tess Bridgeman at Just Security ("As Tess Bridgeman at Just Security argues, Hegseth exceeded his statutory authority"), which is accurate and correctly framed as an argument. The concern is that the rest of the legal analysis paragraph then proceeds as though these claims are established facts rather than one expert's legal opinion. The script does include a disclaimer afterward (line 56: "Now. This is expert legal opinion, not a court ruling"), which is good, but placing that caveat after several paragraphs of assertively presented legal claims could leave listeners with a stronger impression of settled law than is warranted.
  • Recommended fix: Consider moving the caveat ("This is expert legal opinion") up slightly, or adding a brief qualifier at the start of the legal analysis: "Legal experts say the whole thing may have been illegal in the first place" rather than the current "was illegal in the first place" (line 52).

"These FASCSA orders have only ever previously targeted companies with demonstrated foreign adversary ties -- Huawei, Kaspersky, Acronis"

  • Location in script: Legal analysis section (line 54)
  • Issue: Huawei was not targeted under FASCSA specifically. Huawei was designated under FCC communications supply chain rules (2020) and sanctioned via the Entity List and NDAA Section 889. The first FASCSA exclusion order was against Acronis AG in September 2025. Kaspersky was banned via DHS Binding Operational Directive in 2017 and later via Commerce Department action in 2024. The script groups all three under "FASCSA orders," which is inaccurate for Huawei and Kaspersky. The Lawfare source material (05-legal-analysis.md) mentions Acronis specifically and says FASCSA orders "have only ever targeted companies with foreign adversary ties" with Acronis as the example. Adding Huawei and Kaspersky to the FASCSA list appears to be the script's extrapolation, not something stated in the sources.
  • Context: The broader point -- that supply chain risk designations have historically targeted foreign-linked entities -- is correct across all the relevant legal authorities. The error is in lumping different legal mechanisms together under "FASCSA orders."
  • Recommended fix: Change to "Supply chain risk designations and similar bans have only ever previously targeted companies with demonstrated foreign adversary ties -- Huawei, Kaspersky, Acronis" or simply "That label has been reserved for Huawei. Kaspersky. Foreign adversaries" (which the cold open already does correctly). Alternatively, be specific: "The first and only previous FASCSA exclusion order targeted Acronis, a Swiss company with reported ties to Russian intelligence."

Warren called this "extortion" -- attribution precision

  • Location in script: Bigger picture section (line 82)
  • Issue: The script says "Warren and Markey calling this 'extortion' and 'reckless.'" This compresses two separate statements from two different senators into a single characterization. Warren used "extort" in a joint statement with Senator Andy Kim about the DPA threat, not specifically about the supply chain risk designation. Markey used "reckless" in a separate statement about the supply chain risk designation. The script's compression is not wrong per se, but it implies both senators used both words about the same action, which they did not.
  • Recommended fix: Fine as is for a spoken-word script -- the compression is within normal editorial range. But if challenged, the host should know: Warren's "extort" was about the DPA threat (joint statement with Kim), while Markey's "reckless" was about the supply chain designation (separate statement). These were related but distinct government actions.

BLUE -- Verification Needed

"Deployed Claude in classified networks before anyone else"

  • Location in script: Context section (line 22)
  • Note: This claim originates from Anthropic's own statement (Amodei: "We were the first frontier AI company to deploy our models in the US government's classified networks"). CNN and other outlets have independently repeated this claim. The November 2024 Palantir-AWS partnership is the deployment in question. No contradicting evidence found, but "first" claims in classified environments are inherently hard to verify externally. The script's source material hedges this well ("By all accounts" was added in the previous episode). The current draft does not include that hedge -- consider adding "By all accounts" or "Anthropic says it" before the claim.

"Cut off CCP-linked firms and forfeited hundreds of millions in revenue"

  • Location in script: Context section (line 22)
  • Note: This comes directly from Amodei's own statement ("We chose to forgo several hundred million dollars in revenue"). Multiple outlets repeated this claim. The revenue figure is Anthropic's self-reported number and has not been independently audited. It is plausible given Anthropic's scale, but "hundreds of millions" is Anthropic's characterization. The host should be aware this is a company-sourced figure.

"Congress has still written zero laws governing military AI"

  • Location in script: Close section (line 86)
  • Note: This is accurate in the strict sense that no standalone statute governs military AI deployment. However, Congress has legislated at the margins through NDAA provisions: the FY2025 NDAA includes Section 1061 requiring Pentagon reporting on autonomous weapons approvals and waivers of DoD Directive 3000.09; various NDAAs include AI-related reporting requirements, pilot programs, and governance structures; the House has passed the Fourth Amendment Is Not For Sale Act (not yet law). The "zero laws" claim is defensible but could be challenged by someone citing these provisions. The writer's notes (line 120) already flag this. Host should be prepared to say "zero standalone statutes" or "zero comprehensive laws" if pressed.

"OpenAI's own blog post calls the deal 'more guardrails than any previous agreement for classified AI deployments, including Anthropic's'"

  • Location in script: OpenAI beat (line 40)
  • Note: Largely verified. OpenAI's blog post at openai.com/index/our-agreement-with-the-department-of-war/ is the cited source, and multiple outlets attribute the full quote -- including the "including Anthropic's" clause -- to that post. Before air, the host should confirm the exact phrasing appears in the blog post itself rather than only in news outlets' paraphrase of it.

Sources Consulted

Primary source material (in episode directory)

  • 01-anthropic-official-statement.md (Anthropic press release, Feb 27, 2026)
  • 02-trump-blacklists-anthropic.md (CNBC, CNN, ABC News, Axios compilations)
  • 03-openai-pentagon-deal.md (CNBC, NPR, CNN, The Hill, OpenAI blog, TechCrunch)
  • 04-claude-used-in-iran-strikes.md (Cybersecurity News, WION, Cybernews, Algemeiner, Seoul Economic Daily)
  • 05-legal-analysis.md (Just Security, Lawfare)
  • 06-tech-worker-revolt-congressional-reaction.md (Breitbart, TechCrunch, The Week, Axios, The Hill)
  • 07-anthropic-safety-policy-changes.md (CNN)
  • 08-original-amodei-statement.md (Anthropic CEO blog post)
  • 09-previous-episode-script.md (prior episode script)

Independent web verification


Clean Claims

The following major factual claims in the script checked out and can be stated with confidence:

  1. Anthropic was designated a supply chain risk by Secretary Hegseth on February 27, 2026. Confirmed across all major outlets.

  2. The designation came at or just after the 5 PM deadline. Confirmed; multiple sources cite a 5:01 PM timestamp.

  3. Claude was used in Iran strikes the same evening as the ban. Confirmed by multiple outlets citing sources familiar with the operations. The script correctly hedges this ("The Pentagon has not officially confirmed this").

  4. Claude was used for target identification, battle simulation, and operational planning. Matches reporting from WION, Cybersecurity News, Seoul Economic Daily, and others.

  5. Six-month phase-out period. Confirmed by Fortune, Defense One, and multiple outlets.

  6. Three-to-six-month replacement estimate from Defense One. Confirmed in substance. Defense One reported replacement could take "three months or even longer," with some estimates running 12+ months, so the upper end of the script's range is soft.

  7. OpenAI's three red lines (mass surveillance, autonomous weapons, social credit systems). Confirmed via OpenAI's blog post and multiple outlets.

  8. The public data loophole in OpenAI's contract. Confirmed. OpenAI prohibits "unconstrained" collection of Americans' "private" information but does not restrict publicly available data. Source material and independent reporting both identify this gap.

  9. Sam Altman quotes: "The optics don't look good," "Definitely rushed," "an extremely scary precedent." All confirmed from Altman's AMA on X, reported by Slashdot, The Register, TechCrunch, Fox Business, and others. Full quote: "It was definitely rushed, and the optics don't look good." And: "Yes; I think it is an extremely scary precedent and I wish they handled it a different way."

  10. OpenAI accepted the "all lawful purposes" framework. Confirmed via Axios, The Decoder, and multiple outlets. OpenAI agreed to "all lawful purposes" and layered its safety provisions as additional commitments.

  11. Tess Bridgeman at Just Security argues Hegseth exceeded statutory authority. Confirmed. Her article is titled "What Hegseth's 'Supply Chain Risk' Designation of Anthropic Does and Doesn't Mean."

  12. 10 USC 3252 is a procurement tool, not a sanctions weapon. Confirmed by Bridgeman's analysis and the statute's text at Cornell LII.

  13. The statute defines "supply chain risk" as involving an adversary attempting to sabotage or subvert systems. Confirmed by the statute's text and Bridgeman's analysis.

  14. Alan Rozenshtein at Lawfare and the Lockheed/F-35 analogy. Confirmed. Rozenshtein wrote "We wouldn't want Lockheed Martin selling the military an F-35 and then telling the Pentagon which missions it could fly."

  15. Anthropic RSP changed February 25, two days before deadline. Confirmed by TIME, CNN, Anthropic's own blog post, and multiple outlets.

  16. RSP change replaced binding limits with nonbinding targets. Confirmed. TIME: "dropping the central pledge of its flagship safety policy." The new framework uses "publicly announced targets" rather than binding commitments.

  17. Markey called designation "reckless and unprecedented." Confirmed via his official Senate press release.

  18. Warren accused administration of trying to "extort" Anthropic. Confirmed via joint statement with Sen. Andy Kim (not solo Warren statement -- but the attribution is accurate).

  19. Anthropic's two red lines: mass domestic surveillance and fully autonomous weapons. Confirmed across all sources, including Anthropic's own statements.

  20. Anthropic was the first frontier AI company deployed in classified networks. Self-reported by Anthropic, independently repeated by CNN and others. No contradicting evidence found.

  21. Anthropic forfeited hundreds of millions cutting off CCP-linked firms. Self-reported by Amodei; independently reported by multiple outlets. Not independently audited.