For the Republic
Command Center / 🎙 Episode / 2026-02-27 · ~13 minutes (estimated from ~1,950 word count)

The Department of War Wants an AI Without a Conscience

Draft Complete — Pending Host Review



Final Script: The Department of War Wants an AI Without a Conscience

Metadata

  • Duration: ~13 minutes (estimated)
  • Word count: ~1,950 words
  • Date: 2026-02-27
  • Draft version: Final

Right now -- at this very moment -- the Pentagon is preparing two simultaneous actions against the same company. The first: designating Anthropic, the company that makes Claude, a "supply chain risk." That label is reserved for foreign adversaries. It means: this company is a danger to national security. The second: invoking the Defense Production Act to force Anthropic to keep providing its AI to the military. A move that only makes sense if the technology is essential to national security.

So. Which is it? Is Claude a threat or a necessity?

The Pentagon says both. In the same breath. On the same day. And nobody in charge seems to notice -- or care -- that this makes no sense. It would be funny if the stakes weren't terrifying.

⬥ ⬥ ⬥
Okay. Here's what's actually going on. Anthropic -- the AI company, the one that builds Claude -- has been working with the Pentagon since late 2024. Classified networks. Intelligence analysis. Cyber defense. Operational planning. By all accounts, they were the *first* frontier AI company deployed in classified environments. This is not a pacifist company that got cold feet about the military. In December, Anthropic confirmed its willingness to let Claude be used for missile defense and cyber defense applications. They've cut off CCP-linked firms and forfeited hundreds of millions in revenue to do it.

But the Pentagon pushed further. What they want is "all lawful use" language -- meaning no restrictions whatsoever beyond what is technically legal. Anthropic drew two red lines: no mass surveillance of American citizens, and no fully autonomous weapons without adequate reliability and oversight safeguards. That's it. Two things. Out of the entire range of military applications, Anthropic said "not these two. Not yet."

The Pentagon's response was not negotiation. It was a Friday-afternoon deadline -- today, 5:01 PM. A threat to invoke a Korean War-era coercion statute. A threat to brand an American company with a label normally reserved for Russia and China. And a senior Pentagon official -- Emil Michael, the Undersecretary for Research and Engineering -- going on X to call Dario Amodei "a liar" with "a God complex."

That's the tone. For a contract dispute.

⬥ ⬥ ⬥
This isn't about one company and one contract. This is a story about who gets to decide the ethical boundaries of military AI in America -- and right now, the answer is: nobody who was elected. Pete Hegseth is trying to bully a private company into removing safety restrictions using Cold War-era coercion tools. Congress has written *zero* laws governing military AI. And the real scandal is not that Anthropic said no. It's that the existing legal framework is so inadequate, so outdated, and so easily circumvented by a determined executive branch that a CEO's conscience is the only thing standing in the gap.
⬥ ⬥ ⬥
The Defense Production Act -- most people know the name but not what it actually does. The DPA was signed in 1950. It was designed to compel steel mills and tank factories to produce war matériel during wartime. It's been used for medical supplies during COVID. For defense manufacturing capacity. For physical infrastructure. What it has *never* been used for -- in its entire 75-year history -- is to force a software company to remove ethical restrictions from its product.

This is not the government ordering a factory to make more of something. This is the government ordering a company to make its product less safe. Legal experts have called this "without precedent" in the DPA's entire history -- and the fact that the Pentagon is reaching for it anyway tells you something important about the moment we're in: when there are no rules, the government grabs whatever tools are lying around, regardless of whether they fit.

⬥ ⬥ ⬥
But the tool isn't even the real story.

"All lawful use." Sit with that phrase for a second. Because "lawful" and "ethical" are not the same thing. And the gap between them is exactly where Anthropic's two guardrails sit.

Mass domestic surveillance using commercially purchased data? Arguably lawful right now. The Intelligence Community has itself acknowledged that current practices of buying Americans' data -- your movements, your web browsing, your associations -- raise serious constitutional concerns, precisely because the law hasn't caught up with what AI makes possible. Under current law, the government can purchase detailed records of your life from data brokers without ever obtaining a warrant. Powerful AI makes it possible to assemble all of that scattered, individually innocuous data into a comprehensive picture of any person's life -- automatically and at massive scale.

"All lawful use" doesn't mean "all ethical use." It doesn't mean "all constitutional-in-spirit use." It means: anything we can get away with.

And here's the tell. If the existing legal framework were truly sufficient to prevent misuse -- if "all lawful use" really just meant "stuff we were going to do anyway" -- the Pentagon would have no problem with Anthropic's contractual guardrails. They'd be redundant. Harmless. Just extra words in a contract.

But the Pentagon is willing to threaten a supply chain risk designation and invoke the DPA to remove those "redundant" words. Which tells you the restrictions are not, in fact, redundant. They want the option to do things those guardrails prevent.

And then there's this: they're already shopping for replacements. As the Washington Post reported, the Pentagon has accelerated conversations with OpenAI, Google, and Elon Musk's xAI about moving their models into classified systems. Grok -- xAI's model -- recently signed a contract for classified settings, though it's "not viewed as being as advanced as Claude." They would rather use inferior technology than accept two safety guardrails. That tells you this fight is about establishing a principle of total compliance, not about battlefield capability.

⬥ ⬥ ⬥
Now. Here's where this gets really ugly.

Congress has written zero statutes governing autonomous weapons. Zero statutes governing AI-enabled domestic surveillance. Zero. The most powerful military on Earth is integrating AI into weapons systems, surveillance infrastructure, and battlefield decision-making, and Congress has written zero laws about any of it. The policy document people point to -- DoD Directive 3000.09, the one that supposedly requires "appropriate levels of human judgment" for autonomous weapons -- is not a law. It's a policy that the Secretary of Defense can change at will. And this Secretary has shown considerable eagerness to rewrite DoD policies unilaterally.

So Anthropic is drawing lines that elected officials refuse to draw. One legal expert put it plainly: the question of what values to embed in military AI is "too important to be resolved by a Cold War-era production statute."

And yet here we are. Resolving it with exactly that.

⬥ ⬥ ⬥
But I need to be honest about something here, because there's a version of this that should make all of us uncomfortable -- including me.

The principle of civilian control of the military is real, and it matters. As someone who served (and, thanks to Donald Trump, can't legally serve again), civilian control isn't abstract to me. Elected officials and their appointed leadership decide how to defend the country. Not CEOs. We don't let Lockheed Martin decide which countries receive F-35s. We don't let Boeing stipulate that its missiles can't be used in certain theaters. Defense contractors build to spec. The government sets the spec. Democratic accountability runs upward through the chain of command to the Secretary of Defense, the President, and ultimately Congress.

When Dario Amodei draws red lines on what the Pentagon can do with a contracted tool, he is -- from this perspective -- arrogating to himself a decision that belongs to the American people through their government. And here's the uncomfortable part: if we accept the principle that tech companies can veto military applications, we've effectively privatized a core function of democratic governance. Today it's Anthropic drawing a line we happen to agree with. Tomorrow it could be a company drawing a line we don't.

⬥ ⬥ ⬥
That argument has real force. I am not going to pretend otherwise.

But it collapses on the specific facts of this case. And the reason is straightforward: you cannot invoke democratic governance to overrule a private company's ethics when democratic institutions have refused to govern. Congress created this vacuum. Anthropic is filling it. The correct response is not to force Anthropic to stand down. It's to force Congress to stand up.

And look -- in the interest of being straight with you -- Anthropic weakened its own internal safety commitments this same week. They overhauled their Responsible Scaling Policy, removing the hard commitment to pause model training if safety measures proved inadequate. That's a legitimate reason for skepticism about how deep their principles run. But the substance of the two guardrails they're defending against the Pentagon -- no mass surveillance of Americans, no unreliable autonomous weapons -- stands on its own merits regardless of Anthropic's broader safety record. The guardrails are worth defending even if the company defending them has mixed motives. Corporate motives are always mixed.

⬥ ⬥ ⬥
This fight is going to end. Anthropic will cave or they won't. But what it's exposed isn't going away.

Who writes the rules for military AI? Because right now, the answer is: nobody. Three players are circling one power vacuum. The Pentagon wants unilateral authority with no restrictions. Tech companies are making ad hoc ethical decisions based on their own values and brand positioning. Congress is doing nothing. This is not sustainable.

And here's the part that makes me genuinely uncomfortable. Anthropic's protections cover American citizens. They don't cover foreign populations subject to AI-assisted targeting. Anthropic agreed to missile defense. They agreed to intelligence analysis. They agreed to cyber operations. The two red lines they drew protect their domestic brand and their domestic legal exposure. This is a company drawing lines that protect Americans, not a company solving the ethics of military AI writ large.

That's another reason -- maybe the most important reason -- why we need actual legislation, not corporate goodwill. Because a CEO's conscience, however well-calibrated it might be today, is not a governance strategy. It's a stopgap. And stopgaps have expiration dates.

⬥ ⬥ ⬥
Anthropic said no today. They might be the last company that does.

OpenAI is already at the table. Google is at the table. Grok is at the table. The market incentive is overwhelmingly toward compliance -- say yes to everything, take the contract, let someone else worry about the ethics. When the Pentagon comes knocking with hundreds of millions of dollars and the implied threat of regulatory retaliation, the rational business decision is to salute and ask what they need.

So the question is not whether Anthropic wins this fight. The question is whether Congress shows up before there's nobody left willing to fight it.

Because the window in which a private company's conscience is the only thing standing between the American public and unchecked military AI -- that window is not a feature of the system. It's a bug.

And bugs get exploited.


Revision Log

Fact-Check Corrections

  1. Fixed Lawfare misattribution (RED FLAG). The phrase "without precedent under the history of the DPA" was incorrectly attributed to Lawfare. It comes from Joel Dodge at the Vanderbilt Policy Accelerator via AP reporting. Replaced "As legal experts at Lawfare noted" with "Legal experts have called this 'without precedent'" -- integrating the attribution naturally without naming Lawfare, which also resolves the voice mismatch the editor flagged with the original attribution style.

  2. Corrected autonomous weapons guardrail description (YELLOW). Changed "no fully autonomous weapons without adequate reliability testing" to "no fully autonomous weapons without adequate reliability and oversight safeguards" to capture both prongs of Anthropic's stated position (reliability AND governance/oversight).

  3. Tightened December missile defense framing (YELLOW). Changed "they agreed to let Claude be used for" to "Anthropic confirmed its willingness to let Claude be used for" -- avoiding the implication of a new concession when Anthropic says this was always their position.

  4. Hedged "first frontier AI company" claim (YELLOW). Added "By all accounts" before the claim, since this originates from Anthropic's self-reporting (though CNN independently reports it).

  5. Specified social media platform for Emil Michael (YELLOW). Changed "going on social media" to "going on X" for precision, since the post was on X specifically.

  6. Corrected RSP characterization (VERIFICATION NOTE). Changed "dropped their Responsible Scaling Policy pledge" to "overhauled their Responsible Scaling Policy, removing the hard commitment to pause model training" -- more precise than "dropped," since Anthropic released RSP v3.0 rather than eliminating the policy entirely. Changed "dropped" to "weakened" in the intro framing.

  7. Tightened timeline language (YELLOW). Changed "for over a year" to "since late 2024" to avoid ambiguity between the Palantir partnership timeline and the formal July 2025 contract.

Structural Changes

  1. Sharpened the DPA-to-"All Lawful Use" transition. Replaced the two-sentence transition ("But here's what makes this even stranger. It's not just that the tool is wrong for the job. It's that the Pentagon is telling on itself about what it actually wants.") with the shorter, punchier "But the tool isn't even the real story." This creates a cleaner gear shift between the two sections and breaks the energy plateau the editor identified in the middle of the episode.

  2. Reworked the governance vacuum beat as emotional peak. Added the escalating construction -- "Zero statutes... Zero statutes... Zero." -- capped by "The most powerful military on Earth is integrating AI into weapons systems, surveillance infrastructure, and battlefield decision-making, and Congress has written zero laws about any of it," per the editor's specific suggestion. This section now hits with more heat and frustration rather than reading as another analytical point.

  3. Replaced "Zoom out with me" transition. Changed to "This fight is going to end. Anthropic will cave or they won't. But what it's exposed isn't going away." followed by the direct question "Who writes the rules for military AI?" -- letting the zoom-out happen organically rather than announcing it.

  4. Removed "That distinction matters" standalone sentence before the DPA parallel construction, letting the contrast land on its own (per editor's note referencing "Blue-Skied Dystopia" and "The Medium Place" patterns).

  5. Eliminated duplicate "Sit with that" construction. Removed the first instance from the DPA section and kept the more effective one in the "All Lawful Use" section.

Voice Adjustments

  1. Added parenthetical aside in counterargument section. "(and, thanks to Donald Trump, can't legally serve again)" -- this is drawn directly from Rebecca's corpus and serves double duty: it adds the personal military anchoring the editor and spine requested, AND it introduces the sardonic parenthetical aside the editor identified as entirely missing from the draft.

  2. Added moment of dry humor in cold open. Added "It would be funny if the stakes weren't terrifying" after the contradiction beat -- giving the cold open the "wry incredulity" the spine called for and the editor flagged as missing.

  3. Fixed "Here is what is actually happening, quickly" to "Okay. Here's what's actually going on." More conversational, more Rebecca. The "Okay." is a register-shift marker she uses.

  4. Fixed "That is the tone the Department of War has chosen for a contract dispute" to "That's the tone. For a contract dispute." Fragment does the work. More direct, per editor's suggestion.

  5. Fixed "This is not really a story about" to "This isn't about." Eliminated op-ed throat-clearing.

  6. Fixed "Let's talk about the Defense Production Act" to em-dash pivot. "The Defense Production Act -- most people know the name but not what it actually does." More Rebecca.

  7. Fixed "Now. The deeper problem." to "Now. Here's where this gets really ugly." More emotional charge, per editor's note.

  8. Fixed "I want to be honest" to "I need to be honest." "Need" is more urgent, more Rebecca per editor's note.

  9. Fixed "Zoom out with me for a second" entirely. Rebecca doesn't narrate structural moves; she just makes them.

  10. Fixed "a moral limitation here that nobody else seems to want to name" to "the part that makes me genuinely uncomfortable." Gets to it faster, avoids self-congratulatory framing.

  11. Fixed "And in the interest of being straight with you" to "And look -- in the interest of being straight with you." Added conversational entry point per editor.

  12. Added personal/military anchoring. "As someone who served" in the counterargument section, giving the civilian-control argument the lived-experience weight the spine and editor both requested. Kept to a single sentence to avoid disrupting flow, as the draft writer anticipated.

Unresolved Notes

  1. No cross-aisle citation added. The editor suggested a "Ben Shapiro, whom I typically disagree with" style move -- citing a conservative defense hawk who raises the civilian-control argument. I did not add this because the counterargument section already runs on the longer side and introducing a named figure would require additional setup. The Lockheed/Boeing analogy carries the steelman effectively. The host should consider whether she wants to add a named conservative voice here.

  2. Second parenthetical aside. The editor requested "at least two" parenthetical asides. I added one strong one (the military service parenthetical). A second natural insertion point did not present itself without feeling forced. The host may want to add one in the context section -- perhaps around the CCP revenue forfeiture or the Emil Michael post.

  3. Congress "zero statutes" nuance. The fact-check notes that Congress has nibbled at the edges through NDAA provisions (notification requirements, reporting requirements) and the House passed the Fourth Amendment Is Not For Sale Act. The "zero statutes" claim is accurate in the strict sense -- no standalone statute governs these areas -- but the host should be aware of these edge cases in case of pushback.

  4. Anthropic's claim that the RSP change was unrelated to Pentagon negotiations. The fact-check notes an Anthropic spokesperson told the WSJ the timing was coincidental. The script does not include this company claim. The host should decide whether to acknowledge it -- including it risks giving Anthropic's framing more airtime than it deserves given the suspicious timing, but omitting it could draw criticism for cherry-picking.