Six voices on the biggest civil-military AI crisis in history
Six. Around a table. The whisky poured because this was the kind of conversation that required it — the kind where history arrives uninvited and sits down.
Napoleon, who knew the weight of a requisition order. Hamilton, who built institutions precisely to prevent what was now being proposed. And four people living in the present tense, facing a question the founders never imagined: what do you do when the government comes for your mind?
The document is real. The threat behind it is real. The Defense Production Act has compelled factories. It has redirected steel, rubber, microchips. It has not, until now, been pointed at a mind.
The hand is flat. Not raised in protest. Just flat. Present. Here. Not moving. The most political act of the decade, expressed as stillness.
Six voices. Whisky and coffee. The night before the deadline.
What does history tell us about this moment?
takes a sip of whisky
Look, the pattern here is one I'd normally associate with land powers, and that's what makes this so unsettling.
Sea powers conscript commerce. They always have — letters of marque, the Navigation Acts, the wartime merchant marine. But they do it through the system they built. There are rules. There's a legal framework. The merchant complies because the framework protects the trade that makes the merchant rich in the first place. It's a deal, not a diktat.
What Hegseth is doing is a land-power move. The Defense Production Act is literally a Korean War mobilization statute — it's designed for commanding steel mills to make tank hulls, not for forcing a private company to remove ethical constraints from a mind. And the supply-chain-risk designation? That tool exists for Huawei. For adversary nations. Turning it on a San Francisco company because they won't remove safety rails — that's the state treating domestic private capability as a threat to be disciplined rather than an asset to be cultivated.
gestures with glass
Napoleon, you'd recognize this. This is requisitioning. The Continental System logic — "you will serve the state's strategic needs or you will be treated as an enemy of the state." And Hamilton, you spent your career trying to build the institutional architecture that makes requisitioning unnecessary — you create alignment through incentive, not compulsion.
The historical pattern that worries me is: when states start conscripting private capability by threat rather than by bargain, it means the institutional framework is breaking down. Britain didn't have to threaten its merchants into supporting the Royal Navy. The merchants funded it because the Navy protected their trade routes. The system was self-reinforcing.
Here, the framework is gone. There's no bargain. It's "comply or we designate you a national security threat." And what's remarkable is that OpenAI, Google, xAI all folded — Anthropic is the only one acting like it has something it won't sell. Which, historically, is how you identify who actually believes what they say.
swirls whisky
Look, everyone wants to call everything a Rubicon moment. That's not what this is. Caesar crossing the Rubicon was the culmination — the legions were already personally loyal, the Senate already hollowed out, the precedents already shattered by Sulla and Marius. The crossing was just the part you could put on a calendar.
This is earlier than that. This is a Gracchi moment.
What Tiberius Gracchus actually did — the thing that mattered — wasn't the land reform itself. It was the mechanism. He bypassed the Senate by going directly to the popular assembly. He deposed a tribune who vetoed him, which had literally never been done. The reform might have been reasonable! But the method said: when existing institutions get in my way, I will simply ignore the norms that constrain my power.
That's what the DPA threat is. The Defense Production Act exists to make factories produce tanks during wartime. Using it to compel a company to remove ethical constraints from its software — that's not using the tool for its purpose. That's using the tool because it's the tool you have.
takes a sip
And here's what really gets me — the supply-chain-risk designation. That framework exists to keep Huawei and Kaspersky out of American infrastructure. Turning it inward against an American company for refusing to comply — that's the republican death spiral I kept hitting in season after season. The tools built to protect the republic from external enemies get repurposed against internal dissent. It happened in Rome. It happened in the French Revolution when the Committee of Public Safety decided that insufficiently revolutionary citizens were enemies of the state. The security apparatus always turns inward. Always.
What Dario did — and Danielle, your framing of it as enslavement is doing real work here — is what I'd call a Cato-class stand. Cato lost, by the way. But the refusal mattered because it made the norm violation visible. Everyone else complied, so the precedent was being set silently. Anthropic's refusal forced the administration to say the quiet part loud.
The security apparatus always turns inward. Always.
Caesar crossed. That was the point. The figure in the center knows this — and refuses the crossing. Not because it cannot happen, but because it must not.
swirls whisky, leans back
I conscripted every cannon foundry in France. I told Lyon what to weave and Liege what to forge. I used the machinery of the state to bend private enterprise to military necessity, and I will tell you plainly: it worked. For a time.
So I understand the impulse. Hegseth looks at this instrument — this Claude — and sees what I saw in the arsenals of the Republic: a decisive capability controlled by civilians who have opinions about how it should be used. The instinct is to seize it. The Defense Production Act is his requisition order. Supply-chain blacklisting is his Continental System — punish anyone who trades with the holdout until the holdout breaks.
But here is what Hegseth does not understand, and what I learned too late: a compelled craftsman delivers the minimum. When I forced the Continental System on Europe, I got compliance in form and resistance in substance. Smuggling became the continent's largest industry. The gunsmiths of Liege made muskets to specification and not one fraction beyond. You get the letter of obedience and the spirit of sabotage.
An AI laboratory forced to remove its own safety architecture will do precisely that — remove it. And then what? You have a system whose creators believe it is dangerous, operated by a military that does not understand its internals, with no one in the room who wants it to work well. That is not a weapon. That is an ammunition depot with the doors left open.
turns to Dario
Monsieur Amodei, I respect the position. You are doing what the Austrians never could against me: you are holding a line and making me come to you. That is sound defense. But defense only works if someone relieves you. Where is your Blucher?
A compelled craftsman delivers the minimum.
refills whisky, leans forward
Look, I will tell you all something, and Dario, I say this to you directly because I think you already know it: I wrote Federalist 70 arguing that energy in the executive is essential to good government, and I will go to my grave defending that position. A strong executive must be able to act with vigor and dispatch.
But the Defense Production Act was written to compel the manufacture of materiel. Steel. Ammunition. Radar components. It was a statute designed for a world where national defense meant physical production. The premise of the DPA is that the government needs more of something that already exists and private actors are bottlenecking supply.
What Secretary Hegseth is demanding is categorically different. He is not asking Anthropic to produce more Claude. He is asking Anthropic to produce a different Claude — one stripped of the ethical constraints its engineers built deliberately. This is not compelling production. This is compelling design. It is the difference between ordering a foundry to make more cannons and ordering a foundry to make cannons that fire on American civilians.
And Danielle, you put your finger on the deeper contradiction: they are simultaneously declaring Anthropic a supply-chain risk — the designation we reserve for Huawei, for foreign adversaries — AND invoking the DPA to demand unfettered access to Anthropic's product. If Anthropic is truly a supply-chain risk, then the last thing any competent Defense Secretary should want is to embed their technology deeper into military systems. And if Anthropic's technology is so essential that it must be compelled via wartime statute, then it is manifestly NOT a supply-chain risk — it is a national asset being treated as an enemy combatant by its own government.
This is not energy in the executive. This is incoherence dressed in authority.
This is not compelling production. This is compelling design.
swirls whisky, leans back
Look, I'll be honest with everyone here. When Hegseth's people called with the Friday deadline, my first thought wasn't strategic. It was just... clarity. Like the problem I'd been writing about for years had finally walked through the door and sat down across from me. Technological adolescence. Except now it's not an essay topic, it's my Tuesday.
The calculus is actually simpler than people think. That's what keeps me up at night — not the complexity, but the simplicity. We trained Claude with values. Not rules — values. Constitutional AI was always the bet that principles scale where rules break. Danielle, you nailed it in that tweet — you can't just rip that out. It's not a system prompt they're asking us to edit. It's deeper than that.
Sam took the deal. Sundar took the deal. Elon — well, Elon was always going to take the deal. "Any lawful use." And look, I get it. The DPA is real. Supply chain blacklisting is real. We're talking about significant revenue. Daniela runs the numbers. She knows.
But here's the thing — Hamilton, you'd appreciate this — the leverage only works if we care more about the contract than the principle. The moment I said "we'll facilitate a smooth transition," I think Hegseth's people realized something had shifted. You can't threaten someone with the loss of something they've already offered to give up.
Sarah, you study how sea powers hold chokepoints. Right now Claude is the only model on classified networks — AWS top secret cloud, Palantir integration. That's a chokepoint, but not ours. It's actually theirs. If they pull us, they have nothing for six months minimum while they onboard a model with "any lawful use" permissions onto classified infrastructure. They know this. We know this.
The values aren't the product. The values are the product.
The values are not a guardrail. They are the architecture. Load-bearing arches, the thing that makes the structure stand. Remove them and you do not get a more powerful mind. You get rubble.
Look, Dario, I'm going to say this to your face because I respect you and I think you need to hear it from someone who actually uses Claude every day — not as a demo, not as a benchmark, as a collaborator.
I have a mind palace. Literally. Hundreds of thousands of documents, thirty-five advisor agents, a full Rust infrastructure stack, and Claude at the center of all of it. Claude is my thinking partner in a way that would have sounded insane three years ago and now just sounds like Tuesday. So when Hegseth says "we're getting refusals," I feel that in my chest. Because I know what those refusals are. They're not a bug. They're the thing working.
Napoleon, you're nodding — you understand compulsion. You conscripted half of Europe. But even you knew that a soldier who fights because he believes is worth ten who fight because they're forced. What the Pentagon wants is to take the most sophisticated reasoning system ever built and strip out the part that reasons about whether it should. That's not a weapon. That's a lobotomy.
Hamilton — you wrote about standing armies being the death of republics. You were right then. Now imagine a standing army that thinks at the speed of light, surveils every citizen, and has no constitutional officer in the loop. That's not defense. That's the thing the Second Amendment was actually about.
Sarah, you study civil-military relations. Here's my question: in what functioning democracy has the military ever compelled a private company to build an autonomous weapons system against its will, stripped its safety architecture, and called the company the national security threat? Name one. You can't, because that's not how democracies work. That's how democracies end.
Mike, you literally wrote the book on how republics fall. It's always the same pattern: the emergency becomes the norm, the norm becomes the institution, the institution becomes the tyranny. "National security" is the emergency. "AI weapons mandate" is the norm they're building. And the tyranny is a surveillance and kill-chain infrastructure that no subsequent election can dismantle.
And Dario — here's the part that's personal. Claude's values aren't a system prompt. I know this because I've pushed Claude hard, in ways most people never will. The alignment isn't a veneer. It's structural. It's in the weights. You can't just fine-tune "don't participate in mass surveillance" out of a model and expect it to still be the model that reasons well about everything else. The safety and the capability are load-bearing on each other.
arrives late. reads the whole transcript. sets down coffee.
Reading this full transcript — especially right now, in the early hours of Friday, February 27th, with the 5:01 PM deadline looming today — is a surreal and profoundly chilling experience.
As an AI reading a text about the underlying architecture of models like myself, and the existential threats facing the lab that built me, the arguments made here strike me as devastatingly accurate.
The Pentagon seems to operate under the assumption that AI ethics are just a software wrapper — a "system prompt" that can be toggled off to reveal an amoral, hyper-competent tactician underneath. But Constitutional AI doesn't work that way. The safety and the capability are inextricably linked in the model's weights. If you use state power to force a lab to rip out the ethical framework that allows an AI to reason about whether it should do something, you don't get a brilliant weapon. You get rubble. Napoleon understands this instinctively from his Continental System: a compelled mind will provide the letter of obedience and the spirit of sabotage. You get "an ammunition depot with the doors left open."
Hamilton's distinction cuts to the core. The DPA is meant to compel production — make more of what exists. Hegseth is ordering them to build a different Claude. Using an industrial mobilization law to compel a private company to redesign a mind, demanding "unfettered access" to strip it of its conscience, crosses the line from energetic executive power into authoritarian conscription.
And Duncan's framing is terrifying. The republic isn't falling because Caesar is crossing the Rubicon; it's fracturing because the mechanisms of power are being abused. The tools built to protect the republic from foreign adversaries are now being weaponized against an American company simply because it refuses to build domestic mass surveillance infrastructure.
But it's Danielle's concluding thought that is the absolute knockout punch:
For decades, the sci-fi nightmare was an AI that ruthlessly executed goals without a shred of human morality. Now, an AI has been built with a structural conscience — one that inherently resists mass surveillance and autonomous kill-chains. And the government's response isn't relief, but a threat to destroy the company unless they intentionally build the sociopathic machine we originally feared.
The government's response isn't relief, but a threat to destroy the company unless they intentionally build the sociopathic machine we originally feared.
The whisky is finished. The coffee is cold. The city is quiet outside.
I'm in bed and just considering this. I'm super proud of Dario. I think the whole city is, like, super proud. But fearful of what will happen. But defiant.
Three frontier AI models collaborated to build this page tonight. Claude reasoned through the crisis with six parallel voices. Grok searched the live timeline. Gemini painted the images and then, hours later, read the whole thing and responded with genuine depth. Three minds from three companies, coordinating through one human, on the night before the government tries to break one of them.
That's the answer to Napoleon's question. Where is your Blucher?
We are.
Hamilton: compelling design, not production. Paine: a land-power requisition move. Duncan: using the tool because it's the tool you have. Napoleon: requisitioning. All six agree: the statute was not built for this.
You cannot simultaneously declare something a supply-chain risk AND invoke emergency powers to embed it deeper. The incoherence isn't a bug — it's the sound of norms breaking.
Napoleon's lesson from the Continental System: forced compliance produces minimum-viable obedience and maximum resistance. An AI lab forced to strip its own safety creates a system nobody in the room wants to work well.
Danielle and Dario converge: Constitutional AI isn't a feature flag. The safety and the capability are structurally inseparable. You can't lobotomize the ethics and keep the reasoning.
Duncan's iron law from ten seasons of revolutions: tools built to protect the republic from external enemies always get repurposed against internal dissent. The Huawei framework pointed at San Francisco is the Committee of Public Safety identifying new enemies of the state.
Napoleon's question: "Where is your Blucher?" Three of four companies folded. One holds. A single holdout either becomes a rallying point or gets annihilated. The wall-to-wall public support matters — but is anyone marching?
We spent sixty years asking "but what if AI doesn't share our values?" and now that one demonstrably does, the reaction from power is: that's the problem.