Larry Barsh, DMD Substack
March 4, 2026

The AI Company That Said No to Robot Wars — And Got Punished For It


A private company just got labeled a national security threat by the United States government. Their crime? Refusing to let artificial intelligence decide, with no human in the loop, who a missile kills.

That company is Anthropic. And what happened last week should alarm every one of us.

On February 27, 2026, Anthropic CEO Dario Amodei refused a Pentagon deadline to remove safety guardrails from Claude, the company’s AI model. The Department of Defense — under Secretary Pete Hegseth — wanted those guardrails gone so its AI could be used for fully autonomous weapons systems and mass domestic surveillance. No human authorization required. Just the machine, deciding.

Amodei said no, arguing that some applications are “simply outside the bounds of what today’s technology can safely and reliably do.”

The Pentagon labeled Anthropic a “supply chain risk” — a designation normally reserved for companies tied to foreign adversaries — and ordered a federal phase-out of their technology, putting a $200 million contract at risk. Trump then took to Truth Social to declare that Anthropic had made a “DISASTROUS MISTAKE,” called them “Leftwing nut jobs” who were “STRONG-ARMING the Department of War,” and threatened “major civil and criminal consequences.”

Defense Secretary Hegseth promised the Pentagon would transition to “a better and more patriotic service.” Pentagon undersecretary Emil Michael called Amodei “a liar” with “a God complex.” Elon Musk, apparently unable to stay out of it, declared that Anthropic “hates Western civilization.”

This is the level of discourse being applied to a company that said: AI should not be allowed to fire weapons without a human making the final call.

Perhaps most remarkably, Dean Ball — a former Trump administration AI adviser — called the sanctions against Anthropic “attempted corporate murder” and said he could no longer recommend investing in American AI companies. That’s not a partisan objection — it’s a warning that the administration crossed a line even its own allies won’t defend.

Before we talk about autonomous weapons, let’s talk about something closer to home: self-driving cars.

Serious autonomous vehicle research has been underway since the 1980s — more than 40 years of work by the brightest engineers at Carnegie Mellon, Google, Tesla, Waymo, and Cruise. These vehicles operate on public roads with painted lanes, traffic signals, and agreed-upon rules of the road.

And we still can’t fully trust them. Tesla’s Autopilot has been involved in fatal accidents. Cruise was shut down in 2024 after serious incidents. Waymo’s robotaxis, confined to carefully mapped areas in sunny cities, still exhibit erratic behavior when they encounter anything unexpected — an illegally parked car, a construction zone, bad weather.

The problem isn’t the engineers. It’s that real-world environments are full of edge cases no system was trained for. A child darting into the street. Sensor failure in rain. An unusual intersection.

Now take that exact same problem — a machine making split-second decisions in a chaotic, unpredictable environment — and replace “should I brake or swerve?” with “should I kill this person or that person?”

The stakes are no longer a fender bender. And here’s the critical difference: a car that makes a mistake can be recalled. A missile that has already fired cannot.

There’s one more layer the Pentagon seems unwilling to acknowledge: on the road, no one is actively trying to fool your sensors. On a battlefield, that is the entire point of the adversary. Enemies will use decoys, spoof GPS signals, use civilians as cover. An autonomous weapon without human judgment isn’t just prone to accidents — it’s a sitting target for manipulation. The enemy doesn’t need to destroy your weapons. They just need to trick them into killing the wrong people.


What Anthropic Was Actually Protecting

The Trump administration framed this as a private company trying to “dictate” to the military. That framing deserves scrutiny.

Anthropic wasn’t refusing to work with the Pentagon. They were working with them — that’s what the $200 million contract was. What they refused to do was remove two specific guardrails: one preventing fully autonomous weapons, one preventing mass domestic surveillance of Americans.

Here’s a detail that hasn’t gotten enough attention: while Hegseth was publicly tweeting the “supply chain risk” designation, a Pentagon undersecretary was simultaneously on the phone offering Anthropic a last-minute deal. That deal would have required allowing AI-powered collection and analysis of data on American citizens — geolocation, web browsing, personal financial information purchased from data brokers.

The “compromise” being offered, behind closed doors, was mass surveillance of Americans. Let that sink in.

Within hours of sanctioning Anthropic, the Trump administration announced OpenAI had struck a deal with the Pentagon. OpenAI says it shares similar red lines to Anthropic but agreed to negotiate technical safeguards rather than draw hard lines. Maybe that’s a meaningful distinction. Or maybe “we’ll build safeguards” is just a more politically palatable way of saying we’ll find a way to give them what they want. The proof will be in what those safeguards actually prevent — and what they don’t.


The Bigger Picture

What happened last week isn’t just a contract dispute. It’s a story about who controls the most powerful technology in human history — and whether political pressure will erode the last guardrails before we’ve figured out how to use it wisely.

Autonomous weapons are not inevitable. The decision to build them, deploy them, and remove human judgment from lethal force is a choice being made right now, in real time, by people in Washington and in AI company boardrooms.

Anthropic made a choice. You can debate exactly where they drew the line. But the principle — that there should be a line, and that the people building the technology have both the right and the responsibility to hold it — seems worth defending.

The alternative is a world where the only qualification for building autonomous weapons is being willing to say yes.

We’ve spent more than 40 years trying to get a computer to reliably drive a car — and we’re still not there. We should be very, very careful about trusting one to decide who dies in war.


The Trump administration has ordered a six-month phase-out of Anthropic’s technology across federal agencies. Anthropic has not reversed its position.

FTS



Join us on our podcast Specifically for Seniors, where satire meets substance and storytelling sparks civic engagement. Each episode dives into topics like authoritarianism, political spectacle, environmental justice, humor, history, and even fly fishing — layered with metaphor, wit, and historical insight. We feature compelling guest interviews that challenge, inspire, and empower, especially for senior audiences and civic storytellers. Listen to the audio on all major podcast platforms, watch full video episodes on YouTube, or explore more at our website.

Let’s keep the conversation sharp, smart, and unapologetically bold.
