Defense Secretary Pete Hegseth had announced the planned cutoff last week, but Anthropic only received official notification days later, a senior Pentagon official confirmed to CBS News. Hegseth said the military would phase out Anthropic’s technology over six months, though a source familiar with the situation told CBS News that no specific offboarding timeline was included in the formal designation.
At the heart of the conflict: Anthropic wants written guarantees that the U.S. military won’t use its Claude model for two specific purposes — mass surveillance targeting American citizens and fully autonomous weapons systems that select and engage human targets without a person in the loop. The Pentagon’s response? Those uses are already illegal or restricted by existing policy, so writing them down is unnecessary.
“We have these two red lines,” Amodei said. “We’ve had them from day one. We are still advocating for those red lines. We’re not going to move on those red lines.”
Emil Michael, the Pentagon’s chief technology officer, pushed back in a CBS News interview: “At some level, you have to trust your military to do the right thing.” He added: “We’ll never say that we’re not going to be able to defend ourselves in writing to a company.”
The Pentagon insists it needs the ability to use Claude for “all lawful purposes” — a phrase broad enough to make Anthropic uncomfortable. Amodei has argued that AI grants governments surveillance powers that existing law wasn’t designed to address. In his view, the technology has outpaced the legal framework.
Despite the escalating tension, Amodei told investors at a Morgan Stanley conference this week that he was still in talks with the Pentagon “to try to deescalate the situation,” according to audio exclusively obtained by CBS News. He described the two sides as having “much more in common than we have differences.”
The designation arrived after those conciliatory remarks — not exactly a goodwill gesture.
Two sources familiar with the matter previously told CBS News that the U.S. military used Claude in its recent strikes on Iran that began last weekend. The exact role of the AI model in those operations remains unclear.
Amodei said in a statement that “we do not believe this action is legally sound, and we see no choice but to challenge it in court.” He also noted that most Anthropic customers are unaffected. The designation, he wrote, does not bar military contractors from using Anthropic technology for non-military work — it only applies to uses directly tied to Defense Department contracts.
So the Pentagon wants to break up with the only AI company wired into its classified systems, while that same company’s technology was reportedly just used in active military strikes.
Written by Alius Noreika
February 25, 2026
Pentagon Tags Anthropic as Supply Chain Risk Over AI Weapons and Surveillance Fight
