Anthropic CEO Dario Amodei said Thursday the artificial intelligence company “cannot in good conscience accede” to the Pentagon’s demands to allow unrestricted use of its technology, deepening the unusually public clash with the Trump administration, which is threatening to pull its contract and take other drastic steps by Friday.

The maker of the AI chatbot Claude said in a statement that it’s not walking away from negotiations, but that new contract language received from the Defense Department “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”

Sean Parnell, the Pentagon’s top spokesman, said on social media Thursday that the military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.” Anthropic’s policies prevent its models from being used for those purposes.
It’s the last of its peers — the Pentagon also has contracts with Google, OpenAI and Elon Musk’s xAI — to not supply its technology to a new U.S. military internal network. “It is the Department’s prerogative to select contractors most aligned with their vision,” Amodei wrote in a statement. “But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.” Defense Secretary Pete Hegseth gave Anthropic an ultimatum on Tuesday after meeting with Amodei: Open its artificial intelligence technology for unrestricted military use by Friday, or risk losing its government contract.
Military officials warned that they could go even further and designate the company as a supply chain risk, or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products. Amodei said Thursday that “those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”

Parnell reiterated that the Pentagon wants to “use Anthropic’s model for all lawful purposes” but didn’t offer details on what that entailed.
He said opening up use of the technology would prevent the company from “jeopardizing critical military operations.” “We will not let ANY company dictate the terms regarding how we make operational decisions,” he said.

The talks that escalated this week began months ago. Amodei said that if the Pentagon doesn't reconsider its position, Anthropic “will work to enable a smooth transition to another provider.”

Sen. Thom Tillis, a North Carolina Republican who is not seeking reelection, said Thursday that the Pentagon has been handling the matter unprofessionally while Anthropic is “trying to do their best to help us from ourselves.” “Why in the hell are we having this discussion in public?” Tillis told reporters. “This is not the way you deal with a strategic vendor that has contracts.” He added, “When a company is resisting a market opportunity for fear of negative consequences, you should listen to them and then behind closed doors figure out what they’re really trying to solve.”

Sen. Mark Warner of Virginia, the ranking Democrat on the Senate Intelligence Committee, said he was “deeply disturbed” by reports that the Pentagon is “working to bully a leading U.S. company.” “Unfortunately, this is further indication that the Department of Defense seeks to completely ignore AI governance,” Warner said in a statement. It “further underscores the need for Congress to enact strong, binding AI governance mechanisms for national security contexts.”

While Pentagon officials say they always will follow the law with their use of AI models, the department has taken steps to change the culture among the military legal ranks.
Hegseth told Fox News last February, weeks after becoming defense secretary, that “ultimately, we want lawyers who give sound constitutional advice and don’t exist to attempt to be roadblocks to anything.” The same month, Hegseth also fired the top lawyers for the Army and the Air Force without explanation. The Navy’s top lawyer had resigned shortly after the election in late 2024.

The US government is taking a wide lead in the race to the bottom. When an AI tech company deems the Pentagon unacceptably unethical, you know things are really bad; it's like a circus complaining about someone abusing animals.
https://www.democracydocket.com/news-alerts/white-house-circulating-blatantly-illegal-draft-emergency-order-to-take-control-of-elections/ Given the inherent dangers in this technology, regardless of your political affiliation, one should find it very alarming that the government is trying to remove governance features designed for safety. They are really playing with fire in a very reckless manner here.
Tillis' comment sums it up pretty well.

Ministry of Truth flexing itself with a clear goal of gaining dominion over US citizens.

"The Palestine Experiment" is a book that details what happens when tech giants do work with the military and government, in this case the Israelis. Technology has been, and continues to be, used to overarchingly surveil the lives of Palestinians, including "sucking up" ALL communication Palestinians have on devices and computers; it will locate the users and, if said users are listed in some way, send a drone or bomber to kill them, and their family. Good to see some pushback, but I note that some companies listed in this article, who "want to protect US citizens," ARE doing business with Israel to facilitate this surveillance.
The biggest drag on the Pentagon's effort to secure Anthropic's cooperation and make a deal with them is Hegseth. He's such an unlikeable guy. Publicly threatening a company they are looking to make a deal with is absolute stupidity.

Every time someone whispers “AI,” half of Wall Street faints from excitement and the other half starts throwing money like they’re feeding pigeons in the park.
Nvidia, OpenAI, Anthropic—pick your favorite acronym, the prices are floating almost as high as Mt Fuji. And like every bubble before it, this one’s got that same sweet smell of inevitability. We saw this movie in 2000, when every kid with a domain name was suddenly a “visionary,” right up until the lights came on and the whole dot com thing collapsed like a bad soufflé. Then 2008 rolled around and the subprime geniuses told us housing prices never go down—right before they did, loudly.
Now, will the government bail out tech companies when this thing pops? Well, the government has a soft spot for anything that looks “too big to fail,” especially if it comes wrapped in the American flag and mutters something about national security. And these AI outfits are already practicing their lines: “If we go down, China wins.” It’s amazing how patriotic a company can get when its stock price is on the line.
OpenAI, Nvidia, and the rest keep warning that China is beating us in AI. Maybe they’re right. Or maybe they’re just shaking the federal money tree and hoping a few billion dollars fall into their laps. It’s the oldest trick in the book: scare the public, flatter the politicians, and hope nobody notices the hand in the cookie jar. But here’s the real question: in a so‑called free‑market country, is it wrong for companies to ask Uncle Sam for help?
China doesn’t bother with this dance—they’ve got state‑owned enterprises that march in step with the government, and they seem to be doing just fine. Maybe too fine. So what do we want—pure capitalism, where companies sink or swim on their own? Or a little state‑sponsored flotation device when the water gets rough? Americans love capitalism right up until the bill comes due. Then suddenly everybody’s a socialist.
Ministry of Truth flexing itself with a clear goal of gaining dominion over US citizens. If this is the Ministry of Truth, then Big Brother must be thrilled. Nothing like a little doublethink to keep the narrative tidy. But it could be worse — we could end up with something closer to the CPC’s model of information control. Or, in a more optimistic universe, maybe we’d get a government closer to what Confucius imagined: one that leads by virtue instead of surveillance, and earns trust instead of managing it.
My American contemporaries grew up on 1984, Brave New World, Fahrenheit 451, and The Handmaid’s Tale — a whole syllabus of dystopias warning us what happens when power decides it knows best. It seems you all were raised to expect the ruler behind the curtain to be sinister. This genre is only about a century old. By contrast, the Chinese classics are two and a half millennia old, deeply rooted, and time‑tested.
But if you jump from Orwell to Confucius and what we in China are required to read, you get the opposite worldview: a government that earns legitimacy through virtue, a ruler who leads by moral example, and, where it is plausible, one that manages information properly through appropriate surveillance. It seems the Western glass is half empty; ours is half full. Of course, reality doesn’t always match either tradition — and nothing here should be taken as a prediction, guarantee, or warranty of future governmental benevolence.
Original Source: Japan Today | Published: February 27, 2026, 1:43 am

