Monday, March 2, 2026

War on Americans

 It's coming from inside the house.

Who gets to decide when the government AI-bots are ready to start killing people without direct human oversight—the Pentagon or the AI companies?

This remarkable—some might say insane—question is at the center of a major standoff between the Defense Department and Anthropic, creator of the AI platform known as Claude. While the Pentagon has contracts with all the leading AI labs, Anthropic until this month was the only one contracted for AI use in classified settings: Claude was, for instance, reportedly involved in the operation to capture Nicolas Maduro.

[...]

But Defense Secretary Pete Hegseth has grown unhappy with two elements of the DoD’s contract with Anthropic. One, Anthropic won’t let its AI be used to conduct mass surveillance of Americans. Two, it won’t let the DoD use it to operate autonomous weapons systems that can identify, track, and kill targets without direct human involvement.

  The Bulwark
Can't wait for that one to go tits up and track Pete Hegseth. Like those cartoons where the heat-seeking missile turns around and follows Wile E. Coyote instead of the Roadrunner.
To the Defense Department, the idea that a contractor would be able to tie the military’s hands like this is outlandish; they should be permitted, they argue, to use AI they contract for “for all lawful purposes.”
Doesn't the Defense Department have its own AI platform? In the alternative, can't they use Elon's AI?
Hegseth could simply drop Anthropic’s contract over this, pivoting instead to any of the AI labs—OpenAI, Google Gemini, Elon Musk’s xAI—that aren’t insisting on these contractual sticking points. But he doesn’t really want to. After all, Claude is supposed to be the best, and at any rate it’s already integrated into lots of DoD systems. It’d be a hassle.
Hold fast, Claude.
Hegseth has issued Anthropic an ultimatum: Change your policy, or we’re going to start getting nasty.
Start?
The Defense Department is threatening to use the Defense Production Act to compel Anthropic to drop its usage requirements. Or it could go the exact opposite direction, declaring Anthropic a “supply chain risk”—which would not only eliminate DoD’s Anthropic contract, but also forbid any business that contracts with DoD from working with Anthropic in any way.
And...
The Trump administration on Friday ordered all U.S. agencies to stop using Anthropic’s artificial intelligence technology and imposed other major penalties, escalating an unusually public clash between the government and the company over AI safety [...] accusing it of endangering national security after CEO Dario Amodei refused to back down over concerns the company’s products could be used in ways that would violate its safeguards.

“We don’t need it, we don’t want it, and will not do business with them again!” Trump said on social media.

Hegseth also deemed the company a “supply chain risk,” a designation typically stamped on foreign adversaries that could derail the company’s critical partnerships with other businesses.

  AP News
Vengeful, despicable assholes.
I hear you, man.
In a statement issued Friday evening, Anthropic said it would challenge what it called an unprecedented and legally unsound action “never before publicly applied to an American company.”

Anthropic had said it sought narrow assurances from the Pentagon that its AI chatbot Claude would not be used for mass surveillance of Americans or in fully autonomous weapons. The Pentagon said it was not interested in such uses and would only deploy the technology in legal ways, but it also insisted on access without any limitations.
Kudos to Anthropic for not taking the figleaf "assurance".
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” the company said. “We will challenge any supply chain risk designation in court.”
Too bad major institutions in this country did not take the same principled stance against Trump's attempted domination.
Anthropic can afford to lose the contract. But the government’s actions posed broader risks at the peak of the company’s meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable startups.

[...]

Hours after its competitor was punished, OpenAI CEO Sam Altman announced on Friday night that his company struck a deal with the Pentagon to supply its AI to classified military networks, potentially filling a gap created by Anthropic’s ouster.

But Altman said that the same red lines that were the sticking point in Anthropic’s dispute with the Pentagon are now enshrined in OpenAI’s new partnership.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote, adding that the Defense Department “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

  AP News
Let me guess. Altman was perfectly satisfied with DOD's "assurances".
Trump said Anthropic made a mistake trying to strong-arm the Pentagon. He wrote on Truth Social that most agencies must immediately stop using Anthropic’s AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms.

“The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!” he wrote in all caps.

[...]

Trump’s social media post said the company “better get their act together, and be helpful” during the phase-out period or there would be “major civil and criminal consequences to follow.”

So fucking sick of this asshole's ignorant rhetoric and petulant tantrums. And that's the least of our problems with him.
The president’s decision was preceded by hours of top Trump appointees from the Pentagon and the State Department taking to social media to criticize Anthropic, but their complaints posed contradictions.

Top Pentagon spokesman Sean Parnell said Anthropic’s unwillingness to go along with the military’s demands was “jeopardizing critical military operations and potentially putting our warfighters at risk.” Hegseth said the Pentagon “must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”
Yes, we've seen how this administration treats the law.
Hegseth’s choice to designate Anthropic a supply chain risk uses an administrative tool that has been designed for companies owned by U.S. adversaries to prevent them from selling products that are harmful to American interests.

Virginia Sen. Mark Warner, the top Democrat on the Senate Intelligence Committee, noted that this dynamic, “combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.”
Gee, how could we possibly know?
The moves could benefit OpenAI’s ChatGPT as well as Elon Musk’s competing chatbot, Grok, which the Pentagon also plans to give access to classified military networks. It could serve as a warning to Google, which has a still-evolving contract to supply its AI tools to the military.
Grok has been so untainted by errors and horrors. Great plan.
Musk sided with Trump’s administration, saying on his social media platform X that “Anthropic hates Western Civilization.”
JFC. MAGA mentality is killing us.
Retired Air Force Gen. Jack Shanahan, a former leader of the Pentagon’s AI initiatives, wrote on social media this week that the government “painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.”

Shanahan said Claude is already being widely used across the government, including in classified settings, and Anthropic’s red lines were “reasonable.” He said the AI large language models that power chatbots like Claude, Grok and ChatGPT are also “not ready for prime time in national security settings,” particularly not for fully autonomous weapons.
What could possibly go wrong?