Date: February 28th, 2026 1:11 AM
Author: Mainlining the $ecret Truth of the Univer$e (One Year Performance 1978-1979 (Cage Piece) (Awfully coy u are))
https://www.nytimes.com/2026/02/27/technology/anthropic-trump-pentagon-silicon-valley.html
By Sheera Frenkel, Cade Metz and Julian E. Barnes
Sheera Frenkel and Cade Metz reported from San Francisco and Julian Barnes from Washington.
Feb. 27, 2026
More than 100 employees at Google signed a petition this week calling on the tech giant to “refuse to comply” with the Pentagon on some uses of artificial intelligence in military operations.
Employees at Amazon, Google and Microsoft urged their leaders in a separate open letter on Thursday to “hold the line” against the Pentagon.
And technologists at companies across Silicon Valley said that A.I. should not be used for purposes such as mass surveillance of Americans.
Silicon Valley has rallied behind the A.I. start-up Anthropic over its dispute with President Trump and the Pentagon about how its technology may be used for military purposes. Dario Amodei, Anthropic’s chief executive, has said he does not want the company’s A.I. to be used to surveil Americans or in autonomous weapons, saying this could “undermine, rather than defend, democratic values.”
Mr. Trump and his officials, in contrast, want the military to use whatever A.I. it buys however it wants, as long as it complies with the law. On Friday, Mr. Trump called Anthropic a “radical Left AI company run by people who have no idea what the real World is all about,” and Defense Secretary Pete Hegseth labeled the start-up a “supply chain risk,” a move that would sever ties between the company and the U.S. government.
Now what began as a whisper of support for Anthropic in the tech industry has crescendoed into a shout. The support — voiced by top leaders at Anthropic’s rivals, as well as rank-and-file engineers at Google and other large companies — stood out because Silicon Valley had largely appeared to be in lock step with the Trump administration.
But the Pentagon’s actions appear to have driven a wedge between Washington and Silicon Valley. Coalescing behind Anthropic was in many ways a throwback to a pre-Trump Silicon Valley, when tech workers often spoke up against what they viewed as dangerous or inappropriate uses of powerful technologies that they had worked on.
“Now it is like we are going back to a time about eight years ago,” said Jack Poulson, one of the employees who protested Google’s work with the military in 2017. “There is a lot more activism now.”
The rallying behind Anthropic was tinged with opportunism. Sam Altman, the chief executive of OpenAI, said in a memo to employees this week that “we have long believed that A.I. should not be used for mass surveillance or autonomous lethal weapons,” which is the same stance as Anthropic’s.
But late Friday, after Mr. Trump had ordered federal agencies to stop using Anthropic’s technology, OpenAI said it had reached its own agreement with the Pentagon to provide its A.I. for classified systems. OpenAI said it had built safeguards into its technology intended to prevent the systems from being used in ways the company objects to.
Anthropic’s unwillingness to accede shows how the Department of Defense cannot easily force Silicon Valley firms to comply. Unlike defense contractors that have worked with the Pentagon for decades and are reliant on longstanding military contracts, the A.I. companies are contending with different internal pressures and external factors.
Many of them depend on highly skilled work forces of A.I. technologists who are hard to recruit and harder to retain. Disaffected employees can easily jump ship to other companies if they are unhappy with what they are hearing from their corporate leaders. In the last year, Meta, OpenAI, Google and others have spent millions — some say billions — of dollars to land and keep top talent.
For many A.I. companies, government contracts are only one piece of an expanding pipeline of business. The $200 million contract that Anthropic had been negotiating with the Pentagon for A.I. use in classified systems, which precipitated the fight, would most likely amount to only a small percentage of the company’s revenue. Anthropic primarily sells A.I. software to other businesses and last year reached an annualized revenue pace of $8 billion to $10 billion, Dr. Amodei said in December.
Current and former defense officials said the Trump administration had misread how strongly Anthropic felt about getting assurances on how its A.I. would be used. Pentagon officials believed Anthropic would fall in line after they threatened to either cut the company off from government business or force it to provide its A.I. model without restrictions, they said.
The Pentagon and Anthropic did not respond to requests for comment.
(The New York Times has sued OpenAI and Microsoft, accusing them of copyright infringement of news content related to A.I. systems. The companies have denied those claims.)
Anthropic, Google, OpenAI and xAI have been working with the Pentagon in a pilot program to bring A.I. to the Defense Department. That meant that as the Pentagon ramped up its threats against Anthropic, other Silicon Valley workers saw how the situation could apply to them. If Anthropic was cut off from government business for not capitulating to the Pentagon’s demands, the same tactics could be used on them.
Some employees at large A.I. companies soon signed proposals calling on their managers to support Anthropic’s position. On group chats and private messaging boards, engineers pointed out that if the Pentagon carried out its threat, nothing was stopping it from using the same tactics to force other companies to work with it.
At OpenAI, Mr. Altman contacted Defense Department officials on Wednesday to discuss how his company might work on classified projects and to express his concern over the Pentagon’s spat with Anthropic, two people with knowledge of the conversations said.
On Thursday, Mr. Altman sent a memo to employees saying A.I. should not be used for mass surveillance or autonomous lethal weapons, while agreeing with the Pentagon’s stance that private companies should not control U.S. government policy.
On Friday, Mr. Altman appeared on CNBC and more strongly backed Anthropic, which was founded by former OpenAI employees. “For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety,” he said.
Then he struck his own deal with the Department of Defense.
(http://www.autoadmit.com/thread.php?thread_id=5839225&forum_id=2&show=week#49701557)