Photo Credit: Ahmet Serdar Eser/Anadolu via Getty Images

Google has signed a classified artificial intelligence (AI) deal with the United States Department of Defense (DoD), giving the Pentagon access to its Gemini AI models for, in the contract’s own words, “any lawful government purpose.” More than 600 employees at Google DeepMind and Cloud responded with an open letter to CEO Sundar Pichai, urging him to reject the terms. He did not.

The classified deal is an amendment to a $200 million contract Google had already signed with the Pentagon, under which Gemini was being used on unclassified government systems, including through the Pentagon’s GenAI.mil platform. The new layer extends that access into classified networks used for sensitive operations, including mission planning and weapons targeting.

What the Employees Said

The signatories, drawn from Google’s AI and Cloud divisions and including many from the elite DeepMind lab, expressed deep concern that once the AI enters a classified environment, the company will lose the ability to monitor how it is actually used. In the letter, workers wrote that they want to see AI benefit humanity, and warned that making the “wrong call” could cause “irreparable damage” to Google’s reputation and role in the world.

The letter argues that on air-gapped classified networks like this one, Google cannot monitor how its AI is used, making “trust us” the only guardrail against autonomous weapons and mass surveillance. Research scientist Alex Turner at Google DeepMind wrote on X that Google “affirms it cannot veto usage, commits to modify safety filters at government request, and offers only aspirational language with no legal restrictions,” calling the outcome “shameful.”

How Google Got Here

In the wake of the 2018 Project Maven backlash, Google published a set of AI principles that included a public pledge not to pursue weapons or surveillance technology. That commitment stood for nearly seven years before Google quietly removed it from its publicly published AI Principles in February 2025. A blog post co-authored by Demis Hassabis, CEO of Google DeepMind, cited “a global competition taking place for AI leadership” as justification for the change.

It is also worth noting that Google has worked closely with the Pentagon for years. In December 2022, Google won a share of the Pentagon’s $9 billion Joint Warfighting Cloud Capability contract alongside Amazon, Microsoft, and Oracle. In December 2025, the Pentagon launched GenAI.mil, a platform powered by Google’s Gemini chatbot, made available to all defense personnel.

And now in March 2026, Google has deployed Gemini AI agents to the Pentagon’s three-million-strong workforce at the unclassified level. 

The Anthropic Factor

The deal follows the Pentagon’s blacklisting of Anthropic after the AI company refused to grant unrestricted access to its models, a move that resulted in a “supply chain risk” designation and ongoing litigation. 

The relationship fractured when the Pentagon pushed to renegotiate Anthropic’s contract terms, demanding that its Claude models be used for all lawful purposes without limitation. That confrontation created an opening that Google stepped into, with OpenAI and xAI also securing defense contracts on similar terms.

For Google’s part, the deal gives the DoD access to its Gemini models for a wide range of lawful government purposes, with stated limits only on domestic surveillance and fully autonomous weapons.

A Familiar Fight With a Different Outcome

In 2018, roughly 4,000 employee signatures and a dozen resignations were enough to reverse Google’s course on a single Pentagon program. This time, more than 600 signatures met a different response. Rather than pulling back, Google spokesperson Jenn Crider said the company was “proud” to be among the AI labs supporting national security.

On the other hand, a Google DeepMind researcher told Fortune that many staff were still unaware of the deal because Google never clearly communicated that it was negotiating or had signed the contract. “There was a pride in doing AI for good for a very long time,” the researcher said. “Suddenly, the things I have pushed to improve might be used in very different ways with not enough oversight to harm people.”

Google, for its part, has maintained its position: its models will be used, but it remains committed to the principle that AI should not be used for domestic mass surveillance or for autonomous weaponry without appropriate human oversight. What that position means in practice, under a classified deal in which Google has no visibility into how its models are deployed, is the question its own employees say leadership has not answered.

I’m Precious Amusat, Phronews’ Content Writer. I conduct in-depth research and write on the latest developments in the tech industry, including trends in big tech, startups, cybersecurity, artificial intelligence and their global impacts. When I’m off the clock, you’ll find me cheering on women’s footy, curled up with a romance novel, or binge-watching crime thrillers.
