Photo Credit: Klaudia Radecka/NurPhoto via Getty Images

OpenAI has tightened the rules around its controversial artificial intelligence (AI) deal with the U.S. Pentagon, adding new protections meant to limit government surveillance and address an escalating backlash, even from some of its own users. 

The amended agreement explicitly bars the use of OpenAI’s systems for domestic mass surveillance of Americans and restricts access for U.S. intelligence agencies. 

What Changed in the OpenAI Pentagon Deal

The original agreement, announced in late February, allowed the Pentagon to use OpenAI’s models for any lawful purpose on its classified networks. This immediately triggered concern because it included intelligence and surveillance work. 

OpenAI argued at the time that it had embedded “layered protections,” including limits on autonomous weapons and domestic mass surveillance, but many critics said the language was too vague for such sensitive uses of AI. 

After a weekend of scrutiny, OpenAI revised the contract to make some of those protections far more explicit. The new wording states that the company’s systems “shall not be intentionally used for domestic surveillance of U.S. persons and nationals.” 

This framing aligned the deal more clearly with U.S. laws that govern intelligence gathering and civil liberties.

A central addition is a constraint on how the U.S. Department of Defense can use OpenAI’s tools. CEO Sam Altman said OpenAI has worked with the Pentagon to confirm that its services under this agreement will not be used by agencies such as the National Security Agency and that any future access would require a separate amendment. 

The amended pact also reiterates earlier commitments that OpenAI’s technology cannot be used for domestic mass surveillance or to remove human responsibility in the use of autonomous weapons systems. 

OpenAI’s Response to Backlash

The changes came after sustained criticism that the initial deal was rushed and overly opportunistic, especially given that rival Anthropic had clashed with the Pentagon over similar terms and lost access to a major government contract. 

Altman acknowledged in an X post that the first version of the agreement “looked opportunistic and sloppy” and said the company “shouldn’t have rushed” the process. 

By tightening the language and publishing more detail about the safeguards, OpenAI is attempting to reassure both the public and the wider AI community that it can work with the military without abandoning its stated safety principles. 

Why This Matters for Tech and Policy  

The amended deal highlights how quickly AI companies are being drawn into national security work, and how much pushback they face when they are seen as weakening privacy or human control. 

It also points to an emerging negotiating pattern in which governments demand broad rights to use advanced AI for “all lawful purposes,” while AI providers insist on explicit carve-outs barring the surveillance of citizens and the development of weapons.

For the wider tech industry, the OpenAI‑Pentagon agreement is likely to serve as a reference point for future military AI contracts, both in the United States and abroad. How well these new safeguards hold up in practice may shape public trust in OpenAI, as well as the standards other AI firms adopt when national security and civil liberties collide.


I’m Precious Amusat, Phronews’ Content Writer. I conduct in-depth research and write on the latest developments in the tech industry, including trends in big tech, startups, cybersecurity, artificial intelligence and their global impacts. When I’m off the clock, you’ll find me cheering on women’s footy, curled up with a romance novel, or binge-watching crime thrillers.

