
OpenAI’s top hardware leader has resigned just days after the company struck a high‑stakes artificial intelligence (AI) deal with the U.S. Department of Defense, raising fresh questions about how its technology will be used in future physical devices and military systems.
Caitlin Kalinowski, who led hardware and robotics at OpenAI, said she was leaving over concerns that the Pentagon agreement was reached too quickly and without enough safeguards around issues like surveillance and lethal autonomy.
What Happened And Why It Matters
Kalinowski announced her resignation on March 7, explaining in posts on X and LinkedIn that she could not support how fast OpenAI moved to put its AI models on the Pentagon’s classified cloud networks. She wrote that while AI has a role in national security, government use of the technology should not enable surveillance of Americans “without judicial oversight or lethal autonomy without human authorization.”
Her departure came shortly after OpenAI confirmed a defense agreement that will allow the Department of Defense to use its AI systems for any lawful military and security purpose, including on classified networks. The timing of her resignation has turned a corporate partnership with the government into a broader debate over how OpenAI balances rapid deployment against its safety commitments.
Inside The Pentagon AI Deal
Under the new agreement, OpenAI will supply AI models that can run inside the Pentagon’s secure, classified cloud environment, giving U.S. defense officials access to advanced language and analysis tools for warfighting and domestic security tasks. The deal was announced the same day President Donald Trump ordered federal agencies to stop using AI products from rival Anthropic, which had refused to allow the Pentagon unrestricted deployment of its models or their use in fully autonomous weapons.
OpenAI has said the arrangement creates what it calls a workable path for responsible national security uses of AI. In statements released after the deal drew backlash from users, the company maintained that the contract includes limits that prevent domestic mass surveillance and fully autonomous weapons, and that those limits are backed by technical safeguards built into its systems.
Kalinowski, however, argued that those boundaries, and the decision process around them, did not get the scrutiny they deserved inside the company. Much of the public reaction has echoed that concern.
What It Could Mean For Future Devices
As head of hardware and robotics, Kalinowski played a central role in how OpenAI’s models move from software into physical form, from experimental robots to AI‑powered devices. She joined OpenAI after previously leading augmented‑reality hardware efforts at Meta, bringing deep experience in turning advanced computing into real‑world products.
Her exit could slow or reshape some of OpenAI’s most ambitious hardware projects, especially those that blend perception, decision‑making, and physical action. Any defense‑related devices that rely on OpenAI models, such as autonomous systems for logistics, analysis tools embedded in command hardware, or robotics for the battlefield, will now evolve without the executive who had been steering the hardware roadmap.
More importantly, the resignation puts extra attention on how OpenAI designs safety features into future devices that might be used in military or domestic security contexts. Questions about who approves deployments, what technical precautions exist to prevent misuse, and how those safeguards are enforced will likely shape upcoming hardware decisions inside the company.
A Wider Signal To The AI Industry
Kalinowski’s move comes against a backdrop of growing tension across the AI sector over military work and national security deals. Anthropic’s reported refusal to sign an open‑ended Pentagon agreement, followed by Trump’s directive cutting the company’s access to federal contracts, has already started a conversation on how leading tech companies handle government pressure.
Now, OpenAI faces the challenge of keeping top technical talent while pursuing lucrative and politically sensitive defense partnerships. Other AI and robotics companies will be watching how the firm implements its promised safeguards and how it responds to internal criticism as it develops the next generation of AI‑enabled devices for both civilian and military use.