
Robotics labs operate on a basic assumption: the machines inside them take instructions only from authorized sources. That assumption falls apart the moment the software running those machines can be reached and hijacked by anyone with a network connection and the right payload. That is exactly the situation researchers have now exposed in LeRobot, Hugging Face’s open-source robotics platform.
A critical vulnerability, tracked as CVE-2026-25874 with a CVSS score of 9.3, was publicly disclosed on April 28, 2026. The flaw allows unauthenticated attackers, with no login credentials required, to execute arbitrary code on systems running LeRobot, a platform with nearly 24,000 stars on GitHub.
What the Flaw Does
The vulnerability sits in LeRobot’s asynchronous inference pipeline, where the platform uses Python’s pickle.loads() function to deserialize data received over gRPC channels. Those channels run without Transport Layer Security (TLS) and require no authentication, so any attacker who can reach the server over the network can send a crafted payload and trigger code execution on the host machine.
The attack path runs through specific gRPC handlers, including SendPolicyInstructions and SendObservations, which process raw byte streams and pass them through pickle before any validation runs. Because deserialization happens first, a malicious payload executes before the system checks whether the incoming object is even the right type.
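Why deserialize-before-validate is fatal can be shown with a few lines of standard-library Python. This is an illustrative sketch, not LeRobot’s code: the Payload class stands in for an attacker-crafted byte stream, and the harmless record() function stands in for a dangerous callable such as os.system.

```python
import pickle

hits = []

def record(msg):
    # Benign stand-in for a dangerous callable like os.system.
    hits.append(msg)
    return msg

class Payload:
    """Stand-in for an attacker-crafted object (not LeRobot code)."""
    def __reduce__(self):
        # pickle will invoke record("pwned") during deserialization.
        return (record, ("pwned",))

blob = pickle.dumps(Payload())   # the bytes an attacker would send

obj = pickle.loads(blob)         # the callable fires HERE

# Any type check performed afterwards is too late: the side effect has
# already happened, and the result isn't even a Payload instance.
assert hits == ["pwned"]
assert not isinstance(obj, Payload)
```

The key point is that pickle’s `__reduce__` protocol lets the byte stream name an arbitrary callable and its arguments, and `pickle.loads()` invokes it as part of reconstruction, before the caller ever sees the resulting object.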
A successful exploit gives an attacker full control of the PolicyServer host. From there, they can steal API keys, SSH credentials, and proprietary model files; move laterally across internal networks; corrupt machine learning models; and, in production environments where LeRobot controls physical robots, manipulate or disrupt those machines directly.
The Irony Baked Into the Code
Hugging Face previously developed Safetensors, a serialization format built specifically to eliminate the security risks that come with using pickle. LeRobot does not use it here. Instead, the codebase calls pickle.loads() and places # nosec comments directly next to those calls, instructing automated security linting tools to stay quiet even when they correctly flag the problem.
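Safetensors is the right long-term fix for model data, but even where pickle cannot be removed immediately, the standard library offers a narrower stopgap. The sketch below is my illustration, not a patch from the LeRobot team: it subclasses pickle.Unpickler and rejects any global not on an explicit allowlist, which blocks the (callable, args) gadget that pickle exploits rely on.

```python
import io
import pickle

# Illustrative allowlist (an assumption, not LeRobot's actual types):
# find_class is consulted for every global the stream references.
SAFE = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in SAFE:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global {module}.{name}")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain container data round-trips fine...
assert safe_loads(pickle.dumps({"pos": [1, 2, 3]})) == {"pos": [1, 2, 3]}

# ...but a gadget referencing os.system is rejected before it can run.
class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

blocked = False
try:
    safe_loads(pickle.dumps(Evil()))
except pickle.UnpicklingError:
    blocked = True
assert blocked
```

An allowlist like this only shrinks the attack surface; it does not make pickle safe for hostile input in general, which is exactly why Safetensors exists.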
Security researcher Valentin Lobstein, who publicly disclosed additional information about the flaw, described the contradiction plainly, saying, “Hugging Face created Safetensors – a serialization format designed specifically because pickle is dangerous for ML data. And yet their own robotics framework deserializes attacker-controlled network input with pickle.loads(), with # nosec comments to silence the tool that was trying to warn them.”
How Long the Team Has Known
A private report about the same flaw was submitted in December 2025 by a researcher using the alias “chenpinji.” The LeRobot team responded in early January 2026, acknowledging that parts of the codebase needed significant refactoring. Steven Palma, the project’s tech lead, confirmed that deployment security had not been a priority during the platform’s research phase. A fix is planned for version 0.6.0, but no release date has been confirmed.
Until a patch is available, security experts recommend restricting network access to LeRobot instances and using firewalls or VPNs to limit exposure to trusted networks only.
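As a concrete illustration of that advice, a host firewall can confine the PolicyServer’s port to a trusted subnet. The port number and subnet below are placeholders, not values from LeRobot’s documentation; check which port your deployment’s gRPC server actually listens on before applying anything like this.

```shell
# Assumed values: replace 8080 with your PolicyServer's gRPC port and
# 10.0.42.0/24 with the subnet your operators actually connect from.
sudo ufw default deny incoming
sudo ufw allow from 10.0.42.0/24 to any port 8080 proto tcp
sudo ufw enable
```

Pairing a rule like this with a VPN keeps the unauthenticated gRPC endpoint off the open network until the patched 0.6.0 release lands.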
The LeRobot case is a clear signal that as open-source AI frameworks move from research labs into environments where they influence or control physical systems, building in security from the start is not optional.
