
The artificial intelligence (AI) systems powering self-driving cars and humanoid robots are becoming the next major target for hackers, SentinelOne CEO Tomer Weingarten has warned.
Speaking to Axios last week, Weingarten said that while the cybersecurity industry remains focused on protecting AI models from known threats like data poisoning and prompt injections, a larger and more dangerous wave of attacks on physical AI systems is forming.
What Weingarten Is Warning About
Most current discussions around AI security focus on protecting large language models (LLMs) from prompt injections, where hidden instructions in text cause an AI to behave in unintended ways, or from data poisoning, where hackers corrupt the training data that a model learns from. Weingarten’s concern goes further. He is pointing to the AI systems embedded inside physical machines, the multimodal models that process text, video, audio, and images simultaneously to operate vehicles and robots in the real world.
“We forget that there are more and more real-world applications of these models,” Weingarten told Axios.
His specific examples include self-driving Waymo vehicles that navigate city streets in San Francisco, and the humanoid robots that multiple technology companies are currently developing and deploying. Both types of systems depend on multimodal AI models to function. That makes them vulnerable through any of the data channels those models process, including their cameras.
“You can inject malicious commands through visual processing and through audio processing, so the moment we open up our systems to receive inputs that are not just textual, suddenly there’s a whole new class of threats,” the CEO added. “That is very, very worrisome.”
Despite the billions of dollars invested in cybersecurity, Weingarten said there are still not enough researchers studying what multimodal threats against physical AI systems could look like. The industry’s attention has largely stayed on protecting cloud-based AI tools and software environments, while the physical layer, which are the machines that AI now controls, has received comparatively little scrutiny.
That gap is part of what makes this category of risk harder to address quickly. Unlike a software vulnerability that can be patched remotely, an attack that manipulates what a vehicle’s camera perceives requires strong defenses.
SentinelOne Response to AI Threats
Alongside Weingarten’s warnings about future physical-world AI attacks, SentinelOne also moved to address a more immediate AI security threat. As a leading company in securing AI applications, the company published ClawSec, a free, open-source security suite built to protect deployments of OpenClaw, an easily accessible, rapidly growing autonomous AI agent that can execute tasks, access local files, and connect to internal systems on behalf of users.
Weingarten also said he expects AI model poisoning attacks, where hackers tamper with the underlying data that trains AI models, to increase over the coming year, alongside security incidents connected to vibe coding, a practice where developers use AI to generate code with minimal human review or oversight.
Why This Matters Beyond the Security Industry
The consequences of a successfully attacked physical AI system are very different from a data breach or a compromised enterprise account. In an example provided by Weingarten, a Waymo that misinterprets a visual cue could behave dangerously and “present a complete new way to basically compromise the Waymo, just by the interpretation of the camera.”
These are hypothetical outcomes that security professionals need to prepare for, and they are also risks that regulators, manufacturers, and the general public will need to engage with as autonomous physical systems expand into more areas of daily life.
SentinelOne, which built its reputation in AI-powered endpoint security before ChatGPT brought AI tools to mainstream attention, has a direct business interest in the broader adoption of AI security practices. While that context is worth noting, Weingarten’s core observation that the physical deployment of AI models creates a new class of vulnerabilities and that the security industry has not yet fully addressed, remains valid.
