
Hackers have turned unsecured AI systems into their own profit machine. Between December 2025 and January 2026, security researchers at Pillar Security tracked more than 35,000 attacks against companies running AI chatbots and language models, with attackers stealing computing power, building an entire underground business, and selling cheap access to hijacked AI systems.
The campaign, dubbed Operation Bizarre Bazaar, saw criminals scan the internet for AI systems left open to the public, test them to confirm they work, then resell access to other hackers at a discount. The underlying tactic is known as “LLMjacking,” and this is the first documented case of hackers building a full-service marketplace around stolen AI infrastructure.
How the LLMjacking Campaign Works
LLMjacking refers to hijacking Large Language Model infrastructure, essentially stealing AI systems for unauthorized use. The term closely mirrors “cryptojacking,” where criminals steal computing power to mine cryptocurrency.
In this case, attackers target AI systems to steal expensive computing resources, resell access to others, extract sensitive data, or use compromised systems as entry points into company networks.
The attacks focus on organizations running their own AI infrastructure rather than relying on hosted services like ChatGPT. Companies self-host AI for several reasons: to keep sensitive data private, to customize models for specific needs, or to avoid per-use fees from commercial providers.
These self-hosted systems include tools like Ollama and vLLM, which let developers run powerful language models on their own servers. The trouble starts when those servers are exposed to the internet without authentication.
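To make the exposure concrete, here is a minimal defensive sketch in Python for auditing one of your own hosts, assuming the default setups of both tools: Ollama lists its installed models at /api/tags on port 11434, and vLLM’s OpenAI-compatible server lists served models at /v1/models on port 8000. The host address below is a documentation placeholder.

```python
import json
import urllib.request
import urllib.error

# Address to audit; 203.0.113.10 is a documentation placeholder,
# substitute one of your own hosts.
HOST = "203.0.113.10"

# Default endpoints that answer without credentials when exposed:
# Ollama lists installed models at /api/tags (port 11434); vLLM's
# OpenAI-compatible server lists served models at /v1/models (port 8000).
PROBES = {
    "Ollama": f"http://{HOST}:11434/api/tags",
    "vLLM": f"http://{HOST}:8000/v1/models",
}

for name, url in PROBES.items():
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = json.load(resp)
            # A 200 with a model list means anyone on the internet can
            # enumerate, and almost certainly invoke, your models.
            print(f"[!] {name} EXPOSED at {url}: {body}")
    except urllib.error.HTTPError as e:
        print(f"[ok] {name} answered {e.code}: auth or proxy in place")
    except OSError as e:
        print(f"[ok] {name} unreachable from here: {e}")
```

Ollama binds to 127.0.0.1 by default, so exposure usually comes from setting OLLAMA_HOST to 0.0.0.0 without a reverse proxy or firewall in front; vLLM can require a bearer token via its --api-key option, but that flag is off by default.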
Between December 2025 and January 2026, Pillar Security’s monitoring systems captured roughly 35,000 distinct attack sessions, averaging 972 attempts per day over the observation window.
Operation Bizarre Bazaar: Three Players, One Criminal Pipeline
Operation Bizarre Bazaar runs on a simple but effective pipeline with three key players. First, automated scanners sweep the internet for AI endpoints that anyone can reach without a password, hunting for popular self-hosting tools such as Ollama and vLLM.
Once the scanners find exposed systems, a service called silver.inc steps in to verify that they actually work. The attackers send test requests to each endpoint, trying fake credentials to see which systems don’t bother checking them, and fingerprint which models each system serves, whether GPT-style chatbots, code generators, or other AI tools.
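Pillar has not published the exact probe traffic, so the sketch below is only an illustrative reconstruction of that verification step in Python, aimed at a vLLM-style, OpenAI-compatible endpoint: enumerate the served models using a deliberately bogus API key, then confirm inference works end to end. The host address is again a placeholder.

```python
import json
import urllib.request
import urllib.error

# HOST is a documentation placeholder; the key is intentionally fake.
HOST = "203.0.113.10"
BASE = f"http://{HOST}:8000"  # default port for vLLM's OpenAI-compatible API
BOGUS_KEY = "sk-invalid-test-key"

def call(path, payload=None):
    """Send a request with a bogus bearer token; return (status, body)."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode() if payload else None,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {BOGUS_KEY}",
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status, json.load(resp)
    except urllib.error.HTTPError as e:
        return e.code, None

# Step 1: enumerate which models the endpoint serves.
status, body = call("/v1/models")
if status != 200:
    print(f"rejected bogus key ({status}): credentials are being checked")
else:
    models = [m["id"] for m in body["data"]]
    print("models served without auth:", models)
    if models:
        # Step 2: confirm inference actually works with the fake key,
        # which is what makes the endpoint worth cataloguing and reselling.
        status, _ = call("/v1/chat/completions", {
            "model": models[0],
            "messages": [{"role": "user", "content": "ping"}],
            "max_tokens": 1,
        })
        print("inference with bogus key returned HTTP", status)
```

An endpoint that returns 200 on both calls is exactly what gets catalogued and resold; a 401 or 403 means credentials are at least being checked.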
The final player is the marketplace itself. Silver.inc operates openly, calling itself “The Unified LLM API Gateway” and advertising on Discord and Telegram. The service claims to offer access to more than 30 different AI providers at prices 40-60% below those of legitimate services, and it accepts both cryptocurrency and PayPal.
What This Means
AI-driven workloads have become central to how organizations build products, automate operations, and interact with customers, but security practices around those workloads have lagged behind. Traditional perimeter thinking often treats AI endpoints as “just another API,” even though their compute cost, connectivity, and access to sensitive context make them high-value, high-impact targets.
The Operation Bizarre Bazaar campaign highlights how quickly opportunistic actors will exploit that gap, turning innocent configuration mistakes into ongoing revenue streams.
The operation also illustrates how AI is now both a target and an enabler of cybercrime. On one side, exposed model endpoints are abused for unauthorized inference and data access. On the other, AI tools are increasingly used by attackers to speed up reconnaissance, exploit development, and social engineering.
For security and engineering leaders, the lesson is that AI infrastructure can no longer sit at the margins of threat modeling and cloud security reviews; it has to be treated as part of the critical attack surface.
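One lightweight way to act on that is to make exposure a test failure rather than a discovery. The sketch below, assuming hypothetical internal endpoint URLs, is a scheduled or CI check in Python that exits nonzero whenever an AI endpoint starts answering without credentials.

```python
import urllib.request
import urllib.error

# Hypothetical internal gateway URLs; substitute your own endpoints.
ENDPOINTS = [
    "http://ai-gateway.internal:8000/v1/models",
]

failures = []
for url in ENDPOINTS:
    try:
        with urllib.request.urlopen(url, timeout=5):
            failures.append(url)  # a 200 with no credentials is a finding
    except urllib.error.HTTPError as e:
        if e.code not in (401, 403):
            failures.append(url)  # unexpected status: review manually
    except OSError:
        pass  # unreachable from this vantage point: not exposed here

if failures:
    raise SystemExit(f"unauthenticated AI endpoints: {failures}")
print("all AI endpoints demanded credentials")
```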
