
In August 2025, a ransomware group called LunaLock launched with a unique extortion tactic: it threatened to use stolen digital art to train AI models and to leak users' sensitive personal data. The incident set the stage for a new era of cybercrime in which attackers don't just steal data, they exploit identity itself as leverage, and artificial intelligence drives that shift.
LunaLock attacked a website called Artists&Clients and demanded a ransom of $50,000 in cryptocurrency. The group claimed to have stolen internal source code, payment records, user messages, user identities, and commissioned artwork. By targeting both the site's infrastructure and its creative output, the attackers maximized pressure on the platform and its users.
The attack was not just about money; it marked a turning point in how ransomware groups weaponize technology.
How LunaLock Turned AI Into an Extortion Multiplier
Traditionally, ransomware groups focus on encrypting files and threatening public leaks. LunaLock took it a step further by using AI as a form of intimidation. Its threat to feed users' artwork into AI training datasets played on widespread anxiety about unauthorized data use in generative AI systems.
The threat carried real weight. It weaponized the irreversible nature of AI training: once data is absorbed into a model, it is nearly impossible to extract or delete, making the consequences of non-payment far more enduring than a typical dark-web leak.
Unlike traditional ransomware operations that target corporations or institutions likely to pay, LunaLock went after freelancers and creatives, a demographic already fighting to protect its work from both hackers and AI scraping.
The Broader Landscape: AI, Identity and Extortion Trends
Across the cybercrime ecosystem, attackers like LunaLock increasingly use AI to scale identity-focused attacks. Criminals now deploy voice cloning, deepfake imagery, and AI-generated messages to support impersonation scams and blackmail. As these tools improve, victims find it ever harder to differentiate between what is real and what is fake.
At the same time, security researchers and law-enforcement agencies warn that AI-driven identity abuse continues to erode traditional safeguards.
Attackers bypass voice verification, exploit trust signals, and automate highly personalized threats. Consequently, criminals now target creative ownership, reputation, and personal control rather than system access alone.
In this context, LunaLock signals a clear direction for cyber extortion. By weaponizing fears around AI training and data misuse, the group showed that criminals can extract payment without relying on data leaks alone.
As AI lowers the cost of imitation and replication, cybercriminals will increasingly treat identity itself as the ransom.