
Artificial intelligence is once again being used to carry out sophisticated threats, and this time crypto firms are the target. In early 2026, ISACA warned that AI-assisted hacking groups are combining deepfake calls, social engineering and automated target profiling to infiltrate crypto firms and steal money or credentials.
This development marks a new stage in social engineering: attackers no longer rely on phishing emails alone. Attacks have evolved into multi-stage, AI-enhanced operations that manipulate employees and exploit human trust.
How AI Transformed Social Engineering
AI-driven attacks are a significant and fast-evolving threat. Generative AI tools are now being used to create hyper-realistic phishing and vishing scripts, mine public data to tailor scams, and produce deepfakes that impersonate trusted figures with near-perfect accuracy.
For example, a North Korean hacking group tracked as UNC1069 has been observed targeting the crypto sector to steal sensitive data and money. The attack relied on a social engineering scheme involving a compromised Telegram account, a deepfake impersonating an executive, and a spoofed version of Zoom to trick victims into installing malware or sharing credentials.
In a recent case, the attackers convinced the victim they had audio issues and offered a ClickFix-style troubleshooting fix, walking the victim into pasting and running a command. Once the victim ran the command, it deployed rare, barely detectable malware families such as HYPERCALL and WAVESHAPER.
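ClickFix lures all share one mechanical step: the victim pastes a "fix" into a terminal or Run dialog. As a minimal defensive sketch (the patterns below are illustrative assumptions, not taken from the UNC1069 reporting), a monitoring tool could flag pasted commands that match common lure shapes:

```python
import re

# Hypothetical lure shapes (illustrative, not from any vendor report): command
# patterns often seen when victims are told to paste a "troubleshooting fix".
CLICKFIX_PATTERNS = [
    re.compile(r"curl\s+[^|]+\|\s*(ba)?sh"),                     # download piped into a shell
    re.compile(r"powershell[^\n]*-enc(odedcommand)?\s", re.I),   # encoded PowerShell payload
    re.compile(r"mshta\s+https?://", re.I),                      # remote HTA execution
    re.compile(r"base64\s+(-d|--decode)[^\n]*\|\s*(ba)?sh"),     # decode-and-run one-liner
]

def looks_like_clickfix(command: str) -> bool:
    """Return True if a pasted 'fix' command matches a known lure shape."""
    return any(p.search(command) for p in CLICKFIX_PATTERNS)
```

Such a check could sit behind clipboard monitoring or an EDR rule; it will not catch novel payloads, but it raises the cost of the most common lure formats.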
Moreover, these attacks layer voice cloning, bespoke emails and automated target profiling, making them far harder to detect.
Why Crypto Firms are the Target
Crypto companies face unique operational challenges that increase risk. Many teams work remotely and coordinate on platforms like Discord and Telegram, which makes them softer targets.
At the same time, crypto firms work with high-value clients, have direct access to funds, and stolen crypto is often extremely difficult to recover.
Furthermore, Mandiant has repeatedly warned that groups such as the North Korean actors are intensifying social engineering and credential attacks against crypto firms and fintechs.
This reinforces a broader trend: AI-driven scams and identity theft have surged across all sectors, pushing social engineering to the top of the threat landscape in 2026.
The Escalating Impact and Industry Response
The consequences of these AI-assisted attacks are severe. Crypto scams and fraud attacks are expected to hit record levels in 2026, with AI enabling more convincing impersonation and theft tactics.
In response, firms are now adopting multi-factor verification processes and AI-powered threat analysis to defend against adaptive AI threats.
Additionally, cyber intelligence teams encourage the use of deepfake detection tools and advanced employee training that explicitly covers AI-generated deception techniques.
Ultimately, as attackers leverage AI to automate social engineering and personalize attacks at scale, defenses must evolve just as quickly. The crypto industry should focus on building strong internal governance and deploying AI-driven protection systems.