
With the growing use of AI-powered chatbots, and ever more funding flowing into building and democratizing artificial intelligence, AI-powered cyber threats are now on the horizon.
For quite some time now, cybersecurity professionals have been asking important questions, one of which is how AI companies and startups manage sensitive data. There is also the worry that AI will be used to fuel criminal activity on the darknet, and that it will arm individuals with AI-generated phishing and malware content to target both individuals and large corporations.
So it is understandable that, as AI gains more traction with each passing year, industry professionals are increasingly demanding that it be regulated – they recognize that it can drive innovation, but also that it can be put to far more dangerous purposes.
Christopher Kalodikis, a Computing Studies tutor, describes a scenario of an AI-powered cyber attack in a YouTube video: “In an AI-based phishing attack, the attackers use an AI algorithm to analyse social media profiles, email patterns and other publicly available data to craft highly personalized phishing emails for key employees. They would then generate emails appearing as a CEO or employer, instructing managers to approve a large transfer of funds.”
“The email is so well crafted and personalised that the managers would follow the instructions, resulting in a significant financial loss for the company. AI-powered malware could also be embedded in the phishing email, infiltrating the network, gathering sensitive data and adapting its behaviour to evade detection by the institution’s cybersecurity defences.”
Another example of AI-powered cyber threats is the emergence of malicious AI tools such as WormGPT and FraudGPT, which are known for generating phishing and other malicious content.
But it is also important to note that AI can be used to fight back. In an article published in The Hill, Victor Benjamin, an Assistant Professor of Information Systems at ASU, writes: “Our best weapon is to fight fire with fire by using AI as a defensive cybersecurity tool. One of its greatest strengths is pattern recognition, which can be used to automate monitoring of networks and more easily identify potentially harmful activity. AI can compile emerging threats in a database and generate summaries of attempted attacks.”
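To make that concrete, here is a minimal sketch of the kind of pattern-recognition monitoring Benjamin describes. It is an illustration rather than a production tool: the event features, thresholds, and the use of scikit-learn's IsolationForest and a SQLite "threat database" are assumptions chosen for brevity.

```python
# Minimal sketch: flag anomalous network events with an unsupervised model
# and record them in a small "threat database" with a plain-text summary.
# Assumptions: per-event features (KB sent, session seconds, failed logins)
# are already extracted; IsolationForest and SQLite stand in for a real pipeline.
import sqlite3
from sklearn.ensemble import IsolationForest

# Toy training data assumed to represent benign traffic.
normal_traffic = [
    [120, 30, 0], [95, 25, 0], [150, 40, 1], [110, 35, 0], [130, 28, 0],
]
new_events = [
    [125, 32, 0],      # looks like routine traffic
    [9800, 400, 14],   # large transfer plus many failed logins
]

# Fit the anomaly detector on the benign baseline.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_traffic)

# Store flagged events so analysts can review summaries later.
db = sqlite3.connect("threats.db")
db.execute("CREATE TABLE IF NOT EXISTS threats (summary TEXT)")

for event, label in zip(new_events, model.predict(new_events)):
    if label == -1:  # IsolationForest marks outliers with -1
        summary = (f"Anomalous event: {event[0]} KB sent over {event[1]}s "
                   f"with {event[2]} failed logins")
        db.execute("INSERT INTO threats (summary) VALUES (?)", (summary,))
        print(summary)

db.commit()
db.close()
```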
IBM, for example, uses AI-powered solutions to accelerate threat detection and mitigation and to protect user identities and datasets.
So while AI poses a great risk, it can also serve as a more effective defensive tool for the cybersecurity industry. Before the advent of AI, defence was a “whack-a-mole game: A new threat emerges, humans update software to address the threat, another threat emerges, and so on,” as Benjamin puts it, with heavy reliance on white-hat hackers to check for vulnerabilities one by one.
Now, while cybersecurity professionals push for robust regulation of artificial intelligence, individuals and organizations can stay safer by applying guidance from the AI Risk Management Framework published by NIST (the National Institute of Standards and Technology), the leading standards body for cybersecurity:
- Stay Educated: Keep up with the latest AI threats and cybersecurity trends by following NIST updates and other trusted sources.
- Secure Your Digital Identity: Use strong, unique passwords and enable multi-factor authentication on all your accounts.
- Be Vigilant with Communications: Carefully scrutinize unexpected emails, messages, and links – even if they seem well-crafted – to avoid AI-enhanced scams.
- Implement Strong Access Controls: Organizations should enforce multi-factor authentication, role-based access, and strict user permissions across their workforce to secure AI systems and data (see the role-based access sketch after this list).
- Conduct Regular Risk Assessments and Continuous Monitoring: Organizations should perform periodic assessments of AI systems, monitor them for anomalies, and update risk management protocols as needed, backed by comprehensive cybersecurity policies and incident response procedures that specifically address AI-related risks (a simple monitoring sketch also follows the list).
- Adopt and Integrate Trusted Security Tools: Invest in advanced, AI-driven cybersecurity solutions that help detect, prevent, and respond to threats.
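As a rough illustration of the role-based access point above, the sketch below checks a user's role before allowing an action on an AI system. The roles, permissions, and function names are hypothetical placeholders, not any particular product's API.

```python
# Minimal sketch of role-based access control (RBAC) for an AI system.
# The roles and permissions below are hypothetical examples.
ROLE_PERMISSIONS = {
    "analyst":  {"query_model", "view_logs"},
    "engineer": {"query_model", "view_logs", "update_model"},
    "admin":    {"query_model", "view_logs", "update_model", "export_data"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Deny-by-default: unknown roles or actions are rejected.
print(is_allowed("analyst", "export_data"))   # False
print(is_allowed("admin", "export_data"))     # True
```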
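Likewise, continuous monitoring can start with something as simple as rate-based alerting on an AI system's audit log. The threshold and log format here are assumptions for illustration; a real deployment would feed richer telemetry into a proper monitoring platform.

```python
# Minimal sketch: alert when a single client queries an AI system unusually
# often within a time window, which may indicate probing or attempted data
# extraction. The threshold, window, and log format are assumptions.
from collections import Counter

WINDOW_QUERY_LIMIT = 100  # max queries per client per window (assumed)

def check_window(audit_events):
    """audit_events: list of (client_id, endpoint) tuples for one window."""
    counts = Counter(client for client, _ in audit_events)
    return [client for client, n in counts.items() if n > WINDOW_QUERY_LIMIT]

# Example window: one client hammering the model endpoint.
events = ([("client-7", "/v1/model/query")] * 150
          + [("client-2", "/v1/model/query")] * 20)
for client in check_window(events):
    print(f"ALERT: {client} exceeded {WINDOW_QUERY_LIMIT} queries in this window")
```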