Generative AI is reshaping cyber security - for better and worse. On one hand, it introduces new capabilities for detecting threats and securing systems. On the other, it’s equipping cybercriminals with powerful new attack methods.
Cybercriminals are exploiting generative AI to create deepfake scams, launch sophisticated phishing attacks, and even automate hacking. AI-generated deepfakes have fooled finance professionals into wiring millions to scammers. Phishing websites that perfectly mimic legitimate ones can now be generated in minutes. Malicious AI models, designed specifically for cyberattacks, are being sold on the dark web. Even governments are getting involved - state-sponsored hackers are using AI to enhance their cyber warfare tactics. Studies show a 210% rise in AI-driven cyberattacks in 2024 alone.
Fortunately, generative AI is not just for hackers. Cyber security professionals are leveraging it for advanced threat detection, automated vulnerability scanning, and deepfake detection. Large language models (LLMs) can search through massive amounts of security data, identifying patterns and potential threats faster than humans could. Major players like Microsoft and Google are integrating AI-powered security systems that help organizations anticipate and mitigate cyber threats. They also introduced AI-powered red teaming, where the AI attacks itself or systems to expose weaknesses. This could be becoming a key tool for improving cybersecurity defenses.
However, it is important to put emphasis on the fact that AI driven systems also introduce new risks, which is especially important as companies feel the need to integrate generative AI in their offerings and might rush its introduction and therefore putting their systems at risk. Attackers can manipulate AI models by poisoning training data, embedding malicious code, or tricking AI into revealing sensitive information through prompt engineering. Even unintentional user errors - such as Samsung employees accidentally leaking trade secrets via ChatGPT - highlight the dangers of unchecked AI usage.
Aware of these risks, governments, companies and organizations are launching initiatives to set up safeguards. The EU’s AI Act imposes stricter rules on high-risk AI applications, while private initiatives like Google’s Secure AI Framework and MITRE’s AI threat database are working to strengthen AI security. At the same time though, the Trump administration vows to deregulate AI. It will be interesting to see how this will affect the security of generative AI in the future.
As AI continues to evolve, cyber security will be locked in an arms race between defenders and attackers. One thing is clear: understanding and managing the risks of generative AI is now a top priority for individuals, businesses, and governments alike.
Read more in the paper: Download