Oscar Wong/Getty Images

As generative AI technologies advance, so do the cyberattacks that exploit them. That is the conclusion of research from Microsoft and OpenAI, which unveiled findings on the malicious use of large language models (LLMs) by nation-state-backed adversaries.

Microsoft published its Cyber Signals 2024 report to detail the nation-state attacks it has detected and disrupted alongside OpenAI. The attacks were carried out by state-backed adversaries from Russia, North Korea, Iran, and China. The report also offers recommendations to help individuals and organizations prepare for potential attacks.


The research tracked attacks by the state-affiliated adversaries Forest Blizzard (Russia), Emerald Sleet (North Korea), Crimson Sandstorm (Iran), Charcoal Typhoon (China), and Salmon Typhoon (China). Each group used LLMs to augment its cyber operations, including assistance with research, troubleshooting, and content generation.

For example, North Korean threat actor Emerald Sleet leveraged LLMs for various activities, including researching think tanks and experts on North Korea, generating content for spear-phishing campaigns, understanding publicly known vulnerabilities, troubleshooting technical issues, and gaining familiarity with different web technologies.


Similarly, Iranian threat actor Crimson Sandstorm used LLMs to support social-engineering efforts and to troubleshoot technical errors.

For more detail on each nation-state threat, including its affiliation and its use of LLMs, the report includes a section dedicated to individual threat briefings.

Microsoft also warns about the emerging and increasingly concerning threat of AI-powered fraud, such as voice synthesis, which lets an attacker train a model to sound like anyone using as little as a three-second voice sample.


While the report highlights the use of generative AI by malicious actors, it also emphasizes that defenders, such as Microsoft, can use the same technology to develop smarter protection and stay ahead in cybersecurity's constant cat-and-mouse game.

Every day, Microsoft processes more than 65 trillion cybersecurity signals. AI helps analyze those signals and surface the most valuable intelligence in time to stop threats, the report notes.


Microsoft also outlines other ways it is using AI, including AI-enabled threat detection, behavioral analytics, machine learning (ML) models to detect risky sign-ins and malware, Zero Trust models, and device health verification before a device can connect to a corporate network.
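To make the sign-in detection idea concrete, here is a minimal sketch of what an ML-based risky sign-in detector can look like, assuming Python with scikit-learn. It is purely illustrative: the features, training data, and model choice are invented for the example and are not Microsoft's actual system.

```python
# Illustrative only: a toy anomaly detector for risky sign-ins.
# The features and data below are hypothetical, not Microsoft's.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one sign-in event:
# [hour_of_day, failed_attempts, is_new_device, km_from_last_login]
normal_signins = np.array([
    [9,  0, 0,    0.0],
    [10, 1, 0,    2.5],
    [14, 0, 0,    0.0],
    [8,  0, 0,    1.2],
    [17, 2, 0,    5.0],
    [11, 0, 0,    0.8],
])

# Fit an isolation forest on the account's historical behavior.
model = IsolationForest(contamination=0.1, random_state=0)
model.fit(normal_signins)

# Score a new event: 3 a.m., eight failed attempts, new device, 4,200 km away.
new_event = np.array([[3, 8, 1, 4200.0]])
print(model.predict(new_event))  # -1 flags the sign-in as anomalous
```

In a Zero Trust setup like the one the report describes, a flag like this would not block access on its own; it would feed a conditional-access decision alongside device-health checks and other signals.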

In conclusion, Microsoft emphasizes the importance of continued employee and public education to combat social-engineering techniques, and notes that prevention, whether AI-enabled or not, is key to fighting all cyber threats.


