Vietnam Hackers Use Fake AI Videos to Spread Malware
Over the past year, cybersecurity researchers have uncovered a sophisticated campaign by a Vietnam-linked hacking group, tracked as UNC6032, leveraging the hype around AI-generated video tools to deliver malware to unsuspecting users. By creating a network of fraudulent websites impersonating popular services like Luma AI and Canva Dream Lab, and promoting them through social media ads, the attackers have managed to infect thousands of systems worldwide with information-stealers, backdoors, and remote-access Trojans. Despite takedown efforts by platforms and security firms, the campaign continues to evolve, underscoring the importance of vigilance when interacting with AI-themed content online.
The story begins with the rapid rise of AI video generation tools in 2024, which promised users the ability to transform simple text prompts into engaging video clips with minimal effort. Cybercriminals saw an opportunity in this craze and set up dozens of lookalike sites offering “free” or “unlimited” AI video creation. Once a visitor attempted to generate a clip, they were asked to download what seemed like a necessary software package—a ZIP file containing an executable that, when run, installed malicious payloads instead of the promised AI application.
In many cases, these ads appeared on Facebook and LinkedIn, where they mimicked official marketing materials, complete with polished screenshots and user testimonials. Victims clicking through were funneled to pages with domain names resembling legitimate brands, such as luma-ai[.]app or dreamlab-canva[.]com, according to reporting by The Record from Recorded Future. Once the malware was in place, it could log keystrokes, steal saved credentials, harvest cryptocurrency wallets, and in some variants, establish a persistent backdoor for ongoing access.
The Rise of AI Video Generators and the Lure of Fake Tools
Why AI Videos Became a Prime Target
When platforms like Luma AI and Canva introduced text-to-video features in early 2024, they tapped into a powerful trend: user-generated content. For marketers, educators, and hobbyists alike, the promise of quickly producing polished videos was irresistible. As usage numbers climbed into the millions by mid-2024, threat actors recognized that any ad or site promising “free AI videos” would draw significant traffic.
Setting the Trap: How Fraudulent Sites Imitate the Real Deal
The bogus platforms often featured near-identical interfaces to their legitimate counterparts, complete with AI-generated demo videos playing in the page header. Some even offered limited free credits, right up until the download step, when users were prompted to install a supposedly required "video rendering engine." Unpacking the installer revealed malware families such as STARKVEIL (a dropper), GRIMPULL (an infostealer), and FROSTRIFT (a remote-access tool).
How the Malware Works
Initial Infection and Privilege Escalation
Once executed, the malware dropper would typically check for virtualized or sandboxed environments and for running analysis tools such as Wireshark or OllyDbg before proceeding. It then unpacked a second-stage payload, establishing persistence by creating scheduled tasks or modifying registry Run keys.
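To make that persistence step concrete, below is a minimal defender-side Python sketch (Windows-only, using the standard winreg module and the built-in schtasks utility) that enumerates the classic Run registry keys and dumps scheduled tasks for review. The temp-path heuristic is illustrative and not derived from actual STARKVEIL samples.

```python
# Minimal sketch: enumerate common Windows persistence locations that
# commodity droppers are reported to abuse. Windows-only (uses winreg);
# the "suspicious path" keywords are illustrative, not real IOCs.
import subprocess
import winreg

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def list_run_entries():
    """Yield (name, command) pairs from the classic Run registry keys."""
    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:
                break
            yield name, value
            index += 1

def list_scheduled_tasks() -> str:
    """Return the raw schtasks listing for manual review."""
    result = subprocess.run(["schtasks", "/query", "/fo", "LIST"],
                            capture_output=True, text=True)
    return result.stdout

if __name__ == "__main__":
    for name, cmd in list_run_entries():
        # Flag entries launching binaries from user-writable temp paths,
        # a common trait of droppers (a heuristic, not a verdict).
        suspicious = any(p in cmd.lower() for p in ("\\appdata\\", "\\temp\\"))
        print(("SUSPECT " if suspicious else "        ") + f"{name}: {cmd}")
    print(list_scheduled_tasks())
```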
Data Theft and Credential Harvesting
Victims soon found that their saved passwords, browser cookies, and even SSH keys were exfiltrated to attacker-controlled servers. Some infostealers targeted specific file types, such as .docx and .xlsx, hoping to grab sensitive business documents.
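As a rough way to gauge what a single infected workstation could expose, the short Python sketch below inventories documents matching those file types. The starting directory and extension list are assumptions to adapt to your own environment.

```python
# Minimal sketch, defender-side: count documents of the file types this
# campaign reportedly sweeps for. Root path and extensions are illustrative.
from pathlib import Path

TARGETED_EXTENSIONS = {".docx", ".xlsx"}  # extend as needed

def exposed_files(root: Path):
    """Yield files under `root` whose extension matches the targeted set."""
    for path in root.rglob("*"):
        if path.is_file() and path.suffix.lower() in TARGETED_EXTENSIONS:
            yield path

if __name__ == "__main__":
    hits = list(exposed_files(Path.home() / "Documents"))
    print(f"{len(hits)} potentially exposed documents found")
```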
Social Media Ads: The Primary Delivery Mechanism
From Facebook to LinkedIn: Platforms Abused
The attackers bought sponsored posts on both Facebook and LinkedIn, selecting demographics interested in AI, video editing, and digital marketing. According to Mandiant, Google Cloud's threat intelligence unit, more than 5,000 unique ads were detected between November 2024 and May 2025.
Ad Content and Targeting Strategy
Many ads highlighted “unlimited free trials” or “exclusive beta access,” complete with low-resolution sample videos to entice clicks. Targeting parameters included age (25–45), interests (AI, video production), and regions (primarily North America and Europe).
Victimology: Who’s Getting Hit?
Industries and Regions at Risk
While initial infections were concentrated in tech-savvy sectors such as software development and digital marketing, the campaign has since widened its net. Recent telemetry shows infections in healthcare, finance, and education, with victims in over 30 countries.
Case Study: A Marketing Agency Breach
In one incident, a small marketing agency downloaded a fake “Canva Dream Lab” tool. Within hours, the attackers had harvested client login credentials for multiple social media accounts, leading to brand impersonation and fraudulent ad spending totaling $50,000 before discovery.
Detection and Takedown Efforts
Platform Responses
Meta and LinkedIn have protocols for rapid ad removal once malware is flagged. However, UNC6032’s strategy of constantly rotating domains and ad creatives allows them to stay one step ahead.
Security Vendor Actions
Firms like Mandiant and Check Point Research collaborate with Google to sinkhole malicious domains and issue public alerts. In May 2025, dozens of domains were seized, though new ones appeared within days.
Technical Deep Dive
Malware Anatomy
STARKVEIL dropper: Low-level loader that unpacks additional payloads.
GRIMPULL infostealer: Harvests saved passwords and cookies.
FROSTRIFT RAT: Offers remote desktop capabilities and keylogging.
Each component communicates with a command-and-control (C2) server using HTTPS POST requests, often blending in with legitimate traffic.
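Regular, low-jitter POST intervals are one of the few traits that separate this kind of beaconing from ordinary browsing. The Python sketch below illustrates the idea against Zeek-style JSON HTTP logs; the field names (ts, id.resp_h, method) and the thresholds are assumptions you would adjust to your own log source.

```python
# Minimal sketch: flag possible C2 beaconing in HTTP logs. Assumes
# Zeek-style JSON lines with "ts", "id.resp_h", and "method" fields;
# thresholds are illustrative, not tuned detection values.
import json
import statistics
from collections import defaultdict

def find_beacons(log_path, min_posts=20, max_jitter=2.0):
    """Return destinations receiving many POSTs at near-regular intervals."""
    posts = defaultdict(list)  # destination IP -> list of timestamps
    with open(log_path) as fh:
        for line in fh:
            rec = json.loads(line)
            if rec.get("method") == "POST":
                posts[rec["id.resp_h"]].append(float(rec["ts"]))

    suspects = []
    for dest, times in posts.items():
        times.sort()
        if len(times) < min_posts:
            continue
        gaps = [b - a for a, b in zip(times, times[1:])]
        # A near-constant interval between requests is a classic beacon trait.
        if statistics.pstdev(gaps) <= max_jitter:
            suspects.append((dest, len(times), statistics.mean(gaps)))
    return suspects

if __name__ == "__main__":
    for dest, count, interval in find_beacons("http.log"):
        print(f"{dest}: {count} POSTs every ~{interval:.1f}s")
```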
Obfuscation and Evasion
Payloads use packers such as .NET Reactor and UPX, and employ code-injection techniques to hide within trusted processes like InstallUtil.exe.
Prevention Tips for Users
Verify the Source
Before downloading any AI tool, check the official website URL. Beware of domains with subtle misspellings or extra words like “app,” “pro,” or “studio” attached.
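As an illustration, a simple similarity check can catch many of these lookalikes automatically. The Python sketch below uses difflib from the standard library; the brand list, affix list, and threshold are illustrative assumptions, and no heuristic replaces navigating to the vendor's official site yourself.

```python
# Minimal sketch: flag lookalike domains before downloading anything.
# Brand list, affixes, and threshold are illustrative assumptions.
from difflib import SequenceMatcher

KNOWN_BRANDS = {"luma.ai", "canva.com"}  # official domains you trust
SUSPECT_AFFIXES = ("app", "pro", "studio", "free")

def looks_like_impostor(domain: str, threshold: float = 0.6) -> bool:
    """True if `domain` resembles a known brand without matching it exactly."""
    domain = domain.lower().strip(".")
    if domain in KNOWN_BRANDS:
        return False
    for brand in KNOWN_BRANDS:
        similarity = SequenceMatcher(None, domain, brand).ratio()
        brand_name = brand.split(".")[0]
        # Similar spelling, or the brand name glued to an extra word.
        if similarity >= threshold or (
            brand_name in domain and any(a in domain for a in SUSPECT_AFFIXES)
        ):
            return True
    return False

if __name__ == "__main__":
    # The last two are the defanged examples cited earlier in this article.
    for d in ("luma.ai", "luma-ai.app", "dreamlab-canva.com"):
        print(d, "->", "SUSPECT" if looks_like_impostor(d) else "ok")
```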
Use Up-to-Date Security Software
Modern antivirus and endpoint protection platforms can flag suspicious installers. Enable real-time scanning and regularly update threat definitions.
Best Practices for Organizations
Employee Training: Run phishing simulations that include fake “AI tools” to gauge employee awareness.
Ad Blockers: Consider enterprise-wide ad-blocking solutions to reduce exposure to malicious sponsored posts.
Network Monitoring: Inspect outbound traffic for unusual patterns or connections to known malicious C2 servers; a minimal matching sketch follows this list.
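For the monitoring item above, here is a hedged sketch that matches outbound connection records against a published IOC blocklist. The file formats (a plain-text blocklist with one entry per line, and a CSV log with a "dest" column) are assumptions standing in for your own telemetry and threat feed.

```python
# Minimal sketch: match outbound connection logs against a C2 blocklist.
# Blocklist and log formats below are illustrative assumptions.
import csv

def load_blocklist(path: str) -> set[str]:
    """Load one domain/IP per line, skipping blanks and comments."""
    with open(path) as fh:
        return {line.strip().lower() for line in fh
                if line.strip() and not line.startswith("#")}

def flag_connections(log_path: str, blocklist: set[str]):
    """Yield rows from the connection log whose destination is blocklisted."""
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["dest"].strip().lower() in blocklist:
                yield row

if __name__ == "__main__":
    iocs = load_blocklist("c2_blocklist.txt")
    for hit in flag_connections("outbound_connections.csv", iocs):
        print("ALERT:", hit)
```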
The Future of AI-Themed Malware Campaigns
As AI tools become more integrated into everyday workflows, attackers will continue exploiting user trust. Emerging threats may involve deepfake videos tailored to specific targets or AI-driven phishing lures.
Conclusion
The UNC6032 campaign highlights how quickly cybercriminals adapt to popular trends—this time by weaponizing the excitement around AI video generation. By recognizing the telltale signs of fraudulent AI tools, keeping security software current, and educating users on safe download practices, individuals and organizations can mitigate the risk. Stay cautious, verify before you click, and treat unexpected “free AI” offers with a healthy dose of skepticism.
FAQs
1. How can I tell if an AI video tool website is genuine?
Check for official domain names (e.g., luma.ai). Look for HTTPS and valid certificates, read user reviews, and verify links from trusted tech news sources.
2. What should I do if I’ve already downloaded a fake AI video tool?
Immediately disconnect from the internet, run a full antivirus scan, change passwords on potentially affected accounts, and consider restoring your system from a clean backup.
3. Are macOS users at risk from these malware campaigns?
While most installers target Windows, attackers are increasingly bundling macOS binaries. macOS users should also verify downloads and use security tools.
4. How often do these malicious domains reappear after takedown?
UNC6032 typically registers new domains every few days. Continuous monitoring by security vendors helps identify and disable them quickly.
5. Will deepfake videos be the next big malware delivery vector?
It’s likely. As deepfake creation tools mature, we can expect more personalized and convincing social engineering attacks that leverage AI-generated audio and video.