Google Uncovers AI-Powered Malware Targeting Crypto Wallets

This article was prepared using automated systems that process publicly available information. It may contain inaccuracies or omissions and is provided for informational purposes only. Nothing herein constitutes financial, investment, legal, or tax advice.

Introduction

Google has identified five new malware families that leverage large language models to generate malicious code and target cryptocurrency wallets. State-linked groups including North Korea’s UNC1069 are using AI models like Gemini to craft sophisticated phishing scripts and locate wallet data. The tech giant has disabled malicious accounts and implemented enhanced safeguards against model abuse.

Key Points

  • North Korean hacking group UNC1069 used Gemini AI to generate scripts for locating cryptocurrency wallet data and creating targeted phishing content
  • Malware families PROMPTFLUX and PROMPTSTEAL call AI models during execution to evade detection: PROMPTFLUX rewrites its own code every hour, while PROMPTSTEAL generates Windows commands on demand
  • Google has implemented new safeguards including refined prompt filters and enhanced API monitoring to combat AI-powered cyber threats

The Rise of AI-Powered Malware

Google’s Threat Intelligence Group has uncovered a disturbing new trend in cybercrime: at least five distinct malware families now actively use large language models during execution to modify or generate malicious code. This marks a significant evolution in how state-linked and criminal actors deploy artificial intelligence in live operations, creating what Google describes as “just-in-time code creation.” Unlike traditional malware where malicious logic is hard-coded into the binary, these new variants dynamically generate malicious scripts and obfuscate their own code to evade detection.

The malware families leverage external AI models such as Gemini or Qwen2.5-Coder at runtime to create malicious functions on demand. This approach represents a fundamental shift from conventional malware design, allowing the software to continually modify itself and harden against the security systems designed to detect it. By outsourcing parts of its functionality to AI models, the malware becomes more adaptive and harder to catch with traditional, signature-based defenses.
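
To make the pattern concrete, the sketch below shows the general "just-in-time code creation" loop on a deliberately harmless task. The endpoint, payload shape, and helper names are illustrative assumptions, not any vendor's actual API; the point is simply that the generated behavior never exists in the file a scanner inspects.

```python
import requests  # third-party; pip install requests

# Hypothetical endpoint standing in for any hosted model API.
LLM_ENDPOINT = "https://llm.example.com/v1/generate"

def fetch_generated_code(task: str) -> str:
    """Ask a remote model to write a small Python function at runtime."""
    resp = requests.post(
        LLM_ENDPOINT,
        json={"prompt": f"Write a Python function named run() that {task}. "
                        "Return only the code."},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response shape

# The behavior below never exists on disk; it arrives over the network at
# execution time, which is why static signatures have nothing to match.
code = fetch_generated_code("returns today's date as an ISO-8601 string")
namespace: dict = {}
exec(code, namespace)   # materialize the generated function
print(namespace["run"]())
```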

North Korea's UNC1069 Exploits Gemini for Crypto Theft

Among the most concerning developments is the activity of North Korean group UNC1069, which Google describes as “a North Korean threat actor known to conduct cryptocurrency theft campaigns leveraging social engineering.” The group specifically misused Google’s Gemini model to advance its cryptocurrency theft operations, demonstrating how AI tools are being weaponized in the digital asset space.

According to Google’s technical brief, UNC1069’s queries to Gemini included specific instructions for locating wallet application data, generating scripts to access encrypted storage, and composing multilingual phishing content aimed at crypto exchange employees. The group notably used “language related to computer maintenance and credential harvesting” in its operations, with these activities appearing to be part of a broader attempt to build code capable of stealing digital assets.

PROMPTFLUX and PROMPTSTEAL: The New Malware Families

Two of the identified malware families, PROMPTFLUX and PROMPTSTEAL, demonstrate the sophisticated integration of AI models into malicious operations. PROMPTFLUX runs what Google calls a “Thinking Robot” process that queries Gemini’s API every hour to rewrite its own VBScript code, creating a constantly evolving threat that can adapt to security measures in near real time.

PROMPTSTEAL, which has been linked to Russia’s APT28 group, uses the Qwen2.5-Coder model hosted on Hugging Face to generate Windows commands on demand. This capability allows the malware to create bespoke functions as needed during execution, making it particularly challenging for traditional security systems to identify and block. The use of multiple AI models across different malware families suggests this approach is becoming standardized among sophisticated threat actors.
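
One defensive implication follows directly from PROMPTFLUX’s hourly cadence: scheduled, machine-driven calls to model APIs look very different from human use. The heuristic below is a rough sketch of how a defender might flag that pattern; the log schema, host list, and thresholds are assumptions for illustration, not a production detection rule.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Illustrative host list; real telemetry would use a maintained indicator set.
LLM_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "huggingface.co",                     # hosted models such as Qwen2.5-Coder
}

def flag_periodic_llm_beacons(events, min_hits=4, max_jitter_s=120):
    """events: iterable of (epoch_seconds, process_name, dest_host) tuples.

    Returns (process, mean_interval_seconds) pairs whose calls to LLM hosts
    recur at a near-constant interval, e.g. roughly every 3600 seconds.
    """
    by_proc = defaultdict(list)
    for ts, proc, host in events:
        if host in LLM_HOSTS:
            by_proc[proc].append(ts)

    suspects = []
    for proc, times in by_proc.items():
        if len(times) < min_hits:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        # Near-constant gaps suggest a scheduler, not a human at a keyboard.
        if pstdev(gaps) <= max_jitter_s:
            suspects.append((proc, round(mean(gaps))))
    return suspects
```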

Google's Response and Enhanced Safeguards

In response to these emerging threats, Google has taken immediate action by disabling the accounts tied to these malicious activities. The company has also implemented new safeguards to limit model abuse, including refined prompt filters and tighter monitoring of API access. These measures represent Google’s attempt to stay ahead of threat actors who are increasingly leveraging AI capabilities for malicious purposes.
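
Google has not published the details of its filters, but a toy example conveys the general idea of server-side prompt screening. Everything in this sketch, from the pattern list to the function name, is a hypothetical illustration rather than Google’s implementation.

```python
import re

# Example patterns only; a real system would combine classifiers and abuse
# signals far beyond simple keyword matching.
SUSPICIOUS_PATTERNS = [
    r"wallet\.dat",
    r"credential\s+harvest",
    r"rewrite\s+(?:your|its)\s+own\s+(?:source|code)",
    r"evade\s+(?:antivirus|detection|edr)",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be blocked or escalated for review."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

# Hypothetical usage: route flagged requests away from the model.
if screen_prompt("generate a script to locate wallet.dat files on disk"):
    print("flagged for review")
```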

The findings point to a new attack surface where malware queries LLMs at runtime to locate wallet storage, generate bespoke exfiltration scripts, and craft highly credible phishing lures. This development could fundamentally change approaches to threat modeling and attribution in the cybersecurity industry, as AI-powered malware creates new challenges for detection and prevention.
