Google recently revealed that North Korean cyber actors tracked as UNC2970 are using Gemini to automate target profiling and streamline their long-running phishing campaigns. By synthesizing open-source intelligence and researching specific defense-industry job roles, the hackers blur the line between legitimate professional research and malicious reconnaissance, accelerating their attack cycles.
Google reported that the North Korean threat group UNC2970 has integrated the Gemini AI model into its operations to conduct sophisticated reconnaissance against high-value targets. The group, which is linked to the notorious Lazarus Group, focuses primarily on the aerospace and defense sectors. By using AI to synthesize public information and map out technical roles and salary data at specific companies, the actors can craft highly convincing personas for their Operation Dream Job phishing campaigns. The approach marks a notable evolution in how state-sponsored groups use large language models to identify soft targets and plan initial compromises more efficiently.
The UNC2970 activity is part of a broader trend in which hacking groups around the world are weaponizing generative AI to enhance their technical workflows. Chinese-linked groups such as Mustang Panda and APT31, for instance, have used Gemini to compile dossiers on targets and automate vulnerability analysis. Other actors have leveraged the model to troubleshoot exploit code, develop web scanners, and research proofs of concept for specific software flaws. These applications show AI being used across the stages of the cyber attack life cycle, from information gathering to the refinement of malicious scripts.
Iranian threat actors, specifically APT42, have also been observed using Gemini to facilitate social engineering and develop specialized technical tools. This group has used the AI to craft engaging personas and build custom scrapers and management systems in languages like Python and Rust. Google's report highlights that these actors are not just asking simple questions but are using the model to outsource complex tasks like code development and the creation of targeted testing plans. This allows even small teams to operate with the technical depth and speed typically reserved for much larger, better-funded organizations.
Beyond state-sponsored espionage, Google identified instances of AI-driven tools being used for financial gain and automated malware generation. The discovery of a phishing kit called COINBAIT, which was built using an AI development platform to impersonate cryptocurrency exchanges, illustrates the growing accessibility of high-quality fraudulent tools. Furthermore, a new malware strain known as HONESTCUE has been found leveraging Gemini's API to dynamically generate functional code for its secondary stages. This indicates that attackers are beginning to integrate AI directly into the architecture of their malicious software to make it more adaptable and harder to detect.
The tech giant's findings point to a lasting shift in the threat landscape, with AI tools serving as a force multiplier for both reconnaissance and technical development. By cutting the time needed to research victims and debug code, these models let attackers move from planning to active targeting far more quickly. While Google continues to monitor and disrupt this activity, the tool's versatility across threat clusters, from North Korean intelligence services to financially motivated cybercriminals, underscores the ongoing challenge of keeping legitimate AI models from being repurposed for malicious ends.
Source: Google Warns State Hackers Use Gemini AI For Recon And Attack Support