
CrowdStrike released its 2025 Threat Hunting Report on Monday, revealing a new phase of cybercrime: the widespread use of generative AI by adversaries to accelerate and escalate attacks. The study shows how threat actors have exploited vulnerabilities in autonomous AI agents, solidifying these systems as one of the main risk vectors for companies.
Among the highlights, the report points out that groups like the North Korean-linked FAMOUS CHOLLIMA have automated their entire attack cycle—from creating fake profiles with deepfakes to technically executing intrusions—using GenAI. Other adversaries, such as EMBER BEAR (Russia) and CHARMING KITTEN (Iran), are also leveraging LLMs to power disinformation and targeted phishing campaigns.
The document also warns that AI-generated malware is already a reality, pointing to families such as Funklocker and SparkCat, whose malicious code was produced with generative models. The SCATTERED SPIDER group also reappeared in 2025 with increasingly rapid and aggressive identity-based attacks, while Chinese adversaries drove a 136% increase in cloud intrusions.
“The offensive use of GenAI has dramatically lowered the barrier to entry for advanced attacks,” says Adam Meyers, global head of adversary operations at CrowdStrike. “And now, the very AI agents that automate processes within enterprises have become prime targets. Each AI identity represents a new point of attack—fast, integrated, and high-value.”
The company stresses the need for dedicated protection of AI-based infrastructure, noting that autonomous agents are already being treated as critical assets by adversaries, alongside cloud consoles, SaaS platforms, and privileged accounts.
The full report can be accessed on the CrowdStrike website, along with additional resources on adversaries and defense best practices.
Follow TI Inside on LinkedIn and stay up to date with the main market news.