

A recently fixed critical vulnerability in Microsoft’s Copilot AI tool could have let a remote attacker steal sensitive data from an organization simply by sending an email, researchers say.
The vulnerability, dubbed EchoLeak and assigned the identifier CVE-2025-32711, could have allowed hackers to mount an attack without the target user having to do anything. EchoLeak represents the first known zero-click attack on an AI agent, according to researchers at Aim Security, which released the findings in a Wednesday blog post.
“This vulnerability represents a significant breakthrough in AI security research because it demonstrates how attackers can automatically exfiltrate the most sensitive information from Microsoft 365 Copilot’s context without requiring any user interaction whatsoever,” Adir Gruss, co-founder and CTO at Aim Security, told Cybersecurity Dive via email.
An EchoLeak attack could have exploited what researchers call an “LLM scope violation,” in which untrusted input from outside an organization can commandeer an AI model to access and steal privileged data.
Vulnerable data could potentially include everything to which Copilot has access, including chat histories, OneDrive documents, Sharepoint content, Teams conversations and preloaded data from an organization.
Gruss said Microsoft Copilot’s default configuration left most organizations at risk of attack until recently, although he cautioned that there was no evidence any customers were actually targeted.
Microsoft, which has been coordinating with researchers about the vulnerability for months, released an advisory on Wednesday that said the issue was fully addressed and no further action was necessary.