Generative AI is no longer a novelty for businesses, but an essential utility – and that means headaches for cybersecurity professionals.

According to a new report from Palo Alto Networks, generative AI traffic rocketed in 2024, rising by more than 890%. Analysis by the security firm found the technology is mostly being used as a writing assistant, accounting for 34% of use cases, followed by conversational agents at 29% and enterprise search at 11%.

Popular apps identified in the study include ChatGPT, Microsoft 365 Copilot, and Microsoft Power Apps.

While use of the technology continues apace, this boom is also giving rise to significant security issues, with cyber professionals reporting a sharp increase in data security incidents.

Data loss prevention (DLP) incidents related to generative AI more than doubled in early 2025. Meanwhile, the average monthly number of generative AI-related data security incidents rose two-and-a-half times, and such incidents now account for 14% of all data security incidents across SaaS traffic, the company found.

“Organizations are grappling with the unfettered proliferation of GenAI applications within their environments. On average, organizations have about 66 GenAI applications in use,” the researchers said.

“More importantly, 10% of these were classified as high risk,” researchers added. “The widespread use of unsanctioned GenAI tools, coupled with a lack of clear AI policies and the pressure for rapid AI adoption, can expose organizations to new risks.”

Researchers said a key problem here lies in a lack of visibility into AI usage, with shadow AI making it hard for security teams to monitor and control how tools are being used across the organization.

The study also noted that controlling unauthorized access to data is difficult, raising further concerns.

Jailbroken or manipulated AI models can respond with malicious links and malware, or be put to unintended purposes, while the proliferation of plugins, copilots, and AI agents is creating an overlooked ‘side door’.

Heightening the risk is a rapidly evolving regulatory landscape where non-compliance with emerging AI and data laws can land organizations with severe penalties.

“The uncomfortable truth is that for all its productivity gains, there are many growing concerns – including data loss from sensitive trade secrets or source code shared on unapproved AI platforms,” the researchers said.

“There’s also the risk in using unvetted GenAI tools that are vulnerable to poisoned outputs, phishing scams, and malware disguised as legitimate AI responses.”

How to address AI security risks

Organizations need to tighten up their processes, according to Palo Alto Networks.

They should implement conditional access management to limit access to generative AI platforms, apps, and plugins, and use real-time content inspection to guard sensitive data against unauthorized access and leakage.
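To make the idea concrete, here is a minimal Python sketch of a gateway that combines conditional access with real-time content inspection. The `SANCTIONED_APPS` policy table, `SENSITIVE_PATTERNS` rules, and `authorize` helper are all illustrative assumptions, not Palo Alto Networks' implementation; a production deployment would draw on an identity provider and a managed DLP engine rather than static tables.

```python
import re

# Hypothetical policy data; a real deployment would pull these from an
# identity provider and a managed DLP rule set.
SANCTIONED_APPS = {
    "chatgpt-enterprise": {"engineering", "marketing"},
    "m365-copilot": {"engineering", "finance"},
}

# Illustrative patterns only; production DLP uses far richer detection
# (ML classifiers, exact-data matching, file fingerprinting).
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def authorize(role: str, app: str, prompt: str) -> str:
    """Conditional access plus content inspection, deny by default."""
    if role not in SANCTIONED_APPS.get(app, set()):
        raise PermissionError(f"{app} is not sanctioned for role '{role}'")
    hits = [name for name, p in SENSITIVE_PATTERNS.items() if p.search(prompt)]
    if hits:
        # Block (or redact and alert) before the data leaves the network.
        raise PermissionError(f"Prompt blocked by DLP policy: {hits}")
    return prompt  # safe to forward to the GenAI service

# A prompt containing an SSN-like string is stopped even for an authorized user.
try:
    authorize("engineering", "m365-copilot", "Customer SSN is 123-45-6789")
except PermissionError as err:
    print(err)
```

The deny-by-default design means any app or role not explicitly listed is refused, which mirrors the report's recommendation for handling unsanctioned tools.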

Similarly, the study advised implementing a zero trust security framework to identify and block the often highly sophisticated, evasive, and stealthy malware and other threats that can appear within generative AI responses.
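As a rough illustration of that deny-by-default principle applied to model output, the sketch below strips any link in a generative AI response whose domain is not explicitly trusted. The `TRUSTED_DOMAINS` set and `screen_response` helper are hypothetical; real zero trust tooling would consult live threat intelligence and URL-filtering services rather than a static allowlist.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would query threat
# intelligence and URL-filtering services instead of a static set.
TRUSTED_DOMAINS = {"docs.python.org", "github.com"}

URL_RE = re.compile(r"https?://[^\s)\"']+")

def screen_response(text: str) -> str:
    """Deny by default: remove any link whose domain is not explicitly trusted."""
    def replace(match: re.Match) -> str:
        host = urlparse(match.group(0)).hostname or ""
        return match.group(0) if host in TRUSTED_DOMAINS else "[link removed]"
    return URL_RE.sub(replace, text)

# Example: an unknown domain in a model response is stripped before delivery.
print(screen_response("See https://docs.python.org/3/ and http://evil.example/payload"))
# -> See https://docs.python.org/3/ and [link removed]
```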

“The explosive growth of GenAI has fundamentally altered the digital landscape for enterprise organizations,” said the team.

“While GenAI unlocks innovation and accelerates competition, the proliferation of unauthorized AI tools is exposing organizations to greater risk of data leakage, compliance failures and security challenges.”
