
By Avivah Litan, Distinguished VP Analyst at Gartner
As organizations increasingly invest in tailored generative AI applications for enterprise automation, AI agents are emerging as pivotal components in digital transformation strategies. These agents, whether operating autonomously, semi-autonomously, or within multiagent systems, leverage artificial intelligence to perceive their environment, make decisions, and execute actions toward diverse goals. While AI agents offer promising advancements, they also introduce new risks alongside existing threats from AI models and applications. Gartner predicts that by 2028, 25% of enterprise breaches will be traced back to AI agent abuse, by both external attackers and malicious internal actors.
The exponential growth of this currently invisible attack surface created by AI agents demands new security and risk management strategies. Because this expanded vulnerability is likely to attract both external bad actors and malicious insiders, enterprises must act promptly to implement robust controls against potential threats.
To effectively address these challenges, enterprises must prioritize identity governance and administration that encompasses both human and nonhuman identities. This involves isolating sensitive content and data from AI processes and entities that should not have access. Additionally, enterprises should explore emerging techniques from new vendors offering runtime data protection, which supports contextual dynamic access management and data classification while enforcing least-privilege access. These techniques should complement existing identity and access management and information governance systems to safeguard enterprise data and access.
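To make the least-privilege principle above concrete, the sketch below shows a deny-by-default access check for an AI agent identity. All names here (`AgentIdentity`, `AccessRequest`, the scope and classification labels) are illustrative assumptions, not a specific vendor's API; real runtime data protection products implement far richer contextual policies.

```python
from dataclasses import dataclass

# Illustrative sketch only: a contextual, least-privilege access check
# for a nonhuman (AI agent) identity. Names and labels are hypothetical.

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    owner: str              # human principal accountable for the agent
    scopes: frozenset       # explicitly granted data scopes

@dataclass
class AccessRequest:
    agent: AgentIdentity
    resource: str           # e.g. "finance/invoices"
    classification: str     # e.g. "public", "internal", "restricted"
    context: dict           # runtime context: task, caller chain, time

RESTRICTED_CLASSES = {"restricted", "confidential"}

def is_allowed(req: AccessRequest) -> bool:
    """Deny by default; allow only in-scope, context-justified access."""
    # Least privilege: the resource must be in the agent's granted scopes.
    if req.resource not in req.agent.scopes:
        return False
    # Sensitive content stays isolated unless the runtime context carries
    # an explicit, auditable justification (contextual dynamic access).
    if req.classification in RESTRICTED_CLASSES:
        return bool(req.context.get("approved_task_id"))
    return True
```

For example, an invoicing agent scoped to `finance/invoices` would be denied any read of `hr/payroll`, and denied restricted invoice data unless its current task carries an approval identifier.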
As AI agent activity escalates, organizations that fail to secure these activities will become easy targets for hackers and malicious insiders exploiting the expanding and unprotected threat surface.
To prepare for the influx of AI agents, enterprises should invest in educating employees about the specific risks associated with these agents, which are increasingly prevalent in enterprise products. They should leverage homegrown or third-party tools to manage AI agent risks, fulfilling three main requirements:
- Provide all relevant organizational participants with a comprehensive view and map of agent activities, including processes, connections, data exposure, information flows, and outputs generated by agents, to detect anomalies.
- Detect and flag anomalous AI agent activities and those that violate specific preset enterprise policies.
- Autoremediate flagged behavioral anomalies and attacks in real time, because human oversight cannot scale to the volume of remediation required. Ensure that humans manually review any outlier transactions for appropriate remediation.
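The three requirements above can be sketched as a single monitoring loop: map agent activity, flag violations of preset policy or deviations from a behavioral baseline, remediate immediately, and queue outliers for human review. The thresholds, class names, and remediation step below are illustrative assumptions, not a prescribed design.

```python
from collections import defaultdict

# Illustrative sketch of the three requirements: activity mapping,
# policy/anomaly detection, and real-time autoremediation with a
# human-review queue. All thresholds and names are hypothetical.

POLICY_MAX_RECORDS = 1_000   # preset enterprise policy (illustrative)
BASELINE_FACTOR = 5          # anomaly = 5x the agent's usual volume

class AgentActivityMonitor:
    def __init__(self):
        self.activity_map = defaultdict(list)  # agent -> activity log
        self.suspended = set()                 # agents paused in real time
        self.review_queue = []                 # outliers for human review
        self.baselines = {}                    # agent -> typical volume

    def record(self, agent_id: str, action: str, records_touched: int):
        """Map every agent activity; detect and remediate violations."""
        self.activity_map[agent_id].append((action, records_touched))
        if (self._violates_policy(records_touched)
                or self._is_anomalous(agent_id, records_touched)):
            self._remediate(agent_id, action, records_touched)

    def _violates_policy(self, records_touched: int) -> bool:
        return records_touched > POLICY_MAX_RECORDS

    def _is_anomalous(self, agent_id: str, records_touched: int) -> bool:
        baseline = self.baselines.get(agent_id)
        return (baseline is not None
                and records_touched > BASELINE_FACTOR * baseline)

    def _remediate(self, agent_id: str, action: str, records_touched: int):
        # Autoremediate in real time (here: suspend the agent), then
        # queue the outlier transaction for manual human review.
        self.suspended.add(agent_id)
        self.review_queue.append((agent_id, action, records_touched))
```

The key design point is that detection triggers an automatic containment action first, and human review second, reflecting the scale argument above.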
Furthermore, enterprises must expand end-user behavior monitoring and analysis capabilities to include detecting and alerting on anomalous activity from AI agents, including unauthorized collaboration with external entities.
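One simple form such monitoring can take is an allowlist check on an agent's outbound destinations, alerting whenever an agent contacts an external entity that has not been approved. The hostnames and function names below are hypothetical placeholders.

```python
from urllib.parse import urlparse

# Illustrative sketch: flag unauthorized collaboration with external
# entities. The allowlisted hostnames are hypothetical examples.

APPROVED_EXTERNAL = {"api.internal.example.com", "partner.example.org"}

def check_outbound(agent_id: str, url: str) -> list:
    """Return alert strings for any non-allowlisted external contact."""
    host = urlparse(url).hostname or ""
    if host not in APPROVED_EXTERNAL:
        return [f"ALERT {agent_id}: unauthorized external contact {host}"]
    return []
```

In practice this check would feed the same detection-and-remediation pipeline used for other anomalous agent behavior, rather than stand alone.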
By taking these proactive steps, enterprises can effectively manage the risks associated with AI agents, ensuring the security of their digital transformation initiatives.
Avivah Litan is a Distinguished VP Analyst in Gartner Research. Ms. Litan is currently part of the AI Strategy team at Gartner and has a strong background in many aspects of cybersecurity and fraud, including the integration of AI with these domains.