

Access to sensitive enterprise data
Unlike large language models (LLMs) such as ChatGPT, which typically are not given access to sensitive data, today's agentic models routinely handle customer details, financial records, intellectual property, legal documentation and supply chain information – a development that has alarmed the tech community.
A striking 92% of SailPoint survey respondents say that governing AI agents is vital to enterprise security. The study also revealed troubling incidents: 23% reported that their AI agents had been tricked into divulging access credentials.
Moreover, 80% of companies reported AI agents executing unintended actions, including unauthorised system access (39%), sharing of sensitive data (33%) and downloading of inappropriate content (32%).
Chandra underscores the need for stringent governance guidelines to prevent such incidents from recurring.
“As organisations expand their use of AI agents, they must take an identity-first approach to ensure these agents are governed as strictly as human users, with real-time permissions, least privilege and full visibility into their actions,” he states.