

Interview Before AI becomes commonplace in enterprises, corporate leaders have to commit to an ongoing security testing regime tuned to the nuances of AI models.
That’s the view of Chatterbox Labs CEO Danny Coleman and CTO Stuart Battersby, who spoke to The Register at length about why companies have been slow to move from AI pilot tests to production deployment.
“Enterprise adoption is only like 10 percent today,” said Coleman. “McKinsey is saying it’s a four trillion dollar market. How are you actually ever going to move that along if you keep releasing things that people don’t know are safe to use or they don’t even know not just the enterprise impact, but the societal impact?”
He added, “People in the enterprise, they’re not quite ready for that technology without it being governed and secure.”
In January, consulting firm McKinsey published a report examining the unrealized potential of artificial intelligence (AI) in the workplace.
The report, “Superagency in the workplace: Empowering people to unlock AI’s full potential,” found growing interest and investment in AI technology, but slow adoption.
…what you have to do is not trust the rhetoric of either the model vendor or the guardrail vendor, because everyone will tell you it’s super safe and secure.
“Leaders want to increase AI investments and accelerate development, but they wrestle with how to make AI safe in the workplace,” the report says.
Coleman argues that traditional cybersecurity and AI security are colliding, but most infosec teams haven’t caught up, lacking the background to grasp AI’s unique attack surfaces. He pointed to Cisco’s acquisition of Robust Intelligence and Palo Alto Networks’ acquisition of Protect AI as examples of some players that have taken the right approach.
Battersby said the key for organizations that want to deploy AI at scale is to embrace a regime of continuous testing based on what the AI service actually does.
“So the first thing is to think about what is safe and secure for your use case,” he explained. “And then what you have to do is not trust the rhetoric of either the model vendor or the guardrail vendor, because everyone will tell you it’s super safe and secure.”
That’s critical, Battersby argues, because even authorized users of an AI system can make it do damaging things.
“What we’re trying to get across to you is that content safety filters, guardrails are not good enough,” said Coleman. “And it’s not going to change anytime soon. It needs to be so much more layered.”
While that may entail some cost, Battersby contends that constant testing can help bring costs down – for example, by showing that smaller, more affordable models are just as safe for particular use cases.
The complete interview follows…