Darktrace warns businesses against cyber threats

As artificial intelligence rapidly reshapes the cyber security landscape, many businesses remain dangerously under-prepared, missing foundational practices while rushing to adopt AI tools.

At London Tech Week on Tuesday, top industry and government voices in the space warned that without getting the basics right, even the most advanced AI solutions won’t be enough.

The warning follows a string of cyber attacks on UK retail, with homegrown giants like M&S and Harrods suffering damaging data breaches.

While attackers are leveraging AI to automate phishing, accelerate intrusion, and generate convincing malware, many organisations still haven’t implemented fundamental controls like user access management or system segmentation.

This disconnect, leaders from Darktrace, the National Cyber Security Centre and the government argued, could turn AI from a promising shield into a dangerous false sense of security.

“One of those basics is user access management, especially for privileged users,” said David Palmer, Director of Technology at cybersecurity heavyweight Darktrace.

“Attackers shouldn’t be able to become system administrators just because someone clicked on a phishing link.”

Part of the problem, the panel suggested, lies in outdated infrastructure and a reluctance to invest in long-term resilience.

Legacy systems continue to be exploited, and even organisations that survive attacks often revert to short-term thinking.

“So many organisations, after a cyber attack, go back to investing in flashy tools rather than the culture change and resilience needed,” said Palmer.

“We never really saw the investment in foundational upgrades – and that’s a problem.”

Despite increased awareness of cyber risk, boards are often still ill-equipped to evaluate AI-driven security tools, and procurement decisions are made based on hype rather than substance.

“It’s hard to defend against the next big thing when every cybersecurity company has AI slapped on the box,” said Palmer. “How do you distinguish what will actually give you the edge?”

AI can be a powerful multiplier, the panel reiterated, but only if deployed with care and in context.

What’s more, a few speakers cautioned against over-reliance on general-purpose AI models such as those designed for chat or search to make high-stakes cyber security decisions.

“General AI models can be confidently wrong,” Palmer added. “We’ve seen examples in other sectors, like aerospace, where chat-based models convinced engineers they were mistaken, when they weren’t. That’s dangerous.”

Instead, businesses need domain-specific AI trained on relevant data and scenarios. It’s also essential that these systems are transparent and interpretable.

“Responsible AI in security means interpretability,” said Palmer. “We need tools that explain what they’re doing and why, so our experts can trust them and intervene if needed.”

Feryal Clark MP echoed the call for smarter, more grounded deployment of AI in cyber defence, emphasising that while AI has huge potential, its value depends entirely on how well organisations integrate it with people and processes.

“AI is exciting, but it must be seen as part of a lifecycle approach,” she said. “If I were buying new windows and doors for my house, I’d want to know they fit properly, and no one else has a key. The same should apply when we’re thinking about securing our systems.”

She also stressed that cybersecurity investment needs to match the sophistication of the threat – and the economic stakes involved.

“We’ve got a flourishing cyber security sector in the UK. Businesses should partner with targeted AI innovators who understand the space, not just adopt generic tools because they’re popular.”

Ultimately, the panel agreed that AI will only intensify cyber threats, and that without strong foundations, deep domain expertise, and the humility to question the tools being deployed, UK businesses may end up solving the wrong problem.

“You can’t solve cybersecurity with a single AI model,” Palmer added. “But you can’t start without the domain experts either. You need both,” he said, likening the mix to a firm’s need for a variety of employees.