

The use of generative AI in a cybersecurity context is providing examples of how it can both help and hinder security. In some cases, it seems to do both at once.
Earlier this year, Microsoft unveiled a suite of Security Copilot-branded products that aim to help security teams respond to incidents. Microsoft used generative AI to augment incident management processes to add more context, helping security operators to better understand what happened, where, when, and to whom. It’s a genuine improvement, though more incremental than revolutionary, not that there’s anything wrong with that.
As I noted at the time, augmentation of an existing process is fine for what it is, but it lacks ambition. There are plenty of existing tools for automating well-understood processes. The variability inherent to generative AI and large language models wasn’t being used to best advantage. Given Microsoft’s close alignment with generative AI companies, and its substantial resources, it seems only fair to expect more.
Yet when that variability is embraced with too much enthusiasm, we get the opposite of improved security. Invariant Labs recently demonstrated how GitHub’s MCP server can be used to expose private data using fairly straightforward prompt poisoning attack. It is barely a surprise that poorly sanitized input from uncontrolled sources might prove risky. The GitHub MCP example demonstrates that much use of generative AI is either entrenching existing poor practice or, in some cases, taking a backward step and re-introducing whole classes of sub-optimal security practice.
By way of contrast, Crogl’s knowledge engine takes full advantage of what machine learning, retrieval augmented generation (RAG) and large language models (LLMs)
are good at. It goes beyond merely annotating existing processes and discovers what the existing process is by analyzing past incident response tickets. By connecting other security systems into the engine, Crogl is able to uncover what a highly-automated incident response should look like.
Unlike Microsoft Security Copilot, Crogl is able to use the variability of generative AI to come up with a probably-good response plan for new incident types. The machine learning pattern recognition is able to detect the rough ‘shape’ of a potential attack and do what a human operator would do: check various systems to look for suspicious activity that indicates a likely compromise.
This is just one example of differences in approach, but the key is that the technology itself is merely an enabler. While Microsoft, and GitHub, both push the Copilot brand and AI technology generally as a major selling point, Crogl uses the technology to deliver benefits to the customer. LLMs and machine learning are merely the conduit through which benefits to the customer are delivered.
Microsoft’s approach, and GitHub’s to an extent, has focused on automating existing practices, some of which should probably not exist in the first place. Automating them entrenches poor practice and makes it difficult to remove. Crogl shows that automation can be used to uncover better ways of doing things and help put those in place instead. This is what cybersecurity desperately needs more of.
It is frustrating that so much focus is placed on the technology of LLMs. The novelty of the tech can only do so much to overcome the limitations of what a product can deliver. As the market matures, we expect that companies that understand when using generative AI makes sense and, crucially, when it does not, will enjoy much greater success than those who remain fascinated by their new toy.
Customers need outcomes, not just products. Hopefully that is where the focus will shift after the current AI excitement fades.