
Astra Security presented its latest research findings on vulnerabilities in Large Language Models (LLMs) and AI applications at CERT-In Samvaad 2025, a prestigious cybersecurity conference, bringing to light the growing risks AI-first businesses face from prompt injection, jailbreaks, and other novel threats.

This research not only contributes to the OWASP Top 10: LLM & Generative AI Security Risks but also forms the basis of Astra’s enhanced testing methodologies aimed at securing AI systems with research-led defense strategies. From fintech to healthcare, Astra’s findings expose how AI systems can be manipulated into leaking sensitive data or making business-critical errors—risks that demand urgent and intelligent countermeasures.

AI is rapidly evolving from a productivity tool to a decision-maker, powering financial approvals, healthcare diagnoses, legal workflows, and even government systems. But with this trust comes a dangerous new frontier of threats.

“The catalyst for our research was a simple but sobering realisation—AI doesn’t need to be hacked to cause damage. It just needs to be wrong, so we are not just scanning for problems—we’re emulating how AI can be misled, misused, and manipulated,” said Ananda Krishna, CTO at Astra Security.

Through months of hands-on analysis and pentesting of real-world AI applications, Astra uncovered multiple new attack vectors that traditional security models fail to detect. The research has been instrumental in building Astra’s AI-aware security engine, which simulates these attacks in production-like environments to help businesses stay ahead of AI-powered risks.