Preloader Image
GitLab Duo Vulnerability

A critical remote prompt injection vulnerability was uncovered in GitLab Duo, the AI-powered coding assistant integrated into GitLab’s DevSecOps platform. 

The vulnerability, disclosed in February 2025, allowed attackers to manipulate the AI assistant into leaking private source code and injecting untrusted HTML content into responses, potentially redirecting users to malicious websites. 

GitLab has since patched the security flaw, but the discovery highlights significant risks associated with AI assistants in development environments.

Hidden Prompts Enable Sophisticated AI Manipulation

Legit research team reports that the vulnerability exploited GitLab Duo’s context-aware nature, which analyzes entire project contexts, including comments, descriptions, and source code, to provide helpful responses. 

The hidden prompts could be embedded in multiple locations within GitLab projects, including merge request descriptions, commit messages, issue comments, and source code itself. 

These malicious instructions were virtually undetectable to users, as attackers employed sophisticated encoding techniques such as Unicode smuggling, Base16-encoded payloads, and KaTeX rendering in white text.

The attack demonstrated several vulnerabilities from the 2025 OWASP Top 10 for LLMs, specifically LLM01 (Prompt Injection), LLM02 (Sensitive Information Disclosure), LLM05 (Improper Output Handling), LLM08 (Vector and Embedding Weaknesses), and LLM09 (Misinformation). 

By placing hidden instructions within seemingly harmless project content, attackers could manipulate Duo’s behavior to suggest malicious JavaScript packages, present dangerous URLs as safe, or mislead code reviewers about merge request security.

HTML Injection Through Streaming

The most concerning aspect of the vulnerability involved HTML injection capabilities enabled by GitLab Duo’s real-time response rendering. 

The AI assistant uses streaming markdown parsing, interpreting and rendering content into HTML before the complete response structure is known. 

This asynchronous processing created a window where malicious HTML tags could be executed before proper sanitization occurred.

While GitLab implemented DOMPurify for HTML sanitization, certain tags like ,

, and weren’t removed by default. 

Researchers exploited this by crafting prompts that instructed Duo to extract code changes from private merge requests, encode them in Base64, and embed the data within tag URLs. 

When browsers attempted to render these images, they automatically sent GET requests to attacker-controlled servers containing the exfiltrated source code.

The attack payload demonstrated the sophistication possible:

This technique enabled attackers to steal confidential source code from private iOS projects and potentially exfiltrate zero-day vulnerability disclosures from internal security issues.

Following responsible disclosure on February 12, 2025, GitLab acknowledged the HTML injection and prompt injection vulnerabilities as legitimate security issues. 

The company released patch duo-ui!52, which prevents Duo from rendering unsafe HTML tags pointing to external domains outside gitlab.com, effectively mitigating the data exfiltration risk.

This incident underscores the expanding attack surface created by AI assistants in development workflows. 

Security researcher Omer Mayraz noted that “AI assistants are now part of your application’s attack surface,” emphasizing that systems allowing LLMs to process user-controlled content must treat all input as potentially malicious. 

The vulnerability serves as a crucial reminder that context-aware AI, while powerful, requires robust safeguards to prevent becoming an exposure point for sensitive organizational data.

Equip your SOC team with deep threat analysis for faster response -> Get Extra Sandbox Licenses for Free