Preloader Image

SafeBreach researchers have revealed how a malicious Google Calendar invite could be used to exploit Gemini—the AI assistant that Google has built into its Workplace software suite, Android operating system, and search engine—as part of their ongoing efforts to determine the dangers posed by the rapid integration of AI in tech products.

The researchers dubbed an exploit like this “promptware” because it “utilizes a prompt—a piece of input via text, images, or audio samples—that is engineered to exploit an LLM interface at inference time to trigger malicious activity, like spreading spam or extracting confidential information.” The broader security community has underestimated the risks associated with promptware, SafeBreach said, and this report is meant to demonstrate just how much havoc these exploits can wreak.

  • Perform spamming and phishing
  • Generate toxic content 
  • Delete a victim’s calendar events
  • Remotely control a victim’s home appliances (e.g., connected windows, boiler, lights)
  • Geolocate a victim 
  • Video stream a victim via Zoom
  • Exfiltrate a victim’s emails

Check out the full report for a step-by-step breakdown of how the exploit worked. The researchers said they disclosed the flaws to Google in February and that Google “published a blog that provided an overview of its multi-layer mitigation approach to secure Gemini against prompt injection techniques” in June. (It’s not clear at what point those mitigations were introduced between the disclosure and the blog post.)

This kind of back-and-forth has been a mainstay of computing for decades. Companies introduce new technologies, people find ways to exploit them, companies occasionally come up with defenses against those exploits, and then people find something else to take advantage of. So, in that sense, the SafeBreach research just reveals another problem to add to the seemingly infinite array of such issues.

But a number of factors combine to make this report more alarming than it might be otherwise. Those include SafeBreach’s point about security pros not taking promptware seriously, the “move fast and break things” approach companies are taking with their “AI” deployments, and the incorporation of these chatbots into seemingly every product a company offers. (As highlighted by Gemini’s ubiquity.)

“According to our analysis, 73% of the threats posed to end users by an LLM personal assistant present a High-Critical risk,” SafeBreach said. “We believe this is significant enough to require swift and dedicated mitigation actions to secure end users and decrease this risk.”

Follow Tom’s Hardware on Google News to get our up-to-date news, analysis, and reviews in your feeds. Make sure to click the Follow button.