If you want a job at McDonald’s today, there’s a good chance you’ll have to talk to Olivia. Olivia is not, in fact, a human being, but instead an AI chatbot that screens applicants, asks for their contact information and résumé, directs them to a personality test, and occasionally makes them “go insane” by repeatedly misunderstanding their most basic questions.

Until last week, the platform that runs the Olivia chatbot, built by artificial intelligence software firm Paradox.ai, also suffered from absurdly basic security flaws. As a result, virtually any hacker could have accessed the records of every chat Olivia had ever had with McDonald’s applicants—including all the personal information they shared in those conversations—with tricks as straightforward as guessing the username and password “123456.”

On Wednesday, security researchers Ian Carroll and Sam Curry revealed that they found simple methods to hack into the backend of the AI chatbot platform on McHire.com, McDonald’s website that many of its franchisees use to handle job applications. Carroll and Curry, hackers with a long track record of independent security testing, discovered that simple web-based vulnerabilities—including guessing one laughably weak password—allowed them to access a Paradox.ai account and query the company’s databases that held every McHire user’s chats with Olivia. The data appears to include as many as 64 million records, including applicants’ names, email addresses, and phone numbers.

Carroll says he only discovered that appalling lack of security around applicants’ information because he was intrigued by McDonald’s decision to subject potential new hires to an AI chatbot screener and personality test. “I just thought it was pretty uniquely dystopian compared to a normal hiring process, right? And that’s what made me want to look into it more,” says Carroll. “So I started applying for a job, and then after 30 minutes, we had full access to virtually every application that’s ever been made to McDonald’s going back years.”

When WIRED reached out to McDonald’s and Paradox.ai for comment, a spokesperson for Paradox.ai shared a blog post the company planned to publish that confirmed Carroll and Curry’s findings. The company noted that only a fraction of the records Carroll and Curry accessed contained personal information, and said it had verified that the account with the “123456” password that exposed the information “was not accessed by any third party” other than the researchers. The company also added that it’s instituting a bug bounty program to better catch security vulnerabilities in the future. “We do not take this matter lightly, even though it was resolved swiftly and effectively,” Paradox.ai’s chief legal officer, Stephanie King, told WIRED in an interview. “We own this.”

In its own statement to WIRED, McDonald’s agreed that Paradox.ai was to blame. “We’re disappointed by this unacceptable vulnerability from a third-party provider, Paradox.ai. As soon as we learned of the issue, we mandated Paradox.ai to remediate the issue immediately, and it was resolved on the same day it was reported to us,” the statement reads. “We take our commitment to cyber security seriously and will continue to hold our third-party providers accountable to meeting our standards of data protection.”

[Image: One of the exposed interactions between a job applicant and “Olivia.” Courtesy of Ian Carroll and Sam Curry]

Carroll says he became interested in the security of the McHire website after spotting a Reddit post complaining about McDonald’s hiring chatbot wasting applicants’ time with nonsense responses and misunderstandings. He and Curry started talking to the chatbot themselves, testing it for “prompt injection” vulnerabilities that can enable someone to hijack a large language model and bypass its safeguards by sending it certain commands. When they couldn’t find any such flaws, they decided to see what would happen if they signed up as a McDonald’s franchisee to get access to the backend of the site, but instead spotted a curious login link on McHire.com for staff at Paradox.ai, the company that built the site.

On a whim, Carroll says he tried two of the most common sets of login credentials: the username and password “admin,” and then the username and password “123456.” The second of those two tries worked. “It’s more common than you’d think,” Carroll says. There appeared to be no multifactor authentication for that Paradox.ai login page.
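As a rough illustration (a generic sketch, not anything drawn from Paradox.ai’s actual systems), even a minimal password-screening check like the Python example below rejects a credential such as “123456” outright; standard guidance also calls for checking new passwords against breached-password lists and requiring a second factor on administrative logins.

    # Hypothetical sketch of a password-policy check; not Paradox.ai's code.
    # Real systems typically also screen against a large breached-password
    # corpus and enforce multifactor authentication on top of this.
    COMMON_PASSWORDS = {"123456", "password", "admin", "12345678", "qwerty", "letmein"}

    def password_is_acceptable(password: str) -> bool:
        """Reject short passwords and anything on the common-password deny list."""
        return len(password) >= 12 and password.lower() not in COMMON_PASSWORDS

    # The two guesses the researchers tried would both fail this check.
    assert not password_is_acceptable("admin")
    assert not password_is_acceptable("123456")
    assert password_is_acceptable("a much longer unique passphrase")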

With those credentials, Carroll and Curry could see they now had administrator access to a test McDonald’s “restaurant” on McHire, and found that all the employees listed there appeared to be Paradox.ai developers, seemingly based in Vietnam. They found a link within the platform to apparent test job postings for that nonexistent McDonald’s location, clicked on one posting, applied to it, and could see their own application on the backend system they now had access to. (In its blog post, Paradox.ai notes that the test account had “not been logged into since 2019 and frankly, should have been decommissioned.”)

That’s when Carroll and Curry discovered the second critical vulnerability in McHire: When they started messing with the applicant ID number for their application—a number somewhere above 64 million—they found that they could simply change it to a smaller number and see someone else’s chat logs and contact information.
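In security jargon, this class of bug is often called an insecure direct object reference, or IDOR: the server trusts whatever record ID the client sends and never checks that the requester is allowed to see that record. The sketch below (a hypothetical Flask handler, not McHire’s actual code) shows the vulnerable pattern alongside the ownership check that prevents it.

    # Hypothetical illustration of an IDOR bug of the kind described above.
    # Routes, names, and data are invented; this is not McHire's actual code.
    from flask import Flask, abort, jsonify

    app = Flask(__name__)

    # Stand-in data store: applicant records keyed by a sequential numeric ID.
    APPLICATIONS = {
        64000001: {"owner": "alice", "name": "Alice A.", "chat_log": "..."},
        64000002: {"owner": "bob", "name": "Bob B.", "chat_log": "..."},
    }

    def current_user() -> str:
        # Placeholder for whatever session lookup a real app would perform.
        return "alice"

    # Vulnerable pattern: the handler returns whatever ID appears in the URL,
    # so changing the number to a smaller one exposes other applicants' records.
    @app.route("/vulnerable/applications/<int:app_id>")
    def get_application_vulnerable(app_id: int):
        record = APPLICATIONS.get(app_id)
        if record is None:
            abort(404)
        return jsonify(record)

    # Safer pattern: confirm the authenticated caller owns the record first.
    @app.route("/applications/<int:app_id>")
    def get_application(app_id: int):
        record = APPLICATIONS.get(app_id)
        if record is None:
            abort(404)
        if record["owner"] != current_user():
            abort(403)
        return jsonify(record)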

The two security researchers hesitated to access too many applicants’ records for fear of privacy violations or hacking charges, but when they spot-checked a handful of the 64-million-plus IDs, all of them showed very real applicant information. (Paradox.ai says that the researchers accessed seven records in total, and five contained personal information of people who had interacted with the McHire site.) Carroll and Curry also shared with WIRED a small sample of the applicants’ names, contact information, and the date of their applications. WIRED got in touch with two applicants via their exposed contact information, and they confirmed they had applied for jobs at McDonald’s on the specified dates.

The personal information exposed by Paradox.ai’s security lapses isn’t the most sensitive, Carroll and Curry note. But the risk for the applicants, they argue, was heightened by the fact that the data reveals their employment at McDonald’s—or their intention to get a job there. “Had someone exploited this, the phishing risk would have actually been massive,” says Curry. “It’s not just people’s personally identifiable information and résumé. It’s that information for people who are looking for a job at McDonald’s, people who are eager and waiting for emails back.”

That means the data could have been used by fraudsters impersonating McDonald’s recruiters and asking for financial information to set up a direct deposit, for instance. “If you wanted to do some sort of payroll scam, this is a good approach,” Curry says.

The exposure of applicants’ attempts—and in some cases failures—to get what is often a minimum-wage job could also be a source of embarrassment, the two hackers point out. But Carroll notes that he would never suggest that anyone should be ashamed of working under the Golden Arches.

“I have nothing but respect for McDonald’s workers,” he says. “I go to McDonald’s all the time.”

Updated at 5 pm ET, July 9, 2025, to make clear that the phishing risks would only be possible if someone had the opportunity to exploit the data.