Deepfakes, originally used for mere digital tomfoolery, have come into their own as a bona fide cybersecurity threat, boosted by advances in AI and online tools that make them easy to create. But they can also be countered with vigilance and new technologies.

That was the message at a recent Fordham presentation about deepfakes—photos, videos, and audio recordings manipulated to show people doing or saying things they didn’t do or say. They’ve often been used for fun or entertainment, “but unfortunately, the trend we have seen over the past few years [is that] they have been used by the hackers in much worse ways,” said University Professor Thaier Hayajneh, Ph.D., director of the Fordham Center for Cybersecurity.

Deepfakes have been used for bank fraud, blackmail, political disinformation, and more, he said on July 16 at Fordham’s International Conference on Cyber Security, held at the Lincoln Center campus.

Deepfakes and Financial Crimes

Research has shown that people are worse at detecting deepfakes than they think, he said. And the results can be costly: Last year, someone at a multinational firm was tricked into paying $25 million to fraudsters because a video call that included deepfake versions of the chief financial officer and other employees made the transaction appear legitimate.  

There was an “alarming escalation” in deepfake attacks worldwide this year, with losses of more than $200 million in the first quarter alone, according to the voice technology company Resemble AI. Deepfakes have spread across all continents, affecting developed and developing nations alike, as well as regular citizens, not just celebrities and politicians, the company said. 

Co-presenter Maria Chano, PCS ’21, GSAS ’24, an AI and cybersecurity strategist at ICF International, noted that sometimes deepfakes can be given away by out-of-sync lip movements or telltale eye movements. But deepfakes are growing more sophisticated because of online labs, originally developed for research purposes, that use new AI techniques to make the fake images harder to recognize, she said.  

Fighting Back Against Cyberfraud

And it’s not all about technology—fraudsters also play on our emotions, Chano said. She gave the example of a deepfake video of an elderly woman, used to secure a bank loan over a video call on the pretext that her infirmities kept her from visiting the bank in person.

Such advanced deepfakes can be countered with tools from tech companies like Hive, which was recently awarded a contract to help guard the Defense Department against fake images, video, and audio, she said. Also important, she said, is top-to-bottom awareness of deepfakes within organizations—from company leadership to the chief information security officer to all employees.

“Make everybody at your organization aware of deepfakes,” she told the audience.

ICCS, jointly organized by the Gabelli School of Business and the FBI, brought together members of law enforcement, the private sector, and academia for three days of panels and workshops on countering cyberthreats.