In February 2024, a finance worker at a multinational firm received a message, purportedly from the company’s U.K.-based CFO, requesting a secret transfer of HK$200 million (about US$25.6 million).
At first, the worker assumed it was a phishing attempt, but a video call with what he believed were several other colleagues was enough to put aside his early skepticism. Only they weren’t real. Despite looking and sounding like members of staff he had come to know, each was a deepfake recreation made by generative AI.
Given how elaborate the scam was, one could speculate it was an inside job and that some of the money made its way back to the finance worker. Regardless, there is no denying the massive surge in deepfake video and audio being used for nefarious purposes.
Deepfake technology isn’t new. But adding generative AI capabilities to tools that are already powerful, easy to use, and readily available is rapidly changing the threat landscape.
“Generative AI based on large language models is a force multiplier,” explained Joshua Liberman, president of Net Sciences, describing how AI can realistically impersonate human voices, down to the minute flaws in someone’s speech, from as little as a few seconds of captured audio.
As we learned from the finance worker in Hong Kong, deepfake video is quickly following suit. According to Ian Richardson, principal consultant at Fox & Crow Group, everyone is at risk. “If you’re visible in the marketplace, doing public appearances, webinars, podcasts, live streams, or a YouTube channel, you’re sharing yourself with the world. Not just your likeness, but your body mannerisms, the way you dress, your hairstyle, and your voice.”
With very little effort required, bad actors can feed those samples into an AI model. The resulting avatar can convincingly “request system access, data, money, you name it. And that’s the ballgame,” said Richardson.
That means almost anyone with malicious intent can create deepfakes, even those who aren’t particularly smart or tech-savvy.
“What used to take a small band of people months to do might be executable in weeks or even days. You could be some 400-pound kid in his grandmother’s basement running these scams that would previously take a nation-state level of capabilities,” Liberman said.
The Threat to MSPs and Their Customers
The ability to digitally turn nearly anyone into a virtual puppet has opened the door to new forms of attack. But criminals are starting with the “low-hanging fruit,” per Liberman, to cash in on opportunities for quick and easy money.
By leveraging effective infiltration techniques like email compromise, a threat actor can build detailed profiles of an MSP’s staff, customers, and communication patterns. That represents everything needed to put together a customized and convincing deepfake campaign.
The approach “targets the very core of the IT managed services space,” said Richardson.
Social Engineering as the Next Digital War
Businesses are facing more than just financial risks from deepfakes. Shidarion Clark, chief information officer at Lannan Technologies, sees the reputational damage from social engineering and misinformation as having the largest impact.
“Business growth often hinges on word of mouth. If somebody wanted to use deepfakes to paint your company in a negative light, it’s going to be that much harder [to recover from it].”
Clark’s insight should rattle any business owner. Anyone with a grudge against a company or even a competitor could wield deepfakes and disinformation as a weapon. To make matters worse, it’s an incredibly effective one.
An example of this is the recent onslaught of convincing deepfake videos of political and public figures. Even when the content is proven to be fake, the damage is already done.
“Disinformation is the most powerful, destructive force there is, and it’s really hard to grasp how profound and pervasive it is in some societies,” noted Liberman.
“Most people reason quite emotionally and make a quick decision, and later facts don’t have much impact on them,” he added. “Because this misinformation has the force multiplier of the internet and social media, it stays out there. So sure, it’s a fake video, but it may still get played another billion times.”
Legal consequences can serve as a deterrent for those who use deepfakes to commit crimes, such as fraud, but when it comes to social engineering, there’s little on the books to dissuade them.
“Assuming you even knew who was doing it, I’m not aware of any strict remedies under the law that would prevent that type of stuff,” said Joseph Brunsman, best-selling author and managing member at Brunsman Advisory Group. “You start running into fair-use and First Amendment issues.”
Using the example of a fabricated video of someone saying something outrageous, Brunsman walked through the myriad of complex questions deepfakes raise. “It’s parody, but then parody is in the eye of the beholder. Was that made with malicious intent or not? Was there malice or forethought to make this to specifically lie to people to construct some final goal? Or is it just, ‘I thought this was funny?’”
Brunsman said most of the generative AI legislation on the books or being introduced deals with the use of AI in areas like hiring and firing, but none of it touches on deepfakes.
The Next Level of Zero Trust – and the Opportunity for MSPs
At a societal level, deepfakes that can’t be distinguished from reality threaten to shatter the very foundation of human trust. If you can’t believe what you see, then what can you believe?
That has severe implications for how businesses will need to operate, taking the concept of Zero Trust to a whole new level and adding verification to nearly every business process — which undoubtedly will impact the speed at which businesses run. “From a workflow standpoint, things might have to slow down just so you can vet where these things are coming from and the sources,” said Clark.
Resetting customer expectations will be part of that shift. As Richardson noted, “We’ve got to slow down the immediacy in the world that was brought via email and start to reset expectations for clients where no matter what you get on your computer in any way, shape, or form, you need to verify as the new default instead of trust as the default.”
Because deepfakes are a digital threat, your customers will be looking to you to help protect them, which creates a new opportunity to bring value to that relationship and generate additional revenue.
The place to start is awareness. Customers need to understand that businesses of all sizes are targets for this form of attack.
“Educating people on what deepfakes are is the first step. You can’t trust what you see or hear or read on a computer by default; if it’s coming in off of a computer, it could be manipulated,” Richardson said.
Both Clark and Liberman highlighted user training as key to the fight against deepfakes. Common sense could go a long way. Liberman emphasized the importance of slowing down and analyzing situations from a calm, rational perspective. “You have to maintain situational awareness as a potential ‘dupee.’ Don’t fall for it. Do you normally pay for things by gift card? Do you normally rush through a six-figure order to a new bank?”
Deepfakes could also be an opportunity for MSPs to work with customers on shoring up their operational maturity to stay compliant with cyber liability coverage. When asked whether cyber insurance would cover damages from deepfakes, or if he expects to see new policies specifically covering losses from social engineering or deepfakes, Brunsman responded, “It could. It depends on what your exact policy says. For example, some policies may require you to have a pre-arranged callback number to a known entity before you send any money over X amount, and you have to demonstrate you did that before the insurance company will reimburse you.”
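To make that callback requirement concrete, here is a minimal sketch in Python of how such a policy check might be encoded in a payments workflow. The dollar threshold, the contact directory, and the CallbackRecord structure are hypothetical illustrations, not drawn from any actual insurance policy or product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical policy rule: transfers over this amount require a verified
# callback to a pre-arranged number before funds are released.
CALLBACK_THRESHOLD_USD = 25_000
CALLBACK_MAX_AGE = timedelta(hours=24)

# Pre-arranged callback numbers, agreed with the client out of band.
# (Illustrative data; a real directory would live in a vetted system of record.)
KNOWN_CONTACTS = {
    "acme-corp": "+1-555-0100",
}

@dataclass
class CallbackRecord:
    """An auditable record of a completed verification call."""
    client_id: str
    number_dialed: str
    verified_at: datetime

def transfer_allowed(client_id: str, amount_usd: float,
                     callback: CallbackRecord | None) -> bool:
    """Return True only if the transfer passes the callback policy."""
    if amount_usd <= CALLBACK_THRESHOLD_USD:
        return True  # Below threshold: no callback required.
    if callback is None or callback.client_id != client_id:
        return False  # No verification on file for this client.
    # The callback must have gone to the pre-arranged number on file, not
    # to a number supplied in the payment request (a common fraud pattern).
    if callback.number_dialed != KNOWN_CONTACTS.get(client_id):
        return False
    # And it must be recent enough to cover this specific request.
    return datetime.now() - callback.verified_at <= CALLBACK_MAX_AGE
```

The design point worth noting, echoing Brunsman’s caveat, is that the verification is both enforced and recorded: an insurer may ask for proof that the callback procedure was followed before reimbursing a loss.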
Richardson believes the industry will develop and use new security technologies, opening potential revenue opportunities around generative AI defense. One such opportunity, he said, will focus on defending against the appropriation of a person’s likeness, an evolution he compared to the move from antivirus to EDR, which detects and responds in real time.
“There’s going to be some sort of generative AI detection and response stuff, or an AI operations center. First it was the NOC, then the SOC, next it will be the AIOC. You heard it here first.”