Most people have heard the term “deepfake” by now, but many don’t know what it means or how deepfakes are used. Unfortunately, cybercriminals have added deepfake videos, images, and audio to their rapidly growing arsenal with the goal of tricking unsuspecting users. In this guide, we’ll examine what a deepfake is, how it works, the signs of a deepfake, and how cybercriminals use deepfakes.
What is a deepfake? The meaning and how it works
A deepfake is a piece of faked content (photo/video/audio clip) someone creates using sophisticated AI. The goal is to make the media in question look and sound completely legitimate.
Deepfaked audio can use AI to create a convincing fabrication of an individual’s voice, making it sound like they said something they never said. A deepfake video may feature a famous person like a politician, movie star, or celebrity doing something they never did.
Creators of deepfake audio and video use existing pieces of content to help AI learn, allowing it to piece together something unique. These videos are incredibly realistic, and often, the public mistakes them for the real thing.
Many deepfakes are harmless hoaxes, but some are very damaging and deliberately spread fake news. For example, in 2022, hackers released a fake video of Ukrainian President Volodymyr Zelenskyy urging his troops to surrender.
Deepfake creators use two competing AI algorithms when creating content: a generator and a discriminator. The generator creates the fake content, while the discriminator tries to tell it apart from real examples. Every time the discriminator catches a fake, that feedback pushes the generator to produce something more convincing. This loop repeats until the image, audio, or video is almost indistinguishable from the real thing.
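The generator-versus-discriminator loop described above can be sketched as a toy example in pure Python. Nothing here can produce a deepfake: the "real data" is just numbers drawn around 4.0, and the generator is a single parameter that learns to mimic that distribution by trying to fool a simple logistic discriminator. All values and learning rates are illustrative.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# "Real" data: samples centered on 4.0 (a stand-in for genuine content).
def sample_real():
    return random.gauss(4.0, 0.5)

# Generator: one parameter theta; G(z) = theta + z, so its output mean is theta.
theta = 0.0
# Discriminator: logistic classifier D(x) = sigmoid(w*x + b).
w, b = 0.1, 0.0

lr_d, lr_g = 0.05, 0.05
for step in range(3000):
    z = random.gauss(0.0, 0.5)
    real, fake = sample_real(), theta + z

    # Discriminator update: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr_d * ((1 - d_real) * real - d_fake * fake)
    b += lr_d * ((1 - d_real) - d_fake)

    # Generator update: gradient ascent on log D(fake), i.e., fool the discriminator.
    d_fake = sigmoid(w * fake + b)
    theta += lr_g * (1 - d_fake) * w

print(f"learned mean: {theta:.2f}")  # drifts toward 4.0, the mean of the real data
```

Real deepfake systems apply the same adversarial idea to millions of image or audio parameters instead of a single number, which is why they need so much training data and compute.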
Recently, companies have been using deepfake technology for relatively harmless reasons, such as customer support, video games, and entertainment. But criminals have begun to use them for other, more nefarious purposes.
When did deepfake technology first emerge?
Deepfake technology burst into the public arena back in 2017 when a Reddit user coined the phrase and used AI technology to put the faces of celebrities into pornographic videos.
Since then, deepfake technology has improved in stunning ways, and in many cases, people can’t see the difference between real footage and deepfake footage.
Potential dangers of AI deepfake technology
AI deepfake technology is no toy. It's having severe real-world consequences.
Misinformation
In elections, where results are often decided by what each candidate says, AI deepfake technology is increasingly being used to portray opponents in a negative light. Videos are made showing them saying things they didn’t say. When people can’t tell the difference, it could destroy their faith in the system.
Outside of politics, bad actors can spread false beliefs and propaganda by making AI deepfakes that show someone in a position of authority agreeing with them and urging others to do the same. Salespeople can push scam products by creating deepfakes of supposedly satisfied customers.
National security implications
Imagine you’re a nuclear submarine commander out at sea, and you get a message from the president ordering you to fire a nuclear missile. How do you know if it’s really your leader ordering a nuclear strike? How can they know for sure that it isn’t a deepfake AI made by a foreign adversary?
Destroyed reputations
It's not just politicians who can be affected. Anyone can be the victim of a deepfake. Celebrities can be shown saying and doing things they never did. Businesspeople can make deepfake videos to discredit their business partners. Divorcing spouses can make AI deepfakes to win divorce and custody cases, or fake nude photos can be created for blackmail. The list goes on.
Identity theft
Deepfake voices can be used by scammers and other criminals to persuade people to hand over sensitive data, which in turn leads to identity theft.
In a world where seeing is believing, deepfakes can blur that line and make people doubt their own eyes.
Why do people engage in deepfake scams?
What's in it for the scammers? Let's run through the main motivations.
Money
Obviously, this is the biggest motive of all. Financially motivated deepfake scams may take the form of blackmail, extortion, or identity theft. In cases of sextortion, threats are made to expose secrets of a sexual nature unless money is paid. It's a good old-fashioned shakedown.
Using AI deepfakes, videos can be made of people engaging in sexual acts (legal or illegal), as well as pornographic photos. Therein lies the ultimate nightmare. Even if you’ve never taken a single photo or video of yourself engaging in sexual acts, it doesn’t mean you won’t be the leading star in a fake one.
Gain an advantage
This can cover a wide variety of factors. It can refer to a political advantage in which a figure smears their opponent with false allegations. It can be a business advantage where a scheming partner or even a regular employee attempts to gain financial advantage or revenge over someone else in the company. A company could also attempt to take down a competitor by damaging its reputation.
In any scenario where there is a “battle” going on between two sides, with one side trying to gain an edge over the other, a deepfake video or audio recording can tip the scales.
Play out fantasies
Sometimes, money or advantage isn’t the motive at all. Quite frequently, people want to play out their fantasies, and deepfakes help them do that.
Whether this is a pornography-related example, a celebrity endorsement, or a salesman mounting an imaginative marketing campaign, deepfakes can indulge anyone’s ego and desires.
Bullying
Bullying is another non-financial motive to consider. In cases of cyberbullying, attackers may place victims in embarrassing videos or create fake audio of them saying something that can ruin their reputation.
How AI deepfakes are generated
We’re not going to lay out how to make your own AI deepfakes. That would be deeply unethical and irresponsible. What we can do is talk in general terms about the tech behind deepfakes and how AI is able to make them so convincing.
Like any other AI model, deepfake technology relies on data — a LOT of data. The more data, the better.
Making a deepfake video of a politician saying something outrageous, for example, requires feeding in a lot of the politician's public statements so the AI can master the target's voice. TV appearances can be fed in so the AI learns to build a convincing video from the target's facial structure and mannerisms.
In the case of fake porn photos, the AI can take someone’s face and put it on the body in an existing video. It’s like using Photoshop to fake an image — but a hundred times more realistic.
Key questions about deepfake technology
Now it’s time to run through some common questions asked about deepfake technology.
How are deepfakes different from Photoshop?
For a start, AI deepfakes are far more effective than Photoshop at producing realistic-looking images. Even the best Photoshop work tends to leave something that gives it away as fake. Another difference is speed: a convincing edit in Photoshop can require hours of manual tweaking by a human. AI can do the same job in minutes, and do it better.
Are deepfakes illegal?
In most jurisdictions, there are still no laws that directly target deepfakes. Instead, prosecutors rely on charges like harassment and stalking. If a child is involved, other crimes can be charged, such as child endangerment or sexual abuse. At the very least, the victim could take the perpetrator to civil court and sue for defamation, loss of earnings due to termination of employment, and so on.
Can a deepfake be considered identity theft?
Yes, a deepfake can be considered identity theft if it's used to impersonate someone for the purpose of stealing money or taking over some aspect of their life. But since deepfakes themselves are rarely illegal, legal action has to be creative and tied to what was done with the deepfake.
What are the consequences of creating AI deepfakes?
We’ve already alluded to some of the ways AI deepfakes can have serious consequences, but let’s summarize them:
- Irreparable harm to a person or company’s reputation
- Job loss
- Divorce or loss of child custody
- Political interference
- Endangering national security
- Loss of trust in public institutions and authority
- Chaos and confusion, as people don’t know who to trust
- Death, if people act on dangerous advice delivered via AI deepfakes
How to protect yourself from deepfake scams
One school of thought says you can't entirely protect yourself from deepfakes. Thanks to smartphones and social media, there are plenty of photos and videos of you out there for bad actors to steal, and more are posted every day. But there are still some things you can do to lessen the risk.
Review the privacy settings on your social media posts
On platforms like Facebook, you can specify if an image or video you share is public or restricted only to your followers. Restricting content to followers only will limit the amount of material that can be used to create a deepfake.
In the same vein, review your followers lists and remove anyone who shouldn’t be there anymore. And never accept follower requests from people you don’t know. They could just be there mining for data to use.
Don’t believe everything you see and hear
Once upon a time, if you saw a video or heard an audio recording of someone, you could believe it. After all, there they are! Sadly, those days are now over. So, before you jump to conclusions and assume the worst, stop for a minute.
If a video involves someone in the news, check other sources — reputable sources. Watch and listen for inconsistencies in the video and audio. Is the lip sync out of whack? Does the lighting look out of place? Is the video or audio quality poor? Always question and verify.
Get your information from reputable sources
The beauty of the internet is that it has given everyone a public voice. But this also has a major downside. Anyone can set up a website and call themselves an authoritative source. Ironically, they can use deepfake AIs to establish that authority and then say whatever they want.
At a time when trust in news organizations is rapidly declining, established outlets remain the most reliable option. True, these organizations have their own agendas, so their coverage may be skewed. But it's still better than taking the word of a random blog or Facebook post.
How can you tell a deepfake video or photo from a real one?
Some deepfakes are so convincing that it’s almost impossible to distinguish them from the real thing. That said, you can sometimes tell a deepfake photo or video from a real one if you know where to look.
The following are some of the signs to watch out for to help you spot a deepfake.
1. Unnatural lighting or other elements
When AI generates photos and videos, it tends to get lighting effects wrong, resulting in an unnatural look. AI is also notorious for mangling fingers and hands. You may also notice blurred features, unnatural skin tones, and parts that appear “melted” into the image.
2. Unrealistic setting
Consider the image as a whole. Does the setting fit the person featured? Often, AI swaps the face but not the environment and other details, so things don't quite match up. For example, a normally busy city street with very few pedestrians might be a clue that the photo or video is faked.
3. Background details
This one goes hand in hand with the setting. AI stitches together a compilation of features to fabricate an image, and that doesn't always result in a convincing background. If the background doesn't fit the theme, be wary. Look closely at the background of the image and scan for anything that looks “off.”
4. An unverified source
Before you jump to any conclusions, ask yourself where the video you’re watching came from. Check the news and see if it matches up with the latest story. Typically, deepfakes are found online and are less likely to be featured on popular news outlets and publications that stake their reputation on the quality of their content.
5. Eye contact and out-of-sync audio
Other signs of a deepfake video are unnatural eye movements and sometimes out-of-sync audio. The result is a clip that looks strangely unnatural, or like it has been dubbed, a bit like a foreign film.
As an added tip, you can also use deepfake-detection software. For example, while AI can make a person's front-facing image look convincing, it has difficulty with varying angles. If you submit someone's genuine profile picture to detection software and compare it with a suspected deepfake, the software can often pick out the illegitimate one.
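Real detection tools rely on trained neural networks, but the underlying idea of reducing images to comparable fingerprints can be illustrated with a toy "average hash" in pure Python. Here the images are just grids of made-up grayscale values; a near-duplicate (say, a re-encoded copy) keeps the same fingerprint, while a heavily altered fake does not. All data and thresholds are illustrative, not a real detector.

```python
def average_hash(pixels):
    """Hash a grayscale grid: one bit per pixel, set if above the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits between two fingerprints."""
    return sum(a != b for a, b in zip(h1, h2))

# A known-genuine photo, a slightly re-encoded copy, and a heavily altered fake.
genuine    = [[10, 200, 30], [220, 40, 210], [15, 205, 25]]
recompress = [[12, 198, 33], [215, 45, 205], [18, 200, 28]]
altered    = [[200, 10, 210], [30, 220, 25], [205, 15, 215]]

h = average_hash(genuine)
print(hamming(h, average_hash(recompress)))  # 0: near-duplicate survives re-encoding
print(hamming(h, average_hash(altered)))     # 9: heavy edits flip most bits
```

A low distance suggests the two images share the same source material; a high distance flags the kind of wholesale manipulation a deepfake involves.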
How deepfakes are used in cyberattacks and scams
Cybercriminals mainly use deepfakes to deceive individuals and companies for personal or financial gain. They can use deepfakes as propaganda to sway public opinion on certain subjects, and some experts are worried that dangerous groups may be using deepfakes to affect political outcomes.
Overall, cybercriminals manipulate digital media for many reasons, but some of the most popular attacks and scams include the following.
1. Identity theft
Threat actors can use deepfake videos and audio clips to impersonate company or financial institution executives to steal large sums of money. In 2020, cybercriminals stole $35 million from a Hong Kong bank using deepfake technology. In another instance, scammers used a deepfake hologram of a cryptocurrency CCO on a Zoom call, tricking executives into providing confidential information.
One hacker even used a deepfake to gain employment as a tech support team member at a large company. The hacker then used their position to steal confidential information and gain unrestricted access to the company’s network.
2. Stealing charitable donations
Even charitable giving has been leveraged by cybercriminals using deepfake technology. One recent attack utilized fake voicemails from a CEO urging employees, vendors, and suppliers to donate to a cause that turned out to be fake.
3. Fooling authentication
Now that multi-factor authentication has become more common, cybercriminals have stepped up their game. Hackers are now using deepfake audio and video clips to gain access to locked apps, accounts, and resources. Bad actors will target facial and voice recognition with very realistic fakes that allow them to breach secure accounts.
4. Phishing attacks
Scammers also use deepfake content in voicemail, email, and SMS phishing attacks to trick employees, clients, and others into making unauthorized payments or disclosing personal information. This data may then be used to access accounts and steal information or money.
5. Shallowfakes
Some criminals don’t have access to the kind of AI tools required to create high-quality deepfakes. But lower-quality derivatives, also known as “shallowfakes,” sometimes work just as well. A scammer may simply slow down an audio recording to make it seem like the speaker is intoxicated. Or if the objective is to insinuate an erratic or violent nature, scammers may speed up a real video to distort the content.
Cybercriminals are advancing their knowledge and technical expertise in creating deepfakes, but cybersecurity experts are also stepping up their game. As the two go toe-to-toe, average users must educate themselves on the dangers of deepfakes, verify content before taking any action, and put common sense, safety, and security first.