Deepfake technology replaces human faces in video. It is an artificial way of depicting a person's face in footage, even without that person's consent.
AI can swap one person's face for another's, and as the technology advances it can mimic not only appearance but voice as well.
Deepfake Technology: What Are Deepfakes?
A deepfake is a generated video, image, or audio clip that mimics the appearance and voice of a person. Also known as "synthetic media," deepfakes are so convincing at mimicking the real thing that they can fool both humans and algorithms.
Deepfakes can be generated by AI in real time and are most commonly encountered as videos or augmented reality filters. The term "deepfake" combines "deep learning" and "fake" and refers to fake content produced with deep learning.
While there is a growing market for consumer apps (such as FaceSwap) that use deepfake technology for fun, the technology can also be used for malicious purposes as it becomes more widespread and accessible, and to some extent it already is.
How Does Deepfake Technology Work?
Deepfakes are generated using machine learning, more specifically deep learning and generative adversarial networks (GANs). In a nutshell, two neural networks compete against each other: the goal of one is to generate an image that the other cannot distinguish from its training data, and the goal of the second is to avoid being fooled by the first.
As the 2014 paper that introduced GANs explains, the framework pairs a generative model, G, which captures the data distribution, with a discriminative model, D, which estimates the probability that a sample came from the training data rather than from G; G is trained to maximize the probability that D makes a mistake. GANs are a pioneering technique in computer vision and have been very successful at producing human-like images. There are also commercial GAN services if you lack the computing power to run these models at home.
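To make the generator-versus-discriminator game concrete, here is a minimal sketch of a GAN training loop, assuming PyTorch; a toy two-dimensional distribution stands in for real face images, and every layer size and hyperparameter is an arbitrary illustrative choice.

```python
# Minimal sketch of a GAN training loop (PyTorch), illustrating the
# adversarial game described above. A toy 2-D Gaussian stands in for
# real face images; all sizes and learning rates are arbitrary.
import torch
import torch.nn as nn

latent_dim = 8

# G maps random noise to "fake" samples; D outputs the probability
# that a sample came from the real data rather than from G.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for real training data (e.g., cropped face images).
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    # --- Train D: label real samples 1, generated samples 0 ---
    real = real_batch()
    fake = G(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- Train G: try to make D classify generated samples as real ---
    fake = G(torch.randn(real.size(0), latent_dim))
    g_loss = bce(D(fake), torch.ones(fake.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In a real deepfake pipeline, G and D would be deep convolutional networks and the "real" batches would be face images, but the alternating update is the same.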
The biggest challenge for deepfake technology in the past was the availability of training data for specific people. Due to the limited amount of data available, celebrity fakes became very popular at first. However, as deepfake technology has improved, it has become easier to create deepfakes from single images or short audio recordings.
Why Are Deepfakes Dangerous?
Because deepfakes convincingly distort reality, they present a number of dangers, including online fraud, misinformation, hoaxes, and revenge pornography.
However, we must first clarify that not all deepfake apps are dangerous or of a questionable legal and ethical nature. In fact, most of the known apps are popular for entertainment and as a way to express creativity.
But the ability to impersonate a person on a live video stream or over the phone poses a challenge for online verification systems that rely on facial or voice recognition, whether performed by a human or automatically by software.
In fact, researchers at Sungkyunkwan University demonstrated in March 2021 that most current facial recognition APIs can be bypassed using deepfake technology. And according to a report by Amsterdam-based company Deeptrace, a whopping 96 percent of the deepfake videos found online were pornographic.
In terms of financial fraud this may seem relatively harmless, but not when you consider that it can lead to extortion: the deepfake's creator threatens to publish a fabricated private video so convincing that the victim feels they have no choice but to pay.
Another issue, highlighted by journalists and researchers at outlets such as the BBC, Wired, and Forbes, is that deepfakes can cause political and social damage through misinformation and fake scandals, since public figures can be made to appear to say or do things they never said or did.
Two EU Observer researchers reported that on April 21, 2021, the foreign affairs committee of the Dutch parliament held a conversation with someone they believed to be a Russian opposition politician but who was in fact a voice deepfake. Commenting on the incident, they added: "When deepfakes enter the political disinformation market, the problems we have now may turn out to be child's play."
What Is a Shallowfake?
Shallowfakes are deceptive images and videos that are manipulated for advertising or profit, not with sophisticated machine learning techniques but with more traditional, affordable image, audio, and video editing tools.
For example, a shallowfake might slow down or speed up a video, alter its audio, or simply rename the file to suggest something it is not.
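As a rough illustration of how little tooling this takes, the sketch below re-times a clip with ffmpeg from Python; it assumes ffmpeg is installed, the file names are placeholders, and slowing a speaker down is just one classic way a clip can be made to misrepresent its subject.

```python
# Rough illustration of a shallowfake edit: re-timing a clip with
# ffmpeg (assumes ffmpeg is installed; file names are placeholders).
import subprocess

def change_speed(src: str, dst: str, factor: float) -> None:
    """factor > 1.0 slows the clip down, factor < 1.0 speeds it up."""
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            # setpts rescales video timestamps; atempo rescales audio tempo
            "-filter_complex",
            f"[0:v]setpts={factor}*PTS[v];[0:a]atempo={1 / factor}[a]",
            "-map", "[v]", "-map", "[a]",
            dst,
        ],
        check=True,
    )

change_speed("speech_original.mp4", "speech_slowed.mp4", factor=1.33)
```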
Although the term is new, coined to echo "deepfake," shallowfakes have been around for a long time, with tools emerging to support each type. They may use older technology, but they can be just as deceptive.
Whatever the medium (video, audio, or even still images), impersonation fraud relies on deepfake technology to let scammers pose as someone else: the victim's boss or manager (CEO fraud), a distant relative, or an acquaintance the victim does not see often.
By manipulating the voice or image of people in the victim's life, scammers direct the victim to transfer money to their accounts or to take other harmful actions.
In fact, there have already been blatant examples of this type of fraud. In one widely reported case, a voice created with voice-cloning software to imitate a chief executive convinced a CEO to wire roughly $240,000 to a supplier in another country.
How Can You Spot Deepfakes?
Whether it is a video or a photo, pay attention to the following points to detect a deepfake:
- Inconsistent skin (too smooth or too wrinkled, or an apparent age that does not match the hair);
- Unnatural soft shadows around the eyes;
- Incorrect glare or reflections on glasses;
- Unrealistic facial hair;
- Unnatural-looking moles or birthmarks on the face;
- Blinking too much or too little (a rough automated check for this cue is sketched after this list);
- Lip color that does not match the rest of the face;
- Unnatural movements in and around the mouth.
Most deepfakes give the impression that something is off, because the generation process leaves flaws and visible artifacts behind.
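As a hedged example of automating the blink cue above, the following sketch counts blinks per minute with OpenCV's bundled Haar cascades; it is a crude heuristic rather than a real deepfake detector, and "suspect_clip.mp4" is a placeholder file name.

```python
# Crude blink-rate heuristic using OpenCV's bundled Haar cascades.
# Unusually low (or high) blink rates are one possible deepfake cue.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture("suspect_clip.mp4")  # placeholder file name
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
blinks, eyes_were_open, frames = 0, True, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes_open = False
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h // 2, x:x + w]  # eyes sit in the upper half of the face
        if len(eye_cascade.detectMultiScale(roi, 1.1, 5)) >= 2:
            eyes_open = True
    # A transition from "eyes detected" to "eyes not detected" counts as one blink.
    if eyes_were_open and not eyes_open:
        blinks += 1
    eyes_were_open = eyes_open

cap.release()
if frames:
    rate = blinks / (frames / fps / 60)
    print(f"~{rate:.1f} blinks per minute "
          "(people typically blink around 15-20 times per minute)")
```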
How Does Deepfake Fraud Affect Companies?
Deepfakes are essentially hoaxes, which is why synthetic voices are a popular way to deceive managers and executives.
Both consumers and employees can be targeted by deepfake attempts and can become victims of deepfake identity theft, as scammers may try to use deepfake technology to pass KYC verifications that rely on facial matching and other biometric identity checks.
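To show what such a check looks like in practice, here is a minimal sketch of the facial-matching step a KYC flow might run, assuming the open-source face_recognition library; the file names are placeholders, and a convincing deepfake selfie aims to pass exactly this comparison.

```python
# Minimal sketch of a KYC-style facial match using the open-source
# face_recognition library. File names are placeholders.
import face_recognition

id_image = face_recognition.load_image_file("id_document_photo.jpg")
selfie_image = face_recognition.load_image_file("live_selfie.jpg")

id_encodings = face_recognition.face_encodings(id_image)
selfie_encodings = face_recognition.face_encodings(selfie_image)

if not id_encodings or not selfie_encodings:
    raise ValueError("No face found in one of the images")

# compare_faces returns [True] when the embedding distance is below the tolerance.
match = face_recognition.compare_faces(
    [id_encodings[0]], selfie_encodings[0], tolerance=0.6)[0]
print("Identity match" if match else "Identity mismatch")
```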
We must also remember that criminals and scammers are constantly finding new ways to use deepfakes, such as extortion, blackmail, and industrial espionage. Businesses, organizations, and individuals must remain vigilant.
What Are the Advantages and Disadvantages of Using Deepfake Technology?
In cinema, deepfakes can bring deceased performers back to the screen or change dialogue without having to reshoot scenes.
For marketing departments, deepfakes open up new territory for ad campaigns and countless other creative possibilities.
The video game industry is also investing in the development of deepfake technology.
As for the disadvantages, there is the loss of privacy and respect for people, for example when celebrities' faces are placed into adult videos, or when deepfakes are used to create fake news. The person being impersonated has no easy way to prove the video is fake, as there is little ability to determine whether a video uses this technology.
With this in mind, Facebook and Microsoft began investigating the detection of these types of videos and launched the Deepfake Detection Challenge, an effort to build technology that can flag the presence of deepfakes in videos. Facebook backed the project with $10 million in funding.
For deepfake technology to be used responsibly, limits protecting dignity and privacy must be set so that it does not harm people's lives (for example, viral videos that humiliate others without their consent). An obvious example is the state of California, which prohibits the distribution of deceptive deepfake videos of political candidates in the run-up to elections.
You can find more information about this and other related topics on our website Para Hombre.
