Just when we thought things couldn’t get any more interesting (or potentially problematic) with all the fake news rolling around out there, now we have deepfakes. What’s a deepfake? This…
Fun? Yes. Interesting? Most definitely. Creepy? You got it. Trouble down the road? Ding, ding, ding! What you just saw is called a deepfake, and as technology continues to advance at its rapid pace, so will our ability to manipulate video in ways that may soon be undetectable.
Our ability to manipulate photos has been around for quite some time. Since the 19th century, photographers have been able to play around with a picture to create the image they wanted, and as technology improved, doing so became easier and easier.
When computer programs such as Photoshop arrived, even amateur photographers could get in on the act: adding people to pictures, removing people from pictures, adding this, subtracting that. But while this was easy to do with a still picture, it was a different story for video. Even those in Hollywood had a difficult time pulling it off.
A deepfake is fabricated media. It can depict anyone, in any setting. Typically it features a person, usually someone famous, whose likeness has been replaced with that of another person, often another celebrity. While on the surface this may sound like fun (and some deepfakes truly are), the reality can be quite worrisome.
The name “deepfake” combines the terms “deep learning” and “fake”. Deep learning is an artificial intelligence technique – a subset of machine learning – that imitates how the brain processes data and creates patterns for use in decision making.
WHEN DID IT START?
The whole deepfake concept started in 2017 with a Reddit user (actually named “deepfakes”) who began producing sleazy videos of celebrities engaged in various pornographic adventures. The user would take the faces of famous people and transpose them onto the bodies of actors in adult movies. It became such a hit that more and more people posted similar videos on Reddit, until it became too much and those users were banned.
That was just the beginning. In the few years since “deepfakes” began his seedy career, the technology has advanced ten-fold. No longer must one have intimate knowledge of artificial intelligence concepts or techniques; now all you need is a working knowledge of computers and a machine powerful enough to make it happen.
HOW DEEPFAKES ARE MADE
The technology needed to pull off a good deepfake is already among us. According to John Villasenor, who is a professor of electrical engineering at the University of California, Los Angeles, “anybody who has a computer and access to the internet can technically produce deepfake content.”
There are a few steps you would need to take to make a deepfake video. But first things first: if you want a good deepfake, a standard computer most likely won’t cut it. Most of the extremely realistic-looking deepfakes have been created on high-end desktops with very powerful graphics cards. Better still is to use computing power in the cloud.
Using the cloud cuts processing time from days, possibly weeks, down to hours. Expertise is needed not only to create the face swap itself but also for any touch-up on the completed video to reduce flicker and other visual defects.
As for those few steps: a little legwork is necessary if you want a convincing deepfake, and it is all about collecting enough images of your face-swapping targets. This could be the trickiest part – finding the proper number of images of each person. Some say as few as 300 images can make a good deepfake; others put the number closer to 500.
Once you have collected images of the two people, the face shots need to be run through an encoder, an AI algorithm that finds and learns the similarities between the two faces, reducing them to their shared common features and compressing the images in the process.
A decoder, a second AI algorithm, is then taught to take the compressed images and recover the faces. Because the two faces are different, you train a separate decoder for each one. To complete the face swap, you simply feed the encoded images into the “wrong” decoder: take the compressed image of person A and feed it into the decoder trained on person B. That decoder reconstructs person B’s face with person A’s orientation and expressions. For a truly convincing deepfake, this process is repeated on every frame of the video.
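The shared-encoder, two-decoder swap described above can be sketched in a few lines. This is a minimal illustration using untrained, random weights, purely to show the data flow; real deepfake tools train these networks on hundreds of images of each person, and every name and dimension here is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions for the sketch: a 64x64 grayscale face, a 128-number latent code.
FACE_DIM = 64 * 64
LATENT_DIM = 128

# One shared encoder compresses any face into a small latent code.
shared_encoder = rng.normal(scale=0.01, size=(LATENT_DIM, FACE_DIM))

# One decoder per person learns to rebuild that person's face.
decoder_a = rng.normal(scale=0.01, size=(FACE_DIM, LATENT_DIM))
decoder_b = rng.normal(scale=0.01, size=(FACE_DIM, LATENT_DIM))

def encode(face):
    """Compress a flattened face image into its latent code."""
    return shared_encoder @ face

def decode(code, decoder):
    """Reconstruct a full-size face from a latent code with a given decoder."""
    return decoder @ code

def swap_face(frame_of_a):
    """The trick: encode person A's frame, then decode it with B's decoder."""
    code = encode(frame_of_a)       # pose and expression captured in the code
    return decode(code, decoder_b)  # rendered with person B's appearance

frame = rng.random(FACE_DIM)        # stand-in for one video frame of person A
swapped = swap_face(frame)
print(swapped.shape)                # → (4096,) — one swapped frame, same size
```

A real pipeline would train the encoder and both decoders jointly so that each decoder faithfully reconstructs its own person, then run `swap_face` over every frame of the video.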
There is another way to create a deepfake: using a GAN (generative adversarial network). In a GAN, two AI algorithms are pitted against each other. The first, called the generator, is given random noise to turn into an image. This synthetic image is mixed in with real images – of a celebrity, perhaps – which are in turn fed into the second algorithm, called the discriminator, which tries to tell real from fake. At first the synthetic images look nothing like faces, so the process is repeated countless times, with the feedback from each round allowing both the generator and the discriminator to improve. With enough cycles, the generator begins to produce thoroughly realistic faces. A seasoned GAN can even create a video clip from as little as a single image.
Research was released by Samsung’s AI Center that spoke about the science behind this GAN approach. “Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters,” said the researchers behind the paper. “We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings.”
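The adversarial back-and-forth between generator and discriminator can be sketched with a toy example. Here the “images” are just single numbers, and the generator learns to mimic samples from a normal distribution. This is a hand-rolled illustration of the GAN training loop under those simplifying assumptions, not anything from the Samsung paper; all names and values are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "images" are single numbers drawn from N(3, 0.5). The generator must
# learn to turn random noise into numbers the discriminator can't tell apart
# from the real ones.
def real_samples(n):
    return rng.normal(3.0, 0.5, n)

g_w, g_b = 0.1, 0.0  # generator: a linear map from noise to a fake sample
d_w, d_b = 0.1, 0.0  # discriminator: logistic regression, outputs P(real)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.05
for step in range(3000):
    # --- Discriminator turn: push real scores up, fake scores down. ---
    z = rng.normal(size=32)
    fake = g_w * z + g_b
    real = real_samples(32)
    d_real = sigmoid(d_w * real + d_b)
    d_fake = sigmoid(d_w * fake + d_b)
    # gradient of the binary cross-entropy loss w.r.t. the discriminator
    d_w -= lr * (-np.mean((1 - d_real) * real) + np.mean(d_fake * fake))
    d_b -= lr * (-np.mean(1 - d_real) + np.mean(d_fake))

    # --- Generator turn: the discriminator's feedback improves the fakes. ---
    z = rng.normal(size=32)
    fake = g_w * z + g_b
    d_fake = sigmoid(d_w * fake + d_b)
    # generator wants d_fake -> 1; chain rule through the discriminator
    g_grad = -(1 - d_fake) * d_w
    g_w -= lr * np.mean(g_grad * z)
    g_b -= lr * np.mean(g_grad)

# After training, the fakes should have drifted from 0 toward the real mean.
print(np.mean(g_w * rng.normal(size=10000) + g_b))
```

In a real deepfake GAN, the same loop runs over neural networks and pixel arrays instead of a two-parameter line, but the alternation – discriminator turn, then generator turn, thousands of times – is exactly this.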
POPULAR DEEPFAKE APPS
Fear not, those of you who wish to jump on the deepfake train but lack the skills to build one yourself. There are apps out there to help; some will even make the deepfake for you. Zao is a deepfake app that lets the user upload images and have the Zao AI engine swap the user’s face into a wide selection of celebrity video clips. Zao is only available for iOS.
Deepfakes web β is a web service that lets you create deepfake videos, but the learning curve here is a bit steeper than with something like Zao. Training can take up to 4 hours, with another 30 minutes or so to swap the faces using the trained model.
AvengeThem. Marvel fans, this one could be for you. This website lets the user swap their face onto a Marvel character. While it’s not a true deepfake, it’s close enough to one, and one kids can have fun with.
There are plenty more deepfake apps out there. A little Googling can go a long way, depending on just how realistic you want your deepfake to look.
WHAT ARE DEEPFAKES DOING NOW
Try this amazing deepfake, which puts Tom Holland and Robert Downey Jr. in Back to the Future…
The technological advances we have seen in the past year have been amazing and have really fueled the deepfake boom. Unfortunately, much of that boom has not been a positive thing.
In September 2019, an AI firm called Deeptrace found more than 15,000 deepfake videos online – nearly double the number from just nine months earlier. Of those videos, 96% were pornographic, and 99% of those mapped the faces of famous female celebrities onto porn stars.
That said, there are plenty of spoof or satire deepfake videos out there…
…but as Boston University professor of law Danielle Citron puts it, “Deepfake technology is being weaponized against women.”
HOW TO TELL WHAT’S REAL
And while the technology is making it more and more difficult for viewers to tell what’s real, there are ways for the human eye to catch the fakes. One tell: faces in deepfakes often don’t blink normally. The reason is that nearly all the images fed to the AI show the subject with their eyes open, so the AI never learned how to make the character blink. Voila, right? Wrong. Technology advances, and AIs adapt and change.
Poor quality, on the other hand, most definitely can be spotted: patchy skin tone, bad lip-syncing. Fine details can also be a giveaway, such as hair or poorly rendered jewelry and teeth. Lighting effects, such as reflections in the iris, can be another.
HOW WORRIED SHOULD WE BE?
It depends on who you ask. University of Southern California Professor Hao Li told the BBC, “We are already at the point where you can’t tell the difference between deepfakes and the real thing.” He added, “It’s scary.”
He then went on to talk about something we’ve all seen firsthand. “Just think of the potential for misuse and disinformation we could see with this type of thing,” says Prof Li. Nancy Pelosi, anyone?
As with most things, when given to the wrong people, bad things can happen. The potential for this to become the norm is quite high, so how do we combat it?
Well, if AI is going to be the problem, then AI is also going to be the answer. Governments, tech firms, and universities across the globe are funding research to better detect deepfakes. In fact, Microsoft, Facebook, and Amazon backed the very first Deepfake Detection Challenge, in which research teams from around the world competed to build the best deepfake detector. The importance of this should go without saying; left unchecked, deepfakes could do real harm to a great many people.
But it’s not all doom and gloom. There have been some pretty fun and funny videos to come out of this technology. Some are just creepy. Take a look at a few below, and while you do, ponder the good and the bad of it. Remember the saying, don’t believe everything you hear? Well, we can now toss in: seeing isn’t believing, either.