Scientists Say Humans Think AI-Generated Faces Are Better Than Real Ones

An alarming new study suggests that humans actually prefer AI-generated faces to real ones. That's bad news in the fight against deepfakes.

By Doug Norrie | Published



When it comes to deepfakes on the internet, there's no doubt that the technology is improving to the point where it is becoming increasingly difficult to tell the real thing from AI-generated faces. Heck, sometimes this is just a fun exercise, with entrepreneurial folks imagining what certain actors and actresses would look like in iconic roles. But in other cases, folks are using deepfakes to sow confusion or even push disinformation to large numbers of people. That's when things start to get really concerning, and it seems we are reaching the point at which humans are starting to "trust" the fake faces more than the real ones.

In a study published in Proceedings of the National Academy of Sciences USA (via Scientific American), in a paper titled "AI-synthesized faces are indistinguishable from real faces and more trustworthy," researchers put a mix of AI-generated and real faces in front of hundreds of participants to see if they could spot the fakes. The researchers also asked participants to rate the "trustworthiness" of the faces they saw. The results were far from encouraging. In fact, it would seem there is now technology able to produce new faces that are almost indiscernible to the human eye. This could have far-reaching implications for what we see online, and what we can trust when it comes to new content.

At the outset, two different neural networks communicate back and forth to build out different kinds of faces. Essentially, one network creates the faces while the other provides feedback on how "real" those AI-generated faces look, judged by comparison against pictures of actual humans. As more and more information was shared back and forth, the first network was able to refine its output down to the individual pixel, producing faces that passed muster with the second network, known as the discriminator.
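That back-and-forth is the classic generative adversarial network (GAN) setup. The system actually behind these faces, NVIDIA's StyleGAN2, is far more elaborate, but a toy sketch of the loop might look something like this; all sizes, layers, and training choices below are illustrative assumptions, not the study's method:

```python
# Minimal GAN sketch: a generator learns to fool a discriminator.
# Purely illustrative; sizes and architectures are assumptions.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # toy dimensions, nothing like StyleGAN2's

generator = nn.Sequential(          # turns random noise into a fake "image"
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(      # scores how "real" an image looks
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),              # raw logit: higher means "more real"
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.rand(32, IMG_DIM) * 2 - 1   # stand-in for real photos
    fake = generator(torch.randn(32, LATENT_DIM))

    # Discriminator step: learn to call real images real and fakes fake.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: use the discriminator's feedback to make fakes
    # that it mistakes for real ones.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

As the generator gets better, the discriminator's feedback forces ever finer corrections, which is what the researchers mean by the network working its way down to the individual pixel.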

The first part of the study involved showing the pictures to a group of testers. With 400 fake faces and 400 real ones in the pool, the study's participants were broken into three groups. The first group simply tried to differentiate between real and fake. Pretty easy, right? Not so much. More than 300 people averaged no better than 50 percent accuracy when choosing which was which. Closing your eyes and randomly saying real or fake would, in theory, have yielded a similar result.
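To see why 50 percent is exactly what blind guessing gets you, here's a quick sanity-check simulation; the participant and trial counts are assumptions for illustration, not the paper's protocol:

```python
# Coin-flip "judges" labeling faces at random cluster right around 50%.
import random

random.seed(0)
participants, judgments = 315, 128  # assumed counts, not the study's exact setup
scores = [
    sum(random.random() < 0.5 for _ in range(judgments)) / judgments
    for _ in range(participants)
]
print(f"Mean accuracy from pure guessing: {sum(scores) / len(scores):.1%}")
```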

In the second study, more than 200 people were coached by the researchers on what to look for when spotting AI-generated faces. Essentially, they were given a cheat sheet on where to focus to see if they were being duped. This group had slightly better results, but not by a ton: they picked correctly only 58 percent of the time. That's still probably not enough to think the general public could start distinguishing deepfakes in real time.

And then there was arguably the most disturbing part of the study: the third group. Another 200-plus people were asked to rate the "trustworthiness" of different faces from the larger sample. On a scale of 1 to 7 (least to most trustworthy), the group scored each pic. The results? The fake faces averaged 4.8 compared to 4.4 for the real ones. Basically, fake beat real by roughly 9 percent!

Look, if we are making Millie Bobby Brown into Princess Leia, no big deal. But with deepfake technology, there are all sorts of ways things can go haywire. And if humans like the fake faces more than the real ones, then we really might be in trouble.