Research Finds AI Deepfaked Faces Look More Real Than Genuine Photos

These faces may look realistic, but they were computer generated.

Research reveals that deepfake faces generated by artificial intelligence (AI) look more real than genuine photos.

According to the study, people cannot reliably distinguish photos of real faces from AI-generated images.

As AI deepfake technology, like “generative adversarial networks” (GANs), becomes more widely available, fake “photos” could erode social trust and change the way people communicate online.


The findings were published in the paper “On the Realness of People Who Do Not Exist: The Social Processing of Artificial Faces” in iScience.

In an article for Reaction, Manos Tsakiris, one of the authors of the study, says it remains unclear why humans find deepfake faces more real-looking than actual photos.

However, it does highlight the major advances in the AI deepfake technology used to create these images.

Curiously, the research found that the faces that people rated as less attractive were also regarded as more real.

Tsakiris suggests that these GAN-generated faces could look more real to people “because they are more similar to mental templates that people have built from everyday life.”

Potential Consequences

However, this shift to a cultural landscape where it becomes impossible to distinguish between real faces and AI-generated faces could have repercussions on “social trust.”

Social trust describes the extent to which people extend trust to unfamiliar others.

“In general, we tend to operate on a default assumption that other people are basically truthful and trustworthy,” Tsakiris writes in Reaction.

“The growth in fake profiles and other artificial online content raises the question of how much their presence and our knowledge about them can alter this ‘truth default’ state, eventually eroding social trust.”

The potential ubiquity of realistic yet fake online content could require people to start thinking and responding differently.

Tsakiris explains: “If we are regularly questioning the truthfulness of what we experience online, it might require us to re-deploy our mental effort from the processing of the messages themselves to the processing of the messenger’s identity.”

He says that people need to be more critical when they encounter a digital face — employing reverse image searches to check the authenticity of a photo and questioning social media profiles with little personal information.

Tsakiris says that the next online frontier could be improved algorithms for detecting fake digital faces so that people can identify what is real and what is fake on the internet.

Image credits: All photos by