People Are Only Slightly Better Than a Coin Flip at Telling AI Images From Real Photos

AI-generated photographer.

Just yesterday, PetaPixel reported that AI-generated models have started appearing in Vogue magazine, and now a new Microsoft study suggests it is unlikely that many readers realized the photorealistic images were synthetic.

A recent study from Microsoft’s AI for Good Lab highlights the challenges people face in recognizing AI-generated images. According to the research, individuals’ ability to detect these images was “only slightly higher than flipping a coin.”

Participants in an online quiz titled ‘Real or Not,’ developed by Microsoft and used as the foundation for the study, correctly identified images only 62% of the time.

“Generative AI is evolving fast and new or updated generators are unveiled frequently, showing even more realistic output,” the study’s authors write. “It is fair to assume that our results likely overestimate nowadays people’s ability to distinguish AI-generated images from real ones.”

The study involved over 12,500 participants who evaluated around 287,000 images, a mix of real and AI-generated photos. The results indicate that people are marginally better at identifying human faces (65% success rate) than they are at distinguishing real landscapes (59% success rate). The researchers attributed this difference to humans’ innate ability to recognize faces, supported by other research such as a study from the University of Surrey, which notes that our brains are “drawn to and spot faces everywhere.”

Interestingly, participants performed similarly when identifying all images (62%) and when asked to focus exclusively on AI-generated images (63%).

Images produced by various leading generative models were included in the quiz. Those made using Generative Adversarial Networks (GANs) yielded the highest error rate: participants failed to identify them correctly 55% of the time. However, the study clarified that the realism of AI output isn't dictated by model type alone.

“We should not assume that a model architecture is responsible for the aesthetic of its output, the training data is,” the researchers write. “The model architecture only determines how successful a model is at mimicking a training set.”

Some of the most difficult images to judge were real photos that contained visual elements that appeared unnatural, such as unusual lighting. Although these characteristics may seem artificial at first glance, they were the result of authentic conditions — not digital manipulation.

The research team is also developing its own AI detection tool, which it says achieves over 95% accuracy in identifying both real and synthetic images. However, similar tools promised in the past have failed to materialize.

You can have a go at the quiz here.
