A free web-based app that claims to identify images generated by artificial intelligence (AI) is at the center of a dispute between supporters of Israel and supporters of Palestine.
Content warning: This article contains upsetting descriptions and imagery of baby deaths.
After Hamas, a militant group that the United States designates as a terrorist organization, attacked kibbutzim and a music festival inside Israel, horrendous stories of what Hamas did to Jewish citizens have abounded.
One of the more disturbing stories was of babies being burned to death. When photos of charred corpses were published by Israel’s official account on Twitter (now known as X), the images were run through Optic’s AI or Not software, which labeled the photos as fake.
The photos were first declared AI-generated after conservative Jewish commentator Ben Shapiro posted them to his Twitter feed on Thursday with the caption: “You wanted pictorial proof of dead Jewish babies? Here it is, you pathetic Jew-haters.”
That same day, provocateur Jackson Hinkle shared Shapiro’s post alongside a screenshot from Optic’s AI or Not which declared: “This image is generated by AI.”
Partisans have jumped on this finding with critics accusing the official Israel account of spreading propaganda through AI-generated imagery.
Is the Photo of a Burnt Baby Really Generated by AI?
An expert tells 404 Media that the photo is not generated by AI and is real. What it shows exactly cannot be confirmed — but it appears to be a genuine photo.
Hany Farid, a professor at UC Berkeley and a world leader in the detection of digitally manipulated images, says that the photo contains elements that can’t be replicated by AI.
“One of the things these generators have trouble with is highly structured shapes and straight lines,” Farid tells 404. “If you see the leg of the table and the screw, that all appears perfectly and that doesn’t typically happen with AI generators.”
According to Farid, the other giveaway is the coherent light within the picture. A source from above provides consistent shadows throughout the image.
“The structural consistencies, the accurate shadows, the lack of artifacts we tend to see in AI — that leads me to believe it’s not even partially AI-generated,” Farid says.
Farid says that he has his own AI image detectors which classified the photo as real.
Even Optic’s own website says “AI or Not may produce inaccurate results.”
In a highly charged conflict such as the one playing out in Israel and Palestine right now, and in a world where disinformation has become rife, it can be difficult to ascertain what is real and what is not.
Photographs used to be near-certain pieces of evidence, but it is becoming clear that the rise of AI imagery is putting an end to that.
A brief search on Twitter (or X) shows multiple accounts declaring the photos AI-generated, and articles have been written denouncing them as such, yet that is seemingly not the case.
Clearly, AI image detection tools have some way to go and should not be taken as fact.
“It’s a second level of disinformation,” Farid adds in the 404 Media article. “There are dozens of these tools out there. Half of them say real, half say fake, there’s not a lot of signal there.”