Social Media Platforms Are Trying to Prove Images Are Fake the Wrong Way


There’s a clear need for a consistently effective way to tell if an image on news and social media is fake. However, current implementations of fake-detection technology have missed the mark and risk undermining the more effective tools that will undoubtedly arrive later.

One of the most notable “offenders” so far has been Meta, the parent company of the hugely popular social media platforms Facebook and Instagram, plus the Twitter/X competitor, Threads. In May, Meta’s “Made with AI” tag on Instagram made waves for being remarkably ineffectual: real photos were far too often unfairly dinged as AI-made, while intentionally faked images could bypass the checks altogether.

[Image: a beach scene in early evening light, with people wading near a pier]
Instagram’s ‘Made With AI’ tag is far from infallible. This image wasn’t made with AI, for example.

More recently, on Facebook, where many people — rightly or wrongly — get their news, the platform mislabeled a real photo of the July 13, 2024 Trump assassination attempt as fake.

In the case of Instagram’s “Made with AI” tag, the system relied on metadata analysis that didn’t reliably account for all photo editing tools and how people, especially photographers, use them. Meta has already reversed course on some of this, putting the tools back into the oven to cook a bit longer.

The issue with the Trump photo on Facebook is a bit different. Someone edited an image of the immediate aftermath of the assassination attempt, making it appear like the Secret Service agents surrounding Trump were smiling. This is a minor change at the pixel level, affecting only a tiny portion of the image. On a narrative level, however, it is a significant edit that entirely changes the nature of an important real-world event. Fake pictures like these have substantial disinformation potential. They’re insidious.

So, when Facebook inadvertently labels real, unedited images as fake, it is, at the very least, an understandable overstep. Most people agree it is important to prevent edited images of newsworthy events from spreading unchecked.


The other side of that coin, though, is that when a fake-image detection system fails by mislabeling real photos as fake, as has happened on Meta’s two most prominent platforms in the last few months, the door is opened for two dangerous things to occur.

The first is that people assume Meta has some agenda, sowing distrust in the platform’s systems by leading users to question its motives. The second and arguably more dangerous outcome is that people lose faith in image fact-checking systems altogether.

If society throws its hands in the air, saying, “Who knows what’s real and what’s fake? It’s impossible. I’ll believe whatever I want,” we collectively have a problem. Reliably accurate information is vital, especially regarding global events that can impact millions and even billions of people. The truth is a powerful shield against propaganda and intentional deception, but poorly implemented automated fact-checking systems introduce cracks in the armor.

This is not meant to be a Meta-bashing article, as the company is not the only offender, even if it is the highest-profile one. There’s something inherently problematic about all fake-detection systems for AI-generated and edited photos. With how good the technology has gotten, it is extremely difficult to reliably determine when an image is fake.

Unless every photo editing application and AI image generator implements some digital marker that is impossible to remove, I don’t think it’s overly pessimistic to believe these systems will fail. It’s also safe to say that universal, tamper-proof marking isn’t anywhere near happening.

What is left to do if current image checks are unreliable and a potential solution is all but impossible?

The Path Forward Requires Shifting Focus Away from Trying to Prove Something is Fake and Toward Being Able to Show When Something is Verifiably True

While it can feel uncomfortable at first, and may seem like too little to deal with emerging AI technology, the correct approach is to prioritize labeling images as verifiably real rather than worrying about whether something is fake.

This is the approach that the Content Authenticity Initiative (CAI) takes with the C2PA (Coalition for Content Provenance and Authenticity) standard. While that system was devised well before AI image generators spread like wildfire, it remains a promising way to securely maintain the provenance of authentic images.

In an ideal world, on news and social media websites, there would be an easy, instant way to see if an image is verified, giving users a way to see when, where, and how an image was captured and what, if any, editing has been done to it. Unlike regular metadata, this provenance data is cryptographically signed and cannot be quietly altered. If an unbroken chain of information is unavailable, the image simply cannot be verified.
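To make that concrete, below is a minimal, conceptual sketch in Python of what such a check could look like. The Manifest structure, the trusted-key set, and the check_provenance function are illustrative assumptions made for this article, not the actual C2PA data model or any shipping library's API; real Content Credentials embed signed manifests in the image file itself and rely on certificate-based trust lists.

```python
# A minimal, conceptual sketch of a C2PA-style provenance check.
# The Manifest structure and trust list below are hypothetical illustrations,
# not the real C2PA data model or any library's actual API.
from dataclasses import dataclass
from hashlib import sha256

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


@dataclass
class Manifest:
    """One link in the provenance chain: who did what to which bytes."""
    action: str        # e.g. "captured", "cropped", "color-graded"
    asset_hash: bytes  # SHA-256 of the image bytes after this step
    signer_key: bytes  # raw public key of the camera or app that signed it
    signature: bytes   # signature over action + asset_hash


def check_provenance(image_bytes: bytes,
                     chain: list[Manifest],
                     trusted_keys: set[bytes]) -> str:
    """Return 'verified' only when every link checks out, else 'unverified'.

    Note the asymmetry the article argues for: a failed check never means
    'fake', only 'cannot be verified'.
    """
    if not chain:
        return "unverified"  # no provenance data at all

    # The final manifest must describe exactly the bytes we were handed.
    if chain[-1].asset_hash != sha256(image_bytes).digest():
        return "unverified"

    for manifest in chain:
        # Only signers we already trust (camera vendors, editing apps) count.
        if manifest.signer_key not in trusted_keys:
            return "unverified"
        payload = manifest.action.encode() + manifest.asset_hash
        try:
            Ed25519PublicKey.from_public_bytes(manifest.signer_key).verify(
                manifest.signature, payload)
        except InvalidSignature:
            return "unverified"  # tampered metadata breaks the chain

    return "verified"
```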

It’s essential to resist the temptation to conflate systems that try to detect fake images with those that aim to show when an image is undeniably real. While the goal may be similar in both cases, the path forward is dramatically different, and the latter seems far more effective.

Methods of checking for fakes will be in a constant arms race with technology, and mistakes are inevitable. A system that can show when an image is real — and say nothing about whether an unverified image is real or not — may not have the reach some people demand, but it also will not screw up. A fake image will never be labeled as real. Sure, a C2PA-based system can’t say anything about photos that lack the necessary data, but staying silent is better than guessing wrong.
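Continuing the hypothetical sketch above (and assuming its Manifest and check_provenance definitions are in scope), a short usage example shows that asymmetry in practice: a signed capture verifies, while a doctored file or one with no provenance data simply comes back "unverified" rather than ever being promoted to "real" or condemned as "fake."

```python
# Usage example for the conceptual sketch above; the key generation and the
# stand-in "camera" here are illustrative, not how real C2PA signing works.
from hashlib import sha256

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

camera_key = Ed25519PrivateKey.generate()
camera_pub = camera_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
trusted = {camera_pub}  # e.g. keys belonging to known camera vendors

photo = b"raw image bytes straight off the sensor"
capture = Manifest(
    action="captured",
    asset_hash=sha256(photo).digest(),
    signer_key=camera_pub,
    signature=camera_key.sign(b"captured" + sha256(photo).digest()),
)

print(check_provenance(photo, [capture], trusted))              # verified
print(check_provenance(b"doctored bytes", [capture], trusted))  # unverified
print(check_provenance(photo, [], trusted))                     # unverified: no data
```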

[Diagram showing the stages of content creation: creation, editing and generative AI, and publishing]
Credit: Content Authenticity Initiative

With broader adoption of the technology the CAI is working on, from camera-level C2PA tools to support on websites and social media platforms, a world where many of the most impactful images carry a verified chain of information is not impossible. If enough photographers and editing apps support C2PA, major journalistic institutions could start requiring it for all photos, and social media platforms may not be far behind, provided they prioritize accuracy over other concerns.

[Image: the Leica M11-P digital camera]
The Leica M11-P has C2PA technology | Credit: Content Authenticity Initiative / Leica

At this point, C2PA adoption has been frustratingly slow. C2PA tools have been implemented in only a handful of cameras from companies including Fujifilm, Leica, Nikon, and Sony, and only in specific pro-oriented models. The required technology has not been adequately deployed at any stage of the photographic process: creation, editing, or publishing. It is possible to show when a photo is verifiably real, but it will only get more challenging to determine when an image is fake.

There is immediate pressure to fill this void, and that pressure is driving the half-baked, unreliable fake checks we’ve seen so far. However, when it comes to the fight over truth in photography, the priority cannot be speed. The focus must be on accuracy. The framework for the technology to confirm truth in imagery already exists; it just needs to be implemented across the platforms where people get their news.


Image credits: Header photo licensed via Depositphotos.
