A Plea to Manufacturers: We Need the Definitive Photo


The Africa Geographic 2019 winner “Tim in Amboseli National Park, Kenya” is a great photo. It shows Tim the elephant majestically backlit against a dramatic sky. Except it didn’t win: it was later disqualified for excessive manipulation.

One RAW file and some close inspection later, the judges decided that the photo exceeded the editing allowed by the rules. Image modification is part and parcel of photography, but should manufacturers be providing a way to detect edits that go too far and help identify faked images?

Just using the word “faked” implies there’s something wrong or malicious about the editing process. Ultimately, editing is a tool, and it’s the intended purpose that determines whether the result is a “fake” or not. Where it becomes a little more nuanced — at least in photo competitions — is when there are rules about editing and the question is whether the edits you have undertaken break those rules.

Many competitions whose themes are based upon real-world scenes — as opposed to deliberate compositing — allow global edits alongside limited dodging and burning, but tend to draw the line at feature editing or, indeed, adding or removing objects.

Where the photo of Tim fell foul was in the editing of his ears to make them less distracting. And if you think that this is being picky, editing out a single piece of straw from a photo in the 2015 Nikon-Walkley Awards also led to a disqualification.

You could argue that this shouldn’t overly concern us, given that these manipulations are designed to improve the aesthetics of the photo; however, you’d be hard-pressed to use this argument at the prestigious World Press Photo competition where, in 2015, 20% of images that made it to the penultimate round were disqualified after being found to contain excessive edits that added or removed features.

A good example of this kind of manipulation came from photographer Stepan Rudik’s 2009 series on street fighting in Kyiv; on first viewing you would struggle to spot the offending foot. Rudik used cropping (which is allowed) to remove the surrounding visual clutter, but the distracting appendage was cloned away. World Press Photo doesn’t believe these types of edits are malicious; however, it is a worrying situation for a press photography competition where the veracity of the image is central. Imagine if the same level of carelessness were applied to the articles written in the same publications.

That said, Rudik’s photo is undoubtedly better without the foot, which raises the question of why cropping is any better (or worse) in this instance. It also points to Werner Herzog’s notion of the “ecstatic truth”: the image might not be literally truthful (an object has been altered), but it conveys a truth about the subject (street fighting).

Of course, hoax photos have always existed in one form or another: the Cottingley Fairies, Bigfoot, and Evans’s levitation trick are prime examples. Hoaxes also include purportedly staged photos, such as Capa’s “Falling Soldier” and Fenton’s “Valley of the Shadow of Death.” And then there are photos that are expressly intended to deceive, what we might now call deepfakes.

What’s the Problem?

Firstly, we need to define what the problem is. From the perspective of the viewer, it is about the intent of the image and, secondarily, its authenticity. “Intent” is problematic because there is the intent of the photographer as well as that of the publisher, which may not align. Where there is deception, the real intent of the photo may not be known or understood by the viewer. Ultimately, we need to be able to trust the publisher, while at the same time have systems in place that can detect faked images.

That leads us to authenticity: an image may not be authentic, but it is presented for what it is and is not intended to deceive (e.g. an obvious composite). Problems arise when an image is not authentic but is presented as original, and is therefore intended to deceive (e.g. doctored missile launch photos passed off as genuine).

So what is the solution? Photo competitions such as World Press Photo ask for RAW files to be submitted alongside the post-production JPEG. That’s good as far as it goes, but it doesn’t stop someone tampering with the RAW file itself. What we need is a way to definitively identify the RAW file as the “original.” In fact, this is not a new idea: it is broadly termed point-of-capture certification, and a number of technical approaches exist that are designed to “fingerprint” a photo. These solutions — such as Serelay and Truepic — are designed to be compliant with C2PA, a specification for image provenance and authenticity.
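
To make the fingerprinting idea concrete, here is a minimal sketch in Python of how point-of-capture certification could work in principle, assuming SHA-256 as the fingerprint, the third-party cryptography package for signing, and a hypothetical file name; it illustrates the general approach only, not how Serelay, Truepic, or the C2PA specification actually implement it:

# Illustrative sketch only -- not the actual Serelay, Truepic, or C2PA scheme.
# Requires the third-party "cryptography" package (pip install cryptography).
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A device-held key pair; in a real camera this would live in secure hardware.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

def attest(image_path: str) -> tuple[bytes, bytes]:
    """Fingerprint the image and sign the fingerprint at the point of capture."""
    digest = hashlib.sha256(open(image_path, "rb").read()).digest()
    return digest, device_key.sign(digest)

def verify(image_path: str, signature: bytes) -> bool:
    """Later, anyone holding the public key can check the file is untouched."""
    digest = hashlib.sha256(open(image_path, "rb").read()).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

# "DSC_0001.NEF" is a hypothetical file name used for illustration.
fingerprint, signature = attest("DSC_0001.NEF")
print(verify("DSC_0001.NEF", signature))  # True while the RAW file is unmodified

The essential property is that the signature is created as close as possible to the moment of capture, so any subsequent edit to the file invalidates it.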

A Manufacturer-Led Solution

Of course, the problem with these solutions is that they are one step removed from the point of capture: the camera. This is particularly true for interchangeable lens cameras, which to this point don’t take an app-based approach. Being able to demonstrate that a RAW file represents what was actually captured by a camera would be a significant leap forward for end users who need demonstrable “truth,” particularly in an era where “alternative facts” are regularly trumpeted. Journalists are an obvious case in point, but law enforcement and security services would also clearly fall into this category. In fact, any application that intersects with the judiciary would benefit.

Such a development would require investment, design time, and implementation by a camera manufacturer to integrate the appropriate technology. The base problem is confirming that a RAW file is identical to the original source; in computing, checksums are used for exactly this purpose. By computing a hash (the checksum) of the original RAW file, a user can, at a later date, independently compute the hash of their copy and compare the two; any change to the file, however small, produces a different checksum.
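
As a minimal sketch of that comparison, assuming SHA-256 as the checksum and using a hypothetical RAW file name and placeholder stored value:

import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 checksum of a file, reading it in chunks."""
    file_hash = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            file_hash.update(chunk)
    return file_hash.hexdigest()

# Checksum recorded when the RAW was first written (placeholder value).
original_checksum = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

# Any later copy can be re-hashed and compared; even a one-byte edit to the
# RAW file produces a completely different checksum.
print(sha256_of("DSC_0001.NEF") == original_checksum)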

What is needed is the secure creation and storage of the checksum; given the proprietary, locked-down nature of camera firmware, this presents an ideal intervention point for camera manufacturers. As the RAW file is written out to the memory card, a hash could be created and stored in the camera’s memory. The hashes themselves take up very little space: more than 150,000 of them fit in a measly 10MB. Extra functionality could be added by uploading the hashes to an authenticity database whenever the camera next connects to a Wi-Fi network; this database could be run by an independent authority, the camera manufacturer, or an employer.
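
Continuing the sketch, with a plain text log standing in for the in-camera hash store and a set lookup standing in for the authenticity database (both assumptions for illustration), registration and later verification might look like this; the storage estimate holds up, as 150,000 hex-encoded SHA-256 hashes at 64 characters each come to roughly 9.6MB:

import hashlib

HASH_LOG = "camera_hashes.txt"  # hypothetical in-camera log, one hex hash per line

def record_hash(raw_path: str) -> None:
    """Called as the RAW file is written to the card: append its hash to the log."""
    digest = hashlib.sha256(open(raw_path, "rb").read()).hexdigest()
    with open(HASH_LOG, "a") as log:
        log.write(digest + "\n")

def load_database(log_path: str) -> set[str]:
    """Stand-in for the authenticity database the hashes are uploaded to."""
    with open(log_path) as log:
        return {line.strip() for line in log if line.strip()}

def is_original(raw_path: str, database: set[str]) -> bool:
    """A third party can later check whether a submitted RAW is in the database."""
    return hashlib.sha256(open(raw_path, "rb").read()).hexdigest() in database

# Hypothetical usage: the camera records the hash, and an editor verifies later.
record_hash("DSC_0001.NEF")
print(is_original("DSC_0001.NEF", load_database(HASH_LOG)))  # True if byte-identical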

This would be a ground-breaking feature for a manufacturer to offer, although it would remain a niche application as few photographers need to verify the originality of their work. As such, it is the sort of feature I would expect on a pro-spec camera targeted at photojournalists, such as Nikon’s Z9 — at least at first.

Whether we will see any manufacturers step in to solve what are very real problems, or leave this space for innovators in the smartphone sector to take the lead, remains to be seen. Maybe the answer will come sooner rather than later: Sony recently joined one such initiative, which at least has some idea of how to tackle the problem.

Photo manipulation has taken place almost since photography was invented; however, the degree of editing available to digital creators today enables increasingly sophisticated image manipulation. Taken alongside the pervasiveness and persuasiveness of photography, the need to demonstrate the veracity of an image is incredibly important in photographic applications that rely on “truth.”


Image credits: Header photo licensed via Depositphotos.
