We’ve had variations of this discussion a few times, but recent developments appear to keep moving the goalposts. What was once seemingly a simple question to answer is getting a lot more complicated as technology advances.
Last month, it felt like the photography community agreed that artificial intelligence (AI)-generated images could never be considered photos. Obviously, readers told me, a photo is only a photo if it’s taken with a camera. Fair enough, I suppose, but this week that stance was challenged as the pervasive nature of AI in smartphones appeared to cross a line. But again, I ask: where is that line? When is a photo no longer a photo?
For several years, many of my colleagues have argued that computational photography would be the natural evolution of the genre. Given that smartphones have quickly become the most popular cameras in the world and rely heavily on computational techniques to compensate for their smaller sensors, I can’t say I disagree.
What exactly is computational photography? In short, it’s a term for processing techniques that manipulate visual data via computation rather than through traditional optical processes. The goal is to improve the output of digital sensors and optical arrays that are too small to produce high-quality images on their own, though the idea can theoretically be applied to a digital camera of any size.
Even in 2023, the vast majority of computational photography is found in smartphones, since they house the camera systems most in need of a digital uplift. But they aren’t the only cameras that have it. For example, OM Digital (previously Olympus) has been the most willing to bring computational photography-like features to its full-size cameras.
Take, for example, the Live ND feature found in OM Digital’s recent cameras like the OM-1. Without using physical filters to reduce the amount of light hitting the sensor, the OM-1 can digitally simulate the effect of neutral density filters ranging from one to six stops.
“The camera will then take a series of quick shots over the set period of time and blend them in-camera,” Matt Williams explains in his review for PetaPixel. “You’re still effectively capturing the progressive motion over that given period of time, but without the disadvantage of excessive light gathering.”
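The blending Williams describes, averaging a burst of short exposures to mimic one long exposure, can be sketched in a few lines. This is a simplified illustration of the general multi-frame technique, not OM Digital’s actual in-camera pipeline; the function name and frame counts are illustrative assumptions:

```python
import numpy as np

def live_nd_blend(frames):
    """Blend a burst of equally exposed frames into one long-exposure-style image.

    Averaging N frames captures motion across the whole burst while each
    frame contributes only 1/N of the brightness, roughly the effect of a
    log2(N)-stop neutral density filter (4 frames ~ 2 stops, 64 ~ 6 stops).
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Toy burst: four tiny 4x4 RGB frames standing in for real captures.
burst = [np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8) for _ in range(4)]
blended = live_nd_blend(burst)
print(blended.shape)  # (4, 4, 3)
```

Because moving subjects land in different places in each frame, the average smears them into the motion-blur look a real long exposure would give, while static areas stay sharp.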
That is without a doubt computational photography, and not a single photographer I have spoken to has ever argued that the resulting image isn’t a photo. But the recent controversy surrounding Samsung’s Moon photo mode shows that for many, there is a point where that computation goes too far.
When a Photo is No Longer a Photo
After a Redditor found that they could trick their Samsung Galaxy smartphone into “fixing” a blurry photo of the Moon they had photographed on their computer screen, outcry that the company was “faking” its Moon mode exploded online. Today, Samsung tried to explain how its AI works, but it’s clear that a large number of people believe that what the company is doing goes too far for the result to be considered a “real photo.”
But why was this the line? I’ve argued before that more technology in cameras is a good thing, but clearly some of you disagree. Obviously, smartphones across the board from a range of companies are doing something like this regularly. Portrait Mode on an iPhone, XPan mode on a OnePlus, long exposure mode on a Vivo, the list goes on. None of these cameras are relying on hardware alone to produce these effects — arguably none of them are relying much on the hardware at all, if we’re being honest. So why are these okay?
Why is Live ND on an OM-1 okay, but Samsung’s Moon photo mode is not?
We really have to ask ourselves what the line is. What if in the future, instead of this discussion being about the Moon, a camera could “fix” photos of a person by recognizing who it is, looking up all available photos of that person online, and then creating a sharp and accurate representation of that person in your new capture when it would have been blurry and unrecognizable? It’s still that person, and it’s still what you would have captured, but it was created by AI and not by gathering the light bouncing off that person’s face. Is that still a portrait of that person?
As I said, last month I was told that if a camera takes a photo, then that’s obviously a photo. The act of taking the image makes it a photo. But if I take a photo of the Moon with a Galaxy smartphone, what it gives me is not a photo?
What even is a photo anymore?
What is Clipped Highlights?
Clipped Highlights is a free, curated, weekly newsletter that will be sent out every Wednesday morning and will focus on a few of the most important stories of the previous week and explain why they deserve your attention. This newsletter is different from our daily news brief in that it provides unique insights that can only be found in Clipped Highlights.
In addition to unique takes on the biggest stories in photography, art, and technology, Clipped Highlights will also feature at least one photo series or art project that we think is worth your time to check out. So often in the technology and imaging space we focus on the how and not the what. We think that it’s just as important, if not more so, to look at the art created by photographers around the world as it is to celebrate the new technologies that make that artwork possible.
If this kind of content sounds like something you’re interested in, we encourage you to subscribe to the free Clipped Highlights newsletter today. You can read this week’s edition right here, no subscription necessary, to make sure it’s something you want in your inbox.
We’ll also make sure to share each edition of Clipped Highlights here on PetaPixel so if you aren’t a fan of email, you won’t be forced to miss out on the weekly newsletter.
Image credits: Header photo via Samsung marketing.