How Adobe Super Resolution Works (and Doesn’t) for Smartphone Photos

When Adobe first unveiled Super Resolution, it promised a way to enlarge photos without the usual loss in quality, but it wasn’t clear whether that kind of magic extended to the trove of images on people’s smartphones.

The short answer is that Adobe admits it designed and optimized Super Resolution with RAW files in mind, but also “trained” it to deal with already-rendered files, including those processed by the devices everyone carries with them. Like cameras, smartphones have increased in megapixels over the years and, in some cases, been equipped with larger image sensors. But there’s often a wall when it comes to doing more with a photo from a phone. It’s easy enough to post it on social media, where smaller displays can hide some of the imperfections. Blowing it up to print, on the other hand, can reveal all.

Phones default to capturing still photos in JPEG or HEIC (on iPhones), and with growing megapixel counts, those files could theoretically be large enough to print or display on bigger screens. Except even the largest image sensors on phones today aren’t on par with the full-frame or APS-C sensors in regular cameras, including models going back more than a decade.

Also read: Adobe Photoshop’s ‘Super Resolution’ Made My Jaw Hit the Floor

Phone manufacturers have turned to software to augment images, utilizing behind-the-scenes tools like HDR, sharpening, and color grading, among others, to render the images people preview right after they shoot them. Adobe feels Super Resolution can do right by those images, and PetaPixel spoke with two of the company’s engineers to get more insight into how that might actually happen.

Parsing Pixels

Eric Chan, Senior Principal Scientist, Digital Imaging at Adobe, covers many of the technical details of how the technology works in a blog post, though he doesn’t specify how it might apply to smartphones.

“We’ve certainly trained our models on a lot of image content that has come from sensors of different sizes, so it covers things from the really small pixels you would find in a normal smartphone,” says Chan in an interview. “That said, if you apply Super Resolution to a high-quality DSLR file, and then apply it to a smartphone image, both will improve, but you shouldn’t expect the improvements to be exactly the same. It’s kind of proportional to the quality of the source.”

Original
Enhanced with Super Resolution and Edited in Lightroom

There are a number of reasons for that. Smartphones will sometimes apply aggressive noise reduction, particularly in low-light conditions, which can smear or soften elements. Adobe’s technology wouldn’t be able to recover the detail underneath, especially if there was texture to the subject in question. Clothing might look more like a solid color than detailed fabric, even after applying the additional resolution. The same would be true of clouds in a night sky or texture in rocks, for instance.

Smartphone JPEGs may also carry heavier compression, making it harder to recover all the detail in an image. Super Resolution could treat compression artifacts as image content, meaning that they’re something to preserve or make more visible rather than clean up or fix. This is why the “cleanest possible source” will often yield the best results when using the feature, Chan says.
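
To picture why artifacts survive the process, consider what classical interpolation does to a heavily compressed file. The sketch below uses Python’s Pillow library and is only an analogy for Adobe’s machine-learning approach; “photo.jpg” is a hypothetical input. The point is that any upscaler, learned or not, works from the pixels it is given, blocking artifacts included.

```python
# A minimal sketch, not Adobe's pipeline: re-encode an image at low JPEG
# quality, then upscale it, and the compression blocks scale up too.
from PIL import Image

src = Image.open("photo.jpg")  # hypothetical input file

# Simulate an aggressively compressed smartphone JPEG.
src.save("compressed.jpg", quality=20)
compressed = Image.open("compressed.jpg")

# 2x per side (4x the pixels, matching Super Resolution's output size).
# Interpolation treats compression blocks as real detail and enlarges them.
big = compressed.resize(
    (compressed.width * 2, compressed.height * 2), Image.LANCZOS
)
big.save("compressed_2x.jpg")
```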

“The results can vary if there was more heavily detailed processing, but you may not see as much of a difference because you’re mostly noticing whatever the camera applied to the shot in the first place,” he says.

Going RAW

Differences in signal-to-noise ratio, plus the smaller pixels (or photosites) on a phone sensor, already make it hard for handsets to deliver higher-quality shots at higher resolutions. Those same factors limit how much can be pulled out of JPEG and HEIC files.
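
A quick back-of-the-envelope calculation shows how stark that gap is. The sensor dimensions below are typical approximations, assumed for illustration rather than taken from the article:

```python
# Rough pixel-pitch arithmetic behind the signal-to-noise gap. Assumed,
# typical sensor sizes: full frame is 36 x 24 mm; a common 12MP phone
# sensor (around the 1/2.55-inch class) is roughly 5.6 x 4.2 mm.
import math

def pixel_pitch_um(width_mm: float, height_mm: float, megapixels: float) -> float:
    """Approximate photosite pitch in microns for a sensor at a given resolution."""
    area_um2 = (width_mm * 1000) * (height_mm * 1000)
    return math.sqrt(area_um2 / (megapixels * 1e6))

print(pixel_pitch_um(36, 24, 24))    # ~6.0 um: 24MP full-frame camera
print(pixel_pitch_um(5.6, 4.2, 12))  # ~1.4 um: 12MP phone sensor

# Each full-frame photosite gathers roughly (6.0 / 1.4)^2, or about 18x,
# the light of the phone's, a big part of the noise difference.
```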

While RAW images from phone sensors won’t match the detail of those from cameras, they contain so much more to work with, says Chan.

“We just have a lot more freedom with RAW on a smartphone,” he says. “One of the reasons is because, with RAW, Super Resolution includes another part of the pipeline that we introduced in 2019, which was the Enhanced Details feature that really optimized the details at the pixel level when we’re interpolating from the RAW space to RGB. So, when you apply some resolution to a RAW file, you’re really getting both features in a single step, and that’s where a lot of the improved resolution and quality at the pixel level comes from.”
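
For readers unfamiliar with that interpolation step: raw sensor data records only one color per photosite, laid out in a Bayer mosaic, and “demosaicing” fills in the two missing channels at every pixel. The NumPy sketch below shows a naive bilinear version of that step; it is strictly illustrative, since the feature Chan describes replaces this kind of hand-written interpolation with a trained model.

```python
# A toy bilinear demosaic of an RGGB Bayer mosaic: the "interpolating from
# the RAW space to RGB" step Chan mentions, in its simplest classical form.
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(bayer: np.ndarray) -> np.ndarray:
    """Fill in missing color samples by averaging each channel's known neighbors."""
    h, w = bayer.shape
    rgb = np.zeros((h, w, 3))
    known = np.zeros((h, w, 3))
    # Scatter the measured samples into their channel planes (RGGB layout).
    rgb[0::2, 0::2, 0] = bayer[0::2, 0::2]; known[0::2, 0::2, 0] = 1  # R
    rgb[0::2, 1::2, 1] = bayer[0::2, 1::2]; known[0::2, 1::2, 1] = 1  # G
    rgb[1::2, 0::2, 1] = bayer[1::2, 0::2]; known[1::2, 0::2, 1] = 1  # G
    rgb[1::2, 1::2, 2] = bayer[1::2, 1::2]; known[1::2, 1::2, 2] = 1  # B
    kernel = np.ones((3, 3))
    for c in range(3):
        total = convolve2d(rgb[..., c], kernel, mode="same")
        count = convolve2d(known[..., c], kernel, mode="same")
        filled = total / np.maximum(count, 1)
        # Keep measured values; interpolate only where the sensor had no sample.
        rgb[..., c] = np.where(known[..., c] == 1, rgb[..., c], filled)
    return rgb

mosaic = np.random.rand(8, 8)           # stand-in for raw sensor data
print(demosaic_bilinear(mosaic).shape)  # (8, 8, 3)
```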

Photo captured with Apple ProRAW, Unedited

There is a processing caveat to how a workflow might get the best results. Chan says that using Photoshop or other tools to apply destructive edits to an image, like increased sharpening or enhanced edges, before running Super Resolution may not yield the results one hopes for.

“Any artifacts or things that you may get from messing around with the edges might just become more visible,” says Chan. “We still recommend that, whenever possible, if you’re ever going for quality with the intent of making the biggest output possible, start with RAW because you’re going to get cleaner edges that way. But a lot of people already have images that they’ve captured in the past that are already in JPEG format, and they can still use Super Resolution for those.”

Photo captured with Apple ProRAW, Unedited, 100% Crop
Photo captured with Apple ProRAW, Enhanced with Super Resolution, 100% Crop

Processing an image through Lightroom first, however, won’t cause the same potential issues because those edits are done as part of a non-destructive, parametric pipeline. In other words, running Super Resolution on an image coming from Lightroom upscales the underlying source pixels and then reapplies the edits (presets, clarity, color grading, temperature, and so on) on top of the resulting DNG file. Alternatively, you could run the untouched source image through Super Resolution first and add the resulting DNG file to Lightroom to edit. Either way, the output is identical.
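
Since the wording is subtle, here is the idea reduced to a toy model. Everything in it (the recipe class, the stand-in functions) is hypothetical, not Lightroom’s actual internals; it mirrors only the concept that parametric edits live beside the pixels rather than in them, so the order of operations cannot change the result.

```python
# A minimal sketch of a non-destructive, parametric edit pipeline.
from dataclasses import dataclass

@dataclass(frozen=True)
class EditRecipe:
    exposure: float = 0.0  # stops; stored as a parameter, never baked into pixels

def super_resolution(pixels):
    # Hypothetical stand-in: duplicate samples to fake a 4x-pixel upscale.
    return [p for p in pixels for _ in range(4)]

def render(pixels, recipe):
    # Edits are applied on demand from the recipe; the source is untouched.
    return [p * (2.0 ** recipe.exposure) for p in pixels]

source = [0.2, 0.5, 0.8]
recipe = EditRecipe(exposure=1.0)

# Path A: edit in Lightroom first, then run Super Resolution. Because the
# recipe is parametric, SR reads the untouched source pixels and the recipe
# is simply carried over to the new DNG before rendering.
out_a = render(super_resolution(source), recipe)

# Path B: upscale the untouched source first, then apply the same edits.
out_b = render(super_resolution(source), recipe)

assert out_a == out_b  # identical output either way, as the article notes
```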

Adobe is planning to add Super Resolution to Lightroom CC and Lightroom Classic, though it hasn’t confirmed a release date yet, so for now the feature is available only in Adobe Camera Raw.

Compared to older DSLR photos

Josh Haftel, Director of Product Management, Photography at Adobe, says really high ISO is often the culprit for the noise and artifacts that taint a phone image. Night and low-light modes try to remedy that through bracketed HDR capture that keeps ISO values lower, but if the process also involves sharpening and noise reduction, it can make Super Resolution’s job more difficult. HDR rendering, which is highly popular among phone manufacturers these days, doesn’t present the same challenges on its own, he says.
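
The reason bracketed capture helps is statistical: averaging several frames of the same scene suppresses random noise, which lets the camera keep ISO low. The simulation below (NumPy, with made-up numbers) shows the effect in isolation:

```python
# A toy demonstration of burst averaging: stacking N noisy frames cuts
# random noise by roughly sqrt(N), which is why night modes can bracket
# instead of cranking ISO. Scene and noise values here are simulated.
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((100, 100), 0.25)                     # dim, flat test scene
frames = scene + rng.normal(0, 0.05, (16, 100, 100))  # 16 noisy exposures

single = frames[0]
stacked = frames.mean(axis=0)  # the burst average a night mode would compute

print(round(single.std(), 4))   # ~0.05 noise in one frame
print(round(stacked.std(), 4))  # ~0.0125, i.e. 0.05 / sqrt(16)
```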

“The files where you’re going to have the biggest impact, even if they were JPEGs or HEICs, are the ones where the ISO was as low as possible and noise reduction hasn’t been applied,” says Haftel in an interview. “You’ll see more impact because there’s more detail in files where the lighting was really good. From a practical perspective, imagine you apply Super Resolution to a photo taken inside a bar. The improvement isn’t going to be as visible as it would be in a perfectly lit photo taken outside.”

Original
Enhanced with Super Resolution and Edited in Lightroom

The results for even well-lit images can be subjective, however, especially if highlights are blown out or the photo was taken with a subpar sensor. Images taken on an iPhone 5 in 2012 may yield different results in Super Resolution than those taken on an iPhone 11 Pro in 2019, for instance.

That gamut isn’t quite as wide for DSLRs of the past. JPEGs coming from 2006-07-era sensors are typically “much cleaner” than those from smartphones today, even if both shot at 12 megapixels. Those DSLR image sensors were still several times larger than the ones in current smartphones, so noise isn’t going to be as problematic, says Chan. While RAW files from those older cameras would still be best, their JPEGs should retain more visual information than most smartphone images can.

Also read: Super Resolution Eliminates the Advantage of High-Megapixel Cameras

To offset the limits of their image sensors, phone manufacturers have leaned on software optimizations that try to enhance a photo even before the user snaps it, applying terminology like “AI” or “scene recognition” to them. That creates additional processing, which may affect what Super Resolution can and can’t do.

Photo captured on an iPhone 7 Plus, enhanced with Super Resolution.

“Color optimization and contrast, for the most part, don’t have the same kinds of impacts, it’s more about the detail,” says Haftel. “The only thing that’s different between sharpening and contrast is really the radius of what you’re doing things at because it’s all just contrast relationships from pixel to pixel, so you could argue that some of those scene optimizations could result in a reduction of perceived detail. But for the most part, scene optimizations that aren’t focused on details like face or skin smoothing should not have a big impact on Super Resolution’s ability to work.”
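
Haftel’s point about radius maps neatly onto unsharp masking, the classic operation behind both kinds of adjustment. The sketch below uses Pillow’s real ImageFilter.UnsharpMask; “photo.jpg” is a hypothetical input:

```python
# Same operation, different radius: a small radius reads as sharpening,
# a large radius reads as a local-contrast ("clarity"-style) adjustment.
from PIL import Image, ImageFilter

img = Image.open("photo.jpg")  # hypothetical input file

# Small radius boosts pixel-to-pixel contrast: conventional sharpening.
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150))

# Large radius boosts contrast between broad regions instead of edges.
local_contrast = img.filter(ImageFilter.UnsharpMask(radius=50, percent=30))
```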

Changing perceptions

Adobe’s DNG format has been widely adopted by smartphone manufacturers that offer RAW shooting. Apple developed its own ProRAW format when it launched the iPhone 12 Pro and 12 Pro Max, marking the first time the company officially included RAW capture in its own camera app. Previously, users needed a third-party app, of which there were plenty in the App Store, to capture images that way.

Also read: Photoshop Super Resolution vs Topaz Gigapixel AI: Upscaling Throwdown

The format won’t work with Apple’s Portrait mode, though it will work with other features, like Night mode, Deep Fusion, and Smart HDR. These capabilities largely mirror what others on the Android side were already doing. If there’s a pro or manual mode on an Android phone, there’s a good chance RAW capture is available, and while Google’s Pixel phones don’t have manual shooting, they can save in RAW in almost every mode available, including Night Sight.

But the lion’s share of photos will still come in JPEG until RAW shooting is demystified for the average phone shooter.

“If the intention from the consumer or photographer’s perspective is to do something really wonderful with the photo, whether it’s printed, turned into a gift, or presented really large on TV for all to see, then be really serious about starting from the best input possible, which is the value of capturing in RAW,” says Haftel.

An unedited iPhone 12 Pro Max ProRAW photo superimposed on an enhanced Super Resolution version.

Even for a smartphone JPEG, the size increase can be enormous. A test 1.2MB image from a Pixel 2 turned into a 126MB DNG behemoth after going through the Super Resolution process, a roughly 100x growth in file size. It was hard to tell there was any major difference in output until zooming in and inspecting detail up close.
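
Back-of-the-envelope arithmetic makes the jump less mysterious. The assumptions below are illustrative, not from the article: the Pixel 2 shoots about 12MP, and Super Resolution writes its output as a linear DNG holding demosaiced RGB at 16 bits per channel.

```python
# Why a 1.2MB JPEG can become a ~126MB DNG after Super Resolution.
src_pixels = 12.2e6          # Pixel 2 sensor resolution (assumption)
out_pixels = src_pixels * 4  # Super Resolution: 4x the pixel count

# Linear DNG: 3 channels x 2 bytes (16-bit) per pixel, before compression.
uncompressed_bytes = out_pixels * 3 * 2
print(uncompressed_bytes / 1e6)  # ~293 MB of raw data

# Lossless compression inside the DNG plausibly brings that toward the
# observed 126MB, while the 1.2MB source JPEG stored 8-bit data squeezed
# down by roughly two orders of magnitude.
```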

When asked if the feature could become scalable, Chan says Adobe is looking at other scale factors besides the one it offers now. Currently, Super Resolution produces four times the number of pixels (double the width and height), though he notes the company is open to feedback from users looking for more.

Photos processed with Super Resolution can grow substantially in size.

“When you have training samples of images, you need to train them to up-res to a given scale factor, and making it variable tends to make the model a lot bigger and slower to run. And we felt that it can be very difficult sometimes, if you’re looking at a preview image, to judge what the right scale factor is as a user,” says Chan. “That, plus the fact the models become a lot bigger and slower to run seemed like it wasn’t worth it for us to pursue. If 4x the number of pixels isn’t sufficient, then what would people want to see?”
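
Chan’s explanation of fixed scale factors matches how sub-pixel super-resolution networks are typically built. The generic PyTorch sketch below is not Adobe’s architecture; it’s an ESPCN-style toy showing why the scale is baked into the model’s shape at training time:

```python
# A tiny fixed-scale super-resolution network. The final conv must emit
# 3 * scale^2 channels for PixelShuffle to rearrange into pixels, so a
# different scale factor literally means a different, retrained model.
import torch
import torch.nn as nn

SCALE = 2  # 2x per side = 4x the pixels; changing this requires retraining

class TinySR(nn.Module):
    def __init__(self, scale: int = SCALE):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)  # channels -> spatial upscale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.features(x))

lowres = torch.rand(1, 3, 128, 128)
print(TinySR()(lowres).shape)  # torch.Size([1, 3, 256, 256])
```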
