How Google’s Pixel Smartphone Super Res Zoom Feature Works


Super Res Zoom has been part of Google's Pixel phones since the launch of the Pixel 3, but with the Pixel 7 Pro, Google says the feature took a big step forward, and the company has explained how it works.

How Does Google's Super Res Zoom Work?

The Super Res Zoom feature on Google's Pixel phones combines hardware, software, and machine learning algorithms to create high-resolution images that retain detail even when zoomed in 20 or 30 times. The feature has been part of the Google Pixel ecosystem for several years now, and while there have been a few hiccups in its performance along the way, it has steadily improved.

“Pixel’s approach to zoom is one that combines state-of-the-art hardware, a bunch of awesome software, and then a lot of AI on top of that,” Alexander Schiffhauer, a Pixel Camera and AI Group Product Manager, writes in a blog post. “This means the quality works well throughout a range of zoom settings — not just one specific setting, like 5x or 10x.”

“The combination of HDR+ with bracketing and remosaicing allows zoomed photos that are high resolution and low noise.”

These tools, combined with remosaicing, allow the system to crop into the inner portion of the telephoto camera's 48-megapixel sensor and output a native 12-megapixel image at 10x zoom. That image is then converted into a format that HDR+ with bracketing can work with, reducing the overall noise in the final output, which the company says gives you the best of both worlds: high resolution and low noise.
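As a rough, back-of-the-envelope illustration of that crop (the sensor dimensions below are assumed figures for a 4:3 48-megapixel sensor, not Google's published specifications), the arithmetic works out as follows:

```python
# Back-of-the-envelope sketch (assumed sensor dimensions, not Google's specs):
# cropping the central region of a ~48 MP sensor yields a native ~12 MP image,
# and halving the field of view doubles a 5x lens to an effective 10x.

full_w, full_h = 8000, 6000          # ~48 MP telephoto sensor, 4:3 (assumption)
optical_zoom = 5                     # telephoto lens magnification

# Keeping the central half of the width and height keeps 1/4 of the pixels.
crop_w, crop_h = full_w // 2, full_h // 2
crop_megapixels = crop_w * crop_h / 1e6        # 4000 * 3000 -> 12.0 MP

# Halving the field of view doubles the effective magnification.
effective_zoom = optical_zoom * (full_w / crop_w)   # 5 * 2 -> 10x

print(f"{crop_megapixels:.0f} MP crop at {effective_zoom:.0f}x zoom")
```

Put another way, the 10x mode trades field of view rather than upscaling: the crop still maps one sensor pixel to one output pixel.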

According to Google, the Super Res Zoom system uses the camera’s digital zoom (which would normally result in grainy or pixelated images) and captures several photos at different zoom levels to combine them into a single high-quality image. After capturing several frames of the same scene at different zooms, the Pixel aligns these images and layers them together to correct any errors that may have occurred during the capture process. Using machine learning algorithms, the Pixel then creates a high-resolution file that contains all of the details the human eye would see if the user were physically closer to the subject.
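The general burst align-and-merge idea can be sketched as below. This is a generic illustration using global phase correlation and frame averaging, not Google's actual pipeline, which works on tiles and is far more sophisticated:

```python
# Minimal sketch of burst align-and-merge (not Google's pipeline): estimate
# each frame's shift relative to a reference frame, undo it, then average
# the aligned frames to suppress random noise.
import numpy as np
from scipy.ndimage import shift as translate

def align_and_merge(frames: list[np.ndarray]) -> np.ndarray:
    reference = frames[0]
    aligned = [reference.astype(np.float64)]
    for frame in frames[1:]:
        # Estimate a global translation via phase correlation (a stand-in
        # for the tile-based alignment a real pipeline would use).
        f_ref = np.fft.fft2(reference)
        f_cur = np.fft.fft2(frame)
        cross_power = f_ref * np.conj(f_cur)
        cross_power /= np.abs(cross_power) + 1e-12
        correlation = np.abs(np.fft.ifft2(cross_power))
        dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
        # Wrap large peak positions back into negative shifts.
        dy = dy - reference.shape[0] if dy > reference.shape[0] // 2 else dy
        dx = dx - reference.shape[1] if dx > reference.shape[1] // 2 else dx
        aligned.append(translate(frame.astype(np.float64), shift=(dy, dx)))
    # Averaging N aligned frames reduces random noise by roughly sqrt(N).
    return np.mean(aligned, axis=0)
```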

Machine Learning (AI)

It’s the machine learning algorithms that are responsible for reducing the noise and improving the overall sharpness of the image.

“When you zoom in and take a photo, your phone is actually adapting to your zoom range and capturing multiple images at roughly the same time,” says Google Contributor Molly McHugh-Johnson. “After you press the shutter button, the software and machine learning algorithms work together to create the best version of all those image captures.”

The telephoto camera uses HDR+ with bracketing to merge these images, generating the best overall exposure while preserving the finest details captured across the multiple frames. According to the company, HDR+ is enabled automatically whenever a user activates any level of zoom, capturing multiple images so fast that users shouldn’t even notice it is happening. The Fusion Zoom algorithm handles the alignment and merging of the frames, and the Zoom Stabilization algorithm identifies and counters any shakiness that occurs while the frames are being captured.
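The core idea behind exposure bracketing can be sketched as a simplified two-frame merge. This is not HDR+ itself, and the 4x exposure ratio and the saturation threshold below are illustrative assumptions:

```python
# Simplified two-frame bracketing merge (not HDR+ itself): the long exposure
# is cleaner in the shadows but clips in the highlights, so blend toward the
# short exposure wherever the long frame is near saturation.
import numpy as np

def merge_bracketed(short_exp: np.ndarray, long_exp: np.ndarray,
                    exposure_ratio: float = 4.0) -> np.ndarray:
    """Merge two aligned, linear-light frames in [0, 1] into one estimate
    of scene brightness, expressed in the short exposure's scale."""
    # Each frame estimates scene brightness once normalized by its exposure.
    estimate_from_long = long_exp / exposure_ratio
    estimate_from_short = short_exp

    # Weight toward the short exposure only where the long frame clips
    # (above an assumed 0.8 saturation knee).
    highlight_weight = np.clip((long_exp - 0.8) / 0.2, 0.0, 1.0)
    return (1.0 - highlight_weight) * estimate_from_long \
           + highlight_weight * estimate_from_short
```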

This combination of tools helps ensure that photos look great whether they are taken at 2x, 5x, or 15x and beyond. Past 20x zoom, the new Pixel 7 Pro uses a new ML upscaler that employs a neural network to enhance the detail of photos. “The more you zoom in, the more the telephoto camera leans into AI.”
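A learned upscaler in that spirit might look roughly like the following sub-pixel convolution network. This is a generic illustration in PyTorch with arbitrary layer sizes, not Google's model:

```python
# Illustrative learned-upscaler sketch (generic sub-pixel convolution network,
# not Google's ML upscaler; the layer widths are arbitrary assumptions).
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Upscale an RGB image by `scale` using learned sub-pixel convolution."""

    def __init__(self, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Predict scale^2 sub-pixel values per output channel, then
            # rearrange them into an image scale-times larger in each axis.
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# Example: a 1x3x256x256 crop becomes 1x3x512x512 after the learned 2x upscale.
upscaled = TinyUpscaler(scale=2)(torch.rand(1, 3, 256, 256))
```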

The company says it is this deep integration of hardware, software, and AI that makes Super Res Zoom possible: “You can zoom in confidently to get whatever perspective you want, and you can know your photos are going to come out beautifully.”


Image credits: Google
