How Google’s Handheld Multi-Frame Super-Resolution Tech Works
Since there are physical limits to how large sensors can be in smartphones, companies like Google have been pushing heavily into computational photography, the use of digital rather than optical processes to improve the capabilities of a camera. Here’s a 3-minute video that explains how Google’s super-resolution technology works.
Google’s solution for great image quality involves shooting a burst of raw photos every time the shutter is pressed. Since the human hand always trembles ever so slightly, each frame in the burst samples the scene at a slightly different sub-pixel offset, and those pixel-level differences between frames can be combined to reconstruct detail at each pixel location that no single frame captures.
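To see why those tiny shifts help, here is a minimal toy sketch of the multi-frame idea (not Google’s actual algorithm): several low-resolution samplings of a 1-D signal, each offset by a different sub-pixel amount standing in for hand tremor, jointly cover a finer grid than any single sampling does. The signal, shift values, and merge step are all illustrative assumptions.

```python
# Toy illustration of multi-frame super-resolution (assumed example,
# not Google's pipeline): shifted low-res samplings of one signal are
# merged onto a finer grid to recover detail no single frame contains.
import numpy as np

rng = np.random.default_rng(0)

# "Scene": a finely sampled 1-D signal (stand-in for the real image).
fine = np.sin(np.linspace(0, 4 * np.pi, 400))

factor = 4  # each low-res frame keeps 1 out of every 4 fine samples

def capture(shift):
    """One low-res 'photo': sample every `factor`-th fine pixel,
    starting at a sub-pixel offset caused by hand tremor."""
    return fine[shift::factor]

# Hand tremor gives each frame in the burst a distinct offset.
shifts = rng.permutation(factor)
burst = [capture(s) for s in shifts]

# Merge: place each frame's samples at their known offsets on the
# fine grid (real alignment must estimate these offsets from the data).
merged = np.empty_like(fine)
for s, frame in zip(shifts, burst):
    merged[s::factor] = frame

print(np.allclose(merged, fine))  # True: the burst jointly fills every fine pixel
```

In this idealized case the offsets are known and noise-free, so the merge is exact; the hard parts of the real system are estimating the offsets from the frames themselves and merging robustly when parts of the scene move.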
“This approach, which includes no explicit demosaicing step, serves to both increase image resolution and boost signal to noise ratio,” the paper states. “Our algorithm is robust to challenging scene conditions: local motion, occlusion, or scene changes. It runs at 100 milliseconds per 12-megapixel RAW input burst frame on mass-produced mobile phones.
“Specifically, the algorithm is the basis of the Super-Res Zoom feature, as well as the default merge method in Night Sight mode (whether zooming or not) on Google’s flagship phone.”