This Camera System Can Focus on Everything, Everywhere, All At Once

Overhead view of a digital camera attached to a rig with multiple lenses. The camera’s screen displays a detailed close-up of colorful objects, highlighted by a red box and zoomed in another box for detail.

A trio of researchers at Carnegie Mellon University in Pittsburgh, Pennsylvania, developed a camera with a specialized lens that can focus individual pixels to different depths, ensuring that everything in a photo is perfectly sharp and in focus.

Yingsi Qin, Aswin C. Sankaranarayanan, and Matthew O’Toole’s project on spatially-varying autofocus, recently presented at the 2025 International Conference on Computer Vision (ICCV) in Honolulu, is a fascinating advance in computational photography with significant potential for applications where sharpness across the entire frame is vital, including surveillance, machine vision, and microscopy.

Conventional autofocus in a typical camera focuses the entire imaging surface to a single depth. A spatially-varying autofocus camera like the one the researchers developed instead focuses independent pixel regions to arbitrary depths, creating a freeform depth of field rather than the flat one produced by normal cameras and lenses.

Thanks to its freeform depth of field, the spatially-varying autofocus camera can map its focal surface to any scene geometry, including scenes with heavily varied, complex shapes. As long as depth can be estimated at the pixel level, every region can be brought into focus.
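To make the difference concrete, here is a minimal numpy sketch (not the researchers’ code) using the standard thin-lens circle-of-confusion formula: with one global focal distance, blur grows for everything off that plane, while a per-pixel focal map drives the defocus blur to zero everywhere. The focal length, f-number, and depth map below are illustrative.

```python
import numpy as np

def blur_diameter(z, z_focus, f=0.05, N=2.8):
    """Thin-lens circle-of-confusion diameter (meters) for objects at
    depth z when the lens is focused at z_focus."""
    A = f / N                      # aperture diameter
    return A * f * np.abs(z - z_focus) / (z * (z_focus - f))

# Synthetic depth map: a scene spanning 0.5 m to 5 m.
depth = np.linspace(0.5, 5.0, 256).reshape(16, 16)

# Conventional autofocus: one focal distance for the whole frame.
global_blur = blur_diameter(depth, z_focus=1.5)

# Spatially-varying autofocus: each pixel's focal distance matches its
# depth, so defocus blur vanishes everywhere (up to diffraction).
freeform_blur = blur_diameter(depth, z_focus=depth)

print(f"global focus, worst-case blur:    {global_blur.max()*1e6:.0f} um")
print(f"per-pixel focus, worst-case blur: {freeform_blur.max()*1e6:.0f} um")
```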

A table compares all-in-focus imaging techniques by optical sharpness, number of images required, all-in-focus generation method, and output depth. Spatially-varying autofocus is highlighted as high-performing in all areas.

There are various ways for photographers to achieve focus across a larger portion of a scene, but each has drawbacks. Focus stacking is a popular choice for macro photographers to increase visible sharpness across different focal planes, but it requires multiple images, sometimes dozens or even hundreds, which does not work when a subject is moving. Another option, a very narrow aperture, increases depth of field but costs resolution due to diffraction. Light field cameras are a third route, but they, too, sacrifice spatial resolution, per the researchers.
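A quick back-of-the-envelope calculation shows the diffraction penalty of stopping down: the Airy-disk diameter is roughly 2.44λN, so at small apertures the diffraction spot spans many sensor pixels. The pixel pitch here is an illustrative APS-C value.

```python
# Why a tiny aperture is not a free lunch: the Airy-disk diameter grows
# linearly with the f-number, so past a point diffraction blur exceeds
# the sensor's pixel pitch and resolution drops.
wavelength = 550e-9          # green light, meters
pixel_pitch = 3.7e-6         # typical APS-C pixel, meters (illustrative)

for N in (2.8, 8, 16, 22):
    airy = 2.44 * wavelength * N   # first-zero Airy disk diameter
    print(f"f/{N:<4} diffraction spot = {airy*1e6:5.1f} um "
          f"({airy/pixel_pitch:4.1f} pixels)")
```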

The group’s camera needs only one image to approximate the scene geometry, and the next image it captures is all in focus. It is “well suited for dynamic settings,” as each prior image determines the focus for the next photo. Further, the process is all-optical and requires no computational post-processing, another weakness of light field cameras.

A labeled optical setup with camera components: imaging lens, relay lens, beam splitter, cubic phase plate, and SLM. The sensor (Canon EOS R10) displays an all-in-focus image on its screen. Labels A-E indicate each part.

“Our design uses an optical arrangement of a Lohmann lens and a phase-only spatial light modulator to allow each pixel to focus at a different depth. We extend classical autofocusing techniques to the spatially-varying scenario where the depth map is iteratively estimated using contrast and disparity cues, enabling the camera to progressively shape its depth-of-field to the scene’s depth,” the researchers explain. “By obtaining an all-in-focus image optically, our technique advances upon prior work in two key aspects: the ability to bring an entire scene in focus simultaneously, and the ability to maintain the highest possible spatial resolution.”
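In rough pseudocode, the loop the researchers describe looks something like the sketch below. Every callable here is a hypothetical stand-in for their hardware and algorithms, not their actual API.

```python
def spatially_varying_autofocus(capture, estimate_depth, set_slm, n_iters=3):
    """Schematic of the iterative loop the researchers describe. The three
    callables are placeholders for the camera, the contrast/disparity
    depth estimator, and the phase-only SLM driver."""
    depth_map = None                         # no depth estimate yet
    for _ in range(n_iters):
        frame = capture()
        # Per-pixel focus cues: local contrast (as in CDAF) and dual-pixel
        # disparity (as in PDAF) indicate how far each region is from focus.
        depth_map = estimate_depth(frame, prior=depth_map)
        # Shape the depth-of-field to the scene: the SLM pattern sets a
        # different focal distance for each pixel region.
        set_slm(depth_map)
    return capture()                         # all-in-focus capture, optically
```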

Although a full dissection of the Lohmann lens, sometimes called an Alvarez lens, is beyond the scope of this article, the key point is that it is a specialized optic whose focus is tuned by sliding two cubic phase plates laterally relative to each other. That movement, however, changes focus globally, across the full image at once.
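A compact version of the textbook Alvarez-lens derivation makes the “focus-tunable” claim concrete (sign conventions vary with the setup): each plate carries a cubic phase profile, and sliding one by a small amount δ turns the summed phase into that of a thin lens whose power scales with δ, plus a harmless tilt and constant.

```latex
% Two complementary cubic plates (Alvarez/Lohmann configuration):
\phi_1(x,y) = a\!\left(\tfrac{x^3}{3} + x y^2\right), \qquad
\phi_2(x,y) = -a\!\left(\tfrac{x^3}{3} + x y^2\right).

% Shift the first plate by \delta along x and sum the phases:
\phi_1(x+\delta,y) + \phi_2(x,y)
  = a\delta\,(x^2 + y^2) + a\delta^2 x + \tfrac{a\delta^3}{3}.

% The quadratic term matches the phase of a thin lens,
% \phi_{\mathrm{lens}}(r) = -\tfrac{k r^2}{2f}, so the optical power
% is proportional to the shift:
\frac{1}{f} = -\frac{2a\delta}{k}, \qquad k = \frac{2\pi}{\lambda}.
```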

Schematic diagrams of Split-Lohmann optics show cubic plates and phase ramps for focus adjustment, with text explaining its 3D display application and camera adaptation for spatially-varying focus.

Building upon Lohmann lens research, the team created a Split-Lohmann lens, a computational lens that “can spatially vary the focal length.” At a high level, the Split-Lohmann design can focus different regions of the image sensor to different planes thanks to a phase-only spatial light modulator (SLM) placed between the two cubic plates. The camera estimates a depth map and then displays precisely computed local tilts (phase ramps) on the SLM, focusing each region of the frame to a different distance.
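The sketch below, again a toy rather than the team’s code, captures the core idea: in the Alvarez derivation above, focal power is set by the lateral shift δ, and a phase-only SLM can emulate a different δ for every pixel region by displaying a pattern whose local slope varies across the panel. The depth-to-power mapping and gain constant are illustrative.

```python
import numpy as np

def slm_phase(depth_map, gain=1.0):
    """Toy mapping from a target depth map to a phase-only SLM pattern.
    `gain` stands in for the real optical calibration constants."""
    power = 1.0 / depth_map          # desired focal power, one value per pixel
    delta = gain * power             # effective per-pixel 'plate shift'
    # Integrate the slope along x so the local gradient of the displayed
    # phase equals delta at each pixel: a piecewise ramp, steeper where
    # the target plane is closer.
    return np.cumsum(delta, axis=1)

depth_map = np.linspace(0.5, 5.0, 64 * 64).reshape(64, 64)  # synthetic scene
pattern = slm_phase(depth_map)
print(pattern.shape)   # (64, 64): one phase value per SLM pixel
```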

The prototype is built around a Canon EOS R10 which, like Canon’s other EOS R mirrorless cameras, features a Dual Pixel CMOS sensor. Dual Pixel CMOS AF is one of Canon’s trademark technologies.

With traditional phase-detect autofocus (PDAF), the camera measures light arriving from two different positions in the lens’s aperture, creating two slightly offset images. The offset between them tells the camera how far out of focus the subject is and in which direction, and the lens’s focusing elements are moved until the two images align. In Canon’s Dual Pixel CMOS AF, every sensor pixel has two photodiodes, so each pixel can be used for phase-detect autofocus and image capture simultaneously, hence the “dual pixel” name.
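A one-dimensional toy makes the principle concrete: the two photodiode views are offset copies of the same signal, and sliding one against the other until they match recovers the defocus-induced disparity, including its sign. The signal and shift here are synthetic.

```python
import numpy as np

# 1-D toy of dual-pixel phase detection: defocus makes the left- and
# right-photodiode views of the same scene land at slightly offset positions.
rng = np.random.default_rng(0)
scene = rng.random(200)

true_shift = 4                                       # defocus-induced disparity
right = scene[20 + true_shift : 180 + true_shift]    # the displaced view

# Slide the left-view window until it best matches the right view;
# the winning offset measures how far (and which way) focus is off.
best = min(range(-8, 9),
           key=lambda s: float(np.mean((scene[20 + s : 180 + s] - right) ** 2)))
print(f"estimated disparity: {best} px (true: {true_shift} px)")
```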

Side-by-side comparison of a toy car in front of a mountain, showing a conventional photo with limited focus versus an auto-focused all-in-focus photo; diagrams below illustrate different focal planes for each method.

This image plane phase-detection technology is useful to regular photographers because it delivers fast focusing and good image quality. However, the same underlying technology can do more, as Qin and her colleagues demonstrate.

The spatially-varying autofocus camera system also relies on contrast-detect autofocus (CDAF), which retains distinct advantages for certain subjects and shooting situations; that is why even modern cameras with PDAF still incorporate CDAF.
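Classic CDAF is easy to sketch: sweep the focus position, score each frame with a contrast metric such as the mean squared image gradient, and keep the peak. The capture function below simulates defocus with a Gaussian blur (requires SciPy); everything here is illustrative, not the researchers’ implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpness(image):
    """Contrast metric: mean squared gradient (higher = sharper)."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gx**2 + gy**2))

def contrast_detect_af(capture_at, focus_positions):
    """Classic CDAF: try each focus position, keep the sharpest frame."""
    return max(focus_positions, key=lambda z: sharpness(capture_at(z)))

# Toy usage: simulate a capture whose blur vanishes when focused at 1.2 m.
rng = np.random.default_rng(1)
target = rng.random((64, 64))
capture = lambda z: gaussian_filter(target, sigma=5 * abs(z - 1.2))
print(f"best focus: {contrast_detect_af(capture, np.linspace(0.5, 2.0, 16)):.2f} m")
```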

“For the first time, we can autofocus every object, every pixel all at once,” says Qin.

“We believe that this novel approach to imaging has widespread applications where focus is of paramount importance,” the researchers conclude.


Image credits: Yingsi Qin, Aswin C. Sankaranarayanan, and Matthew O’Toole. The research, “Spatially-Varying Autofocus,” was recently published online.
