The Sony A1 Could Open the Door to Full-Size Computational Photography

The Sony Alpha 1 has many impressive specifications, but the one with the greatest implications for image-making is its sensor readout speed: it is so fast that it could bring the computational photography we see in smartphones to a full-size camera.

Smartphones use very small sensors, and that small size gives them lightning-fast readout speeds, allowing them to capture images at much higher rates than full-size cameras with their significantly larger sensors. But Sony’s stacked CMOS technology continues to push past former limitations, and a camera like the Sony a1 is approaching the readout speeds necessary to bring the power of computational photography to dedicated, large-sensor cameras.

In the 8-minute video from DPReview above, Chris Niccolls and Jordan Drake discuss the possibility with tech-minded editors Richard Butler and Rishi Sanyal and bring up some interesting points.

If you don’t know how your smartphone camera works, it’s worth reviewing. When you open the photo application, the sensor immediately starts capturing frames of whatever it sees in real time. Using a cyclical buffer, it constantly reads, stores, and replaces those frames in a loop for as long as the app is active. So while you only see one photo after you tap the shutter button, the phone is actually combining data from the frames captured in the seconds surrounding that tap into a single finished image.
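As a minimal sketch of that idea (not any manufacturer’s actual implementation), a cyclical buffer can be modeled as a fixed-length queue that keeps only the most recent frames. The buffer size and the merge step below are placeholder assumptions for illustration.

```python
from collections import deque

# A minimal sketch of a cyclical (ring) buffer of recent frames.
# BUFFER_SIZE is an arbitrary assumption, not a real device spec.
BUFFER_SIZE = 15

frame_buffer = deque(maxlen=BUFFER_SIZE)  # oldest frames are dropped automatically


def on_new_frame(frame):
    """Called for every frame the sensor reads out while the camera app is open."""
    frame_buffer.append(frame)


def on_shutter_tap():
    """When the user taps the shutter, merge the frames buffered around that moment."""
    frames = list(frame_buffer)   # snapshot of the recent frames
    return merge_frames(frames)   # hypothetical merge step (HDR, noise reduction, etc.)


def merge_frames(frames):
    # Placeholder: a real pipeline would align, de-ghost, and fuse the frames.
    return frames[-1] if frames else None
```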

Every time you snap a photo with Smart HDR, the phone captures a primary four-photo buffer, secondary interframes at different exposures, and a long exposure for shadow detail. The phone then analyzes all of the photos, selects the best portions of each, and combines them into an optimal version of what you’re trying to capture. Photo courtesy Apple.
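To illustrate the general idea rather than Apple’s actual Smart HDR pipeline, here is a toy exposure-fusion sketch: each pixel is weighted by how well exposed it is, and the frames are blended accordingly. The weighting function, the synthetic frames, and the assumption that the frames are already aligned are all simplifications for this example.

```python
import numpy as np

# Toy exposure fusion: blend differently exposed frames of the same scene,
# favoring pixels that land near mid-gray. Frames are assumed to be aligned
# float arrays in [0, 1]; real pipelines also align, de-ghost, and tone-map.


def well_exposedness(frame, sigma=0.2):
    # Pixels near mid-gray (0.5) receive the highest weight.
    return np.exp(-((frame - 0.5) ** 2) / (2 * sigma ** 2))


def fuse_exposures(frames):
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    weights = [well_exposedness(f) for f in frames]
    total = np.sum(weights, axis=0) + 1e-12          # avoid division by zero
    return sum(w * f for w, f in zip(weights, frames)) / total


# Example: three synthetic "exposures" of a simple gradient scene.
scene = np.linspace(0, 1, 256).reshape(1, -1)
frames = [np.clip(scene * gain, 0, 1) for gain in (0.5, 1.0, 2.0)]
result = fuse_exposures(frames)
```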

This kind of computational photography is necessary in smartphones to overcome the limitations of the small sensor. By combining a large set of images, noise can be better controlled and overall image quality is boosted. Growth in this space is how successive iPhone models, for example, managed to make better-quality images year over year without actually changing the sensor they used. The iPhone 12 Pro Max was the first time in years that a new, larger sensor had been used in the device.
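To see why stacking frames helps a noisy small sensor, here is a quick numerical sketch; the flat scene, noise level, and frame count are made up purely for illustration. Averaging N frames of the same scene cuts random noise by roughly the square root of N.

```python
import numpy as np

# Demonstration: averaging 16 noisy frames reduces random noise by about 4x (sqrt(16)).
rng = np.random.default_rng(0)
scene = np.full((100, 100), 0.5)     # a flat mid-gray "scene"
noise_sigma = 0.1                    # per-frame noise level (made up)


def noisy_frame():
    return scene + rng.normal(0, noise_sigma, scene.shape)


single = noisy_frame()
stacked = np.mean([noisy_frame() for _ in range(16)], axis=0)

print(f"single-frame noise:   {np.std(single - scene):.4f}")   # ~0.10
print(f"16-frame stack noise: {np.std(stacked - scene):.4f}")  # ~0.025
```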

But while computational photography is used on small sensors to make images look better, the speed of the Sony a1’s sensor has many wondering how much more quality we could get from images taken on a larger sensor using the same cyclical buffer technology.

Sanyal, who is one of the most informed journalists in the photographic space when it comes to technical discussions, calculates that the Sony a1 is capable of a readout time of 5 milliseconds, faster than the iPhone 11’s readout speed of 6.25 milliseconds.
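For context, those readout times can be rewritten as shutter-speed-style fractions. This is a rough conversion that assumes the quoted figures describe a full top-to-bottom sensor readout.

```python
# Convert the quoted readout times (milliseconds) into fractions of a second.
for name, readout_ms in [("Sony a1", 5.0), ("iPhone 11", 6.25)]:
    print(f"{name}: {readout_ms} ms ≈ 1/{1000 / readout_ms:.0f} s")

# Sony a1: 5.0 ms ≈ 1/200 s
# iPhone 11: 6.25 ms ≈ 1/160 s
```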

He argues that, because of that readout speed, the a1 probably could have been a fully electronic-shutter camera. It also means that, if the buffer could be improved and certain technologies implemented, the sensor could theoretically act like a giant version of a smartphone camera and use computational photography to improve images.

As much as many photographers openly state their disdain for smartphone photography, it’s an aspect of the current industry that cannot be ignored. Sanyal argues that the a1 shows how we might start to see full-size cameras actually act like smartphone cameras thanks to the advancements in technology we’re seeing today.

“In the future, we’re probably going to see cameras released without any mechanical shutters,” Sanyal says, referring to interchangeable lens cameras. “So as we see electronic-only shutter cameras, we’re kind of going full circle back to smartphones, because that’s essentially what smartphones are. We might actually see manufacturers thinking more like smartphone camera manufacturers in which case we might see some of these applications… like cyclical buffers and computational approaches.”

For now, this is all speculation, but it gives us a lot to think about. If full-size cameras do gain the ability to act like smartphone cameras, is that something photographers would want? Niccolls says that he is always looking for ways to improve his images, so adding computational photography to his bag of tools would certainly be something he would be interested in.
