PetaPixel

Apple Patents Method of Generating HDR Photos from Single Exposures


High dynamic range (HDR) mode is becoming a standard feature in newer digital cameras and smartphones. By snapping multiple photographs at different exposure levels, the camera can automatically generate an image that captures a greater range of light and dark areas than a standard photograph. However, the technique does have its weaknesses. Artifacts appear if any changes occur in the scene between the different shots, which limits the scenarios in which the technique can be used.

Apple wants to overcome this issue by implementing an HDR mode that only requires a single exposure. A recently published patent shows that Apple is well on its way to doing so.

Image Sensors World first spotted the patent (US2012/041398) titled “Image Sensor Having HDR Capture Capability”, which was filed by chip architect Michael Frank.

The patent outlines a way of generating multiple images at different exposure levels from a single exposure of the camera. Basically, the sensor reads each pixel row several times during the exposure, and the image processor then combines those reads into single HDR rows.

[…] the camera […] may acquire multiple images during a single exposure, including one or more images at a low exposure level (underexposed) and one or more images at a high exposure level (overexposed), which may be utilized to generate a single composite HDR image by the image processing circuitry.
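To make the idea concrete, here is a minimal sketch (in Python, and not Apple's actual pipeline) of how an image processor might combine an underexposed and an overexposed read of the same pixel row into one HDR row. The function name, bit depth, and exposure times below are illustrative assumptions.

    # Minimal sketch (not Apple's pipeline): merge a short-exposure (underexposed)
    # and a long-exposure (overexposed) read of the same pixel row into one HDR row.
    import numpy as np

    def merge_hdr_row(row_short, row_long, short_time, long_time, full_scale=1023):
        # Normalize each read by its integration time to get comparable radiance estimates.
        rad_short = row_short.astype(np.float64) / short_time
        rad_long = row_long.astype(np.float64) / long_time
        # Prefer the long read where it is not clipped (better signal-to-noise),
        # and fall back to the short read where the long read has saturated.
        saturated = row_long >= full_scale
        return np.where(saturated, rad_short, rad_long)

    # Example: 10-bit reads of one row, taken at 1/10 and 1/2 of the frame time.
    row_short = np.array([40, 90, 800, 1023])
    row_long = np.array([210, 455, 1023, 1023])
    print(merge_hdr_row(row_short, row_long, short_time=0.1, long_time=0.5))

The key point is that both reads come from the same exposure window, so nothing in the scene can change between them.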

In the future, iPhone HDR photos such as this one might not take any extra time to capture

The technical description of how this magic happens is a bit trickier to understand:

To generate a HDR image during a single exposure of the frame (i.e., fixed amount of time t during which the rolling shutter reset 80 moves across a frame), multiple reads of the same row of pixels of the image sensor may occur. For example, a first data read 86, of the data stored in a row of pixels, may be undertaken at a time n, where n is a fixed fractional time of time t. This time n may be, for example, 1/2, 1/3, 1/4, 1/5, 1/10, 1/20, or another value of the frame time t. This time n may be represented as line 88 in FIG. 5. That is, the first data read 86 may occur at a time n subsequent to the reset of a row of pixels by the rolling shutter reset 80.

Accordingly, as the rolling shutter reset 80 passes downwards along line 78, the first data read 86 may trail the rolling shutter reset 80 by time n. In this manner, data stored in the pixels for each row of the frame may be read at a time n after the rolling shutter reset 80 of that row of pixels. Thus, each row of pixels read as the first data read 86 passes across the image sensor will have been exposed to light for the same time n, which may be referred to as an exposure time or integration time.
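The timing itself is easier to see in code. The sketch below is illustrative rather than taken from the patent: it models a rolling shutter that resets rows one after another, with each read pass trailing the reset by a fixed fraction n of the frame time t, so every row integrates light for exactly n * t seconds.

    # Illustrative model of the read timing: each row is reset in sequence, and each
    # read pass trails the reset by a fixed fraction n of the frame time t.
    def read_schedule(num_rows, frame_time, fractions=(1/10, 1/2)):
        line_time = frame_time / num_rows          # time between resets of adjacent rows
        schedule = []
        for row in range(num_rows):
            reset_t = row * line_time              # rolling shutter reset reaches this row
            reads = [(reset_t + n * frame_time, n * frame_time) for n in fractions]
            schedule.append((row, reset_t, reads)) # each read: (read time, integration time)
        return schedule

    for row, reset_t, reads in read_schedule(num_rows=4, frame_time=1/30):
        print(row, round(reset_t, 4), [(round(rt, 4), round(it, 4)) for rt, it in reads])

Because every row is read the same fraction of the frame time after its own reset, all rows in a given read pass share the same integration time, just as the patent describes.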

Image Sensors World points out that this technique is already used in certain applications (e.g. security cameras), but it would likely be a first for consumer smartphone cameras.


Image credit: Austin by jeffgunn


 
  • branden rio

    At this point, it’s basically just a sensor with an adjustable dynamic range. This actually makes some sense, since this is the basic concept behind automated HDR tools in the first place. Photographers who blend exposures by hand will continue to do so, though, no matter what the dynamic range of their sensor is.

  • http://twitter.com/IEBAcom Anthony Burokas

    Apple should just license Arri’s HDR concept: pass the energy from a pixel through two different amplifiers set at different gains. This produces both a bright and a dark image, and software can blend the two; the user can decide how much contrast they need to rein in.
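
    Roughly, the idea looks like this (a sketch only, with made-up gains and bit depth, not Arri’s actual implementation):

        # Sketch of a dual-gain readout: the same pixel charge is amplified at two
        # gains; the high-gain read preserves shadows, the low-gain read preserves
        # highlights, and the two are blended. Numbers here are illustrative.
        import numpy as np

        def dual_gain_blend(signal, low_gain=1.0, high_gain=4.0, full_scale=1023):
            low = np.clip(signal * low_gain, 0, full_scale)    # dark image, highlights intact
            high = np.clip(signal * high_gain, 0, full_scale)  # bright image, shadows lifted
            # Weight toward the high-gain read in the shadows and the low-gain read
            # near saturation; normalize both back to a common scale before mixing.
            w = np.clip(high / full_scale, 0.0, 1.0)
            return (1 - w) * (high / high_gain) + w * (low / low_gain)

        print(dual_gain_blend(np.array([10.0, 100.0, 500.0, 1000.0])))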

  • hugh crawford

    I’ve been talking about this idea for years, as have a lot of other people online. It’s the jumping-off point for lots of computational photography schemes. A Google search should turn up lots of prior art.

    A variation on this is to combine a short exposure (or “sample”, in engineer speak) with a longer exposure, taking the luminance information from the short exposure and the color information from the long exposure as a way of reducing blur. Or, if you have the bandwidth, do lots of very short exposures, align them to reduce motion blur, discard the outlier values for each pixel, and take the median of the rest to reduce noise (a rough sketch of that follows at the end of this comment). Use the offset between the multiple exposures to produce a blur kernel (although one of those cheap gyroscope-on-a-chip parts like the iPhone has would be an even better way of obtaining one). Using multiple shutter speeds would expand dynamic range and also be useful in controlling blur. Google “flutter shutter” for a lot more info.

    The bandwidth needed to read the data off the sensor, or the amount of on-sensor buffering, is the biggest problem right now for any high-resolution sensor.
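
    For the burst idea, a rough sketch (assuming the frames are already aligned; names and thresholds are made up):

        # Sketch of a robust burst merge: many very short exposures, per-pixel outlier
        # rejection, then the median of the surviving values to reduce noise.
        import numpy as np

        def robust_burst_merge(frames, outlier_sigma=2.0):
            stack = np.stack(frames).astype(np.float64)        # shape: (num_frames, H, W)
            mean = stack.mean(axis=0)
            std = stack.std(axis=0) + 1e-6
            # Mask out values more than outlier_sigma standard deviations from the mean.
            keep = np.abs(stack - mean) <= outlier_sigma * std
            masked = np.where(keep, stack, np.nan)
            return np.nanmedian(masked, axis=0)                # per-pixel median of the inliers

        frames = [np.random.poisson(lam=50, size=(4, 4)) for _ in range(9)]
        print(robust_burst_merge(frames))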

  • Pierre Jasmin

    I won’t read the patent, but it seems this has already been invented. It is already featured in some cameras out there, including a commercial security camera system that uses the technique indoors to accommodate windows when it’s a sunny day outside. One purpose in such a security system, where the lighting changes over time, is to generate a tonal curve from that data so the outside light doesn’t blow out the pixels in that area of the image while the interior remains properly exposed…