Olympus Patent Shows Variable Exposure Times for Different Parts of an Image


Earlier this week, Egami came across an Olympus patent that is (as far as we can tell) truly one of a kind, describing a feature that would allow you to get the best possible exposure in almost any scenario.

The patent is for technology that would let you set different exposure times for different areas in a photograph. For example, when you have a dark foreground and a bright background, rather than blowing out the background or rendering the foreground a silhouette, it would expose the foreground longer, making for an evenly exposed image.
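Roughly, the idea (as we understand it; the numbers and region shapes below are purely illustrative and not taken from the patent) can be sketched like this: give the dark rows of the frame a longer exposure time than the bright rows, so both land within the sensor's range.

```python
import numpy as np

# Purely illustrative sketch of the idea (not from the patent): dark
# regions get a longer exposure time than bright regions so both land
# well within the sensor's range (1.0 = full scale).
scene = np.vstack([np.full((2, 4), 8.0),    # bright sky (relative radiance)
                   np.full((2, 4), 0.5)])   # dark foreground

# One global exposure: expose for the sky, and the foreground goes near black.
global_shot = np.clip(scene * 0.125, 0.0, 1.0)

# Per-region exposure: the dark rows get a 16x longer exposure time.
exposure_map = np.vstack([np.full((2, 4), 0.125),
                          np.full((2, 4), 2.0)])
regional_shot = np.clip(scene * exposure_map, 0.0, 1.0)
# Both regions now sit at even brightness instead of 1.0 vs ~0.06.
```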


Although not part of this patent, another possibility I see in the future – one that makes more sense than varying exposure time – is for the camera to adjust its sensor's ISO settings in the same manner, allowing for a more evenly exposed image. Imagine your sensor being able to recognize that the sky needs to be exposed at 200 ISO while the person standing in front of you needs 1600 ISO.

Ultimately, these technologies would result in an HDR-like image, which is not something everyone is aiming for, but there are definitely some practical applications here. We have no idea whether or not Olympus will put this patent to use, but seeing companies start to get this stuff down on paper is always exciting.

What are your thoughts on the patent? Useless tech that is already covered by HDR photography (albeit in a more cumbersome fashion), or science-fiction come-to-life?

(via Egami via 4/3 Rumors)

  • Tim

    Sounds remarkably similar to MagicLantern for Canon users.

    (The rest of us are quite happy with lots of dynamic range and HDR.)

  • agour

Yup! To expand: they already have a ‘dual ISO’ feature, which samples half the pixels at one ISO and the other half at a different one.
The end result is more dynamic range.

  • Dhaval Panchal

    Now combine this tech with a touchscreen where you can paint the areas you want to control…..

  • jon

This sort of thing belongs in the post-processing workflow. More time should be spent creating a sensor that has the forgiving nature of film. Then burn and dodge all you’d like in Photoshop.

  • Bruce

    How about shooting a longer exposure but reading the sensor multiple times? We could do single-shot HDR or pick-and-choose areas in PP.

  • Alan Klughammer

Digital already has more dynamic range than (colour) film, and possibly even more than B&W if you exclude special processing techniques.
    This is a feature I would like to see. Not like some of the other stuff listed recently on PetaPixel.

  • Alan Klughammer

    Moving subjects?

  • Banan Tarr

How does it deal with the sudden change in exposure, which would create visible lines in the photo? Is some kind of gradation process applied, in software or in hardware?

  • Stan B.

    Wish they’d concentrate on making some reasonably priced WA primes below 28mm(e) in the meantime.

  • kodiak xyza

But this is exactly what it is trying to achieve: the forgiving nature of film. Film is nice because it has a non-linear relationship between light intensity and its response to it. Digital is linear, so highlights are blown and dark shadows are not quantized with an equal number of levels, and are thus noisier.

By varying the exposure of the dark areas and the highlights, it mimics this non-linear behaviour, though it may not be able to fully replicate film in this regard either.

  • kodiak xyza

    « Useless tech that is already covered by HDR photography (albeit in a more cumbersome fashion), or science-fiction come-to-life? »

How is this useless tech or science fiction if it has always been part of film’s response to light intensity? And how is it HDR if it is reducing the dynamic range of light intensity?

As best as I can understand the salient points in the article, without reading the patent, it seems that they are after non-linearizing the response to light intensity, which has some consequence for dynamic range, but not quite enough to make it HDR. What it mainly helps with is compressing the range between highlights and dark shadows, not expanding it, and consequently reducing the quantization noise that is accentuated in dark shadows — just look at all those low-light mobile phone shots for a clear example.
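The quantization point can be illustrated with a toy calculation (the 12-bit ADC and the stop count are assumed for illustration, not taken from the comment): a linear sensor spends half of its code values on the single brightest stop, leaving very few for the shadows.

```python
# Toy illustration (12-bit ADC assumed): a linear sensor gives each stop
# below saturation half as many distinct code values as the stop above it.
full_scale = 4096  # 2**12 levels
levels_per_stop = []
for stop in range(6):                     # six stops down from saturation
    hi = full_scale >> stop               # top of this stop
    lo = full_scale >> (stop + 1)         # bottom of this stop
    levels_per_stop.append(hi - lo)

# levels_per_stop == [2048, 1024, 512, 256, 128, 64]: the brightest stop
# gets 2048 values, while a shadow five stops down gets only 64, so shadow
# tones are coarsely quantized and look noisier.
```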

  • kodiak xyza

It’s a different kind of engineering, so you cannot simply shift the staff around to put that effort into lens shortcomings instead.

  • Omar Salgado

“Imagine your sensor being able to recognize the need for the sky to be
exposed at 200 ISO while the person standing in front of you needs 1600
ISO.”

Well, the true achievement would be exposing both at the same ISO with detail and zero noise (or without clipped highlights). Clearly, what’s quoted can easily be done in post, but it could save time by merging “different exposures” in-camera.



  • Matt

Exposure as we know it will change significantly as the tech advances. IMO it will evolve toward more continuous sampling, with each photosite getting its own ‘exposure’, but to be honest it is impossible to tell exactly where the tech is going… Just enjoy the changes as they come :) This is nearly as exciting a time as when photography was born.

  • Tristan Naramore

    That’s what I’m thinking. Imagine combining this in-camera high dynamic range sensor with a Lytro (lightfield) sensor. Point and shoot then decide both exposure range and focus (maybe even depth of field), later. Win win.

  • Rishi Sanyal

    Actually – adjusting the ISO amplification for different areas of the scene (or varying ISO for separate rows or columns of the sensor) is *not* very useful at all for modern sensors when shooting RAW. Varying the actual exposure time for different areas of the scene – like what this patent is attempting to do – is what has actual benefit (as long as your situation allows for increased exposure times, which may be difficult for scenes with motion).

    The reasons for this are as follows:

(1) Increasing ISO amplification on low-noise sensors/cameras shows very little benefit over simply brightening the RAW image in post-processing (PP). This is because your camera electronics are adding very little noise of their own to your RAW data – therefore, one might ask: why amplify (high ISO) the image before it’s written to a file, where you run the risk of clipping bright areas at the clipping point of the ADC? Furthermore, an added benefit of doing it in PP is that you can do it in a content-aware manner.

(2) Increasing actual exposure time of shadows – as opposed to simply boosting the ISO – decreases the largest source of noise in modern cameras with good sensor design (i.e. those with low electronic noise). What noise is that? Shot (or statistical) noise. This is noise inherent in the light itself. Importantly, relative shot noise contributions decrease with increasing signal captured. Therefore, you’ll always maximize your image quality by capturing as much light as possible, as long as you don’t clip. Pretty much like the ‘ETTR’ philosophy (although all those technical briefs that attributed the benefits of ETTR to making better use of the digital levels in the RAW file were a bit misleading – the real reason to ETTR is to minimize the relative shot noise in the signal itself and, therefore, increase the signal-to-noise ratio).
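Point (1) can be sketched numerically (the 8x gain and signal values are made up for illustration): analog amplification applied before the ADC clips a highlight that a post-processing push would have preserved.

```python
import numpy as np

# Illustrative numbers only: shadow, midtone, and highlight in linear
# units, with 1.0 as the ADC clipping point.
signal = np.array([0.02, 0.10, 0.40])

# High-ISO path: 8x analog gain is applied before the ADC, so the
# highlight clips at full scale.
high_iso = np.clip(signal * 8.0, 0.0, 1.0)   # -> [0.16, 0.8, 1.0]

# Base-ISO path: record the unamplified values, then brighten 8x in
# post-processing; the highlight's detail survives.
raw = np.clip(signal, 0.0, 1.0)
post_push = raw * 8.0                        # -> [0.16, 0.8, 3.2]
```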
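And the shot-noise argument in point (2) can be checked with a quick simulation (the photon counts are illustrative): photon arrival is Poisson-distributed, so the signal-to-noise ratio grows as the square root of the light captured – quadruple the light, double the SNR.

```python
import numpy as np

# Photon arrival is Poisson, so noise = sqrt(signal) and SNR grows as the
# square root of the light captured (counts here are illustrative).
rng = np.random.default_rng(0)

snrs = []
for mean_photons in (100, 400, 1600):     # two-stop increments of exposure
    samples = rng.poisson(mean_photons, size=200_000)
    snrs.append(samples.mean() / samples.std())

# snrs comes out near [10, 20, 40]: 4x the light captured, 2x the SNR.
```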

  • Rishi Sanyal

    But do realize that this is only really useful for cameras that introduce a lot of read/electronic noise into your image data before it’s written to a file.

For many modern cameras, the noise levels downstream of the ISO amplifier are so low that sampling pixels at different ISOs shows little to no benefit at all. In fact, it can *hurt* image quality by clipping bright pixels – something that would not happen if you were to selectively raise exposure in post-processing instead.