Future Photographers May Adjust Focus During Post Processing

In the future, focusing on the wrong subject when taking a picture might be a thing of the past. At Nvidia’s GPU Technology Conference this year, Adobe gave a demonstration of how plenoptic lenses can be used to let focus be chosen arbitrarily after the image is captured, during post-processing. These are microlens arrays containing hundreds, thousands, or even tens of thousands (Stanford researchers used a camera with 90,000 lenses) of tiny lenses that record much more information about a scene than a traditional single lens.

What a plenoptic lens does is allow single rays of light to be recorded from many different perspectives, resulting in captured images that are composed of many small sub-images taken from slightly different viewpoints.

These images can then be processed by software, which extracts depth information from the “bug-eye” image and renders it into a traditional photograph, giving photographers the freedom to select exactly which depth should be in focus.
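The basic refocusing step behind this can be illustrated with a small sketch. Here is a hypothetical NumPy implementation of the classic “shift-and-add” approach (not Adobe’s actual code): each sub-image is shifted by an amount proportional to its viewpoint offset and the results are averaged, so objects at the chosen depth line up and appear sharp while everything else blurs.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic shift-and-add refocusing of a light field.

    light_field: array of shape (U, V, H, W) -- a grid of sub-aperture
        images, one per microlens viewpoint (u, v).
    alpha: refocus parameter selecting the depth plane; each view is
        shifted in proportion to its offset from the central viewpoint.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Integer pixel shift for this viewpoint; a real
            # implementation would interpolate sub-pixel shifts.
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

Different values of `alpha` pick out different focal planes from the same captured data, which is why focus can be adjusted freely after the shot.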

Here’s a clip of Adobe’s presentation on the technology:

We reported on similar research being done at the University of Toronto back in May.

(via Laptop Magazine)

Thanks for the tip Jeffery!

  • Paul Lomax

    Very interesting indeed!


  • Chung Dha Lam

    Can’t you just take a picture with a small aperture and blur in Photoshop? Also, I wonder what you see in your viewfinder. If it’s that image where everything is small squares, then I doubt you can easily take a picture with that.

  • Zak Henry

    It is unlikely you would be able to have a DSLR with this lens. The ‘decryption’ would be done on the fly in live view. Otherwise TLRs may make a comeback.


  • Zach Love

    You can create a similar image in Photoshop, but not with any ease.

    First you’d have to separate (cut out) each object in the image according to its distance from the camera, since how out of focus something appears depends on where it is. Plus, most software blur/defocus post-processing doesn’t have the same bokeh as a shallow-focus image from a long lens.

  • Lekan

    Most of these are actually applications of research being done out of Stanford University.

  • Conor Raypholtz

    and that’s what a fly sees :/

  • Conor Raypholtz

    What they didn’t show is that the viewing angle can also be changed within a few degrees, as the data is captured in 3D.

  • Conor Raypholtz

    Lumia 1820 will be the first phone with a field of view camera :) It’s supposed to create an image live and save it uncompressed for editing. Should only be a matter of time before FB and Google start their own live photo viewers for these shared images.