
Researchers Develop Method for Getting High-Quality Photos from Crappy Lenses


There are many reasons high-quality lenses cost as much as they do (and in some cases that is quite a lot), and one of them is that high-end lenses use many specially designed elements, each precisely positioned to counteract aberrations and distortions.

But what if you could correct for all of that in post? Automatically? With just the click of a button? You could theoretically use a crappy lens and generate high-end results. Well, that’s what researchers at the University of British Columbia are working on, and so far their results are very promising.

The technique was presented at SIGGRAPH 2013, and it may someday provide a software alternative for those who can’t afford high-end glass. For their experiments, the researchers built a simple one-element lens by hand and then processed the resulting test images through their software to generate sharper results.

Check out their SIGGRAPH video below:

We won’t get into the technical bits (you can read the full paper here), but the basic premise is this: once the software knows the point spread functions (PSFs) of your cheap lens, it can correct for blur, distortion, and aberration and “recover” a high-quality image.
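
To make the idea concrete, here is a minimal sketch in Python of that kind of PSF-based correction. This is very much not the researchers’ actual algorithm (the paper describes that); it uses off-the-shelf Wiener deconvolution from scikit-image and a stand-in Gaussian PSF:

    import numpy as np
    from scipy.signal import convolve2d
    from skimage import data, restoration

    image = data.camera() / 255.0  # grayscale test image in [0, 1]

    # Stand-in PSF: a small Gaussian blur. A real system would measure the
    # lens's actual PSF, which varies across the frame and with wavelength.
    x = np.arange(-7, 8)
    g = np.exp(-x**2 / (2 * 2.0**2))
    psf = np.outer(g, g)
    psf /= psf.sum()

    # Simulate a blurry, slightly noisy capture through the bad lens
    blurred = convolve2d(image, psf, mode="same", boundary="symm")
    blurred += 0.005 * np.random.standard_normal(blurred.shape)

    # `balance` trades detail recovery against noise amplification
    recovered = restoration.wiener(blurred, psf, balance=0.01)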

Here are some photos that show how their test images looked before (top) and after (bottom) sharpening with their computational imaging techniques:

[Eight before-and-after comparison images]

The results are impressive, but for now there are still many hurdles to clear before something like this could be brought to market. The researchers have to figure out a way to account for the different PSFs of objects at different distances, and if the aperture is opened any wider than f/2, the system runs into issues.

Still, this is a very promising start. To read more about the technique, check out the full paper. And if you’d like to see how their technique fared when using a non-homebrew lens — specifically, a Canon EF 28-105mm Macro — head over to this link for high-res samples.

(via Reddit)


Image credits: Photographs courtesy of the University of British Columbia


 
  • Renato Murakami

    Awesome and very interesting… could improve the quality of cheap cameras a lot.
    It’s basically on-the-fly software calibration, right?
    Nice to note, though, that even if it’s impressive, it still won’t replace big lenses on big cameras with lots of glass elements. The basic idea is that nothing can be done on the software side for information that has already been lost.
    You can compensate for parts of the image that have been garbled by distortion in cheap lenses, which will make images crisper and more accurate, but details that have been lost outright will remain lost.
    Still, the correction is impressive by itself, and could do wonders for smartphone cameras, for instance.

  • Sir Stewart Wallace

    That is really promising. I can’t wait until I have the time to read the paper thoroughly and this technology becomes publicly available.

  • TheNightstalker

    Inb4 Leica/Zeiss/Canon L/Nikon gold ring lens owners come in claiming it’ll never be as good as their $493289809483 worth of equipment.

  • YetAnotherGeekWithACamera

    You know that most digital camera sensors don’t measure all 3 RGB wavelengths at each pixel, right? The demosaicing process that estimates the 2 channels not measured at a particular pixel from measurements at other pixels is accounting for “lost information,” and cameras do a pretty damned good job at that in most cases (and are getting more sophisticated…). Most of the time in real life it’s not about losing information, it’s about how much you lose. Even high-quality lenses can only resolve so much, but that bound is *good enough* for most uses and many high-end uses. This line of research is trying to push the boundary of how much you can recover via software, and (with some more work) it could still be used with better glass that physically corrects some lens aberrations but not all, relying on software to account for the other aberrations and decreasing lens complexity and cost.
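
    As a toy illustration of the demosaicing described above, here is a minimal bilinear demosaic of an RGGB Bayer mosaic in Python. This is a bare-bones sketch; real camera pipelines use far more sophisticated, edge-aware methods:

        import numpy as np
        from scipy.ndimage import convolve

        def bilinear_demosaic(raw):
            """Estimate full RGB from an RGGB Bayer mosaic (H, W) -> (H, W, 3).
            `raw` is a float image; each pixel holds only one color sample."""
            H, W = raw.shape
            r_mask = np.zeros((H, W)); r_mask[0::2, 0::2] = 1
            b_mask = np.zeros((H, W)); b_mask[1::2, 1::2] = 1
            g_mask = 1 - r_mask - b_mask

            # Averaging kernels: R/B samples sit on a quarter of the pixels,
            # G samples on half, so the interpolation kernels differ.
            k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
            k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0

            out = np.empty((H, W, 3))
            out[..., 0] = convolve(raw * r_mask, k_rb)  # interpolate red
            out[..., 1] = convolve(raw * g_mask, k_g)   # interpolate green
            out[..., 2] = convolve(raw * b_mask, k_rb)  # interpolate blue
            return out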

  • Tom Markham

    My head hurts after watching that video.

  • Mantis

    But how will other photographers know I’m better than them if I use a lens without a little red line around it?

  • Mike

    Make your own lens and paint the whole thing red.

  • Dhaval Panchal

    Witchcraft I tell you!

  • Syuaip

    Wow, those CSI movies are for real!

  • Harleen Sahni

    This is impressive, but shouldn’t be that surprising since our brains do something similar to make our vision seem sharp with less than perfect optics.

  • Wilba

    “Have you ever worked with primal-dual forward-backward splitting methods?”
    “Yes. My aunt had one.”

  • kassim

    Wouldn’t we get the same result by using curves?

  • kassim

    Nikonian, hahahaha.

  • P. J. Fry

    Not sure if trolling…

    or just stupid.

  • Nature

    “[...] but shouldn’t be that surprising since our brains do something similar [...]”

    You don’t get surprised easily…

  • noisejammer

    I remember hearing of something approximating this trick being used by spies during the Cold War. The idea was that a photo would be taken using a weird lens, resulting in a deliberately blurred photograph. Upon receipt, the image would be examined through a corresponding lens that undid the weirdness.

  • Renato Murakami

    Good point! Oh yeah, this is why I said it’s a calibration of sorts… but what I meant by lost information is really information that is not captured at all: stuff that is lost because of poor compression, lenses so crappy that you end up losing bits of the image due to imperfections, very low dynamic range, among other things that make cheaper cameras lose information without any chance of recovery.
    You are absolutely right, though, about the gradation of quality in capturing images.
    As shown in the video and comparison pics, there’s quite a lot that can be done to make a supposedly poor image captured by a single (or simple) lens system get leaps and bounds better. It’s pretty awesome!
    Something like this software being implemented in cameras that are limited by their own nature (like smartphone cameras, or small point ‘n’ shoots) would be awesome… Well, big cameras too, since it’s also applicable.

  • Zos Xavius

    But it won’t be.

  • Zos Xavius

    The deconvolved image looks sharper, but it still has all the same flaws of the original lens outside of resolution. Sure, you can correct for distortion and other flaws in post-processing, but you are still trading resolving power for straighter lines as the pixels get stretched. Their method also cannot reproduce fine detail that is lost. This same method could be applied to better glass, though. It would be awkward, IMO, because you would have to calibrate each lens on your own due to sample variation. If you ask me, it’s best to just start with a good image that needs little correction from the start.

  • Atlanta Owner

    Adobe will let them perfect it and then buy them out for $$$… we’ll then see it show up in Photoshop CC.

  • kassim

    Oops… sorry, I meant Richardson–Lucy deconvolution… :P

  • P. J. Fry

    Fair enough. Then the answer is no.
    If you know the details of RL deconvolution, you might find their convex deconvolution approach interesting.
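
    For the curious, Richardson–Lucy deconvolution itself is only a few lines with scikit-image. This is a rough sketch with a toy box-blur PSF; the paper’s convex formulation is a different, more involved approach:

        import numpy as np
        from scipy.signal import convolve2d
        from skimage import data, restoration

        image = data.camera() / 255.0
        psf = np.ones((5, 5)) / 25.0  # toy 5x5 box-blur PSF
        blurred = convolve2d(image, psf, mode="same", boundary="symm")

        # 30 RL iterations; more iterations recover more detail
        # but also amplify noise
        deblurred = restoration.richardson_lucy(blurred, psf, 30)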

  • P. J. Fry

    Like they did with their recent camera shake reduction?
    I don’t see that happening in the next 5 years or so…

  • PAKoWORKS

    What about bokeh? We buy expensive lenses to generate nice bokeh, so how will the software know which areas to treat (sharpen) and which to leave blurred (bokeh)?

  • fyosores

    Isn’t DxO doing this already?

  • Roel

    Now every movie/episode that used “Ok, zoom in, enhance” is legit

  • Oskarkar

    What about using this software in combination with high quality lenses too? I guess the results would be astonishing!?

  • Genkakuzai

    This could be some groundbreaking stuff, for sure.

  • Cinekpol

    Here’s my money. TAKE IT! TAKE IT NOW!

  • Olivier Lance

    Exactly what I was going to say!
    They’ve been doing it for many years already, and not only for their software DxO Optics Pro.

  • Cinekpol

    Why are you asking? You’ve got sample photos with bokeh. They look just fine.

  • Igor Ken

    So true. The pictures will be better than before, not better than pictures taken with professional equipment.

  • YetAnotherGeekWithACamera

    Did you know that your “professional equipment” consists of hardware and software? Did you know that the knowledge used in the design of that hardware and software evolves, and that results from research are incorporated into newer equipment?

    The pictures will indeed be better than before. Including when that “before” is captured with “professional equipment” (which is not flawless). Modern cameras are much more than just dark boxes with glass in front and light-sensitive material on the back…

  • YetAnotherGeekWithACamera

    Nope.

  • YetAnotherGeekWithACamera

    You either don’t know what DxO Optics Pro does or what is being proposed in the paper.

  • Olivier Lance

    Well, they created a method to mathematically characterize the defects of a lens (coupled, or not, to a sensor). They actually built a first piece of software out of this, called DxO Analyzer, which is used by many labs and magazines around the world to rate photography gear.
    These measured defects are expressed as parameters of differential equations that model each lens, so that they can apply the “inverse” equation to photos and correct chromatic aberrations, distortion, blur, sharpness, … in their other software, DxO Optics Pro.

    I’m not saying they use the exact same method, but the intent and results are similar enough to say that’s what they’ve been doing for many years.

    If you still think I don’t know what I’m talking about, maybe you should express your own opinion clearly and explain what you know that I do not, mister Anonymous.

  • Ryan Keane

    Surely results would vary depending on which focal length you are at, as well as the resolution you are working in. Also, how is this going to correct varying amounts of chromatic aberration throughout the image, considering that edges tend to be softer and more prone to colour fringing? Don’t get me wrong, it’s always nice to see this kind of post-production advancing; I just don’t see how you can use the same algorithm for different lens profiles.

    Also, what would the rendering time on something like this be? I recall Adobe MAX 2011 doing a “sneak peek” of image de-blurring software, yet the processing time was very long even on tiny JPEG images.

  • Stefan Janse van Rensburg

    You know where I see real potential for this… Security cameras. They often don’t have the best lenses and “recovering” extra detail could be most useful.

  • not being conned

    Unsharp mask in Photoshop… been available for the last 20 years.

  • ɯɐן ǝɔuɐɹɹǝʇ

    Unsharp mask doesn’t recover details. Unsharp mask, fractal sharpening, and other variations of the same only do some level of contrast enhancement. It’s not really sharpening; rather, it’s a trick that makes it look like sharpening by increasing micro-contrast.

    PSF algorithms actually reverse the effects of each of the aberrations and create an enhanced estimate based on the blurred information in the surrounding pixels (one great example is another relatively unknown algorithm called SuperResolution). Way different from unsharp mask by a long shot, which just boosts mid-level contrast around a set pixel level.
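
    To make that contrast concrete, classic unsharp masking boils down to the few lines below (a sketch; Photoshop’s version adds thresholds and color handling). Note there is no PSF anywhere: the blur is never inverted, edges are just exaggerated.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def unsharp_mask(image, radius=2.0, amount=1.0):
            """Boost local contrast by adding back the difference between
            the image and a blurred copy (grayscale float in [0, 1])."""
            blurred = gaussian_filter(image, sigma=radius)
            return np.clip(image + amount * (image - blurred), 0.0, 1.0)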

  • junyo

    Cue the NSA dropping a bag of money on the researcher in 3… 2…

  • Thomas Casey

    They’ll still be crappy pictures, just sharp crappy pictures.

  • John Flury

    I think I have a (very) rough understanding of how they do it. Sort of a deconvolution thing with local patterns. So the key is these local patterns, right? But won’t they change when you change your focus? Or even worse, when it’s a zoom lens? You’d theoretically have an infinite number of test patterns to do the deconvolution with. Doesn’t seem feasible. Well, I did say my understanding is only very rough…

  • Cinekpol

    Don’t worry, that won’t be needed any more – as soon as people start buying Google Glass, all of the NSA issues shall disappear. ;)

  • Jessica Darko

    People who don’t know what they are talking about, but who think they do, often tell people who do know what they are talking about, like you, that you “don’t know what you’re talking about.”

    It’s the Dunning–Kruger effect. They’re too ignorant to realize they aren’t the experts they think they are.

  • Jessica Darko

    P. J. Fry, see your doctor, you’re suffering from Dunning–Kruger disease!

  • Jeremy

    Image detail is just spatial high-frequency information that’s been attenuated (but not removed) by the cheap lens. If a point spread function accurately models the blurring, that blurring can be almost completely undone (“almost” only because cameras don’t sample with infinite precision, so high-frequency noise gets boosted).

    The provided examples even show text illegible in the unprocessed image being recovered by processing.
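
    Jeremy’s point can be sketched in the frequency domain with a naive regularized inverse filter (an illustration, not the paper’s method): deconvolution divides the image spectrum by the PSF’s spectrum, and the small `eps` term is exactly what keeps noise from exploding at frequencies the lens attenuated to near zero.

        import numpy as np

        def inverse_filter(blurred, psf, eps=1e-2):
            """Naive Tikhonov-regularized inverse filter for a known PSF.
            Assumes periodic boundaries and ignores the spatial shift from
            not centering the PSF; `eps` caps noise amplification."""
            H = np.fft.fft2(psf, s=blurred.shape)  # PSF spectrum (the OTF)
            B = np.fft.fft2(blurred)
            X = B * np.conj(H) / (np.abs(H) ** 2 + eps)
            return np.real(np.fft.ifft2(X))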

  • Mommy2hs

    Would be great for video, and not just the still images that can be taken from a security recording. They still need to record at a high enough frame rate to get usable images, though.

  • jonquimbly

    You can accomplish this today through stacking. Take a bunch of photos with a crap lens and merge them to extract the sharp details.

    This article is a little surprising in that it claims to be new research; I’ve done this for a while now, and I’m not talking about DOF stacking, just the same photo multiple times. The same applies even to photos with a bit of motion blur, high-ISO noise, etc.
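
    For reference, mean-stacking is essentially a one-liner (it averages away random noise, roughly by a factor of sqrt(N) for N frames; unlike deconvolution it does not invert lens blur, and it assumes aligned frames and a static subject):

        import numpy as np

        def stack_frames(frames):
            """Average a list of pre-aligned float frames of identical shape."""
            return np.mean(np.stack(frames, axis=0), axis=0)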

  • ɯɐן ǝɔuɐɹɹǝʇ

    Yes, it’s nothing new, but trying to do that with a single decisive-moment photo is much different. Stacking works if you have multiple photos and a static subject.

    They don’t claim that it’s new research; the paper says it builds on lots of other prior research work.