The State of Adobe’s Image Deblurring Technology

Yesterday we shared some clearer comparison images from Adobe’s jaw-dropping Image Deblurring demo. Cari Gushiken over on the blog has written up a post that sheds a little more light on how the idea came about, the current challenges they face, and where they see it headed.

To be clear, the feature deals with blur caused by camera shake. In other words, blur that wouldn’t have been there had the camera not been moving. For other types of blur (e.g. motion blur in the scene, a dirty lens, an out-of-focus image), the feature can’t work its magic.

The feature runs into problems when presented with photographs that contain multiple types of blur, or photos that lack the strong edges on which the feature bases its calculations:

The tricky part is when an image has more than one kind of blur, which occurs in most images. Current deblur technology can’t solve for different blur types occurring in different parts of a single image, or on top of one another. For example, if you photograph a person running and also shake the camera when you press the shutter, the runner will be blurry because he is moving and the whole image might have some blur due to the camera shake. If an image has other issues like the noise you often get from camera phones, or if it was taken in low light, the algorithms might identify the wrong parts of an image as blurry, and thus add artifacts in the deblur process that actually make it look worse.

Strong edges in an image help the technology estimate the type of blur.
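Adobe hasn’t published the algorithm, but the standard approach in the deblurring literature is to model camera shake as convolution of the sharp scene with a blur kernel (a point spread function), estimate that kernel from strong edges, and then invert the convolution. Here’s a minimal sketch of the inversion step in NumPy, assuming the kernel is already known and using simple Wiener deconvolution in place of the blind kernel estimation a real tool would need:

```python
import numpy as np

def motion_psf(length=9, size=15):
    """A horizontal linear-motion kernel: a crude stand-in for camera shake."""
    psf = np.zeros((size, size))
    psf[size // 2, (size - length) // 2:(size + length) // 2] = 1.0
    return psf / psf.sum()

def wiener_deblur(blurred, psf, k=0.01):
    """Wiener deconvolution; k is a noise-to-signal term that tames H's near-zeros."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))

# Simulate: a sharp image with strong edges -> blur it -> recover it.
sharp = np.zeros((64, 64))
sharp[20:44, 20:44] = 1.0          # a bright square: strong edges
psf = motion_psf()
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf, s=sharp.shape)))
restored = wiener_deblur(blurred, psf)
```

The hard part Adobe describes is exactly what this sketch skips: when the kernel is unknown (and varies across the frame), it must be estimated from the image itself, which is why weak edges, in-scene motion, and noise break the process.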

Here’s a before-and-after image showing how the feature is ineffective on photos that don’t contain “strong edges”:

Gushiken also writes that a major application of the technology (besides allowing consumers to sharpen poorly captured photos) is forensics. The feature can be used to recover information (e.g. the text in the photo at the beginning of this post) that would otherwise have been considered lost.

Behind All the Buzz: Deblur Sneak Peek [ Blog]

  • Gary O’Brien

    Hasn’t this been part of Smart Sharpen since CS4? It looks as if they’ve finally figured out how to actually make it work.

  • Anonymous

    The shot of the dog and girl (the larger sample that has been floating around for the last few days) is the same identical shot shown side by side.  Don’t believe me?  Look for yourself.  Either the algorithm they applied did NOTHING to the image (didn’t even try), or they are using the exact same image as before and after.  Pixel for pixel: noise, compression artifacts, all of it.
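
    If anyone wants to check that rather than eyeball it, hashing the raw bytes of the two saved crops settles whether they are literally the same file. A quick sketch (the throwaway temp files below stand in for the downloaded before/after images):

```python
import hashlib
import os
import tempfile

def file_digest(path):
    """SHA-256 of a file's raw bytes; equal digests mean bit-identical files."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Throwaway files standing in for the saved "before" and "after" crops.
a = tempfile.NamedTemporaryFile(suffix=".jpg", delete=False)
b = tempfile.NamedTemporaryFile(suffix=".jpg", delete=False)
a.write(b"\xff\xd8 identical bytes"); a.close()
b.write(b"\xff\xd8 identical bytes"); b.close()

same = file_digest(a.name) == file_digest(b.name)
os.unlink(a.name)
os.unlink(b.name)
```

    A byte-level match proves the two panels came from the same file; a mismatch only means the files differ somewhere (re-encoding alone can do that), so a per-pixel diff of the decoded images would be the next step.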


  • Jason Loudermilk

    It says right above the image that in that example the deblurring wouldn’t work without a strong edge.

    “Here’s a before-and-after image showing how the feature is ineffective on photos that don’t contain ‘strong edges’”

    I can see this working if the image is only slightly blurred, but like Gary said, it amounts to a Smart Sharpen that will actually be “smart”.

  • Anonymous

    It doesn’t appear ineffective to me, but rather unattempted.

    That there is not even a ONE pixel difference makes me somewhat suspicious.  Why put up an example of before and after where the image was spit back out of the software without even so much as an attempt to improve it?  Why even show the non-results at all?  It’s like showing a set of before and after images of your car NOT being painted.  A before and after shot would indicate SOME level of change.  No, there was none.  

    Not to mention I see PLENTY of hard edges in the shot.  So yeah, this seems a bit odd at best.

  • Focusfree

    Agreed. Everything I’ve seen so far feels like they are just faking the behaviour they think is theoretically possible. It’s an interesting concept, but one that can’t really be judged until we see how effective and flexible it is in the real world.

  • Anonymous

    I know, right?  This no doubt sounds crazy…it most likely is — I haven’t taken my haldol this decade (ok, I’ve actually never taken haldol).  

    But for some reason I can’t quite put my finger on, these ‘before’ images appear to be blurred through programmatic means, and don’t appear to exhibit natural motion blur to my eyes.  It seems almost as if they took a sharp image, algorithmically blurred it, then applied their algorithm to deblur it.  Again, I am most certainly under the influence of schizo delusions here, but I can’t escape the thought that it feels a bit…off.

  • John Godwin

    This’ll get a bunch of amateur wedding photographers off the hook, but it’s just one more Photoshop feature that has absolutely no impact on professional photographers.

  • Derek

    smoke and mirrors

  • Mike Smith

    Is anyone really going to be surprised if this feature doesn’t work as advertised once put into practice with real-world images?  Remember how great Content-Aware Fill looked in the demos?  Once it shipped in CS5, it was very underwhelming when used by real photographers with real images.

  • Anonymous

    I agree, John.  I wish Adobe would hold a MAX conference where they demoed a version of Photoshop that didn’t crash, Camera Raw and Lightroom that didn’t drag down your $4k system, an entire Creative Suite with all the UI elements finally made uniform, and dialog boxes that didn’t tell you things like “there was an unknown error” or “the file has changed since you last saved it”.  No kidding?  I wonder why I’d want to save my work TWICE…sheesh.

    Instead, they spend their time on features that weekend warriors will brag about having used to save the print for that Chamber of Commerce Gala Exhibit.  Ahh…life.

  • Mike

    Hi Ronadair,  I hate to say it, but you are not the target audience. You are a professional photographer; you will buy Photoshop anyway! They are trying to sell the product to people who wouldn’t normally buy it, and that is the mass market. Imagine a 50% uplift in sales due to this error correction. That’s why it’s there.

    I also wholeheartedly agree with you: why have a custom build with 8 cores, 16GB of RAM, and over a gig of graphics memory if the programs aren’t even going to use them?

  • Anonymous

    The trick will be for camera manufacturers to license the technology and use accelerometers to offer in-body stabilization.

  • Jacqui Dee

    I can see this being more useful for forensics than photographers who know how to use their cameras.

  • shamb

    I remember the content aware fill that looked really good in the demos a year or so back, but didn’t shine half as brightly in real life use.

    I’m guessing the deblurring will be a similar story: another ‘don’t need’ toy to promote an application update to something that is pretty much complete anyway.

  • Anonymous

    “they are trying to sell products to people who will not normally buy it”
    [….at the expense of their core market….]

    Hello bloat, obsolescence, and eventual abandonment.  Goodbye Adobe.  Death is at your door, even if you don’t smell it yet.  It’s deceptively easy for companies at the top to think they’re safe from customer abandonment until some well-positioned competitor or an ill-fated disaster sneaks up on them.  Then, out of nowhere, their healthy lead is wiped out and gone forever.

    Wanna watch giants fall?  Just keep an eye out for the companies that obsess over the 1%, 5%, or 10% fringe market (at best, in this case) while concurrently neglecting their largest core group of users/customers.  Once a company adopts this paradigm, the inevitable march toward a long, slow decline and subsequent death has begun.  Sure, it may take 10 years (or possibly even a few more) for the competition to reach their own tipping point. However, it’s all but guaranteed that the market leader will find their core users abandoning them while meticulously mirroring the company’s preemptive example of indifference.  

    And long before these companies’ death, many onlookers will wish it had come much sooner for them, if only to avoid such a horrific sight of gore and atrophy.  

    Kodak.  Microsoft.  WaMu.  GM.  Xerox.  To name a very few.

  • Sebastián Soto

    I’m still skeptical about this.

  • Poster

    Then just don’t use it when it becomes available.  No need to get technical about it.  Whether you like it or not, people will use it.