PetaPixel

Before and After Comparisons of Adobe’s Amazing Image Deblurring Feature

Last week we shared a sneak peek at some jaw-dropping image deblurring technology currently in development at Adobe. The video wasn’t the best quality and was captured from the audience, so we didn’t get to see the example images very clearly. Adobe has now released an official video of the demo, giving us a better glimpse at what the feature can do.

Here’s the first blurry image they demonstrated the feature on:

Here’s what it looks like after the Image Deblur feature does its magic (hover your mouse over it for a comparison):

They also did a demo with an image that includes text, demonstrating that useful data can be recovered using the technology:

The feature makes the blurry phone number crystal clear (hover to compare):

Here’s the official video by Adobe (watch it here if it doesn’t load for you):

MAX 2011 Sneak Peek – Image Deblurring (via Digital Trends)


Update: Apparently there’s a non-blurry version of the sample photo showing Adobe CTO Kevin Lynch. Hmmm… (Thx Wing Wong)


Update: Adobe has responded to questions regarding the sample photo of Kevin Lynch:

For those who are curious – some additional background on the images used during the recent MAX demo of our “deblur” technology. The first two images we showed – the crowd scene and the image of the poster, were examples of motion blur from camera shake. The image of Kevin Lynch was synthetically blurred from a sharp image taken from the web. What do we mean by synthetic blur? A synthetic blur was created by extracting the camera shake information from another real blurry image and applying it to the Kevin Lynch image to create a realistic simulation. This kind of blur is created with our research tool. Because the camera shake data is real, it is much more complicated than anything we can simulate using Photoshop’s blur capabilities. When this new image was loaded as a JPEG into the deblur plug-in, the software has no idea it was synthetically generated. This is common practice in research and we used the Kevin example because we wanted it to be entertaining and relevant to the audience – Kevin being the star of the Adobe MAX conference!
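The “synthetic blur” Adobe describes is, at its core, just convolution: extract a shake kernel (the path the camera traced during the exposure) from one genuinely blurry photo, then convolve a sharp image with it. As a rough illustration of that idea, here is a minimal NumPy sketch, with a toy diagonal streak standing in for real extracted shake data:

```python
import numpy as np

def synthetic_blur(image, kernel):
    """Blur a sharp image with a (camera-shake) kernel via FFT convolution."""
    # Pad the kernel to the image size and centre it at the origin, so that
    # multiplying in the frequency domain equals circular convolution.
    k = np.zeros_like(image, dtype=float)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k)))

# A toy "shake" kernel: a short diagonal streak, normalised to sum to 1
# (a real extracted kernel would be an irregular squiggle, not a line).
kernel = np.eye(5) / 5.0
sharp = np.zeros((32, 32))
sharp[16, 16] = 1.0  # a single bright pixel
blurred = synthetic_blur(sharp, kernel)
```

Because the kernel sums to 1, the blur redistributes brightness along the streak without changing the image’s total energy, which is why a synthetically blurred JPEG is indistinguishable, to the plug-in, from a real shaky shot.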


 
  • Shane Potter

I am having trouble believing the validity of what is claimed here. I would have to use this feature for it to be more real in my mind.

  • http://twitter.com/zak Zak Henry

    Now everyone can do CSI-esque manipulations. Still waiting on the ability to completely change the perspective in post

  • http://twitter.com/StyleQuotient Melo

I don’t buy it.  It looks more like double exposures than blurry images.  If an image is out of focus, that is an optical issue and can’t simply be re-created in software.  In the third image it’s obvious it is not so much blur as it is ‘double vision’.

    Not holding my breath.

  • http://sevennine.net Marc

It looks like it’s fixing blurriness caused by camera shake and not the type caused by being out of focus.

  • http://twitter.com/russianbox lloyd

    oh yeah they are going to fake this just for the sake of what?

    you’re an idiot

  • http://twitter.com/warzauwynn Daniel Hoherd

    You’re right.  I know that because I watched the demo video where they showed exactly what was going on.  Check it out.

  • http://www.facebook.com/profile.php?id=1469663692 Jason Heilig

    I can’t imagine changing stuff that is purely out of focus, the first photo of the person with the microphone maintains a depth of field.

    I’m not complaining though, it’s still awesome, and I don’t doubt it’s ability. I remember thinking content aware fill was a joke and well, it’s damned amazing.

  • http://twitter.com/warzauwynn Daniel Hoherd

    This is incredible, I cannot wait to use this.  This is equally as impressive as content aware fill, but much more useful to me as a photographer.  w00t!

  • http://blog.wingtangwong.com/ Wing Wong

    As someone on DPreview pointed out, the photo that got “deblurred” was originally taken sharp: http://www.flickr.com/photos/15543694@N06/5117266025
So if the original was just blurred, then de-blurred, it does kind of cut into the wow-factor of it. Had they taken real blurred images that people shot and de-blurred those, it would have made a much stronger case. The issue of the customized profile text files makes me wonder as well what secret sauce is going on behind the scenes… As with most things, I’ll believe it when I see it. Case in point: intelligent auto-fill is nice, but it sure isn’t a silver bullet.

  • Pat Smerdon

    wonder if they can apply frame by frame to old, blurry videos.. like the Kennedy assassination.

  • Nathaniel Young

    I think it opens the door to creating more fauxtographers.

  • David

    I thought this would be obvious to any photographers that have been shooting more than a couple of months. All of the examples here have been altered in post production and the original shown as “fixed”. The blurring is too uniform and doesn’t follow the traits of a real blurred photograph.  Are these photographs direct from Adobe? I don’t doubt the tech, but this is just junk created by the blogs for pageviews.

  • Charles Mason

    uhh.. did you even watch the Adobe presentation that’s linked in the post?

  • Tzctplus -

    The cabal speaks in fear of their hobby being mastered by normal people. Shock, horror.

  • Hornydogco

    Wow, this is the Milli Vanilli of photoshop.

  • Weasel

    Is it April 1st already

  • Anonymous

    This appears to work on photos that have been blurred by more or less linear camera movement. I wonder what it does for pictures that are truly out of focus?

    You see this kind of “linear” movement in “HDR” iPhone photos a lot.

  • http://twitter.com/sherifffruitfly Billy Bob

    Yah I’m calling bullshit. Information cannot be spun up out of nothingness.

  • http://www.facebook.com/profile.php?id=1469663692 Jason Heilig

    #undef hobby
    #define profession
    One of these things is not like the other.

  • http://www.facebook.com/profile.php?id=1469663692 Jason Heilig

    Content aware fill does a decent job of spinning content out of educated guesses.

    If this is dealing with what looks to be mostly motion blur, then it’s not that unreasonable. 

    If you can figure out the direction of the motion, then coming up with an approximation doesn’t strike me as impossible.

  • David Wigram

    Those of you who think this is made up clearly have a limited understanding of image processing.  The examples they use may or may not be made from sharp photos originally (if so, more fool them), but the maths underlying this kind of work is quite achievable. Removal of camera-based motion blur will enter our toolset sooner or later.

  • GT Dezines

    What I want to know is- what are those ‘parameters’ that he kept loading?

    This whole thing to me is a thing that makes you go hmmmm

  • Matt

    Can’t wait for this to become available.  Another great tool in the old toolbox. 

  • nhaler

    Blur is from motion, not being out of focus. That is why it looks like a “double-image”—BECAUSE IT IS. 

  • nhaler

    Compared to what this looks like (who knows what the actual simplicity of its use is), content-aware fill is an utter P.I.T.A. 

  • http://twitter.com/Soiden Sebastián Soto

Like someone pointed out, it’s more likely they’re deblurring virtually blurred images than actual shaky-cam photos.

    I mean, the blur images here look fake. I’ve taken many shaky-cam photos and none of them looked like that.

  • http://500px.com/Phil_Johnston Phil Johnston

    Why though?

Just take the time to make a sharp image to begin with… it’s not that hard really!

  • Allison Altdoerffer

    Hi there,

    Thanks for the keen observation. Adobe actually addressed this concern in an update to Sunday’s blog post: http://blogs.adobe.com/photoshopdotcom/2011/10/behind-all-the-buzz-deblur-sneak-peek.html. 

    Scroll to the bottom to review the update, posted today:

    “The image of Kevin Lynch was synthetically blurred from a sharp image taken from the web. What do we mean by synthetic blur? A synthetic blur was created by extracting the camera shake information from another real blurry image and applying it to the Kevin Lynch image to create a realistic simulation. This kind of blur is created with our research tool. Because the camera shake data is real, it is much more complicated than anything we can simulate using Photoshop’s blur capabilities. When this new image was loaded as a JPEG into the deblur plug-in, the software has no idea it was synthetically generated. This is common practice in research and we used the Kevin example because we wanted it to be entertaining and relevant to the audience – Kevin being the star of the Adobe MAX conference!”

  • Dave

    This is going to kill tripod sales :)

  • Jadie_moll

Maybe it’s just me, but the phone number looks like it’s wrong in the second image :/

  • M johnson

    You’re close to being a believer in Intelligent Design. Congrats !

  • FotoMatt

    what ever happened to photography skills?

  • http://pulse.yahoo.com/_CJ4GARZQBNS5CNMKLNMKA5FWOY rogeliog.

    yes, opening the door to the further watering down of the professional photography industry

  • Shephard

What tool were you using to deblur this image? This is so amazing

  • http://pulse.yahoo.com/_N5MBV5ESGPFXGBCNN5W45RL5ZA Tru

    No, it looks to be real.  Adobe just put out some more before & after photos, and is inviting others
      
    http://prodesigntools.com/photoshop-image-deblur-adobe-releases-new-photos.html
      
    With those historical and independent shots in there it’s pretty convincing

  • Kurt Thomas

    This is for real. The science behind it was presented at a number of Siggraph conferences in the past years. I had been wondering when it would become commercially available.

  • Seriously??

wow, A LOT of you really need to go out and get a life, it’s just photo editing software for crap sake. And after working in photo processing for 12 yrs, almost every “professional” photographer has their head so far up their own ass that they don’t realize in reality they are just photographers who get OK shots they edit to be good. Then they way overcharge for luck. Plus in this economy nobody can afford to pay for a photographer’s ego anyway.

  • Mr.VFX

I believe these are motion blur images from camera shake, so the image is in FOCUS at one point. Not really the same thing as an image out of focus. Even with this in mind, this technology still has very real practical applications.

An example of this would be in the visual effects world: a compositor could remove and add motion blur with ease when separating elements before comping.

  • Roy

    I think you are all taking this incorrectly, as it wasn’t explained very well. They are de-blurring camera shake, NOT out of focus images. That’s the key. The image is in CORRECT FOCUS, but the camera shake has a specific trajectory that can be tracked in the image itself and blended back together. It’s really not that complicated, because the whole image will have the same movement, it’s just a matter of the algorithm solving it. It’s NOT a magic tool that will take an out of focus face and magically make them appear crisp and in focus.
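Roy’s point, that the whole image shares one shake trajectory which an algorithm can solve for, is the core of this line of research. Adobe hasn’t published its method, but as a toy illustration of the easier, non-blind half of the problem (assuming the kernel is already known), here is a classic Wiener deconvolution sketch in NumPy; the `snr` parameter is an assumed regularisation constant, not anything from the demo:

```python
import numpy as np

def _pad_kernel(kernel, shape):
    """Embed a small kernel in a full-size array, centred at the origin."""
    k = np.zeros(shape)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    return np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))

def wiener_deblur(blurred, kernel, snr=1e-3):
    """Non-blind deconvolution: given the shake kernel, invert the blur.
    `snr` regularises frequencies the kernel nearly destroyed, which is
    why deblurred results ring instead of recovering detail perfectly."""
    K = np.fft.fft2(_pad_kernel(kernel, blurred.shape))
    B = np.fft.fft2(blurred)
    # Wiener filter: conj(K) / (|K|^2 + snr)
    return np.real(np.fft.ifft2(B * np.conj(K) / (np.abs(K) ** 2 + snr)))

# Demo: smear a single bright pixel with a horizontal "shake" streak,
# then invert the blur with the known kernel.
kernel = np.ones((1, 5)) / 5.0  # 5-pixel horizontal streak
sharp = np.zeros((32, 32))
sharp[16, 16] = 1.0
K = np.fft.fft2(_pad_kernel(kernel, sharp.shape))
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * K))
restored = wiener_deblur(blurred, kernel)
```

The hard part Adobe is demoing, estimating the kernel from the blurry photo alone (blind deconvolution), is what the mysterious “parameters” files presumably encode; the inversion step above is decades-old signal processing.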

  • NikonFotoMatt

    What ever happened to learning to shoot it right in the first place? 

  • SomeGuy

    actually the kernel on one of their images looked like it almost pulled a 180 within the exposure time…

  • Film Comper

When it works on moving elements, great… but this filter works as a constant blur across an entire image, i.e. camera blur… and at this point probably only if the blur is straight.