Yesterday we shared some clearer comparison images from Adobe’s jaw-dropping Image Deblurring demo. Cari Gushiken over on the Photoshop.com blog has written up a post that sheds a little more light on how the idea came about, the current challenges they face, and where they see it headed.
To be clear, the feature deals with blur caused by camera shake. In other words, blur that wouldn’t have been there had the camera not been moving. For other types of blur (e.g. motion blur in the scene, a dirty lens, an out-of-focus image), the feature can’t work its magic.
The feature runs into problems when presented with photographs that contain multiple types of blur, or that lack the strong edges on which it bases its calculations:
The tricky part is when an image has more than one kind of blur, which occurs in most images. Current deblur technology can’t solve for different blur types occurring in different parts of a single image, or on top of one another. For example, if you photograph a person running and also shake the camera when you press the shutter, the runner will be blurry because he is moving and the whole image might have some blur due to the camera shake. If an image has other issues like the noise you often get from camera phones, or if it was taken in low light, the algorithms might identify the wrong parts of an image as blurry, and thus add artifacts in the deblur process that actually make it look worse.
Strong edges in an image help the technology estimate the type of blur.
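The idea in the quote above can be sketched with a toy example. The snippet below is not Adobe’s algorithm — their feature estimates the shake kernel blindly from the image’s edges — but a minimal, non-blind sketch in NumPy under the assumption that the kernel is already known: it simulates camera shake as a short horizontal motion-blur kernel, blurs a high-contrast striped image, then restores it with a classic Wiener filter. The kernel length, test image, and regularization constant are all illustrative assumptions.

```python
import numpy as np

def motion_kernel(length, size):
    # A horizontal line of ones: a crude model of camera shake along one axis.
    k = np.zeros((size, size))
    k[size // 2, (size - length) // 2:(size + length) // 2] = 1.0
    return k / k.sum()

def pad_kernel(kernel, shape):
    # Embed the kernel in a full-size array with its center at (0, 0),
    # so FFT-based convolution doesn't shift the image.
    padded = np.zeros(shape)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    return np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))

def wiener_deblur(blurred, kernel, reg=0.01):
    # Wiener deconvolution: invert the blur in the frequency domain,
    # with `reg` damping frequencies the blur nearly destroyed.
    H = np.fft.fft2(pad_kernel(kernel, blurred.shape))
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + reg)
    return np.real(np.fft.ifft2(F))

# A striped test image: strong vertical edges, the kind of structure
# the real feature relies on when estimating the blur.
img = np.tile(((np.arange(64) // 8) % 2).astype(float), (64, 1))

kernel = motion_kernel(9, 15)
blurred = np.real(np.fft.ifft2(
    np.fft.fft2(img) * np.fft.fft2(pad_kernel(kernel, img.shape))))
restored = wiener_deblur(blurred, kernel)

mse_blurred = np.mean((blurred - img) ** 2)
mse_restored = np.mean((restored - img) ** 2)
```

Undoing a known kernel is the easy half; the hard half, which the quote describes, is estimating that kernel from the photo itself — which is why weak edges, noise, and mixed blur types can defeat the approach.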
Here’s a before-and-after comparison showing that the feature is ineffective on photos that don’t contain “strong edges”:
Gushiken also writes that a major application of the technology (besides allowing consumers to sharpen poorly captured photos) is forensics. The feature can be used to recover information (e.g. the text in the photo at the beginning of this post) that would otherwise be considered lost.
Behind All the Buzz: Deblur Sneak Peek [Photoshop.com Blog]