YouTube just announced a useful new feature: an easy face blur option. The announcement says the feature is aimed at news and human rights organizations that need to protect privacy and identities, such as when posting footage of activists who must remain anonymous or when minors appear in a video. Read more…
“Super-Resolution From a Single Image” is an interesting research page by computer scientists over at the Weizmann Institute of Science in Israel. It details the group’s efforts to create sharp enlargements of small photographs, and offers comparisons between their algorithm and other popular ones being used and researched (e.g. nearest neighbor, bicubic). The large image of the baby seen above was created from the tiny image on the left. See if you can create something more usable using Photoshop.
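To get a feel for what the baselines in that comparison actually do, here's a minimal pure-Python sketch of two of them: nearest neighbor copies the closest input pixel (blocky), while bilinear blends the four surrounding pixels (smoother). This is only the simple baselines; the Weizmann algorithm goes far beyond either.

```python
def upscale_nearest(img, factor):
    """Each output pixel copies the closest input pixel (blocky result)."""
    h, w = len(img), len(img[0])
    return [[img[y // factor][x // factor]
             for x in range(w * factor)]
            for y in range(h * factor)]

def upscale_bilinear(img, factor):
    """Each output pixel blends the four surrounding input pixels."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h * factor):
        sy = y / factor
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        fy = sy - y0
        row = []
        for x in range(w * factor):
            sx = x / factor
            x0 = int(sx)
            x1 = min(x0 + 1, w - 1)
            fx = sx - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

tiny = [[0, 255], [255, 0]]  # a 2x2 grayscale "photo"
blocky = upscale_nearest(tiny, 2)   # hard-edged 2x2 blocks
smooth = upscale_bilinear(tiny, 2)  # intermediate gray values appear
```

On a real photo, nearest neighbor produces the familiar pixelated look, while bilinear (and bicubic, its higher-order cousin) smears detail into soft gradients, which is exactly the weakness the super-resolution research attacks.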
Here’s a video overview of some interesting research that’s being done in the area of video processing. By taking standard video as an input and doing some fancy technical mojo on it, researchers are able to amplify subtle signals and reveal things that are virtually invisible to the human eye. For example, you can detect a baby’s heartbeat simply by pointing a camera at its face: the method visualizes the pulsating flow of blood that fills the face.
What if all advertising photos came with a number that revealed the degree to which they were Photoshopped? We might not be very far off, especially with recent advertising controversies and efforts to get “anti-Photoshop laws” passed. Researchers Hany Farid and Eric Kee at Dartmouth have developed a software tool that detects how much fashion and beauty photos have been altered compared to the original image, grading each photo on a scale of 1-5. The program may eventually be used as a tool for regulation: both publications and models could require that retouchers stay within a certain threshold when editing images.
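As a toy illustration of how a "how Photoshopped is it?" score could work, here's a naive sketch that maps the average pixel difference between an original and a retouched version onto a 1-to-5 scale. This is purely an assumed illustration; Farid and Kee's actual metric is far more sophisticated, modeling geometric and photometric distortions separately.

```python
def alteration_score(original, retouched):
    """Map mean absolute pixel difference onto a 1 (untouched) to 5 scale."""
    diffs = [abs(a - b) for a, b in zip(original, retouched)]
    mean = sum(diffs) / len(diffs)
    # Hypothetical calibration: every 10 levels of mean change adds a point.
    return min(5, 1 + int(mean / 10))

before = [120, 130, 125, 140]       # original pixel values
after_light = [121, 131, 124, 141]  # subtle retouching
after_heavy = [160, 180, 90, 200]   # heavy retouching
```

A light touch-up barely moves the score, while aggressive edits max it out; the hard research problem is doing something like this perceptually, so the number tracks how different the photo *looks*, not just raw pixel deltas.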
Xerox is showing off a new tool called Aesthetic Image Search over on Open Xerox (the Xerox equivalent of Google Labs). It’s an algorithm being developed at one of the company’s labs that aims to make judging a photograph’s aesthetics something a computer can do.
Many methods for image classification are based on recognition of parts — if you find some wheels and a road, then the picture is more likely to contain a car than a giraffe. But what about quality? What is it about a picture of a building or a flower or a person that makes the image stand out from the hundreds which are taken with a digital camera every day? Here we tackle the difficult task of trying to learn automatically what makes an image special, and makes photo enthusiasts mark it as high quality.
You can play around with a simple demo of the technology here. Don’t tell the Long Beach Police Department about it though — they might use it against photographers.
Last week we shared a sneak peek at some jaw-dropping image deblurring technology currently in development at Adobe. The video wasn’t the best quality and was captured from the audience, so we didn’t get to see the example images very clearly. Adobe has now released an official video of the demo, giving us a better glimpse at what the feature can do. Read more…
At the Adobe MAX 2011 event in LA last week, the company gave a sneak peek into an advanced Image Deblurring feature that may appear in an upcoming version of Photoshop. Provided with a blurred photograph, the feature uses advanced algorithms to calculate the camera movements that caused the blur, allowing the program to unblur the photograph with remarkable accuracy. The video is a bit shaky and the quality isn’t the best, but judging from the audience’s reaction when the example photo is unblurred, the feature works extremely well and caused a lot of jaws to drop.
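Once the camera motion (the blur kernel) has been estimated, the remaining step is deconvolution. Here's a minimal 1-D sketch of that second step using Richardson–Lucy deconvolution, a classic algorithm for this job; it assumes the kernel is already known, and it is not Adobe's actual method, just an illustration of how a known blur can be undone iteratively.

```python
def conv(sig, ker):
    """'Same'-size 1-D convolution with zero padding at the edges."""
    n, m = len(sig), len(ker)
    half = m // 2
    return [sum(sig[i + j - half] * ker[j]
                for j in range(m) if 0 <= i + j - half < n)
            for i in range(n)]

def richardson_lucy(blurred, kernel, iters=60):
    """Iteratively re-concentrate blurred energy back to its source."""
    est = [1.0] * len(blurred)
    flipped = kernel[::-1]
    for _ in range(iters):
        denom = conv(est, kernel)
        ratio = [b / max(d, 1e-9) for b, d in zip(blurred, denom)]
        correction = conv(ratio, flipped)
        est = [e * c for e, c in zip(est, correction)]
    return est

sharp = [0, 0, 0, 10, 0, 0, 0, 8, 0, 0]  # two crisp "edges"
kernel = [1 / 3, 1 / 3, 1 / 3]           # simulated camera shake
blurred = conv(sharp, kernel)            # each edge smeared over 3 pixels
restored = richardson_lucy(blurred, kernel)
```

After a few dozen iterations the smeared energy piles back up at the original positions. The genuinely hard part of Adobe's demo is the step skipped here: estimating the unknown, spatially-varying kernel from the blurry photo alone (blind deconvolution).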
Demos at graphics conferences are often interesting to watch because they offer a sneak peek at technologies that may soon become available to the general public. The video above is a demo for “PatchMatch”, an algorithm developed by researchers at Princeton and Adobe. Although you might be unfamiliar with PatchMatch, you’ve probably heard of its most famous feature: Content Aware Fill. Only a small piece of this amazing technology was introduced in Photoshop CS5, so the amazing image manipulations seen in this demo are likely a sneak peek into what we’ll be seeing in Photoshop CS6.
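The idea underneath Content Aware Fill is patch matching: fill a hole with pixels whose surroundings best match the hole's surroundings. Here's a toy 1-D version of that idea; it's an assumed simplification, nothing like the real PatchMatch algorithm, which uses randomized search and propagation to find matches fast on full images.

```python
def fill_hole(signal, start, end, context=2):
    """Fill signal[start:end] by copying from the best-matching region."""
    hole_len = end - start
    left = signal[start - context:start]
    right = signal[end:end + context]
    best, best_cost = None, float("inf")
    for i in range(context, len(signal) - hole_len - context + 1):
        # Skip candidates whose context would overlap the hole itself.
        if i + hole_len + context > start and i - context < end:
            continue
        cand_left = signal[i - context:i]
        cand_right = signal[i + hole_len:i + hole_len + context]
        cost = sum((a - b) ** 2
                   for a, b in zip(left + right, cand_left + cand_right))
        if cost < best_cost:
            best, best_cost = signal[i:i + hole_len], cost
    return signal[:start] + best + signal[end:]

texture = [1, 2, 3] * 5                       # a repeating "texture"
damaged = texture[:4] + [9, 9] + texture[6:]  # corrupt indices 4 and 5
repaired = fill_hole(damaged, 4, 6)           # recovers the pattern
```

Because the surrounding texture repeats elsewhere in the signal, the best-matching patch carries exactly the missing values, which is why content-aware fill works so eerily well on grass, sky, and other repetitive backgrounds.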
Computer vision PhD student Zdenek Kalal developed an object-tracking system called “Predator” that learns from its mistakes while given difficult recognition and tracking tasks. It’s a pretty interesting glimpse at how powerful the autofocus feature on consumer cameras might be in the future. Imagine being able to teach your camera to recognize a particular moving subject, then having all the photographs taken show that subject in perfect focus! Cameras already have simple facial recognition features built in these days, but something like this would take it to the next level.
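The "learns as it goes" part can be sketched in miniature: track a pattern by template matching, then blend what was found back into the template so the tracker adapts as the subject's appearance drifts. This is an assumed, highly simplified caricature of the idea, not Kalal's actual tracking-learning-detection system.

```python
def track(frame, template):
    """Return the offset where the template matches best (squared error)."""
    best_i, best_cost = 0, float("inf")
    for i in range(len(frame) - len(template) + 1):
        cost = sum((frame[i + j] - t) ** 2 for j, t in enumerate(template))
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

def update(template, patch, rate=0.3):
    """Blend the found patch into the template (the 'learning' step)."""
    return [(1 - rate) * t + rate * p for t, p in zip(template, patch)]

template = [10.0, 20.0, 10.0]        # the subject's initial appearance
frames = [
    [0, 0, 10, 20, 10, 0, 0],        # subject at offset 2
    [0, 0, 0, 11, 22, 11, 0],        # moved right, appearance drifting
]
for frame in frames:
    pos = track(frame, template)
    template = update(template, frame[pos:pos + 3])
```

A fixed template would slowly stop matching a subject that turns, changes lighting, or deforms; folding each detection back into the model is what lets a tracker like Predator stay locked on.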
Head on over to Kalal’s project page, where you can even download the software to try out yourself.
Adobe is working on a new feature for Photoshop called “Content Aware Fill”, and posted a mind-boggling demonstration of it on YouTube. The description states:
One of the biggest requests we get of Photoshop is to make adding, removing, moving or repairing items faster and more seamless. From retouching to completely reimagining an image, here’s an early glimpse of what could happen in the future when you press the delete key.
Basically it allows you to alter or create reality in photographs as easily as selecting an area and running the feature. Gone will be the days when photojournalists are caught with embarrassing patterns created by improperly using the stamp tool. The demonstration is so amazing that many commenters are saying it’s fake, going as far as to say it looks… “photoshopped”?
What do you think of this feature and the sneak peek? Is it too good to be true, or will it change the way we think about photography forever?