As people snap more and more digital photos, being able to organize those photos into useful sets is becoming increasingly important. Facial recognition algorithms are quickly becoming a standard feature in popular photo organization programs (e.g. iPhoto), but people-sorting is only the tip of the “semantic photo search” iceberg. Cloud photo service Everpix is one company currently working to take photo recognition beyond faces. Sarah Perez of TechCrunch writes,
[…] the eventual goal for Everpix is to become the default way people choose to view and share photos. One development which could help it get there is the image analysis technology the company has been developing in-house. As people’s photo collections grow exponentially over the years, it’s something that will become more valuable in time. Using generalized semantic tagging techniques, Everpix is building algorithms that can identify what the photo is of – meaning, whether it’s a person, a night or day shot, a wide or close shot, a city scene, a nature photo, a photo of a baby, or a vehicle, or a photo of food, among many other things.
What’s important here is the way they’ve built this to scale. After training the system on a minimal number of photos, Everpix can then look for other photos in a user’s collection that match a given signature without reprocessing the entire photo collection.
In the future, we’ll likely be able to search for photos with photos. Looking for a particular photo that you took at a popular tourist landmark? Just show the app a similar photo found online, and voilà, yours appears.
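The matching idea is simple enough to sketch in a few lines. Everything below is illustrative only: Everpix’s real signatures come from learned semantic models, while this toy uses a plain color histogram as the “signature” and histogram intersection as the similarity score. The point it demonstrates is the workflow described above: each photo is reduced to a compact signature once, and new queries are matched against stored signatures without touching the original pixels again.

```python
import numpy as np

def signature(image, bins=4):
    """A coarse RGB color histogram as a stand-in for a learned semantic
    signature (real systems use far richer features)."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3).astype(float),
        bins=(bins, bins, bins),
        range=[(0, 256)] * 3,
    )
    v = hist.ravel()
    return v / v.sum()

def most_similar(query_sig, library_sigs):
    """Index of the library photo whose stored signature best matches the
    query, scored by histogram intersection -- no pixel reprocessing."""
    scores = [float(np.minimum(query_sig, s).sum()) for s in library_sigs]
    return int(np.argmax(scores))

# Two synthetic "photos": one mostly red, one mostly blue.
rng = np.random.default_rng(0)
reddish = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
reddish[..., 1:] //= 4
bluish = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
bluish[..., :2] //= 4

library = [bluish, reddish]
best = most_similar(signature(reddish), [signature(im) for im in library])
```

Searching "with a photo" is then just computing the query image's signature and ranking the library by similarity, which is exactly the tourist-landmark scenario above.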
YouTube just announced a useful new feature: an easy face blur option. The announcement says the feature is aimed at news and human rights organizations that want to protect privacy and identities, especially when posting footage of activists who may need to remain anonymous or when minors appear in a video and privacy is a concern.
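Mechanically, face blurring is two steps: detect a face box, then destroy the detail inside it. YouTube hasn’t published its implementation; the sketch below assumes the face box has already been found by some off-the-shelf detector and just shows the second step with a naive box blur.

```python
import numpy as np

def box_blur(region, k=7):
    """Naive box blur: average each pixel over a (2k+1)x(2k+1) window."""
    pad = np.pad(region.astype(float), ((k, k), (k, k), (0, 0)), mode="edge")
    out = np.zeros_like(region, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += pad[k + dy:k + dy + region.shape[0],
                       k + dx:k + dx + region.shape[1]]
    return (out / (2 * k + 1) ** 2).astype(np.uint8)

def blur_face(frame, box, k=7):
    """Blur one detected face box (x, y, w, h). The detection itself is
    assumed to come from a separate face detector, not shown here."""
    x, y, w, h = box
    frame = frame.copy()
    frame[y:y + h, x:x + w] = box_blur(frame[y:y + h, x:x + w], k)
    return frame

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in frame
anonymized = blur_face(frame, (16, 16, 24, 24))            # hypothetical box
```

A production tool would run this per video frame and track the face between detections so the blur follows the subject.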
“Super-Resolution From a Single Image” is an interesting research page by computer scientists over at the Weizmann Institute of Science in Israel. It details the group’s efforts to create sharp enlargements of small photographs, and offers comparisons between their algorithm and other popular ones being used and researched (e.g. nearest neighbor, bicubic). The large image of the baby seen above was created from the tiny image on the left. See if you can create something more usable using Photoshop.
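To see why those baseline interpolators leave so much room for improvement, here is a sketch of the two simplest ones. Nearest neighbor just repeats pixels (blocky); bilinear blends the four nearest source pixels (smooth but soft). Bicubic, mentioned on the research page, works the same way but blends 16 neighbors with cubic weights; bilinear is shown here for brevity. None of these can invent the real detail that the Weizmann approach tries to recover.

```python
import numpy as np

def upscale_nearest(img, s):
    """Nearest neighbor: every source pixel becomes an s-by-s block."""
    return img.repeat(s, axis=0).repeat(s, axis=1)

def upscale_bilinear(img, s):
    """Bilinear: linearly blend the four nearest source pixels."""
    h, w = img.shape[:2]
    ys = (np.arange(h * s) + 0.5) / s - 0.5   # sample positions in source
    xs = (np.arange(w * s) + 0.5) / s - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = np.clip(ys - y0, 0, 1)[:, None, None]
    wx = np.clip(xs - x0, 0, 1)[None, :, None]
    a = img[y0][:, x0].astype(float)          # top-left neighbors
    b = img[y0][:, x0 + 1].astype(float)      # top-right
    c = img[y0 + 1][:, x0].astype(float)      # bottom-left
    d = img[y0 + 1][:, x0 + 1].astype(float)  # bottom-right
    return (a * (1 - wx) + b * wx) * (1 - wy) + (c * (1 - wx) + d * wx) * wy

tiny = np.zeros((2, 2, 3), dtype=np.uint8)
tiny[1] = 200                      # top row black, bottom row bright
blocky = upscale_nearest(tiny, 4)  # hard 0/200 edge
smooth = upscale_bilinear(tiny, 4) # gradual ramp between the rows
```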
Here’s a video overview of some interesting research being done in the area of video processing. By taking standard video as input and applying some clever signal processing to it, researchers are able to amplify information in it to reveal things that are virtually invisible to the human eye. For example, you can detect a baby’s heartbeat simply by pointing a camera at their face: the method visualizes the pulsating flow of blood that fills the face.
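The core trick can be shown on a single pixel. Treat the pixel’s brightness over time as a signal, keep only the temporal frequency band where a pulse lives (roughly 0.8 to 2 Hz), amplify that band, and add it back. This is a minimal one-pixel sketch of that idea (the published method does this across the whole frame, typically on a spatial pyramid); all the numbers here are made up for the demo.

```python
import numpy as np

fps = 30.0
t = np.arange(300) / fps
# One pixel's brightness over 10 seconds: a large static level plus a tiny
# 1.2 Hz "pulse" (72 bpm) far too small to see directly.
pixel = 100.0 + 0.05 * np.sin(2 * np.pi * 1.2 * t)

def magnify(series, fps, lo, hi, alpha):
    """Keep only the temporal band [lo, hi] Hz of a time series, amplify it
    by alpha, and add it back to the original signal."""
    spec = np.fft.rfft(series - series.mean())
    freqs = np.fft.rfftfreq(len(series), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    spec[~band] = 0                          # temporal bandpass
    return series + alpha * np.fft.irfft(spec, n=len(series))

amplified = magnify(pixel, fps, 0.8, 2.0, alpha=50)
```

After amplification the once-invisible 0.1-unit flicker becomes a swing of several units, which is exactly how the subtle blood-flow color change becomes visible in the processed video.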
What if all advertising photos came with a number that revealed the degree to which they were Photoshopped? We might not be very far off, especially with recent advertising controversies and efforts to get “anti-Photoshop laws” passed. Researchers Hany Farid and Eric Kee at Dartmouth have developed a software tool that detects how much fashion and beauty photos have been altered compared to the original image, grading each photo on a scale of 1-5. The program may eventually be used as a tool for regulation: both publications and models could require that retouchers stay within a certain threshold when editing images.
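To make the idea of a 1-to-5 grade concrete, here is a deliberately crude stand-in: score a retouched image by its mean pixel change from the original, mapped onto that scale. Farid and Kee’s actual metric is far more sophisticated (it models geometric and photometric distortions separately and is calibrated against human ratings); this toy only illustrates the interface such a tool exposes, with the 0.25 saturation point chosen arbitrarily.

```python
import numpy as np

def retouch_score(before, after):
    """Toy retouching grade: mean absolute pixel change, mapped to 1-5.
    The real Farid/Kee metric is perceptually calibrated; this is not."""
    diff = np.abs(before.astype(float) - after.astype(float)).mean() / 255.0
    return 1 + 4 * min(diff / 0.25, 1.0)  # >= 25% mean change maxes out at 5

rng = np.random.default_rng(2)
original = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
light = np.clip(original.astype(int) + 5, 0, 255).astype(np.uint8)   # subtle
heavy = np.clip(original.astype(int) + 80, 0, 255).astype(np.uint8)  # drastic

subtle = retouch_score(original, light)
severe = retouch_score(original, heavy)
```

A regulator-style threshold would then be a single comparison, e.g. rejecting any ad photo whose score exceeds some agreed limit.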
Xerox is showing off a new tool called Aesthetic Image Search over on Open Xerox (the Xerox equivalent of Google Labs). It’s an algorithm being developed at one of the company’s labs that aims to make judging a photograph’s aesthetics something a computer can do.
Many methods for image classification are based on recognition of parts — if you find some wheels and a road, then the picture is more likely to contain a car than a giraffe. But what about quality? What is it about a picture of a building or a flower or a person that makes the image stand out from the hundreds which are taken with a digital camera every day? Here we tackle the difficult task of trying to learn automatically what makes an image special, and makes photo enthusiasts mark it as high quality.
You can play around with a simple demo of the technology here. Don’t tell the Long Beach Police Department about it though — they might use it against photographers.
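As a rough illustration of how such a judgment can be learned rather than hand-coded, here is a tiny nearest-centroid "rater". The two features (tonal contrast and colorfulness) and the labels are invented for the demo; Xerox’s system learns much richer features from large sets of human-rated photos, but the train-then-judge structure is the same.

```python
import numpy as np

def aesthetic_features(img):
    """Two crude global cues of the kind aesthetics models often start
    from: tonal contrast and colorfulness (illustrative only)."""
    gray = img.mean(axis=2)
    contrast = gray.std() / 255.0
    colorfulness = img.std(axis=2).mean() / 255.0
    return np.array([contrast, colorfulness])

def train_centroids(examples, labels):
    """Average the feature vectors of rated example photos per label."""
    feats = np.array([aesthetic_features(im) for im in examples])
    return {lab: feats[[l == lab for l in labels]].mean(axis=0)
            for lab in set(labels)}

def judge(img, centroids):
    """Rate a new photo by its nearest feature centroid."""
    f = aesthetic_features(img)
    return min(centroids, key=lambda lab: np.linalg.norm(f - centroids[lab]))

rng = np.random.default_rng(3)
dull = np.full((32, 32, 3), 128, dtype=np.uint8)           # flat, low contrast
vivid = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)  # busy, colorful
centroids = train_centroids([dull, vivid], ["low", "high"])
verdict = judge(np.full((32, 32, 3), 90, dtype=np.uint8), centroids)
```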
Last week we shared a sneak peek at some jaw-dropping image deblurring technology currently in development at Adobe. The video wasn’t the best quality and was captured from the audience, so we didn’t get to see the example images very clearly. Adobe has now released an official video of the demo, giving us a better glimpse at what the feature can do.
At the Adobe MAX 2011 event in LA last week, the company gave a sneak peek at an advanced Image Deblurring feature that may appear in an upcoming version of Photoshop. Provided with a blurred photograph, the feature uses advanced algorithms to estimate the camera movements that caused the blur, which allows the program to reverse the blur with remarkable accuracy. The video is a bit shaky and the quality isn’t the best, but judging from the audience’s reaction when the example photo is unblurred, the feature works extremely well and caused a lot of jaws to drop.
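The hard part of what Adobe showed is estimating the blur kernel (the path the camera traced during the exposure). Once a kernel is known, undoing the blur is classical deconvolution. The sketch below skips kernel estimation entirely, assumes the kernel is known, and applies standard Wiener deconvolution in the frequency domain on a synthetic example; it is not Adobe’s algorithm.

```python
import numpy as np

def wiener_deblur(blurred, kernel, eps=1e-6):
    """Wiener deconvolution with a *known* blur kernel (same size as the
    image). Estimating the kernel from the photo, as Adobe's demo does,
    is the hard part and is omitted here."""
    H = np.fft.fft2(kernel)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + eps)  # regularized inverse filter
    return np.real(np.fft.ifft2(F))

# Synthetic test: blur a sharp image with a horizontal "camera shake" streak.
rng = np.random.default_rng(4)
sharp = rng.random((64, 64))
kernel = np.zeros((64, 64))
kernel[0, :5] = 1 / 5   # 5-pixel horizontal motion blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel)))
restored = wiener_deblur(blurred, kernel)
```

With the exact kernel and no sensor noise the restoration is nearly perfect, which is why kernel estimation, not deconvolution, is where the research effort goes.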
Demos at graphics conferences are often interesting to watch because they offer a sneak peek at technologies that may soon become available to the general public. The video above is a demo of “PatchMatch”, an algorithm developed by researchers at Princeton and Adobe. Although you might be unfamiliar with PatchMatch, you’ve probably heard of its most famous application: Content-Aware Fill. Only a small piece of this technology was introduced in Photoshop CS5, so the impressive image manipulations seen in this demo are likely a preview of what we’ll be seeing in Photoshop CS6.
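At its core, filling a hole means finding a patch elsewhere in the image whose surroundings match the surroundings of the hole, then copying it in. PatchMatch’s actual contribution is finding such matches in roughly linear time via randomized search and match propagation between neighboring patches; the brute-force scan below is only meant to show *what* is being matched, on a texture simple enough that the fill is exact.

```python
import numpy as np

def inpaint_square(img, y, x, p):
    """Fill a p-by-p hole at (y, x) by copying the patch whose 1-pixel
    surrounding ring best matches the ring around the hole. Brute-force
    scan for clarity; PatchMatch finds the same kind of match fast."""
    img = img.astype(float).copy()
    img[y:y + p, x:x + p] = np.nan           # mark the missing region

    def ring(cy, cx):
        """The 1-pixel border surrounding a p-by-p patch at (cy, cx)."""
        block = img[cy - 1:cy + p + 1, cx - 1:cx + p + 1]
        return np.concatenate(
            [block[0], block[-1], block[1:-1, 0], block[1:-1, -1]])

    target = ring(y, x)                      # context around the hole
    h, w = img.shape
    best, best_err = None, np.inf
    for cy in range(1, h - p - 1):
        for cx in range(1, w - p - 1):
            cand = img[cy:cy + p, cx:cx + p]
            r = ring(cy, cx)
            if np.isnan(cand).any() or np.isnan(r).any():
                continue                     # candidate touches the hole
            err = ((r - target) ** 2).sum()
            if err < best_err:
                best, best_err = (cy, cx), err
    by, bx = best
    img[y:y + p, x:x + p] = img[by:by + p, bx:bx + p]
    return img

# A periodic stripe texture: the hole can be filled exactly from elsewhere.
stripes = np.tile((np.arange(16) % 4) * 60.0, (16, 1))
filled = inpaint_square(stripes, 6, 6, 4)
```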
Computer vision PhD student Zdenek Kalal developed a tracking system called “Predator” that learns from its mistakes while performing difficult recognition and tracking tasks. It’s a pretty interesting glimpse at how powerful the autofocus feature on consumer cameras might be in the future. Imagine being able to teach your camera to recognize a particular moving subject, then having every photograph you take show that subject in perfect focus! Cameras already have simple facial recognition features built in these days, but something like this would take it to the next level.
Head on over to Kalal’s project page, where you can even download the software to try out yourself.
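The “learns while it tracks” idea can be boiled down to a template tracker that updates its own model. The sketch below matches a template by exhaustive sum-of-squared-differences in each frame, then blends the winning patch back into the template so the model adapts as the subject’s appearance changes. Kalal’s actual system (TLD) layers a detector and a P-N learning rule on top of this basic loop; the synthetic three-frame “clip” here is invented for the demo.

```python
import numpy as np

def track(frames, box, alpha=0.3):
    """Track-and-learn loop: exhaustive SSD template matching per frame,
    then blend the matched patch into the template (the 'learning' step)."""
    y, x, h, w = box
    template = frames[0][y:y + h, x:x + w].astype(float)
    path = [(y, x)]
    for frame in frames[1:]:
        fh, fw = frame.shape
        best, best_err = (0, 0), np.inf
        for cy in range(fh - h + 1):
            for cx in range(fw - w + 1):
                err = ((frame[cy:cy + h, cx:cx + w] - template) ** 2).sum()
                if err < best_err:
                    best, best_err = (cy, cx), err
        path.append(best)
        by, bx = best
        # Adapt: fold the newly matched appearance into the template.
        template = (1 - alpha) * template + alpha * frame[by:by + h, bx:bx + w]
    return path

# Synthetic clip: a bright 4x4 subject drifts across three 20x20 frames.
frames = []
for sy, sx in [(2, 2), (4, 5), (6, 8)]:
    f = np.zeros((20, 20))
    f[sy:sy + 4, sx:sx + 4] = 200.0
    frames.append(f)
path = track(frames, (2, 2, 4, 4))
```

A camera autofocus built on this idea would drive the focus motor from the tracked box position in each frame.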