Google I/O brought with it a lot of exciting updates for Google+, not least a slew of automatic improvements to Google+ Photos, including Auto Highlight, Auto Enhance, and Auto Awesome. But the updates didn’t stop when I/O ended last Friday.
Today, Google’s Search blog announced that the company has started rolling out some impressive technology that lets you search your photos based on what they contain visually, even if there’s not a tag in sight.
Here’s the current state of imagery: still cameras can shoot HD video, video cameras can capture high-quality stills, and data storage costs continue to fall. In the future, it might become commonplace for people to make photos by shooting uber-high-quality video and then selecting the best still. However, as any photographer knows, picking the best photograph from a burst of shots is already a challenge, so picking a single frame out of 30fps footage would be far more daunting.
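As a rough illustration of how a computer might help with that frame-picking chore, here’s a minimal sketch that ranks frames by a standard focus measure, the variance of a discrete Laplacian (sharper frames have more high-frequency detail). This is a generic technique, not the Adobe/UW researchers’ method; the function names are hypothetical.

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Variance of a discrete Laplacian over a grayscale frame.

    More in-focus detail means stronger local intensity changes,
    which means a higher variance.
    """
    lap = (
        -4.0 * frame[1:-1, 1:-1]
        + frame[:-2, 1:-1] + frame[2:, 1:-1]
        + frame[1:-1, :-2] + frame[1:-1, 2:]
    )
    return float(lap.var())

def best_still(frames) -> int:
    """Return the index of the sharpest frame in a burst or video clip."""
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))
```

A real system would combine a measure like this with higher-level cues (open eyes, a flattering expression), which is exactly the hard part the researchers are tackling.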
To make that future easier for us humans, researchers at Adobe and the University of Washington are working on training computers to do the grunt work for us. One current project involves training a computer to automatically select candid portraits when given video of a person. The video above is a demo of the artificial intelligence in action.
Computer vision PhD student Zdenek Kalal developed an object-tracking system called “Predator” that learns from its mistakes when given difficult recognition and tracking tasks. It’s a pretty interesting glimpse at how powerful the autofocus feature on consumer cameras might be in the future. Imagine being able to teach your camera to recognize a particular moving subject, then having every photograph you take show that subject in perfect focus! Cameras already have simple facial recognition features built in these days, but something like this would take things to the next level.
Head on over to Kalal’s project page, where you can even download the software to try out for yourself.