Cornell offers a course on designing with microcontrollers, and this year’s final project submissions featured a couple of groups who decided to build robotic photographers that help capture selfies.
That’s right: not just tags, but full-on captions like “A person riding a motorcycle on a dirt road.”
In the future, “Auto” may be an option for choosing between JPEG and RAW on Canon DSLRs. A recently published patent reveals that the company has tinkered with the idea of a feature that could automatically choose which photos to save in RAW and which ones to save as JPEG only.
Data is embedded in our environment, in our behavior, and in our genes. Over the past two years, the world has generated 90% of all the data we have today. The information has always been there, but now we can extract and collect massive amounts of it.
Given the explosion of mobile photography, social-media photo sharing, and video streaming, it’s likely that a large portion of the data we collect and create comes in the form of digital images.
Cameras these days are smart enough to recognize the faces found inside photographs and label them with names. What if the same kind of recognition could be done for the locations of photographs? What if, instead of using satellite geodata, the camera could simply recognize where it is by the contents of the photographs?
That’s what research being done at Carnegie Mellon University and INRIA/Ecole Normale Supérieure in Paris may one day lead to. A group of researchers has created a computer program that can identify the distinctive architectural elements of major cities by processing street-level photos.
What if all advertising photos came with a number that revealed the degree to which they were Photoshopped? We might not be very far off, especially with recent advertising controversies and efforts to get “anti-Photoshop laws” passed. Researchers Hany Farid and Eric Kee at Dartmouth have developed a software tool that detects how much fashion and beauty photos have been altered compared to the original image, grading each photo on a scale of 1-5. The program may eventually be used as a tool for regulation: both publications and models could require that retouchers stay within a certain threshold when editing images.
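Farid and Kee’s actual metric models geometric and photometric distortions and is calibrated against human ratings. Purely as an illustration of the idea of grading a before/after pair on a 1–5 scale, here is a toy scorer based only on mean pixel change; the function name and thresholds are invented, not the researchers’ method:

```python
def manipulation_score(original, retouched):
    """Toy stand-in for a retouching grade: rate a before/after pair
    from 1 (barely touched) to 5 (heavily altered) by mean pixel change.
    The real Dartmouth tool models geometric and photometric distortion
    separately; these thresholds are arbitrary. Images are flat lists of
    0-255 grayscale values of equal length."""
    mean_change = sum(abs(a - b) for a, b in zip(original, retouched)) / len(original) / 255
    return min(5, max(1, 1 + int(mean_change / 0.05)))
```

An identical pair scores 1, while a pair that differs by the full intensity range everywhere saturates at 5.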
Here’s the current state of imagery: still cameras can shoot HD video, video cameras can capture high-quality stills, and data storage costs continue to fall. In the future, it might become commonplace for people to make photos by shooting uber-high-quality video and then selecting the best still. However, as any photographer knows, selecting the best photograph from a burst of frames is already a challenge, so culling a single still from 30fps footage would be far more daunting.
To make the future easier for us humans, researchers at Adobe and the University of Washington are working on training computers to do the grunt work for us. One current project involves training a computer to automatically select candid portraits from video of a person. The video above is a demo of the artificial intelligence in action.
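The researchers’ candid-portrait system relies on learned models of expression and pose, but the underlying “pick the best still” step is often bootstrapped with a simple focus measure. A minimal sketch, assuming grayscale frames and using Laplacian variance (a common sharpness heuristic, not the researchers’ method):

```python
def sharpness(frame):
    """Variance of a 4-neighbour Laplacian response, a common focus
    measure: higher means more fine detail / less blur.
    `frame` is a 2-D list of grayscale values."""
    h, w = len(frame), len(frame[0])
    lap = [frame[y - 1][x] + frame[y + 1][x] + frame[y][x - 1] + frame[y][x + 1]
           - 4 * frame[y][x]
           for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(lap) / len(lap)
    return sum((v - mean) ** 2 for v in lap) / len(lap)

def best_still(frames):
    """Index of the sharpest frame in a burst or video clip."""
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))
```

A flat, featureless frame scores zero, so any frame with real detail wins the selection.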
Robots might not be able to convey emotions or tell stories through photographs, but one thing they’re theoretically better than humans at is calculating proportions in a scene, and that’s exactly what one robot at India’s IIT Hyderabad has been taught to do. Computer scientist Raghudeep Gadde programmed a humanoid robot with a head-mounted camera to obey the rule of thirds and the golden ratio. New Scientist writes,
The robot is also programmed to assess the quality of its photos by rating focus, lighting and colour. The researchers taught it what makes a great photo by analysing the top and bottom 10 per cent of 60,000 images from a website hosting a photography contest, as rated by humans.
Armed with this knowledge, the robot can take photos when told to, then determine their quality. If the image scores below a certain quality threshold, the robot automatically makes another attempt. It improves on the first shot by working out the photo’s deviation from the guidelines and making the appropriate correction to its camera’s orientation.
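The correction step described above, working out the shot’s deviation from the guidelines and re-aiming the camera, can be sketched roughly as follows. The rule-of-thirds “power point” geometry is standard; everything else (coordinates in pixels, mapping the offset to a pan command) is a hypothetical simplification, not the IIT Hyderabad implementation:

```python
def correction(subject_xy, frame_wh):
    """Offset (dx, dy) in pixels that would move the subject onto the
    nearest rule-of-thirds power point -- a toy version of the robot's
    re-aiming step. The four power points sit at the intersections of
    the horizontal and vertical third lines."""
    (x, y), (w, h) = subject_xy, frame_wh
    points = [(w * i / 3, h * j / 3) for i in (1, 2) for j in (1, 2)]
    px, py = min(points, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
    return (px - x, py - y)  # pan/tilt the camera by this much
```

A subject already sitting on a power point needs no correction; otherwise the robot would nudge its camera by the returned offset and reshoot.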
It’s definitely a step up from Lewis, a wedding photography robot built in the early 2000s that was taught to recognize faces.
What if you could take perfect group photographs by first shooting multiple frames and then selecting the best portions of each one? Microsoft amazed us with this concept last year with its Photo Fuse technology, and now we may soon be seeing something similar coming to mobile phone cameras (and hopefully compact cameras as well). Imaging technology company Scalado gave the above demonstration at a conference earlier this month showing off Rewind, a super-useful feature that shoots a burst of full-res photos and then lets you select the best face for each person in the image. Next up on our wishlist: Content-Aware Fill.