Facial recognition features are appearing in everything from cameras to photo-sharing sites, but have you thought about the security and privacy concerns they introduce? Fast Company has published a piece on how mobile apps may soon be able to quickly look up your identity, your personal information, and perhaps even your Social Security number!
[CMU researchers] used three relatively simple technologies to create their face recognition system: An off-the-shelf face recognizer, cloud computing processing, and personal data available through the public feed at social networking sites such as Facebook […] Combining the data gathered from the face recognizer hardware with clever search algorithms that were processed on a cloud-computing platform, the team has performed three powerful experiments: They were able to “unmask” people on a popular dating site where it’s common to protect real identities using pseudonyms, and they ID’d students walking in public on campus by grabbing their profile photos from Facebook.
Most impressively the research algorithm tried to predict personal interests and even to deduce the social security number of CMU students based solely on an image of their face–by interrogating deeper into information that’s freely available online.
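The matching step at the heart of such a system can be sketched in a few lines. Here, made-up three-number "embeddings" stand in for the output of an off-the-shelf face recognizer run over public profile photos (the names and vectors are invented for illustration, not the CMU team's actual data or code):

```python
import numpy as np

# Hypothetical precomputed face embeddings: in practice these would come
# from a face recognizer run over publicly available profile photos.
profile_embeddings = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5]),
}

def identify(query, profiles):
    """Return the profile name whose embedding is most similar (cosine) to the query."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(profiles, key=lambda name: cosine(query, profiles[name]))

print(identify(np.array([0.85, 0.15, 0.25]), profile_embeddings))  # → alice
```

Scaled up with cloud computing, the same nearest-neighbor search runs over millions of profiles, which is what makes the experiments above feasible.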
Having a camera that shoots 5000 frames per second is enough to capture slow motion footage of a bullet flying through the air, but scientists at the Science and Technology Facilities Council have now announced a camera that shoots a staggering 4.5 million frames per second. Rather than bullets, the camera is designed to capture 3D images of individual molecules using powerful X-ray flashes that last one hundred-million-billionth of a second. The £3 million camera will land in scientists’ hands in 2015.
Facial recognition technology has become ubiquitous in recent years, being found in everything from the latest compact camera to websites like Facebook. The same may soon be said about location recognition. Through a new project called “Finder”, the US intelligence community’s research agency IARPA is looking into how to quickly and automatically identify where a photograph was taken without any geotag data. The goal is to use only the identifying features found in the background of scenes to determine the location — kinda like facial recognition except for landscapes.
What if in the future, the human eye itself could be turned into a camera by simply reading and recording the data that it sends to the brain? As crazy as it sounds, researchers have already accomplished this at a very basic level:
In 1999, researchers led by Yang Dan at University of California, Berkeley decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus (which integrates all of the brain’s sensory input) of sharp-eyed cats. Researchers targeted 177 brain cells in the thalamus lateral geniculate nucleus area, which decodes signals from the retina. The cats were shown eight short movies, and their neuron firings were recorded. Using mathematical filters, the researchers decoded the signals to generate movies of what the cats saw and were able to reconstruct recognizable scenes and moving objects. [#]
Basically, the scientists were able to tap into the brain of a cat and display what the cat was seeing on a computer screen. Something similar was accomplished with humans a few years ago, and scientists believe that in the future we may even be able to “photograph” human dreams!
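The "mathematical filters" in studies like this are, roughly, decoders fit to recorded neural responses. A toy, noise-free version of linear stimulus decoding (with an invented encoding model, not the study's actual filters) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear encoding model: each "neuron" responds with a weighted sum of
# pixel intensities. The weights here are random stand-ins; the real study
# fit decoding filters to recorded thalamic responses.
n_pixels, n_neurons = 16, 64
encoding = rng.normal(size=(n_neurons, n_pixels))

stimulus = rng.uniform(size=n_pixels)    # the "image" the cat sees
responses = encoding @ stimulus          # idealized, noise-free firing rates

# Decode by inverting the (overdetermined) encoding model with least squares.
decoded, *_ = np.linalg.lstsq(encoding, responses, rcond=None)

print(np.allclose(decoded, stimulus))    # → True in this noise-free toy model
```

With real, noisy spike trains the reconstruction is far blurrier, which is why the recovered cat movies were recognizable but fuzzy.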
The WVIL concept camera that made the rounds on the Internet featured a lens that could operate separately from the camera body, but Or Leviteh’s MMI camera is even simpler: it’s a small screen-less camera that uses a smartphone as its “camera body”.
MMI enables you to see what the camera sees on your [smartphone] screen, to adjust the settings as needed, and to see the results without getting up and even to upload the pictures online. From the application you can control all settings: white balance, focus, picture burst, timer and even tilt the camera lens, all without having to reach the camera.
Separating the lens and sensor components of a camera from its LCD screen and controls seems to be a pretty popular idea as of late (Nikon even showed off a similar concept camera recently).
A compact camera probably isn’t the first thing someone would grab when looking to make a photo with an extremely shallow depth of field, since its small aperture and small sensor limit it in this regard. That may soon change: a recently published patent application shows that Samsung is looking into achieving shallow depth of field with compact cameras by using a second lens to create a depth map for each photo.
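The basic idea — blur each pixel according to how far its depth-map value sits from the chosen focal plane — can be sketched like this (a crude per-pixel box blur standing in for a proper lens-blur kernel; this is not Samsung's actual method):

```python
import numpy as np

def synthetic_bokeh(image, depth, focus_depth, max_blur=3):
    """Blur each pixel in proportion to its distance from the focal plane.

    `depth` holds one depth value per pixel (e.g. from a second lens);
    pixels at `focus_depth` stay sharp, pixels far from it get averaged
    over a larger neighborhood.
    """
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            # blur radius grows with distance from the focal plane
            r = int(round(max_blur * abs(depth[y, x] - focus_depth)))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()  # box blur of radius r
    return out

# The in-focus plane (depth 0) stays sharp; distant pixels (depth 1) get averaged.
img = np.array([[0.0, 1.0], [1.0, 0.0]])
depth = np.array([[0.0, 0.0], [1.0, 1.0]])
result = synthetic_bokeh(img, depth, focus_depth=0.0, max_blur=1)
```

Because the blur is computed after capture, the "aperture" can be chosen (or changed) in software, which is the appeal of the depth-map approach for small-sensor cameras.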
You might soon be able to control Nikon DSLRs using only your emotions. A patent published recently shows that the company is looking into building biological detectors into its cameras, allowing the camera to automatically change settings and trigger the shutter based on things like heart rate and blood pressure. For example, at a sporting event, the sensors could be used to trigger the shutter when something significant happens and the photographer’s reflexes are too slow. The camera could also choose a faster shutter speed to reduce blurring if the user is nervous.
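The trigger logic might boil down to something as simple as this sketch (the threshold and the rule itself are invented here, not taken from Nikon's patent):

```python
def biometric_trigger(heart_rates, baseline, threshold=1.25):
    """Yield the indices of samples where the shutter would fire.

    Hypothetical rule: fire when heart rate spikes above `threshold` times
    the photographer's resting baseline.
    """
    for i, hr in enumerate(heart_rates):
        if hr > threshold * baseline:
            yield i

# Resting baseline of 60 bpm: samples 3 and 4 exceed 75 bpm and fire the shutter.
fired = list(biometric_trigger([62, 65, 70, 95, 110, 72], baseline=60))
print(fired)  # → [3, 4]
```

The same kind of rule could map stress level to shutter speed instead of to the trigger, matching the patent's blur-reduction idea.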
CNBC ran this short segment a couple days ago in which they invited CNET’s Dan Ackerman to explain the changing landscape in the digital camera industry. He thinks point-and-shoot cameras may soon become extinct due to the rise of camera-equipped phones, but that DSLRs are here to stay. A recent study found that phones have replaced digital cameras completely for 44% of consumers, and that number seems bound to rise as the cameras on phones continue to improve.
My guess is that in five years, we’ll see digital camera users divided into three camps: mobile phone, interchangeable lens compact, and DSLR. What’s your prediction?
Thought the grain-of-salt-sized camera announced in Germany earlier this year was small? Well, researchers at Cornell have created a camera just 1/100th of a millimeter thick and 1mm on each side that has no lens or moving parts. The Planar Fourier Capture Array (PFCA) is simply a flat piece of doped silicon that costs just a few cents to produce. After light information is gathered, some fancy mathematical magic (i.e. the Fourier transform) turns the information into a 20×20 pixel “photo”. The fuzzy photo of the Mona Lisa above was shot using this camera.
Obviously, the camera won’t be very useful for ordinary photography, but it could potentially be extremely useful in science, medicine, and gadgets.
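The reconstruction idea can be illustrated in an idealized form: if each angle-sensitive pixel samples one spatial-frequency component of the scene, then the raw data is (roughly) the scene's 2D Fourier transform, and recovering the image is just an inverse transform. This toy model glosses over the real sensor's noise and incomplete frequency coverage:

```python
import numpy as np

rng = np.random.default_rng(1)

# Idealized PFCA model: the "raw data" recorded by the array is the 2D
# Fourier transform of the 20x20 scene in front of the sensor.
scene = rng.uniform(size=(20, 20))
measurements = np.fft.fft2(scene)

# Reconstruction is then an inverse 2D Fourier transform.
reconstructed = np.fft.ifft2(measurements).real

print(np.allclose(reconstructed, scene))  # → True in this noise-free model
```

The real device only approximates this, which is part of why its Mona Lisa comes out fuzzy.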
Robots might not be able to convey emotions or tell stories through photographs, but one thing they’re theoretically better than humans at is calculating proportions in a scene, and that’s exactly what one robot at India’s IIT Hyderabad has been taught to do. Computer scientist Raghudeep Gadde programmed a humanoid robot with a head-mounted camera to perfectly obey the rule of thirds and the golden ratio. New Scientist writes,
The robot is also programmed to assess the quality of its photos by rating focus, lighting and colour. The researchers taught it what makes a great photo by analysing the top and bottom 10 per cent of 60,000 images from a website hosting a photography contest, as rated by humans.
Armed with this knowledge, the robot can take photos when told to, then determine their quality. If the image scores below a certain quality threshold, the robot automatically makes another attempt. It improves on the first shot by working out the photo’s deviation from the guidelines and making the appropriate correction to its camera’s orientation.
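The score-and-retake loop can be sketched with a toy rule-of-thirds score (the scoring function, threshold, and re-aiming rule here are invented for illustration, not the researchers' actual algorithm):

```python
def thirds_score(subject_xy, frame_wh):
    """Distance from the subject to the nearest rule-of-thirds "power point"
    (0 means the subject sits exactly on one)."""
    w, h = frame_wh
    x, y = subject_xy
    power_points = [(w * i / 3, h * j / 3) for i in (1, 2) for j in (1, 2)]
    return min(((x - px) ** 2 + (y - py) ** 2) ** 0.5 for px, py in power_points)

def retake_loop(subject_xy, frame_wh, tol=10.0, max_tries=8):
    """Shoot, score, and re-aim: nudge the framing toward the nearest power
    point until the composition score passes the threshold. Returns the
    number of the attempt that succeeded, or None."""
    w, h = frame_wh
    x, y = subject_xy
    power_points = [(w * i / 3, h * j / 3) for i in (1, 2) for j in (1, 2)]
    for attempt in range(1, max_tries + 1):
        if thirds_score((x, y), frame_wh) <= tol:
            return attempt  # composition is good enough: keep this shot
        # re-aim halfway toward the nearest power point (toy camera motion)
        px, py = min(power_points, key=lambda p: (x - p[0]) ** 2 + (y - p[1]) ** 2)
        x, y = (x + px) / 2, (y + py) / 2
    return None

print(retake_loop((900, 500), (1920, 1080)))  # → 6
```

Each re-aim halves the subject's distance to the power point, so the loop converges quickly — the robot's real correction uses its measured deviation from the composition guidelines in much the same spirit.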
It’s definitely a step up from Lewis, a wedding photography robot built in the early 2000s that was taught to recognize faces.