Facial recognition technology has become ubiquitous in recent years, appearing in everything from the latest compact camera to websites like Facebook. The same may soon be said about location recognition. Through a new project called “Finder”, the US government’s intelligence research agency IARPA is looking into how to quickly and automatically identify where a photograph was taken without any geotag data. The goal is to use only the identifying features found in the background of scenes to determine the location — kind of like facial recognition, except for landscapes.
Artificial lens flare is an important part of making certain computer-generated scenes look realistic, but up to this point, creating convincing lens flare has been a task that requires a good deal of processing power. Now, researchers have come up with a way to simulate lens flare quickly and accurately, taking into account a large number of the physical factors that cause the phenomenon:
The underlying model covers many components that are important for realism, such as imperfections, chromatic and geometric lens aberrations, and antireflective lens coatings.
The video above discusses how the technology works, and also touches on the science behind lens flares. The method is patent-pending, and will be presented later this year at SIGGRAPH 2011.
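For the curious, one small piece of the physics can be sketched without any fancy ray tracing: the starburst you see around a bright light is essentially the diffraction pattern of the aperture, which (in the Fraunhofer approximation) is the squared magnitude of the Fourier transform of the aperture shape. Here’s a toy illustration of just that one component; to be clear, this is not the researchers’ method, and the blade count and sizes are made up:

```python
import numpy as np

def starburst_from_aperture(n_blades=6, size=512):
    """Toy approximation of the diffraction 'starburst' around a bright light:
    the far-field pattern is ~ |FFT(aperture)|^2 (Fraunhofer diffraction).
    Not the SIGGRAPH method -- just one physical ingredient of lens flare."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    # Radius of a regular n-bladed (polygonal) iris as a function of angle.
    blade_radius = 0.3 * np.cos(np.pi / n_blades) / np.cos(
        (theta % (2 * np.pi / n_blades)) - np.pi / n_blades)
    aperture = (r <= blade_radius).astype(float)
    # Diffraction pattern ~ squared magnitude of the aperture's Fourier transform.
    pattern = np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2
    return pattern / pattern.max()

flare = starburst_from_aperture(n_blades=6)  # 6-bladed iris -> 6-point star
```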
What if, in the future, the human eye itself could be turned into a camera simply by reading and recording the data it sends to the brain? As crazy as it sounds, researchers have already accomplished this at a very basic level:
In 1999, researchers led by Yang Dan at University of California, Berkeley decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus (which integrates all of the brain’s sensory input) of sharp-eyed cats. Researchers targeted 177 brain cells in the thalamus lateral geniculate nucleus area, which decodes signals from the retina. The cats were shown eight short movies, and their neuron firings were recorded. Using mathematical filters, the researchers decoded the signals to generate movies of what the cats saw and were able to reconstruct recognizable scenes and moving objects. [#]
Basically, the scientists were able to tap into the brain of a cat and display what the cat was seeing on a computer screen. Something similar was accomplished with humans a few years ago, and scientists believe that in the future we may even be able to “photograph” human dreams!
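One standard way to build the “mathematical filters” mentioned above is a linear decoder: fit a mapping from firing rates to pixel intensities on frames the animal is known to have seen, then apply it to new neural activity. A rough sketch of that idea, with made-up data standing in for the recordings:

```python
import numpy as np

# Stand-in data: firing rates of 177 recorded cells over T movie frames, plus
# the frames the animal actually watched (flattened to 32x32 = 1024 pixels).
T, n_cells, n_pixels = 5000, 177, 32 * 32
rates = np.random.rand(T, n_cells)      # placeholder for recorded spike counts
frames = np.random.rand(T, n_pixels)    # placeholder for the presented movie

# Fit a linear decoder W by least squares so that frames ~= rates @ W.
W, *_ = np.linalg.lstsq(rates, frames, rcond=None)   # shape (n_cells, n_pixels)

# Reconstruct what was "seen" from new neural activity.
new_rates = np.random.rand(10, n_cells)
reconstruction = (new_rates @ W).reshape(10, 32, 32)  # ten decoded frames
```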
A study conducted by market research firm J.D. Power and Associates has found that “Nikon Pro Series” DSLRs rank highest in customer satisfaction. The company surveyed 4,500 verified online DSLR buyers to find out their satisfaction across five factors: image quality, durability, features, ease of use, and responsiveness.
The Nikon Pro Series ranks highest in online buyer satisfaction with a score of 914. The Nikon Pro Series performs particularly well in shutter speed/lag time, durability and reliability, and ease of operation. The Canon Mark-Series follows in the rankings with a score of 909, and performs particularly well in performance and picture quality. The Canon D-Series and Nikon D-Series rank third in a tie, each with a score of 889.
Overall, customers were most satisfied with image quality but least satisfied with durability and responsiveness. Read more…
A compact camera probably isn’t the first thing someone would grab when looking to make a photo with an extremely shallow depth of field, since the small aperture and small sensor limit it in this regard. That might soon change: a recently published patent application from Samsung shows that the company is looking into achieving shallow depth of field with compact cameras by using a second lens to create a depth map for each photo. Read more…
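The patent is about capturing the depth map with the second lens; how the blur would then be rendered isn’t spelled out, but the general idea of depth-dependent blurring is easy to sketch. Here’s a rough illustration, where the blur amounts and layering are arbitrary and not anything from Samsung’s filing:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_shallow_dof(image, depth, focus_depth, max_sigma=8.0):
    """Blur each pixel in proportion to how far its depth is from the focal
    plane -- a crude, layered simulation of shallow depth of field.
    `image` is an HxW grayscale array; `depth` is HxW in the same units
    as `focus_depth` (e.g. meters from the camera)."""
    distance = np.abs(depth - focus_depth)
    sigma_map = max_sigma * distance / (distance.max() + 1e-9)
    out = np.copy(image)
    # Quantize the blur into a few levels and composite, strongest last.
    for sigma in np.linspace(1.0, max_sigma, 5):
        blurred = gaussian_filter(image, sigma=sigma)
        mask = sigma_map >= sigma
        out[mask] = blurred[mask]
    return out
```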
You might soon be able to control Nikon DSLRs using only your emotions. A patent published recently shows that the company is looking into building biological detectors into its cameras, allowing the camera to automatically change settings and trigger the shutter based on things like heart rate and blood pressure. For example, at a sporting event, the sensors could be used to trigger the shutter when something significant happens and the photographer’s reflexes are too slow. The camera could also choose a faster shutter speed to reduce blurring if the user is nervous.
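Nikon’s filing describes the behavior rather than the algorithm, but the control logic it implies is simple enough to sketch. Everything here (the thresholds, the heart-rate feed, the shutter-speed rule) is hypothetical:

```python
def biometric_trigger(heart_rate_bpm, resting_bpm, base_shutter=1 / 250):
    """Hypothetical sketch of the behavior the patent describes: fire the
    shutter when the photographer's heart rate spikes, and pick a faster
    shutter speed the more agitated they are (to counter shaky hands)."""
    excitement = max(0.0, (heart_rate_bpm - resting_bpm) / resting_bpm)
    fire = excitement > 0.25                 # e.g. ~25% above resting rate
    # Halve the exposure time for every 25% of "excitement", capped at 1/2000 s.
    shutter = max(1 / 2000, base_shutter / (2 ** (excitement / 0.25)))
    return fire, shutter

# Example: resting at 65 bpm, heart rate jumps to 90 bpm during a big play.
should_fire, shutter_speed = biometric_trigger(90, 65)
```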
Thought the grain-of-salt-sized camera announced in Germany earlier this year was small? Well, researchers at Cornell have created a camera just 1/100th of a millimeter thick and 1mm on each side that has no lens or moving parts. The Planar Fourier Capture Array (PFCA) is simply a flat piece of doped silicon that costs just a few cents to produce. After light information is gathered, some fancy mathematical magic (i.e. the Fourier transform) turns the information into a 20×20 pixel “photo”. The fuzzy photo of the Mona Lisa above was shot using this camera.
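In rough terms, each of the PFCA’s angle-sensitive pixels responds to one spatial-frequency component of the scene, so the chip effectively hands you the picture’s Fourier transform, and reconstruction is just the inverse transform. A heavily idealized illustration of that last step (the real device’s sampling and calibration are far messier):

```python
import numpy as np

# Pretend the chip reports the full 20x20 grid of complex Fourier components
# of the scene. (In reality each angle-sensitive pixel only approximates one
# component, and careful calibration is needed.)
scene = np.random.rand(20, 20)               # stand-in for the true scene
measured_components = np.fft.fft2(scene)     # what the sensor "sees"

# Reconstruction is then just an inverse 2-D Fourier transform.
photo = np.real(np.fft.ifft2(measured_components))   # the 20x20 "photo"
print(np.allclose(photo, scene))             # True in this idealized case
```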
Obviously, the camera won’t be very useful for ordinary photography, but it could potentially be extremely useful in science, medicine, and gadgets.
Robots might not be able to convey emotions or tell stories through photographs, but one thing they’re theoretically better than humans at is calculating proportions in a scene, and that’s exactly what one robot at India’s IIT Hyderabad has been taught to do. Computer scientist Raghudeep Gadde programmed a humanoid robot with a head-mounted camera to perfectly obey the rule of thirds and the golden ratio. New Scientist writes,
The robot is also programmed to assess the quality of its photos by rating focus, lighting and colour. The researchers taught it what makes a great photo by analysing the top and bottom 10 per cent of 60,000 images from a website hosting a photography contest, as rated by humans.
Armed with this knowledge, the robot can take photos when told to, then determine their quality. If the image scores below a certain quality threshold, the robot automatically makes another attempt. It improves on the first shot by working out the photo’s deviation from the guidelines and making the appropriate correction to its camera’s orientation.
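New Scientist doesn’t publish the scoring details, but the rule-of-thirds half of the pipeline is easy to picture: rate a framing by how close the detected subject sits to one of the four one-third intersection points, and reshoot whenever the score falls below a threshold. A toy sketch, where the subject detector, camera call, and threshold are all placeholders:

```python
import itertools
import numpy as np

def thirds_score(subject_xy, frame_w, frame_h):
    """1.0 when the subject sits exactly on a rule-of-thirds intersection,
    falling off with distance (normalized by the frame diagonal)."""
    intersections = [(frame_w * i / 3, frame_h * j / 3)
                     for i, j in itertools.product((1, 2), (1, 2))]
    nearest = min(np.hypot(subject_xy[0] - x, subject_xy[1] - y)
                  for x, y in intersections)
    return 1.0 - nearest / np.hypot(frame_w, frame_h)

def shoot_until_good(capture, find_subject, threshold=0.9, max_tries=5):
    """Placeholder retake loop: keep shooting until the composition score
    clears the threshold or we run out of attempts."""
    frame = capture()                            # hypothetical camera call
    for _ in range(max_tries):
        x, y = find_subject(frame)               # hypothetical subject detector
        if thirds_score((x, y), frame.shape[1], frame.shape[0]) >= threshold:
            break
        frame = capture()                        # adjust framing and reshoot
    return frame
```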
It’s definitely a step up from Lewis, a wedding photography robot built in the early 2000s that was taught to recognize faces.
Late last year we showed you an interesting demonstration of HDR video filmed using two Canon 5D Mark IIs. The cameras captured the exact same scene at different exposure values using a beam-splitter. Now, a new camera called AMP has been developed that captures real-time HDR video using a single lens. The trick is that two beam-splitters inside the camera direct the incoming light onto three different sensors, giving the system a dynamic range of 17 stops. Check out some sample clips in the video above — they might be pretty ugly, but the technology here is quite interesting. Read more…
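Since AMP’s beam-splitters send a different fraction of the light to each of the three sensors, every frame arrives as three simultaneous exposures of the same scene, and merging them is conceptually the same as any bracketed HDR merge. A bare-bones sketch; the weighting and exposure ratios are invented, and a real pipeline needs proper radiometric calibration:

```python
import numpy as np

def merge_hdr(frames, exposure_ratios):
    """Combine simultaneously captured frames of one scene into a single
    high-dynamic-range frame. `frames` are linear-light arrays in [0, 1];
    `exposure_ratios` say how much light each sensor received (e.g. 1, 1/8, 1/64)."""
    radiance = np.zeros_like(frames[0], dtype=np.float64)
    weight_sum = np.zeros_like(frames[0], dtype=np.float64)
    for frame, ratio in zip(frames, exposure_ratios):
        # Trust mid-tone pixels most; near-black and near-white pixels carry
        # little reliable information in that particular exposure.
        weight = 1.0 - 2.0 * np.abs(frame - 0.5)
        radiance += weight * (frame / ratio)     # undo the exposure difference
        weight_sum += weight
    return radiance / np.maximum(weight_sum, 1e-6)
```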
Researchers in Australia are developing a thin piezoelectric film that can convert mechanical energy into electricity. An uber-useful application would be building the film into existing gadgets, allowing button presses and finger swipes to recharge the device’s battery. One of the lead scientists, Dr. Madhu Bhaskaran, states,
The power of piezoelectrics could be integrated into running shoes to charge mobile phones, enable laptops to be powered through typing or even used to convert blood pressure into a power source for pacemakers – essentially creating an everlasting battery.
Wouldn’t it be crazy if cameras of the future could be powered solely by pressing the shutter button when taking photos (and perhaps other buttons while chimping)?