What if in the future, the human eye itself could be turned into a camera by simply reading and recording the data that it sends to the brain? As crazy as it sounds, researchers have already accomplished this at a very basic level:
In 1999, researchers led by Yang Dan at the University of California, Berkeley decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus (which integrates all of the brain’s sensory input) of sharp-eyed cats, targeting 177 brain cells in the lateral geniculate nucleus, the region of the thalamus that decodes signals from the retina. The cats were shown eight short movies while their neuron firings were recorded. Using mathematical filters, the researchers decoded the signals to generate movies of what the cats saw, and were able to reconstruct recognizable scenes and moving objects. [#]
Basically, the scientists were able to tap into the brain of a cat and display what the cat was seeing on a computer screen. Something similar was accomplished with humans a few years ago, and scientists believe that in the future we may even be able to “photograph” human dreams!
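To get a flavor of how this kind of decoding works, here’s a toy sketch in Python. Everything here is synthetic and the decoder is a plain least-squares fit; the actual study used far more carefully constructed reverse-correlation filters, so treat this as an illustration of the idea, not the method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 177 simulated neurons respond (roughly linearly,
# plus noise) to 400-pixel stimulus frames. The neuron count echoes the
# Berkeley study; everything else is made up.
n_neurons, n_pixels, n_frames = 177, 400, 2000
weights = rng.normal(size=(n_neurons, n_pixels))   # unknown "receptive fields"
stimuli = rng.normal(size=(n_frames, n_pixels))    # training frames
responses = stimuli @ weights.T + 0.1 * rng.normal(size=(n_frames, n_neurons))

# Fit a linear decoder: estimate each pixel from the population response.
decoder, *_ = np.linalg.lstsq(responses, stimuli, rcond=None)

# Reconstruct a held-out frame from its neural response alone.
test_frame = rng.normal(size=n_pixels)
test_response = test_frame @ weights.T
reconstruction = test_response @ decoder

corr = np.corrcoef(test_frame, reconstruction)[0, 1]
print(f"pixelwise correlation with true frame: {corr:.2f}")
```

The reconstruction is imperfect because 177 neurons can’t carry all 400 pixels’ worth of information, which is roughly why the cat movies came out fuzzy but recognizable.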
The WVIL concept camera that made the rounds on the Internet featured a lens that could operate separately from the camera body, but Or Leviteh’s MMI camera is even simpler: it’s a small screen-less camera that uses a smartphone as its “camera body”.
MMI enables you to see what the camera sees on your [smartphone] screen, adjust the settings as needed, see the results, and even upload the pictures online, all without getting up. From the application you can control every setting: white balance, focus, burst shooting, the timer, and even the tilt of the camera lens, all without having to reach for the camera.
Separating the lens and sensor components of a camera from its LCD screen and controls seems to be a pretty popular idea as of late (Nikon even showed off a similar concept camera recently).
A compact camera probably isn’t the first thing someone would grab when looking to make a photo with an extremely shallow depth of field, since the small aperture and small sensor limit it in this regard. That might soon change: a recently published patent application by Samsung shows that the company is looking into achieving shallow depth of field with compact cameras by using a second lens to create a depth map for each photo. Read more…
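Once you have a depth map, simulating shallow depth of field in software is conceptually simple: blur each pixel in proportion to its distance from the chosen focal plane. Here’s a crude sketch of that idea (this is not Samsung’s actual algorithm; the function, blur model, and all parameters are invented for illustration):

```python
import numpy as np

def synthetic_shallow_dof(image, depth, focus_depth, max_radius=5):
    """Depth-map-driven defocus sketch: pixels far from the focal
    plane get averaged over a bigger neighborhood (a box blur)."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            # Blur radius grows with distance from the chosen focus depth.
            r = int(round(max_radius * abs(depth[y, x] - focus_depth)))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out

# Toy scene: a bright "subject" at depth 0 on a background at depth 1.
img = np.zeros((40, 40)); img[15:25, 15:25] = 1.0
depth = np.ones((40, 40)); depth[15:25, 15:25] = 0.0

focused_on_subject = synthetic_shallow_dof(img, depth, focus_depth=0.0)
```

The subject stays sharp (zero blur radius at the focal plane) while the background smears, which is the bokeh-like look the patent is chasing without a big sensor and fast glass.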
You might soon be able to control Nikon DSLRs using only your emotions. A patent published recently shows that the company is looking into building biological detectors into its cameras, allowing the camera to automatically change settings and trigger the shutter based on things like heart rate and blood pressure. For example, at a sporting event, the sensors could be used to trigger the shutter when something significant happens and the photographer’s reflexes are too slow. The camera could also choose a faster shutter speed to reduce blurring if the user is nervous.
CNBC ran this short segment a couple days ago in which they invited CNET’s Dan Ackerman to explain the changing landscape in the digital camera industry. He thinks point-and-shoot cameras may soon become extinct due to the rise of camera-equipped phones, but that DSLRs are here to stay. A recent study found that phones have completely replaced digital cameras for 44% of consumers, and that number seems bound to rise as the cameras on phones continue to improve.
My guess is that in five years, we’ll see digital camera users divided into three camps: mobile phone, interchangeable lens compact, and DSLR. What’s your prediction?
Thought the grain-of-salt-sized camera announced in Germany earlier this year was small? Well, researchers at Cornell have created a camera just 1/100th of a millimeter thick and 1mm on each side that has no lens or moving parts. The Planar Fourier Capture Array (PFCA) is simply a flat piece of doped silicon that costs just a few cents to make. After light information is gathered, some fancy mathematical magic (i.e. the Fourier transform) turns the information into a 20×20 pixel “photo”. The fuzzy photo of the Mona Lisa above was shot using this camera.
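The reconstruction step is conceptually just an inverse Fourier transform: each sensor site responds to one spatial frequency of the scene, and inverting the transform recovers a low-res image. Here’s a toy illustration that ignores the device physics entirely and simply treats the sensor readings as ideal Fourier coefficients:

```python
import numpy as np

# Toy version of the PFCA idea: pretend each sensor site measures one
# spatial-frequency component of a 20x20 scene, then invert the
# Fourier transform to recover the image.
rng = np.random.default_rng(1)
scene = rng.random((20, 20))            # 20x20 "scene", matching the PFCA's output size

measurements = np.fft.fft2(scene)       # what the pixels (idealized) encode
reconstruction = np.fft.ifft2(measurements).real

print(np.allclose(reconstruction, scene))
```

In the real device the measurements are noisy and incomplete, which is why the actual Mona Lisa output is fuzzy rather than a perfect recovery like this idealized round trip.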
Obviously, the camera won’t be very useful for ordinary photography, but it could potentially be extremely useful in science, medicine, and gadgets.
Robots might not be able to convey emotions or tell stories through photographs, but one thing they’re theoretically better than humans at is calculating proportions in a scene, and that’s exactly what one robot at India’s IIT Hyderabad has been taught to do. Computer scientist Raghudeep Gadde programmed a humanoid robot with a head-mounted camera to perfectly obey the rule of thirds and the golden ratio. New Scientist writes,
The robot is also programmed to assess the quality of its photos by rating focus, lighting and colour. The researchers taught it what makes a great photo by analysing the top and bottom 10 per cent of 60,000 images from a website hosting a photography contest, as rated by humans.
Armed with this knowledge, the robot can take photos when told to, then determine their quality. If the image scores below a certain quality threshold, the robot automatically makes another attempt. It improves on the first shot by working out the photo’s deviation from the guidelines and making the appropriate correction to its camera’s orientation.
It’s definitely a step up from Lewis, a wedding photography robot built in the early 2000s that was taught to recognize faces.
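The shoot-score-retry loop described above can be sketched in a few lines. To be clear, everything here is an invented stand-in: the quality threshold, the scoring formula, and the fake “camera” are all hypothetical, whereas the real robot’s quality model was learned from 60,000 human-rated contest images:

```python
QUALITY_THRESHOLD = 0.85  # assumed value; the article doesn't give one

def score_photo(photo):
    """Stand-in for the robot's learned quality model: combine
    (hypothetical) focus, lighting, and colour scores into one number."""
    return (photo["focus"] + photo["lighting"] + photo["colour"]) / 3

def take_photo(orientation):
    """Fake camera: shots score better as the orientation error shrinks.
    We pretend 30 degrees is the ideal framing."""
    error = abs(orientation - 30)
    sharpness = max(0.0, 1.0 - error / 100)
    return {"focus": sharpness, "lighting": 0.8, "colour": 0.9}

def shoot_until_good(orientation, step=10, max_attempts=5):
    """Shoot, score, correct the camera orientation, and try again
    until the score clears the quality bar."""
    for attempt in range(1, max_attempts + 1):
        photo = take_photo(orientation)
        if score_photo(photo) >= QUALITY_THRESHOLD:
            return attempt, orientation
        # Crude correction: nudge the orientation toward the ideal.
        orientation -= step if orientation > 30 else -step
    return max_attempts, orientation

attempts, final_orientation = shoot_until_good(orientation=70)
print(attempts, final_orientation)
```

Starting well off-target, the loop converges on an acceptable shot after a few corrections, which mirrors the “works out the deviation and makes the appropriate correction” behavior New Scientist describes.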
Late last year we showed you an interesting demonstration of HDR video filmed using two Canon 5D Mark IIs. The cameras captured the exact same scene at different exposure values using a beam-splitter. Now, a new camera called AMP has been developed that captures real-time HDR video using a single lens. The trick is that there are two beam-splitters inside the camera that direct the incoming light onto three different sensors, giving the system a dynamic range of 17 stops. Check out some sample clips in the video above — they might be ugly, but the technology behind them is quite interesting. Read more…
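The merging step for a multi-sensor rig like this can be sketched roughly as follows. The exposure ratios and the clipping-aware weighting here are assumptions for illustration, not AMP’s actual pipeline:

```python
import numpy as np

# Relative exposure of each of the three sensors (assumed values).
EXPOSURES = [1.0, 16.0, 256.0]

def merge_hdr(frames, exposures, clip=0.99):
    """Average the radiance estimates from each sensor, ignoring
    pixels that are saturated (clipped) in that sensor's frame."""
    frames = np.asarray(frames, dtype=float)
    radiance = np.zeros(frames.shape[1:])
    weight = np.zeros(frames.shape[1:])
    for frame, exp in zip(frames, exposures):
        valid = frame < clip                  # skip blown-out pixels
        radiance[valid] += frame[valid] / exp  # back out scene radiance
        weight[valid] += 1
    return radiance / np.maximum(weight, 1)

# Simulate three pixels spanning a huge brightness range.
true_radiance = np.array([0.001, 0.01, 0.5])
frames = [np.clip(true_radiance * e, 0, 1) for e in EXPOSURES]
hdr = merge_hdr(frames, EXPOSURES)
```

Each sensor handles the part of the range it can see: the long exposure resolves the shadows, the short one holds the highlights, and the merge recovers the full scene radiance that no single sensor could capture.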
Researchers in Australia are working on developing a thin piezoelectric film that can be used to convert mechanical energy into electricity. An uber-useful application would be to use the film in existing gadgets, allowing button presses and finger swipes to be used to recharge the device’s battery. One of the lead scientists, Dr. Madhu Bhaskaran, states,
The power of piezoelectrics could be integrated into running shoes to charge mobile phones, enable laptops to be powered through typing or even used to convert blood pressure into a power source for pacemakers – essentially creating an everlasting battery.
Wouldn’t it be crazy if cameras of the future could be powered solely by pressing the shutter button when taking photos (and perhaps other buttons while chimping)?
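A quick back-of-envelope calculation suggests why a shutter-button-powered camera is probably wishful thinking. Every number below is a rough assumption chosen for illustration, not a measured figure:

```python
# Assumed: a typical compact-camera battery (~1000 mAh at 3.7 V)
# powering ~300 shots per charge, versus an optimistic piezo harvest
# of ~1 mJ per button press.
battery_energy_j = 1.000 * 3.7 * 3600       # Ah * V * s/h = 13,320 J
energy_per_shot_j = battery_energy_j / 300  # ~44 J per photo
harvest_per_press_j = 1e-3

presses_per_photo = energy_per_shot_j / harvest_per_press_j
print(round(presses_per_photo))  # tens of thousands of presses per shot
```

Under these assumptions you’d need tens of thousands of button presses to fund one photo, so piezoelectric harvesting looks more plausible as a top-up for low-power devices like pacemakers than as a camera’s sole power source.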
A company called Lytro has just launched with $50 million in funding and, unlike Color, the technology is pretty mind-blowing. It’s designing a camera that may be the next giant leap in the evolution of photography — a consumer camera that shoots photos that can be refocused at any time. Instead of capturing a single plane of light like traditional cameras do, Lytro’s light-field camera will use a special sensor to capture the color, intensity, and vector direction of the rays of light (data that’s lost with traditional cameras).
[...] the camera captures all the information it possibly can about the field of light in front of it. You then get a digital photo that is adjustable in an almost infinite number of ways. You can focus anywhere in the picture, change the light levels — and presuming you’re using a device with a 3-D ready screen — even create a picture you can tilt and shift in three dimensions. [#]
Try clicking the sample photograph above. You’ll find that you can choose exactly where the focus point in the photo is as you’re viewing it! The company plans to unveil their camera sometime this year, with the goal of having the camera’s price be somewhere between $1 and $10,000… Check out more sample photos here.
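The basic idea behind after-the-fact refocusing is often called shift-and-add: each viewpoint recorded in the light field is shifted in proportion to its offset from the center, then all views are averaged; the amount of shift selects the focal plane. Here’s a toy model of that (not Lytro’s actual processing, which is far more sophisticated):

```python
import numpy as np

def refocus(light_field, shift):
    """Shift-and-add refocus sketch: a light field L[u, v, y, x] records
    the scene from a grid of viewpoints; `shift` selects the focal plane."""
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(shift * (u - U // 2)))
            dx = int(round(shift * (v - V // 2)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Simulate a single bright point whose apparent position shifts by one
# pixel per viewpoint step (i.e. it lies off the native focal plane).
U = V = 5
lf = np.zeros((U, V, 21, 21))
for u in range(U):
    for v in range(V):
        lf[u, v, 10 + (u - 2), 10 + (v - 2)] = 1.0

blurry = refocus(lf, shift=0.0)   # wrong focal plane: the point is smeared
sharp = refocus(lf, shift=-1.0)   # matching shift brings it into focus
```

With the matching shift all 25 views line up and the point snaps into focus; with no shift the same data averages into a blur. Storing the per-view data instead of a single summed image is exactly what lets the focus decision happen after capture.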