Last year imaging company Scalado showed off an app called Rewind that lets you create perfect group shots by picking out the best faces from a burst of photos and combining them into a single image. Now the company is back with another futuristic photo app: it’s called Remove, and it lets you create images of scenes without the clutter of things passing through (e.g. people, cars, bikes). It works like this: snap a photograph, and the app outlines everything that’s moving in the scene with a yellow line. Tap a person or object, and it magically disappears from the scene!
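Scalado hasn’t said how Remove works under the hood, but a common way to erase transient objects is a per-pixel median across a burst of aligned frames: anything that covers a given pixel in fewer than half the frames simply vanishes. A minimal sketch of that idea (toy data, NumPy only — not Scalado’s actual algorithm):

```python
import numpy as np

def remove_transients(frames):
    """Estimate a clutter-free background by taking the per-pixel
    median across a burst of aligned frames. Any object that covers
    a pixel in fewer than half the frames disappears."""
    stack = np.stack(frames).astype(np.float64)  # shape (n, h, w)
    return np.median(stack, axis=0)

# Toy burst: a static background of 10s, with a bright "passerby"
# (value 255) occupying a different pixel in each frame.
frames = [np.full((4, 4), 10.0) for _ in range(5)]
for i, f in enumerate(frames):
    f[0, i % 4] = 255.0  # the moving object

clean = remove_transients(frames)  # the passerby is gone
```

Real bursts would need frame alignment (hand shake) first; the median step itself is this simple.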
In the future, after you print photos onto paper using your camera, you’ll be able to scan them and share them on Flickr using your mouse. At CES earlier this year, LG showed off an amazing new mouse that lets you quickly scan images and documents by simply waving the mouse over them. Now it’s available — if you live in the UK, you can buy one from Dabs for £90 (~$150).
Facial recognition technology has become ubiquitous in recent years, found in everything from the latest compact camera to websites like Facebook. The same may soon be said of location recognition. Through a new project called “Finder”, IARPA, the US intelligence community’s advanced research agency, is looking into how to quickly and automatically identify where a photograph was taken without any geotag data. The goal is to use only the identifying features found in the background of scenes to determine the location — kinda like facial recognition, except for landscapes.
What if in the future, the human eye itself could be turned into a camera by simply reading and recording the data that it sends to the brain? As crazy as it sounds, researchers have already accomplished this at a very basic level:
In 1999, researchers led by Yang Dan at University of California, Berkeley decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus (which integrates all of the brain’s sensory input) of sharp-eyed cats. Researchers targeted 177 brain cells in the thalamus lateral geniculate nucleus area, which decodes signals from the retina. The cats were shown eight short movies, and their neuron firings were recorded. Using mathematical filters, the researchers decoded the signals to generate movies of what the cats saw and were able to reconstruct recognizable scenes and moving objects. [#]
Basically, the scientists were able to tap into the brain of a cat and display what the cat was seeing on a computer screen. Something similar was accomplished with humans a few years ago, and scientists believe that in the future we may even be able to “photograph” human dreams!
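The “mathematical filters” in studies like this are, roughly speaking, linear decoders: record how neurons respond to known images, fit a linear map from responses back to pixels, then apply it to new responses. A toy sketch of the idea, with everything (receptive fields, stimuli, noise levels) simulated rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 177 "neurons" (the count from the cat study), each with a
# random linear receptive field over an 8x8 stimulus (64 pixels).
n_neurons, n_pixels = 177, 64
fields = rng.normal(size=(n_neurons, n_pixels))

def respond(stimuli):
    """Simulated firing rates: receptive field dotted with the
    stimulus, plus a little noise."""
    return stimuli @ fields.T + 0.01 * rng.normal(size=(len(stimuli), n_neurons))

# "Training movies": record responses to known stimuli, then fit a
# linear decoder (reverse filter) by least squares.
train_stims = rng.normal(size=(500, n_pixels))
train_resp = respond(train_stims)
decoder, *_ = np.linalg.lstsq(train_resp, train_stims, rcond=None)

# Now reconstruct an unseen stimulus from neural responses alone.
test_stim = rng.normal(size=(1, n_pixels))
reconstruction = respond(test_stim) @ decoder
error = np.linalg.norm(reconstruction - test_stim) / np.linalg.norm(test_stim)
```

With enough neurons relative to pixels and modest noise, the reconstruction is close; real neural data is far noisier, which is why the reconstructed cat movies were recognizable but blurry.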
You might soon be able to control Nikon DSLRs using only your emotions. A patent published recently shows that the company is looking into building biological detectors into its cameras, allowing the camera to automatically change settings and trigger the shutter based on things like heart rate and blood pressure. For example, at a sporting event, the sensors could be used to trigger the shutter when something significant happens and the photographer’s reflexes are too slow. The camera could also choose a faster shutter speed to reduce blurring if the user is nervous.
A company called Lytro has just launched with $50 million in funding and, unlike Color, the technology is pretty mind-blowing. It’s designing a camera that may be the next giant leap in the evolution of photography — a consumer camera that shoots photos that can be refocused at any time. Instead of capturing a single plane of light like traditional cameras do, Lytro’s light-field camera will use a special sensor to capture the color, intensity, and vector direction of the rays of light (data that’s lost with traditional cameras).
[...] the camera captures all the information it possibly can about the field of light in front of it. You then get a digital photo that is adjustable in an almost infinite number of ways. You can focus anywhere in the picture, change the light levels — and presuming you’re using a device with a 3-D ready screen — even create a picture you can tilt and shift in three dimensions. [#]
Try clicking the sample photograph above. You’ll find that you can choose exactly where the focus point in the photo is as you’re viewing it! The company plans to unveil their camera sometime this year, with the goal of having the camera’s price be somewhere between $1 and $10,000… Check out more sample photos here
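Lytro hasn’t detailed its processing pipeline, but the textbook way to refocus a captured light field is “shift and add”: treat the capture as a grid of sub-aperture views and average them after shifting each view in proportion to its offset from the aperture center — the shift factor selects the focal plane. A toy sketch under those assumptions (synthetic 3×3 light field of a single point, not Lytro’s code):

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocus of a 4D light field L[u, v, y, x] by
    shift-and-add: shift each sub-aperture view (u, v) by alpha times
    its offset from the aperture center, then average. alpha picks
    the focal plane; alpha = 0 leaves the views unshifted."""
    n_u, n_v, h, w = light_field.shape
    cu, cv = (n_u - 1) / 2, (n_v - 1) / 2
    out = np.zeros((h, w))
    for u in range(n_u):
        for v in range(n_v):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (n_u * n_v)

# Toy light field: 3x3 views of a single point with unit disparity --
# in view (u, v) the point appears at (4 + (u-1), 4 + (v-1)).
lf = np.zeros((3, 3, 9, 9))
for u in range(3):
    for v in range(3):
        lf[u, v, 4 + (u - 1), 4 + (v - 1)] = 1.0

sharp = refocus(lf, alpha=-1.0)   # focal plane on the point: sharp peak
blurry = refocus(lf, alpha=0.0)   # wrong plane: energy spread over 3x3
```

Choosing alpha per viewer click is what lets the focus point be picked after the shot is taken.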
If you think the 5-megapixel sensor found on the iPhone 4 is good, wait till you see the camera found on the next iPhone — it’s reportedly going to be an 8-megapixel sensor made by Sony. The Street wrote back in 2010 that the next version of the iPhone, arriving in 2011, would pack an 8-megapixel Sony sensor rather than the 5-megapixel OmniVision one found in the current phone, and Sony CEO Howard Stringer seems to have confirmed that today in an interview with the Wall Street Journal.
The blogosphere is abuzz today over a rumor that Canon and Apple may be planning to collaborate on an upcoming project. Craig over at Canon Rumors started it yesterday when he wrote,
I’ve received a few pieces of information about an upcoming collaboration between Apple and Canon. What that collaboration is hasn’t been spelled out to me. It could be with the upcoming Final Cut Pro 8, or maybe something more.
The story was soon picked up by blogs and magazines, with everyone guessing what the “secret project” might be (if there even is one). Hopefully it has to do with Aperture or something photography-related, though the next version of Final Cut Pro is a likely candidate as well.
If you thought Google Earth was cool, check out the work being done by the Swedish company C3 Technologies. Using only photos shot from planes, it can automatically create high-resolution, explorable 3D models of entire cities. The above video shows a beautiful fly-by of New York City.
All of the C3 products are based on high-resolution photography captured with carefully calibrated cameras. For every picture, the positions and angles of the cameras are calculated with extremely high precision, using an advanced navigation system. This is what enables C3 to give each pixel its geographical position with very high accuracy. [#]
They can also apply the technology to turn panoramic photographs captured at street-level into 3D models of the scene that the user can navigate through freely. Hopefully this kind of thing makes its way to products like Google Maps soon. It would also be awesome for creating maps in video games!
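Giving “each pixel its geographical position” from calibrated cameras comes down to classic two-view triangulation: once each camera’s position and angle are known as a projection matrix, a point matched in two photos pins down one 3D location. A minimal sketch of linear (DLT) triangulation with a made-up two-camera rig — an illustration of the principle, not C3’s pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: given 3x4 projection matrices
    P1, P2 and a matched pixel (x, y) in each image, solve for the
    3D point whose projections agree, via SVD of the stacked
    constraints x * (P row 3) - (P row 1) = 0, etc."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Pinhole projection of 3D point X into a camera P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy rig: two unit-focal-length cameras a meter apart on the x-axis,
# both looking down +z. A point at (1, 2, 4) is seen by both.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([1.0, 2.0, 4.0])

recovered = triangulate(P1, P2, project(P1, point), project(P2, point))
```

Done densely for every matched pixel across many overlapping aerial photos, this is what turns a stack of 2D images into an explorable 3D city.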