Google scientist Sam Hasinoff has come up with a technique called “light-efficient photography” that uses focus-stacking to reduce the amount of time exposures require. In traditional photography, increasing the depth of field in a scene requires reducing the size of the aperture, which reduces the amount of light hitting the sensor and increases the amount of time required to properly expose the photo. This can cause a problem in some situations, such as when a longer exposure would lead to motion blur in the scene.
Hasinoff’s technique allows a camera to capture a photo of equal exposure and equivalent depth of field in a much shorter amount of time. He proposes using a wide aperture to capture as much light as possible, and using software to compensate for the shallow depth of field by stacking multiple exposures. In the example shown above, the camera produces an equivalent photograph twice as fast by stacking just two photos taken with larger apertures.
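Hasinoff’s actual algorithm is more sophisticated than this, but a minimal sketch of naive focus stacking (using OpenCV; the filenames and parameters below are made up) shows the basic idea: merge two wide-aperture shots focused at different depths by keeping, at each pixel, whichever frame is locally sharper. The time savings come from the fact that, for equal brightness, exposure time grows with the square of the f-number, so two quick wide-aperture shots can beat one slow narrow-aperture one.

```python
# Naive focus stacking: merge two wide-aperture shots focused at
# different depths by picking the locally sharper frame per pixel.
# An illustration of the idea, not Hasinoff's published algorithm.
import cv2
import numpy as np

def sharpness_map(img, ksize=9):
    """Local sharpness: absolute Laplacian response, smoothed."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
    return cv2.GaussianBlur(lap, (ksize, ksize), 0)

def focus_stack(img_a, img_b):
    """Per pixel, keep whichever frame is sharper."""
    mask = sharpness_map(img_a) > sharpness_map(img_b)
    return np.where(mask[..., None], img_a, img_b)

# Hypothetical filenames: two wide-aperture shots, focused near and far.
near = cv2.imread("shot_near.jpg")
far = cv2.imread("shot_far.jpg")
cv2.imwrite("stacked.jpg", focus_stack(near, far))
```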
Japan’s Ministry of Defense has unveiled an amazing “Spherical Flying Machine”: a 42cm remote-controlled ball that can zip around in any direction at ~37mph. Built using off-the-shelf parts for about $1,400, the device has the Internet abuzz over its potential applications, which include military reconnaissance and search-and-rescue operations. What we’re most interested in, however, is its potential as an aerial camera for things like sports photography and combat photojournalism.
Here’s the current state of imagery: still cameras can shoot HD video, video cameras can capture high-quality stills, and data storage costs continue to fall. In the future, it might become commonplace for people to make photos by shooting uber-high-quality video and then selecting the best still. However, as any photographer knows, picking the best photograph from a series captured in burst mode is already a challenge, so culling a single still from 30fps footage would be far more daunting.
To make that future easier for us humans, researchers at Adobe and the University of Washington are working on training computers to do the grunt work for us. One current research project trains a computer to automatically select candid portraits from video footage of a person. The video above is a demo of the system in action.
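Their model is far more sophisticated than anything you could dash off, but even a toy scorer hints at how automated still selection can work. The sketch below ranks frames by sharpness and face presence; the Haar cascade file ships with OpenCV, while the video filename is hypothetical.

```python
# Toy frame selector: score video frames by sharpness and face
# presence, then keep the best one. A crude stand-in for the far more
# sophisticated candid-portrait model described above.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def frame_score(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    return sharpness * (1 + len(faces))  # favor sharp frames containing faces

cap = cv2.VideoCapture("clip.mp4")  # hypothetical input video
best, best_score = None, -1.0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    score = frame_score(frame)
    if score > best_score:
        best, best_score = frame, score
cap.release()
if best is not None:
    cv2.imwrite("best_still.jpg", best)
```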
We always get a laugh when news organizations or governments try to pass off bad Photoshop jobs as real images, but with the way graphics technology is advancing, bad Photoshop jobs may soon become a thing of the past. Here’s a fascinating demo of technology that can quickly and realistically insert fake 3D objects into photographs, complete with convincing lighting and shading. Aside from a few annotations provided by the user (e.g. where the light sources are), the software doesn’t need to know anything about the image. Mind-blowing stuff…
Demos at graphics conferences are often interesting to watch because they offer a sneak peek at technologies that may soon become available to the general public. The video above is a demo of “PatchMatch”, an algorithm developed by researchers at Princeton and Adobe. Although you might be unfamiliar with PatchMatch itself, you’ve probably heard of its most famous application: Content-Aware Fill. Only a small piece of this technology was introduced in Photoshop CS5, so the impressive image manipulations seen in this demo are likely a sneak peek at what we’ll be seeing in Photoshop CS6.
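For the curious, the core of the PatchMatch paper is a remarkably compact loop: initialize a random nearest-neighbor field between the patches of two images, then alternate propagation (borrow and shift a neighbor’s good match) and random search (sample around the current best at exponentially shrinking radii). Here’s a bare-bones, unoptimized sketch of that loop for grayscale images; the published algorithm adds multi-scale processing and many optimizations.

```python
# Bare-bones PatchMatch nearest-neighbor field (NNF): random
# initialization, then alternating propagation and random search.
# Grayscale and unoptimized, purely to show the structure of the loop.
import numpy as np

def patch_dist(A, B, ax, ay, bx, by, p):
    """Sum of squared differences between two p x p patches."""
    return np.sum((A[ay:ay+p, ax:ax+p] - B[by:by+p, bx:bx+p]) ** 2)

def patchmatch(A, B, p=7, iters=5, seed=0):
    A, B = A.astype(np.float64), B.astype(np.float64)
    rng = np.random.default_rng(seed)
    ha, wa = A.shape[0] - p + 1, A.shape[1] - p + 1
    hb, wb = B.shape[0] - p + 1, B.shape[1] - p + 1
    # nnf[y, x] = (bx, by): current best match in B for patch (x, y) of A.
    nnf = np.stack([rng.integers(0, wb, (ha, wa)),
                    rng.integers(0, hb, (ha, wa))], axis=-1)
    dist = np.array([[patch_dist(A, B, x, y, *nnf[y, x], p)
                      for x in range(wa)] for y in range(ha)])
    for _ in range(iters):
        for y in range(ha):
            for x in range(wa):
                # Propagation: try the shifted matches of the left/up neighbors.
                for dx, dy in ((-1, 0), (0, -1)):
                    if x + dx >= 0 and y + dy >= 0:
                        bx = nnf[y + dy, x + dx][0] - dx
                        by = nnf[y + dy, x + dx][1] - dy
                        if 0 <= bx < wb and 0 <= by < hb:
                            d = patch_dist(A, B, x, y, bx, by, p)
                            if d < dist[y, x]:
                                nnf[y, x], dist[y, x] = (bx, by), d
                # Random search: sample around the current best match at
                # exponentially decreasing radii.
                r = max(hb, wb)
                while r >= 1:
                    bx = int(np.clip(nnf[y, x][0] + rng.integers(-r, r + 1), 0, wb - 1))
                    by = int(np.clip(nnf[y, x][1] + rng.integers(-r, r + 1), 0, hb - 1))
                    d = patch_dist(A, B, x, y, bx, by, p)
                    if d < dist[y, x]:
                        nnf[y, x], dist[y, x] = (bx, by), d
                    r //= 2
    return nnf
```

Once you have a good nearest-neighbor field, hole filling works by iteratively copying the matched patches into the missing region, which is roughly what Content-Aware Fill does at scale.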
We’re now one step closer to being able to take photographs with our minds. Scientists at UC Berkeley have come up with a way to reconstruct what the human brain sees:
[Subjects] watched two separate sets of Hollywood movie trailers
[...] brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.
Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.
Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie. [#]
Unlike the cat brain research video we shared a while back, the resulting imagery in this project isn’t directly generated from brain signals; instead, it’s reconstructed by averaging YouTube clips similar to what the person is seeing. The researchers still call it a “major leap toward reconstructing internal imagery,” though. In the future this technology might be used to record not just our visual memories, but even our dreams!
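That final averaging step is conceptually just nearest-neighbor search in brain-response space. Here’s a toy sketch of it with random stand-in data (the array sizes and names are invented; the real study used fMRI recordings and a learned voxel-wise encoding model):

```python
# Toy version of the reconstruction step described above: given a
# measured brain response, find the library clips whose *predicted*
# responses are most similar, then average their frames. All data
# here is random stand-in for the real fMRI recordings.
import numpy as np

rng = np.random.default_rng(0)
n_clips, n_voxels = 18_000, 500               # toy library size and response dim
clip_frames = rng.random((n_clips, 64, 64))   # one grayscale frame per clip
predicted = rng.random((n_clips, n_voxels))   # model-predicted responses
measured = rng.random(n_voxels)               # what the scanner recorded

# Rank library clips by how well their predicted response correlates
# with the measured one, then average the frames of the top 100.
sims = np.array([np.corrcoef(p, measured)[0, 1] for p in predicted])
top100 = np.argsort(sims)[-100:]
reconstruction = clip_frames[top100].mean(axis=0)  # blurry composite image
```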
If Apple ever got into the photo printer business, this SWYP (“See What You Print”) printer might be similar to what they’d come up with. It’s a brilliant concept photo printer design by Artefact, the same design group that dreamed up the WVIL concept camera. Instead of sending photos to the printer from a computer, you interact with a giant touchscreen that shows exactly what’s going to pop out of the bottom. Come on SWYP, hurry up and exist!
Gigalinc is an “immersive photography” project by University of Lincoln student Samuel Cox that allows people to explore gigapixel photographs on a giant display using arm movements and hand gestures. Built around an Xbox Kinect sensor for motion detection and a large cinema display, the system offers a user interface strikingly similar to the one Tom Cruise uses in Minority Report.
According to the smart folks over at MIT, this video shows footage that was captured at an unbelievable one trillion frames per second. It appears to show some kind of light pulse traveling through some kind of object. Here’s a confusing explanation found on the project’s website:
We use a picosecond-accurate detector (single pixel). Another option is a special camera called a streak camera that behaves like an oscilloscope with corresponding trigger and deflection of beams. A light pulse enters the instrument through a narrow slit along one direction. It is then deflected in the perpendicular direction so that photons that arrive first hit the detector at a different position compared to photons that arrive later. The resulting image forms a “streak” of light. Streak cameras are often used in chemistry or biology to observe millimeter-sized objects but rarely for free space imaging.
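In other words, a streak camera turns arrival time into a spatial coordinate. A toy simulation makes the geometry concrete (the pulse shape and sweep parameters below are made up, not MIT’s actual setup):

```python
# Toy streak-camera simulation: photons enter through a slit (the x
# axis), and a time-varying deflection sweeps later arrivals to larger
# y, so arrival time becomes a spatial coordinate. Made-up parameters.
import numpy as np

rng = np.random.default_rng(0)
n_photons, width, height = 100_000, 256, 256

x = rng.integers(0, width, n_photons)            # position along the slit
t = rng.exponential(scale=20.0, size=n_photons)  # arrival times (toy pulse)

# Linear sweep: deflection proportional to arrival time.
y = np.clip((t / t.max() * (height - 1)).astype(int), 0, height - 1)

streak = np.zeros((height, width))
np.add.at(streak, (y, x), 1)  # accumulate photon hits into the image
# Each row of `streak` is now a thin time slice of the pulse.
```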
In the future, after you print photos onto paper using your camera, you’ll be able to scan them and share them on Flickr using your mouse. At CES earlier this year, LG showed off an amazing new mouse that lets you quickly scan images and documents by simply waving the mouse over them. Now it’s available — if you live in the UK, you can buy one from Dabs for £90 (~$150).