Here’s an interesting look at the amazing camera being developed at MIT that shoots a staggering one trillion frames per second — fast enough to create footage of light traveling:
[...] the researchers were able to create slow-motion movies, showing what appears to be a bullet of light that moves from one end of the bottle to the other [...] Each horizontal line is exposed for just 1.71 picoseconds, or trillionths of a second, Dr. Raskar said — enough time for the laser beam to travel less than half a millimeter through the fluid inside the bottle.
To create a movie of the event, the researchers record about 500 frames in just under a nanosecond, or a billionth of a second. Because each individual movie has a very narrow field of view, they repeat the process a number of times, scanning it vertically to build a complete scene that shows the beam moving from one end of the bottle, bouncing off the cap and then scattering back through the fluid. If a bullet were tracked in the same fashion moving through the same fluid, the resulting movie would last three years.
They believe that the technology may one day be useful for medicine, industry, science, or even consumer photography.
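As a side note, the quoted figures hang together nicely, and they're easy to sanity check yourself. Here's a quick back-of-the-envelope calculation (the water-like refractive index is our assumption, not something stated in the article):

```python
# Sanity check of the figures quoted above. The refractive index of
# the fluid is an assumption (water-like, n ~ 1.33), not from the article.
C = 299_792_458            # speed of light in vacuum, m/s
N_FLUID = 1.33             # assumed refractive index of the fluid
LINE_EXPOSURE = 1.71e-12   # per-line exposure time, seconds
FRAMES = 500

distance_mm = (C / N_FLUID) * LINE_EXPOSURE * 1e3
movie_span_ns = FRAMES * LINE_EXPOSURE * 1e9  # if each frame spans one exposure

print(f"~{distance_mm:.2f} mm per line exposure")  # ~0.39 mm: "less than half a millimeter"
print(f"~{movie_span_ns:.2f} ns for 500 frames")   # ~0.86 ns: "just under a nanosecond"
```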
What if all advertising photos came with a number that revealed the degree to which they were Photoshopped? We might not be very far off, especially with recent advertising controversies and efforts to get “anti-Photoshop laws” passed. Researchers Hany Farid and Eric Kee at Dartmouth have developed a software tool that detects how much fashion and beauty photos have been altered compared to the original image, grading each photo on a scale of 1-5. The program may eventually be used as a tool for regulation: both publications and models could require that retouchers stay within a certain threshold when editing images.
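Farid and Kee's published metric reportedly combines geometric and photometric distortion measures with ratings from human observers. The toy function below is nothing like that model; it just scores raw pixel change between an original and retouched file (the file names, the saturation point, and the assumption that both images share the same dimensions are all ours) to illustrate the general before-and-after idea:

```python
# A crude stand-in for a retouching score, NOT Farid and Kee's metric.
# Assumes both images exist and have identical dimensions.
import numpy as np
from PIL import Image

def retouch_score(original_path, retouched_path):
    """Map mean per-pixel change onto a 1-5 scale (the scale mapping is made up)."""
    before = np.asarray(Image.open(original_path).convert("L"), dtype=float)
    after = np.asarray(Image.open(retouched_path).convert("L"), dtype=float)
    diff = np.abs(before - after).mean() / 255.0  # 0 = identical, 1 = maximal change
    return 1 + 4 * min(diff / 0.25, 1.0)          # saturates at a score of 5

print(retouch_score("model_original.jpg", "model_retouched.jpg"))
```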
Xerox is showing off a new tool called Aesthetic Image Search over on Open Xerox (the Xerox equivalent of Google Labs). It’s an algorithm being developed at one of the company’s labs that aims to make judging a photograph’s aesthetics something a computer can do.
Many methods for image classification are based on recognition of parts — if you find some wheels and a road, then the picture is more likely to contain a car than a giraffe. But what about quality? What is it about a picture of a building or a flower or a person that makes the image stand out from the hundreds which are taken with a digital camera every day? Here we tackle the difficult task of trying to learn automatically what makes an image special, and makes photo enthusiasts mark it as high quality.
You can play around with a simple demo of the technology here. Don’t tell the Long Beach Police Department about it though — they might use it against photographers.
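Xerox hasn't published how its algorithm works, but systems in this vein typically follow the same recipe: compute hand-crafted image features, then train a model on photos that enthusiasts have rated. Here's a minimal, purely illustrative sketch of that recipe; the features, file names, and labels are placeholders of our own, not anything from Open Xerox:

```python
# Illustrative aesthetic-quality classifier: hand-crafted features plus
# a learned model. Features and training data are assumptions.
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def features(path):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    gray = img.mean(axis=2)
    return [
        img.std(),                                 # overall contrast
        img[..., 0].mean() - img[..., 2].mean(),   # warm/cool color balance
        np.abs(np.gradient(gray)[0]).mean(),       # crude sharpness proxy
    ]

# Placeholder training set: 1 = marked high quality by enthusiasts
X = np.array([features(p) for p in ["good1.jpg", "good2.jpg", "bad1.jpg", "bad2.jpg"]])
y = np.array([1, 1, 0, 0])
model = LogisticRegression().fit(X, y)

print(model.predict_proba([features("new_photo.jpg")])[0, 1])  # aesthetic score
```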
You probably know that stopping down (i.e. increasing your f-stop number) can increase the sharpness of your subject, but how much should you stop down to boost resolution without losing that nice, creamy bokeh? Roger Cicala did some research on this question and writes:
For those lenses that do benefit, stopping down just to f/2.0 provides the majority of resolution improvement. The difference between wide open and f/2.0 is generally much greater than the difference between f/2.0 and the maximum resolution.
Getting the edges and corners sharp requires stopping down to at least f/4 for most wide-aperture primes, and some really need f/5.6. Stopping down to f/2.8 may maximize center sharpness but often makes only a slight difference in the corners, at least on a full-frame camera.
None of the lenses performed any better after f/5.6 (for the center) or f/8 for the corners. Most were clearly getting softer at f/11.
If you’re using a wide-aperture lens, stopping down to just f/2.0 will reap big gains in sharpness while still keeping the depth of field narrow. Furthermore, for some lenses you don’t even need to worry about stopping down for sharpness, since it has a relatively negligible effect on the outcome.
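If you want those findings as a cheat sheet, they boil down to a tiny lookup. These are generalizations across the fast primes Cicala tested, so treat them as starting points rather than guarantees for your particular lens:

```python
# Rules of thumb distilled from the findings quoted above; they are
# generalizations from Cicala's tests, not guarantees for any one lens.
def suggested_f_stop(goal):
    table = {
        "center": 2.0,   # most of the center-resolution gain arrives by f/2.0
        "corners": 4.0,  # edges/corners want at least f/4 (some lenses f/5.6)
        "peak": 8.0,     # no gains past f/5.6 center or f/8 corners; f/11 gets softer
    }
    return table[goal]

print(suggested_f_stop("corners"))  # -> 4.0
```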
Ross Technology Corp. has developed an amazing silicon-based spray-on coating called NeverWet that can make almost anything completely waterproof. An iPhone sprayed with NeverWet still functions perfectly after being submerged underwater for half an hour. Spraying the coating on clothes causes liquids (e.g. water, oil, chocolate syrup) to slide right off. Read more…
Contrast detection is one of the two main techniques used in camera autofocus systems. Although focusing speeds continue to improve, the technique relies on an inefficient “guess and check” process for figuring out a subject’s distance — it doesn’t initially know whether to move focus backward or forward. UT Austin vision researcher Johannes Burge wondered why the human eye is able to focus almost instantly without the tedious “focus hunting” done by AF systems. He and his advisor then developed a computer algorithm that’s able to determine the exact amount of focus error simply by examining features in a scene.
His research paper, published earlier this month, offers proof that there is enough information in a static image to calculate whether the focus is too far or too close. Burge has already patented the technology, which he says could allow for cameras to focus in as little as 10 milliseconds.
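To see why the traditional approach is slow, here's a minimal sketch of the “guess and check” loop described above. The contrast measure, step size, and lens interface are generic stand-ins, and Burge's actual algorithm (a single direct estimate of focus error from image features) is not reproduced here:

```python
# Minimal hill-climbing contrast-detection AF loop, for illustration only.
import numpy as np

def contrast(image):
    """Variance of a Laplacian-style second derivative: higher = sharper."""
    lap = (np.gradient(np.gradient(image, axis=0), axis=0)
           + np.gradient(np.gradient(image, axis=1), axis=1))
    return lap.var()

def contrast_detect_af(capture_at, position=0.5, step=0.05):
    """capture_at(p) is assumed to return a grayscale frame at lens position p."""
    best = contrast(capture_at(position))
    while abs(step) > 1e-3:
        trial = contrast(capture_at(position + step))
        if trial > best:                  # sharper: keep moving this direction
            position, best = position + step, trial
        else:                             # overshot: reverse and halve the step
            step = -step / 2
    return position
```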
Google scientist Sam Hasinoff has come up with a technique called “light-efficient photography” that uses focus-stacking to reduce the amount of time exposures require. In traditional photography, increasing the depth of field in a scene requires reducing the size of the aperture, which reduces the amount of light hitting the sensor and increases the amount of time required to properly expose the photo. This can cause a problem in some situations, such as when a longer exposure would lead to motion blur in the scene.
Hasinoff’s technique allows a camera to capture a photo of equal exposure and equivalent depth of field in a much shorter amount of time. He proposes using a wide aperture to capture as much light as possible, and using software to compensate for the shallow depth of field by stacking multiple exposures. In the example shown above, the camera captures an identical photograph twice as fast by simply stacking two photos taken with larger apertures.
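The arithmetic behind that speedup is simple once you recall that exposure time for equal brightness grows with the square of the f-number, while depth of field grows only roughly linearly with it. A quick sketch, with f-numbers that are our own illustrative choices rather than Hasinoff's:

```python
# Why stacking can be faster: matching the brightness of an f/8 shot at
# f/4 takes (4/8)^2 = 1/4 the time, and depth of field shrinks only about
# linearly, so ~2 refocused f/4 shots can cover the same focus range.
def exposure_time(f_number, base_f=8.0, base_time=1.0):
    """Relative time for equal image brightness (ISO and scene held fixed)."""
    return base_time * (f_number / base_f) ** 2

single_shot = exposure_time(8.0)       # 1.0: one slow, deep-DOF exposure
focal_stack = 2 * exposure_time(4.0)   # 0.5: two fast, shallow-DOF exposures
print(single_shot, focal_stack)        # the stack gathers the light in half the time
```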
Japan’s Ministry of Defense has unveiled an amazing “Spherical Flying Machine”: a 42cm remote controlled ball that can zip around in any direction at ~37mph. Built using off-the-shelf parts for about $1,400, the device has the Internet abuzz over its potential applications, which include military reconnaissance and search-and-rescue operations. What we’re most interested in, however, is the device’s potential as an aerial camera for things like sports photography and combat photojournalism. Read more…
Here’s the current state of imagery: still cameras can shoot HD video, video cameras can capture high quality stills, and data storage costs continue to fall. In the future, it might become commonplace for people to make photos by shooting uber-high quality video and then selecting the best still. However, as any photographer knows, selecting the best photograph from a series captured in burst mode is already a challenge, so picking a single still out of 30fps footage would be downright daunting.
To make the future easier for us humans, researchers at Adobe and the University of Washington are working on training computers to do the grunt work for us. One current project involves teaching a computer to automatically select candid portraits from video footage of a person. The video above is a demo of the artificial intelligence in action.
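The Adobe/UW system learns what makes a portrait look candid, which goes far beyond simple heuristics. For contrast, here's roughly the crudest automatic still-picker imaginable: grab the sharpest frame of a clip, scored with a common Laplacian-variance proxy (file names are placeholders):

```python
# Naive baseline, NOT the researchers' method: pick the sharpest frame.
import cv2

def best_frame(video_path):
    cap = cv2.VideoCapture(video_path)
    best_score, best = -1.0, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        score = cv2.Laplacian(gray, cv2.CV_64F).var()  # sharpness proxy
        if score > best_score:
            best_score, best = score, frame
    cap.release()
    return best

cv2.imwrite("best_still.jpg", best_frame("clip.mp4"))
```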
Adobe’s amazing Image Deblurring demo was the star of the Sneak Peeks event at Adobe MAX 2011, but it was just one of the many demos shown that night. Another interesting photography-related demo was for “Pixel Nuggets”: a feature that lets you search a large library of photos for specific visual content (e.g. people, landmarks, patterns, logos). Read more…
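Adobe hasn't explained what powers Pixel Nuggets under the hood. For a taste of what searching a library for a logo or landmark can look like, here's a classic local-feature approach using ORB matching; it's purely illustrative and not Adobe's method, with placeholder paths throughout:

```python
# Illustrative logo/landmark search via ORB keypoint matching.
import cv2

def match_count(query_path, photo_path, ratio=0.75):
    orb = cv2.ORB_create()
    _, d1 = orb.detectAndCompute(cv2.imread(query_path, 0), None)
    _, d2 = orb.detectAndCompute(cv2.imread(photo_path, 0), None)
    if d1 is None or d2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(d1, d2, k=2)
    # Lowe's ratio test: keep only distinctive matches
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)

library = ["img1.jpg", "img2.jpg", "img3.jpg"]  # placeholder paths
ranked = sorted(library, key=lambda p: -match_count("logo.png", p))
print(ranked)  # photos most likely to contain the query logo come first
```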