For a recent advertising campaign to bring attention to its hydrogen-powered cars, Mercedes-Benz decided to make a car “invisible” by creating a novel cloaking device using LEDs and a Canon 5D Mark II. One side of the car was covered with several mats of LEDs that display what the DSLR sees on the other side.
Back in the late 1800s and early 1900s, while the world was still shooting black and white photographs, Russian photographer Sergey Prokudin-Gorsky was busy inventing techniques for creating color images. Credited with capturing the only known color photo of Leo Tolstoy, Prokudin-Gorsky’s technique involved capturing three separate monochrome photographs of the same scene, each taken through a red, green, or blue filter. He would then project the three slides using colored lights, which reconstructed the original color scene. Since the images were captured at different times, any changes in the scene caused by movement showed up as ghosted images (similar to what happens in HDR photography).
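As a rough sketch of the reconstruction step, combining the three monochrome exposures amounts to stacking them as color channels. The function name and toy values below are illustrative, and real plates would first need to be aligned against one another to avoid exactly the ghosting described above:

```python
import numpy as np

def reconstruct_color(red, green, blue):
    """Combine three aligned monochrome exposures (one per filter) into
    a single RGB image, mimicking Prokudin-Gorsky's triple projection.
    Each input is a 2-D array of intensities in [0, 255]."""
    # Stack the channel planes along a new last axis: (H, W) -> (H, W, 3)
    return np.stack([red, green, blue], axis=-1).astype(np.uint8)

# Toy 2x2 "plates": a pixel bright only in the red plate renders red, etc.
r = np.array([[255, 0], [0, 0]])
g = np.array([[0, 255], [0, 0]])
b = np.array([[0, 0], [255, 0]])
img = reconstruct_color(r, g, b)
print(img[0, 0])  # -> [255   0   0]: pure red
```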
Samsung has developed what the company claims is the world’s first CMOS sensor that can capture both RGB and range images at the same time. Microsoft’s Kinect has received a good deal of attention as of late for its depth-sensing capabilities, but it uses separate sensors for RGB images and range images. Samsung’s new solution combines both functions into a single image sensor by introducing “z-pixels” alongside the standard red, blue, and green pixels. This allows the sensor to capture 480×360 depth images while 1920×720 photos are being exposed. One of the big trends in the next decade may be depth-aware devices, and this new development certainly goes a long way towards making that a reality.
Photoshop CS6 will have a new Iris Blur tool that lets you quickly add blur to an image to fake a shallow depth of field. It’s a one-tool process that eschews the traditional methods of using masks, layers, or depth maps.
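For context, the "traditional" effect the tool automates can be approximated by blending the original image with a blurred copy, using a radial mask that is 1 at the focus point and falls off with distance. A minimal grayscale sketch, with names and parameters that are illustrative rather than Adobe's implementation:

```python
import numpy as np

def box_blur(img, k=5):
    """Crude box blur: average each pixel with its k x k neighborhood
    (edges wrap around for brevity)."""
    out = np.zeros_like(img, dtype=float)
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / k**2

def iris_blur(img, cy, cx, radius):
    """Keep a circular region around (cy, cx) sharp and blend smoothly
    into a blurred copy outside it -- the fake-shallow-DOF idea."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - cy, xx - cx)
    # Mask is 1 inside the focus circle, fading to 0 beyond 2*radius.
    mask = np.clip(1 - (dist - radius) / radius, 0, 1)
    return mask * img + (1 - mask) * box_blur(img)
```

A real implementation would use a Gaussian (lens-like) blur and work per color channel; the structure of the blend is the same.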
Last year imaging company Scalado showed off an app called Rewind that lets you create perfect group shots by picking out the best faces from a burst of shots and then combining them into a single image. Now the company is back with another futuristic photo app: called Remove, it lets you create images of scenes without the clutter of things passing through (e.g. people, cars, bikes). It works like this: simply snap a photograph, and the app will outline everything that’s moving in the scene with a yellow line. Tap that person or object, and it magically disappears from the scene!
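Scalado hasn’t published how Remove works under the hood, but a classical way to get the same effect from a burst of aligned frames is a per-pixel median: the static background wins at every pixel, and anything present in only a minority of frames vanishes. A toy sketch:

```python
import numpy as np

def remove_transients(frames):
    """Given a burst of aligned frames of the same scene, the per-pixel
    median keeps the static background and discards anything that was
    only present in a minority of frames (passers-by, cars, ...)."""
    return np.median(np.stack(frames), axis=0)

# Toy example: a static background of 10s with a "pedestrian" (value 200)
# occupying a different pixel in each of five frames.
bg = np.full((1, 5), 10.0)
frames = []
for i in range(5):
    f = bg.copy()
    f[0, i] = 200.0          # the moving object sits at pixel i in frame i
    frames.append(f)
print(remove_transients(frames))  # -> [[10. 10. 10. 10. 10.]]
```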
If you think Content-Aware Fill is an amazing Photoshop feature, wait till you play around with the new content-aware tools found in Photoshop CS6. In addition to a new Patch Tool for selecting where you want Content-Aware Fill to sample from, the program will also introduce a new Content-Aware Move tool that lets you easily move portions of your photographs around and extend them intelligently.
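Adobe’s actual algorithm (patch-based texture synthesis) is far more sophisticated, but the basic idea of filling a selected region from its surroundings can be illustrated with simple diffusion inpainting, where masked pixels repeatedly take the average of their neighbors. This is a hypothetical sketch, not Adobe’s code:

```python
import numpy as np

def diffusion_fill(img, mask, iters=200):
    """Very simple stand-in for content-aware fill: repeatedly replace
    masked pixels with the average of their 4 neighbors, so surrounding
    values "flow" into the hole. (Real content-aware fill synthesizes
    texture from patches; plain diffusion only spreads smooth color.)"""
    out = img.astype(float).copy()
    for _ in range(iters):
        up    = np.roll(out, -1, axis=0)
        down  = np.roll(out,  1, axis=0)
        left  = np.roll(out, -1, axis=1)
        right = np.roll(out,  1, axis=1)
        avg = (up + down + left + right) / 4
        out[mask] = avg[mask]   # only the hole is updated
    return out
```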
Want to see how far DSLRs have come in the past decade? Lee Morris of Fstoppers published these two photos taken at Super Bowl halftime shows. The crop on the left was captured in 2001, possibly with the Nikon D1H at 2.7 megapixels and ISO 800 (state-of-the-art specs at the time). The slice on the right was from this past weekend, and was shot with a Nikon D3s at 12MP and ISO 12,800.
Image credits: Photographs by Lonny Krasnow/AP and FilmMagic
For those of you who are interested in the business and technology side of things, here’s an interesting 45-minute interview in which Digg founder Kevin Rose chats with Instagram founder Kevin Systrom:
They chat about Systrom’s growing up with computers, his time spent at Stanford, and landing an internship at a startup destined to be worth billions. This ultimately led to launching Instagram, which is now 15 million users strong and one of the fastest-growing social networks on the planet!
German scientists have been awarded a Guinness World Record for “fastest movie” after successfully capturing two images of an X-ray laser beam 50 femtoseconds apart. One femtosecond is equal to one quadrillionth (or one millionth of one billionth) of a second. Here’s some science talk explaining it:
[...] the scientists split the X-ray laser beam into two flashes and sent one of them via a detour of only 0.015 millimetres, making it arrive 50 femtoseconds later than the first one. Since no detector can be read out so fast, the scientists stored both images as superimposed holograms, allowing the subsequent reconstruction of the single images.
With these experiments, the scientists showed that this record slow motion is achievable. However, they not only took the world’s fastest but probably also the shortest film, with just two images. Thus, additional development work is necessary before this method can be used in practice. [#]
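The quoted figures are easy to check: the extra travel time for the detoured flash is just the detour length divided by the speed of light.

```python
# The 50-femtosecond delay follows directly from the extra path length.
c = 299_792_458              # speed of light in m/s
detour = 0.015e-3            # 0.015 mm expressed in metres
delay = detour / c           # extra travel time for the delayed flash
print(f"{delay * 1e15:.0f} fs")  # -> 50 fs
```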
If we were to apply the technology in smartphones, that ecosystem is, of course, very complex, with some very large players there. It’s an industry that’s very different and driven based on operational excellence. For us to compete in there, we’d have to be a very different kind of company. So if we were to enter that space, it would definitely be through a partnership and a codevelopment of the technology, and ultimately some kind of licensing with the appropriate partner.
He also states that Lytro has “the capital to do that, the capability in the company to do that, and… the vision to execute.” If Apple were to form an exclusive partnership with Lytro for its iPhone cameras, light field photography would instantly be adopted by the millions of people who purchase the phones every year. That’d definitely be a huge shift in the way people take pictures.