Adobe is getting serious about making Photoshop a capable tool for editing video. The sample video above was created entirely in an upcoming version of the program. Asked why this functionality is being added to Photoshop rather than left to Premiere Pro, product manager Bryan O’Neil Hughes states,
Video is now being generated by photographers… everyone really; the 5D Mk. II really kicked it off on the DSLR, but since then we’ve seen just about every DSLR, point and shoot and PHONE generate video… most of it HD! We did several waves of research and regularly heard, “I want Photoshop for video”; “I need a workflow I understand” and for the people who had seen what we introduced in CS3 Extended – “make that easier to use.” Video is being generated by more people than ever before; it’s being shared more places than ever… and yet people are hitting a wall with what they can do with it! They know and love Photoshop… their stills are already passing through it, the fit is more natural than it sounds at first.
You’ll soon be able to do to video just about anything you can do with stills: filters, adjustments, etc…
Last year, imaging company Scalado showed off an app called Rewind that lets you create perfect group shots by picking out the best faces from a burst of shots and combining them into a single image. Now the company is back with another futuristic photo app: it’s called Remove, and it lets you create images of scenes without the clutter of things passing through (e.g. people, cars, bikes). It works like this: snap a photograph, and the app outlines everything that’s moving in the scene with a yellow line. Tap a person or object, and it magically disappears from the scene!
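Scalado hasn’t published how Remove works under the hood, but the classic way to erase transient objects from a burst is a per-pixel temporal median across aligned frames: a pixel occluded by a moving object in only a minority of shots converges to the static background value. A minimal numpy sketch (frame alignment is assumed to have already happened):

```python
import numpy as np

def remove_transients(frames):
    """Per-pixel temporal median across a burst of aligned frames.

    Anything present in only a minority of frames (a passing person,
    car, or bike) is voted out in favor of the static background.
    """
    stack = np.stack(frames, axis=0).astype(np.float64)
    return np.median(stack, axis=0).astype(frames[0].dtype)

# Toy example: a static 4x4 "scene" with a transient blob in one frame.
background = np.full((4, 4), 100, dtype=np.uint8)
frames = [background.copy() for _ in range(5)]
frames[2][1:3, 1:3] = 255  # a "pedestrian" passes through frame 3

clean = remove_transients(frames)
print(np.array_equal(clean, background))  # True -- the blob is gone
```

Remove’s tap-to-delete interface presumably does something more selective than a blind median, but the stack illustrates why a burst contains enough information to reconstruct an empty scene.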
If we were to apply the technology in smartphones, that ecosystem is, of course, very complex, with some very large players there. It’s an industry that’s very different and driven based on operational excellence. For us to compete in there, we’d have to be a very different kind of company. So if we were to enter that space, it would definitely be through a partnership and a codevelopment of the technology, and ultimately some kind of licensing with the appropriate partner.
He also states that Lytro has “the capital to do that, the capability in the company to do that, and… the vision to execute.” If Apple were to form an exclusive partnership with Lytro for its iPhone cameras, light field photography would instantly be adopted by the millions of people who purchase the phones every year. That’d definitely be a huge shift in the way people take pictures.
Olympus and Panasonic may be cofounders of the Micro Four Thirds movement, but the two companies appear to be taking different approaches to 3D photography. While Panasonic offers a special 3D lens that packs two optics into a single unit, a newly discovered Olympus patent shows an even more novel approach: adding a second lens to the camera via its hot shoe. Simply attach the lens and turn your camera sideways to transform it into a stereoscopic 3D camera!
Perhaps in response to the growing capacities and falling prices of SD cards, the CompactFlash Association has announced a new format to replace CF cards for professional photographers. It’s called XQD, and its size falls between CF and SD cards (it’s thicker than SD cards but smaller than CF cards). The interface is PCI Express, which has a theoretical max write speed of roughly 600MB/s, though the initial target for real-world write speeds will be 125MB/s. It’ll start making public appearances at trade shows early next year, and will be licensed out to card makers around the same time.
The Apple iCam is a concept camera by Italian designer Antonio DeRosa that imagines a future where cameras are modular and powered by smartphones. Smartphones have already invaded the compact camera market in recent years, but their small lenses and sensors keep them from being seen as suitable alternatives to more advanced cameras. The iCam camera changes that by adding a large sensor and interchangeable lens system to the mix. Simply attach your iPhone 5 to the case and you’ll have yourself a mirrorless interchangeable lens camera with a huge LCD screen, fast processor, internet connectivity, and countless photo apps!
Xerox is showing off a new tool called Aesthetic Image Search over on Open Xerox (the Xerox equivalent of Google Labs). It’s an algorithm being developed at one of the company’s labs that aims to make judging a photograph’s aesthetics something a computer can do.
Many methods for image classification are based on recognition of parts — if you find some wheels and a road, then the picture is more likely to contain a car than a giraffe. But what about quality? What is it about a picture of a building or a flower or a person that makes the image stand out from the hundreds which are taken with a digital camera every day? Here we tackle the difficult task of trying to learn automatically what makes an image special, and makes photo enthusiasts mark it as high quality.
You can play around with a simple demo of the technology here. Don’t tell the Long Beach Police Department about it though — they might use it against photographers.
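Xerox hasn’t published which features its algorithm extracts, but systems of this kind typically reduce each photo to a vector of global features and train a classifier on enthusiast ratings. The sketch below is a toy stand-in: the two features (RMS contrast and a colorfulness proxy) and the synthetic “good”/“dull” training images are illustrative assumptions, not Xerox’s method.

```python
import numpy as np

def features(img):
    """Toy global features: RMS contrast and a colorfulness proxy.

    Real aesthetic models use far richer features; these stand in
    purely to illustrate the features-plus-classifier pipeline.
    """
    gray = img.mean(axis=2)
    contrast = gray.std() / 255.0
    colorfulness = (img.max(axis=2) - img.min(axis=2)).mean() / 255.0
    return np.array([1.0, contrast, colorfulness])  # bias + 2 features

def train_logreg(X, y, lr=0.5, steps=2000):
    """Plain gradient-descent logistic regression on feature vectors."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
# Synthetic "photos": high-contrast colorful noise vs flat gray frames.
good = [rng.integers(0, 256, (16, 16, 3)).astype(np.uint8) for _ in range(20)]
dull = [np.full((16, 16, 3), 128, np.uint8) for _ in range(20)]
X = np.array([features(im) for im in good + dull])
y = np.array([1.0] * 20 + [0.0] * 20)

w = train_logreg(X, y)
score = lambda im: 1.0 / (1.0 + np.exp(-features(im) @ w))
print(score(good[0]), score(dull[0]))  # high vs low aesthetic score
```

The hard part Xerox is tackling is exactly what this toy glosses over: finding features that actually correlate with human judgments of “special.”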
Contrast detection is one of the two main techniques used in camera autofocus systems. Although focusing speeds continue to improve, the method relies on an inefficient “guess and check” approach to finding a subject’s distance — it doesn’t initially know whether to move focus backward or forward. UT Austin vision researcher Johannes Burge wondered why the human eye is able to focus instantly without the tedious “focus hunting” done by AF systems. He and his advisor then developed a computer algorithm that’s able to determine the exact amount of focus error simply by examining features in a scene.
His research paper, published earlier this month, offers proof that there is enough information in a static image to calculate whether the focus is too far or too close. Burge has already patented the technology, which he says could allow for cameras to focus in as little as 10 milliseconds.
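The “focus hunting” that Burge’s approach sidesteps is easy to simulate: a contrast-detection loop nudges the lens, reverses direction when a contrast metric drops, and stops once both directions make things worse. The defocus model, sharpness metric, and step size below are invented purely for illustration:

```python
import numpy as np

def sharpness(img):
    """Contrast metric: mean gradient energy (higher = sharper)."""
    gy, gx = np.gradient(img.astype(float))
    return (gx**2 + gy**2).mean()

def blur(scene, defocus):
    """Crude defocus model: smoothing passes grow with |defocus|."""
    out = scene.astype(float)
    for _ in range(abs(defocus)):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
                   + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

def contrast_af(scene, lens_pos, true_pos, step=1, max_iters=50):
    """Hill-climb: nudge focus, reverse on failure, stop at the peak."""
    direction = 1
    best = sharpness(blur(scene, lens_pos - true_pos))
    for _ in range(max_iters):
        trial = lens_pos + direction * step
        s = sharpness(blur(scene, trial - true_pos))
        if s > best:
            lens_pos, best = trial, s
        elif direction == 1:
            direction = -1   # guessed wrong: reverse and keep hunting
        else:
            break            # both directions worse: peak found
    return lens_pos

rng = np.random.default_rng(1)
scene = rng.integers(0, 256, (32, 32))
print(contrast_af(scene, lens_pos=5, true_pos=0))  # converges to 0
```

A one-shot algorithm like Burge’s would instead estimate the sign and magnitude of the focus error from a single frame, skipping the loop entirely — which is where the claimed 10-millisecond focusing comes from.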
Japanese company Nippon Electric Glass has developed a new type of ‘invisible glass’ that drastically reduces reflections, rendering it nearly invisible to the human eye. The secret is a special anti-reflection film formed on each side of the glass, which allows more light to pass through rather than bounce off. Ordinary glass reflects about 8% of incoming light; the new glass reflects only 0.5%. In the photo above, we “see” normal glass on the left and the new glass on the right.
Gadget blogs are salivating over the glass’ potential benefits for phone and computer screens, but we’re interested in seeing whether the glass may prove useful for photography. Perhaps it could pave the way for next-generation lenses and filters?
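The 8% figure for ordinary glass is just Fresnel reflection at the two air-glass surfaces; a quick check, assuming a typical refractive index of about 1.5:

```python
def fresnel_reflectance(n1, n2):
    """Fraction of light reflected at normal incidence between two media."""
    return ((n1 - n2) / (n1 + n2)) ** 2

r = fresnel_reflectance(1.0, 1.5)   # one air-glass surface
total = 1 - (1 - r) ** 2            # front and back surfaces combined
print(f"{r:.1%} per surface, {total:.1%} total")  # 4.0% per surface, 7.8% total
```

Cutting that total to 0.5% is what makes the panel effectively disappear — and it is exactly the loss that multicoated camera lenses and filters already fight at every glass surface.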
Google scientist Sam Hasinoff has come up with a technique called “light-efficient photography” that uses focus-stacking to reduce the amount of time exposures require. In traditional photography, increasing the depth of field in a scene requires reducing the size of the aperture, which reduces the amount of light hitting the sensor and increases the amount of time required to properly expose the photo. This can cause a problem in some situations, such as when a longer exposure would lead to motion blur in the scene.
Hasinoff’s technique allows a camera to capture a photo of equal exposure and equivalent depth of field in a much shorter amount of time. He proposes using a wide aperture to capture as much light as possible, and using software to compensate for the shallow depth of field by stacking multiple exposures. In the example shown above, the camera captures an identical photograph twice as fast by simply stacking two photos taken with larger apertures.
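The time savings follow from simple aperture arithmetic: light gathered per unit time scales as 1/N² in the f-number N, while depth of field grows roughly linearly with N, so two shots at a two-stop-wider aperture can tile the same depth of field in half the total exposure time. The specific f-numbers below are illustrative, not taken from the paper:

```python
def exposure_time(f_number, base_time=1.0, base_f=1.0):
    """Time to gather a fixed amount of light scales with the square
    of the f-number (aperture area is proportional to 1/N^2)."""
    return base_time * (f_number / base_f) ** 2

# One narrow-aperture shot at f/8 for the full depth of field:
single = exposure_time(8)       # 64 time units

# Hasinoff-style alternative: two wider f/4 shots, focus-stacked
# at different distances to cover the same depth of field:
stacked = 2 * exposure_time(4)  # 2 * 16 = 32 time units

print(single / stacked)  # 2.0 -- the stacked capture is twice as fast
```

Splitting into more, even wider shots pushes the speedup further, at the cost of more frames to align and merge in software.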