Posts Tagged ‘future’

Demos at graphics conferences are often interesting to watch because they offer a sneak peek at technologies that may soon become available to the general public. The video above is a demo of “PatchMatch”, an algorithm developed by researchers at Princeton and Adobe. Although you might be unfamiliar with PatchMatch, you’ve probably heard of its most famous feature: Content-Aware Fill. Only a small piece of this technology was introduced in Photoshop CS5, so the impressive image manipulations seen in this demo are likely a preview of what we’ll be seeing in Photoshop CS6.
We’re now one step closer to being able to take photographs with our minds. Scientists at UC Berkeley have come up with a way to reconstruct what the human brain sees:
[Subjects] watched two separate sets of Hollywood movie trailers
[…] brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.
Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.
Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie. [#]
Unlike the cat brain research video we shared a while back, the resulting imagery in this project isn’t directly generated from brain signals but is instead reconstructed from YouTube clips similar to what the person is seeing. They’re still calling it a “major leap toward reconstructing internal imagery” though. In the future this technology might be used to record not just our visual memories, but even our dreams!
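To get a feel for the match-and-average idea described above, here’s a toy sketch in Python. Everything in it is a stand-in, not the researchers’ actual method: the dimensions are tiny, the “encoding model” is just a random linear map, and candidates are ranked by cosine similarity between predicted and measured activity before the top few are averaged into a blurry reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 50 candidate clips, 16-pixel "frames",
# 8-dimensional brain responses (the real data is vastly larger).
n_clips, frame_dim, brain_dim = 50, 16, 8

# Stand-in for the learned encoding model: maps a clip's visual
# features to a predicted brain response. In the study this was
# fit on the first set of movie trailers; here it's random.
W = rng.normal(size=(frame_dim, brain_dim))

clips = rng.normal(size=(n_clips, frame_dim))  # candidate "YouTube" frames
predicted = clips @ W                          # predicted brain activity per clip

# The "measured" response: what the model would predict for one
# true clip, plus a little noise.
true_clip = clips[7]
measured = true_clip @ W + 0.1 * rng.normal(size=brain_dim)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Rank candidates by how well their predicted activity matches the
# measurement, then average the top-k frames into a reconstruction.
scores = np.array([cosine(p, measured) for p in predicted])
top_k = np.argsort(scores)[::-1][:5]
reconstruction = clips[top_k].mean(axis=0)
```

In this toy setup the true clip lands at the top of the ranking, and the averaged result is a blurred mix of the best matches — the same reason the published reconstructions look ghostly rather than sharp.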
If Apple ever got into the photo printer business, this SWYP (“See What You Print”) printer might be similar to what they’d come up with. It’s a brilliant concept photo printer design by Artefact, the same design group that dreamed up the WVIL concept camera. Instead of having to send photos to the printer from a computer, you compose everything on a giant touchscreen interface that shows exactly what’s going to pop out of the bottom. Come on SWYP, hurry up and exist!
Gigalinc is an “immersive photography” project by University of Lincoln student Samuel Cox that allows people to explore gigapixel photographs on a giant display using arm movements and hand gestures. Built around an Xbox Kinect sensor for motion detection and a large cinema display, the resulting user interface is strikingly similar to the one Tom Cruise uses in Minority Report.
According to the smart folks over at MIT, this video shows footage that was captured at an unbelievable one trillion frames per second. It appears to show some kind of light pulse traveling through some kind of object. Here’s a confusing explanation found on the project’s website:
We use a picosecond-accurate detector (single pixel). Another option is a special camera called a streak camera that behaves like an oscilloscope with corresponding trigger and deflection of beams. A light pulse enters the instrument through a narrow slit along one direction. It is then deflected in the perpendicular direction so that photons that arrive first hit the detector at a different position compared to photons that arrive later. The resulting image forms a “streak” of light. Streak cameras are often used in chemistry or biology to observe millimeter-sized objects but rarely for free space imaging.
In November 2010, we reported that MIT scientists were working on a camera that would be able to see around corners using echoes of light. Well, this is that camera. Insane.
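The time-to-position trick behind a streak camera is simple enough to simulate. Here’s a toy Python sketch in which a made-up sweep rate deflects later-arriving photons further down the sensor, so the vertical profile of the recorded “streak” is the pulse’s temporal profile. All of the numbers (pulse shape, sweep rate, sensor size) are arbitrary stand-ins, not the MIT instrument’s specs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Arrival times (in picoseconds) of photons from a short pulse,
# modeled as a Gaussian burst centered at 50 ps.
arrival_ps = rng.normal(loc=50.0, scale=5.0, size=10_000)

# The sweep deflects later photons further down the sensor,
# converting arrival time into vertical pixel position.
sweep_px_per_ps = 2.0  # assumed deflection rate
y_px = (arrival_ps * sweep_px_per_ps).astype(int)

# Histogram of hit positions = the recorded "streak".
streak, _ = np.histogram(y_px, bins=np.arange(0, 301))

# Dividing the peak position by the sweep rate recovers the
# pulse's mean arrival time from the spatial image.
peak_bin = int(np.argmax(streak))
recovered_ps = peak_bin / sweep_px_per_ps
```

Running this, the streak peaks near pixel 100, which maps back to the 50 ps pulse center — the same inversion that lets a streak camera turn a spatial smear into a picosecond-resolution timing measurement.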
In the future, after you print photos onto paper using your camera, you’ll be able to scan them and share them on Flickr using your mouse. At CES earlier this year, LG showed off an amazing new mouse that lets you quickly scan images and documents by simply waving the mouse over them. Now it’s available — if you live in the UK, you can buy one from Dabs for £90 (~$150).
We may soon live in a world where the photographs in newspapers and magazines move like they do in Harry Potter — that is, if newspapers and magazines are still around in a few years.
As technology improves, features that were once limited to expensive professional models often become available to the masses, but will this ever be true for full-frame sensors? Nikon’s Senior VP David Lee was recently asked this question in an interview with TWICE, and here’s what he said:
I think that there are definitely two different approaches here. What we’re seeing is that sensor performance continues to improve, but obviously there’s really a need for bulk because with a full-size sensor there’s a real low-light performance benefit, high speed performance, framing rates, and so on and so forth. So, I think you’ll definitely continue to see the higher-end pro consumer continue to have that large format. It’s definitely needed in the D3 and D700. You’ll see that technology continue to improve and grow, but the DX sensor form factor is also important. The compactness of the D3100 and D5100 is very popular. I don’t think one approach will ever overtake the other because of the overall image capabilities and the light performance capabilities.
Seems like he either misunderstood the question, or decided to beat around the bush. It’s an interesting question though — will any of the big manufacturers shake up the industry by being the first to put a full-frame sensor in a consumer-level camera? The sensors have already jumped from pro-level cameras to prosumer-level ones starting in 2005 with the Canon 5D, so it seems like the next logical step will be the consumer level. A sub-$1000 full-frame camera. Now that’s a thought.
Last week it came to light that Amazon founder Jeff Bezos had filed a patent for having airbags built into cell phones to protect them if they’re ever accidentally dropped. Rather than having a NASA-style airbag that completely envelops the phone, micro air jets orient the device so that it lands on a tiny airbag that pops out of the bottom. Wouldn’t it be interesting if this kind of thing became common on digital cameras in the future? The idea is pretty farfetched, but some people I know would definitely benefit from camera airbags.
There’s a good chance the digital photos you’ve stored on hard drives and DVDs won’t outlive you, but what if there was a disc that could last forever? M-Disc, short for Millennial Disc, is a new type of disc that doesn’t suffer from natural decay and degradation like existing disc technologies, allowing you to store data safely for somewhere between “1000 years” and “forever”.
Existing disc technologies write data using an organic dye layer that begins to experience “data rot” immediately after it’s written, causing the disc to become unreadable after a certain amount of time. The M-Disc, on the other hand, actually carves your data into “rock-like materials” that are known to last for centuries, meaning there’s no data rot. Apparently NASA uses the discs to store data. Hopefully it becomes available and affordable soon…