MIT’s Media Lab is no stranger to innovation; from super-high-speed cameras to cameras that can see around walls, the lab always seems to be on the cutting edge of imaging. Its newest project, the EyeRing, is yet another idea that could someday revolutionize the way we take pictures and experience our world.
BCC Research has released a new report stating that the digital photography industry has an annual growth rate of 3.8%. Valued at $68.4 billion last year, the global market will reach an estimated $82.5 billion by 2016. The study defined the market as a combination of camera equipment, printing equipment, and complementary products. While the photo-printing segment is predicted to struggle, losing $300 million between now and 2016, digital cameras and lenses will reportedly do just fine, with a healthy annual growth rate of 5.8%.
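The projection is simple compound-growth arithmetic; a quick sketch (assuming a five-year horizon from last year's figure) shows the report's numbers roughly line up:

```python
# Toy check of the report's arithmetic (figures from the article; a
# five-year horizon is my assumption, not stated in the report).
value_2011 = 68.4          # market value in $ billions
growth = 0.038             # 3.8% annual growth rate
value_2016 = value_2011 * (1 + growth) ** 5
print(round(value_2016, 1))  # ≈ 82.4, close to the reported $82.5 billion
```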
Researchers at Osaka University in Japan have created a new camera that makes shooting “from the hip” easier by projecting a white border onto the real world — similar to what laser sights do for firearms. The frame line shows exactly the area that will appear in the photograph, allowing users to shoot quickly without looking through or at the camera itself. Before you get too excited about the possibility of using it for street photography, here’s the bad news: it’s better suited for things like snapping QR codes, because the compact projector is only bright enough to be used at close range in dark environments.
Last November we featured a concept camera called Air that is worn on your fingers and snaps photographs when you frame scenes with your fingers. That concept may soon become a reality. Researchers at IAMAS in Japan have developed a tiny camera called Ubi-Camera that captures photos as you position your fingers in the shape of a frame. The shutter button is triggered with your opposite hand’s thumb, and the “zoom” level is determined by how far the camera is from the photographer’s face. Expect these cameras to land on store shelves at about the same time as the gesture-controlled computers from Minority Report.
Back in 2010 we shared that MIT was developing a special camera that uses echoes of light to see around corners. Now, two years later, the researchers are finally showing off the camera in action. It works by firing 50-femtosecond (a femtosecond is one quadrillionth of a second) laser pulses 60 times at various spots on an angled wall. A special imaging sensor then collects the scattered light that is reflected back, and complex algorithms piece the scene together based on how long the photons take to return. The process currently takes several minutes, but the researchers hope to reduce it to less than 10 seconds, which would make it more useful for military and industrial applications.
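The core measurement is plain time-of-flight geometry. Here is a minimal sketch of that idea (my illustration, not MIT's reconstruction algorithm; the helper name is made up):

```python
# Minimal time-of-flight sketch: the travel time of a returning photon
# fixes the total path length it flew, since light moves at a known speed.
C = 299_792_458.0  # speed of light in vacuum, m/s

def path_length(travel_time_s):
    """Total distance a photon covered before reaching the sensor."""
    return C * travel_time_s

# A photon arriving 10 nanoseconds after the pulse left traveled ~3 meters.
# Intersecting many such path-length constraints, gathered from different
# spots on the wall, is what lets the algorithm infer hidden geometry.
print(path_length(10e-9))  # ≈ 3.0 meters
```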
Samsung has developed what the company claims is the world’s first CMOS sensor that can capture both RGB and range images at the same time. Microsoft’s Kinect has received a good deal of attention as of late for its depth-sensing capabilities, but it uses separate sensors for RGB images and range images. Samsung’s new solution combines both functions into a single image sensor by introducing “z-pixels” alongside the standard red, blue, and green pixels. This allows the sensor to capture 480×360 depth images while 1920×720 photos are being exposed. One of the big trends in the next decade may be depth-aware devices, and this new development certainly goes a long way towards making that a reality.
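A hedged sketch of what paired RGB and depth data enables: back-projecting pixels into 3D with the standard pinhole camera model. The focal lengths and principal point below are invented for illustration and are not Samsung's sensor parameters:

```python
# Pinhole back-projection: a pixel plus its depth yields a 3D point.
# Intrinsics (fx, fy, cx, cy) here are made up, not Samsung's.
def backproject(u, v, depth, fx, fy, cx, cy):
    """Map a pixel (u, v) with known depth (meters) to a 3D point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# The center pixel of a 480x360 depth map at 2 m lands on the optical axis:
print(backproject(240, 180, 2.0, fx=400.0, fy=400.0, cx=240.0, cy=180.0))
# → (0.0, 0.0, 2.0)
```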
In a paper published in Science this week, Japanese researchers reported the discovery that jumping spiders gauge distance using a method called “image defocus”, which no other living organism is known to use. Rather than using focus cues and stereoscopic vision like humans, or head-bobbing motion parallax like birds, the spiders have two green-detecting layers in their eyes — one in focus and one not. By comparing the two images, the spiders can judge the distance to objects. The scientists found that bathing the spiders in pure red light (which removes the green light the defocus comparison depends on) “breaks” their distance-measuring ability.
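The optics behind depth from defocus can be sketched in a few lines (a toy thin-lens model of the principle, not the spiders' actual neural computation; the function and constants are my own):

```python
# Toy depth-from-defocus model: with a lens focused at distance d_focus,
# the blur-circle size of a point at distance d grows as the point moves
# away from the focal plane, so the amount of blur alone encodes distance.
def blur_radius(d, d_focus, aperture=1.0, focal=1.0):
    """Relative blur-circle radius for a point at distance d (arbitrary units)."""
    return aperture * focal * abs(1.0 / d - 1.0 / d_focus)

# An object at the focal plane is sharp; a closer one is measurably blurred.
# Comparing sharp vs. blurred is what the two retinal layers make possible.
print(blur_radius(2.0, d_focus=2.0))  # 0.0 (in focus)
print(blur_radius(1.0, d_focus=2.0))  # 0.5 (blurred, therefore closer)
```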
Hyperspectral cameras collect and process information from across the electromagnetic spectrum, including wavelengths beyond what the human eye can see. The technology ordinarily costs a fortune to get a hold of, but scientists at the Vienna University of Technology have figured out how to build a hyperspectral camera from an ordinary DSLR (the Canon 5D) and an adapter made of off-the-shelf parts (PVC pipes, a gel filter, and three camera lenses). The camera still has a ways to go in many areas — it requires several seconds to expose an image rather than milliseconds — but it’s a big step towards showing what’s possible with consumer camera technology.
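In data terms, a hyperspectral image stores a full spectrum at every pixel; an ordinary RGB value is just that spectrum weighted by three response curves. A toy sketch with made-up numbers (illustration only, not the Vienna team's pipeline):

```python
# One pixel of a toy 3-band hyperspectral cube: radiance at 450, 550, 650 nm.
pixel_spectrum = [0.2, 0.7, 0.4]

# Made-up per-band response weights for each RGB channel:
response = {
    "R": [0.0, 0.1, 0.9],
    "G": [0.1, 0.8, 0.1],
    "B": [0.9, 0.1, 0.0],
}

# Each channel is the spectrum weighted by that channel's response curve.
rgb = {ch: sum(w * s for w, s in zip(weights, pixel_spectrum))
       for ch, weights in response.items()}
print(rgb)
```

A real hyperspectral cube has dozens or hundreds of bands per pixel rather than three, which is exactly the extra information an ordinary RGB sensor throws away.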
Here’s an interesting look at the amazing camera being developed at MIT that shoots a staggering one trillion frames per second — fast enough to create footage of light traveling:
[...] the researchers were able to create slow-motion movies, showing what appears to be a bullet of light that moves from one end of the bottle to the other [...] Each horizontal line is exposed for just 1.71 picoseconds, or trillionths of a second, Dr. Raskar said — enough time for the laser beam to travel less than half a millimeter through the fluid inside the bottle.
To create a movie of the event, the researchers record about 500 frames in just under a nanosecond, or a billionth of a second. Because each individual movie has a very narrow field of view, they repeat the process a number of times, scanning it vertically to build a complete scene that shows the beam moving from one end of the bottle, bouncing off the cap and then scattering back through the fluid. If a bullet were tracked in the same fashion moving through the same fluid, the resulting movie would last three years. [#]
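The numbers in the excerpt above are easy to sanity-check against the speed of light (a quick sketch of the arithmetic, not MIT code):

```python
C = 299_792_458.0            # speed of light in vacuum, m/s

# In one 1.71-picosecond line exposure, light covers about half a millimeter
# (slightly less inside the fluid, where it travels slower than in vacuum):
print(C * 1.71e-12 * 1000)   # ≈ 0.51 mm

# 500 frames recorded in just under a nanosecond works out to roughly
# 2 picoseconds per frame:
print(1e-9 / 500)            # 2e-12 s
```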
They believe that the technology may one day be useful for medicine, industry, science, or even consumer photography.
What if all advertising photos came with a number that revealed the degree to which they were Photoshopped? We might not be very far off, especially with recent advertising controversies and efforts to get “anti-Photoshop laws” passed. Researchers Hany Farid and Eric Kee at Dartmouth have developed a software tool that measures how much fashion and beauty photos have been altered compared to the original image, grading each photo on a scale of 1 to 5. The program may eventually be used as a tool for regulation: both publications and models could require that retouchers stay within a certain threshold when editing images.
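For illustration only, here is a toy version of such a grading scale based on raw pixel differences (Farid and Kee's published metric is built on geometric and photometric distortion measures, not this simple difference; the function and thresholds are invented):

```python
# Toy 1-to-5 retouching grade from mean absolute pixel difference.
def retouch_grade(original, retouched):
    """Map the mean absolute difference of two 0-255 images to a 1-5 scale."""
    diffs = [abs(a - b) for a, b in zip(original, retouched)]
    mean_diff = sum(diffs) / len(diffs)
    # Arbitrary thresholds for the toy scale: bigger change, higher grade.
    for grade, limit in enumerate((2, 8, 20, 50), start=1):
        if mean_diff < limit:
            return grade
    return 5

print(retouch_grade([100, 120, 130], [101, 119, 131]))  # 1: barely touched
print(retouch_grade([100, 120, 130], [160, 60, 200]))   # 5: heavily altered
```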