Typically, augmented reality falls somewhere between technological breakthrough and really cool thing to show your friends; but in the Making of the Modern World exhibit at London’s Science Museum, augmented reality also takes up the mantle of education.
Using the $3 Science Stories app, visitors to the museum can point their iOS or Android devices at markers set in front of particular exhibits, and prompt a 3-dimensional James May (one of the hosts of BBC’s Top Gear) to appear and explain the particulars of the display.
Google’s Project Glass has been all the rage since the company released its mock-ups and video of the project at the beginning of the month, and for good reason — the idea is out-of-this-world cool. But from the start we’ve known that Project Glass was only in its beginning stages: the glasses couldn’t yet do many, if any, of the things featured in that futuristic video. A couple of days ago, however, the world got its first glimpse of what Project Glass can do.
In an interview with Charlie Rose, researcher Sebastian Thrun used the glasses and his voice to snap a photo of Mr. Rose and upload it to his Google+ account. The photo (shown above) is nothing special — it looks like an ancient camera phone image — but it serves as confirmation that the glasses can already perform a few basic functions via voice command. And considering the speed with which technology advances these days, any indication of functionality could mean Project Glass is much further along than we think.
When German image sensor scientist Joachim Linkemann gave a talk called “Advanced Camera and Image Sensor Technology” at Automate 2011 back in March, he tried to boil things down to terms people could understand and ended up using beer to illustrate the concepts. If you want to learn how things like signal-to-noise ratio, dynamic range, and dark noise would work if a glass of beer were a pixel on an image sensor, check out the PDF slideshow.
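For a rough sense of the quantities the beer analogy stands in for, here’s a minimal Python sketch that computes signal-to-noise ratio and dynamic range from a handful of sensor numbers. The values are made up for illustration and aren’t taken from Linkemann’s slides.

```python
import math

# Illustrative sensor numbers (made up for this sketch, not from the talk)
full_well_capacity_e = 20000   # max electrons a pixel can hold before it "overflows"
read_noise_e = 5.0             # dark/read noise floor, in electrons
signal_e = 1500                # photoelectrons collected during the exposure

# Photon shot noise follows Poisson statistics: sqrt(signal)
shot_noise_e = math.sqrt(signal_e)
total_noise_e = math.sqrt(shot_noise_e ** 2 + read_noise_e ** 2)

snr_db = 20 * math.log10(signal_e / total_noise_e)
dynamic_range_db = 20 * math.log10(full_well_capacity_e / read_noise_e)

print(f"SNR: {snr_db:.1f} dB")                       # ~31.7 dB with these numbers
print(f"Dynamic range: {dynamic_range_db:.1f} dB")   # ~72.0 dB with these numbers
```

Note how the noise floor, not the signal, sets the dynamic range: a quieter sensor can represent a wider span between the dimmest and brightest details it can record.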
Researchers at Osaka University in Japan have created a new camera that makes shooting “from the hip” easier by projecting a white border onto the real world — similar to what laser sights do for firearms. The frame line shows exactly the area that will be in the photograph, and allows users to quickly shoot without looking through or at the camera itself. Before you get too excited about the possibility of using it for street photography, here’s the bad news: it’s more suited for things like snapping QR codes, because the compact projector is only bright enough to be used in dark places and at close range.
Facial recognition service Face.com has announced a new feature in its API: age detection. After analyzing a photograph of a person’s face, the software returns three values: minimum age, maximum age, and estimated age, along with the confidence level of the guesses. Applications for the new technology include enhanced parental controls and targeted advertising. If you want to test out the service yourself, you can play around with the API here (in the photo above, the correct age is ~47).
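To get a feel for how an app might consume those three values, here’s a hypothetical sketch of parsing a response shaped the way the article describes. The field names and JSON layout are assumptions for illustration, not Face.com’s documented schema.

```python
import json

# Hypothetical response shaped like the article describes; field names
# and structure are assumptions, not Face.com's documented schema.
raw_response = """
{
  "tags": [
    {
      "attributes": {
        "age_min": {"value": 42, "confidence": 61},
        "age_max": {"value": 52, "confidence": 58},
        "age_est": {"value": 47, "confidence": 64}
      }
    }
  ]
}
"""

data = json.loads(raw_response)
for face in data["tags"]:
    attrs = face["attributes"]
    print(
        f"Estimated age: {attrs['age_est']['value']} "
        f"(range {attrs['age_min']['value']}-{attrs['age_max']['value']}, "
        f"confidence {attrs['age_est']['confidence']}%)"
    )
```

An age-gating or ad-targeting feature would key off the estimated value and its confidence, falling back to the min/max range when the confidence is low.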
Last November we featured a concept camera called Air that is worn on your fingers and snaps photographs when you frame scenes with your fingers. That concept may soon become a reality. Researchers at IAMAS in Japan have developed a tiny camera called Ubi-Camera that captures photos as you position your fingers in the shape of a frame. The shutter button is triggered with your opposite hand’s thumb, and the “zoom” level is determined by how far the camera is from the photographer’s face. Expect these cameras to land on store shelves at about the same time as the gesture-controlled computers from Minority Report.
Back in 2010 we shared that MIT was developing a special camera that uses echoes of light to see around corners. Now, two years later, the researchers are finally showing off the camera in action. It works by firing 50 femtosecond (a quadrillionth of a second) laser pulses 60 times at various spots on an angled wall. A special imaging sensor then collects the scattered light that’s reflected back and uses complex algorithms to piece together the scene based on how long the photons take to return. The process currently takes several minutes, but researchers hope to reduce it to less than 10 seconds, which would make it more useful for military and industrial applications.
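The reconstruction hinges on the fact that a photon’s round-trip time pins down how far it travelled before reaching the sensor. Here’s a minimal sketch of that time-of-flight arithmetic; it only illustrates the distance calculation, not MIT’s actual reconstruction algorithm.

```python
# Minimal illustration of the time-of-flight idea behind the MIT camera:
# the round-trip time of a returning photon fixes the total path length,
# which constrains where the hidden surface that bounced it can be.
# This is just the basic arithmetic, not the reconstruction algorithm itself.

C = 299_792_458.0  # speed of light, m/s

def path_length_m(return_time_s: float) -> float:
    """Total distance a photon travelled (laser -> wall -> hidden object -> wall -> sensor)."""
    return C * return_time_s

for t_ns in (5, 10, 20):
    print(f"{t_ns} ns round trip -> {path_length_m(t_ns * 1e-9):.2f} m of path")
```

Collect enough of these path-length constraints from different laser spots, and the algorithm can triangulate a rough shape of the object hiding around the corner.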
Imagine a world in which cameras are as connected to the web as cell phones and purchased with contracts from wireless service providers such as AT&T, Verizon, or Sprint. That world may not be too far off. Last week we reported that both Samsung and Panasonic are considering Android-powered cameras that would offer third-party apps and many of the same features offered by mobile phones.
Samsung officials were also quoted as saying that “in a year or two cameras will have the same processing power and memory as smartphones,” and that, “once the cloud computing era truly dawns, a non-connected device will be meaningless. In that case, the camera will need real-time connectivity, and [carriers] are looking for devices like this.”
We’ve all seen photographers make mad dashes into group portraits, hoping to get into position before the camera’s self timer automatically snaps a photograph. Apple wants to make those a thing of the past. A patent filed by the company (#20120057039) describes a smarter self-timer system that uses facial recognition in addition to the standard timer. Using a picture of the photographer’s face, the camera will wait until the shooter is in the scene before starting the countdown, ensuring that everyone in the photo has the same amount of time to put on a picture perfect smile.
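As a rough illustration of the behavior the patent describes, here’s a minimal Python sketch: the countdown is held until the photographer’s face shows up in the frame, then the timer runs as usual. The camera and face-matching hooks are stubbed-out placeholders, not anything from Apple’s filing.

```python
import random
import time

# Hypothetical stand-ins for camera firmware hooks; in a real camera these
# would come from the imaging pipeline, not Python.
def capture_preview():
    """Grab a live-view frame (stubbed as a placeholder object)."""
    return object()

def photographer_in_frame(frame) -> bool:
    """Compare detected faces against a stored picture of the photographer.
    Stubbed with a random result so the sketch runs end to end."""
    return random.random() < 0.3

def smart_self_timer(countdown_s: float = 3.0, poll_s: float = 0.2) -> None:
    """Hold the countdown until the photographer is recognized in the scene,
    then fire after the usual fixed delay."""
    while not photographer_in_frame(capture_preview()):
        time.sleep(poll_s)       # keep polling live view for the photographer
    time.sleep(countdown_s)      # standard self-timer delay once they're in place
    print("Click! Photo captured.")

if __name__ == "__main__":
    smart_self_timer()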
For a recent advertising campaign to bring attention to its hydrogen-powered cars, Mercedes-Benz decided to make a car “invisible” by creating a novel cloaking device using LEDs and a Canon 5D Mark II. One side of the car was covered with several mats of LEDs that displayed what the DSLR saw on the other side.
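At its core the rig is a simple pipeline: read frames from the camera on one side and map them onto the LED mats on the other. Here’s a minimal OpenCV sketch of that idea; the LED resolution and camera source are assumptions, since the actual installation details aren’t spelled out here.

```python
import cv2

# Assumed LED mat resolution; the real rig's panel layout isn't public here.
LED_COLS, LED_ROWS = 192, 64

cap = cv2.VideoCapture(0)  # stand-in for the 5D Mark II feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Downsample the camera view to the LED grid; each pixel drives one LED.
    led_frame = cv2.resize(frame, (LED_COLS, LED_ROWS), interpolation=cv2.INTER_AREA)
    # In the real installation this buffer would be pushed to the LED controller;
    # here we just preview it scaled back up on screen.
    preview = cv2.resize(led_frame, (LED_COLS * 4, LED_ROWS * 4),
                         interpolation=cv2.INTER_NEAREST)
    cv2.imshow("LED preview", preview)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```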