The camera is thrown into the air and captures an image at the highest point of its flight, when it is barely moving. It takes full spherical panoramas, requires no preparation, and exposes all of its sensors at once, so it can capture scenes full of moving objects without producing ghosting artifacts. The resulting images are unlike anything a handheld camera produces.
It uses 36 separate 2-megapixel mobile phone camera modules mounted in a foam-padded enclosure. Photographs can then be downloaded to a computer via USB and viewed in a spherical panoramic viewer. Video after the jump.
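For the curious, here's a back-of-the-envelope sketch of how such a camera could time its shot: integrate the accelerometer reading during the throw to estimate launch velocity, then trigger the shutter v/g seconds later. This is our own illustration of the idea, not the camera's actual firmware.

```python
import numpy as np

def time_to_apex(accel_mags, dt, g=9.81):
    """Crude apex timer: integrate accelerometer magnitude over the throw
    to estimate launch velocity, then wait v/g seconds (in free fall the
    ball decelerates at g; gravity and drag during the brief throw are
    ignored for simplicity)."""
    v_launch = np.sum(np.asarray(accel_mags)) * dt  # integrate the launch boost
    return v_launch / g

# A 0.1 s throw at ~40 m/s^2 gives v ≈ 4 m/s, so fire the shutter ~0.41 s later.
print(time_to_apex([40.0] * 100, dt=0.001))
```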
MIT scientists have discovered that graphene, a material consisting of one-atom-thick sheets of carbon, produces an electric current when struck by light. The researchers say the finding could impact a number of fields, including photography:
Graphene “could be a good photodetector” because it produces current in a different way than other materials used to detect light. It also “can detect over a very wide energy range,” Jarillo-Herrero says. For example, it works very well in infrared light, which can be difficult for other detectors to handle. That could make it an important component of devices from night-vision systems to advanced detectors for new astronomical telescopes.
No word on when DSLRs will start packing graphene sensors.
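To put that "wide energy range" in perspective, here's a quick back-of-the-envelope calculation (ours, not MIT's): photon energy drops as wavelength grows, and silicon, the material in today's camera sensors, stops responding below roughly 1.1 eV (wavelengths past ~1100 nm).

```python
# Photon energy E = h*c / wavelength, or in handy units: E[eV] ≈ 1240 / λ[nm].
for label, wavelength_nm in [("green (visible)", 550),
                             ("near-infrared", 1500),
                             ("thermal infrared", 10000)]:
    print(f"{label}: {1240 / wavelength_nm:.2f} eV")
# ~2.25 eV down to ~0.12 eV. Silicon sensors cut off near ~1.1 eV,
# which is why broad sensitivity deep into the infrared is notable.
```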
A team of researchers at UC Davis has come up with a super-cheap way of turning an iPhone into a microscope, useful for diagnosing diseases in areas where medical equipment is hard to come by. Inspired by the CellScope project at UC Berkeley, Sebastian Wachsmann-Hogiu decided to create something even smaller and cheaper. By taping a 1-millimeter ball lens embedded in a rubber sheet to the iPhone, he was able to boost magnification by 5x, enough for the camera to photograph blood cells. Only a small portion of each image is in focus, so the researchers also use focus stacking to produce more usable photos.
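Focus stacking itself is simple enough to sketch in a few lines of Python. This toy version (our illustration, not the UC Davis code) measures per-pixel sharpness with a Laplacian and keeps each pixel from the frame where it's sharpest:

```python
import numpy as np

def focus_stack(frames):
    """Merge a stack of aligned grayscale frames, keeping each pixel from
    the frame where it is locally sharpest (largest Laplacian response)."""
    stack = np.stack([f.astype(float) for f in frames])
    # Discrete Laplacian as a cheap sharpness measure (edges wrap; fine for a sketch).
    lap = np.abs(np.roll(stack, 1, 1) + np.roll(stack, -1, 1) +
                 np.roll(stack, 1, 2) + np.roll(stack, -1, 2) - 4 * stack)
    best = np.argmax(lap, axis=0)           # index of the sharpest frame per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```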
The best part is the price: each lens costs only $30-40, and would be even cheaper if mass-produced.
We always get a laugh when news organizations or governments try to pass off bad Photoshop jobs as real images, but with the way graphics technology is advancing, bad Photoshop jobs may soon become a thing of the past. Here's a fascinating demo of technology that can quickly and realistically insert fake 3D objects into photographs, complete with lighting and shading. Aside from a few annotations provided by the user (e.g. where the light sources are), the software doesn't need to know anything about the image. Mind-blowing stuff…
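Systems like this build on a classic graphics trick called differential rendering: render a rough model of the scene twice, with and without the fake object, and add only the difference (shadows, reflections, bounce light) to the real photo. A minimal sketch, ours rather than the researchers' code:

```python
import numpy as np

def insert_object(photo, render_full, render_empty, mask):
    """Differential-rendering composite: where the object covers the photo,
    use the rendered object; everywhere else, add only the *change* the
    object causes (shadows, bounce light) to the real photo. Inputs are
    float HxWx3 arrays in [0, 1]; mask is HxWx1, 1 where the object is."""
    return mask * render_full + (1 - mask) * (photo + render_full - render_empty)
```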
Demos at graphics conferences are often interesting to watch because they offer a sneak peek at technologies that may soon become available to the general public. The video above is a demo of “PatchMatch”, an algorithm developed by researchers at Princeton and Adobe. Although you might be unfamiliar with PatchMatch itself, you've probably heard of its most famous application: Content-Aware Fill. Only a small piece of this technology shipped in Photoshop CS5, so the image manipulations seen in this demo are likely a preview of what we'll be seeing in Photoshop CS6.
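If you're wondering how PatchMatch finds matching patches so quickly, the core idea fits in a page of code: start with random guesses, then alternate between propagating good matches from neighboring pixels and randomly searching around the current best at shrinking radii. Here's a bare-bones (and deliberately slow, pure-Python) sketch of the nearest-neighbor-field search at the heart of the algorithm:

```python
import numpy as np

def patch_dist(A, B, ax, ay, bx, by, p):
    """Sum of squared differences between the p x p patches whose
    top-left corners sit at (ax, ay) in A and (bx, by) in B."""
    return np.sum((A[ay:ay+p, ax:ax+p] - B[by:by+p, bx:bx+p]) ** 2)

def patchmatch(A, B, p=7, iters=5, seed=0):
    """Approximate nearest-neighbor field from image A to image B."""
    A, B = A.astype(float), B.astype(float)
    rng = np.random.default_rng(seed)
    ah, aw = A.shape[0] - p + 1, A.shape[1] - p + 1  # valid patch anchors in A
    bh, bw = B.shape[0] - p + 1, B.shape[1] - p + 1  # valid patch anchors in B

    # 1. Initialize every pixel's match to a random location in B.
    nnf = np.stack([rng.integers(0, bw, (ah, aw)),   # matched x in B
                    rng.integers(0, bh, (ah, aw))], axis=-1)
    cost = np.array([[patch_dist(A, B, x, y, nnf[y, x, 0], nnf[y, x, 1], p)
                      for x in range(aw)] for y in range(ah)])

    def try_match(x, y, bx, by):
        # Adopt candidate (bx, by) if it's in bounds and beats the current match.
        if 0 <= bx < bw and 0 <= by < bh:
            d = patch_dist(A, B, x, y, bx, by, p)
            if d < cost[y, x]:
                cost[y, x], nnf[y, x] = d, (bx, by)

    for it in range(iters):
        step = 1 if it % 2 == 0 else -1              # alternate scan direction
        ys = range(ah) if step == 1 else range(ah - 1, -1, -1)
        xs = range(aw) if step == 1 else range(aw - 1, -1, -1)
        for y in ys:
            for x in xs:
                # 2. Propagation: shift the already-visited neighbor's match by one.
                if 0 <= x - step < aw:
                    try_match(x, y, nnf[y, x - step, 0] + step, nnf[y, x - step, 1])
                if 0 <= y - step < ah:
                    try_match(x, y, nnf[y - step, x, 0], nnf[y - step, x, 1] + step)
                # 3. Random search around the current best, halving the radius.
                r = max(bh, bw)
                while r >= 1:
                    try_match(x, y, nnf[y, x, 0] + rng.integers(-r, r + 1),
                              nnf[y, x, 1] + rng.integers(-r, r + 1))
                    r //= 2
    return nnf
```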
We’re now one step closer to being able to take photographs with our minds. Scientists at UC Berkeley have come up with a way to reconstruct what the human brain sees:
[Subjects] watched two separate sets of Hollywood movie trailers
[...] brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.
Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.
Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie. [#]
Unlike the cat brain research video we shared a while back, the resulting imagery in this project isn't directly generated from brain signals, but is instead reconstructed by blending YouTube clips similar to what the person is seeing. The researchers are still calling it a “major leap toward reconstructing internal imagery,” though. In the future this technology might be used to record not just our visual memories, but even our dreams!
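In pipeline terms, the study fits an "encoding model" that predicts brain activity from video features, scores a huge pool of candidate clips by how well their predicted activity matches the measured activity, and averages the best matches. Here's a toy caricature in Python, with random data, made-up dimensions, and plain ridge regression standing in for the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_feat, n_vox = 1000, 200, 500        # toy sizes, purely illustrative
X_train = rng.normal(size=(n_train, n_feat))   # visual features of training clips
Y_train = rng.normal(size=(n_train, n_vox))    # recorded brain activity per clip

# 1. Encoding model: ridge regression from clip features to voxel responses.
lam = 10.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                    X_train.T @ Y_train)

# 2. Predict the activity each candidate clip would evoke (the study used
#    18 million seconds of random YouTube video; 18,000 rows stand in here).
X_pool = rng.normal(size=(18000, n_feat))
Y_pred = X_pool @ W

# 3. Rank candidates by correlation with the measured response and
#    average the top 100 to get the blurry reconstruction.
def zscore(M):
    return (M - M.mean(axis=1, keepdims=True)) / M.std(axis=1, keepdims=True)

y_measured = rng.normal(size=(1, n_vox))
scores = zscore(Y_pred) @ zscore(y_measured)[0] / n_vox
top100 = np.argsort(scores)[-100:]
# reconstruction = mean of the 100 corresponding video frames (omitted)
```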
According to the smart folks over at MIT, this video shows footage captured at an unbelievable one trillion frames per second: a pulse of light caught in the act of traveling through a plastic bottle. Here's the (somewhat confusing) explanation found on the project's website:
We use a picosecond-accurate detector (a single pixel). Another option is a special camera called a streak camera that behaves like an oscilloscope with corresponding trigger and deflection of beams. A light pulse enters the instrument through a narrow slit along one direction. It is then deflected in the perpendicular direction so that photons that arrive first hit the detector at a different position compared to photons that arrive later. The resulting image forms a “streak” of light. Streak cameras are often used in chemistry or biology to observe millimeter-sized objects but rarely for free-space imaging.
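If the trillion-frames-per-second figure sounds unbelievable, a one-line sanity check (ours) helps: at that frame rate, light itself advances only a fraction of a millimeter between frames, which is exactly why you can watch a pulse crawl across an object.

```python
c = 2.998e8            # speed of light in m/s
fps = 1e12             # one trillion frames per second
print(c / fps * 1000)  # ~0.3: light moves only ~0.3 mm from one frame to the next
```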
Ever wonder why certain people seem to engage in meaningless Canon vs. Nikon (vs. everyone else) camera brand debates at every opportunity? A recent study conducted at the University of Illinois found that the more knowledge of and experience with a particular brand you have, the more strongly your self-esteem is tied to it. Ars Technica writes,
Those who had high self-brand connections (SBC)—that is, those who follow, research, or simply like a certain brand—were the ones whose self-esteem suffered the most when their brands didn't do well or were criticized. Those with low SBC remained virtually unaffected on a personal level.
The residual effect of this is that those with high SBCs tend to discount negative news about their favorite brands, and sometimes even ignore it altogether in favor of happier thoughts.
So that’s why feathers are so easily ruffled when camera brands are bashed…
Update: It looks like the video was taken down by the uploader. Sorry guys.
Color is simply how our brains respond to different wavelengths of light; wavelengths outside the visible spectrum are invisible and colorless to us because our eyes can't detect them. And since colors are constructed in our brains, what if we all see colors differently from one another? The BBC created a fascinating program called “Do You See What I See?” that explores this question, and the findings are pretty startling.
Facial recognition features are appearing in everything from cameras to photo-sharing sites, but have you thought about the security and privacy concerns this technology introduces? Fast Company has published a piece on how mobile apps may soon be able to quickly look up your identity, your personal information, and perhaps even your social security number!
[CMU researchers] used three relatively simple technologies to create their face recognition system: An off-the-shelf face recognizer, cloud computing processing, and personal data available through the public feed at social networking sites such as Facebook [...] Combining the data gathered from the face recognizer hardware with clever search algorithms that were processed on a cloud-computing platform, the team has performed three powerful experiments: They were able to “unmask” people on a popular dating site where it’s common to protect real identities using pseudonyms, and they ID’d students walking in public on campus by grabbing their profile photos from Facebook.
Most impressively, the research algorithm tried to predict personal interests and even to deduce the social security numbers of CMU students based solely on an image of their face, by digging deeper into information that's freely available online.
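To give a sense of how little code the first step takes these days, here's a sketch of face-matching using the open-source face_recognition Python library, our stand-in for the off-the-shelf recognizer the CMU team used; all names and filenames below are hypothetical.

```python
import face_recognition

# Encode a few public profile photos (names and files are hypothetical).
profiles = {"alice": "alice_profile.jpg", "bob": "bob_profile.jpg"}
known = {name: face_recognition.face_encodings(
                   face_recognition.load_image_file(path))[0]
         for name, path in profiles.items()}

# Encode a face snapped in public and find the closest profile.
probe = face_recognition.face_encodings(
    face_recognition.load_image_file("street_photo.jpg"))[0]
names = list(known)
dists = face_recognition.face_distance([known[n] for n in names], probe)
best_dist, best_name = min(zip(dists, names))
print(best_name if best_dist < 0.6 else "no match")  # 0.6: library's usual tolerance
```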