Picture Post is an interesting (and NASA-funded) citizen science project that crowdsources environmental monitoring by putting passing photographers to work. Anyone around the world can install a Picture Post:
A Picture Post is a 4”x4” post made of wood or recycled plastic with enough of the post buried in the ground so it extends below the frost line and stays secure throughout the year. Atop the post is a small octagonal-shaped platform or cap on which you can rest your camera to take a series of nine photographs.
People who walk by can then use the guide on the post to capture nine photos in all directions and upload them to the Picture Post website. The resulting panoramas can then be browsed by date, giving a cool look at how a particular location changes over time. Read more…
On a rainy day recently, light painting photographer Jeremy Jackson was playing around with a green laser pointer when he discovered something interesting: all the out-of-focus raindrops in the photograph had a lined pattern in them — and each one was unique! These “water drop snowflakes” were found in all of the photos he took that day.
MIT scientists have discovered that graphene, a material consisting of one-atom thick sheets of carbon, produces electric current when struck by light. The researchers say the finding could impact a number of fields, including photography:
Graphene “could be a good photodetector” because it produces current in a different way than other materials used to detect light. It also “can detect over a very wide energy range,” Jarillo-Herrero says. For example, it works very well in infrared light, which can be difficult for other detectors to handle. That could make it an important component of devices from night-vision systems to advanced detectors for new astronomical telescopes.
No word on when DSLRs will start packing graphene sensors.
We’re now one step closer to being able to take photographs with our minds. Scientists at UC Berkeley have come up with a way to reconstruct what the human brain sees:
[Subjects] watched two separate sets of Hollywood movie trailers
[...] brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.
Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.
Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie. [#]
Unlike the cat brain research video we shared a while back, the resulting imagery in this project isn’t directly generated from brain signals but is instead reconstructed from YouTube clips similar to what the person is thinking. They’re still calling it a “major leap toward reconstructing internal imagery” though. In the future this technology might be used to record not just our visual memories, but even our dreams!
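The general recipe described above (learn an encoding model from the first set of clips, predict the brain response each library clip would evoke, then average the best-matching clips) can be sketched with toy data. Everything below is a stand-in: the linear "brain," the feature vectors, and the cosine-similarity ranking are assumptions for illustration, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: each "clip" is a feature vector, and each brain response
# is a linear function of those features plus noise (hypothetical model).
n_features, n_voxels = 8, 20
true_weights = rng.normal(size=(n_features, n_voxels))

def brain_response(clip_features):
    """Simulated brain response to one clip (invented linear model)."""
    return clip_features @ true_weights + rng.normal(scale=0.1, size=n_voxels)

# Step 1: learn an encoding model from (clip, response) training pairs,
# mirroring the "first set of trailers" phase.
train_clips = rng.normal(size=(200, n_features))
train_resps = np.stack([brain_response(c) for c in train_clips])
weights, *_ = np.linalg.lstsq(train_clips, train_resps, rcond=None)

# Step 2: for an unseen response, predict what each clip in a large
# "YouTube library" would evoke, and rank library clips by similarity.
library = rng.normal(size=(1000, n_features))
predicted = library @ weights            # predicted response per library clip

unseen_clip = rng.normal(size=n_features)
observed = brain_response(unseen_clip)

scores = predicted @ observed / (
    np.linalg.norm(predicted, axis=1) * np.linalg.norm(observed))
top = np.argsort(scores)[::-1][:100]     # indices of the 100 best matches

# Step 3: average the top-ranked clips into a blurry "reconstruction."
reconstruction = library[top].mean(axis=0)
```

The averaging step is why the published reconstructions look blurry: the output is a blend of many similar-but-not-identical clips, not a decoded image.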
According to the smart folks over at MIT, this video shows footage that was captured at an unbelievable one trillion frames per second. It appears to show some kind of light pulse traveling through some kind of object. Here’s a confusing explanation found on the project’s website:
We use a picosecond-accurate detector (single pixel). Another option is a special camera called a streak camera that behaves like an oscilloscope with corresponding trigger and deflection of beams. A light pulse enters the instrument through a narrow slit along one direction. It is then deflected in the perpendicular direction so that photons that arrive first hit the detector at a different position compared to photons that arrive later. The resulting image forms a “streak” of light. Streak cameras are often used in chemistry or biology to observe millimeter-sized objects but rarely for free space imaging.
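The streak-camera trick the quote describes, turning photon arrival time into detector position with a fast deflection sweep, boils down to a linear mapping. Here's a minimal numeric sketch; all the numbers are made up for illustration, not specs from the MIT setup.

```python
# Toy streak camera: a deflection ramp maps arrival TIME onto detector
# POSITION, so picosecond timing becomes ordinary spatial imaging.
# All numbers are hypothetical.

sweep_duration_ps = 100.0     # how long the deflection ramp lasts
detector_width_mm = 10.0      # physical width of the sensor

def streak_position(arrival_time_ps):
    """Detector position (mm) for a photon arriving at a given time."""
    fraction = arrival_time_ps / sweep_duration_ps   # 0.0 to 1.0 over sweep
    return fraction * detector_width_mm

# Two photons arriving 1 ps apart land at measurably different spots.
p1 = streak_position(40.0)
p2 = streak_position(41.0)
separation = p2 - p1          # 0.1 mm apart in this toy geometry
```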
While we’re on the subject of photos of Earth, did you know that the first photo showing the entire planet was captured by an unmanned NASA orbiter from the moon back in 1966? To accomplish this, they had to come up with a camera that could expose, process, scan, and transmit film photographs — something “akin to a flying television station and photographic mini-lab”. Read more…
Update: It looks like the video was taken down by the uploader. Sorry guys.
Color is simply how our brains respond to different wavelengths of light; wavelengths outside the visible spectrum are invisible and colorless to us because our eyes can’t detect them. Since colors are created in our brains, what if we all see colors differently from one another? BBC created a fascinating program called “Do You See What I See?” that explores this question, and the findings are pretty startling. Read more…
A camera that shoots 5000 frames per second is enough to capture slow-motion footage of a bullet flying through the air, but scientists at the Science and Technology Facilities Council have now announced a camera that shoots a staggering 4.5 million frames per second. Rather than bullets, the camera is designed to capture 3D images of individual molecules using powerful x-ray flashes that last one hundred million billionth of a second. The £3 million camera will land in scientists’ hands in 2015.
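For a sense of scale, the time between consecutive frames is just the reciprocal of the frame rate, so the two cameras mentioned above can be compared with one line of arithmetic:

```python
# Inter-frame interval is the reciprocal of the frame rate.
def frame_interval_ns(fps):
    """Time between consecutive frames, in nanoseconds."""
    return 1e9 / fps

slow_mo = frame_interval_ns(5_000)        # 200,000 ns (0.2 ms) per frame
xray_cam = frame_interval_ns(4_500_000)   # ~222 ns per frame
```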
A group of neuroscientists at MIT recently conducted a study to determine what makes photographs memorable. After gathering about 10,000 diverse photos, they showed a series of them to human subjects and asked them to identify whenever a photo was a repeat of one previously shown. They found that photos containing people are the most memorable, while natural landscapes are least memorable and easily forgotten.
What’s more, the scientists used the findings to develop a computer algorithm that can quantify how memorable a particular photo is. Cameras in the future might be able to tell you the memorability of photos as you’re taking them!
Ever wonder how images magically appear on Polaroid prints? Photojojo offers a simple explanation of how the process works:
[...] your instant camera ejects the picture in between two metal rollers. The rollers pinch the chemical packets on the bottom of your film, break them open, and spread the developer chemicals all over the surface of your image. [#]
They also have some other interesting “photo science” explanations here. For a more in-depth look, check out this HowStuffWorks article on instant cameras.