Researchers have created the first comprehensive image of the entire 3-by-5-mile debris field surrounding the wreck of the Titanic:
Compiled from more than 100,000 photos taken by underwater robots, the composite image shows the world’s best-remembered shipwreck in strikingly sharp detail. Although much of the debris is hidden, you can see how the ship split apart and tell from the wreckage how violently the two halves struck the seafloor. On April 15, just over a month from now, it will have been a century since the ship hit an iceberg and sank to the bottom of the Atlantic.
German scientists have been awarded a Guinness World Record for “fastest movie” after successfully capturing two images of an X-ray laser beam 50 femtoseconds apart. One femtosecond is equal to one quadrillionth (or one millionth of one billionth) of a second. Here’s some science talk explaining it:
[...] the scientists split the X-ray laser beam into two flashes and sent one of them via a detour of only 0.015 millimetres, making it arrive 50 femtoseconds later than the first one. Since no detector can be read out so fast, the scientists stored both images as superimposed holograms, allowing the subsequent reconstruction of the single images.
With these experiments, the scientists showed that this record slow motion is achievable. However, they not only took the world’s fastest but probably also the shortest film – with just two images. Thus, additional development work is necessary before this method can be used in practice. [#]
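The detour figure in the excerpt checks out: dividing the extra path length by the speed of light gives almost exactly 50 femtoseconds. A quick back-of-the-envelope check, assuming nothing beyond the numbers quoted above:

```python
# Sanity-check the press release: an extra path of 0.015 mm at the
# speed of light corresponds to a ~50 fs delay between the two flashes.
C = 299_792_458.0          # speed of light, m/s
detour_m = 0.015e-3        # 0.015 millimetres, expressed in metres

delay_s = detour_m / C     # extra travel time for the delayed flash
delay_fs = delay_s * 1e15  # convert seconds to femtoseconds

print(f"{delay_fs:.1f} fs")  # ≈ 50 fs, matching the reported gap
```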
In a paper published in Science this week, Japanese researchers reported that jumping spiders gauge distance using a method called “image defocus”, which no other living organism is known to use. Rather than relying on focusing and stereoscopic vision like humans, or the head-wobbling motion parallax of birds, the spiders have two green-detecting layers in their eyes: one in focus and one not. By comparing the two images, the spiders can judge their distance from objects. The researchers also found that bathing the spiders in pure red light “breaks” their distance-measuring ability. Read more…
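The paper’s actual model is more involved, but the core idea of depth from defocus can be sketched with the thin-lens equation. In this toy version (all lens parameters are hypothetical, not from the study), the blur-circle size on two “retinal layers” at slightly different depths changes with object distance, so comparing the two blurs encodes distance:

```python
# Toy depth-from-defocus sketch, NOT the paper's model: with a thin lens,
# the blur on two sensing layers at different depths varies with object
# distance, so the difference between the two blurs encodes distance.
def image_distance(f, u):
    """Thin-lens equation: distance behind the lens where an object at u focuses."""
    return 1.0 / (1.0 / f - 1.0 / u)

def blur_diameter(f, aperture, sensor_v, u):
    """Diameter of the blur circle on a sensing plane at distance sensor_v."""
    v = image_distance(f, u)
    return aperture * abs(sensor_v - v) / v

# Hypothetical lens parameters (arbitrary units), chosen for illustration
f, aperture = 0.5, 0.2
layer_a = image_distance(f, 10.0)  # layer in perfect focus for objects at u = 10
layer_b = layer_a * 1.05           # second layer slightly farther from the lens

diffs = []
for u in (5.0, 10.0, 20.0):
    d = blur_diameter(f, aperture, layer_b, u) - blur_diameter(f, aperture, layer_a, u)
    diffs.append(d)
    print(f"object at u={u:5.1f}: blur difference = {d:+.4f}")
```

Over this range the blur difference changes monotonically with distance, which is what makes it invertible into a depth estimate.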
Picture Post is an interesting (and NASA-funded) citizen science project that crowdsources the task of environmental monitoring to photographers. Anyone around the world can install a Picture Post:
A Picture Post is a 4”x4” post made of wood or recycled plastic with enough of the post buried in the ground so it extends below the frost line and stays secure throughout the year. Atop the post is a small octagonal-shaped platform or cap on which you can rest your camera to take a series of nine photographs.
People who walk by can then use the guide on the post to capture nine photos in all directions and upload them to the Picture Post website. The resulting panoramas can be browsed by date, giving a cool look at how a particular location changes over time. Read more…
On a recent rainy day, light painting photographer Jeremy Jackson was playing around with a green laser pointer when he noticed something interesting: every out-of-focus raindrop in the photograph contained a lined pattern, and each one was unique! These “water drop snowflakes” appeared in all of the photos he took that day.
MIT scientists have discovered that graphene, a material consisting of one-atom thick sheets of carbon, produces electric current when struck by light. The researchers say the finding could impact a number of fields, including photography:
Graphene “could be a good photodetector” because it produces current in a different way than other materials used to detect light. It also “can detect over a very wide energy range,” Jarillo-Herrero says. For example, it works very well in infrared light, which can be difficult for other detectors to handle. That could make it an important component of devices from night-vision systems to advanced detectors for new astronomical telescopes.
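One way to see why infrared is hard for conventional sensors: a silicon photodiode only registers photons carrying more energy than silicon’s bandgap of roughly 1.12 eV, while gapless graphene has no such cutoff. A quick calculation of photon energy at a common infrared wavelength (the bandgap figure is standard textbook physics, added here for context, not from the article):

```python
# Photon energy E = h*c / wavelength, compared against silicon's bandgap.
H = 6.62607015e-34     # Planck constant, J*s
C = 299_792_458.0      # speed of light, m/s
EV = 1.602176634e-19   # joules per electronvolt

wavelength_m = 1.55e-6                  # 1550 nm, a common infrared wavelength
energy_ev = H * C / wavelength_m / EV   # photon energy in eV

print(f"1550 nm photon: {energy_ev:.2f} eV")  # below silicon's ~1.12 eV bandgap
```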
No word on when DSLRs will start packing graphene sensors.
We’re now one step closer to being able to take photographs with our minds. Scientists at UC Berkeley have come up with a way to reconstruct what the human brain sees:
[Subjects] watched two separate sets of Hollywood movie trailers
[...] brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.
Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.
Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie. [#]
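The three quoted steps map onto a fairly standard encode-and-match pipeline: learn a map from stimulus features to brain responses, predict responses for a large library of candidate clips, then average the best-matching candidates. Here is a toy numpy sketch with made-up dimensions and synthetic data (the real study used fMRI voxels and far richer models; everything below is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 training clips, 50 features each, 30 "voxels"
n_train, n_feat, n_vox = 200, 50, 30
X_train = rng.normal(size=(n_train, n_feat))
W_true = rng.normal(size=(n_feat, n_vox))
Y_train = X_train @ W_true + 0.1 * rng.normal(size=(n_train, n_vox))

# Step 1: ridge regression encoding model (clip features -> brain response)
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                    X_train.T @ Y_train)

# Step 2: predict the brain activity each library clip would evoke
library = rng.normal(size=(5000, n_feat))   # stand-in for random YouTube clips
predicted = library @ W

# A held-out "viewed" clip and the measured activity it evokes
target = rng.normal(size=n_feat)
measured = target @ W_true + 0.1 * rng.normal(size=n_vox)

def corr(a, b):
    """Pearson correlation between two vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Step 3: rank candidates by match with measured activity, average the top 100
scores = np.array([corr(p, measured) for p in predicted])
top = np.argsort(scores)[-100:]
reconstruction = library[top].mean(axis=0)

print(f"similarity of reconstruction to viewed clip: {corr(reconstruction, target):.2f}")
```

The averaged result is blurry for the same reason the study’s videos are: it is a blend of many merely similar clips, not a direct readout of the brain.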
Unlike the cat brain research video we shared a while back, the imagery in this project isn’t generated directly from brain signals; instead, it’s reconstructed from YouTube clips similar to what the person was watching. The researchers still call it a “major leap toward reconstructing internal imagery,” though. In the future this technology might be used to record not just our visual memories, but even our dreams!
According to the smart folks over at MIT, this video shows footage that was captured at an unbelievable one trillion frames per second. It appears to show some kind of light pulse traveling through some kind of object. Here’s a confusing explanation found on the project’s website:
We use a picosecond-accurate detector (single pixel). Another option is a special camera called a streak camera that behaves like an oscilloscope with corresponding trigger and deflection of beams. A light pulse enters the instrument through a narrow slit along one direction. It is then deflected in the perpendicular direction so that photons that arrive first hit the detector at a different position compared to photons that arrive later. The resulting image forms a “streak” of light. Streak cameras are often used in chemistry or biology to observe millimeter-sized objects but rarely for free space imaging.
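To see why a trillion frames per second is enough to watch light itself move: at that rate each frame spans one picosecond, during which light covers only a fraction of a millimeter. A quick check of the arithmetic, assuming nothing beyond the stated frame rate:

```python
C = 299_792_458.0      # speed of light, m/s
fps = 1e12             # one trillion frames per second

frame_interval_s = 1.0 / fps                    # 1 picosecond per frame
light_per_frame_mm = C * frame_interval_s * 1e3 # distance light covers per frame

print(f"{frame_interval_s * 1e12:.0f} ps per frame")      # prints "1 ps per frame"
print(f"light moves {light_per_frame_mm:.2f} mm between frames")  # prints "0.30 mm"
```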
While we’re on the subject of photos of Earth, did you know that the first photo showing the entire planet was captured by an unmanned NASA orbiter from the moon back in 1966? To accomplish this, they had to come up with a camera that could expose, process, scan, and transmit film photographs — something “akin to a flying television station and photographic mini-lab”. Read more…