Posts Tagged ‘science’
YouTube user Roy Prol created this fascinating animation that imagines what Earth would be like if our planet had Saturn-like rings. In addition to views from space, he shows us beautiful renderings of what the rings would look like in landscape photos captured at famous landmarks around the world (e.g. the Eiffel Tower in Paris, Christ the Redeemer in Rio de Janeiro).
Back in 2010 we shared that MIT was developing a special camera that uses echoes of light to see around corners. Now, two years later, the researchers are finally showing off the camera in action. It works by firing 50-femtosecond (a femtosecond is a quadrillionth of a second) laser pulses 60 times at various spots on an angled wall. A special imaging sensor then collects the scattered light that’s reflected back, and complex algorithms piece together the scene based on how long the photons take to return. The process currently takes several minutes, but the researchers hope to reduce it to less than 10 seconds, which would make it more useful for military and industrial applications.
(via Scientific American)
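The timing arithmetic underlying that reconstruction is easy to illustrate, even though the full inverse problem the researchers solve is far harder. Here’s a minimal sketch of the time-of-flight idea, using hypothetical numbers rather than anything from the actual system:

```python
# Time-of-flight sketch: a photon's total path length follows directly
# from its return delay (hypothetical numbers, not the real system's data).
C = 299_792_458.0  # speed of light in m/s

def path_length_from_delay(delay_s: float) -> float:
    """Total distance a photon travelled, given its round-trip delay."""
    return C * delay_s

# A photon arriving 10 nanoseconds after the pulse fired travelled about
# 3 metres along its bounce path (wall -> hidden object -> wall -> sensor).
print(path_length_from_delay(10e-9))  # ~3.0 m
```

The hard part, which those algorithms handle, is untangling millions of such delays into the hidden geometry that produced them.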
Researchers have created the first comprehensive image of the entire 3×5-mile debris field surrounding the wreck of the Titanic:
Compiled from more than 100,000 photos taken by underwater robots, the composite image shows the world’s best-remembered shipwreck in strikingly sharp detail. Although much of the debris is hidden, you can see how the ship split apart and tell from the wreckage that the pieces hit the seafloor violently. In just over a month, on April 15, it will have been a century since the ship struck an iceberg and sank to the bottom of the Atlantic.
Image credit: Photograph by RMS Titanic Inc.
Here’s a simple lesson by Dylan Bennett on what depth of field is, how it works, and how to control it in your photography.
German scientists have been awarded a Guinness World Record for “fastest movie” after successfully capturing two images of an X-ray laser beam 50 femtoseconds apart. One femtosecond is equal to one quadrillionth (or one millionth of one billionth) of a second. Here’s some science talk explaining it:
[...] the scientists split the X-ray laser beam into two flashes and sent one of them via a detour of only 0.015 millimetres, making it arrive 50 femtoseconds later than the first one. Since no detector can be read out so fast, the scientists stored both images as superimposed holograms, allowing the subsequent reconstruction of the single images.
With these experiments, the scientists showed that this record slow motion is achievable. However, they not only made the world’s fastest film but probably also the shortest, with just two images. Thus, additional development work is necessary before the method can be used in practice. [#]
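The quoted detour figure checks out: dividing the 0.015 mm path difference by the speed of light gives roughly 50 femtoseconds.

```python
# Verifying the press release's arithmetic: a 0.015 mm detour at the
# speed of light corresponds to a delay of about 50 femtoseconds.
C = 299_792_458.0        # speed of light in m/s
detour_m = 0.015e-3      # 0.015 millimetres, in metres

delay_fs = detour_m / C * 1e15  # delay in femtoseconds
print(round(delay_fs, 1))  # ~50.0
```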
And we thought one trillion frames per second was impressive…
Image credit: Photograph by Stefan Eisebitt/HZB
In a paper published in Science this week, Japanese researchers reported on a discovery that jumping spiders use a method for gauging distance called “image defocus,” which no other living organism is known to use. Rather than using focusing and stereoscopic vision like humans, or head-wobbling motion parallax like birds, the spiders have two green-detecting layers in their eyes, one in focus and one not. By comparing the two, the spiders can determine their distance to objects. The scientists discovered that bathing spiders in pure red light “breaks” this distance-measuring ability: the defocus cue appears to be tuned to green wavelengths, so under red light the spiders consistently misjudge their jumps.
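As a rough illustration of the depth-from-defocus principle (a toy thin-lens model with made-up parameters, not a model of the spiders’ actual eyes), the amount of blur a point produces can be mapped back to a distance estimate:

```python
# Toy depth-from-defocus: blur-circle size grows as a point moves away
# from the focal plane, so measured blur can be inverted to a distance.
# All parameters here are made up for illustration.

def blur_diameter(d, d_f=0.10, f=0.002, aperture=0.001):
    """Blur-circle diameter (m) for a point at distance d, focus at d_f."""
    return aperture * f * abs(d - d_f) / (d * (d_f - f))

def distance_from_blur(b, d_f=0.10, f=0.002, aperture=0.001):
    """Invert the model for points beyond the focal plane (d > d_f)."""
    return aperture * f * d_f / (aperture * f - b * (d_f - f))

d_true = 0.25                  # a point 25 cm away
b = blur_diameter(d_true)      # how blurry it appears
print(distance_from_blur(b))   # recovers ~0.25
```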
Picture Post is an interesting (and NASA-funded) project that turns photographers into citizen scientists, crowdsourcing the task of environmental monitoring. Anyone around the world can install a Picture Post:
A Picture Post is a 4”x4” post made of wood or recycled plastic with enough of the post buried in the ground so it extends below the frost line and stays secure throughout the year. Atop the post is a small octagonal-shaped platform or cap on which you can rest your camera to take a series of nine photographs.
People who walk by can then use the guide on the post to capture nine photos covering all directions and upload them to the Picture Post website. The resulting panoramas can be browsed by date, giving a cool look at how a particular location changes over time.
On a rainy day recently, light painting photographer Jeremy Jackson was playing around with a green laser pointer when he discovered something interesting: all of the out-of-focus raindrops in his photographs had a lined pattern in them, and each one was unique! These “water drop snowflakes” appeared in every photo he took that day.
Anyone know what causes this phenomenon?
Image credit: Photograph by Jeremy Jackson and used with permission
MIT scientists have discovered that graphene, a material consisting of one-atom thick sheets of carbon, produces electric current when struck by light. The researchers say the finding could impact a number of fields, including photography:
Graphene “could be a good photodetector” because it produces current in a different way than other materials used to detect light. It also “can detect over a very wide energy range,” Jarillo-Herrero says. For example, it works very well in infrared light, which can be difficult for other detectors to handle. That could make it an important component of devices from night-vision systems to advanced detectors for new astronomical telescopes.
No word on when DSLRs will start packing graphene sensors.
P.S. Did you know that graphene was first discovered in 2004 after a thin layer of pencil lead was pulled off using some ordinary tape?
Image credit: Illustration by AlexanderAlUS
We’re now one step closer to being able to take photographs with our minds. Scientists at UC Berkeley have come up with a way to reconstruct what the human brain sees:
[Subjects] watched two separate sets of Hollywood movie trailers
[...] brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.
Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.
Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie. [#]
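The final averaging step in that excerpt can be sketched with NumPy, using random stand-in data in place of the real fMRI recordings and YouTube frames:

```python
import numpy as np

rng = np.random.default_rng(0)
measured = rng.normal(size=100)            # brain activity for one moment of film
predicted = rng.normal(size=(5000, 100))   # predicted activity per candidate clip
frames = rng.random(size=(5000, 32, 32))   # one frame per candidate clip

# Score each candidate by how well its predicted brain activity matches
# the measured activity, then average the frames of the 100 best matches
# into a blurry reconstruction.
scores = np.array([np.corrcoef(p, measured)[0, 1] for p in predicted])
top_100 = np.argsort(scores)[-100:]
reconstruction = frames[top_100].mean(axis=0)
print(reconstruction.shape)  # (32, 32)
```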
Unlike the cat brain research video we shared a while back, the resulting imagery in this project isn’t directly generated from brain signals, but is instead reconstructed from YouTube clips similar to what the person is seeing. The researchers are still calling it a “major leap toward reconstructing internal imagery,” though. In the future this technology might be used to record not just our visual memories, but even our dreams!