We’re now one step closer to being able to take photographs with our minds. Scientists at UC Berkeley have come up with a way to reconstruct what the human brain sees:
[Subjects] watched two separate sets of Hollywood movie trailers
[…] brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.
Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.
Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie. [#]
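The procedure quoted above can be sketched in code: an encoding model predicts the brain activity each candidate clip would evoke, the clips are ranked by how well those predictions match the activity actually recorded, and the top 100 are averaged into one blurry frame. This is only a toy illustration with random stand-in data (tiny frame sizes, a thousand clips instead of 18 million, a made-up correlation scorer), not the researchers' actual pipeline.

```python
import numpy as np

# Toy sketch of the quoted reconstruction procedure. All data here is random
# stand-in data; the real study used an fMRI encoding model and 18M clips.
rng = np.random.default_rng(0)
n_clips, n_voxels = 1000, 50    # stand-ins for 18M clips / real voxel counts
frame_shape = (8, 8)            # tiny stand-in for real video frames

# Step 1: encoding model output — predicted brain activity for each clip.
predicted_activity = rng.standard_normal((n_clips, n_voxels))
clip_frames = rng.random((n_clips, *frame_shape))     # one frame per clip
# Activity actually measured while the subject viewed the test clip.
observed_activity = rng.standard_normal(n_voxels)

def correlation(a, b):
    """Pearson correlation between two activity vectors."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Step 2: rank clips by how well their predicted activity matches observation.
scores = np.array([correlation(p, observed_activity) for p in predicted_activity])
top100 = np.argsort(scores)[::-1][:100]

# Step 3: merge (average) the 100 best-matching clips into one reconstruction.
reconstruction = clip_frames[top100].mean(axis=0)
print(reconstruction.shape)
```

Averaging many near-matches is why the published reconstructions look blurry but continuous: no single YouTube clip matches the stimulus, but their mean captures its rough shape and motion.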
Unlike the cat brain research video we shared a while back, the resulting imagery in this project isn’t directly generated from brain signals but is instead reconstructed by averaging YouTube clips similar to what the person was watching. They’re still calling it a “major leap toward reconstructing internal imagery” though. In the future this technology might be used to record not just our visual memories, but even our dreams!
You might want to skip this post if you’re squeamish. A filmmaker named Rob Spence has successfully become a cyborg by replacing an eye he lost in a childhood accident with a wireless camera that transmits everything he sees to a computer. Spence believes that technology may soon reach the point where people are tempted to swap out their body parts for superior prosthetics. No word on when he’ll be able to apply Instagram filters to his eye camera photos.
What if in the future, the human eye itself could be turned into a camera by simply reading and recording the data that it sends to the brain? As crazy as it sounds, researchers have already accomplished this at a very basic level:
In 1999, researchers led by Yang Dan at University of California, Berkeley decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus (which integrates all of the brain’s sensory input) of sharp-eyed cats. Researchers targeted 177 brain cells in the thalamus lateral geniculate nucleus area, which decodes signals from the retina. The cats were shown eight short movies, and their neuron firings were recorded. Using mathematical filters, the researchers decoded the signals to generate movies of what the cats saw and were able to reconstruct recognizable scenes and moving objects. [#]
Basically, the scientists were able to tap into the brain of a cat and display what the cat was seeing on a computer screen. Something similar was accomplished with humans a few years ago, and scientists believe that in the future we may even be able to “photograph” human dreams!
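The “mathematical filters” in the cat study were linear decoders: fit a filter that maps recorded firing rates back to stimulus pixels, then apply it to held-out responses. Here is a minimal simulated sketch of that idea, assuming a linear encoding with noise; the 177-cell count comes from the quote, but everything else (pixel counts, noise level, least-squares fitting) is an illustrative assumption, not the study's actual method.

```python
import numpy as np

# Simulated sketch of linear stimulus decoding in the spirit of the cat study.
rng = np.random.default_rng(1)
n_neurons, n_pixels, n_samples = 177, 64, 5000  # 177 cells as in the study

# Simulated (unknown) neural encoding: firing rates are a noisy linear
# function of the stimulus pixels.
true_filter = rng.standard_normal((n_neurons, n_pixels))
stimuli = rng.standard_normal((n_samples, n_pixels))        # movie frames, flattened
responses = stimuli @ true_filter.T \
    + 0.1 * rng.standard_normal((n_samples, n_neurons))     # recorded firings

# Fit the decoding filter by least squares: stimuli ≈ responses @ W.
W, *_ = np.linalg.lstsq(responses, stimuli, rcond=None)

# Decode a held-out frame from its neural response alone.
test_frame = rng.standard_normal(n_pixels)
test_response = test_frame @ true_filter.T
decoded = test_response @ W

# How close is the decoded frame to the one the "cat" actually saw?
print(round(float(np.corrcoef(test_frame, decoded)[0, 1]), 2))
```

With enough training frames and a roughly linear response, the decoded image correlates strongly with the true stimulus, which is why the team could recover recognizable scenes and moving objects.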
San Franciscan Tanya Vlach lost her left eye in a car accident back in 2005. Dissatisfied with her prosthetic eye, she’s trying to raise money to develop an in-eye camera that captures blink-activated still photos and 720p HD video. Her wish list of features includes geotagging, IR/UV capture, facial recognition, and sensor-activated zoom, focus, and on/off. Vlach’s Kickstarter project is titled “Grow a new eye”, and has a funding goal of $15,000 by August 3rd, 2011.