As post-production software grows increasingly powerful, researchers are doing their best to keep up by developing new methods of spotting digital photo fakes. In the past, we’ve seen that noise patterns and even Twitter trends can help spot fakes, but a new method out of UC Berkeley is taking a look at something else entirely: the shadows.
UC Berkeley’s library system is the fourth largest in the United States, so it’s no wonder that treasures are often forgotten and buried inside the rare collections. Case in point: a massive collection of signed prints by Ansel Adams has been discovered in one of the 32 libraries, just sitting around in a box.
The University of California has agreed to dish out a $162,500 settlement to David Morse, a 43-year-old photographer who was arrested back in 2009 while covering a student protest. The SF Chronicle writes,
[The suit said] an officer told Morse, “We want your camera. We believe your camera contains evidence of a crime.”
The officers ignored his press pass and arrested him and seven others on suspicion of rioting, threatening an education official, attempted burglary, attempted arson of an occupied building, vandalism, and assault with a deadly weapon on a police officer, the suit said.
Morse spent the night in jail. Prosecutors declined to file charges.
But police obtained a search warrant and used several of his photos in brochures and online in hopes that the public could identify individuals.
As part of the settlement, the police department has also agreed to modify its procedures for seeking materials from journalists and will conduct training sessions teaching its officers about media rights.
We’re now one step closer to being able to take photographs with our minds. Scientists at UC Berkeley have come up with a way to reconstruct what the human brain sees:
[Subjects] watched two separate sets of Hollywood movie trailers
[...] brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.
Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.
Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie. [#]
Unlike the cat brain research video we shared a while back, the resulting imagery in this project isn’t directly generated from brain signals but is instead reconstructed from YouTube clips similar to what the person is seeing. The researchers are still calling it a “major leap toward reconstructing internal imagery,” though. In the future this technology might be used to record not just our visual memories, but even our dreams!
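The match-and-average procedure the researchers describe — predict the brain activity each library clip would evoke, rank the clips by how well that prediction matches the measured activity, then blend the top 100 — can be sketched in a few lines. This is a toy illustration, not the study’s actual model: the encoding model here is a made-up random projection, the clip “features” are random vectors, and the library is 1,000 clips rather than 18 million seconds of YouTube footage.

```python
import numpy as np

rng = np.random.default_rng(0)

n_clips = 1000   # toy stand-in for the 18M seconds of YouTube footage
n_voxels = 50    # simplified stand-in for fMRI voxel measurements
n_features = 64  # per-clip visual features (hypothetical)

# Hypothetical encoding model: maps a clip's visual features to predicted
# brain activity. In the real study this mapping is *learned* from the
# first set of movie trailers; here it's just a fixed random matrix.
clip_features = rng.normal(size=(n_clips, n_features))
encoder = rng.normal(size=(n_features, n_voxels))
predicted_activity = clip_features @ encoder  # predicted response per clip

# Brain activity measured while the subject watched the (unknown) target
# clip — simulated as the true response plus measurement noise.
target_idx = 123
measured_activity = clip_features[target_idx] @ encoder \
    + rng.normal(scale=0.5, size=n_voxels)

def corr(a, b):
    """Pearson correlation between two activity vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Rank every library clip by how well its predicted activity matches
# the measured activity, keep the 100 best matches...
scores = np.array([corr(p, measured_activity) for p in predicted_activity])
top100 = np.argsort(scores)[-100:]

# ...and merge them: the reconstruction is the average of those clips,
# which is why the resulting footage looks blurry but continuous.
reconstruction = clip_features[top100].mean(axis=0)
print(int(top100[-1]))  # index of the single best-matching library clip
```

Because the reconstruction is a blend of existing clips rather than imagery decoded pixel-by-pixel from the brain, it can only ever show things the library already contains — which matches the blurry, approximate results in the study’s video.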
This animation was created by students of the Engineering 128: Advanced Engineering Design Graphics course at UC Berkeley during the Spring 2008 semester. The first part shows a Canon 10D DSLR exploding into its individual parts, and then those parts coming together again to slowly rebuild the camera, while the second part does the same for a Canon 24-85mm lens. Pretty dang impressive considering that it’s for an undergraduate course.
Here’s an interesting video created by Make Magazine showing how UC Berkeley architecture professor Charles Benton uses kites to capture amazing aerial photographs. Benton creates his own gear for mounting his DSLR on a kite and controlling it from afar — you might be surprised at how creative some of his contraptions are (for one rig he uses a disposable camera, rubber bands, and a ping-pong ball).