This behind-the-scenes video by the Associated Press gives a neat look at the various robotic cameras the agency will use at the London Olympic Games (earlier this month we shared some of Reuters’ rigs). Fancy remote-controlled rigs will allow for many photographic firsts, as cameras will be found in locations that were previously inaccessible. Wired writes that despite their usefulness, robotic cameras are causing some human photogs to sweat:
“We are essentially able to put cameras and photographers where they’ve never been before, capturing images in ways they’ve never been captured,” [Fabrizio] Bensch said. “For example, I’ve installed a robotic camera unit on a truss, 30 meters high — in a position where no photographer has been in a previous Olympics.”
For [Mark] Reblias, those are positions you just can’t compete against. With the traditional remote-control cameras, if the subject showed untethered joy five feet out of frame, you were out of luck. Now if Reuters is able to get that shot, “well, there’s nothing I can do,” he said. “Maybe I’ll have to upgrade my gear and make a robotic system. It’d be expensive, it might be a cost I have to take on.”
Super slow motion footage captured by high speed cameras usually shows slow movements (if any), but German studio The Marmalade came up with a brilliant way of speeding up the movements: a high-speed robot camera operator.
Our groundbreaking High Speed Motion Control System ‘Spike’ brings the creative freedom of a moving camera to the world of high speed filming and so enables us to create shots that would be impossible to achieve otherwise. ‘Spike’ can freely move the camera with unparalleled speed and precision, thereby removing the previously existing creative limitation of having to shoot high speed sequences with a locked camera.
By marrying the hardware of a sturdy and reliable industrial robot to software that was built from the ground up for the demands of motion controlled high speed imaging, we developed a unique system for creating real life camera moves with the ease of use normally associated with 3D Animation.
The system performs camera moves that are exactly repeatable, allowing them to be tweaked incrementally until the shot is just right. Read more…
“Blind Self-Portrait” is a project by artists Kyle McDonald and Matt Mets that’s based around a machine that can help you turn photographs into sketches. The machine continuously tracks the subject’s face using a camera and translates the image into a line drawing expressed as x- and y-coordinates. The user then rests their hand on the machine’s “hand” and presses a pen into a piece of paper. The robot hand does the rest of the work, guiding the hand into drawing the photograph as the person sits back and watches the magic happen. Read more…
The photos that went into the animation above were all created in-camera using software and a robotic arm programmed beforehand with predetermined patterns. The project, known as LightPlot, started as an NXT Lego experiment in stop-motion photography by Ben Cowell-Thomas. He wanted to create a motion control rig for stop-motion using NXT, but as he was looking through some light painting projects online, he began to wonder how he could turn his Lego project into a light painting rig. Read more…
Photographer Thomas Jackson, whose swarm photos we shared earlier this week, has a creative project titled The Robot that “offers a darkly humorous narrative about a lone robot’s failure to co-exist with the natural world.” It’s a series of photos that brings a cleverly arranged heap of metal to life. Read more…
Robots might not be able to convey emotions or tell stories through photographs, but one thing they’re theoretically better than humans at is calculating proportions in a scene, and that’s exactly what one robot at India’s IIT Hyderabad has been taught to do. Computer scientist Raghudeep Gadde programmed a humanoid robot with a head-mounted camera to perfectly obey the rule of thirds and the golden ratio. New Scientist writes,
The robot is also programmed to assess the quality of its photos by rating focus, lighting and colour. The researchers taught it what makes a great photo by analysing the top and bottom 10 per cent of 60,000 images from a website hosting a photography contest, as rated by humans.
Armed with this knowledge, the robot can take photos when told to, then determine their quality. If the image scores below a certain quality threshold, the robot automatically makes another attempt. It improves on the first shot by working out the photo’s deviation from the guidelines and making the appropriate correction to its camera’s orientation.
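The shoot-score-correct loop described above can be sketched roughly as follows. This is a toy illustration, not the researchers’ actual code: the function names, the `ToyCamera` stand-in, the half-step correction, and the quality threshold are all assumptions made for the sake of the example, and only the rule-of-thirds check is modeled.

```python
# Illustrative sketch of a shoot/score/correct loop (hypothetical names).

def thirds_deviation(x, y, w, h):
    """Distance from (x, y) to the nearest rule-of-thirds intersection,
    normalized by the frame diagonal (0.0 = perfectly placed)."""
    points = [(w / 3, h / 3), (w / 3, 2 * h / 3),
              (2 * w / 3, h / 3), (2 * w / 3, 2 * h / 3)]
    best = min(((x - px) ** 2 + (y - py) ** 2) ** 0.5 for px, py in points)
    return best / (w ** 2 + h ** 2) ** 0.5

class ToyCamera:
    """Stand-in for the robot's camera: reports where the subject
    landed in the frame and can shift its aim to re-frame it."""
    def __init__(self, w, h, subject_x, subject_y):
        self.w, self.h = w, h
        self.sx, self.sy = subject_x, subject_y

    def frame_subject(self):
        return self.sx, self.sy

    def pan_toward_thirds(self, x, y):
        # Nudge the aim so the subject moves halfway toward the
        # nearest thirds intersection (a crude orientation correction).
        points = [(self.w / 3, self.h / 3), (self.w / 3, 2 * self.h / 3),
                  (2 * self.w / 3, self.h / 3), (2 * self.w / 3, 2 * self.h / 3)]
        tx, ty = min(points, key=lambda p: (x - p[0]) ** 2 + (y - p[1]) ** 2)
        self.sx += (tx - x) * 0.5
        self.sy += (ty - y) * 0.5

def shoot_until_good(cam, threshold=0.02, max_attempts=8):
    """Take a shot, score its composition, and correct the camera's
    orientation until the deviation falls below the threshold."""
    for _ in range(max_attempts):
        x, y = cam.frame_subject()
        dev = thirds_deviation(x, y, cam.w, cam.h)
        if dev < threshold:
            break
        cam.pan_toward_thirds(x, y)
    return dev
```

In this toy version the subject starts off-center, and each correction halves its distance to the nearest thirds point, so the score converges after a few attempts — mirroring the retry behavior New Scientist describes, minus the learned focus/lighting/color model.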
It’s definitely a step up from Lewis, a wedding photography robot built in the early 2000s that was taught to recognize faces.
This light painting photograph was created by a group of students over in Germany using a swarm of seven Roomba automated vacuum cleaners. Each one had a different colored LED light attached to the top, making the resulting photo look like some kind of robotic Jackson Pollock painting. There’s actually an entire Flickr group dedicated to using Roombas for light painting — check it out if you have one of these robot minions serving you in your home. Read more…
Apparently this is what Pentax considers “legendary collaboration”: a Korejanai robot edition (Korejanairobomoderu) of the K-r DSLR. It doesn’t boast any spec upgrades from the stock version, but instead sports a wacky primary color paint job and a robot head attached to the hotshoe. You’ll also get a matching special edition 35mm ƒ/2.4 prime lens to complete the horrifyingly awesome look. If only these were working DSLR cameras that also transformed into robot action figures.
Only 100 will be sold at a price of ¥99,800 (~$1,190), and pre-orders start at midnight on December 24, 2010.