Robots are Learning to Move on Their Own Using a Single Camera
Scientists at MIT have developed a new AI system that can teach itself how to control different types of robots using just one camera.
The system, developed at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), lets robots teach themselves how to move and control their bodies using only a single camera and visual data. It replaces detailed sensors and hand-written control programs with learning through observation.
As reported by Live Science, the AI system gathers information about the robot’s structure using cameras, similar to how people use their eyes to understand how their own bodies move.
This method introduces a new way of training robots, according to a study published in Nature last month. Instead of using detailed physical models or complex sensors, the AI learns how a robot responds to commands simply by watching how it moves. The key to this is a system developed by CSAIL called “Neural Jacobian Fields” (NJF). It builds a visual model of the robot’s movement — a map showing how visible 3D points on the robot relate to its internal motors.
“The system gives robots a kind of body awareness,” Sizhe Lester Li, an MIT PhD student and lead researcher, says in a press release. “This work points to a shift from programming robots to teaching robots. Today, many robotic tasks require a lot of engineering. In the future, we could just show a robot a task and let it learn how to do it on its own.”
Li adds: “Think about how you learn to control your fingers: you wiggle, you observe, you adapt. That’s what our system does. It experiments with random actions and figures out which controls move which parts of the robot.”
To train the model, the robot performs random movements while multiple cameras record what happens. The system doesn’t need any prior knowledge about the robot’s design. It learns by linking its control signals to how its body moves.
Once the learning phase is complete, the robot needs only a single standard camera to operate in real time. It can then watch itself, make decisions, and respond quickly, running at about 12 control cycles per second. That’s faster and more practical than many traditional systems used for soft robots.
MIT researchers believe this approach could one day be used in real-world settings like farming and construction, or in other dynamic environments, without needing heavy sensor suites or custom programming.
Image credits: Header photo licensed via Depositphotos.