News Release

Robots learn how to move by watching themselves

By observing their own motions, robots can learn how to overcome damage to their bodies, which could make them more adaptable for a wide variety of applications.

Peer-Reviewed Publication

Columbia University School of Engineering and Applied Science

Image: A robot observes its reflection in a mirror, learning its own morphology and kinematics for autonomous self-simulation. The process highlights the intersection of vision-based learning and robotics, where the robot refines its movements and predicts its spatial motion through self-observation. Credit: Jane Nisselson/Columbia Engineering

New York, NY—Feb. 25, 2025—By watching their own motions with a camera, robots can teach themselves about the structure of their own bodies and how they move, a new study from researchers at Columbia Engineering reveals. Equipped with this knowledge, the robots could not only plan their own actions, but also overcome damage to their bodies.

"Like humans learning to dance by watching their mirror reflection, robots now use raw video to build kinematic self-awareness," says study lead author Yuhang Hu, a doctoral student at the Creative Machines Lab at Columbia University, directed by Hod Lipson, James and Sally Scapa Professor of Innovation and chair of the Department of Mechanical Engineering. "Our goal is a robot that understands its own body, adapts to damage, and learns new skills without constant human programming."

Most robots first learn to move in simulations. Once a robot can move in these virtual environments, it is released into the physical world where it can continue to learn. “The better and more realistic the simulator, the easier it is for the robot to make the leap from simulation into reality,” explains Lipson. 

However, creating a good simulator is an arduous process, typically requiring skilled engineers. The researchers taught a robot how to create a simulator of itself simply by watching its own motion through a camera. “This ability not only saves engineering effort, but also allows the simulation to continue and evolve with the robot as it undergoes wear, damage, and adaptation,” Lipson says.

In the new study, the researchers developed a way for robots to autonomously model their own 3D shapes using a single regular 2D camera, rather than relying on an engineer-built simulator. The approach is driven by three brain-mimicking AI systems known as deep neural networks, which infer 3D motion from 2D video, enabling the robot to understand and adapt to its own movements. The new system could also identify alterations to the bodies of the robots, such as a bend in an arm, and help them adjust their motions to recover from this simulated damage.
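To make the idea concrete, the following is a minimal sketch, in PyTorch, of how a learned self-model of this general kind might be trained from single-camera video: small networks predict the robot's 3D joint motion from observed 2D keypoints and commanded joint angles, and are trained by reprojecting the prediction back to 2D and comparing it with what the camera sees. This is not the authors' published architecture or code; the module split, layer sizes, keypoint counts, and the simple orthographic camera are all illustrative assumptions.

```python
# Illustrative sketch only -- NOT the published architecture or code from the study.
# Assumptions: 2D keypoints tracked from single-camera video, a robot with
# N_JOINTS actuated joints, and small fully connected networks acting together
# as a learned self-model (all names and sizes are hypothetical).

import torch
import torch.nn as nn

N_JOINTS = 6          # hypothetical number of actuated joints
N_KEYPOINTS = 8       # hypothetical number of 2D keypoints tracked per frame


class MorphologyEncoder(nn.Module):
    """Maps observed 2D keypoints to a latent description of the robot's body."""
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_KEYPOINTS * 2, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, keypoints_2d: torch.Tensor) -> torch.Tensor:
        return self.net(keypoints_2d.flatten(start_dim=1))


class KinematicsPredictor(nn.Module):
    """Predicts 3D joint positions from the body latent and commanded joint angles."""
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + N_JOINTS, 128), nn.ReLU(),
            nn.Linear(128, N_JOINTS * 3),
        )

    def forward(self, body_latent: torch.Tensor, joint_angles: torch.Tensor) -> torch.Tensor:
        x = torch.cat([body_latent, joint_angles], dim=-1)
        return self.net(x).view(-1, N_JOINTS, 3)


def reproject(joints_3d: torch.Tensor) -> torch.Tensor:
    """Crude stand-in for a camera model: orthographic projection that drops depth."""
    return joints_3d[..., :2]


if __name__ == "__main__":
    encoder, kinematics = MorphologyEncoder(), KinematicsPredictor()
    params = list(encoder.parameters()) + list(kinematics.parameters())
    optim = torch.optim.Adam(params, lr=1e-3)

    # Fake "self-observation" data standing in for keypoints tracked from video.
    observed_2d = torch.rand(16, N_KEYPOINTS, 2)       # what the camera sees
    joint_angles = torch.rand(16, N_JOINTS)             # what the robot commanded
    observed_joints_2d = torch.rand(16, N_JOINTS, 2)    # tracked joint positions

    # One training step: predict 3D motion, reproject to 2D, compare to the video.
    latent = encoder(observed_2d)
    pred_3d = kinematics(latent, joint_angles)
    loss = nn.functional.mse_loss(reproject(pred_3d), observed_joints_2d)
    optim.zero_grad()
    loss.backward()
    optim.step()
    print(f"reprojection loss: {loss.item():.4f}")
```

Because the self-model is trained against the robot's own camera observations rather than a hand-built simulator, the same training loop could in principle keep running as the body changes, which is the adaptability the study emphasizes.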

Such adaptability might prove useful in a variety of real-world applications. For example, "imagine a robot vacuum or a personal assistant bot that notices its arm is bent after bumping into furniture," Hu says. "Instead of breaking down or needing repair, it watches itself, adjusts how it moves, and keeps working. This could make home robots more reliable—no constant reprogramming required."

Another scenario might involve a robot arm getting knocked out of alignment at a car factory. "Instead of halting production, it could watch itself, tweak its movements, and get back to welding—cutting downtime and costs," Hu says. "This adaptability could make manufacturing more resilient."

As we hand over more critical functions to robots, from manufacturing to medical care, we need these robots to be more resilient. “We humans cannot afford to constantly baby these robots, repair broken parts and adjust performance. Robots need to learn to take care of themselves, if they are going to become truly useful,” says Lipson. “That’s why self-modeling is so important.”  

The ability demonstrated in this study is the latest in a series of projects the Columbia team has published over the past two decades, in which robots learn to model themselves ever more accurately using cameras and other sensors.

In 2006, the research team’s robots could use observations only to create simple, stick-figure-like simulations of themselves. About a decade ago, robots began creating higher-fidelity models using multiple cameras. In this study, the robot was able to create a comprehensive kinematic model of itself using just a short video clip from a single regular camera, akin to looking in the mirror. The researchers call this newfound ability “Kinematic Self-Awareness.”

“We humans are intuitively aware of our body; we can imagine ourselves in the future and visualize the consequences of our actions well before we perform those actions in reality,” explains Lipson. “Ultimately, we would like to imbue robots with a similar ability to imagine themselves, because once you can imagine yourself in the future, there is no limit to what you can do.”

The researchers detailed their findings February 25 in the journal Nature Machine Intelligence.
