image: Professor Boyuan Chen poses with some of his 3D-printed robots, which were designed and built through his new platform, Text2Robot, that allows people to simply tell a computer what kind of robot to create.
Credit: Alex Sanchez, Duke University
DURHAM, N.C. -- When personal computers were first invented, only a small group of people who understood programming languages could use them. Today, anyone can look up the local weather, play their favorite song or even generate code with just a few keystrokes.
This shift has fundamentally changed how humans interact with technology, making powerful computational tools accessible to everyone. Now, advancements in artificial intelligence (AI) are extending this ease of interaction to the world of robotics through a platform called Text2Robot.
Developed by engineers at Duke University, Text2Robot is a novel computational robot design framework that allows anyone to design and build a robot simply by typing a few words describing what it should look like and how it should function. The platform will be showcased at the upcoming IEEE International Conference on Robotics and Automation (ICRA 2025), taking place May 19-23 in Atlanta, Georgia. Last year, it also won first place in the innovation category of the Virtual Creatures Competition, held for the past 10 years at the Artificial Life conference in Copenhagen, Denmark.
“Building a functional robot has traditionally been a slow and expensive process requiring deep expertise in engineering, AI and manufacturing,” said Boyuan Chen, the Dickinson Faculty Assistant Professor of Mechanical Engineering and Materials Science, Electrical and Computer Engineering, and Computer Science at Duke University. “Text2Robot is taking the initial steps toward drastically improving this process by allowing users to create functional robots using nothing but natural language.”
Text2Robot leverages emerging AI technologies to convert user text descriptions into physical robots. The process begins with a text-to-3D generative model, which creates a 3D physical design of the robot’s body based on the user's description. This basic body design is then converted into a moving robot model capable of carrying out tasks by incorporating real-world manufacturing constraints, such as the placement of electronic components and the functionality and placement of joints. The system uses evolutionary algorithms and reinforcement learning to co-optimize the robot's shape, movement abilities and control software, ensuring it can perform tasks efficiently and effectively.
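To illustrate the co-optimization idea described above, here is a minimal, toy Python sketch of an evolutionary loop that keeps the best-scoring robot candidates and mutates them to form the next generation. All names (RobotCandidate, simulate_walk, mutate) and the two placeholder parameters are hypothetical, not the authors' actual pipeline; the real system evaluates designs in a physics simulator and trains control policies with reinforcement learning rather than scoring a hand-written fitness function.

```python
# Toy sketch of evolutionary co-optimization of morphology and control.
# Hypothetical names and parameters; not the Text2Robot implementation.
import random
from dataclasses import dataclass

@dataclass
class RobotCandidate:
    leg_length: float   # morphology parameter (placeholder)
    gait_freq: float    # control parameter (placeholder)

def simulate_walk(robot: RobotCandidate) -> float:
    """Stand-in fitness: peaks at an arbitrary 'good' design.
    A real evaluation would roll out the robot in simulation and
    score walking speed or energy efficiency."""
    return -(robot.leg_length - 0.12) ** 2 - (robot.gait_freq - 2.0) ** 2

def mutate(robot: RobotCandidate) -> RobotCandidate:
    """Apply small random changes to produce a new candidate."""
    return RobotCandidate(
        leg_length=max(0.05, robot.leg_length + random.gauss(0, 0.01)),
        gait_freq=max(0.5, robot.gait_freq + random.gauss(0, 0.1)),
    )

# Evolutionary loop: keep the fittest candidates, mutate them to refill the pool.
population = [RobotCandidate(random.uniform(0.05, 0.2), random.uniform(0.5, 4.0))
              for _ in range(20)]
for generation in range(50):
    population.sort(key=simulate_walk, reverse=True)
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print("Best candidate found:", population[0])
```

In the actual framework, each candidate's "fitness" comes from a learned controller evaluated in simulation under manufacturing constraints, so the shape and the control software improve together rather than being designed separately.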
"This isn’t just about generating cool-looking robots,” said Ryan Ringel, co-first author of the paper and an undergraduate student in Chen’s laboratory. “The AI understands physics and biomechanics, producing designs that are actually functional and efficient."
For example, if a user simply types a short description such as “a frog robot that tracks my speed on command” or “an energy-efficient walking robot that looks like a dog,” Text2Robot generates a manufacturable robot design that resembles the specific request within minutes and has it walking in a simulation within an hour. In less than a day, a user can 3D-print, assemble and watch their robot come to life.
“This rapid prototyping capability opens up new possibilities for robot design and manufacturing, making it accessible to anyone with a computer, a 3D printer and an idea,” said Zachary Charlick, co-first author of the paper and an undergraduate student in the Chen lab. “The magic of Text2Robot lies in its ability to bridge the gap between imagination and reality.”
Text2Robot has the potential to revolutionize various aspects of our lives. Imagine children designing their own robot pets or artists creating interactive sculptures that can move and respond. At home, robots could be custom designed to assist with chores, such as a trash can that navigates a home’s specific layout and obstacles to empty itself on command. In outdoor settings such as disaster response, crews may need different types of robots that can complete a variety of tasks under unpredictable environmental conditions.
The framework currently focuses on quadrupedal robots, but future research will expand its capabilities to a broader range of robotic forms and integrate automated assembly processes to further streamline the design-to-reality pipeline.
“This is just the beginning,” said Jiaxun Liu, co-first author of the paper and a second-year Ph.D. student in Chen’s laboratory. “Our goal is to empower robots to not only understand and respond to human needs through their intelligent ‘brain,’ but also adapt their physical form and functionality to best meet those needs, offering a seamless integration of intelligence and physical capability.”
At the moment, the robots are limited to basic tasks such as walking at commanded speeds or traversing rough terrain. But the group is working to incorporate sensors and other hardware into the platform, which would open the door to robots that can climb stairs and avoid dynamic obstacles.
“The future of robotics is not just about machines; it’s about how humans and machines collaborate to shape our world,” added Chen. “By harnessing the power of generative AI, this work brings us closer to a future where robots are not just tools but partners in creativity and innovation.”
This work was supported by the DARPA FoundSci program (HR00112490372) and the Army Research Laboratory STRONG program (W911NF2320182, W911NF2220113).
CITATION: “Text2Robot: Evolutionary Robot Design from Text Descriptions.” Ryan P. Ringel, Zachary S. Charlick, Jiaxun Liu, Boxi Xia and Boyuan Chen. IEEE International Conference on Robotics and Automation (ICRA 2025).
Project Website: Text2Robot: Evolutionary Robot Design from Text Descriptions - Research Blog
General Robotics Lab Website: http://generalroboticslab.com
# # #
Method of Research: Experimental study
Subject of Research: Not applicable
Article Title: Text2Robot: Evolutionary Robot Design from Text Descriptions