News Release

Self-learning robot hands

Cluster of Excellence CITEC presents new system that learns how to grasp objects

Grant and Award Announcement

Bielefeld University

Robot Hands

image: Even though the robot hands are strong enough to crush the apple, they modulate their strength for a fine-touch grip that will not damage delicate objects. This is made possible by connecting tactile sensors developed at CITEC with intelligent software.

Credit: Photo: Bielefeld University

The system was developed as part of the large-scale research project Famula at Bielefeld University's Cluster of Excellence Cognitive Interaction Technology (CITEC). The knowledge gained from this project could contribute, for instance, to future service robots that are able to independently adapt to working in new households. CITEC has invested approximately one million euros in Famula. In a new "research_tv" report from Bielefeld University, the coordinators of the Famula project explain the innovation.

"Our system learns by trying out and exploring on its own - just as babies approach new objects," says neuroinformatics Professor Dr. Helge Ritter, who heads the Famula project together with sports scientist and cognitive psychologist Professor Dr. Thomas Schack and robotics Privatdozent Dr. Sven Wachsmuth.

The CITEC researchers are working on a robot with two hands that are based on human hands in terms of both shape and mobility. The robot brain for these hands has to learn how everyday objects like pieces of fruit, dishes, or stuffed animals can be distinguished on the basis of their color or shape, as well as what matters when attempting to grasp them.

The Human Being as the Model

A banana can be held, and a button can be pressed. "The system learns to recognize such possibilities as characteristics, and constructs a model for interacting and re-identifying the object," explains Ritter.
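The release does not detail how such a model is represented. As a rough illustration only, the following minimal Python sketch (all names, features, and values invented) pairs perceptual features with interaction possibilities discovered through exploration, and supports re-identification by feature similarity:

import math

class ObjectModel:
    """Hypothetical sketch: pairs perceptual features with discovered
    interaction possibilities ('can be held', 'can be pressed', ...)."""

    def __init__(self, name, features):
        self.name = name
        self.features = features      # e.g. {"hue": 0.15, "size": 0.4}
        self.affordances = set()      # filled in through exploration

    def record_interaction(self, action, succeeded):
        # An exploratory action that works becomes part of the model.
        if succeeded:
            self.affordances.add(action)

    def similarity(self, observed):
        # Simple inverse-distance match of stored vs. observed features.
        dist = math.sqrt(sum((self.features[k] - observed.get(k, 0.0)) ** 2
                             for k in self.features))
        return 1.0 / (1.0 + dist)

# Exploration: try actions and keep what works.
banana = ObjectModel("banana", {"hue": 0.15, "elongation": 0.9, "size": 0.4})
banana.record_interaction("hold", succeeded=True)
banana.record_interaction("press", succeeded=False)

# Re-identification: match a new observation against the known model.
observation = {"hue": 0.14, "elongation": 0.85, "size": 0.42}
print(banana.similarity(observation), banana.affordances)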

To accomplish this, the interdisciplinary project combines work in artificial intelligence with research from other disciplines. Thomas Schack's research group, for instance, investigated which characteristics study participants perceived as significant in grasping actions. In one study, test subjects had to compare the similarity of more than 100 objects. "It was surprising that weight hardly plays a role. We humans rely mostly on shape and size when we differentiate objects," says Thomas Schack. In another study, blindfolded test subjects handled cubes that differed in weight, shape, and size. Infrared cameras recorded their hand movements. "Through this, we find out how people touch an object, and which strategies they prefer to use to identify its characteristics," explains Dirk Koester, a member of Schack's research group. "Of course, we also find out which mistakes people make when blindly handling objects."
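One way to read that finding computationally: if perceived similarity is a weighted distance over feature dimensions, the study suggests shape and size should carry most of the weight. A minimal sketch, with invented feature values and weights:

# Hypothetical illustration of the study's finding: perceived similarity
# as a weighted feature distance in which weight contributes very little.
FEATURE_WEIGHTS = {"shape": 0.5, "size": 0.4, "weight": 0.1}  # invented

def perceived_similarity(a, b):
    dist = sum(w * abs(a[f] - b[f]) for f, w in FEATURE_WEIGHTS.items())
    return 1.0 - dist  # 1.0 = identical, lower = more dissimilar

cube_light = {"shape": 0.0, "size": 0.5, "weight": 0.2}
cube_heavy = {"shape": 0.0, "size": 0.5, "weight": 0.9}
ball_small = {"shape": 1.0, "size": 0.2, "weight": 0.2}

# Two cubes differing only in weight still come out as highly similar...
print(perceived_similarity(cube_light, cube_heavy))  # 0.93
# ...while a shape difference dominates the judgment.
print(perceived_similarity(cube_light, ball_small))  # 0.38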

System Puts Itself in the Position of Its "Mentor"

Dr. Robert Haschke, a colleague of Helge Ritter, stands in front of a large metal cage housing the two robot arms and a table with various test objects. In his role as a human learning mentor, Haschke helps the system become familiar with novel objects, telling the robot hands which object on the table to inspect next. To do this, he points to individual objects or gives spoken hints about where an interesting object can be found (e.g. "behind, at left"). Two monitors display how the system, using color cameras and depth sensors, perceives its surroundings and reacts to instructions from humans.
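The release does not describe how such a hint is resolved. As a rough sketch (coordinate conventions and all names invented), a spoken direction like "behind, at left" could be matched against object positions estimated from the depth sensors:

# Hypothetical sketch: resolve a spoken spatial hint against object
# positions from the depth sensors. Invented table frame: x grows to
# the robot's left, y grows away from the robot ("behind").
DIRECTIONS = {
    "left":   lambda x, y: x,
    "right":  lambda x, y: -x,
    "behind": lambda x, y: y,
    "front":  lambda x, y: -y,
}

def resolve_hint(hint, objects):
    """Pick the object that best matches every direction word in the hint."""
    words = [w for w in hint.replace(",", " ").split() if w in DIRECTIONS]
    def score(item):
        _, (x, y) = item
        return sum(DIRECTIONS[w](x, y) for w in words)
    return max(objects.items(), key=score)[0]

objects = {"apple": (-0.3, 0.1), "cup": (0.4, 0.5), "cube": (0.2, -0.4)}
print(resolve_hint("behind, at left", objects))  # -> "cup"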

"In order to understand which objects they should work with, the robot hands have to be able to interpret not only spoken language, but also gestures," explains Sven Wachsmuth, of CITEC's Central Labs. "And they also have to be able to put themselves in the position of a human to also ask themselves if they have correctly understood." Wachsmuth and his team are not only responsible for the system's language capabilities: they have also given the system a face. From one of the monitors, Flobi follows the movements of the hands and reacts to the researchers' instructions. Flobi is a stylized robot head that complements the robot's language and actions with facial expressions. As part of the Famula system, the virtual version of the robot Flobi is currently in use.

Understanding Human Interaction

With the Famula project, CITEC researchers are conducting basic research that can benefit self-learning robots of the future in both the home and industry. "We want to literally understand how we learn to 'grasp' our environment with our hands. The robot makes it possible for us to test our findings in reality and to rigorously expose the gaps in our understanding. In doing so, we are contributing to the future use of complex, multi-fingered robot hands, which today are still too costly or complex to be used, for instance, in industry," explains Ritter.

The project name Famula stands for "Deep Familiarization and Learning Grounded in Cooperative Manual Action and Language: From Analysis to Implementation." The project has been running since 2014 and is currently funded through October 2017. Eight research groups from the Cluster of Excellence CITEC are working together on it. Famula is one of four large-scale projects at CITEC; the others are a robot service apartment, the walking robot Hector, and the virtual coaching space ICSpace. CITEC is funded by the federal and state governments as part of the Excellence Initiative of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) (EXC 277).

###

Contact:

Prof. Dr. Helge Ritter, Bielefeld University
Cluster of Excellence Cognitive Interaction Technology (CITEC)
Telephone: +49 521 106-12123
Email: helge@techfak.uni-bielefeld.de
