News Release

Cooperative hunting requires less brainpower than previously thought

Peer-Reviewed Publication

Nagoya University

Figure 1: Cooperative hunting requires less brainpower than previously thought. Credit: Kazushi Tsutsui

Researchers at Nagoya University in Japan have found that cooperative hunting, in which two or more predators collaborate to capture prey, does not require sophisticated cognitive processes in the brain. Rather, cooperation can emerge from a simple set of rules and experience. Not only do these findings have important implications for understanding the evolution of cooperative behavior among animals, but they may also help in developing collaborative artificial intelligence (AI) systems. Such systems have the potential to serve as virtual companions in tactical training situations, such as team sports and driving simulations. The study was published in eLife and was led by Kazushi Tsutsui, Kazuya Takeda, and Keisuke Fujii.

Past research has linked cooperative hunting to mammals that display complex social behaviors, such as lions and chimpanzees. However, similar behaviors have also been found in species with less advanced cognitive abilities, such as crocodiles and fish. This suggests that a simpler mechanism may be responsible for this form of cooperation.

To investigate this puzzle, Tsutsui and his collaborators created a computational model in which AI agents learn to hunt together using deep reinforcement learning. In reinforcement learning, an algorithm learns by interacting with its environment and receiving rewards for specific actions, so that rewarded behaviors are reinforced. In deep reinforcement learning, the algorithm uses deep neural networks to process inputs such as position and velocity and to make decisions autonomously.
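The release does not include implementation details, but as a rough illustration of the kind of network involved, a small value network that maps positions and velocities to scores for discrete movement actions might look like the sketch below. The observation layout, layer sizes, and action set are illustrative assumptions, written here in PyTorch, not the authors' code.

```python
# Minimal sketch (assumptions only, not the authors' code): a small deep
# Q-network that maps a predator's observation -- positions and velocities
# of itself, a partner, and the prey -- to a score for each movement action.
import torch
import torch.nn as nn

class PredatorQNetwork(nn.Module):
    def __init__(self, obs_dim: int = 12, n_actions: int = 5):
        # obs_dim and n_actions are illustrative: e.g. (x, y, vx, vy) for
        # self, one partner, and the prey = 12 values, and 5 actions
        # (stay, up, down, left, right).
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Returns an estimated future reward for each possible action.
        return self.net(obs)

# Greedy action selection from a single observation vector:
q_net = PredatorQNetwork()
obs = torch.zeros(12)                 # placeholder observation
action = q_net(obs).argmax().item()   # pick the highest-valued action
```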

Equipped with these learning capabilities, the AI predator agents learned to hunt collaboratively by interacting with the environment through a sequence of states, actions, and rewards, selecting actions that maximize expected future rewards. Cooperation emerged because coordinated actions were more effective and because the reward (the prey) was divided among the group after a successful hunt.
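To make the state-action-reward loop concrete, here is a self-contained toy sketch, an assumption for illustration rather than the authors' simulation: two predators on a short line receive only a shared reward for capturing the prey, and simple tabular Q-learning stands in for the deep networks used in the study.

```python
# Toy sketch only (not the authors' simulation): two predators on a short
# line learn, from a single shared capture reward, to close in on the prey
# from both sides. Tabular Q-learning stands in for deep networks here.
import random

LINE = 7                # positions 0..6; prey starts in the middle
ACTIONS = [-1, 0, 1]    # step left, stay, step right

def move_prey(preds, prey):
    """Prey drifts away from the nearest predator; caught if both are adjacent."""
    nearest = min(preds, key=lambda p: abs(p - prey))
    prey = max(0, min(LINE - 1, prey + (1 if prey >= nearest else -1)))
    caught = all(abs(p - prey) <= 1 for p in preds)
    return prey, caught

def train(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1):
    q = [{} for _ in range(2)]                      # one Q-table per predator
    for _ in range(episodes):
        preds, prey = [0, LINE - 1], LINE // 2
        for _ in range(20):                         # cap on episode length
            state = (tuple(preds), prey)
            acts = []
            for i in range(2):                      # each agent picks its own action
                qs = q[i].setdefault(state, [0.0] * len(ACTIONS))
                a = random.randrange(len(ACTIONS)) if random.random() < eps \
                    else qs.index(max(qs))
                acts.append(a)
            preds = [max(0, min(LINE - 1, p + ACTIONS[a]))
                     for p, a in zip(preds, acts)]
            prey, caught = move_prey(preds, prey)
            reward = 1.0 if caught else 0.0         # reward is shared: all or none
            nxt_state = (tuple(preds), prey)
            for i in range(2):                      # each agent updates its own values
                nxt = max(q[i].setdefault(nxt_state, [0.0] * len(ACTIONS)))
                q[i][state][acts[i]] += alpha * (
                    reward + gamma * nxt * (not caught) - q[i][state][acts[i]])
            if caught:
                break
    return q

if __name__ == "__main__":
    train()
```

In this toy setting, the only incentive is the shared capture reward, which is enough for the two agents to learn to approach the prey from opposite sides.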

During the simulations, the AI predators exhibited distinct and complementary roles, similar to the behavior of animals that engage in cooperative hunting. For example, one agent would chase the prey, while another would ambush it. As the number of predators increased, the success rate increased, and the time required for hunts decreased.

In a final test, AI agents played the role of predators, and human participants acted as prey. Despite initial difficulties, such as confusion caused by unexpected human movements, the trained AI agents worked together and captured their human prey. This shows that successful cooperative hunting does not require complex cognitive processes and suggests that predators in the real world may also learn to collaborate through a simple set of decision rules.

"Our predator agents learned to collaborate using reinforcement learning, without requiring complex cognitive mechanisms akin to theory of mind," Tsutsui said. "This suggests that cooperative hunting can evolve in a wider range of species than previously thought." 

The research team expects that their discoveries will lead to new field studies on decision-making in predator-prey dynamics. The project also demonstrates the potential of cooperative AI systems, which could benefit other domains that require collaborative solutions, such as autonomous driving and traffic management.

