News Release

Computer conflicts needn't lead to disaster


New Scientist

HAL, the infamous killer supercomputer from the movie 2001: A Space Odyssey, might not have gone mad had it been fitted with a new "conflict resolution" system developed for the American military.

The system, developed at the University of Southern California with funding from the Defense Advanced Research Projects Agency, exploits the benefits of methodical argument to help bickering computer programs resolve their differences and come up with a satisfactory result.

Called the Collaborative Negotiation System based on Argumentation (CONSA), the new system is designed to work with SOAR, a software package used for building small task-performing programs known as software agents. When these agents are imbued with human-like cognitive traits such as beliefs and desires, CONSA allows them to work out their differences. It has already been tested successfully on an agent-controlled robot football team and a helicopter combat simulation in which agents replaced people.

In 2001, which chronicles a doomed mission to Jupiter, the legendary HAL 9000 computer appears to go haywire and kills off most of the crew. Later, however, it emerges that HAL was merely trying to resolve conflicts in its programming. It had been told that the success of the mission took precedence over the crew's safety. But it was also instructed to protect the crew and never lie to them.

If HAL had been running agents that represented different parts of its programming, and was using CONSA, it might well have been able to cope, says computer scientist Milind Tambe at the Information Sciences Institute at USC.

The system, developed by Tambe and Hyuckchul Jung, differs from conventional negotiation packages in that it doesn't simply try to find a halfway house between the conflicting parties through auction-like bidding. Instead, it uses argumentation strategies designed to benefit the team rather than the individual.

To do this it makes two assumptions: that the different parties involved in the argument are willing to collaborate, and that the worst-case scenario will occur. This ensures that it will look for the best solution for the team as a whole and take minority views into account, rather than merely compromising between the strongest views.
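To give a flavour of the idea, here is a minimal sketch of how judging options by the team's worst-case outcome differs from simply siding with the strongest individual view. The maximin-style scoring rule, option names and payoff numbers are assumptions made for illustration; they are not taken from CONSA itself.

```python
# A minimal sketch (not from CONSA itself) of team-first, worst-case evaluation.
# The option names, payoff numbers and "maximin" scoring rule are assumptions
# used purely for illustration.

options = {
    # option -> estimated payoff for each team member under that option
    "route_a": {"scout": 8, "transport": 2, "escort": 5},
    "route_b": {"scout": 5, "transport": 5, "escort": 4},
}

def team_worst_case(payoffs):
    """Score an option by the worst outcome any team member faces."""
    return min(payoffs.values())

# Assuming the worst case and optimising for the team as a whole favours
# route_b (its worst-off member gets 4) over route_a (worst-off member gets 2),
# even though route_a looks better to the "strongest" individual voice.
best = max(options, key=lambda option: team_worst_case(options[option]))
print(best)  # -> route_b
```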

The distinction is clear in the helicopter simulations, says Tambe, where a disagreement between two agents about the enemy's location can't sensibly be resolved by averaging the two conflicting estimates. "It would seem quite inappropriate if they started getting involved in auctions to solve such a conflict," says Tambe.

Instead, CONSA lets agents cycle through proposals and counterproposals, with each one justifying its claim. Agents not involved in the conflict can also contribute additional information that may help resolve the issue. All agents can make new proposals based on their beliefs and anything they have learned during the debate, until the differing positions converge and a consensus is reached.
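A rough sketch of such a proposal-and-counterproposal cycle might look like the following. The data model (a claim plus a numeric evidence weight), the acceptance rule and the scenario data are assumptions for illustration only, not CONSA's actual algorithm.

```python
# Toy sketch of the proposal/counterproposal cycle described above. The belief
# model and acceptance rule are assumptions for illustration, not CONSA's code.

class Agent:
    def __init__(self, name, claim, evidence):
        self.name = name
        self.claim = claim        # e.g. "enemy at ridge" vs "enemy at river"
        self.evidence = evidence  # how well-supported the agent thinks it is

    def accepts(self, claim, evidence):
        # Concede when a rival claim is at least as well supported as our own.
        return claim == self.claim or evidence >= self.evidence


def negotiate(agents, extra_facts, max_rounds=10):
    """Cycle through proposals until every agent accepts a single claim."""
    for _ in range(max_rounds):
        for proposer in agents:
            # Agents outside the conflict may contribute supporting data.
            evidence = proposer.evidence + extra_facts.get(proposer.claim, 0)
            if all(a.accepts(proposer.claim, evidence) for a in agents):
                return proposer.claim
            # Otherwise the proposal is countered and the next agent argues
            # its own case; beliefs could also be revised at this point.
    return None  # no consensus within the round limit


scout = Agent("scout", "enemy at ridge", evidence=2)
escort = Agent("escort", "enemy at river", evidence=3)
# A third, uninvolved agent's sighting tips the balance toward the scout's claim.
print(negotiate([scout, escort], extra_facts={"enemy at ridge": 2}))
```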

###

Author: Duncan Graham-Rowe

New Scientist issue 15th January 2000

PLEASE MENTION NEW SCIENTIST AS THE SOURCE OF THIS STORY AND, IF PUBLISHING ONLINE, PLEASE CARRY A HYPERLINK TO : http://www.newscientist.com

