A pitiful sound from tinny speakers, sad virtual eyes, trembling robot arms: it doesn’t take much to feel sorry for a robot. This is the conclusion of a study by Marieke Wieringa, who will be defending her PhD thesis at Radboud University on 5 November. But she warns that our human compassion could also be exploited: just wait until companies find a revenue model for emotional manipulation by robots.
Objectively, we know that a robot cannot experience pain. Still, when manipulated in the right way, people can be inclined to believe that a robot is in pain. “If a robot can pretend to experience emotional distress, people feel guiltier when they mistreat the robot”, Wieringa explains.
Boring task or shaking robots
Through several tests, Wieringa and her colleagues studied how people respond to violence against robots. “Some participants watched videos of robots that were either mistreated or treated well. Sometimes, we asked participants to give the robots a shake themselves. We tried out all variations: sometimes the robot didn’t respond, sometimes it did – with pitiful sounds and other responses that we associate with pain.” In the tests, it soon appeared that the tormented robot triggered more pity: participants were less willing to shake the robot again. “If we asked the participants to shake a robot that showed no emotion, then they didn’t seem to have any difficulty with it at all.”
In one of the tests, the participants were asked to choose: complete a boring task or give the robot a shake. The longer participants shook the robot, the less time they had to spend on the task. “Most people had no problem shaking a silent robot, but as soon as the robot began to make pitiful sounds, they chose to do the boring task instead.”
Tamagotchi
Wieringa warns that it is only a matter of time before organisations exploit emotional manipulation. “People were obsessed with Tamagotchis for a while: virtual pets that successfully triggered emotions. But what if a company made a new Tamagotchi that you had to pay to feed? That’s why I am calling for governmental regulations that establish when it’s appropriate for chatbots, robots and other variants to be able to express emotions.”
But Wieringa doesn’t think that a complete ban on emotions would be good either. “It’s true that emotional robots would have some advantages. Imagine therapeutic robots that could help people to process certain things. In our study, we saw that most participants thought it was good that the robot triggered pity: according to them, it signalled that violent behaviour is not OK. Still, we need to guard against risks for people who are sensitive to ‘fake’ emotions. We like to think we are very logical, rational beings who don’t fall for anything, but at the end of the day, we are also led by our emotions. And that’s just as well, otherwise we’d be robots ourselves.”