News Release

People’s trust in AI systems to make moral decisions is still some way off

Peer-Reviewed Publication

University of Kent

Psychologists warn that AI’s perceived lack of human experience and genuine understanding may limit its acceptance for making higher-stakes moral decisions.

Artificial moral advisors (AMAs) are systems based on artificial intelligence (AI) that are beginning to be designed to assist humans in making moral decisions based on established ethical theories, principles, or guidelines. While prototypes are being developed, AMAs are not yet being used to offer consistent, bias-free recommendations and rational moral advice. As machines powered by artificial intelligence grow in technological capacity and move into the moral domain, it is critical that we understand how people think about such artificial moral advisors.

Research led by the University of Kent’s School of Psychology explored how people perceive these advisors and whether they would trust their judgement, compared with human advisors. It found that while artificial intelligence might have the potential to offer impartial and rational advice, people still do not fully trust it to make ethical decisions on moral dilemmas.

Published in the journal Cognition, the research shows that people have a significant aversion to AMAs (versus humans) giving moral advice, even when the advice given is identical. This was particularly the case when advisors, human and AI alike, gave advice based on utilitarian principles (actions that could positively impact the majority). Advisors who gave non-utilitarian advice (e.g. adhering to moral rules rather than maximising outcomes) were trusted more, especially in dilemmas involving direct harm. This suggests that people value advisors, whether human or AI, who align with principles that prioritise individuals over abstract outcomes.

Even when participants agreed with the AMA’s decision, they still anticipated disagreeing with AI in the future, indicating inherent scepticism.

Dr Jim Everett led the research at Kent, alongside Dr Simon Myers at the University of Warwick. 

Dr Everett said: ‘Trust in moral AI isn’t just about accuracy or consistency; it’s about aligning with human values and expectations. Our research highlights a critical challenge for the adoption of AMAs: how to design systems that people truly trust. As technology advances, we might see AMAs become more integrated into decision-making processes, from healthcare to legal systems, so there is a major need to understand how to bridge the gap between AI capabilities and human trust.’

The research paper ‘People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors’ is published in Cognition (Everett, J. [University of Kent]; Myers, S. [University of Warwick]).
