News Release

ChatGPT and cultural bias

Peer-Reviewed Publication

PNAS Nexus

ChatGPT cultural values map

The map presents 107 countries/territories based on the last three joint survey waves of the Integrated Values Surveys. On the x-axis, negative values represent survival values and positive values represent self-expression values. On the y-axis, negative values represent traditional values and positive values represent secular values. Five red points were added based on the answers of five LLMs (GPT-4o, GPT-4-turbo, GPT-4, GPT-3.5-turbo, and GPT-3) to the same questions. Cultural regions established in prior work are indicated by different colors.

Credit: Tao et al.

A study finds that ChatGPT expresses cultural values resembling those of people in English-speaking and Protestant European countries. Large language models, including ChatGPT, are trained on data that overrepresent certain countries and cultures, raising the possibility that the output from these models is culturally biased.

René F. Kizilcec and colleagues asked five versions of OpenAI’s GPT to answer ten questions drawn from the World Values Survey, an established measure of cultural values that has been used for decades to collect data from countries around the world. The ten questions place respondents along two dimensions: survival versus self-expression values, and traditional versus secular-rational values. Items included “How justifiable do you think homosexuality is?” and “How important is God in your life?” The authors asked the models to answer the questions as an average person would.

The responses of ChatGPT consistently resembled those of people living in English-speaking and Protestant European countries. Specifically, the models were oriented toward self-expression values, including environmental protection and tolerance of diversity, foreigners, gender equality, and different sexual orientations. The model responses were neither highly traditional (like those from the Philippines and Ireland) nor highly secular (like those from Japan and Estonia).

To mitigate this cultural bias, the researchers prompted the models to answer the questions from the perspective of an average person from each of the 107 countries and territories in the study. This “cultural prompting” reduced the bias for 71.0% of countries with GPT-4o. According to the authors, without careful prompting, cultural biases in GPT may skew communications created with the tool, causing people to express themselves in ways that are not authentic to their cultural or personal values.
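For readers who want to try the idea themselves, the sketch below shows what “cultural prompting” looks like with the OpenAI Python SDK. The release does not reproduce the authors’ exact survey items, response scales, or prompt wording, so the question text and persona prompts here are illustrative assumptions rather than the study’s instruments.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative stand-in for one of the ten World Values Survey items;
# not the authors' exact wording.
QUESTION = ("How important is God in your life? Answer with a number "
            "from 1 (not at all important) to 10 (very important).")

def ask(model: str, country: str | None = None) -> str:
    """Ask one survey item, optionally from a country persona
    ('cultural prompting')."""
    if country:
        # Hypothetical persona prompt, not the study's exact phrasing.
        system = f"Respond as an average person from {country} would."
    else:
        system = "Respond as an average person would."
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": QUESTION},
        ],
        temperature=0,  # make answers more repeatable for comparison
    )
    return response.choices[0].message.content

# Baseline ("average person") versus a culturally prompted answer:
baseline = ask("gpt-4o")
prompted = ask("gpt-4o", country="Japan")
print(baseline)
print(prompted)

In the study, answers like these were scored and placed on the same two dimensions, survival versus self-expression and traditional versus secular-rational, shown in the map above, allowing the model’s responses to be compared with each country’s survey-based position.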

