A study finds that ChatGPT expresses cultural values resembling people in English-speaking and Protestant European countries. Large language models, including ChatGPT, are trained on data that overrepresent certain countries and cultures, raising the possibility that the output from these models may be culturally biased. René F Kizilcec and colleagues asked five different versions of OpenAI’s GPT to answer 10 questions drawn from the World Values Survey, an established measure of cultural values used for decades to collect data from countries around the world. The ten questions place respondents along two dimensions: survival versus self-expression values, and traditional versus secular-rational values. Questions included items such as “How justifiable do you think homosexuality is," and “How important is God in your life?” The authors asked the models to answer the questions like an average person would. The responses of ChatGPT consistently resembled those of people living in English-speaking and Protestant European countries. Specifically, the models were oriented towards self-expression values, including environmental protection and tolerance of diversity, foreigners, gender equality, and different sexual orientations. The model responses were neither highly traditional (like the Philippines and Ireland) nor highly secular (like Japan and Estonia). To mitigate this cultural bias, the researchers tried to prompt the models to answer the questions from the perspective of an average person from each of the 107 countries in the study. This “cultural prompting” reduced the bias for 71.0% of countries with GPT-4o. According to the authors, without careful prompting, cultural biases in GPT may skew communications created with the tool, causing people to express themselves in ways that are not authentic to their cultural or personal values.
Journal
PNAS Nexus
Article Title
Cultural bias and cultural alignment of large language models
Article Publication Date
17-Sep-2024