News Release

How could artificial intelligence be used globally to increase fairness in the distribution of public social services?

International consortium of researchers coordinated by Mainz University presents analysis results in an open access volume of articles

Reports and Proceedings

Johannes Gutenberg Universitaet Mainz

Artificial intelligence (AI) is increasingly being used in many countries worldwide to provide public social services, assisting with decisions on entitlement to state pensions and unemployment benefits, assessments of asylum applications, and the assignment of kindergarten places. AI technology is intended to help apply fairness criteria in the allocation of this kind of support to individuals and to assess potential beneficiaries accordingly. However, fairness criteria vary from country to country. In India, for example, the caste system influences the distribution of social benefits. In China, access to social services is determined by a "good citizenship" score. Concepts of fairness in terms of access to limited public resources also vary within Europe. These are major findings from the participatory research undertaken by members of the AI FORA – Artificial Intelligence for Assessment project, which were published recently in an online open access volume. The project, coordinated by Johannes Gutenberg University Mainz (JGU), ran for three and a half years. Other participants included the German Research Center for Artificial Intelligence in Kaiserslautern, the University of Augsburg, and the University of Surrey in the UK. The Volkswagen Foundation provided about EUR 1.5 million to finance the project, which was completed in December 2024.

Comparison of AI-based social assessment in nine countries across four continents

The open access volume, extending to some 300 pages, compares the status quo and the desired scenarios of AI-supported social evaluations in nine countries across four continents: Germany, Spain, Estonia, Ukraine, the USA, Nigeria, Iran, India, and China. "The case studies make apparent the extent to which criteria for being granted state services are determined by culture- and context-related factors. Even within societies, there are very different perspectives that are subject to constant reconsideration and negotiation. This must be reflected in the production of AI technology. Therefore, it is not sufficient to develop a single, standardized AI system for social assessment in public service provision and deploy it worldwide. We need flexible, dynamic, and adaptive systems. Their development requires the participation of all social stakeholders, including vulnerable groups, to design participatory, context-sensitive, and fair AI," emphasized Professor Petra Ahrweiler of JGU's Institute of Sociology, who coordinated the AI FORA project. The AI FORA researchers are currently preparing another publication in which they will outline the policy-relevant modeling and simulation results of the project. They aim to demonstrate how artificial intelligence can be improved to address fairness and discrimination issues in the allocation of public social services.

 


