News Release

New AI framework aims to remove bias in key areas such as health, education, and recruitment

Researchers at the University of Navarra present a new prediction methodology that could help governments and companies reduce algorithmic discrimination and ensure fairness in critical decision-making

Peer-Reviewed Publication

Universidad de Navarra

University of Navarra DATAI researchers

Image caption: From left to right: Alberto García Galindo, Marcos López De Castro and Rubén Armañanzas Arnedillo. Credit: Manuel Castells

Researchers from the Data Science and Artificial Intelligence Institute (DATAI) of the University of Navarra (Spain) have published an innovative methodology that improves the fairness and reliability of artificial intelligence models used in critical decision-making. These are decisions that significantly impact people's lives or the operations of organizations, in areas such as health, education, justice, and human resources.

The team, formed by researchers Alberto García Galindo, Marcos López De Castro and Rubén Armañanzas Arnedillo, has developed a new theoretical framework that optimizes the parameters of reliable machine learning models, that is, AI algorithms that make predictions transparently while guaranteeing specified confidence levels. In this contribution, the researchers propose a methodology able to reduce inequalities related to sensitive attributes such as race, gender, or socioeconomic status.

The study, published in Machine Learning, one of the leading scientific journals in artificial intelligence and machine learning, combines advanced prediction techniques (conformal prediction) with algorithms inspired by natural evolution (evolutionary learning). The derived algorithms offer rigorous confidence levels and ensure equitable coverage across different social and demographic groups. This new AI framework thus provides the same reliability level regardless of individuals' characteristics, ensuring fair and unbiased results.
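The idea of equitable coverage can be illustrated with a minimal sketch of split conformal prediction calibrated per sensitive group. This is not the authors' published code; all data, sizes, and variable names below are illustrative assumptions, and only the general technique (per-group calibration quantiles so each group gets the same coverage guarantee) follows the paper's description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic toy setup: a model's confidence in the true label for 2,000
# calibration points, plus a binary sensitive attribute. The two groups get
# deliberately different score distributions to mimic a biased model.
n_cal = 2000
group = rng.integers(0, 2, size=n_cal)            # sensitive attribute (0/1)
scores = rng.beta(2 + 3 * group, 2, size=n_cal)   # confidence in true label

alpha = 0.1  # target miscoverage: prediction sets should cover >= 90%

# Split conformal: nonconformity = 1 - confidence in the true label.
noncon = 1.0 - scores

# Marginal calibration: a single quantile threshold for everyone.
k = int(np.ceil((n_cal + 1) * (1 - alpha)))
q_marginal = np.sort(noncon)[min(k, n_cal) - 1]

# Group-conditional calibration: one quantile per group, so the coverage
# guarantee holds within each sensitive group (equitable coverage).
q_group = {}
for g in (0, 1):
    s = np.sort(noncon[group == g])
    kg = int(np.ceil((len(s) + 1) * (1 - alpha)))
    q_group[g] = s[min(kg, len(s)) - 1]

# A test point is included in the prediction set iff noncon <= threshold;
# with per-group thresholds, neither group is systematically under-covered.
print("marginal threshold:", round(float(q_marginal), 3))
print("per-group thresholds:", {g: round(float(q), 3) for g, q in q_group.items()})
```

With a single marginal threshold, the group with weaker scores would fall below the 90% coverage target; calibrating per group restores the guarantee for both.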

"The widespread use of artificial intelligence in sensitive fields has raised ethical concerns due to possible algorithmic discriminations," explains Armañanzas Arnedillo, principal investigator of DATAI at the University of Navarra. "Our approach enables businesses and public policymakers to choose models that balance efficiency and fairness according to their needs, or responding to emerging regulations. This breakthrough is part of the University of Navarra's commitment to fostering a responsible AI culture and promoting ethical and transparent use of this technology.”

Application in real scenarios

The researchers tested the method on four benchmark datasets with different characteristics, drawn from real-world domains: economic income, criminal recidivism, hospital readmission, and school applications. The results showed that the new prediction algorithms significantly reduced inequalities without compromising the accuracy of the predictions. "In our analysis we found, for example, striking biases in the prediction of school admissions, evidencing a significant lack of fairness based on family financial status," notes Alberto García Galindo, DATAI predoctoral researcher at the University of Navarra and first author of the paper. "These experiments also demonstrated that, on many occasions, our methodology reduces such biases without compromising the model's predictive ability. Specifically, we found solutions in which discrimination was almost entirely eliminated while prediction accuracy was maintained." The methodology offers a 'Pareto front' of optimal algorithms, "which allows us to visualize the best available options according to priorities and to understand, for each case, how algorithmic fairness and accuracy are related."
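The Pareto front mentioned above can be sketched with a small, self-contained example. The candidate models and their scores below are invented for illustration; the point is only the selection rule itself: a model is kept if no other model is at least as accurate and at least as fair, with a strict improvement in one of the two objectives.

```python
# Hypothetical candidate models: (name, accuracy, fairness_gap), where
# fairness_gap could be e.g. the coverage difference between sensitive
# groups. Higher accuracy is better; a lower gap is better.
candidates = [
    ("A", 0.91, 0.12),
    ("B", 0.89, 0.03),
    ("C", 0.90, 0.08),
    ("D", 0.86, 0.01),
    ("E", 0.88, 0.09),  # dominated by C: less accurate AND less fair
]

def pareto_front(models):
    """Return non-dominated models, sorted by accuracy (descending)."""
    front = []
    for name, acc, gap in models:
        dominated = any(
            a >= acc and g <= gap and (a > acc or g < gap)
            for _, a, g in models
        )
        if not dominated:
            front.append((name, acc, gap))
    return sorted(front, key=lambda m: -m[1])

for name, acc, gap in pareto_front(candidates):
    print(f"{name}: accuracy={acc:.2f}, fairness gap={gap:.2f}")
```

Walking along this front makes the trade-off visible: moving from A toward D gives up accuracy in exchange for a smaller fairness gap, and a decision-maker picks the point that matches their priorities or regulatory constraints.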

According to the researchers, this innovation has vast potential in sectors where AI must support reliable and ethical critical decision-making. García Galindo points out that their method "not only contributes to fairness but also enables a deeper understanding of how the configuration of models influences the results, which could guide future research in the regulation of AI algorithms." The researchers have made the code and data from the study publicly available to encourage further research applications and transparency in this emerging field.


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.