“As we eagerly adopt AI models, we need to take a moment to think about the potential biases that they may contain.”
BUFFALO, NY – November 11, 2024 – A new editorial was published in Oncotarget's Volume 15 on November 7, 2024, titled “Beyond the hype: Navigating bias in AI-driven cancer detection.”
In this editorial, researchers from the Mayo Clinic emphasize the need to address potential biases in Artificial Intelligence (AI) tools used for cancer detection to ensure fair and equitable healthcare. Authors Yashbir Singh, Heenaben Patel, Diana V. Vera-Garcia, Quincy A. Hathaway, Deepa Sarkar, and Emilio Quaia discuss the risks of biased AI systems, which can lead to disparities in diagnosis and treatment outcomes across diverse patient groups.
While AI is transforming cancer care through early diagnosis and improved treatment planning, the authors warn that AI models trained on limited or non-diverse data may misdiagnose or overlook certain populations, particularly those in underserved communities, thereby widening healthcare disparities. As explained in the editorial, “For example, if an AI model is trained on Caucasian patients, it may struggle to detect skin cancer accurately in patients with darker skin, leading to missed diagnoses or false positives.” Such biases could result in unequal access to early diagnosis and treatment, ultimately leading to poorer health outcomes for certain groups. Beyond racial bias, factors such as socioeconomic status, gender, age, and geographic location can also affect the accuracy of AI in healthcare.
The authors propose a comprehensive approach to developing fair AI models in healthcare, highlighting six key strategies. They first emphasize the importance of using diverse and representative datasets to improve diagnostic accuracy across all demographics. Rigorous testing and validation across various population groups are necessary before AI systems are widely implemented. To promote ethical AI use, models should be transparent in their decision-making processes, enabling clinicians to recognize and address potential biases. The researchers also advocate for collaborative development involving data scientists, clinicians, ethicists, and patient advocates to capture a range of perspectives. Continuous monitoring and regular audits are essential to detect and correct biases over time. Finally, training healthcare providers on AI’s strengths and limitations will empower them to use these tools responsibly and make informed interpretations.
“The goal should not merely be to create AI systems that are more accurate than humans but to develop technologies that are fundamentally fair and beneficial to all patients.”
The authors also urge regulatory bodies, such as the U.S. Food and Drug Administration (FDA), to implement updated frameworks specifically aimed at addressing AI bias in healthcare. Policies that promote diversity in clinical trials and incentivize the development of fair AI systems will help ensure that AI benefits reach all populations equitably. They caution against over-reliance on AI without a full understanding of its limitations, as unchecked biases could undermine patient trust and slow the adoption of valuable AI technologies.
In conclusion, as AI continues to transform cancer care, the healthcare sector must prioritize fairness, transparency, and robust AI regulation to ensure that it serves all patients without bias. By addressing bias from development through to implementation, AI can fulfill its promise of creating a fair and effective healthcare system for everyone.
Continue reading: DOI: https://doi.org/10.18632/oncotarget.28665
Correspondence to: Yashbir Singh - singh.yashbir@mayo.edu
Keywords: cancer, AI bias, cancer detection, healthcare disparities, medical ethics, AI regulation
About Oncotarget:
Oncotarget (a primarily oncology-focused, peer-reviewed, open access journal) aims to maximize research impact through insightful peer-review; eliminate borders between specialties by linking different fields of oncology, cancer research and biomedical sciences; and foster application of basic and clinical science.
Oncotarget is indexed and archived by PubMed/Medline, PubMed Central, Scopus, EMBASE, META (Chan Zuckerberg Initiative) (2018-2022), and Dimensions (Digital Science).
To learn more about Oncotarget, visit Oncotarget.com and connect with us on social media: X, Facebook, YouTube, Instagram, LinkedIn, Pinterest, and Spotify (also available wherever you listen to podcasts).
For media inquiries, please contact media@impactjournals.com.
Oncotarget Journal Office
6666 East Quaker St., Suite 1
Orchard Park, NY 14127
Phone: 1-800-922-0957 (option 2)
Journal: Oncotarget
Method of Research: Commentary/editorial
Subject of Research: Not applicable
Article Title: Beyond the hype: Navigating bias in AI-driven cancer detection
Article Publication Date: 7-Nov-2024
COI Statement: Authors have no conflicts of interest to declare.