News Release

AI screening for heart failure clinical trial speeds up enrollment, study finds

Mass General Brigham researchers compared a generative AI clinical trial screening tool with manual screening and demonstrated that the AI was significantly more efficient at identifying and enrolling patients eligible for a heart failure clinical trial

Peer-Reviewed Publication

Mass General Brigham

Artificial intelligence (AI) can rapidly screen patients for clinical trial enrollment, according to a new study published in JAMA and led by Mass General Brigham researchers. Their novel AI-assisted patient screening tool significantly improved the speed of determining eligibility and enrollment in a heart failure clinical trial compared to manual screening. These findings suggest that using AI can be cheaper than conventional methods and speed up the research process, which could mean patients get earlier access to proven, effective treatments.

“Seeing this AI capability accelerate screening and trial enrollment this substantially in the context of a real-world randomized prospective trial is exciting,” said co-senior author Samuel (Sandy) Aronson, ALM, MA, executive director of IT and AI Solutions for Mass General Brigham Personalized Medicine and senior director of IT and AI Solutions for the Accelerator for Clinical Transformation. “We look forward to using this capability to assist as many trials as we can.”

The study randomized 4,476 patients to be screened either manually or with generative AI to determine their eligibility for the Co-Operative Program for Implementation of Optimal Therapy in Heart Failure (COPILOT-HF) trial.

In the AI arm of the study, a generative AI tool called RAG-Enabled Clinical Trial Infrastructure for Inclusion Exclusion Review (RECTIFIER) assessed clinical notes and other information in patients’ electronic health records to determine whether they met key eligibility criteria for the heart failure study. Criteria included symptoms, chronic diseases, and current and past medications, among others. Study staff then conducted a brief review of the charts the AI tool flagged as eligible, checking for any outstanding issues before patients were considered for enrollment.
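
To give a rough sense of how retrieval-augmented screening of this kind can work, the sketch below is a minimal, hypothetical Python example. It is not the RECTIFIER implementation: the criterion questions, the toy keyword retriever, and the generic "llm" callable are all illustrative assumptions.

    # A minimal, hypothetical sketch of retrieval-augmented eligibility
    # screening. NOT the RECTIFIER implementation; names are illustrative.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Criterion:
        name: str
        question: str  # phrased so the model can answer yes/no from the notes

    def retrieve(notes: List[str], question: str, k: int = 3) -> List[str]:
        # Toy retriever: rank note chunks by word overlap with the question.
        q_words = set(question.lower().split())
        ranked = sorted(notes, key=lambda n: -len(q_words & set(n.lower().split())))
        return ranked[:k]

    def screen_patient(notes: List[str], criteria: List[Criterion],
                       llm: Callable[[str], str]) -> Dict[str, bool]:
        # Ask the model one eligibility question at a time over retrieved context.
        answers = {}
        for c in criteria:
            context = "\n".join(retrieve(notes, c.question))
            prompt = (f"Clinical notes:\n{context}\n\n"
                      f"Question: {c.question}\nAnswer yes or no.")
            answers[c.name] = llm(prompt).strip().lower().startswith("yes")
        return answers

    # Invented example criteria for a heart failure trial:
    hf_criteria = [
        Criterion("symptomatic_hf", "Does the patient have symptomatic heart failure?"),
        Criterion("on_gdmt", "Is the patient on guideline-directed medical therapy?"),
    ]

In a design like this, each inclusion or exclusion criterion becomes a question posed against the retrieved note text, and a human reviewer confirms the charts the model flags as eligible, as in the trial described above.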

In the other arm of the study, research staff manually reviewed patients’ charts to determine if they met the eligibility criteria.

Charts were screened by RECTIFIER or by study staff over a set period of time. The AI-assisted process was far more efficient, identifying 458 eligible patients compared with 284 identified through manual screening.

Following this process, patient navigators called patients deemed eligible to see if they would be willing to participate in the study. The navigators were not aware of whether the patients had been screened by the AI tool or a human, so as not to introduce bias. In the AI group, 35 patients enrolled in the trial, compared to 19 patients in the manual group.

“The rate of enrollment in the AI-enabled arm was almost double the rate of enrollment in the manual arm. This means that AI could almost halve the time it takes to complete enrollment in a trial,” said lead author Ozan Unlu, MD, a fellow in Clinical Informatics at Mass General Brigham and a fellow in Cardiovascular Medicine at Brigham and Women's Hospital.

Because previous research has shown that AI can introduce bias, the researchers conducted race, gender, and ethnicity analyses on patients who were enrolled via the manual screening process and those enrolled via AI-assisted screening. They found no significant differences.

The study follows an earlier “proof of concept” study by Blood, Aronson, Unlu, and colleagues, published in June in NEJM AI. That retrospective review of health records showed that the RECTIFIER tool was slightly more accurate than manual screening at identifying patient charts that met the heart failure trial’s eligibility criteria. The new research validates the tool as highly effective in an active clinical setting.

“Our next goal is to expand the AI screening tool’s use outside of Mass General Brigham,” said co-senior author Alexander Blood, MD, MSc, a cardiologist at Brigham and Women’s Hospital and associate director of the Accelerator for Clinical Transformation at Mass General Brigham. “By adjusting the eligibility questions that the RECTIFIER tool asks of the medical record notes, AI screening can be applied to trials assessing cancer treatments, diabetes interventions, and many others.”
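
To make that configurability concrete, here is a short continuation of the hypothetical sketch above: retargeting the screener to a different trial amounts to supplying a new list of criterion questions. The diabetes criteria below are invented examples, not taken from the study.

    # Hypothetical continuation of the earlier sketch: retargeting the screener
    # to another trial only requires a new list of criterion questions.
    diabetes_criteria = [
        Criterion("hba1c", "Does the patient have an HbA1c of 7.5% or higher?"),
        Criterion("insulin", "Is the patient currently on insulin therapy?"),
    ]
    # results = screen_patient(patient_notes, diabetes_criteria, llm)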

Authorship: In addition to Aronson, Blood, and Unlu, Mass General Brigham authors include Matthew Varugheese, Jiyeon Shin, Samantha M. Subramaniam, David Walter Jacques Stein, John J. St. Laurent, Charlotte J. Mailly, Marian J. McPartlin, Fei Wang, Michael F. Oates, Christopher P. Cannon, Benjamin M. Scirica, and Kavishwar B. Wagholikar.

Disclosures: Aronson reported receiving grants from Boehringer Ingelheim, Better Therapeutics, Foresite Labs, Milestone Pharmaceutical, Novo Nordisk, and Pfizer, and personal fees from Nest Genomics and Harvard Medical School. Blood reported receiving grants from Boehringer Ingelheim, Better Therapeutics, Foresite Labs, Milestone Pharmaceutical, Novo Nordisk, Pfizer, and General Electric Health; personal fees from Alnylam, Milestone Therapeutics, NODE Health, Walgreens Health, Medscape, Color Health, Corcept Therapeutics, Nference Inc, Withings, and Arsenal Capital Partners; and having equity in Knownwell Health, Porter Health, and Signum Technologies. Unlu reported receiving funding from the National Heart, Lung, and Blood Institute (award T32HL007604).

Funding: This study was funded by the Accelerator for Clinical Transformation (ACT).

Paper cited: Unlu O, et al. “Manual versus AI-Assisted Clinical Trial Screening Using Large-Language Models (MAPS-LLM).” JAMA. DOI: 10.1001/jama.2024.28047
