In an ischaemic stroke, an artery in the brain is blocked by a blood clot, cutting off the blood supply to the brain cells it feeds. Doctors must therefore act quickly and reopen the artery with the help of catheters. During this procedure, known as mechanical thrombectomy, a large amount of data has to be recorded and then transferred to various registries. Dr Nils Lehnen, senior physician at the Clinic for Diagnostic and Interventional Neuroradiology and Paediatric Neuroradiology at the University Hospital Bonn (UKB), has now found in a study that ChatGPT could be a great help with this data transfer. The results have been published in the journal "Radiology".
When did the patient arrive, when was the CT scan performed, when was the first puncture made, when was blood flow restored ... During a mechanical thrombectomy, a range of data points must be recorded in the patient report and then manually transferred to various registries for tracking clinical outcomes and for prospective studies. "This is a labour-intensive task that is also prone to transcription errors," says Dr Nils Lehnen, who also conducts research at the University of Bonn. "We therefore asked ourselves whether an AI such as ChatGPT could perform this transfer faster and possibly even more reliably."
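To make the task concrete: a registry entry of this kind can be thought of as a small structured record. The following Python sketch is purely illustrative; the field names are assumptions derived from the data points listed above, not the actual schema of the registries used at the UKB.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class ThrombectomyRecord:
    """Illustrative registry entry; field names are assumptions, not the study's schema."""
    arrival_time: Optional[datetime] = None          # when the patient arrived
    ct_time: Optional[datetime] = None               # when the CT scan was performed
    first_puncture_time: Optional[datetime] = None   # when the first puncture was made
    recanalization_time: Optional[datetime] = None   # when blood flow was restored
```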
In radiology, ChatGPT is already being tested for various tasks, for example simplifying reports or answering patient questions about breast cancer screening. However, whether ChatGPT can correctly extract data from free-text reports of mechanical thrombectomies for a database and generate clinical data at the same time had not previously been investigated; this was the research objective of the new study.
Dr Lehnen's research group first created a German-language prompt for ChatGPT and tested it on 20 reports in order to identify errors and refine the prompt accordingly. After this refinement, data extraction with ChatGPT was tested on 100 internal reports from the UKB. For comparison, an experienced neuroradiologist extracted the same data without seeing ChatGPT's output. The researchers then compared the two sets of results and found that ChatGPT had correctly extracted 94 per cent of the data entries, with no post-processing required. Only ChatGPT entries that exactly matched those of the expert were considered correct; any deviation, such as additional symbols, punctuation marks or synonyms, was categorised as incorrect.
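In outline, such an extraction-and-comparison pipeline could look like the sketch below. It assumes the official OpenAI Python client; the prompt wording, the field list and the model identifier are illustrative assumptions and do not reproduce the German prompt used in the study.

```python
import json
from openai import OpenAI  # assumes the official "openai" Python package is installed

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Illustrative field list; the study's actual (German) prompt and field set are not shown here.
FIELDS = ["arrival_time", "ct_time", "first_puncture_time", "recanalization_time"]


def extract_entries(report_text: str) -> dict:
    """Ask the model to return the requested data points from one free-text report as JSON."""
    prompt = (
        "Extract the following data points from the thrombectomy report below and "
        f"answer only with a JSON object containing the keys {FIELDS}.\n\n"
        f"Report:\n{report_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; the press release does not name a specific model version
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)


def accuracy(extracted: list[dict], reference: list[dict]) -> float:
    """Strict scoring as in the study: only entries that match the expert's exactly count as correct."""
    total = correct = 0
    for model_entry, expert_entry in zip(extracted, reference):
        for field in FIELDS:
            total += 1
            # Any deviation (extra symbols, punctuation, synonyms) counts as incorrect.
            if model_entry.get(field) == expert_entry.get(field):
                correct += 1
    return correct / total
```

Running such an accuracy check over a set of reports would yield a single percentage comparable to the 94 per cent reported for the internal test set.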
To validate these results, the researchers applied the same prompt to a further 30 external reports, on which ChatGPT achieved 90 per cent correct data entries.
"This suggests that ChatGPT could be an alternative to manually retrieving this data," says Dr Lehnen. "However, the reports and the prompt were only created by us in German, so the results of our study may need to be confirmed for other languages. In addition, we still observed poor results for certain data points, which shows that human supervision is still needed. However, we expect that further optimisation of the prompt will further improve the results and that ChatGPT can make work easier in this area in the future."
Journal: Radiology