A new study in the academic journal Machine Learning: Health finds that ChatGPT can accelerate patient screening for clinical trials, showing promise for reducing delays and improving trial success rates.
Researchers at UT Southwestern Medical Center used ChatGPT to assess whether patients were eligible to take part in clinical trials, identifying suitable candidates within minutes.
Clinical trials, which test new medications and procedures in human volunteers, are vital for developing and validating new treatments. But many trials struggle to enrol enough participants. According to a recent study, up to 20% of National Cancer Institute (NCI)-affiliated trials fail because of low enrolment. This not only inflates costs and delays results but also undermines the statistical reliability of the findings.
Currently, screening patients for trials is a manual process. Researchers must review each patient's medical records to determine if they meet eligibility criteria, which takes around 40 minutes per patient. With limited staff and resources, this process is often too slow to keep up with demand.
Part of the problem is that valuable patient information contained in electronic health records (EHRs) is often buried in unstructured text, such as doctors' notes, which traditional machine learning software struggles to decipher. As a result, many eligible patients are overlooked because there simply isn't enough capacity to review every case. This contributes to low enrolment rates, trial delays and even cancellations, ultimately slowing down access to new therapies.
To tackle this problem, the researchers explored ways of speeding up the screening process with ChatGPT, using GPT-3.5 and GPT-4 to analyse the records of 74 patients and determine whether they qualified for a head and neck cancer trial.
Three ways of prompting the AI were tested (sketched in the example after this list):
- Structured Output (SO): asking for answers in a set format.
- Chain of Thought (CoT): asking the model to explain its reasoning.
- Self-Discover (SD): letting the model figure out what to look for.
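To make the three strategies concrete, here is a minimal sketch of how such prompts might look using the OpenAI Python client. The eligibility criterion, note text, prompt wording and model settings are illustrative assumptions; the study's actual prompts and pipeline are not reproduced here.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical trial criterion and de-identified note excerpt (illustrative only).
CRITERION = "Patient has histologically confirmed head and neck squamous cell carcinoma."
NOTE = "Pathology: biopsy of left tonsil shows squamous cell carcinoma."

PROMPTS = {
    # Structured Output (SO): demand the answer in a fixed, machine-readable format.
    "SO": f"Criterion: {CRITERION}\nNote: {NOTE}\n"
          'Answer strictly as JSON: {"eligible": true or false, "evidence": "<quote>"}',
    # Chain of Thought (CoT): ask the model to explain its reasoning step by step.
    "CoT": f"Criterion: {CRITERION}\nNote: {NOTE}\n"
           "Think step by step about whether the note satisfies the criterion, "
           "then give a final yes/no verdict.",
    # Self-Discover (SD): let the model decide for itself what to look for.
    "SD": f"Criterion: {CRITERION}\nNote: {NOTE}\n"
          "First list the information you would need to check this criterion, "
          "then extract it from the note, then give a yes/no verdict.",
}

for name, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4",   # the study compared GPT-3.5 and GPT-4
        temperature=0,   # deterministic output suits screening decisions
        messages=[
            {"role": "system",
             "content": "You screen patients for clinical trial eligibility."},
            {"role": "user", "content": prompt},
        ],
    )
    print(name, "->", response.choices[0].message.content)
```

In a setup like this, the SO style is the easiest to post-process automatically, while CoT and SD trade extra tokens (and cost) for a visible reasoning trail that human reviewers can audit.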
The results were promising. GPT-4 was more accurate than GPT-3.5, though slightly slower and more expensive. Screening times ranged from 1.4 to 12.4 minutes per patient, well below the roughly 40 minutes of a manual review, with costs between $0.02 and $0.27.
"LLMs like GPT-4 can help screen patients for clinical trials, especially when using flexible criteria," said Dr. Mike Dohopolski, lead author of the study. "They're not perfect, especially when all rules must be met, but they can save time and support human reviewers."
This research highlights the potential for AI to support faster, more efficient clinical trials, bringing new treatments to patients sooner.
The study is one of the first articles published in IOP Publishing's Machine Learning series™, the world's first open access journal series dedicated to the application and development of machine learning (ML) and artificial intelligence (AI) for the sciences.
The same research team have also worked on a method that allows surgeons to adjust patients' radiation therapy in real time whilst they are still on the table. Their deep learning system, GeoDL, delivers precise 3D dose estimates from CT scans and treatment data in just 35 milliseconds, which could make adaptive radiotherapy faster and more efficient in real clinical settings.