A new type of chatbot could reliably help people decide what to do about their symptoms — and do so based on guidance that is both medically sound and easy to understand. The chatbot could help reduce unnecessary hospital visits and ensure that those who need care seek it sooner.
A team co-led by engineers at the University of California San Diego published their work in Nature Health.
The artificial intelligence-powered tool is designed to improve self-triage, which is the decision-making process people use to assess how serious their symptoms are before contacting a doctor. Today, more people turn to online searches or chatbots for quick answers. However, information from these sources can be overwhelming, impersonal or medically unverified. That can lead to unnecessary emergency visits or delayed care.
Enter a new chatbot that provides guidance based on trusted medical protocols. It mirrors how a patient might be guided through self-triage, using symptom-based flowcharts to decide whether to self-care, schedule a visit or seek emergency care. The system follows well-established protocols while adapting to back-and-forth conversations in which patients describe their symptoms in their own words.
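The protocol-following idea can be pictured as a decision-tree traversal: each flowchart node asks a question, and the answer selects the next node until a recommendation is reached. The sketch below is illustrative only; the node structure, questions and recommendations are invented for the example and are not taken from the study's actual protocols.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class FlowchartNode:
    """One yes/no decision point in a triage protocol (illustrative)."""
    question: str
    yes: FlowchartNode | None = None
    no: FlowchartNode | None = None
    recommendation: str | None = None  # set only on leaf nodes

def triage(node: FlowchartNode, answer_fn) -> str:
    """Walk the flowchart, asking each question until a leaf is reached."""
    while node.recommendation is None:
        node = node.yes if answer_fn(node.question) else node.no
    return node.recommendation

# A tiny made-up abdominal-pain protocol for illustration.
leaf_er = FlowchartNode("", recommendation="seek emergency care")
leaf_visit = FlowchartNode("", recommendation="schedule a visit")
leaf_home = FlowchartNode("", recommendation="self-care at home")
root = FlowchartNode(
    "Is the pain severe?",
    yes=FlowchartNode("Is there fever or vomiting?", yes=leaf_er, no=leaf_visit),
    no=leaf_home,
)

# Simulated patient answers keyed by question.
answers = {"Is the pain severe?": True, "Is there fever or vomiting?": False}
print(triage(root, answers.__getitem__))  # → schedule a visit
```

Because every recommendation is the leaf of a fixed, clinician-authored tree, each outcome can be traced back to the exact sequence of questions and answers that produced it.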
The conversational AI system is trained on 100 step-by-step medical flowcharts developed by the American Medical Association. "It can be further adapted to accommodate provider-specific protocols, which gives healthcare organizations full control over the clinical logic their patients encounter," said study senior author Edward Wang, a professor in both the Department of Electrical and Computer Engineering at the UC San Diego Jacobs School of Engineering and the Design Lab.
"Our system uses these flowcharts to ground the conversation with the patient," said study first author Yujia (Nancy) Liu, an electrical and computer engineering Ph.D. student at the UC San Diego Jacobs School of Engineering. Liu co-led the study with Wang and Xin Liu, a senior research scientist at Google Research.
Take, for example, a simulated conversation in which a patient consults the chatbot about abdominal pain. Three AI agents work together behind the scenes to guide the conversation. Based on the patient's description of their symptoms, the first AI agent identifies the issue and selects the appropriate medical flowchart while factoring in details such as age and sex. The chatbot proceeds with the next question prescribed by the flowchart. The second AI agent interprets the patient's response — it can do so even when the response is not a simple "yes" or "no" — and determines the next question to ask. The third AI agent translates clinical questions into patient-friendly language so they're easier to answer. For instance, instead of asking, "Is the pain severe?", the chatbot might ask, "How bad is the pain on a scale of 1 to 10?" The chatbot continues through the flowchart until it can recommend whether to monitor symptoms or seek medical attention.
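The division of labor among the three agents can be sketched as a simple pipeline. In the real system each stage is a large language model; here each is a trivial placeholder function, and all names, keyword lists and mappings are assumptions made for illustration, not the study's components.

```python
def select_flowchart(description: str) -> str:
    """Agent 1 (placeholder): map a free-text complaint to a protocol name."""
    if "abdominal" in description.lower() or "stomach" in description.lower():
        return "abdominal_pain"
    return "general_symptoms"

def interpret_answer(reply: str) -> bool:
    """Agent 2 (placeholder): reduce a free-text reply to the flowchart's
    yes/no branch, even when the patient doesn't answer with a plain yes/no."""
    negative_cues = ("no", "not really", "mild", "barely")
    return not any(cue in reply.lower() for cue in negative_cues)

def rephrase_question(clinical_question: str) -> str:
    """Agent 3 (placeholder): translate clinical wording into plain language."""
    friendly = {"Is the pain severe?": "How bad is the pain on a scale of 1 to 10?"}
    return friendly.get(clinical_question, clinical_question)

# One turn of the pipeline: pick a protocol, ask in plain language,
# and map the patient's free-text reply back onto the flowchart branch.
protocol = select_flowchart("I have sharp abdominal pain")
question = rephrase_question("Is the pain severe?")
branch = interpret_answer("honestly it's pretty mild")
print(protocol, "|", question, "|", branch)
```

The key design point the article describes is that the language models handle only the conversational surface (choosing a protocol, interpreting replies, rephrasing questions), while the clinical decision logic itself stays inside the validated flowchart.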
This approach ensures that the chatbot gathers the information it needs from the patient. It is also more transparent. "Large language models are powerful, but they're a black box," Wang said. "We do not know how they generate their responses, and that makes it hard to verify or trust them. But with this system, every recommendation can be traced back to a clinician-validated flowchart."
The researchers tested the chatbot across more than 30,000 simulated conversations. It selected the correct medical flowchart about 84% of the time and followed the decision-making steps with over 99% accuracy, even when users described symptoms in different ways.
The researchers note that the chatbot is designed as a support tool and not a replacement for clinicians. "It can offload triage tasks from clinicians by providing patients reliable medical guidance at home," Yujia (Nancy) Liu said. "Clinicians could also review the conversations and step in when needed."
So far, the system has primarily been tested using simulated conversations. The team plans to partner with hospitals and test the chatbot on real patients.
Next steps also include developing a mobile app version, as well as supporting voice input, multiple languages and image sharing. Such features would make the chatbot accessible to more users, including older adults and non-English speakers. Ultimately, the goal is to integrate the chatbot into electronic health record systems.
Full study: "A multi-agent framework combining large language models with medical flowcharts for self-triage." Co-authors include Sophia Yu, Hongyue Jin, Mattheus Ramsis and Lianhui Qin, UC San Diego; Jessica Wen, Kaiser Permanente; Alexander Qian and Terrence Lee, UC San Francisco; and Gi Won Choi, Korea University Ansan Hospital.