Young Researchers Eye AI, Big Data in Healthcare's Future

Eindhoven University of Technology

Healthcare is changing rapidly, and according to three young researchers connected to TU/e and Catharina Hospital in Eindhoven, that change has only just begun. During a roundtable discussion, PhD students Tineke de Vries, Maud Kortman, and Carlijn Buck talk about their vision for the future of AI in healthcare. The conclusion: artificial intelligence will become an indispensable partner for doctors, but we must never lose sight of its limitations.

Source: Catharina Hospital

"The huge difference between then and now?" says Tineke de Vries . "AI can discover patterns that we would never see ourselves. Whereas doctors and researchers use statistics to find connections between pre-selected variables, an AI model learns directly from enormous amounts of data. Think of thousands of ECGs, X-rays, or other data," says the researcher.

New way of looking at things

She continues: "After all, a doctor cannot 'just compare 10,000 ECGs' to look for subtle differences. AI can. This creates a new way of looking at things: not focused on why something happens, but on what happens. For example, AI can see that a certain heart rhythm occurs more often in a specific patient group, even before we understand why." Nevertheless, De Vries emphasizes that statistics are not disappearing. "You need both. Statistics teach you something about causes; AI shows you patterns you didn't know about yet."

Carlijn Buck, Maud Kortman and Tineke de Vries (from left to right). Photo: Jarno Verhoef/Catharina Hospital

A tool, not a new doctor

The three researchers do not see AI as a replacement for doctors, but as a tool. Carlijn Buck compares it to driving assistants in cars: "Lane assist, the driving aid that keeps you in the right lane, can be annoying at times, but it's usually just handy. That's how we should view AI: as something that thinks along with you and helps you, but which you can always take control of."

Essential collaboration

An AI model can assess a lung X-ray, predict a cardiac arrhythmia, or warn if a patient's condition is deteriorating. But it remains a tool. "A model can perform one specific task very well," says De Vries. "A doctor looks at the patient as a whole. They see the person, and they have a gut feeling." According to Maud Kortman, this collaboration is essential: "We can develop AI, but without doctors who understand it and want to work with it, it will remain on the shelf."

Responsibility and trust

An essential theme in this regard is responsibility. What if an AI system makes a mistake? Buck: "We accept that people make mistakes, but not technology. A doctor can explain why he chooses something, and also why something went wrong. Algorithms cannot always say why they make a choice. That makes it difficult."

This raises ethical questions. "Who is liable if a model misclassifies something?" Kortman wonders aloud. "If an AI makes the right call 999 times out of 1,000, but gets it wrong that one time, who is to blame for that one patient?" According to De Vries, it is essential to realize that human care is far from perfect.

"We often forget how imperfect we are now. AI is not flawless, but in some areas, it can already surpass the best doctors of today. It's just that we are more accepting of mistakes made by humans."

Tineke de Vries

The power of data and its pitfalls

AI is only as good as the data it is trained with. That is where the most significant risk lies. Kortman: "If you only use male ECGs, the model will not learn to recognize women's heart rates properly." The same applies to population groups or hospitals: "A model that works in Eindhoven will not automatically work the same way in Amsterdam because the population is different. You would actually have to train AI for each hospital, tailored to the variation in patients," she says.

That is why researchers spend about 80 percent of their time on data quality and only 20 percent on the model itself. In addition, hospitals must be technically prepared. "Healthcare systems are built to track a single patient," explains De Vries, "not to deliver and analyze data in bulk."

The Ludwig platform, an initiative of the Catharina Hospital, provides secure access to medical data that can be used to train AI, making this transition possible. But that process takes time. "It's always a balancing act between innovation, privacy, and patient safety. All of these things are very important, but they slow down implementation," says Buck.

From test phase to practical application

Although the number of AI projects in healthcare is growing, only a small proportion actually end up in clinical practice. This is due to strict regulations. "And rightly so," says De Vries. "We are talking about human lives."

"TECHNOLOGY IS MOVING FASTER THAN LEGISLATION. FIRST COMES INNOVATION, THEN THE FRAMEWORK."

Carlijn Buck

Hopeful signs

Nevertheless, the three see hopeful signs. Within the Catharina Hospital, AI is not only being developed but also tested in collaboration with doctors, the AI Expertise Center, clinical physics, and ICT teams. "That makes all the difference," says Kortman. "There is a great willingness to experiment here. That's the only way we can learn how AI can really help."

Strengthening each other

The three researchers also praise the collaboration within the Eindhoven MedTech Innovation Center (e/MTIC). In addition to TU/e and Catharina Hospital, Philips, Maxima Medical Center, and Kempenhaeghe are also part of this partnership. "Clinical and technical expertise come together there and reinforce each other. Through this collaboration, we are narrowing the gap between technology and practice," says Buck.

"YOU LEARN A LOT FROM EACH OTHER THROUGH THE DIFFERENT EXPERTISE. IT REALLY HELPS TO ENSURE THAT INNOVATIONS ARE NOT ONLY RESEARCHED THEORETICALLY, BUT ALSO ACTUALLY APPLIED IN HEALTHCARE PRACTICE. AND THAT HELPS THE PATIENT."

Maud Kortman

New knowledge, new possibilities

What motivates these young researchers most to conduct their research? De Vries sums it up succinctly: "We are going to discover knowledge that we don't even know we don't know yet. AI can monitor 24 hours a day, recognize patterns that escape the human eye, and pick up early signs of disease." Buck: "A nurse cannot stay with one patient all day; AI can, without judging or getting tired." Kortman also sees opportunities for more personalized care: "AI can help predict who will benefit from a particular treatment and who will not. This allows us to reduce suffering and provide more targeted treatment."

Human reflection

The three PhD students expect AI to be ubiquitous in 30 years, but always in interaction with people. Buck: "Doctors must learn to work with AI, just as we are learning to do now." Nevertheless, they warn against overestimating its capabilities. AI cannot solve everything, and certainly not without human reflection. "If the data you use is not good, or you use a model for something other than what it was trained for, your tool will produce rubbish."

Kortman smiles: "I hope that when I'm a patient myself later on, a smart computer will evaluate my ECG. But I do want a human being to look at it afterwards."

De Vries concludes: "AI is not a new doctor. It is a mirror that helps us better understand what we were already doing and what we can still learn."
