Flex Models Boost Diagnosis, Prognosis, Treatment Response

Harvard Medical School

Artificial intelligence is poised to transform the practice of medicine through the design and deployment of AI models that can detect, diagnose, and render a prognosis for disease more rapidly than most human physicians can, with similar or superior accuracy.

By Mass General Brigham Communications

So-called foundation models, which are trained on vast amounts of unlabeled data and can be used in multiple clinical contexts for different purposes with minimal tweaking, offer a particularly tantalizing promise to reshape diagnosis and treatment.

Now, Harvard Medical School researchers at Brigham and Women's Hospital have developed two of the largest foundation models to date that can be used in pathology to read, interpret, and classify microscopy slides from patient tissues.

The new foundation models, called UNI and CONCH, performed well for more than 30 diagnostic tasks, including disease detection, disease diagnosis, organ transplant assessment, and rare disease analysis.

Furthermore, these models overcome a key limitation of current AI systems, which are useful only within the narrow scenarios they were trained for. By contrast, the new models performed well on the specific clinical tasks the researchers tested them on and showed promise in identifying new, rare, and more challenging diseases. The advances are described in two companion papers on UNI and CONCH published March 19 in Nature Medicine.

"Foundation models represent a new paradigm in medical artificial intelligence," said senior author Faisal Mahmood, HMS associate professor of pathology at Brigham and Women's. "These models are AI systems that can be adapted to many downstream, clinically relevant tasks. We hope that the proof of concept presented in these studies will set the stage for such self-supervised and multimodal large language models to be trained on larger and more diverse datasets."

UNI is a foundation model that interprets pathology images and is capable of recognizing disease in specific regions of interest on a given slide as well as gigapixel whole-slide images.

Trained using a database of more than 100 million tissue patches and more than 100,000 whole-slide images, UNI performed well across a variety of surgical pathology tasks ranging from detecting disease on images of tissues, diagnosing disease type and subtype, and using the features on an image to successfully determine disease prognosis.

Notably, UNI employs transfer learning, the application of previously acquired knowledge to new tasks, with remarkable accuracy. Across 34 tasks, including cancer classification and assessing transplanted tissue for signs of rejection, UNI proved more versatile than established pathology AI models and outperformed them.
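To make the idea of transfer learning concrete: a minimal sketch, not the authors' code, of how a frozen foundation-model encoder is typically reused for a new task. A lightweight "linear probe" is trained on the encoder's embeddings instead of retraining the full model. The encoder and data below are toy stand-ins, not UNI or real tissue images.

```python
# Illustrative sketch of transfer learning with a frozen encoder.
# The encoder and "tissue patches" are toy stand-ins, not UNI or real data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def frozen_encoder(patches):
    # Stand-in for a pretrained encoder: maps each image patch to a
    # fixed-length embedding. A real encoder's weights stay frozen.
    return patches.mean(axis=(2, 3))  # crude pooling to a feature vector

# Synthetic "patches": 200 RGB images, 16x16 pixels, two classes whose
# mean intensity differs slightly (a toy stand-in for disease signal).
labels = rng.integers(0, 2, size=200)
patches = rng.normal(loc=labels[:, None, None, None] * 0.5,
                     scale=1.0, size=(200, 3, 16, 16))

# Transfer learning step: embed with the frozen encoder, then fit only a
# small linear classifier (the probe) for the downstream task.
features = frozen_encoder(patches)
probe = LogisticRegression().fit(features[:150], labels[:150])
accuracy = probe.score(features[150:], labels[150:])
print(f"linear-probe accuracy: {accuracy:.2f}")
```

Because only the small probe is trained, the same frozen encoder can serve many downstream tasks, which is what makes foundation models adaptable with "minimal tweaking."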

CONCH is a foundation model that understands both pathology images and language.

Trained on a database of more than 1.17 million image-text pairs, CONCH excelled in tasks such as identifying rare diseases and delineating cancerous and benign regions within a tissue sample to determine how much a tumor has grown and how far it has spread.

Because CONCH is trained on text, pathologists can interact with the model and ask it to search tissue images for features of interest. In a comprehensive evaluation across 14 tasks, CONCH outperformed standard models.
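The searching described above rests on a common mechanism in vision-language models: images and text are embedded into a shared space, and a text query retrieves the image regions whose embeddings lie closest to it. A toy sketch, not CONCH itself, with hand-made vectors standing in for real model outputs:

```python
# Illustrative sketch of text-to-image retrieval in a shared embedding
# space. The vectors are hand-made toys, not outputs of CONCH.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Each row is a toy embedding of one image patch.
image_embeddings = normalize(np.array([
    [0.9, 0.1, 0.0],   # patch 0: mostly "tumor"-like features
    [0.1, 0.8, 0.2],   # patch 1: mostly "benign"-like features
    [0.2, 0.1, 0.9],   # patch 2: mostly "necrosis"-like features
]))

# Text prompts a pathologist might issue, embedded into the same space.
text_prompts = {
    "tumor tissue":    normalize(np.array([1.0, 0.0, 0.1])),
    "benign tissue":   normalize(np.array([0.0, 1.0, 0.1])),
    "necrotic tissue": normalize(np.array([0.1, 0.0, 1.0])),
}

# Retrieval: rank patches by cosine similarity to the query embedding.
query = "tumor tissue"
scores = image_embeddings @ text_prompts[query]
best_patch = int(np.argmax(scores))
print(f"query '{query}' best matches patch {best_patch}")
```

With unit-length embeddings, the dot product is the cosine similarity, so ranking patches by it returns the regions most like the queried feature.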

The research team has made the code for both models publicly available for use by other researchers.

Authorship, funding, disclosures

Co-authors of the UNI paper include Richard J. Chen, Tong Ding, Ming Y. Lu, Drew F. K. Williamson, Guillaume Jaume, Andrew H. Song, Bowen Chen, Andrew Zhang, Daniel Shao, Muhammad Shaban, Mane Williams, Lukas Oldenburg, Luca L. Weishaupt, Judy J. Wang, Anurag Vaidya, Long Phi Le, Georg Gerber, Sharifa Sahai, and Walt Williams.

Co-authors of the CONCH paper include Ming Y. Lu, Bowen Chen, Drew F. K. Williamson, Richard J. Chen, Ivy Liang, Tong Ding, Guillaume Jaume, Igor Odintsov, Long Phi Le, Georg Gerber, Andrew Zhang, and Anil V. Parwani.

This work was supported in part by the Brigham and Women's president's fund, Brigham and Women's and Massachusetts General Hospital Pathology, NIH NIGMS R35GM138216, and the Massachusetts Life Sciences Center.

Richard J. Chen, Ming Y. Lu, Bowen Chen, and Faisal Mahmood are inventors on two filed provisional U.S. patents corresponding to the methodological aspects of this work.

Adapted from a Mass General Brigham news release.
