Breast cancer remains a leading cause of mortality among women worldwide, and accurate tumor segmentation in DCE-MRI is critical for diagnosis, treatment planning, and therapeutic monitoring. However, existing methods struggle with tumor heterogeneity—variable sizes, shapes, and textures—and the dynamic contrast enhancement patterns that provide key functional insights into tumor angiogenesis. Manual segmentation is labor-intensive, time-consuming, and prone to inter-observer variability, highlighting the need for automated, precise solutions.
RST2G addresses these challenges through a synergistic integration of residual learning, hybrid feature extraction, and spatiotemporal graph fusion, designed specifically for DCE-MRI's unique characteristics: (1) Multi-modal input integration; (2) Hybrid feature extraction (CFormerEncoder); (3) Residual-Guided Multi-Scale Refinement (MSR); (4) Spatiotemporal graph enhancement; (5) Hybrid loss function.
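The paper does not spell out the hybrid loss here, but segmentation models commonly combine a region-level Dice term with a voxel-level cross-entropy term. The sketch below illustrates that general idea only; the function names, the `alpha` weighting, and both loss terms are assumptions, not the authors' actual formulation.

```python
import math

# Hedged sketch of a generic hybrid segmentation loss: a soft Dice term
# (region overlap) plus binary cross-entropy (per-voxel fit). The exact
# RST2G loss is not specified in this summary; everything below is an
# illustrative assumption.

def soft_dice_loss(probs, truth, eps=1e-6):
    """1 - soft Dice between predicted probabilities and a binary mask."""
    inter = sum(p * t for p, t in zip(probs, truth))
    total = sum(probs) + sum(truth)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

def bce_loss(probs, truth, eps=1e-12):
    """Mean binary cross-entropy over voxels."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(probs, truth)) / len(probs)

def hybrid_loss(probs, truth, alpha=0.5):
    """Weighted sum of the two terms; alpha is a tunable assumption."""
    return alpha * soft_dice_loss(probs, truth) + (1 - alpha) * bce_loss(probs, truth)

# Toy flattened prediction (probabilities) vs. ground-truth mask.
probs = [0.9, 0.8, 0.2, 0.1]
truth = [1, 1, 0, 0]
print(round(hybrid_loss(probs, truth), 4))
```

Combining the two terms is a standard way to balance overlap accuracy (Dice) against per-voxel calibration (cross-entropy), which matches the paper's stated goal of handling heterogeneous tumor sizes and shapes.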
The researchers validated RST2G on two publicly available DCE-MRI datasets: (1) Breast-MRI-NACT-Pilot: 64 patients undergoing neoadjuvant chemotherapy, with pre-contrast and two post-contrast scans; (2) TCGA-BRCA: 139 patients with pre-contrast and 3–5 post-contrast scans.
Key results include: (1) Achieved a Dice Similarity Coefficient (DSC) of 61.8% on Breast-MRI-NACT-Pilot and 80.1% on TCGA-BRCA; (2) Minimized relative volume difference (RVD) to 1.8 voxels (Breast-MRI-NACT-Pilot), ensuring reliable tumor volume quantification; (3) Maintained strong performance on an external test set from a different medical center, demonstrating adaptability to real-world clinical variability; (4) Consistently captured complex tumor morphologies and boundaries, outperforming traditional U-Net variants, 3D volumetric models, and other 4D spatiotemporal methods in visual assessments.
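For readers unfamiliar with the reported metrics, the toy example below shows how the Dice Similarity Coefficient and a voxel-count volume difference are conventionally computed on binary masks. This is a minimal sketch of the standard definitions, not the paper's evaluation code, and the exact RVD protocol used in the study may differ.

```python
# Conventional definitions of the two reported metrics, applied to toy
# flattened binary masks (1 = tumor voxel, 0 = background). Illustrative
# only; the paper's evaluation pipeline is not reproduced here.

def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

def volume_difference(pred, truth):
    """Absolute volume difference in voxels: | |A| - |B| |."""
    return abs(sum(pred) - sum(truth))

pred  = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 0, 1, 1, 0, 0]

print(dice_coefficient(pred, truth))   # 0.75
print(volume_difference(pred, truth))  # 0
```

A DSC of 1.0 means perfect overlap, so the reported 80.1% on TCGA-BRCA indicates substantial agreement with expert annotations, while a small volume difference supports reliable tumor volume quantification.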
RST2G's advanced performance addresses unmet clinical needs: (1) Automates tumor segmentation, reducing radiologists' workload and minimizing inter-observer variability; (2) Enables accurate tumor boundary delineation and volume measurement, supporting treatment planning (e.g., neoadjuvant chemotherapy response assessment) and longitudinal monitoring; (3) Grad-CAM visualizations confirm the model focuses on clinically relevant tumor regions, enhancing trust in its outputs.
The framework is also computationally efficient, processing a single MRI volume in roughly 30 seconds with 10 GB of GPU memory, which supports near-real-time clinical deployment. Future research will focus on: (1) Validating on multi-center datasets to enhance generalization across different imaging protocols and scanner models; (2) Adapting to irregular temporal sampling in DCE-MRI (e.g., variable post-contrast scan timing); (3) Integrating with clinical workflows to support personalized breast cancer treatment.
RST2G represents a significant advancement in automated breast tumor segmentation, leveraging residual learning, spatiotemporal modeling, and graph fusion to overcome the limitations of existing methods. Its superior accuracy, robustness, and clinical interpretability position it as a valuable tool for improving breast cancer diagnosis and treatment. "By explicitly modeling the spatiotemporal dynamics of DCE-MRI, RST2G provides a more comprehensive representation of breast tumors," noted the study's corresponding authors. "This technology has the potential to transform clinical practice by enabling precise, efficient, and consistent tumor segmentation at scale."
Authors of the paper include Shaoli Xie, Lulu Xu, Chenyi Lei, Jinxiang Wang, Jason Wang, Zhibin Wang, Yiran Sun, Danyi Li, Fangfang Li, Rubing Lin, Hongwei Yang, Yang Xiao, Tianxu Lv, Yixuan Huang, Lingmi Hou, Junyan Li, and Maoshan Chen.
This work was supported by the National Natural Science Foundation of China (82503899), the Science and Technology Project of Sichuan Provincial Health Commission (Clinical Research Special Project, grant no. 23LCYJ003), the Clinical Research Grant of Wu Jieping Medical Foundation (grant no. 320.6750.2022-19-20), the Foundation of State Key Laboratory of Ultrasound in Medicine and Engineering (grant no. 2021KFKT015), the Shenzhen Science and Technology Program (grant nos. RCBS20231211090733052 and JCYJ-20240813150221028), the Guangdong Basic and Applied Basic Research Foundation (grant nos. 2023A1515111044 and 2025-A1515012665), the Research Start-up Fund of Post-doctoral of SAHSYSU (grant no. ZSQYRSFPD0067), the Scientific Research Cooperation Project of North Sichuan Medical College (CBY25-ZXB04), and the Outstanding Youth Fund Project of Sichuan Provincial Natural Science Foundation (24NSFJQ0271).
The paper, "RST2G: Residual-Guided Spatiotemporal Transformer Graph Fusion Enhancement for Breast Cancer Segmentation in DCE-MRI," was published in the journal Cyborg and Bionic Systems on Mar. 23, 2026 (DOI: 10.34133/cbsystems.0502).