Seismology is undergoing significant change with the rise of Distributed Acoustic Sensing (DAS), a fast-growing technology that turns existing fiber-optic cables, including those that carry Internet traffic, into ultra-dense seismic networks with meter-scale sensor spacing. DAS provides a scalable and cost-effective way to monitor earthquakes from local to global scales, but it also poses a pressing challenge: the massive volume of data produced outpaces human capacity to analyze it. Manually labeling earthquake signals, for example, is impractical at such scales. This 'labeled data bottleneck' has hindered the use of supervised learning models and prevented DAS from reaching its full potential in earthquake monitoring.
A collaborative team from the University of Montreal, Woods Hole Oceanographic Institution, and UC Berkeley has developed a novel model, DASFormer, that learns to monitor earthquakes from continuous DAS data on its own, effectively serving as an 'artificial seismologist'. Published in Visual Intelligence on July 15, 2025 (DOI: 10.1007/s44267-025-00085-y), the study introduces a self-supervised pretraining framework that can interpret earthquake signals by identifying anomalies, without being told in advance what an earthquake looks like. This represents a transformative advance from a labor-intensive, human-dependent process to one that is automated, intelligent, and scalable.
How does DASFormer learn without labels? It acts as a forecaster, first learning to predict the 'normal' state of the world. The model trains itself on massive, unlabeled DAS datasets, learning the predictable spatiotemporal patterns of background signals such as traffic vibrations and environmental noise. When an earthquake occurs, its P- and S-phases appear as sharp, unpredictable anomalies that defy the model's learned expectations. By flagging these deviations, DASFormer effectively turns earthquake detection into an anomaly detection task. This is made possible by a two-stage, coarse-to-fine framework built upon Swin U-Net and Convolutional U-Net architectures, which simultaneously captures both the high-level context and the fine-grained detail of the DAS data.
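To make the idea concrete, the sketch below (not the published code) illustrates the general pattern of forecasting-based anomaly detection: a small forecaster is trained on unlabeled channel-time DAS blocks to predict the next block, and windows with large prediction error are flagged as candidate events. The TinyForecaster model, the window shapes, and the threshold rule are illustrative assumptions, not DASFormer's actual Swin U-Net/Convolutional U-Net architecture.

```python
# Minimal sketch (not the authors' code) of forecasting-based anomaly detection on DAS data.
# A forecaster is trained on unlabeled background windows; at inference, large
# prediction errors flag candidate earthquake phases. Model and shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyForecaster(nn.Module):
    """Stand-in for a coarse-to-fine forecasting stack: predicts the next
    time block of the channel-time DAS image from the preceding block."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, past):          # past: (B, 1, channels, T)
        return self.net(past)         # prediction of the next (B, 1, channels, T) block

def anomaly_scores(model, windows):
    """windows: (N, 2, channels, T) pairs of consecutive blocks (past, future).
    Returns one prediction-error score per pair; high scores mark candidate events."""
    model.eval()
    with torch.no_grad():
        past, future = windows[:, :1], windows[:, 1:]
        pred = model(past)
        return F.mse_loss(pred, future, reduction="none").mean(dim=(1, 2, 3))

# Self-supervised training loop on unlabeled background data (illustrative):
model = TinyForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.randn(32, 2, 64, 128)            # fake DAS blocks: 64 channels x 128 time samples
for _ in range(5):
    pred = model(data[:, :1])
    loss = F.mse_loss(pred, data[:, 1:])      # learn to predict the "normal" next block
    opt.zero_grad(); loss.backward(); opt.step()

scores = anomaly_scores(model, data)
detections = scores > scores.mean() + 3 * scores.std()   # simple threshold rule on prediction error
```

The design choice mirrors the article's description: no labels are ever used, and an earthquake is declared only when the data deviate sharply from what the forecaster has learned to expect.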
To validate its effectiveness, DASFormer was evaluated on a real-world DAS dataset from Ridgecrest, California, and benchmarked against 22 state-of-the-art forecasting and anomaly detection models. DASFormer achieved the highest performance across all evaluation metrics, with a peak ROC-AUC of 0.906 and an F1 score of 0.565, demonstrating its clear superiority.
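For readers unfamiliar with the reported metrics, the snippet below shows, on synthetic scores, how ROC-AUC (threshold-free ranking quality) and F1 (precision-recall balance at a chosen threshold) are typically computed from per-window anomaly scores and ground-truth labels. It uses standard scikit-learn calls and made-up data; it is not the paper's evaluation code.

```python
# Illustrative only: scoring detection quality from per-window anomaly scores
# and ground-truth labels with standard scikit-learn metrics.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)                    # 1 = window contains an earthquake phase
scores = labels * rng.normal(2.0, 1.0, 1000) + rng.normal(0.0, 1.0, 1000)

auc = roc_auc_score(labels, scores)                       # threshold-free ranking quality
# Scan a range of thresholds and keep the best F1, as is common practice:
thresholds = np.quantile(scores, np.linspace(0.05, 0.95, 19))
best_f1 = max(f1_score(labels, scores > t) for t in thresholds)
print(f"ROC-AUC={auc:.3f}, best F1 over scanned thresholds={best_f1:.3f}")
```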
"Rather than being limited by the time-consuming process of human annotation, DASFormer represents a seismic shift in how we approach earthquake monitoring with DAS", said Bang Liu, the team leader of the study. "We now have a scalable and powerful tool that can keep pace with the flood of DAS data, paving the way for new possibilities in earthquake science", added by Zhichao Shen, one of the corresponding authors.
The potential applications of this study are wide-ranging. The model has shown an ability to generalize across distinct environments, such as seafloor cables, highlighting its promise for use in logistically challenging settings. This versatility suggests that DASFormer could serve as a plug-and-play tool for a variety of global seismic monitoring tasks. The study also demonstrates the model's potential to be fine-tuned for downstream tasks such as earthquake early warning. Ultimately, the goal is to leverage this self-supervised approach to build a foundation model for seismic intelligence: a powerful system capable of learning from vast unlabeled datasets to deliver automated, accurate, and scalable monitoring. Such advances could significantly enhance public safety and our understanding of earthquake physics.
Funding information
This work was supported by the Canada CIFAR AI Chair Program and the Canada NSERC Discovery Grant (RGPIN-2021-03115).
About the Authors
Dr. Bang Liu is an Associate Professor in the Department of Computer Science and Operations Research (DIRO) at the University of Montreal (UdeM). He is a member of the RALI laboratory (Applied Research in Computational Linguistics) of DIRO, a member of Institut Courtois of UdeM, an associate member of Mila – Quebec Artificial Intelligence Institute, and a Canada CIFAR AI (CCAI) Chair. His research interests primarily lie in the areas of natural language processing, multimodal & embodied learning, theory and techniques for AGI (e.g., understanding and improving large language models), and AI for science (e.g., health, materials science, XR).
Dr. Zhichao Shen is a seismologist and Postdoctoral Investigator at the Department of Geology and Geophysics, Woods Hole Oceanographic Institution. His research interests focus on seismic applications of Distributed Acoustic Sensing (DAS) both on land and on the seafloor.
About Visual Intelligence
Visual Intelligence is an international, peer-reviewed, open-access journal devoted to the theory and practice of visual intelligence. This journal is the official publication of the China Society of Image and Graphics (CSIG), with Article Processing Charges fully covered by the Society. It focuses on the foundations of visual computing, the methodologies employed in the field, and the applications of visual intelligence, while particularly encouraging submissions that address rapidly advancing areas of visual intelligence research.