Lawrence Livermore National Laboratory (LLNL) capped a milestone week at the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC25) with renewed leadership in supercomputing on the Top500, a Gordon Bell Prize win for real-time tsunami forecasting and a slate of sessions that underscored the Lab's expanding role at the intersection of high-performance computing (HPC) and AI.
With more than 16,000 attendees, nearly 560 exhibitors and one of the largest technical programs in conference history, SC25 highlighted a community accelerating toward an era defined by exascale systems, AI-enhanced science and increasingly complex computational workflows. LLNL's presence, which spanned dozens of tutorials, workshops, paper presentations and birds-of-a-feather meetings, was felt across virtually every major event of the week. Under the leadership of LLNL's Principal Deputy Associate Director for Computing Lori Diachin, who served as the conference's general chair, SC25 was a reminder of LLNL's outsized impact on the supercomputing landscape and reflected the Lab's decades-long influence on HPC innovation and community direction.
El Capitan retains No. 1 in the world
At the Nov. 17 press conference opening SC25, Diachin welcomed the global HPC community to St. Louis and highlighted the scale and strength of this year's technical program, pointing to the record number of exhibitors, the second-highest registration total in conference history and more than 600 paper submissions as evidence of a vibrant, expanding research ecosystem.
The press conference featured the release of the November 2025 Top500 list, where LLNL's flagship supercomputer El Capitan again claimed the top spot, reaffirming its status as the fastest system ever verified by reaching 1.809 exaFLOPs (quintillion calculations per second) on the High Performance Linpack (HPL) benchmark of computing performance. It also remained atop the High Performance Conjugate Gradients (HPCG) and HPL-MxP mixed-precision benchmarks, earning that trifecta for the second consecutive list. El Capitan's overall benchmark performance reflects a system that continues to scale and can excel at traditional simulations, memory-intensive workloads and AI-driven computations with an energy efficiency not previously seen at exascale.
Funded by the National Nuclear Security Administration's Advanced Simulation and Computing program, El Capitan's world-class capability is built on a partnership among LLNL, HPE and AMD, paired with an extensive Lab-developed software ecosystem for system management, numerical libraries, scheduling, performance engineering and large-scale AI workflows.

Gordon Bell Prize win highlights strong year for Lab scientific achievements
Major science accomplishments earned El Capitan and the Lab recognition in the conference's most prestigious competition. LLNL, along with collaborators at the University of Texas at Austin's Oden Institute and Scripps Institution of Oceanography, won the Association for Computing Machinery Gordon Bell Prize for a breakthrough in real-time, physics-based tsunami forecasting on El Capitan. With the help of the Lab-developed MFEM finite element library, the team demonstrated that an exascale digital twin can transform deep-ocean pressure data into localized tsunami predictions in under 0.2 seconds - roughly 10 billion times faster than conventional modeling methods - enabling rapid warning and dramatically reducing false alarms.
Widely considered the most prestigious award in supercomputing, the Gordon Bell Prize recognizes impactful scientific and computational milestones - in this case, the team's unprecedented combination of numerical modeling, GPU-accelerated performance and real-time inference at exascale.
"We're really excited to win the Gordon Bell Prize," said Tzanio Kolev, LLNL computational mathematician and co-author of the study. "This has been a really amazing project and an amazing collaboration for us, and we can't wait to use the power of El Capitan and finite-element algorithms to bring similar simulations and similar benefits to more applications."
The collaborative work on El Capitan began with an impromptu conversation at last year's SC conference, something team members and LLNL's Diachin pointed to as emblematic of what the event makes possible. The stunning visuals of tsunami wave propagation that the team created on El Capitan were also featured in SC25's "Art of HPC" exhibition, a program Diachin - who paints as a hobby - was particularly passionate about. The exhibit brought together visual works inspired by scientific computing, simulation and data, offering attendees a creative perspective on computational research.
In addition to powering the Gordon Bell Prize-winning tsunami effort, El Capitan ran a second Gordon Bell finalist project: a record-scale rocket-exhaust simulation that pushed a fully coupled fluid-chemistry model to unprecedented resolution on exascale-class hardware. The effort demonstrated how next-generation architectures can capture complex exhaust-plume physics with far greater fidelity than previously possible.
LLNL also earned multiple honors in the annual HPCwire Readers' and Editors' Choice Awards and a 2025 Hyperion Research HPC Innovation Excellence Award, recognizing the Lab's leadership on El Capitan, software ecosystems and real-world scientific impact with exascale-class hardware.
Finally, an LLNL team led by research scientist Harshitha Menon was an SC25 Best Poster Award finalist for their work on interpretability of large language models (LLMs) for HPC code, focusing on how these models generate, optimize and reason about parallel scientific software.
Menon's team is investigating whether LLMs understand foundational HPC constructs and why models produce the outputs they do. The goal, she said, is to improve trust, reliability and verifiability when using AI tools for high-stakes HPC coding tasks.
"LLMs are incredible at generating code, but we don't really understand how they are doing it and whether they understand the constructs such as parallelism, concurrency and correctness," Menon said. "In our mission space, HPC codes are making decisions that impact lives, so it's extremely important that we focus on those [trust and reliability] aspects."

SC showcases advances in AI-accelerated scientific computing
The surge in AI-driven scientific computing and its ability to accelerate - not replace - HPC discovery was a ubiquitous topic throughout SC, and LLNL researchers played a significant role in showcasing next-generation AI models designed for frontier HPC systems.
During his talk at the Department of Energy (DOE) booth on Nov. 18, LLNL research scientist Nikoli Dryden presented his team's work on ElMerFold, which builds on recent record-setting work showing that exascale-class hardware can drive the fastest protein-folding workflow ever recorded, generating more than 2,400 structures per second and shrinking an eight-day computation to about 11 hours on El Capitan.
Dryden explained that the performance comes from LLNL's optimized AI workflow, which combines unified CPU-GPU memory on El Capitan's AMD Accelerated Processing Units (APUs), node-local storage via El Capitan's HPE-built Rabbit modules and a machine learning system that enabled the team to produce the large-scale distillation data underpinning the newly released OpenFold3 model. Dryden highlighted how these advances position ElMerFold for frontier-scale science, noting, "We want to make AI-for-science models that fully exploit these systems."
Dryden also described how the model's precision, architecture optimizations and scalable training strategies are deployable on exascale-class systems, enabling new frontiers in molecular modeling and predictive biology.
DOE outlines national direction for HPC, AI and quantum
DOE Under Secretary for Science Dario Gil spoke at the DOE booth on Nov. 19 in one of the most-anticipated and well-attended sessions on the exhibit hall floor, foreshadowing goals and themes that would later align with the Genesis Mission announced by DOE on Nov. 24. In discussing the need for a nationally coordinated effort in AI, Gil captured the urgency driving the field.
"It is impossible to achieve our national goals without leveraging the entire strength of the ecosystem that has been built; but that requires a lot of creativity on how we partner, on how we co-invest, and on how we design shared agendas," Gil explained. "There is a need for a unified effort in HPC, AI and quantum computing and to bring all of DOE, industry and academia together."
Gil also highlighted DOE's unique advantages, including decades of scientific data, experimental facilities and mission-driven computing, and the role of national labs as AI-for-science scales dramatically.
LLNL flexes HPC muscle across SC25
Despite challenges and last-minute pivots due to the 43-day federal government shutdown, LLNL's presence permeated SC's technical program with tutorials on large-scale workflows, AI-accelerated science, exascale performance and system software, as well as birds-of-a-feather sessions on HPC-AI convergence, programming models and simulation pipelines. Other sessions highlighted LLNL's contributions across agentic AI, fusion science, GPU acceleration, numerical methods, scientific machine learning and open-source software. On the exhibit floor, the DOE booth featured demonstrations of LLNL's evolving agentic AI for fusion research and the Lab-developed Flux workload manager.
LLNL employees also helped guide the next generation of HPC researchers through the Students@SC program, which included key contributions from LLNL Computing Workforce Manager Marisol Gamboa, Workforce Administrator Jamie Lewis and Organizational Development Consultant Andrekka "AJ" Lanier. They emphasized how students can translate classroom experience into real-world impact, and how strong foundational skills and problem-solving abilities matter as much as technical specialization. Students received practical resume and interview advice, particularly on communicating technical ideas clearly and walking through reasoning steps - skills needed for the collaborative, interdisciplinary nature of HPC work at the national laboratories.
"Don't measure yourself by a failure, measure yourself by your recovery rate," Gamboa counseled attendees during a Students@SC panel.
The Lab also maintained its influential representation in SC conference leadership and major SC committees, led by Diachin, who spent more than two years shaping both the event's identity and its execution. Diachin oversaw SC25's theme - "HPC Ignites" - and its visual design, expanded outreach programs, supported the growth of the Art of HPC exhibit and steered the event through a year marked by the government shutdown, visa challenges and travel disruptions, delivering a complex, high-traffic conference with professionalism and adaptability.
"I am proud of my team of volunteers who just knocked it out of the park," Diachin said. "I'm proud of the new things we were able to do, and that we were able to pivot so effectively when problems came up."
Diachin emphasized that SC is not just about papers and systems, but about building enduring connections. Her broader message to the community: SC remains a catalyst for HPC, where conversations spark collaborations that become scientific breakthroughs.
"I'm really hoping attendees came away excited, that they learned something new and that they got to meet at least one new person and grow their professional network," Diachin said.
