ORNL Clinches Five Gordon Bell Prize Finalist Spots

Five simulation projects run on the ORNL Frontier supercomputer have been named finalists for the Gordon Bell Prize. Credit: ORNL, U.S. Dept. of Energy

Five breakthrough simulation projects conducted on the Frontier supercomputer at Oak Ridge National Laboratory have been named finalists for the Association for Computing Machinery's Gordon Bell Prize. Four of the projects are in the running for the main prize - annually awarded to recognize outstanding achievements in high-performance computing (HPC) - and one project is contending for a special prize.

This year's prizes will be presented at the International Conference for High Performance Computing, Networking, Storage, and Analysis, or SC25, which takes place Nov. 16-21 in St. Louis, Missouri.

Main prize finalists:

Researchers used ORNL's Frontier to model coupling between electrons and phonons in settings including this system of lithium hydride atoms. Results could pave the way for the next generation of quantum computing, sensing and communications technologies through more efficient control of quantum materials and their properties. Credit: Chih-En Hsu, Tamkang University

BerkeleyGW: Investigating a fundamental concept of quantum physics

Researchers from Lawrence Berkeley National Laboratory and the University of Southern California used Frontier to shed new light on the quantum many-body problem, one of the fundamental concepts of quantum physics. With their first-principles code, BerkeleyGW, the team was able to simulate the coupling of correlated electrons and phonons and their effects on quantum materials.

Frontier's exascale power, capable of more than 1.4 quintillion calculations per second, enabled the team to model the coupling between electrons and phonons within a system of up to 2,742 silicon atoms and a system of up to 17,574 atoms of lithium hydride. The simulations ultimately reached speeds of more than an exaflop, or a quintillion calculations per second - some of the fastest performed on the machine - modeling more than half a million interactions at double precision, a computing standard that uses 64 bits to represent each number for maximum accuracy.
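
As a rough illustration of the difference that precision makes - this sketch is hypothetical and not drawn from the BerkeleyGW code - the short CUDA program below sums ten million small numbers two ways. The 32-bit single-precision total visibly drifts, while the 64-bit double-precision total keeps the digits that matter.

```cuda
// Illustrative sketch only (not BerkeleyGW code): why 64-bit "double precision" matters.
// Summing ten million small terms in 32-bit floats drifts; 64-bit doubles hold the digits.
#include <cstdio>

__global__ void accumulate(const float* x, int n, float* sum32, double* sum64) {
    // One thread for clarity; a production HPC code would use a parallel reduction.
    float s32 = 0.0f;
    double s64 = 0.0;
    for (int i = 0; i < n; ++i) {
        s32 += x[i];                       // 32 bits: roughly 7 significant decimal digits
        s64 += static_cast<double>(x[i]);  // 64 bits: roughly 16 significant decimal digits
    }
    *sum32 = s32;
    *sum64 = s64;
}

int main() {
    const int n = 10000000;
    float* x;      cudaMallocManaged(&x, n * sizeof(float));
    float* sum32;  cudaMallocManaged(&sum32, sizeof(float));
    double* sum64; cudaMallocManaged(&sum64, sizeof(double));
    for (int i = 0; i < n; ++i) x[i] = 1.0e-4f;   // ten million tiny contributions

    accumulate<<<1, 1>>>(x, n, sum32, sum64);
    cudaDeviceSynchronize();

    // The true total is about 1,000; the float sum drifts noticeably, the double sum does not.
    printf("float  sum: %.6f\n", *sum32);
    printf("double sum: %.6f\n", *sum64);
    cudaFree(x); cudaFree(sum32); cudaFree(sum64);
    return 0;
}
```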

The methods developed for BerkeleyGW by the team, which includes ORNL Distinguished Research Scientist Trey White, open new possibilities for developing the next generation of quantum computing, as well as sensing and communications technologies, through more efficient control of quantum materials and their properties.

This HACC simulation shows the universe about 10 billion years after the Big Bang. Right: Zoomed-in view showing dark matter, stars, and gas. Center: Gas nears 100 million Kelvin (about 180 million degrees Fahrenheit). Credit: Argonne National Laboratory, U.S. Dept. of Energy.

HACC: Calculating the largest-ever simulation of the universe

One year ago, researchers from Argonne National Laboratory completed the largest astrophysical simulation of the universe ever conducted, using the Hardware/Hybrid Accelerated Cosmology Code, or HACC, which was significantly updated by the Exascale Computing Project for exascale-class supercomputers.

The calculations they made on Frontier set a new benchmark for cosmological hydrodynamics simulations and provide a new foundation for simulating the physics of atomic matter and dark matter simultaneously. The simulation size corresponds to surveys undertaken by large telescope observatories, a feat that had not been possible at this scale until now.

The simulations tracked 4 trillion particles across 15 billion light-years of space, delivering a 15-fold leap in capability over the previous state-of-the-art simulations. The team, which includes ORNL Applied Mathematician Antigoni Georgiadou, pushed Frontier close to its limits by harnessing nearly 9,000 of its 9,402 nodes.

Georgia Tech researchers used their open-source Multicomponent Flow Code on Frontier to examine rocket designs that feature clusters of engines. Predicting how all those engines' exhaust plumes may interact upon launch will help rocket designers avoid mishaps. Credit: Spencer Bryngelson, Georgia Institute of Technology

MFC/IGR: Unleashing the largest-ever CFD simulation of high-speed compressible fluid flow

Using Frontier, a team of researchers from the Georgia Institute of Technology and the Courant Institute of Mathematical Sciences at New York University conducted the largest-ever computational fluid dynamics (CFD) simulation of high-speed compressible fluid flows - with a resolution of over 200 trillion grid points.

The team simulated a 33-engine configuration, like the one used by the SpaceX Starship Super Heavy Booster, reflecting the aerospace industry's move toward first-stage multi-engine layouts in rocket design. The flow from the individual engines was modeled at 10 times the speed of sound, a regime in which gases behave violently and unpredictably due to extreme pressure and temperature shifts.

In addition to using their own open-source Multicomponent Flow Code, the researchers applied a new computational technique called information geometric regularization. They combined that technique with a unified CPU-GPU memory approach - optimizing memory usage between traditional CPUs and GPUs - to attain new levels of performance in CFD. The team was assisted in their computational efforts by ORNL scientific liaison Reuben Budiardja, group leader of Advanced Computing for Nuclear, Particle and Astrophysics.
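
The unified-memory idea can be sketched in a few lines. The hypothetical CUDA example below is not the MFC/IGR implementation - the team's production solver runs on Frontier's AMD GPUs - but it shows the core concept: one managed allocation that both the CPU and the GPU can touch, with the runtime migrating data on demand instead of the programmer copying buffers by hand.

```cuda
// Hypothetical sketch of the unified CPU-GPU memory idea, not the MFC/IGR implementation.
// A single managed allocation is addressable from both host and device, so the runtime
// migrates pages on demand rather than requiring explicit copies between memories.
#include <cstdio>

__global__ void advance(double* field, int n, double dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) field[i] += dt * field[i];   // stand-in for a real flow-solver update
}

int main() {
    const int n = 1 << 20;
    double* field = nullptr;

    cudaMallocManaged(&field, n * sizeof(double));   // one pointer, usable on CPU and GPU

    for (int i = 0; i < n; ++i) field[i] = 1.0;      // CPU initializes the field in place

    advance<<<(n + 255) / 256, 256>>>(field, n, 1.0e-3);  // GPU updates the same buffer
    cudaDeviceSynchronize();

    printf("field[0] after one step: %f\n", field[0]);    // CPU reads the result directly
    cudaFree(field);
    return 0;
}
```

Among other things, this style of allocation can let a solver work with arrays larger than GPU memory alone would hold, which is one way a unified-memory approach can help push simulations toward higher resolutions.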

Nanoribbon field-effect transistors are emerging as the latest silicon-based technology that can operate at ultrascaled dimensions while providing the necessary computing power for modern electronic devices, from cell phones to GPUs. Credit: Nicolas Vertsch and Alexander Maeder from ETH Zurich.

QuaTrEx: Exploring the nanoscale world of next-generation transistors

Researchers from the Swiss Federal Institute of Technology in Zurich, Switzerland (ETH Zurich) developed a new framework to capture the quantum mechanical effects inherent in the designs of nanoscale electronic devices, such as next-generation transistors. The team's open-source code, Quantum Transport Simulations at the Exascale and Beyond (QuaTrEx), demonstrated a record-breaking level of performance on Frontier.

The ETH Zurich team used QuaTrEx on Frontier to simulate interactions in nanoribbon field-effect transistors (NRFETs) made of 42,240 atoms. It was the first device simulation based on a physics framework that accounts for such quantum (atomic level) effects in realistic structures, and this work can enable semiconductor engineers to make more reliable design predictions for nanoscale electronic components such as NRFETs.

QuaTrEx produced transistor simulations roughly an order of magnitude larger than previous quantum transport simulations of interacting electrons in materials. Using 37,600 AMD GPUs, QuaTrEx sustained full-precision (64-bit) performance while simulating a system of 42,240 atoms at 1.15 exaflops - very close to Frontier's June 2025 High Performance Linpack score of 1.353 exaflops.

The second version of the Oak Ridge Base Foundation Model for Earth System Predictability, or ORBIT, delivers weather forecasts in unprecedented detail with help from Frontier, the flagship supercomputer at ORNL. Credit: Jason Smith/ORNL, U.S. Dept. of Energy

Special prize finalist:

ORBIT-2: Improving the world's largest AI model for weather prediction

The second version of the Oak Ridge Base Foundation Model for Earth System Predictability (ORBIT-2) dials in up-to-the-minute forecasts down to the front-doorstep level of precision. Training on Frontier pushed the artificial intelligence model past longtime computational limits to achieve unparalleled speed and accuracy in its predictions.

ORBIT-2 builds on the team's initial ORBIT model, a previous nominee for the Gordon Bell Special Prize. The first model focused on long-range weather forecasting and generated highly accurate predictions up to 30 days in advance. ORBIT-2 refines those forecasts to precise locations and likely consequences. The model continually evaluates its predictions for accuracy and rates the confidence level of each forecast to help control for errors. Results this specific once required a large supercomputer running for days; now they can be obtained in milliseconds at close to 99 percent accuracy.

The ORBIT-2 team also includes Xiao Wang, Prasanna Balaprakash, Dan Lu, Jong-Youl Choi, Takuya Kurihana, Isaac Lyngaas, Hong-Jun Yoon, David Pugmire, Ming Fan, Nasik Nafi, Aristeidis Tsaris, Maliha Hossain, Dali Wang, Peter Thornton, and Moetasim Ashfaq of ORNL; Ashwin M. Aji of AMD; Mohamed Wahib of the Riken Center for Computational Science in Kobe, Japan; and Xi Xiao of the University of Alabama at Birmingham.

Frontier is managed by the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at ORNL.

UT-Battelle manages ORNL for DOE's Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE's Office of Science is working to address some of the most pressing challenges of our time. For more information, visit energy.gov/science. - OLCF Staff
