High-Performance Computing Hurdles Threaten US Innovation

High-performance computing, or HPC for short, might sound like something only scientists use in secret labs, but it's actually one of the most important technologies in the world today. From predicting the weather to finding new medicines and even training artificial intelligence, high-performance computing systems help solve problems that are too hard or too big for regular computers.

Author: Jack Dongarra, Emeritus Professor of Computer Science, University of Tennessee

This technology has helped make huge discoveries in science and engineering over the past 40 years. But now, high-performance computing is at a turning point, and the choices the government, researchers and the technology industry make today could affect the future of innovation, national security and global leadership.

High-performance computing systems are essentially extremely powerful computers made up of thousands or even millions of processors working on a problem at the same time. They also use advanced memory and storage systems to move and save huge amounts of data quickly.

With all this power, high-performance computing systems can run extremely detailed simulations and calculations. For example, they can simulate how a new drug interacts with the human body, or how a hurricane might move across the ocean. They're also used in fields such as automotive design, energy production and space exploration.

Lately, high-performance computing has become even more important because of artificial intelligence. AI models, especially the ones used for things such as voice recognition and self-driving cars, require enormous amounts of computing power to train. High-performance computing systems are well suited for this job. As a result, AI and high-performance computing are now working closely together, pushing each other forward.

I'm a computer scientist with a long career working in high-performance computing. I've observed that these systems are under more pressure than ever, with growing demands for speed, for handling ever-larger volumes of data and for keeping energy use in check. At the same time, I see that high-performance computing faces some serious technical problems.

Technical challenges

One big challenge for high-performance computing is the gap between how fast processors can calculate and how quickly memory systems can feed them data. Imagine having a superfast car but being stuck in traffic - it doesn't help to have speed if the road can't handle it. In the same way, high-performance computing processors often sit idle because memory systems can't deliver data quickly enough. This makes the whole system less efficient.

Another problem is energy use. Today's supercomputers use a huge amount of electricity, sometimes as much as a small town. That's expensive and not very good for the environment. In the past, as computer parts got smaller, they also used less power. But that trend, called Dennard scaling, stopped in the mid-2000s. Now, making computers more powerful usually means they use more energy too. To fix this, researchers are looking for new ways to design both the hardware and the software of high-performance computing systems.

There's also a problem with the kinds of chips being made. The chip industry is mainly focused on AI, which works fine with lower-precision math such as 16-bit or 8-bit numbers. But many scientific applications still need 64-bit precision to be accurate. The more bits a number format has, the more significant digits it can represent, and hence the more precise the arithmetic. If chip companies stop making the parts that scientists need, it could become harder to do important research.

One solution might be to build custom chips for high-performance computing, but that's expensive and complicated. Still, researchers are exploring new designs, including chiplets - small chips that can be combined like Lego bricks - to make high-precision processors more affordable.

A global race

Globally, many countries are investing heavily in high-performance computing. Europe has the EuroHPC program, which is building supercomputers in places such as Finland and Italy. Their goal is to reduce dependence on foreign technology and take the lead in areas such as climate modeling and personalized medicine. Japan built the Fugaku supercomputer, which supports both academic research and industrial work. China has also made major advances, using homegrown technology to build some of the world's fastest computers. All of these countries' governments understand that high-performance computing is key to their national security, economic strength and scientific leadership.

The United States, which has been a leader in high-performance computing for decades, recently completed the Department of Energy's Exascale Computing Project. This project created computers that can perform a billion billion (10¹⁸) operations per second - the threshold known as exascale. That's an incredible achievement. But even with that success, the U.S. still doesn't have a clear, long-term plan for what comes next. Other countries are moving quickly, and without a national strategy, the U.S. risks falling behind.

I believe that a U.S. national strategy should include funding for new machines and for training people to use them. It should also include partnerships with universities, national labs and private companies. Most importantly, the plan should focus not just on hardware but also on the software and algorithms that make high-performance computing useful.

Hopeful signs

One exciting area for the future is quantum computing. This is a completely new way of doing computation based on the laws of physics at the atomic level. Quantum computers could someday solve problems that are impossible for regular computers. But they are still in the early stages and are likely to complement rather than replace traditional high-performance computing systems. That's why it's important to keep investing in both kinds of computing.

The good news is that some steps have already been taken. The CHIPS and Science Act, passed in 2022, provides funding to expand chip manufacturing in the U.S. It also created an office to help turn scientific research into real-world products. The task force Vision for American Science and Technology, launched on Feb. 25, 2025, and led by American Association for the Advancement of Science CEO Sudip Parikh, aims to marshal nonprofits, academia and industry to help guide the government's decisions. Private companies are also spending billions of dollars on data centers and AI infrastructure.

All of these are positive signs, but they don't fully solve the problem of how to support high-performance computing in the long run. Beyond short-term funding and infrastructure investments, sustaining the field means:

  • Long-term federal investment in high-performance computing R&D, including advanced hardware, software and energy-efficient architectures.
  • Procurement and deployment of leadership-class computing systems at national labs and universities.
  • Workforce development, including training in parallel programming, numerical methods and AI-HPC integration.
  • Hardware road map alignment, ensuring commercial chip development remains compatible with the needs of scientific and engineering applications.
  • Sustainable funding models that prevent boom-and-bust cycles tied to one-off milestones or geopolitical urgency.
  • Public-private collaboration to bridge gaps between academic research, industry innovation and national security needs.

High-performance computing is more than just fast computers. It's the foundation of scientific discovery, economic growth and national security. With other countries pushing forward, the U.S. is under pressure to come up with a clear, coordinated plan. That means investing in new hardware, developing smarter software, training a skilled workforce and building partnerships between government, industry and academia. If the U.S. does that, the country can make sure high-performance computing continues to power innovation for decades to come.

The Conversation

Jack Dongarra receives funding from the NSF and the DOE.
