Today's technologies depend increasingly on computers and artificial intelligence - capabilities largely powered by data centers, which have become essential U.S. infrastructure. Over the past two decades, data centers have proliferated rapidly, driving up demand for electricity to power high-performance computing chips and for the water and energy needed to keep them cool.
Lawrence Berkeley National Laboratory (Berkeley Lab) has been at the forefront of research on this evolution, conducting pioneering analysis and partnering with industry - from top AI companies to utilities and grid operators - to help ensure the reliable, around-the-clock supply of energy and cooling that modern data centers demand. Researchers are analyzing and quantifying the energy implications of the data center industry's rapid expansion. They are also working with key players in the industry to identify best practices, support load forecasting, and optimize data centers and how they interact with the electric grid.
Here are seven ways Berkeley Lab is helping U.S. data centers run more reliably.
Tracking growth trends
Home to the U.S. Department of Energy's Center of Expertise for Data Center Energy (CoE), Berkeley Lab is identifying the resources that will be needed to fuel this fast-growing sector and helping to ensure stable, reliable infrastructure that runs as efficiently as possible. In December 2024, the Lab's researchers updated the seminal United States Data Center Energy Usage Report, providing a comprehensive picture of water and electricity needs for data centers from 2014 through 2028.
The Data Center Energy Usage Report notes that data center electricity use nearly tripled between 2016 and 2023. By 2028, the report's authors found, data centers could account for as much as 12% of U.S. electricity consumption. Berkeley Lab continues to incorporate industry feedback to ensure the most accurate projections and will provide more frequent updates to keep pace with this rapidly evolving sector.
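For scale, here is a back-of-the-envelope look at what that high-end share would mean in absolute terms. The total U.S. consumption figure below is an assumption for illustration only, not a number from the report:

```python
# Rough context for the 2028 projection.
# Assumption (not from the report): total U.S. electricity
# consumption of roughly 4,000 TWh per year.
US_TOTAL_TWH = 4000

for share in (0.04, 0.08, 0.12):  # up to the report's 12% high-end case
    print(f"{share:.0%} of U.S. consumption ~ {share * US_TOTAL_TWH:,.0f} TWh/year")
```

Under that assumption, the 12% high-end case works out to roughly 480 TWh per year, several times today's data center demand.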

Fueling the AI revolution with built-in cool tech
Berkeley Lab computing and energy research staff have long played critical roles in managing the energy impacts of data centers. They have optimized supercomputing for efficiency, a prerequisite for reducing data center energy consumption, and have partnered with industry to design supercomputer systems housed in the Department of Energy's National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab to save energy and water. Both NERSC's Perlmutter system, installed in 2021, and its next flagship system, Doudna, due in late 2026, use direct-to-chip liquid cooling combined with ambient air cooling to maximize energy efficiency.
Instead of using refrigerant-based systems, the NERSC facility relies on natural cooling from ambient air and cooling towers, which reject data center waste heat directly to the outside environment. A two-year effort reduced non-IT power consumption by 42%, saving over 2 million kWh of electricity and half a million gallons of water annually and cutting utility bills by approximately $200,000 per year. (Non-IT power is the overhead needed to support the high-performance computers, including power used for cooling.) Lessons learned from the project could be applied to high-performance computing centers across the country.
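Overhead like cooling is commonly tracked through power usage effectiveness (PUE), the ratio of total facility power to IT power. A minimal sketch of how a 42% cut in non-IT power moves that metric, using hypothetical loads rather than NERSC's actual figures:

```python
def pue(it_kw: float, non_it_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power."""
    return (it_kw + non_it_kw) / it_kw

# Hypothetical loads for illustration only (not NERSC's actual figures).
it_kw, non_it_kw = 5000.0, 1000.0
print(f"before: PUE = {pue(it_kw, non_it_kw):.2f}")         # 1.20
print(f"after:  PUE = {pue(it_kw, non_it_kw * 0.58):.2f}")  # 42% non-IT cut -> 1.12
```

The closer PUE gets to 1.0, the smaller the share of electricity spent on anything other than computing.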

Packing more compute power on microchips
While efficient computing is critical to data center efficiency, the microchips themselves are key to improving processing speeds and reducing energy consumption. Berkeley Lab has led pioneering research on advanced transistors, paving the way for energy-efficient computer microchips that could perform better and require less energy than conventional silicon chips. In one recent advance, researchers created microcapacitors with ultrahigh energy and power density, potentially transforming on-chip energy storage for electronic devices. Berkeley Lab researchers also demonstrated how a new approach to light-manipulating optoelectronic materials could convert photons directly into information instead of images, potentially reducing the energy a computer currently uses to transmit and analyze images. In another advance, researchers developed an open-source 3D simulation framework that could offer industry a much faster and cheaper path to energy-efficient microchips by modeling the atomistic origins of physical phenomena in electronic materials.

Standardizing cooling strategies for the age of AI
Data centers are the backbone of essential computing services, from securely processing and storing data in the cloud to providing the infrastructure for artificial intelligence. Servers and other critical hardware generate heat while processing this data, and without proper, efficient cooling, this waste heat can compromise a data center's performance. Liquid cooling transfers heat away from components like CPUs and GPUs more efficiently than air cooling, and as data centers adapt to the evolving demands of AI, new liquid cooling technologies with even better efficiencies will be needed to support this growing infrastructure.
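The advantage comes down to basic thermophysics: per unit volume and per degree of temperature rise, water absorbs far more heat than air. A rough comparison from textbook property values (an illustration, not a Berkeley Lab result):

```python
# Volumetric heat capacity (density * specific heat), textbook values
# near room temperature; heat carried scales as rho * cp per unit volume.
rho_air, cp_air = 1.2, 1005.0        # kg/m^3, J/(kg*K)
rho_water, cp_water = 997.0, 4186.0  # kg/m^3, J/(kg*K)

air = rho_air * cp_air        # ~1.2 kJ per m^3 per K
water = rho_water * cp_water  # ~4,170 kJ per m^3 per K
print(f"water carries ~{water / air:,.0f}x more heat per unit volume per kelvin")
```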
Berkeley Lab has partnered with industry to develop specifications for liquid cooling of data centers down to the chip level, and continues to collaborate with industry to address the dramatically greater power and cooling demands of AI-focused systems.
In collaboration with the Energy Efficient High Performance Computing Working Group, led by Lawrence Livermore National Laboratory, Berkeley Lab developed specifications for liquid-cooled server racks and cabinets, facilitating broader adoption of efficient liquid cooling solutions. This included an industry-standard specification for the transfer fluid, covering system materials and operation. The transfer fluid specification was further refined with the Open Compute Project and issued as a guideline.
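The rack-level arithmetic behind such specifications follows from a simple heat balance, Q = rho * flow * cp * dT. In the sketch below, the rack power and allowed coolant temperature rise are assumed values for illustration, not figures from the published specifications:

```python
# Coolant flow needed to remove a rack's heat load: flow = Q / (rho * cp * dT).
# Rack power and temperature rise are assumed for illustration; they are
# not values from the published specifications.
Q = 100_000.0            # rack heat load, W (assumed)
RHO, CP = 997.0, 4186.0  # water density kg/m^3 and specific heat J/(kg*K)
DT = 10.0                # coolant temperature rise across the rack, K (assumed)

flow_m3_s = Q / (RHO * CP * DT)
print(f"required flow ~ {flow_m3_s * 1000:.1f} L/s "
      f"(~ {flow_m3_s * 15850:.0f} US gal/min)")
```

A 100 kW rack at a 10 K rise needs only about 2.4 liters of water per second, flow that would take orders of magnitude more air to match.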
Reducing waste with best practices and online assessment tools
Technology and expertise developed by Berkeley Lab are helping public and private data center operators cut waste and redirect the saved resources toward other needs that drive competitiveness. Through the Center of Expertise for Data Center Energy, the Lab has developed a suite of diagnostic tools, best practices, and technical guidance to enable more effective and competitive data center operations:
- The DC Pro Tool assesses baseline energy performance and identifies major areas for improvement
- The Air Management Tool Suite evaluates airflow and cooling strategies
- The Electrical Power Chain Tool maps efficiency opportunities in UPS systems and electrical distribution
These tools allow operators to pinpoint inefficiencies, test retrofits, and model performance impacts before investing in upgrades. Targeted improvements - such as refined airflow management, upgraded computer room air handler controls, and the adoption of thermosyphon-based cooling - have achieved an estimated 8% reduction in cooling energy use and saved over 1 million gallons of water annually. These system-level enhancements also improved fault tolerance and operational flexibility, supporting the scalability required by next-generation high-performance computing.
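The tools themselves are not shown here; the sketch below only illustrates the general benchmarking idea they share, comparing measured subsystem energy shares against best-practice targets and flagging the gaps. The target values are invented for illustration, not DC Pro's actual reference data:

```python
# Toy benchmarking pass in the spirit of an assessment tool.
# Target fractions of total facility energy are invented for
# illustration; DC Pro uses its own reference data.
measured = {"IT": 0.62, "cooling": 0.26, "power chain": 0.09, "lighting": 0.03}
targets  = {"cooling": 0.15, "power chain": 0.07, "lighting": 0.02}

for subsystem, target in targets.items():
    gap = measured[subsystem] - target
    if gap > 0:
        print(f"{subsystem}: {gap:.0%} of facility energy above target "
              "-> candidate for retrofit")
```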
Optimizing data center energy consumption with simulations
Building simulation tools developed by Berkeley Lab are helping solve key problems in data centers.
Meta uses the Lab's Modelica Buildings Library, a free, open-source library of dynamic simulation models for building energy and control systems, to optimize energy and water use. Carrier, the world's largest HVAC manufacturer, uses the library to operate colocation data centers and to develop cooling systems and aftermarket services for hyperscale data centers.
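Modelica models are equation-based and far richer than anything shown here, but the core idea of dynamic thermal simulation can be sketched in a few lines. The toy single-zone heat balance below is plain Python with invented parameters, not the Modelica Buildings Library:

```python
# Toy single-zone heat balance: C * dT/dt = Q_IT - UA * (T - T_supply).
# All parameters are invented for illustration; real Modelica models
# capture far more detail (fluid loops, controls, equipment curves).
C = 5.0e6        # thermal capacitance of the room, J/K
Q_IT = 50_000.0  # IT heat load, W
UA = 4_000.0     # effective cooling conductance, W/K
T_SUPPLY = 18.0  # cooling supply temperature, C

t, dt, temp = 0.0, 10.0, 22.0  # start at 22 C, 10-second steps
while t < 3600:                # one simulated hour
    temp += dt * (Q_IT - UA * (temp - T_SUPPLY)) / C
    t += dt
print(f"room temperature after 1 h ~ {temp:.1f} C")  # approaches 18 + 50000/4000 = 30.5 C
```

In the real library, the same kind of balance is written declaratively as differential-equation components and handed to a solver, which is what makes it practical to model entire cooling plants rather than single rooms.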
Berkeley Lab researchers are also partnering with a team led by the University of Maryland to develop MOSTCOOL (Multi-Objective Simulation Tool for Cooling Optimization and Operational Longevity), a data center modeling tool funded under the ARPA-E COOLERCHIPS program. MOSTCOOL is a simulation toolset for optimizing the design of data centers, including power and thermal management systems, for lower cooling energy demand and lower cost while maintaining high reliability and availability. Berkeley Lab's team is responsible for developing and integrating the energy modeling capability (cooling systems and waste heat recovery) using the EnergyPlus engine.
Planning for the future of data center efficiency
Berkeley Lab has supported both the data center and electric power industries in planning for a future where computing has a stable foundation to grow. Best practices and tools from the Lab have been adopted everywhere from small server rooms to hyperscale cloud facilities. To share this knowledge, the Data Center Energy Practitioner training program educates the workforce needed to carry out efficiency upgrades. In consultation with industry, the program regularly updates its curriculum to reflect the state of the art in key areas such as IT equipment, air management, cooling systems, and electrical systems.
In October, close to 150 industry attendees participated in a listening session hosted by the Lab with partner BP Castrol at the 2025 OCP Global Summit. The session solicited input on the most challenging technical barriers, such as powering and cooling high-density compute equipment and microchips, and on industry-wide practices and trends in the selection of IT equipment across data center types and workloads. This feedback will be used to calibrate industry-wide models of U.S. data center energy use.