Light-Powered Computers May Slash AI Energy Use

Penn State

UNIVERSITY PARK, Pa. — A key problem facing artificial intelligence (AI) development is the vast amount of energy the technology requires, with some experts projecting that AI data centers could be responsible for over 13% of global electricity usage by 2028. According to Xingjie Ni, associate professor of electrical engineering at the Penn State School of Electrical Engineering and Computer Science, the key to addressing this roadblock could lie in computers powered by light instead of circuitry.

Ni and his team recently developed a prototype device that can accelerate AI computation and dramatically reduce its energy cost, which they detailed in a paper published today (Feb. 11) in Science Advances. Their system routes light through an "infinity mirror"-like loop of tiny optical elements, encoding data directly into the beams of light and capturing the resulting light patterns with a microscopic camera. AI models powered by this light-processing unit run faster and require far less energy than conventional electronic computing systems to complete tasks and perform calculations.

In the following Q&A, Ni discussed optical computing, how this new approach is more efficient than previous optical systems and the impacts this research could have on the future of AI and computing technology.

Q: What is optical computing? How is it different from traditional computing technology?

Ni: Traditional computers encode data into binary 1s and 0s and perform operations with electronic circuits, a very flexible and reliable approach, but one that consumes significant energy and generates a lot of heat. Optical computing is a way to process information using light instead of electricity. Rather than relying on billions of electronic transistors to do calculations step by step, these systems feed light through carefully designed optical components like lenses or mirrors, encoding both the calculation and its answer directly into the resulting patterns of light.

Optical computing offers key advantages for certain math-heavy tasks because photons, the elementary particles of light, don't interact with each other under normal conditions. This means many light signals can pass through the same system simultaneously, allowing optical computers to process large data sets incredibly quickly. These transformations happen at the speed of light, leading to very low latency, and they can be highly energy efficient because much of the computation can be performed with minimally powered or even passive optical components.

Q: How has optical computing been used in AI previously? How does your approach improve its implementation?

Ni: Since light can process many signals at once and travel extremely fast, these systems can, in principle, execute tasks like pattern recognition at high speed using little energy. This is why optical computing has been explored as an AI accelerator that performs the "heavy math" at the core of many AI models. In most prior demonstrations, however, light handles only the linear, or straightforward, part of computation, where doubling the input doubles the output, and multiple inputs combine predictably.

The decision-making that makes AI powerful is nonlinear in nature, meaning the output isn't simply proportional to the input: a small change in the input can produce a much larger response. This behavior, which drives the highly complex functions AI models can execute, has previously been implemented electronically or with specialized optical materials and high input power. Either way, these approaches require extra conversions between optical and electronic signals, resulting in slower, more complex, power-hungry hardware.
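The linear/nonlinear distinction above can be made concrete with a minimal sketch (not drawn from the paper; the layer sizes and activation choice are illustrative). A linear layer satisfies proportionality and superposition, while a typical AI activation function like ReLU breaks superposition:

```python
# Minimal illustration of linear vs. nonlinear operations in an AI model.
# This is a generic example, not the method described in the paper.

def linear_layer(weights, x):
    # Linear: doubling the input doubles the output, and
    # multiple inputs combine predictably (superposition).
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def relu(v):
    # Nonlinear: outputs of separate inputs do NOT simply add up.
    return [max(0.0, vi) for vi in v]

W = [[1.0, -2.0], [0.5, 1.0]]
x = [1.0, 1.0]

# Linearity: f(2x) == 2 * f(x)
doubled = linear_layer(W, [2 * xi for xi in x])
assert doubled == [2 * y for y in linear_layer(W, x)]

# Nonlinearity: relu(a + b) != relu(a) + relu(b)
a, b = 1.0, -1.0
assert relu([a + b])[0] != relu([a])[0] + relu([b])[0]
```

The first assertion is the behavior light handles naturally; the second is the kind of operation that, in most prior optical systems, forced a detour through electronics.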

Our approach targets this bottleneck directly. Instead of relying on high optical power and special materials to create the needed nonlinear behavior, we use a compact multi-pass optical loop, like an "infinity mirror," in which the light pattern effectively "builds up" a nonlinear relationship between the input data and the output over repeated passes between the mirrors. The core of our system is built from widely available components, like those used in everyday LCD displays and LED lights, rather than exotic materials or high-power lasers. By arranging these familiar elements in a multi-pass loop, we can produce the nonlinearity AI needs while remaining incredibly compact and efficient.
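A toy numerical sketch can convey the flavor of a multi-pass loop read out by a camera. This is emphatically not the paper's physics; it only illustrates two generic optics facts the description relies on: a field can recirculate and leak out over many passes, and a camera records intensity (the squared magnitude of the field), which is itself a nonlinear function of the input:

```python
# Toy model, NOT the paper's actual device: a field recirculates
# through a loop, a fraction leaks out on each round trip, and a
# camera measures intensity rather than the field itself.

def loop_output(x, mix=0.9j, leak=0.3, passes=8):
    # Accumulate the portions of the field that leak out each pass.
    field, out = x, 0.0
    for _ in range(passes):
        field = mix * field   # one pass through the loop (linear)
        out += leak * field   # a fraction escapes toward the camera
    return out

def camera_intensity(amplitude):
    # Photodetectors respond to intensity |E|^2, not the field:
    # in this toy, the measurement is where nonlinearity enters.
    return abs(amplitude) ** 2

# Doubling the input quadruples the measured intensity, so the
# input-to-measurement map is nonlinear (not proportional).
i1 = camera_intensity(loop_output(1.0))
i2 = camera_intensity(loop_output(2.0))
assert abs(i2 - 4 * i1) < 1e-9
```

In the real device the nonlinearity builds up across the optical loop itself, without high input power; the sketch only shows why combining passive linear passes with intensity detection can yield an input-output relationship that is not linear.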

Q: According to the paper, your optical computing approach could significantly reduce the size and energy requirements of AI systems. What impacts would this have on industry?

Ni: Companies are spending enormous amounts on electricity and cooling as AI usage has grown. In many data centers, the biggest operational constraint is the energy used and heat generated by the GPUs powering AI models, not just the shortage of GPUs. If the most computation-heavy parts of AI could be done with a smaller, energy-efficient optical module, it would ease that bottleneck. Companies could offer the same capabilities for less overhead cost, which translates to cheaper, more sustainable AI services for consumers.

Today, many devices still must send data back to a cloud because the local hardware can't run advanced models without draining the device's battery or overheating. Shrinking the size and power of AI hardware would push intelligence outward — into cameras, sensors, cars, drones, factory robots and medical devices — so they can respond in real time, keep sensitive data local and rely less on constant connectivity.

Q: What's next for this work?

Ni: Going forward, our goal is to turn this proof of concept into an optical computing module that is programmable, robust and ready to deploy. We want to offer developers the flexibility to tune the module's behavior for different tasks, rather than relying on whatever nonlinearity it naturally produces. We're working to shrink the setup into a compact unit that can plug into real computing platforms, so the optical part does more of the work end-to-end with minimal electronic overhead, and we plan to scale up to larger, more realistic workloads over time.

Although we don't see this replacing electronic computing, it could substantially accelerate it. Conventional electronics would handle general control, memory and flexibility, while the compact optical module takes on specific, high-volume computations that drive much of AI's cost and energy use. If this technology matures into something that can plug into today's platforms, we could power AI models with smaller, faster and more sustainable hardware.

Other co-authors affiliated with Penn State include Iam-Choon Woo, William E. Leonhard Professor of Electrical Engineering; Zhiwen Liu, professor of electrical engineering; Bofeng Liu, Xu Mei and Sadman Shafi, electrical engineering doctoral candidates at Penn State; and Tunan Xia, a senior photonics test engineer at Voyant Photonics, who completed his doctorate in optical design at Penn State during this work.

This work was supported by the Air Force Office of Scientific Research and the U.S. National Science Foundation.

At Penn State, researchers are solving real problems that impact the health, safety and quality of life of people across the commonwealth, the nation and around the world.

For decades, federal support for research has fueled innovation that makes our country safer, our industries more competitive and our economy stronger. Recent federal funding cuts threaten this progress.

Learn more about the implications of federal funding cuts to our future at Research or Regress.
