AI at Light Speed Now Possible

Aalto University

Tensor operations are the kind of arithmetic that forms the backbone of nearly all modern technologies, especially artificial intelligence, yet they extend beyond the simple maths we're familiar with. Imagine the mathematics behind rotating, slicing, or rearranging a Rubik's cube along multiple dimensions. While humans and classical computers must perform these operations step by step, light can do them all at once.
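For readers unfamiliar with the term, a small illustration (not taken from the paper) shows what a tensor operation is and why the 'step by step' framing matters: classically, contracting a multidimensional array with a matrix is many individual multiply-add steps, even though conceptually it is a single operation on the whole array.

```python
import numpy as np

# A tensor operation: contracting a 3-D tensor with a matrix.
# Shapes and values here are arbitrary, chosen only for illustration.
T = np.arange(24.0).reshape(2, 3, 4)   # a rank-3 tensor
M = np.ones((4, 5))                    # a matrix to contract with

# Step-by-step view: a classical computer works through every
# multiply-add individually.
out_loop = np.zeros((2, 3, 5))
for i in range(2):
    for j in range(3):
        for k in range(5):
            out_loop[i, j, k] = sum(T[i, j, l] * M[l, k] for l in range(4))

# The same contraction expressed as one whole-array tensor operation.
out_tensor = np.einsum('ijl,lk->ijk', T, M)
assert np.allclose(out_loop, out_tensor)
```

The two computations are mathematically identical; the claim in the research is that light can perform the whole contraction physically, in a single pass, rather than step by step.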

Today, nearly every task in AI, from image recognition to natural language processing, relies on tensor operations. However, the explosion of data has pushed conventional digital computing platforms, such as GPUs, to their limits in speed, scalability and energy consumption.

Motivated by this pressing problem, an international research collaboration led by Dr. Yufeng Zhang from the Photonics Group at Aalto University's Department of Electronics and Nanoengineering has unlocked a new approach that performs complex tensor computations in a single propagation of light. The result is single-shot tensor computing, achieved at the speed of light itself.

'Our method performs the same kinds of operations that today's GPUs handle, like convolutions and attention layers, but does them all at the speed of light,' says Dr. Zhang. 'Instead of relying on electronic circuits, we use the physical properties of light to perform many computations simultaneously.'
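To make the quote concrete, the 'attention layers' Dr. Zhang mentions boil down to a handful of matrix multiplications, which is why an optical matrix engine can accelerate them. The sketch below is a generic scaled dot-product attention step, with illustrative shapes, and is not code from the study.

```python
import numpy as np

# Scaled dot-product attention reduces to matrix (tensor) multiplications,
# the operations an optical processor would perform in one pass of light.
rng = np.random.default_rng(1)
Q = rng.standard_normal((6, 8))   # queries  (6 tokens, width 8)
K = rng.standard_normal((6, 8))   # keys
V = rng.standard_normal((6, 8))   # values

scores = Q @ K.T / np.sqrt(8)     # one matrix multiplication
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax
out = weights @ V                 # another matrix multiplication
assert out.shape == (6, 8)
```

On a GPU, each `@` is a scheduled kernel of many arithmetic steps; in the optical scheme, the equivalent products are carried out by light fields combining as they propagate.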

To achieve this, the researchers encoded digital data into the amplitude and phase of light waves, effectively turning numbers into physical properties of the optical field. When these light fields interact and combine, they naturally carry out mathematical operations such as matrix and tensor multiplications, which form the core of deep learning algorithms. By introducing multiple wavelengths of light, the team extended this approach to handle even higher-order tensor operations.
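A toy numerical model can convey the idea of that encoding, though the actual physical scheme in the paper is more sophisticated. Here, assumed purely for illustration, a real number is carried as the magnitude of a complex field value, with a pi phase shift marking its sign; a matrix-vector product then appears as a coherent superposition of weighted fields.

```python
import numpy as np

def encode(x):
    """Hypothetical encoding: magnitude carries |x|, a pi phase
    shift carries the sign. This is an illustration, not the
    authors' actual modulation scheme."""
    return np.abs(x) * np.exp(1j * np.pi * (x < 0))

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))   # 'weights', e.g. one network layer
x = rng.standard_normal(4)        # input data

field_in = encode(x)              # data carried on the optical field
field_out = W @ field_in          # superposition: all weighted sums at once

# Reading out the field recovers the ordinary matrix-vector product.
assert np.allclose(field_out.real, W @ x)
```

In the physical system the summation is done by interference as the light propagates, not by a processor, and using several wavelengths adds an extra index to the computation, lifting it from matrix products to higher-order tensor operations.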

'Imagine you're a customs officer who must inspect every parcel through multiple machines with different functions and then sort them into the right bins,' Zhang explains. 'Normally, you'd process each parcel one by one. Our optical computing method merges all parcels and all machines together — we create multiple "optical hooks" that connect each input to its correct output. With just one operation, one pass of light, all inspections and sorting happen instantly and in parallel.'

Another key advantage of this method is its simplicity. The optical operations occur passively as the light propagates, so no active control or electronic switching is needed during computation.

'This approach can be implemented on almost any optical platform,' says Professor Zhipei Sun, leader of Aalto University's Photonics Group. 'In the future, we plan to integrate this computational framework directly onto photonic chips, enabling light-based processors to perform complex AI tasks with extremely low power consumption.'

Ultimately, the goal is to deploy the method on existing hardware and platforms established by major companies, says Zhang, who conservatively estimates the approach will be integrated into such platforms within 3-5 years.

'This will create a new generation of optical computing systems, significantly accelerating complex AI tasks across a myriad of fields,' he concludes.

The research was published in Nature Photonics on November 14th, 2025.
