Researchers at the University of California, Los Angeles (UCLA) have developed an optical computing framework that performs large-scale nonlinear computations using only linear materials. Reported in eLight, an open-access SpringerOpen journal, the study demonstrates that diffractive optical processors—thin, passive structures composed of phase-only layers—can compute numerous nonlinear functions simultaneously, executing them rapidly and in parallel at a spatial density bounded only by the diffraction limit of light.
Nonlinear operations underpin nearly all modern information-processing tasks, from machine learning and pattern recognition to general-purpose computing. Yet implementing such operations optically has remained a challenge, as most nonlinear optical effects are weak, power-hungry, or slow. The UCLA team demonstrated universal nonlinear function approximation with linear optical materials by encoding the input variables of nonlinear functions into the phase of an optical wavefront and then processing that wavefront through an optimized, static diffractive architecture made entirely of linear materials. Each diffraction-limited output pixel corresponds to a unique nonlinear function, enabling extreme parallelism within a compact, passive optical system.
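The mechanism above can be illustrated with a loose numerical analogy (this is a hypothetical toy model, not the paper's optimized architecture): a random fixed complex vector `A` stands in for the trained diffractive layers, a scalar input `x` is encoded into the phase of the input field, and the field propagates purely linearly; the detected intensity is nonetheless a nonlinear function of `x`, because detection measures the squared magnitude of the field.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the optimized diffractive layers: a fixed complex
# linear operator (here, one output pixel, so just a weight vector).
N = 16
A = rng.normal(size=N) + 1j * rng.normal(size=N)

def output_intensity(x):
    """Encode scalar x in the phase of a wavefront, propagate it through
    a purely linear system, and detect intensity at one output pixel."""
    field_in = np.exp(1j * x * np.arange(1, N + 1) / N)  # phase encoding of x
    field_out = A @ field_in                             # linear propagation
    return np.abs(field_out) ** 2                        # detection: |.|^2, nonlinear in x

# The detected intensity varies nonlinearly with x even though every
# optical element in the model is linear in the field.
xs = np.linspace(0.0, 2.0 * np.pi, 5)
ys = [output_intensity(x) for x in xs]
```

Different output pixels would simply correspond to different weight vectors, which is the sense in which one processor can realize many distinct functions in parallel.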
"This work shows that nonlinear computation can emerge from purely linear optical interactions when the information is structurally encoded in the phase space of light," said Aydogan Ozcan, Chancellor's Professor of Electrical and Computer Engineering at UCLA and the corresponding author of the study. "By exploiting diffractive processing and wavefront encoding, we have unlocked a powerful class of optical systems that compute nonlinear functions at large scale and speed with a massive spatial density."
UCLA researchers established theoretical and empirical proofs that these diffractive processors act as universal nonlinear function approximators—capable of realizing an arbitrary set of bandlimited nonlinear functions, including multivariate and complex-valued functions that are all-optically cascadable. They also reported the successful approximation of nonlinear activation functions commonly used in digital neural networks, including the sigmoid, tanh, ReLU (rectified linear unit), and softplus functions.
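To give a sense of what "bandlimited" means here (a hypothetical illustration, not the paper's method): a bandlimited function is one built from finitely many frequency components, and even common activation functions like tanh are well captured by a low-order Fourier expansion over a finite interval. The sketch below fits such an expansion by least squares; the harmonic count `K` and the interval are arbitrary choices.

```python
import numpy as np

# Approximate tanh on [-pi, pi] with a truncated Fourier (bandlimited) basis.
x = np.linspace(-np.pi, np.pi, 512)
target = np.tanh(x)

K = 8  # number of harmonics: the "bandwidth" of the approximation
basis = np.column_stack(
    [np.ones_like(x)]
    + [f(k * x) for k in range(1, K + 1) for f in (np.sin, np.cos)]
)
coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
approx = basis @ coeffs

# Aside from edge effects at the interval boundary, the low-order
# expansion tracks tanh closely on average.
mean_err = np.mean(np.abs(approx - target))
```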
The researchers further demonstrated, through numerical simulations, the parallel computation of one million distinct nonlinear functions, accurately executed at wavelength-scale spatial density at the output plane of an optimized, static diffractive optical processor. They also reported an experimental validation of their architecture using a compact optical setup comprising a spatial light modulator and an image sensor, which successfully learned and executed tens of distinct nonlinear functions simultaneously.
The study's framework is scalable to much larger systems by leveraging high-end image sensors with hundreds of megapixels to potentially compute hundreds of millions of nonlinear functions, all in parallel. Such a capability could advance ultrafast analog computing, neuromorphic photonics, and high-throughput optical signal processing—achieved without nonlinear optical materials or electronic post-processing.
The authors of this work are Dr. Md Sadman Sakib Rahman, Yuhang Li, Xilin Yang, Dr. Shiqi Chen, and Professor Aydogan Ozcan, all at the UCLA Samueli School of Engineering. This research was supported by the US Department of Energy Office of Basic Energy Sciences, Materials Sciences and Engineering Division. Dr. Ozcan is also an Associate Director of the California NanoSystems Institute (CNSI).