Diffractive Network Advance Boosts Optical Computing

SPIE, the International Society for Optics and Photonics

State-of-the-art neural networks rely heavily on linear operations, such as matrix-vector multiplications and convolutions. Dedicated processors like GPUs and TPUs accelerate these operations, but they are constrained by power consumption and bandwidth. Optics is well suited to such operations because of its inherent parallelism, large bandwidth, and computation speed.

Diffractive deep neural networks (D2NN), also known as diffractive networks, constitute an emerging optical computing architecture. These task-specific networks are constructed from spatially engineered thin surfaces and can passively perform computational tasks at speed-of-light propagation through an ultrathin volume. The spatial features of these diffractive surfaces are learned through a one-time design process, and the optimized surfaces are then fabricated to create the physical hardware of the diffractive optical network.

A recent publication in Advanced Photonics Nexus by a team led by Aydogan Ozcan, Chancellor's Professor and the Volgenau Chair for Engineering Innovation at UCLA, introduces a method to perform complex-valued linear operations with diffractive networks under spatially incoherent illumination. Previous research by the same group showed that diffractive networks with sufficient degrees of freedom can perform arbitrary complex-valued linear transformations with spatially coherent light. Under spatially incoherent light, however, these networks can perform arbitrary linear transformations of input optical intensities only if the matrix elements defining the transformation are real and non-negative. Because spatially incoherent illumination sources are more prevalent and easier to access, there is a growing need for spatially incoherent diffractive processors that can handle data beyond non-negative values.
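Why incoherence imposes this non-negativity constraint can be seen with a toy model (an illustrative sketch, not code from the paper): under spatially incoherent illumination, input pixels do not interfere, so output intensities are weighted sums of input intensities with weights given by squared magnitudes of the system's complex transmission coefficients, which are necessarily real and non-negative.

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical complex field-to-field transmission matrix of a
# diffractive system (entries are generally complex-valued).
T = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# Coherent illumination: the output *fields* are T @ x, so output
# intensities depend on interference between input pixels.
# Spatially incoherent illumination: input pixels do not interfere,
# so output intensities are a weighted sum of input intensities,
# with weights |T_ij|**2 -- real and non-negative by construction.
W = np.abs(T) ** 2

intensities_in = rng.uniform(size=4)   # intensities are non-negative
intensities_out = W @ intensities_in   # linear in the intensities

print(np.all(W >= 0), np.all(intensities_out >= 0))  # True True
```

This is the sense in which a plain incoherent diffractive processor is confined to non-negative real linear transformations of intensities.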

By incorporating preprocessing and postprocessing steps that represent complex numbers by sets of non-negative real numbers, the UCLA researchers extended the processing power of spatially incoherent diffractive networks to the domain of complex numbers. They demonstrated that such incoherent diffractive processors can be designed to perform an arbitrary complex-valued linear transformation with negligible error, provided the diffractive design contains a sufficient number of optimizable phase features, which must scale with the dimensions of the input and output complex vector spaces.
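The pre/postprocessing idea can be illustrated with one standard decomposition (a hypothetical sketch of the general principle, not necessarily the exact representation used in the paper): split every complex number into four non-negative parts, so that an arbitrary complex matrix-vector product becomes a larger, purely non-negative real matrix-vector product, followed by simple subtractions.

```python
import numpy as np

rng = np.random.default_rng(0)

def split(z):
    """Split a complex array z = (p - q) + 1j*(r - s) into four
    non-negative real parts p, q, r, s."""
    return (np.maximum(z.real, 0), np.maximum(-z.real, 0),
            np.maximum(z.imag, 0), np.maximum(-z.imag, 0))

def nonneg_block_matrix(A):
    """Build a non-negative real matrix emulating y = A @ x on the
    stacked non-negative representation [x_p; x_q; x_r; x_s]."""
    Rp, Rm, Ip, Im = split(A)
    return np.block([
        [Rp, Rm, Im, Ip],   # -> positive part of Re(y)
        [Rm, Rp, Ip, Im],   # -> negative part of Re(y)
        [Ip, Im, Rp, Rm],   # -> positive part of Im(y)
        [Im, Ip, Rm, Rp],   # -> negative part of Im(y)
    ])

# A random complex-valued linear transformation and input vector
n = 5
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
x = rng.normal(size=n) + 1j * rng.normal(size=n)

# Preprocessing: encode x as a non-negative vector
v = np.concatenate(split(x))

# The "optical" step: a purely non-negative linear transformation,
# i.e. one an incoherent diffractive processor could realize
o = nonneg_block_matrix(A) @ v
o1, o2, o3, o4 = np.split(o, 4)

# Postprocessing: recover the complex-valued result
y = (o1 - o2) + 1j * (o3 - o4)
assert np.allclose(y, A @ x)
```

The cost of this encoding is a 4x expansion of the input and output dimensions, which is consistent with the paper's observation that the number of optimizable features must scale with the sizes of the complex vector spaces involved.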

The researchers showcased the practical application of their novel scheme through the encryption and decryption of complex-valued images using spatially incoherent diffractive networks. Beyond visual image encryption, these processors hold potential applications in various fields, such as in autonomous vehicles for ultrafast and low-power processing of natural scenes. The adaptability of spatially incoherent diffractive processors to handle data beyond non-negative values makes them valuable in diverse scenarios.

For details, see the original Gold Open Access article by Xilin Yang, Md Sadman Sakib Rahman, Bijie Bai, Jingxi Li, and Aydogan Ozcan, "Complex-valued universal linear transformations and image encryption using spatially incoherent diffractive networks," Adv. Photon. Nexus 3(1) 016010 (2024), doi 10.1117/1.APN.3.1.016010.
