Boson Sampling Breakthrough in Quantum AI Applications

Okinawa Institute of Science and Technology Graduate University

For over a decade, researchers have considered boson sampling, a quantum computing protocol involving light particles, a key milestone toward demonstrating the advantage of quantum methods over classical computing. But while previous experiments showed that boson sampling is hard to simulate with classical computers, practical uses have remained out of reach. Now, in Optica Quantum, researchers from the Okinawa Institute of Science and Technology (OIST) present the first practical application of boson sampling for image recognition, a vital task across many fields, from forensic science to medical diagnostics. Their approach uses just three photons and a linear optical network, marking a significant step towards low-energy quantum AI systems.

Harnessing quantum complexity

Bosons, particles like photons that follow Bose-Einstein statistics, exhibit complex interference effects when passed through certain optical circuits. In boson sampling, researchers inject single photons into one such circuit, then measure the output probability distribution after the photons interfere.
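For readers who want to see the arithmetic behind that claim, the sketch below computes standard boson-sampling output probabilities for a small circuit: the probability of each detection pattern is the squared magnitude of a matrix permanent, the quantity that makes classical simulation hard. This is a minimal illustration in Python, not the authors' code; the six-mode circuit size and the Haar-random unitary are arbitrary choices for the demo.

```python
import itertools
import math

import numpy as np
from scipy.stats import unitary_group  # Haar-random unitary = random linear optical circuit

def permanent(a: np.ndarray) -> complex:
    """Matrix permanent via Ryser's formula, O(2^n * n) for an n x n matrix."""
    n = a.shape[0]
    total = 0.0 + 0.0j
    for subset in itertools.product([0, 1], repeat=n):
        k = sum(subset)
        if k == 0:
            continue
        cols = [j for j, bit in enumerate(subset) if bit]
        total += (-1) ** k * np.prod(a[:, cols].sum(axis=1))
    return (-1) ** n * total

def output_probability(u, in_modes, out_counts):
    """P(out_counts | one photon in each of in_modes) for circuit unitary u."""
    rows = [mode for mode, t in enumerate(out_counts) for _ in range(t)]
    sub = u[np.ix_(rows, in_modes)]  # rows repeated by photon multiplicity
    norm = np.prod([math.factorial(t) for t in out_counts])
    return abs(permanent(sub)) ** 2 / norm

# Demo: 3 photons in the first 3 modes of a random 6-mode circuit,
# mirroring the three-photon setting described in the article.
m, in_modes = 6, [0, 1, 2]
u = unitary_group.rvs(m, random_state=7)
probs = {}
for occupied in itertools.combinations_with_replacement(range(m), 3):
    counts = [occupied.count(j) for j in range(m)]
    probs[tuple(counts)] = output_probability(u, in_modes, counts)
print(f"total probability over all patterns: {sum(probs.values()):.6f}")  # ~1.0
```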

To understand how such sampling works, think of marbles on a pegboard. Drop many marbles and record where each lands: the distribution of landing positions forms a bell curve. Run the same experiment with single photons, however, and the results are completely different. Photons display wave-like properties, so they can interfere with one another, and they interact with their environment very differently from large objects. As a result, they produce very complex probability distributions, which are hard for classical computing methods to predict.
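As a quick illustration of the classical half of that analogy (not of the quantum experiment), the short sketch below simulates a pegboard: each marble bounces left or right at every peg, so the landing slots follow a binomial distribution that approximates a bell curve. The peg and marble counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pegs, n_marbles = 12, 100_000

# Each marble deflects right with probability 1/2 at every peg; its final
# slot is the number of rightward bounces, so slots follow Binomial(n_pegs, 0.5).
slots = rng.binomial(n_pegs, 0.5, size=n_marbles)
freqs = np.bincount(slots, minlength=n_pegs + 1) / n_marbles

# Print an ASCII histogram: the familiar bell-curve shape emerges.
for k, p in enumerate(freqs):
    print(f"slot {k:2d}: {'#' * int(200 * p)}")
```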

For small objects like beads or marbles, when dropped down a pegboard, the likelihood of landing in a particular place follows the normal distribution, creating a bell curve shape. The same is not true of photons, which display a complex distribution that classical computers find difficult to predict.
Matemateca (IME/USP)/Rodrigo Tetsuo Argenton

From quantum reservoir to image recognition

In this paper, the researchers developed a new quantum AI method for image recognition based on boson sampling. In their simulated experiment, they began by generating a complex photonic quantum state, onto which simplified image data was encoded.

Three different sets of input data are simplified using PCA. Single photons are injected into a random optical circuit to generate a complex quantum state. The simplified data is encoded onto this state, which then passes through a second interferometer to form the quantum reservoir. Photon detection reveals a boson sampling probability distribution, which is combined with the original image data and fed into a simple, trainable linear classifier to make predictions.
In their simulated system, image data is first simplified using a process called principal component analysis (PCA), which reduces the amount of information while preserving key features. A complex photonic state is generated, onto which this data is encoded, before being processed in the quantum reservoir, where interference between photons produces rich, complex patterns used for image recognition. This system requires training only at the final stage, a simple linear classifier, making the overall approach both efficient and effective for accurate image recognition.
Sakurai et al., 2025

The researchers used greyscale images from three different data sets as input. Since each pixel is a greyscale value, the information is easy to represent numerically and could be compressed using principal component analysis (PCA) to retain key features. This simplified data was encoded into the quantum system by adjusting the properties of single photons. The photons then passed through a quantum reservoir (a complex optical network), where interference created rich, high-dimensional patterns. Detectors recorded photon positions, and repeated sampling built a boson sampling probability distribution. This quantum output was combined with the original image data and processed by a simple linear classifier. This hybrid approach preserved information and outperformed all comparably sized machine learning methods that the researchers tested, providing highly accurate image recognition across all data sets.
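To make that workflow concrete, here is a minimal classical sketch of the same pipeline shape, with the quantum reservoir replaced by a stand-in: PCA compression, a fixed untrained feature expansion playing the reservoir's role, concatenation with the original image data, and a trainable linear classifier at the end. The dataset (scikit-learn's digits), the component count, and the `reservoir_features` map are illustrative assumptions, not the authors' setup; in the actual study the feature expansion comes from boson-sampling statistics rather than the classical map used here.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Greyscale images flattened to vectors (stand-in for the paper's three datasets).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: PCA compresses each image while preserving key features.
pca = PCA(n_components=6).fit(X_train)

# Step 2 (placeholder): the paper encodes the PCA features onto a photonic
# state and samples a boson-sampling distribution; here a fixed random
# nonlinear map merely mimics the reservoir's role as an untrained,
# high-dimensional feature expansion.
rng = np.random.default_rng(0)
W = rng.normal(size=(6, 256))

def reservoir_features(z):
    return np.cos(z @ W)  # fixed (never trained) nonlinear expansion

# Step 3: concatenate the expanded features with the original image data,
# then train only a simple linear classifier, as in the paper.
def features(X):
    return np.hstack([reservoir_features(pca.transform(X)), X])

clf = LogisticRegression(max_iter=2000).fit(features(X_train), y_train)
print(f"test accuracy: {clf.score(features(X_test), y_test):.3f}")
```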

"Although the system may sound complex, it's actually much simpler to use that most quantum machine learning models." explained Dr Akitada Sakurai, first author of this study, and member of the Quantum Information Science and Technology Unit. "Only the final step-a straightforward linear classifier-needs to be trained. In contrast, traditional quantum machine learning models typically require optimization across multiple quantum layers."

Professor William J Munro, co-author and head of the Quantum Engineering and Design Unit, added, "What's particularly striking is that this method works across a variety of image datasets without any need to alter the quantum reservoir. That's quite different from most conventional approaches, which often must be tailored to each specific type of data."

Unlocking new frontiers in image recognition

Whether it's analyzing handwriting from a crime scene or identifying tumors in MRI scans, image recognition plays a vital role in many real-world applications. In this study, the quantum approach identified images with higher accuracy than similarly sized machine learning methods, opening new avenues in quantum AI.

"This system isn't universal- it can't solve every computational problem we give it," noted Professor Kae Nemoto, head of the Quantum Information Science and Technology Unit, Center Director of the OIST Center for Quantum Technologies, and co-author on this study. "But it is a significant step forward in quantum machine learning, and we're excited to explore its potential with more complex images in the future".

Funding: This work is supported in part by the MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) under Grant No. JPMXS0118069605.

About the OIST Center for Quantum Technologies

Established in 2022, the OIST Center for Quantum Technologies (OCQT) is an international hub for research and talent development in quantum technology. Guided by Japan's Quantum Future Society Vision, OCQT serves as a central platform for international collaborative research, interdisciplinary exploration of quantum technologies, and the development of talent with global mobility. The center also supports international exchange through workshops and summer schools and promotes collaboration with industry as well as technology transfers to nurture the next generation of international quantum researchers.
