Revolutionizing Neural Networks for Inverse Imaging Reliability

Intelligent Computing

Uncertainty estimation is critical to improving the reliability of deep neural networks. A research team led by Aydogan Ozcan at the University of California, Los Angeles, has introduced an uncertainty quantification method that uses cycle consistency to enhance the reliability of deep neural networks in solving inverse imaging problems.

This research was published Dec. 21 in Intelligent Computing, a Science Partner Journal.

Deep neural networks have been used to solve inverse imaging problems, such as image denoising, super-resolution imaging and medical image reconstruction, in which the goal is to reconstruct an ideal image from the raw, often degraded, data that was actually captured. However, deep neural networks sometimes produce unreliable results, and in some contexts incorrect predictions can have severe consequences. Models that can quantitatively estimate how certain they are about their output are better at detecting abnormal situations, such as anomalous data and adversarial attacks.

The new method for estimating network uncertainty uses a physical forward model, which serves as a computational representation of the underlying processes governing the input–output relationship. The method combines this model with the neural network and executes forward–backward cycles between the input and output data; as the cycles proceed, uncertainty accumulates and can be effectively estimated.
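As a minimal sketch of this idea (the function and variable names below are illustrative placeholders, not the paper's code), the forward–backward loop alternates between reconstructing with the network and re-simulating the raw measurement with the physical forward model, keeping each reconstruction so that adjacent outputs can be compared:

```python
import numpy as np

def forward_backward_cycles(measurement, network, forward_model, n_cycles=5):
    """Alternate network reconstruction and forward-model simulation.

    `network` stands in for a trained inverse-imaging model and `forward_model`
    for the physical model of the acquisition process (both placeholders here).
    Returns the sequence of reconstructions produced across the cycles.
    """
    outputs, x = [], measurement
    for _ in range(n_cycles):
        y = network(x)        # backward step: estimate the ideal image
        outputs.append(y)
        x = forward_model(y)  # forward step: re-simulate the raw measurement
    return outputs

def cycle_consistency(outputs):
    """Differences between adjacent cycle outputs; larger values indicate
    greater accumulated uncertainty in the network's reconstruction."""
    return [float(np.mean((a - b) ** 2)) for a, b in zip(outputs[:-1], outputs[1:])]
```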

The theoretical underpinning of the method involves establishing the bounds of cycle consistency, defined as the difference between adjacent outputs in the cycle. The researchers derived upper and lower bounds for cycle consistency, demonstrating its relationship with the uncertainty of the output of the neural network. The study considered cases where cycle outputs diverged and cases where they converged, providing expressions for both scenarios. The derived bounds can be used to estimate uncertainty even without knowledge of the ground truth.
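In hedged notation (the symbols below are illustrative; the exact bound expressions are derived in the publication), the cycle recursion and the consistency metric read:

```latex
% x: measured input, f: physical forward model, g: neural network (inverse map)
% \hat{y}_k: reconstruction after the k-th cycle, y^*: unknown ground truth
\begin{aligned}
\hat{y}_1 &= g(x), \qquad \hat{y}_{k+1} = g\!\bigl(f(\hat{y}_k)\bigr),\\[2pt]
\mathcal{C}_k &= \bigl\lVert \hat{y}_{k+1} - \hat{y}_k \bigr\rVert
  \quad \text{(cycle consistency: difference between adjacent outputs).}
\end{aligned}
```

The derived upper and lower bounds relate $\mathcal{C}_k$ to the uncertainty of the network output $\hat{y}_k$, which is why uncertainty can be estimated without access to the ground truth $y^*$.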

The efficacy of the new method was demonstrated through two experiments:

1. Detection of image corruption

For this task, the researchers focused on one type of inverse problem called image deblurring. They created sets of noise-corrupted and uncorrupted blurry images and deblurred them with an image-deblurring network pre-trained on uncorrupted data. They then ran forward–backward cycles on those images and trained a machine learning model on the resulting cycle statistics to classify each image as corrupted or uncorrupted. They found that using their cycle-consistency metrics for estimating network uncertainty and bias made the final classification more accurate (a sketch of such a pipeline appears after this list).

2. Detection of out-of-distribution images

For this second task, the authors extended their method to image super-resolution problems. They collected three types of low-resolution images (anime, microscopy and face images) and trained three super-resolution neural networks, one for each image type. Each super-resolution network was then tested on all three image types, and a machine learning algorithm was trained to detect mismatches between the training and testing data distributions from the forward–backward cycles. For example, when the anime-image super-resolution network was tested on low-resolution microscopy and face images, those inputs were "out-of-distribution," that is, not what the network was trained for; the algorithm accurately detected these out-of-distribution cases and alerted users. Results for the other two networks were similar. Compared with other methods, the cycle-consistency-based approach had better overall accuracy for identifying out-of-distribution images.
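As a hedged sketch of how cycle-consistency statistics could feed the detectors used in both experiments (the feature choice, classifier and names below are assumptions for illustration, not the authors' exact pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cycle_features(measurement, network, forward_model, n_cycles=5):
    """Cycle-consistency summary statistics for one image (illustrative features)."""
    outputs, x = [], measurement
    for _ in range(n_cycles):
        y = network(x)        # e.g. a deblurring or super-resolution network
        outputs.append(y)
        x = forward_model(y)  # e.g. a re-blurring or downsampling forward model
    diffs = [float(np.mean((a - b) ** 2)) for a, b in zip(outputs[:-1], outputs[1:])]
    return [float(np.mean(diffs)), float(np.max(diffs)), diffs[-1]]

# Hypothetical usage; `images`, `labels`, `net` and `fwd` are placeholders.
# `labels` mark corrupted vs. uncorrupted inputs (experiment 1) or
# in-distribution vs. out-of-distribution inputs (experiment 2).
# X = np.array([cycle_features(img, net, fwd) for img in images])
# detector = LogisticRegression().fit(X, labels)
# flag = detector.predict(np.array([cycle_features(new_img, net, fwd)]))
```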

The researchers anticipate that their cycle-consistency-based uncertainty quantification method will significantly contribute to enhancing the reliability of neural network inferences in inverse imaging problems. Additionally, the method could find applications in uncertainty-guided learning. This study marks a significant step toward addressing the challenges associated with uncertainty in neural network predictions, paving the way for more reliable and confident deployment of deep learning models in critical real-world applications.
