News Release

New method for addressing the reliability challenges of neural networks in inverse imaging problems

Researchers develop cycle-consistency-based uncertainty quantification technique

Peer-Reviewed Publication

Intelligent Computing

Image: Corrupted input image detection using cycle-consistency-based uncertainty and bias estimators. (A) Left: the sharp image (ground truth) and the motion blur kernel used. Right: the noise-corrupted input images and the deblurred outputs. (B) Projection of the data onto the 2D space formed by the two most important attributes (cycle-consistency-based uncertainty estimators). (C) Detection accuracy of the new method and two baseline methods. (D) Estimated and actual uncertainty, and the importance of each attribute for classification.

Credit: Ozcan Lab @UCLA.

Uncertainty estimation is critical to improving the reliability of deep neural networks. A research team led by Aydogan Ozcan at the University of California, Los Angeles, has introduced an uncertainty quantification method that uses cycle consistency to enhance the reliability of deep neural networks in solving inverse imaging problems.

This research was published Dec. 21 in Intelligent Computing, a Science Partner Journal.

Deep neural networks have been used to solve inverse imaging problems, such as image denoising, super-resolution imaging and medical image reconstruction, in which the goal is to reconstruct an ideal image from the raw measurement data actually captured, which has typically undergone some degradation. However, deep neural networks sometimes produce unreliable results, and in some contexts incorrect predictions can have severe consequences. Models that can quantitatively estimate how certain they are about their output are better at detecting abnormal situations, such as anomalous data and adversarial attacks.

The new method for estimating network uncertainty uses a physical forward model, which serves as a computational representation of the underlying processes governing the input–output relationship. When this model is combined with the neural network and forward–backward cycles are executed between the input and output data, uncertainty accumulates across the cycles and can be effectively estimated.
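To make the cycle mechanism concrete, the following minimal sketch, written for illustration only, shows how such forward–backward cycles could be executed and the differences between adjacent outputs recorded. The Gaussian-blur forward model, the identity placeholder standing in for a trained network, and all function names are assumptions for this example, not the authors' implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def forward_model(x):
        """Hypothetical physical forward model: a Gaussian blur standing in
        for the true degradation process (e.g., motion blur plus noise)."""
        return gaussian_filter(x, sigma=2.0)

    def network(y):
        """Placeholder for a trained reconstruction network g(y) -> x_hat.
        In practice this would be a deep deblurring or super-resolution model;
        the identity is used here only so the sketch runs."""
        return y

    def run_cycles(y0, n_cycles=5):
        """Alternate backward (network) and forward (physical model) passes,
        recording the measurement-space output after every cycle."""
        outputs = [y0]
        y = y0
        for _ in range(n_cycles):
            x_hat = network(y)        # backward: measurement -> image estimate
            y = forward_model(x_hat)  # forward: image estimate -> measurement
            outputs.append(y)
        return outputs

    def cycle_consistency(outputs):
        """Difference between adjacent outputs in the cycle, measured with a norm."""
        return [np.linalg.norm(outputs[k + 1] - outputs[k])
                for k in range(len(outputs) - 1)]

    y0 = np.random.rand(64, 64)  # stand-in for a measured (degraded) image
    print(cycle_consistency(run_cycles(y0)))

Intuitively, the way these adjacent-output differences grow or shrink across cycles carries information about how uncertain the network's output is.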

The theoretical underpinning of the method involves establishing bounds on cycle consistency, defined as the difference between adjacent outputs in the cycle. The researchers derived upper and lower bounds for cycle consistency and showed how it relates to the uncertainty of the neural network's output. The analysis covers both cases where the cycle outputs diverge and cases where they converge, providing expressions for each scenario. The derived bounds can be used to estimate uncertainty even without knowledge of the ground truth.
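In symbols chosen here for illustration (not necessarily the paper's own notation), if f denotes the physical forward model, g the neural network, and y_k the measurement-space output after the k-th forward–backward cycle, the bounded quantity can be written as:

    % y_0 is the measured input; each cycle maps y_k to y_{k+1} = f(g(y_k)).
    % Cycle consistency at step k: the difference between adjacent cycle outputs,
    % written here with a generic norm.
    \mathcal{C}_k = \left\lVert y_{k+1} - y_k \right\rVert
                  = \left\lVert f\bigl(g(y_k)\bigr) - y_k \right\rVert

The upper and lower bounds derived in the paper relate this quantity to the uncertainty of the network output g(y_k), which is why it remains informative even when no ground-truth image is available.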

The efficacy of the new method was demonstrated through two experiments:

1. Detection of image corruption

For this task, the researchers focused on one type of inverse problem: image deblurring. They generated both noise-corrupted and uncorrupted blurry images and deblurred them with an image-deblurring network that had been pre-trained on uncorrupted data. They then trained a machine learning model to classify each image as corrupted or uncorrupted using metrics extracted from the forward-backward cycles, and found that these cycle-consistency-based estimates of network uncertainty and bias made the final classification more accurate.

2. Detection of out-of-distribution images

For this second task, the authors extended their method to image super-resolution problems. They collected three types of low-resolution images (anime, microscopy and face images) and trained three super-resolution neural networks, one for each image type. Each of these networks was then tested on all three image types, and a machine learning algorithm learned to detect discrepancies between the training and testing data distributions based on the forward-backward cycles (a simplified sketch of this detection pipeline follows below). For example, when the anime-image super-resolution network was tested, low-resolution microscopy and face images were "out-of-distribution," that is, not what the network was trained for; the algorithm accurately detected these out-of-distribution cases and alerted the users. Results for the other two networks were similar. Compared with other methods, the cycle-consistency-based method achieved better overall accuracy in identifying out-of-distribution images.
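The sketch below shows, in simplified form, how cycle-consistency values could feed a downstream detector in both experiments. The synthetic data, the hand-picked summary features and the logistic-regression classifier are illustrative assumptions, not the authors' exact setup.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def cycle_features(consistency):
        """Summarize a sequence of cycle-consistency values into a small
        feature vector: first value, last value, total growth, and mean."""
        c = np.asarray(consistency, dtype=float)
        return np.array([c[0], c[-1], c[-1] - c[0], c.mean()])

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for cycle-consistency sequences: for clean /
    # in-distribution inputs the values stay small, while for corrupted /
    # out-of-distribution inputs they grow across cycles.
    clean = [np.abs(rng.normal(0.1, 0.02, size=5)) for _ in range(50)]
    bad = [np.abs(rng.normal(0.1, 0.02, size=5)) + 0.1 * np.arange(5)
           for _ in range(50)]

    X = np.stack([cycle_features(c) for c in clean + bad])
    y = np.array([0] * len(clean) + [1] * len(bad))  # 1 = corrupted / OOD

    # A simple classifier over the cycle-based features; this only
    # illustrates the overall pipeline, not the study's detector.
    detector = LogisticRegression().fit(X, y)
    print("training accuracy:", detector.score(X, y))

In the study itself, the features are the cycle-consistency-based uncertainty and bias estimators computed from the networks' outputs, and the detector's accuracy is compared against baseline methods.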

The researchers anticipate that their cycle-consistency-based uncertainty quantification method will substantially improve the reliability of neural network inferences in inverse imaging problems. The method could also find applications in uncertainty-guided learning. This study marks an important step toward addressing the challenges associated with uncertainty in neural network predictions, paving the way for more reliable and confident deployment of deep learning models in critical real-world applications.
