Deep learning-based uncertainty quantification for quality assurance in hepatobiliary imaging-based techniques
Caption
Figure 2: Architectural frameworks for uncertainty quantification in hepatobiliary image processing. (A) T1ρ mapping framework with integrated uncertainty estimation. The probabilistic neural network processes T1ρ-weighted images to simultaneously generate refined T1ρ maps and uncertainty estimates. This framework achieved <3% relative mapping error while reducing scan times from 10 to 6 seconds (Huang et al., 2023). Uncertainty-weighted training enables effective ROI refinement and identification of unreliable regions. (B) UP-Net dual-module physics-driven architecture. The first module employs GAN-based artifact suppression to reduce radial streak artifacts in stack-of-radial MRI data. The second module uses a bifurcated U-Net structure for parameter mapping, generating quantitative maps (fat fraction and R2*) along with uncertainty estimates. This framework dramatically improved processing efficiency (79 ms/slice vs. 3.2 min/slice) while maintaining accuracy through uncertainty-weighted training (Shih et al., 2023). Color coding indicates processing stages: input data (blue), intermediate processing modules (orange/purple), and output maps (green for quantitative maps, red for uncertainty estimates). Arrows show the data flow through each framework, illustrating how uncertainty quantification is integrated into the processing pipeline. Both architectures demonstrate the evolution toward real-time processing while maintaining robust uncertainty estimation for clinical applications.
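
As a rough illustration of the uncertainty-weighted training referred to in the caption, the sketch below shows a common heteroscedastic regression loss in PyTorch, in which voxels with high predicted uncertainty are down-weighted in the data term while a log-variance penalty discourages inflating uncertainty everywhere. The function name, tensor shapes, and exact weighting are illustrative assumptions, not the specific losses used by Huang et al. (2023) or Shih et al. (2023).

    import torch

    def uncertainty_weighted_loss(pred_map, log_var, target_map):
        # Heteroscedastic loss: the network predicts both a quantitative
        # map (e.g., T1rho, fat fraction, or R2*) and a per-voxel
        # log-variance map; high predicted variance reduces the squared
        # error weight, while the log-variance term penalizes overly
        # large uncertainty estimates.
        precision = torch.exp(-log_var)
        data_term = precision * (pred_map - target_map) ** 2
        return torch.mean(0.5 * data_term + 0.5 * log_var)

    # Toy usage with random tensors standing in for network outputs
    # (batch of 2 single-channel 64x64 maps).
    pred = torch.randn(2, 1, 64, 64, requires_grad=True)
    log_var = torch.zeros(2, 1, 64, 64, requires_grad=True)
    target = torch.randn(2, 1, 64, 64)
    loss = uncertainty_weighted_loss(pred, log_var, target)
    loss.backward()

In this formulation the same uncertainty map used to weight the loss during training can be thresholded at inference time to flag unreliable regions, which is the mechanism the caption describes for ROI refinement.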
Credit
Copyright: © 2025 Singh et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Usage Restrictions
With credit to the original source.
License
Original content