News Release

Unrolling a rain-guided detail recovery network for single-image deraining

Peer-Reviewed Publication

Beijing Zhongke Journal Publishing Co. Ltd.

Visual comparison of the deraining results obtained using the various methods: (a) on an image from Rain100H and (b) on two images from Rain100H.

The traditional DSC method barely removes the rain streaks from images with heavy rain, and the background is indistinguishable. DDN, RESCAN, and PReNet remove most rain streaks; however, the results of DDN often show large blurry areas, and RESCAN and PReNet also tend to blur image details. Moreover, all three produce visual artifacts, as shown in the enlarged regions of the figure. DRD and our method preserve image details better than the other methods, and the derained images obtained using our method surpass those obtained using DRD in both detail preservation and visual quality. In summary, the proposed method effectively removes rain streaks while recovering clearer image details than the other methods.

Credit: Beijing Zhongke Journal Publishing Co. Ltd.

Rain streaks of different shapes, sizes, and directions obscure image background scenes, resulting in image degradation, including intensity fluctuation, color distortion, and even content alteration. Such degradation impairs the visual quality of an image and degrades the performance of many outdoor computer vision systems that require high-quality inputs. Effective image-deraining methods are therefore needed. In this study, we address the problem of single-image rain removal.
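For context, most single-image deraining work, including this study, builds on the standard additive composition model; the notation below is the field's convention rather than a quotation from the paper:

    O = B + R,

where O is the observed rainy image, B is the clean background, and R is the rain streak layer. Deraining amounts to estimating B (and typically R) from O alone, and the data fidelity term discussed below measures how well these estimates reproduce the observation O under this model.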

We propose a novel unrolling rain-guided detail recovery network (URDRN) for single-image deraining. In the proposed URDRN model, an effective rain clue is used as guidance to recover the texture details lost to over-deraining. In addition, to extract rain accurately, a context aggregation attention network (CAAN) is introduced to fully exploit global high-level semantic information, as global information has been shown to aid rain extraction. Moreover, the proposed URDRN is unrolled into two sub-networks, which has two benefits: in each sub-network, the data fidelity term establishing the imaging model is guaranteed and reinforced by the network input, and the rain/image priors are implicitly captured from the data by the corresponding sub-network structure (a minimal code sketch of this two-stage design follows the contribution list below). Our contributions are summarized as follows:

• Unlike other deraining approaches, which either recover lost details through regularization with a complex loss function in a unified framework or simply forgo further background detail recovery, our approach uses a rain clue to effectively guide detail recovery.

• Unlike other deep-learning-based deraining methods, which ignore the data fidelity term and the priors hidden in images, the proposed model is unrolled into two sub-networks within a unified framework, bridging the gap between data-driven learning and model-based optimization to a certain degree.

• Extensive experiments demonstrate that the proposed model outperforms other state-of-the-art models on both synthetic and real rain images in terms of both subjective visual experience and objective evaluation metrics.
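As a rough illustration of the two-stage unrolled design described above, the PyTorch sketch below splits the model into a rain-extraction sub-network containing a global-context attention module (standing in for the CAAN) and a rain-guided detail-recovery sub-network. All class names, layer sizes, and the attention layout are assumptions made for illustration; the authors' actual implementation may differ substantially.

    # Minimal sketch of a two-stage unrolled deraining model, assuming a
    # simple CNN backbone. Not the authors' code; names are hypothetical.
    import torch
    import torch.nn as nn

    class ContextAggregationAttention(nn.Module):
        """Channel attention over globally pooled features, standing in
        for the paper's CAAN; the exact design is an assumption here."""
        def __init__(self, channels: int, reduction: int = 4):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)  # aggregate global context
            self.fc = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x * self.fc(self.pool(x))  # reweight features globally

    class RainExtractionNet(nn.Module):
        """Sub-network 1: estimate a rain map from the rainy input."""
        def __init__(self, channels: int = 32):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
                ContextAggregationAttention(channels),
                nn.Conv2d(channels, 3, 3, padding=1),
            )

        def forward(self, rainy: torch.Tensor) -> torch.Tensor:
            return self.body(rainy)

    class DetailRecoveryNet(nn.Module):
        """Sub-network 2: recover background details, guided by the
        extracted rain map (the 'rain clue')."""
        def __init__(self, channels: int = 32):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(6, channels, 3, padding=1),  # coarse image + rain clue
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, 3, 3, padding=1),
            )

        def forward(self, coarse: torch.Tensor, rain: torch.Tensor) -> torch.Tensor:
            # Residual refinement of the coarse background estimate
            return coarse + self.body(torch.cat([coarse, rain], dim=1))

    class URDRNSketch(nn.Module):
        def __init__(self):
            super().__init__()
            self.rain_net = RainExtractionNet()
            self.detail_net = DetailRecoveryNet()

        def forward(self, rainy: torch.Tensor) -> torch.Tensor:
            rain = self.rain_net(rainy)
            coarse = rainy - rain  # data fidelity: O = B + R re-imposed by the input
            return self.detail_net(coarse, rain)

    if __name__ == "__main__":
        out = URDRNSketch()(torch.randn(1, 3, 64, 64))
        print(out.shape)  # torch.Size([1, 3, 64, 64])

Running URDRNSketch on a 1×3×64×64 tensor returns a tensor of the same shape. Note that the subtraction rainy - rain is where the additive model O = B + R re-enters as a hard constraint supplied by the network input rather than as a learned mapping, which is the sense in which unrolling keeps the data fidelity term explicit.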

