News Release

AIDEDNet: anti-interference and detail enhancement dehazing network for real-world scenes

Peer-Reviewed Publication

Higher Education Press

Image: The overall structure of the proposed Anti-Interference and Detail Enhancement Dehazing Network (AIDEDNet). In this network, the red rectangle marks the interference module and the green rectangle marks the compensation module.

Credit: Higher Education Press Limited Company

The haze phenomenon seriously interferes with image acquisition and reduces image quality. Owing to many uncertain factors, dehazing remains a challenge in image processing. Existing deep learning-based dehazing approaches apply the atmospheric scattering model (ASM), which originally comes from traditional dehazing methods. However, the datasets used to train deep learning models do not match this model well, for three reasons. First, the atmospheric illumination in ASM is obtained from prior experience, which is not accurate for dehazing real scenes. Second, it is difficult to obtain the scene depth that ASM requires for outdoor scenes. Third, haze is a complex natural phenomenon, and it is difficult to find an accurate physical model and related parameters to describe it.
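For context, the ASM discussed above is commonly written as follows (this standard formulation is background knowledge, not an equation quoted from the paper):

$$I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)},$$

where I(x) is the observed hazy image, J(x) is the haze-free scene radiance, A is the global atmospheric light (the illumination obtained from prior experience), t(x) is the transmission, β is the scattering coefficient, and d(x) is the scene depth that is difficult to measure outdoors.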

To address this challenge, a research team led by Fazhi HE published their new research in Frontiers of Computer Science (2023, Vol. 17, Issue 2), co-published by Higher Education Press and Springer Nature.

The team proposes a black-box method in which haze is treated as an image quality problem, without using any physical model such as ASM. Analytically, the team proposes a novel dehazing equation that combines two mechanisms: an interference item and a detail enhancement item. The interference item estimates the haze information to dehaze the image, and the detail enhancement item then repairs and enhances the details of the dehazed image. Based on the new equation, an Anti-Interference and Detail Enhancement Dehazing Network (AIDEDNet) is designed, which differs dramatically from existing dehazing networks in that the proposed network is fed haze-free images for training. Specifically, the team proposes a new way to construct haze patches on the fly during network training: each patch is randomly selected from the input image, and the thickness of its haze is also set at random.
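To make the on-the-fly patch construction concrete, below is a minimal NumPy sketch of how such haze-patch synthesis could work. The function name add_random_haze_patch, the patch-size and transmission ranges, and the use of a simple airlight blend to set the haze thickness are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def add_random_haze_patch(image, airlight=1.0, rng=None):
    """Synthesize a hazy patch on a haze-free training image.

    Illustrative sketch only: assumes `image` is a float array scaled
    to [0, 1]. A patch location and a haze thickness are drawn at
    random, and the patch is blended toward the airlight so that a
    locally hazy region appears in an otherwise clean image.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]

    # Randomly choose the patch size and its top-left corner.
    ph = int(rng.integers(h // 4, h // 2))
    pw = int(rng.integers(w // 4, w // 2))
    y = int(rng.integers(0, h - ph))
    x = int(rng.integers(0, w - pw))

    # Randomly set the haze thickness: a lower transmission t
    # means a thicker layer of synthetic haze.
    t = rng.uniform(0.3, 0.9)

    hazy = image.astype(np.float32).copy()
    patch = hazy[y:y + ph, x:x + pw]
    # Blend the patch toward the airlight: hazy = clean * t + A * (1 - t).
    hazy[y:y + ph, x:x + pw] = patch * t + airlight * (1.0 - t)
    return hazy
```

Because the clean image is known before the patch is hazed, every synthesized sample carries its own ground truth, which is what allows the network to be trained from haze-free images alone.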

In the future, at least four directions, though not limited to these, will be explored. The first is to further improve the proposed method with multi-scale and multi-feature convolutional networks. The second is to apply the proposed method to other image processing fields, such as image deraining, image denoising and restoration. The third is to extend the proposed method to video dehazing. The fourth is to extend the proposed idea and approach to 3D data.

###

Research Article

Jian ZHANG, Fazhi HE, Yansong DUAN, Shizhen YANG. AIDEDNet: anti-interference and detail enhancement dehazing network for real-world scenes. Front. Comput. Sci., 2023, 17(2): 172703 https://doi.org/10.1007/s11704-022-1523-9

 

About Frontiers of Computer Science (FCS)

FCS was launched in 2007. It is published bimonthly both online and in print by HEP and Springer. Prof. Zhi-Hua Zhou from Nanjing University serves as the Editor-in-Chief. It aims to provide a forum for the publication of peer-reviewed papers to promote rapid communication and exchange among computer scientists. FCS covers all major branches of computer science, including architecture, software, artificial intelligence, theoretical computer science, networks and communication, information systems, multimedia and graphics, information security, and interdisciplinary research. Readers may also be interested in the special columns "Perspective" and "Excellent Young Scholars Forum".

FCS is indexed by SCI(E), EI, DBLP, Scopus, etc. The latest impact factor is 2.669. FCS solicits the following article types: Review, Research Article, Letter.


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.