Feature Attention Parallel Aggregation Network for Single Image Haze Removal
Images captured in hazy weather often suffer from color distortion and texture blur due to turbid media suspended in the atmosphere. In this paper, we propose a Feature Attention Parallel Aggregation Network (FAPANet) to restore a clear image directly from the corresponding hazy input. It adopts an encoder-decoder structure while incorporating residual learning and attention mechanisms. FAPANet consists of two key modules: a novel feature attention aggregation module (FAAM) and an adaptive feature fusion module (AFFM).
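The paper does not include code, but the FAAM idea of applying channel attention and pixel attention in parallel can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the bottleneck shapes, the sigmoid gating, and the residual combination rule at the end are assumptions in the spirit of SE-style attention.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """SE-style channel weights from a global average pool. feat: (C, H, W)."""
    pooled = feat.mean(axis=(1, 2))                      # (C,)
    weights = sigmoid(w2 @ np.maximum(w1 @ pooled, 0.0)) # bottleneck MLP -> (C,)
    return weights[:, None, None]                        # broadcastable over H, W

def pixel_attention(feat, wp):
    """Per-pixel attention map from a 1x1 conv across channels. feat: (C, H, W)."""
    attn = sigmoid(np.einsum('c,chw->hw', wp, feat))     # (H, W)
    return attn[None, :, :]                              # broadcastable over C

def faam(feat, w1, w2, wp):
    """Parallel aggregation: both branches see the same input feature map,
    and their recalibrated outputs are combined with a residual connection
    (the exact combination rule here is a hypothetical choice)."""
    ca = channel_attention(feat, w1, w2)
    pa = pixel_attention(feat, wp)
    return feat + feat * ca + feat * pa
```

Running both branches on the same input (rather than chaining them) is what makes the aggregation "parallel": each branch independently decides which channels or pixels to emphasize before the results are merged.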
FAAM recalibrates features by integrating channel attention and pixel attention in parallel, stimulating useful information and suppressing redundant features. The shallow and deep layers of neural networks tend to characterize the low-level and high-level semantic features of images, respectively, so we introduce AFFM to fuse these two kinds of features adaptively. Meanwhile, a joint loss function, composed of L1 loss, perceptual loss, and structural similarity (SSIM) loss, is employed during training to produce results with more vivid colors and richer details. Comprehensive experiments on both synthetic and real-world images demonstrate the impressive performance of the proposed approach.
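The joint loss can be sketched in a few lines. The weights `w_l1`, `w_perc`, and `w_ssim` below are illustrative placeholders (the paper's values are not given here), the SSIM term uses a simplified single-window formulation rather than the usual sliding-window version, and `feat_fn` stands in for a pretrained feature extractor (e.g. a VGG layer) that the perceptual loss would normally require.

```python
import numpy as np

def l1_loss(pred, target):
    # mean absolute error between restored and ground-truth images
    return np.abs(pred - target).mean()

def ssim_global(pred, target, c1=0.01 ** 2, c2=0.03 ** 2):
    # simplified SSIM computed over the whole image as one window
    mu_x, mu_y = pred.mean(), target.mean()
    var_x, var_y = pred.var(), target.var()
    cov = ((pred - mu_x) * (target - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def joint_loss(pred, target, feat_fn, w_l1=1.0, w_perc=0.04, w_ssim=0.5):
    # perceptual term: L2 distance in a feature space (feat_fn is a stand-in
    # for a pretrained network's activations); SSIM enters as 1 - SSIM so
    # that higher structural similarity lowers the loss
    perc = np.mean((feat_fn(pred) - feat_fn(target)) ** 2)
    return (w_l1 * l1_loss(pred, target)
            + w_perc * perc
            + w_ssim * (1.0 - ssim_global(pred, target)))
```

By construction the loss vanishes when the prediction equals the target: the L1 and perceptual terms are zero and SSIM is exactly 1, so each component pulls the network toward a different notion of fidelity (pixel values, semantics, structure).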