Layer Decomposition Learning Based on Discriminative Feature Group Split with Bottom-Up Intergroup Feature Fusion for Single Image Deraining
PUBLICATION | IEEE Access, 2024
AUTHORS | Yunseon Jang, Duc-Tai Le, Chang-Hwan Son, Hyunseung Choo |
Abstract
As rain streaks hinder feature extraction based on image gradients, the performance of computer vision algorithms such as pedestrian detection and lane detection can be negatively affected. Image deraining is therefore an essential pre-processing step for these algorithms to work reliably in adverse weather conditions. However, detail and texture information in object and background areas can be lost during deraining because of the structural similarity between rain streaks and object details. To address this, we present a novel layer decomposition learning network (LDLNet) that separates the rain streak and object detail layers of an image and removes rain streaks effectively. The proposed LDLNet consists of two parts: discriminative group feature split and group feature merging. The former arranges sparse residual attention modules in series to extract spatial contextual features of rain images, following a novel bottom-up inter-group feature fusion approach. The group feature merging part aggregates all the group features, each representing a different image appearance, into a residual image. Experimental results reveal that the proposed approach makes group features more discriminative, with different groups representing different image appearances, including rain streaks and object details. LDLNet achieves superior rain removal and detail preservation on both synthetic datasets and real-world rainy images compared to state-of-the-art rain removal models.
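The pipeline described in the abstract can be sketched at a very high level: features are split into groups, the groups are processed in series with bottom-up inter-group fusion (each group receives the previous group's output), and the group features are merged into a residual (rain) image that is subtracted from the input. The sketch below is a toy illustration of that data flow only; the module internals, shapes, and the `toy_attention_module` stand-in are assumptions, not the authors' implementation.

```python
import numpy as np

def toy_attention_module(x, seed):
    # Hypothetical stand-in for one sparse residual attention module:
    # a fixed random channel mixing plus a residual connection.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((x.shape[0], x.shape[0])) * 0.1
    return x + np.tensordot(w, x, axes=(1, 0))

def ldlnet_sketch(rainy, n_groups=3):
    """Toy sketch of the LDLNet data flow described in the abstract.

    Illustrative assumptions throughout:
      1. split features into n_groups groups (here, crude copies),
      2. process groups in series with bottom-up inter-group fusion
         (each group is fused with the previous group's output),
      3. merge all group features into one residual image,
      4. subtract the residual (rain layer) from the rainy input.
    """
    groups = [rainy.copy() for _ in range(n_groups)]
    outputs = []
    prev = np.zeros_like(rainy)
    for g in range(n_groups):
        # bottom-up fusion: add the previous group's output before processing
        out = toy_attention_module(groups[g] + prev, seed=g)
        outputs.append(out)
        prev = out
    # group feature merging: aggregate all group features into a residual image
    residual = np.mean(outputs, axis=0)
    derained = rainy - residual
    return derained, residual
```

By construction the two predicted layers recompose the input (`rainy == derained + residual`), which mirrors the layer-decomposition idea: the network outputs a rain layer, and the derained image is the input minus that layer.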