A multi-scale end-to-end visible and infrared image enhancement fusion method
Abstract
Infrared and visible image fusion is an active topic in image processing; its goal is to merge the complementary information from the source images into a single fused image containing richer information. However, the images used for fusion are often captured under extreme low-light conditions at night, which greatly degrades the quality of the visible images and leads to poor fusion results, and most existing image fusion algorithms do not take illumination into account. For this reason, we propose a multi-scale, end-to-end image enhancement fusion method that enhances the illumination of the image while performing fusion, which greatly improves the quality of the fused image. The model builds on existing autoencoder-based fusion network designs and comprises four parts: a visible image encoder, an infrared image encoder, a decoder, and a fusion module. Training follows a novel three-stage strategy: in the first stage, the visible image enhancement network, consisting of the visible image encoder and the decoder, is trained; in the second stage, the autoencoder consisting of the infrared image encoder and the decoder is trained; the fusion module is then trained in the third stage. This staged training strategy not only ensures that each component is learned effectively but also yields more accurate and robust fusion performance in complex environments. Experimental results on public datasets show that our end-to-end fusion network achieves superior visual results compared to existing methods. The code is available at https://gitee.com/zhangyuchen12316/fusion
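The four components and the three-stage training order described in the abstract can be sketched structurally as follows. This is a minimal, hypothetical skeleton, not the authors' implementation: the component internals (identity encoders/decoder, mean fusion), class names, and method names are all placeholder assumptions used only to illustrate the staged pipeline.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Toy "image": a flat list of pixel intensities in [0, 1].
Image = List[float]

@dataclass
class FusionModel:
    # The four components named in the abstract; each callable is a
    # placeholder standing in for a trainable network component.
    vis_encoder: Callable[[Image], Image]
    ir_encoder: Callable[[Image], Image]
    decoder: Callable[[Image], Image]
    fuse: Callable[[Image, Image], Image]
    trained_stages: List[str] = field(default_factory=list)

    # Stage 1: train the visible-image enhancement path
    # (visible encoder + shared decoder).
    def train_stage1_visible_enhancement(self) -> None:
        self.trained_stages.append("visible encoder + decoder")

    # Stage 2: train the infrared autoencoder
    # (infrared encoder + shared decoder).
    def train_stage2_infrared_autoencoder(self) -> None:
        self.trained_stages.append("infrared encoder + decoder")

    # Stage 3: train the fusion module on top of the learned encoders.
    def train_stage3_fusion(self) -> None:
        self.trained_stages.append("fusion module")

    def forward(self, vis: Image, ir: Image) -> Image:
        # Encode each modality, fuse the features, decode the fused image.
        return self.decoder(self.fuse(self.vis_encoder(vis),
                                      self.ir_encoder(ir)))

if __name__ == "__main__":
    model = FusionModel(
        vis_encoder=lambda x: x,  # identity placeholder
        ir_encoder=lambda x: x,   # identity placeholder
        decoder=lambda x: x,      # identity placeholder
        # Placeholder fusion rule: element-wise mean of the two feature maps.
        fuse=lambda a, b: [(u + v) / 2 for u, v in zip(a, b)],
    )
    # The three-stage training order from the abstract.
    model.train_stage1_visible_enhancement()
    model.train_stage2_infrared_autoencoder()
    model.train_stage3_fusion()
    print(model.forward([0.25, 0.5], [0.75, 0.5]))  # prints [0.5, 0.5]
```

In the real network each callable would be a multi-scale convolutional module and each training stage would optimize its own reconstruction or fusion loss; the skeleton only captures which components are trained at which stage.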