Multispectral YOLO: Generic Feature Fusion Framework for Solar Active Region Detection
Abstract
Monitoring solar phenomena such as sunspots and active regions is crucial for astronaut safety, telecommunications reliability, and the prediction of terrestrial events such as auroras. Traditional methods for detecting these phenomena have limitations in accuracy and in maintaining consistent baselines. This paper presents a novel deep learning object detection method that leverages multispectral satellite imagery to enhance the detection of sunspots and active regions. Using images from the SDO satellite and annotations from the DeepSDO dataset, we constructed a new dataset of aligned observations from HMI Ic, AIA 211 Å, and AIA 335 Å. We extended a stock YOLOv5 model into a generic framework able to ingest and fuse any number of input images. Two fusion strategies, early and late fusion, and three fusion modules --- CatFuse (simple concatenation), CBAMC (a CBAM-based module), and TransEnc (a transformer encoder) --- were implemented and tested. A critical evaluation supported by statistical analysis showed the models to be statistically significantly different from one another (p < 0.05) and identified the best-performing configuration: CatFuse with early fusion, which achieved a mAP@0.5:0.95 of 0.52 and a mAP@0.5 of 0.94. This result marginally exceeded the best baseline (YOLOv5 with a single HMI image) and was comparable to other state-of-the-art models, demonstrating a modest but consistent benefit of multispectral image fusion for this task.
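To make the early-fusion idea concrete, the following is a minimal sketch of channel concatenation in the spirit of CatFuse: each spectral band (e.g. HMI Ic, AIA 211 Å, AIA 335 Å) contributes its channels, and early fusion stacks them into one multi-channel input before the detector backbone sees them. The function name and the plain nested-list representation are illustrative assumptions, not taken from the paper's implementation.

```python
def cat_fuse(images):
    """Concatenate per-band images along the channel axis.

    `images` is a list of [C][H][W] nested lists (one entry per spectral
    band); returns a single [sum(C)][H][W] nested list. This mirrors a
    channel-wise concatenation such as torch.cat(images, dim=0) in a
    framework implementation. Illustrative sketch, not the paper's code.
    """
    fused = []
    for img in images:
        fused.extend(img)  # append each band's channels in order
    return fused

# Toy 2x2 single-channel "bands" standing in for the three instruments
hmi = [[[1, 2], [3, 4]]]
aia211 = [[[5, 6], [7, 8]]]
aia335 = [[[9, 10], [11, 12]]]

fused = cat_fuse([hmi, aia211, aia335])
print(len(fused))  # 3 channels after fusing three single-channel bands
```

In a real detector the fused stack would simply replace the usual 3-channel RGB input, so only the first convolution layer of the backbone needs its input-channel count adjusted.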