Unified method for image reconstruction and super-resolution of SFA video sequences

Abstract

Spectral filter array (SFA) cameras provide a cost-effective, single-shot solution for spectral imaging across multiple bands. However, because each pixel captures only a single spectral band, SFA cameras typically exhibit lower spatial resolution than traditional color cameras, and the sparse spatial sampling makes demosaicking a challenging task. Existing methods either process frames independently, leading to aliasing artifacts, or apply sequential demosaicking and super-resolution, which fails to fully exploit the temporal redundancy available in multi-frame sequences. In this paper, we propose a novel joint multi-frame demosaicking and super-resolution framework based on deep convolutional networks. Unlike prior work, our approach simultaneously reconstructs high-resolution spectral images while leveraging temporal redundancy from adjacent frames, significantly reducing aliasing artifacts and improving spectral fidelity. In extensive experiments on large synthetic spectral video datasets, our method achieves a PSNR of 32.17 dB, outperforming the best competitor (29.83 dB), with enhanced visual quality. We further validate our approach on real SFA data captured with the CMS-C camera (Silios Technologies), demonstrating its practical applicability and robustness in real-world scenarios. The code and datasets are available at github.com/HamidFsian/MultiFrameDemoSR
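To illustrate the sparse sampling that makes SFA demosaicking hard (not the paper's reconstruction method), the following sketch simulates single-shot SFA acquisition from a full-resolution spectral cube: each pixel retains only the band assigned by a periodic mosaic. The 3×3 pattern (nine bands) is an assumption chosen to match typical 9-band SFA sensors; the `sfa_mosaic` helper name is illustrative.

```python
import numpy as np

def sfa_mosaic(cube, pattern_size=3):
    """Simulate SFA sampling: keep one band per pixel according to a
    periodic pattern_size x pattern_size mosaic.

    cube: (H, W, B) spectral image with B == pattern_size**2 bands.
    Returns a (H, W) raw mosaic image.
    """
    h, w, bands = cube.shape
    assert bands == pattern_size ** 2, "one mosaic cell per band"
    mosaic = np.zeros((h, w), dtype=cube.dtype)
    for b in range(bands):
        # Band b occupies row r, column c within each mosaic cell,
        # repeated with stride pattern_size across the sensor.
        r, c = divmod(b, pattern_size)
        mosaic[r::pattern_size, c::pattern_size] = \
            cube[r::pattern_size, c::pattern_size, b]
    return mosaic
```

Each band is thus sampled at only 1/9 of the pixel locations, which is the sparse sampling rate the abstract refers to; demosaicking must recover the missing 8/9 of each band, and super-resolution must then recover detail beyond the sensor grid.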
