Pushing boundaries in remote sensing: A comprehensive review of deep learning for spatial super-resolution

2026-02-10

Mohamed Aymen Ben Khalifa, Mourad El Koundi, Imed Riadh Farah,
Pushing boundaries in remote sensing: A comprehensive review of deep learning for spatial super-resolution,
Remote Sensing Applications: Society and Environment,
Volume 40,
2025,
101809,
ISSN 2352-9385,
https://doi.org/10.1016/j.rsase.2025.101809.
(https://www.sciencedirect.com/science/article/pii/S2352938525003623)
Abstract: Remote sensing image spatial super-resolution (RSISR) leverages deep learning to overcome limitations in spatial detail, which is critical for applications such as precision agriculture, environmental monitoring, and urban planning. Deep learning (DL) has transformed RSISR, with advances in spatial detail reconstruction driven by convolutional neural networks (CNNs), generative adversarial networks (GANs), transformers, and denoising diffusion probabilistic models (DDPMs). This comprehensive review synthesizes 749 papers from 2018 to 2025, assessing deep learning models, datasets, and techniques for enhancing spatial resolution in remote sensing imagery. It presents a refined taxonomy of DL-based RSISR models, critically comparing their accuracy, efficiency, and applicability across the AID, DOTA, UC Merced, and NWPU-RESISC45 benchmark datasets using PSNR, SSIM, and LPIPS. CNNs, notably residual and attention-based architectures, have historically led RSISR, yet generative models such as DDPMs are increasingly prominent. Key challenges, including large-scale scenes, multi-band data, atmospheric noise, and computational complexity, are assessed in relation to model performance. By synthesizing trends and proposing future directions, including physics-informed approaches, this review offers a rigorous foundation for advancing deep learning in RSISR, supporting precision-driven Earth observation.
Keywords: Remote sensing; Super-resolution; Deep learning; Image enhancement; Spatial super-resolution
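Of the evaluation metrics the abstract names (PSNR, SSIM, LPIPS), PSNR is the simplest to reproduce from its definition. The sketch below is a minimal NumPy implementation, not taken from the reviewed paper; it assumes 8-bit imagery (`data_range=255`), which would need adjusting for, e.g., 16-bit multispectral bands.

```python
# Minimal PSNR sketch for super-resolution evaluation (hypothetical
# example; assumes 8-bit imagery, i.e. data_range = 255).
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray,
         data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64)
                   - estimate.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Toy check: a uniform +10 error on 8-bit data gives MSE = 100.
hr = np.full((8, 8), 100.0)    # stand-in high-resolution patch
sr = hr + 10.0                 # stand-in super-resolved output
print(round(psnr(hr, sr), 2))  # ≈ 28.13 dB
```

SSIM and LPIPS involve windowed statistics and a learned network, respectively, so in practice they are usually computed with established libraries (e.g. scikit-image for SSIM) rather than reimplemented.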