Thesis

Satellite image cloud removal: learning within and beyond the sample

Awarding institution
  • University of Strathclyde
Date of award
  • 2023
Thesis identifier
  • T16755
Person Identifier (Local)
  • 201983200
Abstract
  • Earth observation technologies constitute a powerful set of tools for understanding the systems and ongoing processes that occur on Earth. These technologies rely largely on satellite imaging, that is, observation from remote positions away from the planet’s surface. As a consequence, a large portion of satellite imagery in the optical spectrum is hindered by the presence of clouds in the atmosphere, which obstruct a clear view of the ground. While it is difficult to prevent this issue at the acquisition stage, it may be possible to approach the problem from a data-based perspective by processing the images already affected by clouds. More precisely, cloud removal aims to approximate the features of the ground obscured by the clouds present in a given image. Given the established power and versatility of deep learning methods for image synthesis, recent solutions to the cloud removal problem are primarily based on deep neural networks. Still, state-of-the-art techniques are limited in several respects: they often cannot easily adapt to new signal representations or ingest new types of guidance signals, and, trained on limited datasets, they run the risk of overfitting. Furthermore, the evaluation of these models is often performed on non-ideal validation data, where the available cloud-free reference diverges from the theoretical ground truth corresponding to a given cloudy image. This work explores several themes related to these limitations and proposes solutions to overcome some of them. Several novel methods for cloud removal and satellite image inpainting are proposed, most of them operating in an internal learning setting, where no dataset-based training is performed. These methods rely either on the information present in the inference sample itself or on priors captured by models trained on other tasks, such as vision-language models. The key advantage of these techniques is their flexibility and ability to adapt to diverse data scenarios, with different numbers of channels and guidance signals. The related problem of evaluating cloud removal solutions and training them on reliable data is also explored; consequently, a novel framework, SatelliteCloudGenerator, is proposed for simulating clouds and shadows in optical multi-spectral images. The key advantage of this approach is a high degree of control over the features of the generated clouds, based on a set of adjustable parameters. The quality of the simulated images is further demonstrated by applying models trained exclusively on simulated data to real images. Finally, the question of how the proposed internal learning and language-based techniques compare to an externally trained model is addressed by testing these approaches on a common dataset with both historical (Sentinel-2) and cross-sensor (Sentinel-1) guidance. It is found that, despite a promising level of performance, a gap remains between the internal learning and language-based methods and externally trained solutions.
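
The abstract refers to two recurring technical ideas: internal learning, where a model is optimized on the single cloudy image being restored rather than on a training dataset, and parametric cloud simulation for producing reliable training and evaluation data. The sketches below are only generic illustrations of these ideas; none of the function or variable names come from the thesis or from the SatelliteCloudGenerator project.

A minimal internal-learning sketch, assuming PyTorch and a Deep-Image-Prior-style setup: a small convolutional network is fit to one multi-spectral image using only its cloud-free pixels, and its prediction fills the cloud-covered region.

```python
# Hypothetical illustration of internal learning on a single image; not the thesis's method.
import torch
import torch.nn as nn

def internal_inpaint(cloudy, cloud_mask, steps=2000, lr=1e-3):
    """cloudy: (1, C, H, W) image tensor; cloud_mask: (1, 1, H, W), 1 where cloud hides the ground."""
    channels = cloudy.shape[1]
    # Small convolutional network mapping a fixed noise code to an image (a generic choice).
    net = nn.Sequential(
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, channels, 3, padding=1),
    )
    z = torch.randn(1, 32, cloudy.shape[2], cloudy.shape[3])  # fixed random input code
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    visible = 1.0 - cloud_mask                                # supervise only on cloud-free pixels
    for _ in range(steps):
        opt.zero_grad()
        pred = net(z)
        loss = ((pred - cloudy) ** 2 * visible).mean()        # masked reconstruction loss
        loss.backward()
        opt.step()
    # Keep observed pixels and fill the cloud-covered region with the network prediction.
    return cloudy * visible + net(z).detach() * cloud_mask
```

A minimal sketch of parametric cloud simulation, assuming NumPy; the `coverage` and `softness` parameters are hypothetical stand-ins for the kind of adjustable controls the abstract describes, not the actual SatelliteCloudGenerator interface.

```python
# Hypothetical illustration of a controllable cloud corruption; not the SatelliteCloudGenerator API.
import numpy as np

def simulate_cloud(image, coverage=0.5, softness=8, seed=0):
    """image: (C, H, W) reflectance array; returns a cloud-corrupted copy and the mask used."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[-2:]
    # Low-frequency random field as a crude stand-in for cloud texture.
    coarse = rng.random((h // softness + 1, w // softness + 1))
    field = np.kron(coarse, np.ones((softness, softness)))[:h, :w]
    # Threshold the field so that roughly `coverage` of the pixels become (partly) opaque cloud.
    mask = np.clip((field - (1.0 - coverage)) / max(coverage, 1e-6), 0.0, 1.0)
    cloud_value = image.max()   # bright, uniform cloud reflectance: a deliberate simplification
    corrupted = image * (1.0 - mask) + cloud_value * mask
    return corrupted, mask
```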
Advisor / supervisor
  • Tachtatzis, Christos