Thesis
Turbulence mitigation with deep learning for image recognition
- Creator
- Rights statement
- Awarding institution
- University of Strathclyde
- Date of award
- 2026
- Thesis identifier
- T17600
- Person Identifier (Local)
- 201857505
- Qualification Level
- Qualification Name
- Department, School or Faculty
- Abstract
- The classification and tracking of targets is commonly performed over large distances, and can be impacted by atmospheric turbulence. As light travels from the target towards the camera lens, it is distorted by fluctuations in the refractive index of air. When incident on a camera lens, such a distorted wave results in blur and warping of the final image, increasing the difficulty of post-acquisition tasks such as tracking. It is therefore desirable to minimise such distortions. Whilst traditional turbulence mitigation techniques are able to improve the quality of such images, the ever-increasing availability of deep learning has enabled huge progress in many computer vision and image processing applications, prompting an investigation into its applicability to turbulence mitigation. This thesis presents such an investigation. In order to train deep learning networks, a substantial dataset is required; however, due to the difficulty of acquiring real data for turbulence mitigation, simulation tools are an obvious alternative, applying turbulence effects to clean images. This not only allows any image content to be used, but also gives full control over the atmospheric conditions. This thesis presents the development of a turbulence simulator, in which the light from a point source is propagated through a simulated atmosphere using a split-step propagation method. Using this simulator, several datasets were synthesised and used throughout the remainder of the work. Leveraging the data generated by the simulator, the applicability of deep learning to turbulence mitigation was investigated using existing off-the-shelf deep learning architectures not originally designed for turbulence mitigation.
By retraining these networks on turbulent data, their ability to shift domain was tested. It was found that deep learning could indeed be applied to turbulence mitigation; however, more success was gained by using video sequences rather than single images. With this information, two video processing models were tested: EDVR and DATUM. It was found that, by altering the loss function of these models to prioritise perceptual image quality, post-mitigation classification could be greatly improved. In this context, it was also found that the turbulence mitigation literature over-relies on common image quality metrics such as PSNR and SSIM. As tasks such as classification and tracking are the motivation behind turbulence mitigation, this research argues that such metrics are ill-suited to assessing the quality of turbulence mitigation algorithms. Therefore, to identify which metrics are better suited to this task, metric scores were compared with classification accuracy, and the metrics that best aligned with that accuracy were identified as the optimal metrics for turbulence mitigation quality assessment. DISTS was found to be the best full-reference metric, whilst Q-ALIGN was found to be the best no-reference metric. The full scope of this thesis therefore covers the varied aspects of turbulence mitigation research, from data sparsity and simulation to mitigation and quality assessment.
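The split-step approach described in the abstract can be sketched in a few lines: the field from a point source is alternately propagated through free space (via the angular spectrum / Fresnel transfer function) and multiplied by thin random phase screens standing in for refractive-index fluctuations. This is a minimal illustrative sketch, not the thesis's implementation; all function names, parameters, and the Gaussian phase screens are assumptions (a real simulator would use Kolmogorov-spectrum screens).

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, dz):
    """Propagate a complex field a distance dz (paraxial Fresnel transfer function)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)           # spatial frequencies for grid spacing dx
    fxx, fyy = np.meshgrid(fx, fx)
    # Free-space transfer function; the constant exp(ikz) carrier phase is omitted.
    H = np.exp(-1j * np.pi * wavelength * dz * (fxx**2 + fyy**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def split_step(field, phase_screens, wavelength, dx, dz):
    """Alternate free-space propagation with thin phase screens (split-step method)."""
    for screen in phase_screens:
        field = angular_spectrum_propagate(field, wavelength, dx, dz)
        field = field * np.exp(1j * screen)  # apply turbulence-induced phase distortion
    return field

# Illustrative use: a plane-wave field through three weak Gaussian phase screens.
n = 64
rng = np.random.default_rng(0)
field = np.ones((n, n), dtype=complex)
screens = [0.1 * rng.standard_normal((n, n)) for _ in range(3)]
distorted = split_step(field, screens, wavelength=1e-6, dx=1e-2, dz=100.0)
```

Because both the transfer function and the phase screens are unit-modulus, each step conserves optical power; only the phase (and hence the focused image) is corrupted, which is exactly the blur-and-warp effect the abstract describes.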
- Advisor / supervisor
- Di Caterina, Gaetano
- Lamb, Robert
- Resource Type
- DOI
Relations
Items
| Thumbnail | Title | Date Uploaded | Visibility | Actions |
|---|---|---|---|---|
| | PDF of thesis T17600 | 2026-02-11 | Public | Download |