Thesis

Dictionary learning for scalable sparse image representation

Creator
Rights statement
Awarding institution
  • University of Strathclyde
Date of award
  • 2016
Thesis identifier
  • T14300
Person Identifier (Local)
  • 201050667
Qualification Level
Qualification Name
Department, School or Faculty
Abstract
  • The modern era of signal processing has produced many technical tools for recording and processing large and growing amounts of data, together with algorithms specialised for data analysis. This gives rise to new challenges in data processing and in modelling data representations. Fields ranging from the experimental sciences and astronomy to computer vision, neuroscience and mobile networks are in constant search of scalable and efficient processing tools that would enable more effective analysis of continuous video streams containing millions of pixels. The question of digital signal representation therefore remains of high importance, despite the significant amount of work devoted to it in the past. Developing new data processing methods also affects the quality of everyday life, where devices such as the CCD sensors in digital cameras and cell phones are used intensively for entertainment purposes.
    One such novel processing tool is sparse coding, which represents a signal as a linear combination of a few basis vectors, or atoms, drawn from an overcomplete dictionary (a minimal sketch of this model, and of the K-SVD iteration it builds on, is given after the abstract). Applications of sparse representation are numerous, including denoising, compression, regularisation of inverse problems and feature extraction.
    In this thesis we introduce and study a particular signal representation, denoted scalable sparse coding. It is based on a novel design for the dictionary learning algorithm, which has proven effective for the scalable sparse representation of many modalities, such as high-motion video sequences, natural images and solar images. The proposed algorithm builds on the K-SVD framework, originally designed to learn non-scalable dictionaries for natural images. The scalable dictionary learning design is motivated mainly by the perceptual characteristics of the Human Visual System (HVS). Its core structure exploits spatial high-frequency image components and contrast variations in order to identify the objects of a visual scene at every scalability level. The HVS properties are incorporated through a semi-random, Morphological Component Analysis (MCA) based initialisation of the scalable dictionary and through regularisation of its atom update mechanism, which together enable scalable sparse image reconstruction.
    In general, dictionary learning for sparse representations leads to state-of-the-art image restoration results for several different problems in image processing. The experiments in this thesis show that comparable results remain achievable when all dictionary elements are adapted to scalable data representation and reconstruction, hence modelling data that admit a sparse representation in a novel manner. The results demonstrate and validate the practicality of the proposed scheme, making it a promising candidate for many practical applications, including time-scalable display, denoising and scalable compressive sensing (CS). The simulations include scalable sparse recovery for static data and for dynamic data changing over time, such as video sequences and natural images. Lastly, we contribute novel approaches for scalable denoising and contrast enhancement (CE), applied to solar images corrupted by pixel-dependent Poisson noise and zero-mean additive white Gaussian noise. Since solar data contain noise introduced by the charge-coupled devices of the on-board acquisition system, these artefacts have to be removed prior to image analysis; novel image denoising and contrast enhancement methods are therefore necessary for solar image preprocessing.
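
A minimal illustrative sketch of the sparse coding model mentioned in the abstract (not code from the thesis): a signal y is approximated as y ≈ Dx, where the dictionary D is overcomplete and the coefficient vector x has only a few non-zero entries. The greedy Orthogonal Matching Pursuit routine below is a plain-NumPy sketch with illustrative names and a random, unlearned dictionary.

    import numpy as np

    def omp(D, y, k):
        """Orthogonal Matching Pursuit: approximate y with at most k atoms of D."""
        n_atoms = D.shape[1]
        support = []                                   # indices of selected atoms
        x = np.zeros(n_atoms)
        residual = y.copy()
        for _ in range(k):
            # Pick the atom most correlated with the current residual.
            j = int(np.argmax(np.abs(D.T @ residual)))
            if j not in support:
                support.append(j)
            # Least-squares fit on the selected atoms, then refresh the residual.
            coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            x[:] = 0.0
            x[support] = coeffs
            residual = y - D @ x
        return x

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))                 # overcomplete dictionary: 64-dim signals, 256 atoms
    D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms
    x_true = np.zeros(256)
    x_true[rng.choice(256, size=5, replace=False)] = rng.standard_normal(5)
    y = D @ x_true                                     # synthetic 5-sparse signal
    x_hat = omp(D, y, k=5)
    print("reconstruction error:", np.linalg.norm(y - D @ x_hat))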
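
The thesis builds its scalable design on the K-SVD framework; as a generic illustration only (the scalable, MCA-initialised variant summarised above is not reproduced here), one standard K-SVD iteration alternates a sparse coding stage with rank-1 atom updates. The sketch below reuses the omp helper from the previous snippet and assumes the training signals are the columns of a matrix Y.

    import numpy as np

    def ksvd_step(D, Y, k):
        """One generic K-SVD iteration: returns the updated dictionary D and codes X."""
        # Sparse coding stage: code every training signal with at most k atoms.
        X = np.column_stack([omp(D, y, k) for y in Y.T])
        # Atom update stage: revise each atom and its coefficients jointly.
        for j in range(D.shape[1]):
            users = np.nonzero(X[j, :])[0]             # signals that currently use atom j
            if users.size == 0:
                continue
            # Residual without atom j's contribution, restricted to those signals.
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]                          # best rank-1 atom for this residual
            X[j, users] = s[0] * Vt[0, :]              # matching coefficients
        return D, X
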
Resource Type
DOI
Date Created
  • 2016
Former identifier
  • 9912521589302996
