Thesis

Quantitative assessment of vocal cord abduction and adduction to measure movement symmetry using flexible fibreoptic endoscopy

Creator
Rights statement
Awarding institution
  • University of Strathclyde
Date of award
  • 2020
Thesis identifier
  • T16111
Person Identifier (Local)
  • 201383304
Qualification Level
Qualification Name
Department, School or Faculty
Abstract
  • Vocal cord movement abnormalities are diagnosed by subjective visual assessment using endoscopy. Objective measures based on image processing have been proposed in previous studies to overcome the subjectivity of current clinical practice; however, they have mainly focussed on quantifying high-speed vocal cord vibrations using specialist, expensive acquisition systems. An approach more applicable to routine clinics is to quantify the slower vocal cord movements, i.e., abduction and adduction, which are recordable at normal camera capture rates. Moreover, in the UK the flexible fibreoptic endoscope is preferred for primary diagnosis, but it yields poorer image quality than the rigid laryngoscope commonly used for objective assessment. Therefore, in this thesis a generalisable technique is developed that quantifies vocal cord abduction and adduction through novel image processing of videos acquired at the routine voice clinic. In the absence of publicly available data of vocal cord motion acquired at the voice clinic using flexible fibreoptic endoscopy at normal camera capture rates, such a database is created in this work, comprising 30 videos of normal and abnormal cases. A 5-category scale is designed for quantifying vocal cord motion because clinicians do not currently have a numerical grading system. Vocal cord motion in the video database is graded on the proposed scale by six clinicians through subjective visual assessment. Inter- and intra-rater agreement and reliability measures are computed to evaluate their performance using the scale, and ground truth scores of vocal cord motion are derived from the clinicians’ ratings for all videos in the database. A novel framework is presented for the localisation and segmentation of the glottal area in a given image sequence of vocal cord abduction or adduction from the database. The challenges specific to abducting and adducting vocal cords in fibreoptic endoscopy videos are addressed, since algorithms developed in previous studies for vibrating vocal cords imaged with rigid endoscopy cannot be applied directly to the present database. In particular, the honeycomb artefact is suppressed, and a knowledge-based approach is proposed for glottis localisation and removal of spatial glottal drift, utilising a single user-defined reference point in one frame of a sequence. Techniques are proposed for image enhancement, initial contour estimation using SUSAN edge detection and thresholding, and glottal area segmentation with a localised region-based level set method. Together, these techniques form a novel framework that accounts for the variation in shape, size and illumination of the glottal area in a sequence of abducting/adducting vocal cords. A novel model called SynGlotIm is developed to create synthetic image sequences of the glottal area during abduction and adduction. Analogous to the head phantoms used in MRI, this model is the first of its kind to synthesise glottal images over a realistic range of abduction angles, intensity inhomogeneity patterns of the glottal area, image contrast, blurring and noise, through modification of its input parameters. Four synthetic sequences that simulate real ones from the database are segmented, and the similarity between the contours segmented from the synthetic and real images demonstrates that SynGlotIm can be used to validate segmentation algorithms.
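For illustration only, and not drawn from the thesis itself, a minimal Python sketch of this kind of segmentation step might look as follows. It substitutes simple Otsu thresholding for the SUSAN-based initial contour estimation and scikit-image's morphological Chan-Vese for the localised region-based level set, and the smoothing, window size and reference-point handling are all assumptions made for this example.

    # Illustrative sketch only (not the thesis implementation): rough glottal
    # segmentation of one frame. Otsu thresholding stands in for the SUSAN-based
    # initial contour and morphological Chan-Vese for the localised region-based
    # level set; all parameter values are assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import exposure, filters, segmentation

    def segment_glottis(frame, ref_point, radius=40):
        """frame: 2-D grayscale image scaled to [0, 1]; ref_point: (row, col) chosen by the user."""
        # Light smoothing to attenuate high-frequency texture such as the fibre honeycomb pattern.
        smoothed = gaussian_filter(frame, sigma=2)
        # Local contrast enhancement so the dark glottal gap stands out from the folds.
        enhanced = exposure.equalize_adapthist(smoothed)

        # Restrict processing to a window around the user-defined reference point.
        r, c = ref_point
        r0, r1 = max(r - radius, 0), min(r + radius, frame.shape[0])
        c0, c1 = max(c - radius, 0), min(c + radius, frame.shape[1])
        roi = enhanced[r0:r1, c0:c1]

        # The glottal gap is darker than the surrounding tissue: keep pixels below Otsu.
        init = roi < filters.threshold_otsu(roi)

        # Refine the rough mask with a region-based active contour (stand-in method).
        refined = segmentation.morphological_chan_vese(roi, 100, init_level_set=init, smoothing=2)

        mask = np.zeros(frame.shape, dtype=bool)
        mask[r0:r1, c0:c1] = refined.astype(bool)
        return mask

In practice the reference point would come from the knowledge-based localisation step described above, and the refinement would be repeated frame by frame so that the contour can follow the changing shape, size and illumination of the glottal area during abduction and adduction.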
The use of SynGlotIm thus serves as an alternative to the laborious and time-consuming process of generating manually marked ground truth contours by clinicians. The quantification of vocal cord abduction/adduction has so far only been achieved by measuring the angle between the straight edges of the vocal cords, a measure prone to inaccuracies such as those caused by tilt of the endoscope. Therefore, a novel approach is proposed wherein optical flow is used for motion estimation of the vocal cord edges and two optical flow features are extracted to generate a symmetry score from 0 to 1, where higher values indicate better symmetry. Of the two features, the Histogram of Oriented Optical Flow (HOOF) provides the better estimate of the degree of symmetry in paralysed cases. Furthermore, the Maximum Abduction Angle (MAA) of the glottis is calculated automatically. Finally, an improved estimate of movement symmetry during vocal cord abduction/adduction is obtained by training a Radial Basis Function (RBF) neural network on the HOOF symmetry scores and the MAA values to generate a quantitative score. Moreover, the granularity of the proposed technique allows categorisation of cases as normal, paresis and paralysis, which has not been achieved in other studies. The proposed technique is potentially useful for evaluating post-treatment outcomes and in challenging cases such as the recognition of paresis.
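Again purely as an illustrative sketch rather than the method reported in the thesis, the fragment below estimates dense optical flow with OpenCV's Farnebäck algorithm, builds magnitude-weighted orientation histograms in the spirit of HOOF for the left and right halves of the glottal region, and compares them by histogram intersection to give a score in [0, 1]. The bin count, the mirroring of the horizontal flow component, the Farnebäck parameters and the mask handling are all assumptions made for this example.

    # Illustrative sketch only (not the thesis code): a HOOF-style left/right
    # symmetry score from dense optical flow between two consecutive frames.
    # The Farneback parameters, bin count and mirroring convention are assumptions.
    import numpy as np
    import cv2

    def hoof(flow, mask, n_bins=16):
        """Magnitude-weighted histogram of flow orientations inside a boolean mask."""
        fx, fy = flow[..., 0][mask], flow[..., 1][mask]
        magnitude = np.hypot(fx, fy)
        angle = np.arctan2(fy, fx)  # orientation in (-pi, pi]
        hist, _ = np.histogram(angle, bins=n_bins, range=(-np.pi, np.pi), weights=magnitude)
        total = hist.sum()
        return hist / total if total > 0 else hist

    def symmetry_score(prev_gray, next_gray, left_mask, right_mask, n_bins=16):
        """prev_gray, next_gray: consecutive 8-bit grayscale frames.
        Returns a score in [0, 1]; 1 means the two cord edges move with mirrored velocities."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Mirror the horizontal component on one side so that symmetric abduction or
        # adduction (the cords moving in opposite x-directions) produces matching histograms.
        mirrored = flow.copy()
        mirrored[..., 0] *= -1.0
        h_left = hoof(flow, left_mask, n_bins)
        h_right = hoof(mirrored, right_mask, n_bins)
        # Histogram intersection of two normalised histograms lies in [0, 1].
        return float(np.minimum(h_left, h_right).sum())

A score near 1 would indicate that the two cords move with mirrored velocities, while markedly lower scores would be consistent with paresis or paralysis; as described in the abstract, the thesis further combines the HOOF-based symmetry scores with the Maximum Abduction Angle through an RBF neural network to obtain the final quantitative score.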
Advisor / supervisor
  • Soraghan, John J.
  • Petropoulakis, L. (Lykourgos)
  • Lakany, Heba
Resource Type
DOI
Embargo Note
  • The digital copy of this thesis is currently under moratorium due to 3rd party copyright issues. If you are the author of this thesis, please contact the library to resolve this issue.
