Thesis

Indoor navigation efficiency improvement in intelligent assistive systems (IAS) using neural networks

Creator
Rights statement
Awarding institution
  • University of Strathclyde
Date of award
  • 2022
Thesis identifier
  • T16295
Person Identifier (Local)
  • 201765106
Qualification Level
Qualification Name
Department, School or Faculty
Abstract
  • This thesis addresses the fundamental problem of indoor home navigation in Intelligent Assistive Systems (IAS). Indoor home navigation in IAS is inefficient because the underlying indoor home scene and object recognition is inefficient, so this thesis develops several novel neural-network methods to address it. Beyond the navigation problem itself, the methods also target two well-known limitations of neural networks: a high total number of trainable parameters and poor accuracy on small datasets. A traditional Capsule Neural Network (CapsNet) is first applied to indoor home scene recognition, for the first time in this context. The CapsNet produced good accuracy but had a very high number of trainable parameters, which motivated the development of NoSquashCapsNet. In NoSquashCapsNet the squash function is removed from the capsules (capsules are the backbone of a CapsNet, encoding the orientation of features as vectors) and max-pooling layers are introduced into the architecture; an illustrative sketch of this modification is given below. These changes reduce the total number of parameters and lift the restriction that capsule vectors cannot change direction. CapsNet and NoSquashCapsNet produced the same accuracy, lower than but comparable with other networks, and in both cases on small datasets. Building on the experience gained from applying CapsNets to indoor home scene recognition, more efficient and more capable indoor object recognition networks were then developed by restructuring and improving the initial designs. The proposed CapsNets for indoor object recognition are 1D CapsNetA and 1D CapsNetB. These 1D CapsNets recognise 3D objects: using 3D object datasets makes it easier for CapsNets to capture object orientation, enabling an IAS to recognise objects from any viewpoint, and the method does not require converting 3D point cloud data into 3D voxel grids. Developing 1D CapsNets and presenting the 3D data in 1D array format (sketched below) reduces the total number of trainable parameters while producing comparable accuracy on small datasets. The result is an efficient system for recognising indoor objects in any orientation, enabling an IAS to handle any object when required. However, an IAS that can recognise any object in any orientation is of little use if it cannot also recognise indoor home scenes reliably. NoSquashCapsNet achieves an accuracy adequate for many tasks, but not for an IAS expected to provide indoor home assistance to elderly or infirm people. Therefore, a combination of Convolutional Neural Networks (CNNs) was used to develop efficient indoor home scene recognition, the idea being to recognise scenes through multiple object detection. Multiple object detection is performed by transfer learning from a pre-trained Mask-RCNN (Mask Region-based Convolutional Neural Network), chosen for its instance segmentation capability. The pre-trained Mask-RCNN yields different object combinations for different indoor home scenes, and a further CNN is developed that can be trained on these object combinations. The output of the pre-trained Mask-RCNN is connected to the newly developed CNN to perform indoor home scene recognition, giving the Mask-RCNN+CNN combination, outlined in the final sketch below. Despite being trained on a very small dataset, this CNN combination surpassed all currently available techniques in overall accuracy.
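The following is a minimal, hedged sketch of the NoSquashCapsNet idea described in the abstract: a primary-capsule layer in which the usual squash nonlinearity can be switched off and a max-pooling step is added. The layer sizes, names and PyTorch framing are illustrative assumptions, not the architecture or hyperparameters used in the thesis.

```python
# Minimal sketch of a primary-capsule layer with an optional squash step.
# Assumptions: all layer sizes and the PyTorch framing are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Standard CapsNet squash: rescales each capsule vector to a length in [0, 1)
    # while preserving its direction.
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

class PrimaryCapsules(nn.Module):
    # With use_squash=False this behaves like the NoSquashCapsNet variant:
    # capsule vectors are left unnormalised, so their length and direction are
    # unconstrained, and a max-pool layer shrinks the feature map that later
    # layers must process.
    def __init__(self, in_channels=256, caps_dim=8, n_maps=32, use_squash=True):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, n_maps * caps_dim, kernel_size=9, stride=2)
        self.caps_dim = caps_dim
        self.use_squash = use_squash

    def forward(self, x):
        x = F.max_pool2d(self.conv(x), kernel_size=2)   # pooling step added in the no-squash variant
        caps = x.view(x.size(0), -1, self.caps_dim)     # (batch, n_capsules, caps_dim)
        return squash(caps) if self.use_squash else caps

# Example: feature maps from an earlier convolutional layer, 20x20 spatial size.
capsules = PrimaryCapsules(use_squash=False)(torch.randn(1, 256, 20, 20))
```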
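The 1D-capsule point-cloud idea can be illustrated in the same hedged way: the point cloud is treated as a 1D array with three coordinate channels, so 1D convolutions form capsule features directly, with no voxelisation step. The network below is a simplified stand-in (no routing, illustrative sizes), not 1D CapsNetA or 1D CapsNetB themselves.

```python
# Simplified stand-in for the 1D-capsule point-cloud idea (not the thesis networks).
# Assumptions: point clouds arrive as (batch, 3, n_points) tensors; routing and
# other CapsNet details are omitted; all sizes are illustrative.
import torch
import torch.nn as nn

class PointCaps1D(nn.Module):
    def __init__(self, caps_dim=8, n_maps=16, n_classes=10):
        super().__init__()
        # 1D convolutions over the raw point list replace the 3D/2D convolutions a
        # voxel-grid approach would need, which is where the parameter saving comes from.
        self.conv1 = nn.Conv1d(3, 64, kernel_size=1)
        self.conv2 = nn.Conv1d(64, n_maps * caps_dim, kernel_size=1)
        self.caps_dim = caps_dim
        self.classifier = nn.Linear(n_maps * caps_dim, n_classes)

    def forward(self, pts):                          # pts: (batch, 3, n_points)
        x = torch.relu(self.conv1(pts))
        x = self.conv2(x)                            # per-point capsule features
        x = torch.max(x, dim=2).values               # order-invariant pooling over points
        caps = x.view(x.size(0), -1, self.caps_dim)  # (batch, n_maps, caps_dim) capsule vectors
        return self.classifier(caps.flatten(1))      # class logits

# Example: classify a single 1024-point object.
logits = PointCaps1D()(torch.randn(1, 3, 1024))
```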
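Finally, a hedged sketch of the Mask-RCNN+CNN combination: a frozen, COCO-pre-trained Mask-RCNN from torchvision detects the objects in an image, the detections are summarised as an object-combination vector, and a small trainable head maps that vector to a scene label. The count-vector encoding, the confidence threshold and the fully connected head (standing in for the thesis's own CNN) are assumptions made for illustration.

```python
# Hedged sketch of the Mask-RCNN + CNN combination. Assumptions: a recent
# torchvision is installed, detections are summarised as per-class counts, and a
# small fully connected head stands in for the CNN trained in the thesis.
import torch
import torch.nn as nn
import torchvision

N_COCO_CLASSES = 91     # label space of the COCO-pre-trained detector
N_SCENES = 5            # e.g. kitchen, bedroom, bathroom, living room, hallway

detector = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

def object_combination(image, score_thresh=0.5):
    # Run the frozen Mask-RCNN on one image tensor (3, H, W) and count the
    # confidently detected instances of each COCO class.
    with torch.no_grad():
        pred = detector([image])[0]            # dict with 'labels', 'scores', 'masks', ...
    counts = torch.zeros(N_COCO_CLASSES)
    for label, score in zip(pred["labels"], pred["scores"]):
        if score >= score_thresh:
            counts[label] += 1.0
    return counts

scene_head = nn.Sequential(                    # the only part that is trained
    nn.Linear(N_COCO_CLASSES, 64), nn.ReLU(),
    nn.Linear(64, N_SCENES),
)

# Example: scene logits for one image.
scene_logits = scene_head(object_combination(torch.rand(3, 480, 640)))
```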
Advisor / supervisor
  • Di Caterina, Gaetano
  • Petropoulakis, L. (Lykourgos)
Resource Type
DOI

Relations

Articles