Thesis

Factors influencing trust, reliance, performance and cognitive workload in human-agent collaboration

Creator
Rights statement
Awarding institution
  • University of Strathclyde
Date of award
  • 2022
Thesis identifier
  • T16390
Person Identifier (Local)
  • 201762126
Qualification Level
Qualification Name
Department, School or Faculty
Abstract
  • Increasingly, automated systems are being incorporated in collaborative environments where they are used to alleviate the cognitive load of human operators while increasing task performance. Automated agents are present in a variety of domains, from safety-critical environments to leisure-oriented activities, and more and more, they are being considered as virtual teammates rather than simple decision-aid tools. Trust is a key factor that determines how much a human operator is willing to take into account or rely on the help provided by an automated agent. Past research on trust in automation highlights key elements that influence its development, such as how the automated agent is perceived, how reliable the agent appears to be and how transparent its actions are. However, most related work makes use of turn-based tasks where trust is measured post hoc, which does not fully capture the evolving nature of trust. This thesis presents the development and use of a real-time collaborative game in which human operators can choose the extent to which they rely on the help of automated agents displaying different behaviours and various levels of performance. We used different levels of task difficulty, survey instruments and the logging of task-specific behavioural information to elicit and measure variables important for understanding the human-agent relationship, such as trust, reliance, task performance, cognitive load and situational awareness. We ran four user studies using this apparatus. The first study tested the effects of different levels of agent reliability and predictability on the human-agent relationship, while the second study experimented with different types of agent errors. The third study tested the impact of different types of environmental uncertainty on the human-agent relationship, while the fourth and final study measured the benefits of different kinds of visualisation-based decision-aid systems.
Overall, this work sheds light on under-investigated issues in Human-Agent Collaboration scenarios by providing insights into the factors most likely to harm the human-agent relationship, and it underlines how the behaviour of agents, as well as the context of interaction, can drastically alter a person's attitude toward an automated agent.
Advisor / supervisor
  • Azzopardi, Leif
  • Halvey, Martin
Resource Type
DOI
Date Created
  • 2021