Ingram, Martin and Pollick, Frank E. (2021). Calibrating Trust Towards An Autonomous Image Classifier, 2019. [Data Collection]. Colchester, Essex: UK Data Service. DOI: 10.5255/UKDA-SN-854151
The project will seek to investigate which parameters influence trust between artificial intelligences and human users. Our partner for this project, Qumodo, is a company dedicated to helping people interface with artificial intelligence; we will examine their Intelligent Iris system. Intelligent Iris is a modular data analysis system designed to help human users extract meaningful results from large sets of data, including images (such as photographs, medical scans, and military sensor data). The visual nature of this task makes it challenging, as humans bring a wealth of social expectancies and uniquely human visual processes to understanding an image. Fostering trust within man-machine teams is expected to improve both mental health and productivity. Guided by recent research into trust from domains such as autonomous vehicles and social robotics, we will perform experiments to examine which parameters influence the calibration of trust when interacting with the image understanding software. We hope to advance a conceptual understanding of trust between man and machine and to identify effective strategies for adjusting system parameters to properly calibrate trust. These results will be valuable in advancing product development at Qumodo and, importantly, will inform the wider debate over how to design intelligent systems.
Data description (abstract)
Successful adoption of autonomous systems requires appropriate trust from human users, with trust calibrated to reflect true system performance. Autonomous image classifiers are one such example and can be used in a variety of settings to independently identify the contents of image data. We investigated users’ trust when collaborating with an autonomous image classifier system that we created using the AlexNet model (Krizhevsky et al., 2012). Participants collaborated with the classifier during an image classification task in which the classifier provided labels that either correctly or incorrectly described the contents of images. The task was complicated by the quality of the images processed by the human-classifier team: 50% of the trials featured images that were cropped and blurred, partially obscuring their contents. Across 160 single-image trials, we examined trust towards the classifier and also how participants complied with it by accepting or rejecting the labels it provided. Furthermore, we investigated whether trust towards the classifier could be improved by increasing the transparency of the classifier’s interface: system confidence information was displayed in three different ways, which were compared to a control interface without confidence information. Results showed that trust towards the classifier was based primarily on system performance, although it was also influenced by the quality of the images and by individual differences amongst participants. While participants typically preferred classifier interfaces that presented confidence information, this information did not appear to improve their trust towards the classifier.
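The classifier described above was built from AlexNet, which produces a label together with a softmax confidence score for each image. As a generic illustration only, not the authors' experimental system, the sketch below shows how a pretrained AlexNet from torchvision could be used to obtain a top-1 label and the kind of confidence value that the transparent interfaces displayed; the image path is hypothetical.

```python
import torch
from torchvision import models
from PIL import Image

# Pretrained AlexNet (Krizhevsky et al., 2012) and its matching ImageNet preprocessing.
weights = models.AlexNet_Weights.IMAGENET1K_V1
model = models.alexnet(weights=weights)
model.eval()
preprocess = weights.transforms()

def classify(image_path):
    """Return the top-1 ImageNet label and its softmax confidence for one image."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)        # shape (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    confidence, index = probs.max(dim=0)
    label = weights.meta["categories"][index.item()]
    return label, confidence.item()

# Hypothetical usage: the label a participant could accept or reject,
# plus the confidence value shown by the transparent interfaces.
# label, confidence = classify("trial_image.jpg")
# print(f"{label}: {confidence:.1%}")
```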
Data creators: Ingram, Martin; Pollick, Frank E.
Sponsors: Economic and Social Research Council; Scottish Graduate School of Social Science
Grant reference: ES/P000681/1
Topic classification: Science and technology; Psychology
Keywords: TRUST, AUTOMATION, AUTONOMOUS IMAGE CLASSIFIER
Project title: Calibrating Trust between Humans and Autonomous Systems
Grant holders: Frank Pollick
Project dates:
Date published: 04 Feb 2021 21:25
Last modified: 04 Feb 2021 21:25