McStay, Andrew John and Bakir, Vian (2025). UK National Survey on Attitudes to AI Companions: Aggregate Data, 2024. [Data Collection]. Colchester, Essex: UK Data Service. 10.5255/UKDA-SN-857731
Funded by the Responsible AI UK Impact Accelerator, project AEGIS sees Bangor University's Emotional AI Lab partnering with Japan's National Institute of Informatics, the Institute of Electrical and Electronics Engineers (IEEE), and Monash University (Indonesia), and engaging the Information Commissioner's Office (UK).
The goal of AEGIS is to host a series of workshops, assemble a diverse expert working group, and develop a ‘technical standard’ addressing the use of emulated empathy in general-purpose artificial intelligence systems for human-AI partnerships.
Provisionally titled Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems, this IEEE standard will define ethical considerations, detail good practices, and augment and complement international human rights and regional law.
Use cases encompass general-purpose artificial intelligence products marketed as ‘empathic partners’, ‘personal AI’, ‘co-pilots’, ‘assistants’, and related phrasing for ‘human-AI partnering’. Current and nascent domains of use include work, therapy, education, life coaching, legal problems, fitness, entertainment, and more.
These systems raise ethical questions that are global in nature yet benefit from diverse ethical approaches, especially where systems feed into the design of human-centered technologies. Some ethical questions are familiar (e.g. transparency, accountability, bias and fairness), but others are specific and unique, including psychological interactions and dependencies, child appropriateness, fiduciary issues, animism, and manipulation through partnerships with general-purpose artificial intelligence systems.
The project augments the Emotional AI Lab's UK-Japan social science work by conducting a demographically representative UK national survey and considering the results in light of studies on AI ethics. It also sees global value in drawing on a range of ethical frames of reference by which to account for human-AI partnerships, not least from Japan and ethically aligned regions, given their long-standing interests in human-technology partnerships.
Data description (abstract)
To ascertain how the British public feels about AI companions, we conducted a UK-wide, demographically representative national survey, implemented by Walnut Unlimited (a human understanding agency, part of the Unlimited Group) as an online omnibus across 10-12 December 2024, with 2,073 respondents aged 18 years or over. This was part of a Responsible AI UK award to create soft governance of autonomous systems that interact with human emotions and/or emulate empathy.
The survey asks 22 closed-ended, multiple-choice questions on AI companions. The first set of questions (Q.1-2) gauges participants’ familiarity with, and usage of, companion apps. The second set (Q.3-4) explores the acceptability of design features of AI companions. The third set (Q.5-7) explores the broad benefits of, and concerns about, using AI companions. The fourth set (Q.8-13) explores views on children and companion apps. The fifth set (Q.14-15) explores views on older adults and companion apps. The sixth set (Q.16-18) explores views on mental health issues and companion apps. The seventh set (Q.19-21) explores views on the desired governance of companion apps, considering the practicalities of what societies should do about AI companions, if anything. The final question (Q.22) is an evaluative question on whether participants feel AI companions are generally a positive or negative addition to society.
Data creators: Andrew John McStay (Bangor University); Vian Bakir (Bangor University)
Contributors:
Sponsors: Engineering and Physical Sciences Research Council
Grant reference: EP/Y009800/1
Topic classification: Media, communication and language; Science and technology; Demography (population, vital statistics and censuses)
Keywords: AI, PUBLIC OPINION, ETHICS, REGULATIONS, CHILD SAFETY, OLD AGE, MENTAL HEALTH
Project title: Automated Empathy – Globalising International Standards (AEGIS): Japan and Ethically Aligned Regions (sub-project of the larger award AI UK: Creating an International Ecosystem for Responsible AI Research and Innovation)
Grant holders: Andrew McStay, Vian Bakir, Phoebi Li, Ben Bland, Alexander Laffer
Project dates:
Date published: 24 Mar 2025 10:27
Last modified: 24 Mar 2025 10:28