Simultaneously Presented Facial and Contextual Information Influences Observers' Facial Expressions and Self-reports of Emotion, 2018-2019

Denk-Florea, Cristina-Bianca (2021). Simultaneously Presented Facial and Contextual Information Influences Observers' Facial Expressions and Self-reports of Emotion, 2018-2019. [Data Collection]. Colchester, Essex: UK Data Service. 10.5255/UKDA-SN-855096

An increasing volume of violent and distressing imagery is being shared online, and this presents a challenge to any organisation that moderates online content. The problem is particularly acute in public policing and social services organisations that must analyse such imagery in the course of investigations or child protection. Such organisations have a duty of care to protect their employees and ensure their welfare. In support of this goal, this project will perform research to facilitate the development of novel digital tools that assist users and reduce the mental health burden created by viewing this imagery. This objective will be attained by working in collaboration with the Child Exploitation and Online Protection command of the National Crime Agency and the artificial intelligence technology firm Qumodo. Three strands of research will be performed: 1) evaluating emotional image recognition in the context of image manipulations guided by artificial intelligence to reduce emotional impact while still retaining scene information; 2) experiments in social neuroscience that evaluate the effectiveness of the image manipulations from Strand 1 and help to better understand how the emotional processing of distressing images might compete with cognitive processing; 3) determining the potential risks of implementing this new technology and how these risks can be dealt with effectively to maximise benefit to both employees and their organisations.

Data description (abstract)

We receive emotional signals from different sources, including the face and its surrounding context. Previous research has shown the effects that facial expressions and contextual affective information have on people’s brain responses. This study measured physiological responses and ratings of affect to face-context compounds of varied emotional content. Forty-two participants freely viewed face-context and context-only natural threat, mutilation, happy, erotic and neutral scenes whilst corrugator, zygomatic and startle eyeblink responses were recorded. Participants’ corrugator, zygomatic and startle eyeblink responses, together with their valence and arousal ratings, varied with the valence and arousal of the stimuli. Face-context threat and mutilation scenes elicited more negative emotional experiences and larger corrugator responses than context-only scenes. In contrast, happy face-context scenes elicited more positive emotional experiences and a decreased corrugator response. The zygomatic showed an enhanced response to face-context scenes, regardless of the valence of the scenes. Our results show that the simultaneous perception of emotional signals from faces and contextual information induces enhanced facial reactions and affective responses.

Data creators:
Creator: Denk-Florea, Cristina-Bianca
Affiliation: The University of Glasgow
ORCID: https://orcid.org/0000-0002-2617-846X
Sponsors: ESRC
Grant reference: ES/R500938/1
Topic classification: Psychology
Keywords: EMOTIONAL STATES, ATTITUDES, PERCEPTION, EMOTIONAL DEVELOPMENT
Project title: Design of Tools to Mitigate Psychological Stress in the Analysis of Disturbing Images
Grant holders: Cristina-Bianca Denk-Florea
Project dates: 1 October 2017 to 31 October 2021
Date published: 28 Oct 2021 15:55
Last modified: 28 Oct 2021 15:55

Available Files

Data and documentation bundle

Data

Documentation

Read me
