Furl, Nicholas (2017). Functional magnetic resonance imaging data of brain area activity when recognising facial expressions. [Data Collection]. Colchester, Essex: UK Data Archive. DOI: 10.5255/UKDA-SN-851780
Although a person's facial identity is immutable, faces are dynamic and undergo complex movements which signal critical social cues (viewpoint, eye gaze, speech movements, expressions of emotion and pain). These movements can confuse automated systems, yet humans recognise moving faces robustly.
Our objective is to discover the stimulus information, neural representations and computational mechanisms that the human brain uses when recognising social categories from moving faces. We will use human brain imaging to put an existing theory to the test. This theory proposes that changeable attributes (e.g., expression) and facial identity are recognised separately by two different brain pathways, each located in a different part of the temporal lobe of the brain.
The evidence we provide might indeed support this theory and fill in many of its gaps. We expect, however, to establish a new alternative theory instead. On this new theory, some brain areas can recognise both identities and expressions using unified representations, with one of the two pathways specialised for representing movement. Successful completion of our project will therefore provide a new theoretical framework sufficient to motivate improved automated visual systems and to advance new directions of research on human social perception.
Data description (abstract)
Data resulting from an experiment that used brain scanning, or functional magnetic resonance imaging (fMRI), to investigate which brain areas are active when recognising facial expressions and to learn how these areas are connected and how they communicate with each other.
The dataset consists of volumetric 3D brain scans, which are necessarily stored in a special-purpose file format. The dataset also contains the information necessary for analysing the data, i.e. the stimuli and their onset times. Finally, the dataset contains participant ratings of the stimuli, collected in a behavioural testing session following scanning.
Our analyses of these data are reported in the following papers:
(1) Furl N, Henson RN, Friston KJ, Calder AJ. 2013. Top-down control of visual responses to fear by the amygdala. J Neurosci 33:17435-17443.
(2) Furl N, Henson RN, Friston KJ, Calder AJ. 2015. Network interactions explain sensitivity to dynamic faces in the superior temporal sulcus. Cereb Cortex 25:2876-2882.
Data creators:
Creator Name | Affiliation | ORCID (as URL)
---|---|---
Furl, Nicholas | MRC Cognition and Brain Sciences Unit, Cambridge | Unspecified

Sponsors: ESRC
Grant reference: ES/I01134X/2
Topic classification: Psychology
Keywords: perception, fear, psychological research, perceptual processes
Project title: Neural Representation of the Identities and Expressions of Human Faces
Grant holders: Nicholas Furl, Richard Henson, Karl Friston, Andrew Calder
Project dates: From 26 September 2011 to 1 December 2015
Date published: 12 Jan 2016 16:31
Last modified: 14 Jul 2017 10:20
Temporal coverage: From 26 September 2011 to 30 June 2014
Collection period: From 26 September 2011 to 30 June 2014
Geographical area: Cambridge, United Kingdom
Country: United Kingdom
Data collection method:
Functional magnetic resonance imaging (fMRI) data were collected from 18 healthy, right-handed participants (mean age 18 years, 13 female). The experiment used a block design, with 18 main experiment runs and two localizer runs. All blocks lasted 11 s, comprised eight 1375 ms presentations of greyscale stimuli, and were followed by a 1 s interblock fixation interval. Participants fixated on a gray dot in the center of the display, overlaying the image, and pressed a key when the dot turned red for a random one-third of stimulus presentations.

In each localizer run, participants viewed six types of blocks, each presented six times. Face blocks contained dynamic facial expressions taken from the Amsterdam Dynamic Facial Expression Set (van der Schalk et al., 2011) or the final static frames from the dynamic facial videos, capturing the expression apexes. Eight different identities (four male and four female) changed between neutral and disgusted, fearful, happy, or sad expressions. The eight identities and four expressions appeared in a pseudo-random order, with each of the four expressions appearing twice. Object blocks included eight dynamic objects or the final static frames from the dynamic object videos, shown in a pseudo-random order. Low-level motion blocks consisted of dynamic random-dot pattern videos with motion-defined oriented gratings. The stimuli depicted 50% randomly luminous pixels, which could move at one frame per second horizontally, vertically, or diagonally left or right. Oriented gratings were defined by moving the dots within four strips of pixels in the opposite direction to the rest of the display, but at the same rate. Each motion direction was shown twice per block in a pseudo-random order. There were also corresponding low-level static blocks composed of the final static frames from the low-level motion videos.

The remaining runs comprised the main experiment. Each main experiment run had 12 blocks, each containing a distinct type of stimulus and presented in a pseudo-random order. Six of the blocks contained faces, using the same four female and four male identities as in the localizer runs. In each block, all faces were either dynamic or static and showed just one of three expressions: disgust, happy, or fearful. The remaining six blocks were Fourier phase-scrambled versions of each of the six face blocks (dynamic videos were phase-scrambled in three dimensions).

After scanning, participants made speeded categorizations of the emotion expressed in the dynamic and static faces as disgust, happy, or fearful and rated the emotional intensity on a 1–9 scale. They also rated on a 1–9 scale the intensity of the motion they perceived in each of the dynamic stimuli. Stimuli were presented for the same duration as in the fMRI experiment, and the next stimulus appeared once the participant completed a rating.

The data collection methodology is also described in detail in the two papers listed amongst Related Resources and in the attached Readme file.
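As an illustration of the timing described above, the following minimal Python sketch reconstructs the nominal block and stimulus onsets for one main experiment run, assuming blocks are presented back to back with only the 1 s interblock fixation between them. It is purely illustrative; the actual onset times used for analysis should be taken from the textual output files supplied with the dataset.

```python
# Illustrative sketch (not part of the dataset): nominal block and stimulus
# onsets for one main experiment run, assuming back-to-back blocks separated
# only by the 1 s interblock fixation described above.

N_BLOCKS = 12            # blocks per main experiment run
PRESENTATIONS = 8        # stimulus presentations per block
PRESENTATION_S = 1.375   # each presentation lasts 1375 ms
FIXATION_S = 1.0         # interblock fixation interval

block_duration = PRESENTATIONS * PRESENTATION_S  # = 11 s, as described above

for block in range(N_BLOCKS):
    block_onset = block * (block_duration + FIXATION_S)
    stimulus_onsets = [block_onset + i * PRESENTATION_S
                       for i in range(PRESENTATIONS)]
    print(f"block {block + 1:2d}: onset {block_onset:6.2f} s, "
          f"first stimulus {stimulus_onsets[0]:.3f} s, "
          f"last stimulus {stimulus_onsets[-1]:.3f} s")
```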
Observation unit: Event/Process
Kind of data: Numeric
Type of data: Experimental data
Resource language: English
Data sourcing, processing and preparation:
This dataset contains functional brain scans (fMRI) stored in one of the volumetric 3D formats necessary for storing this type of data. We have included the raw data from the scanner so that users can choose their own pre-processing and analysis methods. The data can be viewed by installing free software (MRIcron).
The dataset contains metadata related to the fMRI data. Most importantly, this includes the textual output files from the computer software used to control the experiment. The stimulus onset times reported in these output files are necessary for statistical analysis of the fMRI data.
The dataset also contains behavioural data, collected from the participants following scanning. These are textual output files from the computer software controlling the experiment, which contain information about the stimuli and the associated behavioural responses.
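For users who prefer to inspect the scans programmatically rather than in MRIcron, a minimal sketch is given below. It assumes the volumes are stored in a NIfTI/Analyze-style format that the free nibabel Python package can read; the filename shown is a placeholder for illustration, not a file from this collection.

```python
# Minimal sketch, assuming the scans are in a NIfTI/Analyze-style format
# readable by nibabel (pip install nibabel). 'run01_volume001.nii' is a
# placeholder filename, not an actual file in this collection.
import nibabel as nib

img = nib.load("run01_volume001.nii")  # placeholder path
data = img.get_fdata()                 # voxel intensities as a NumPy array

print("voxel grid shape:", data.shape)
print("voxel size (mm):", img.header.get_zooms())
```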
Rights owners:
Name | Affiliation | ORCID (as URL)
---|---|---
Furl, Nicholas | MRC Cognition and Brain Sciences Unit, Cambridge | Unspecified

Contact:
Name | Email | Affiliation | ORCID (as URL)
---|---|---|---
Furl, Nicholas | nicholas.furl@rhul.ac.uk | MRC Cognition and Brain Sciences Unit, Cambridge | Unspecified
Notes on access: The Data Collection is available to any user without the requirement for registration for download/access.
Publisher: UK Data Archive
Available Files
Data
Documentation
Publications
Software
Website