Vuong, Quoc (2017). A neuropsychological approach to dissect face perception and perceptual expertise: Behavioural data. [Data Collection]. Colchester, Essex: UK Data Archive. DOI: 10.5255/UKDA-SN-852396
Recognising faces is at the heart of human social interactions. By adulthood, people are very good at extracting identity, sex, race, emotions, and social signals from faces. Impairments to this ability can therefore drastically reduce a person's quality of life.
The aim of this project is to investigate the neural mechanisms underlying people's ability to process faces and how these mechanisms adapt with experience. The approach is to test whether individuals with prosopagnosia can acquire expertise with novel non-face objects through training. These individuals suffered head trauma during adulthood that led to damage in specific brain regions. These regions are thought to process only faces and no other object categories. However, they may be more generally involved in processing object categories for which people have expertise (e.g., bird experts). In addition to the neurological case studies, volunteers will also go through the training. Their brains will be scanned using magnetic resonance imaging to determine how the putative face-specific regions change over the course of training. Overall, the results will have an impact on clinical conditions that can result in face-recognition deficits, such as Alzheimer's disease, stroke, and developmental disorders that affect social interactions (e.g., autism).
Data description (abstract)
This collection contains the response time and accuracy data from the training sessions and from the pre-training and post-training test sessions. We used an inversion task for the test sessions. During all sessions, subjects responded to faces or novel objects.
The data for this study are organised into four collections. The first, 'Visual stimuli', contains the visual stimuli used throughout the study. These include faces and novel three-dimensional (3D) objects rendered from different viewpoints. This collection also contains the 3D model and Matlab scripts to help create more stimuli. The second, 'EEG data', contains the processed EEG responses to faces and novel objects during the pre-training and post-training test sessions. We used the fast periodic visual stimulation (FPVS) paradigm. Subjects performed a fixation task during EEG data collection. The third, 'FMRI data', contains the raw functional data for brain responses to faces and novel objects. In this phase, we adapted the FPVS paradigm used in the EEG study to the FMRI study. Subjects performed a fixation task during FMRI data collection. We also acquired structural and diffusion imaging data. (The other collections can be found under 'Related Resources'.)
Data creators: Vuong, Quoc (Newcastle University)
Sponsors: Economic and Social Research Council
Grant reference: ES/J009075/1
Topic classification: Psychology
Keywords: psychophysics
Project title: A neuropsychological approach to dissect face perception and perceptual expertise
Grant holders: Quoc Vuong, Bruno Rossion
Project dates: 1 October 2012 to 30 April 2016
Date published: 01 Aug 2016 15:02
Last modified: 11 Jul 2017 10:10
Collection period: 1 October 2012 to 30 April 2016
Country: United Kingdom, Belgium
Data collection method: Computer-based experiments to collect accuracy and response times.
Observation unit: Other
Kind of data: Other
Type of data: Experimental data
Resource language: English
Rights owners: Vuong, Quoc (Newcastle University)
Contact: Vuong, Quoc, quoc.vuong@newcastle.ac.uk (Newcastle University)
Notes on access: Use of the data must include citation of the corresponding papers.
Publisher: UK Data Archive
Available files: Data, Documentation, Read me