Sirri, L., Guerra, E., Linnert, S., Smith, E. S., Reid, V. and Parise, E. (2021). International Centre for Language and Communicative Development: Infants’ Conceptual Representations by Meaningful Verbal and Nonverbal Sounds, 2014-2020. [Data Collection]. Colchester, Essex: UK Data Service. DOI: 10.5255/UKDA-SN-854906
The International Centre for Language and Communicative Development (LuCiD) will bring about a transformation in our understanding of how children learn to communicate, and deliver the crucial information needed to design effective interventions in child healthcare, communicative development and early years education.
Learning to use language to communicate is hugely important for society. Failure to develop language and communication skills at the right age is a major predictor of educational and social inequality in later life. To tackle this problem, we need to know the answers to a number of questions: How do children learn language from what they see and hear? What do measures of children's brain activity tell us about what they know? And how do differences between children and differences in their environments affect how children learn to talk? Answering these questions is a major challenge for researchers. LuCiD will bring together researchers from a wide range of different backgrounds to address this challenge.
The LuCiD Centre will be based in the North West of England and will coordinate five streams of research in the UK and abroad. It will use multiple methods to address central issues, create new technology products, and communicate evidence-based information directly to other researchers and to parents, practitioners and policy-makers.
LuCiD's RESEARCH AGENDA will address four key questions in language and communicative development:
1. ENVIRONMENT: How do children combine the different kinds of information that they see and hear to learn language?
2. KNOWLEDGE: How do children learn the word meanings and grammatical categories of their language?
3. COMMUNICATION: How do children learn to use their language to communicate effectively?
4. VARIATION: How do children learn languages with different structures and in different cultural environments?
The fifth stream, the LANGUAGE 0-5 PROJECT, will connect the other four streams. It will follow 80 English-learning children from 6 months to 5 years, studying how and why some children's language development is different from others. A key feature of this project is that the children will take part in studies within the other four streams. This will enable us to build a complete picture of language development from the very beginning through to school readiness.
Applying different methods to study children's language development will constrain the types of explanations that can be proposed, helping us create much more accurate theories of language development. We will observe and record children in natural interaction as well as studying their language in more controlled experiments, using behavioural measures and correlations with brain activity (EEG). Transcripts of children's language and interaction will be analysed and used to model how these two are related using powerful computer algorithms.
LuCiD's TECHNOLOGY AGENDA will develop new multi-method approaches and create new technology products for researchers, healthcare and education professionals. We will build a 'big data' management and sharing system to make all our data freely available; create a toolkit of software (LANGUAGE RESEARCHER'S TOOLKIT) so that researchers can analyse speech more easily and more accurately; and develop a smartphone app (the BABYTALK APP) that will allow parents, researchers and practitioners to monitor, assess and promote children's language development.
With the help of six IMPACT CHAMPIONS, LuCiD's COMMUNICATIONS AGENDA will ensure that parents know how they can best help their children learn to talk, and give healthcare and education professionals and policy-makers the information they need to create intervention programmes that are firmly rooted in the latest research findings.
Data description (abstract)
In adults, words are more effective than sounds at activating conceptual representations. We aimed to replicate these findings and extend them to infants. In a series of experiments using an eye-tracking object recognition task, suitable for both adults and infants, participants heard either a word (e.g. cow) or an associated sound (e.g. mooing) followed by an image illustrating a target (e.g. cow) and a distractor (e.g. telephone). The results showed that adults reacted faster when the visual object matched the auditory stimulus, and faster still in the word condition relative to the associated sound condition. Infants, however, did not show a similar pattern of eye movements: 18-month-olds, but not 9- or 12-month-olds, were equally fast at recognizing the target object in both conditions. Looking times, however, were longer for associated sounds, suggesting that processing sounds elicits greater allocation of attention. Our findings suggest that the advantage of words over associated sounds in activating conceptual representations emerges at a later stage during language development.
Data creators: Sirri, L.; Guerra, E.; Linnert, S.; Smith, E. S.; Reid, V.; Parise, E.
Sponsors: Economic and Social Research Council
Grant reference: ES/L008955/1
Topic classification: Psychology
Keywords: INFANTS, LANGUAGE DEVELOPMENT, COGNITION, EYESIGHT
Project title: The International Centre for Language and Communicative Development
Alternative title: LuCiD WP3
Grant holders: Elena Lieven, Bob McMurray, Jeffrey Elman, Gert Westermann, Morten H Christiansen, Thea Cameron-Faulkner, Fernand Gobet, Ludovica Serratrice, Sabine Stoll, Meredith Rowe, Padraic Monaghan, Michael Tomasello, Ben Ambridge, Silke Brandt, Anna Theakston, Eugenio Parise, Caroline Frances Rowland, Colin James Bannard, Grzegorz Krajewski, Franklin Chang, Floriana Grasso, Evan James Kidd, Julian Mark Pine, Arielle Borovsky, Vincent Michael Reid, Katherine Alcock, Daniel Freudenthal
Project dates: 1 September 2014 to 31 May 2020
Date published: 26 Aug 2021 16:48
Last modified: 26 Aug 2021 16:49
Collection period: 1 September 2014 to 31 May 2020
Geographical area: Lancaster
Country: United Kingdom
Data collection method:
We selected six objects with basic-level nouns and their associated sounds (car, cow, dog, sheep, telephone, train), suitable for both the adult and infant experiments. The auditory stimuli included spoken words and their associated sounds. A female native speaker recorded the words in neutral, adult-directed speech (ADS), and the associated sounds were selected from the internet. For the infant experiments, a different native speaker recorded the stimuli in infant-directed speech (IDS).
The visual stimuli were images of the objects, selected online.
Adult Experiments (1A & 1B) - 1A: Visual Identification Task; replication of Lupyan and Thompson-Schill (2012).
Thirty healthy adults (20 female; age range: 24;10 y to 42;9 y) sat in front of a 19” CRT monitor and were given a gamepad to respond by button-press. On each trial, participants heard either a word (e.g. ‘cow’) or an associated sound (e.g. a cow mooing) while fixating a central black fixation cross on a grey screen, followed by an image. The inter-stimulus interval (ISI) was 1000 ms. The images matched the auditory stimulus 50% of the time, and the order of trials was randomised. Each image remained on the screen for 2 seconds, and participants were instructed to respond as fast as possible by pressing a match (e.g. cow) or mismatch (e.g. telephone) button on the gamepad. The side (left or right button) of the correct response was counterbalanced across participants. After every response, participants received auditory feedback for correct (a beep) or incorrect (a buzz) responses. As the image disappeared, another trial began. Across trials, each of the six objects was preceded by a word and a sound, match and mismatch, and repeated four times, yielding 96 verification trials. The experiment lasted approximately five minutes.
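As a concrete illustration of the Experiment 1A design, the sketch below assembles the 96-trial list (6 objects x word/sound x match/mismatch x 4 repetitions) and randomises its order. This is a hedged reconstruction in Python: the original task was run in Matlab, and the function and field names here are illustrative, not the authors' code.

```python
# Illustrative sketch (not the authors' code): building the 96-trial list for
# Experiment 1A -- 6 objects x 2 cue types x 2 congruency levels x 4 repetitions.
import itertools
import random

OBJECTS = ["car", "cow", "dog", "sheep", "telephone", "train"]

def build_trials(seed=None):
    rng = random.Random(seed)
    trials = []
    for obj, cue, match in itertools.product(OBJECTS, ("word", "sound"), (True, False)):
        for _ in range(4):  # each combination repeated four times
            # On mismatch trials the image shows a different, randomly chosen object.
            image = obj if match else rng.choice([o for o in OBJECTS if o != obj])
            trials.append({"cue_type": cue, "cue_object": obj, "image": image, "match": match})
    rng.shuffle(trials)  # trial order was randomised
    return trials

trials = build_trials(seed=1)
assert len(trials) == 96
```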
1B: Object Recognition Task - Twenty healthy adults sat 50-70 cm in front of the computer screen. A Tobii X120 eye tracker (Tobii Pro, Stockholm, Sweden) located beneath the screen recorded their gaze at a 60 Hz sampling rate. The eye tracker was first calibrated using a five-point calibration procedure (a shrinking blue and red attention grabber) delivered through Matlab® (v. 2013b). The calibration was controlled with a key press and repeated if necessary. Each trial began with a black fixation cross centred on a grey screen for 1000 ms, after which an auditory stimulus was played, a word (e.g. dog) or a sound (e.g. a dog barking), while the fixation cross remained on the screen. The visual stimulus, depicting two objects simultaneously – target (e.g. dog) and distractor (e.g. train) – appeared after a 1000 ms ISI and remained on the screen for 2000 ms while the eye tracker recorded the participants’ gaze. After 2000 ms the image disappeared, and another trial began. The side of target and distractor was counterbalanced, resulting in one block of 24 trials. The experimental block was repeated 4 times, yielding 96 trials in total. The order of trials within a block and across participants was randomised. The experiment lasted approximately 9 minutes.
Infant Experiments (2A, 2B, 2C) - In Exp. 2A, thirty-two healthy 9-month-old infants (15 girls; age range: 8m13d to 9m28d) took part in the study. In Exp. 2B, there were thirty-two 12-month-olds (18 girls; age range: 11m14d to 12m27d), and in Exp. 2C twenty-three 18-month-old infants (11 girls; age range: 17m14d to 18m21d). An additional forty infants took part in the study but were not included in the final sample due to an insufficient number of trials per condition (word or sound; n=35), no familiarization phase (n=1), participating twice (at 9 and 12 months; n=1), low birth weight (<2,500 g; n=2) or prematurity (<37 weeks of gestation; n=1).
We adapted the procedure from Experiment 1B to infants by adding a familiarization phase (a slide presentation (Microsoft Office 2016) shown on an iPad mini (7.9”) tablet) and by increasing the duration of the fixation cross on the screen to 3000 ms. During this time, caregivers were encouraged to maintain infants’ attention and interest in the task by saying, for instance, “Oh look!” or “Look ….”. Infants sat on their caregiver’s lap, and caregivers were asked to sit at a 90° angle from their infant to ensure that the eye tracker recorded the infants’ eye movements only, and to facilitate interaction between trials. Caregivers were also instructed to avoid talking, pointing to the screen or naming the objects while the auditory and visual stimuli were displayed. The visual stimulus remained on the screen for 4.5 seconds while the eye tracker recorded the infants’ gaze. After 4.5 seconds, the image disappeared, and another trial began. Infants were presented with one block of 24 trials in total. A break was taken when needed, and the experiment lasted approximately 5 minutes.
Observation unit: Individual
Kind of data: Numeric, Text, Other
Type of data: Experimental data
Resource language: English
Data sourcing, processing and preparation:
Exp. 1A - The adult behavioural data were collected through Matlab® (v. 2014b). All incorrect responses were removed prior to analysis. Reaction times (RTs) below 200 ms and above 1500 ms were also excluded. RTs were analysed using a 2 (auditory stimulus) x 2 (congruency) analysis of variance (ANOVA) in SPSS (v. 22).
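To make these pre-processing and analysis steps concrete, here is a minimal Python sketch of the same filtering and 2 x 2 repeated-measures ANOVA. The original analysis used Matlab and SPSS; the file name and column names below ('subject', 'cue_type', 'congruency', 'correct', 'rt_ms') are assumptions made for illustration only.

```python
# Hedged sketch of the Exp. 1A analysis described above (original: Matlab + SPSS).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("exp1a_trials.csv")  # hypothetical per-trial data file

# Remove incorrect responses and reaction times outside 200-1500 ms.
clean = df[(df["correct"] == 1) & df["rt_ms"].between(200, 1500)]

# One mean RT per subject and condition, then the
# 2 (auditory stimulus) x 2 (congruency) repeated-measures ANOVA.
cell_means = (clean
              .groupby(["subject", "cue_type", "congruency"], as_index=False)["rt_ms"]
              .mean())
result = AnovaRM(cell_means, depvar="rt_ms", subject="subject",
                 within=["cue_type", "congruency"]).fit()
print(result)
```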
Exp. 1B and 2 (A, B, & C) - Two areas of interest matching the size and location of the displayed target and distractor images were defined using Matlab® (v. 2014b), and a summary of participants’ fixations with their duration and coordinates on the display (e.g. first fixation duration and its location) was produced using the same software. After data pre-processing, we calculated fixation proportions for each of the images on the display in both stimulus type conditions (words vs. sounds) using R software (R Core Team, 2018). A value of 1 was given to an object when participants were fixating its region on the display at a given moment, while a value of 0 was given to the other region. If no fixation was detected by the eye tracker, both regions were given a value of 0. We defined fixation proportion as the percentage of looks to an object on each trial and across time. This measure was then aggregated, first by participant and stimulus type, and then into 100 ms time windows.
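The binary coding and 100 ms binning described above can be sketched in a few lines. The authors' pipeline used Matlab and R (R Core Team, 2018), so the Python version below, including its data layout and column names ('subject', 'stimulus_type', 'time_ms', 'aoi'), is an assumption-laden illustration rather than the original script.

```python
# Minimal sketch of the fixation-proportion measure (not the authors' R/Matlab code).
import pandas as pd

samples = pd.read_csv("exp1b_gaze_samples.csv")  # hypothetical sample-level gaze data
# Assumed columns: subject, stimulus_type ('word'/'sound'), trial,
# time_ms (from image onset), aoi ('target', 'distractor', or NaN when no fixation).

# Binary coding: 1 when the sample falls in an object's region, 0 otherwise;
# samples with no detected fixation contribute 0 to both regions.
samples["on_target"] = (samples["aoi"] == "target").astype(int)
samples["on_distractor"] = (samples["aoi"] == "distractor").astype(int)

# Aggregate by participant and stimulus type into 100 ms time windows.
samples["time_bin"] = (samples["time_ms"] // 100) * 100
fix_prop = (samples
            .groupby(["subject", "stimulus_type", "time_bin"], as_index=False)
            [["on_target", "on_distractor"]]
            .mean())
print(fix_prop.head())
```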
Rights owners:
Contact:
Name | Email | Affiliation | ORCID (as URL)
---|---|---|---
Sirri, Louah | l.sirri@mmu.ac.uk | Manchester Metropolitan University | http://orcid.org/0000-0001-5951-8320
Allwood, Helen | helen.allwood@manchester.ac.uk | University of Manchester | Unspecified
Notes on access: The Data Collection is available to any user without the requirement for registration for download/access.
Publisher: UK Data Service
Available Files
Data
Read me