Macdonald, Ross and Serratrice, Ludovica and Theakston, Anna and Lieven, Elena and Brandt, Silke (2021). International Centre for Language and Communicative Development: The Effect of Animacy on Children's and Adults' Comprehension of Relative Clauses, 2014-2020. [Data Collection]. Colchester, Essex: UK Data Service. DOI: 10.5255/UKDA-SN-853926
The International Centre for Language and Communicative Development (LuCiD) will bring about a transformation in our understanding of how children learn to communicate, and deliver the crucial information needed to design effective interventions in child healthcare, communicative development and early years education.
Learning to use language to communicate is hugely important for society. Failure to develop language and communication skills at the right age is a major predictor of educational and social inequality in later life. To tackle this problem, we need to know the answers to a number of questions: How do children learn language from what they see and hear? What do measures of children's brain activity tell us about what they know? And how do differences between children, and differences in their environments, affect how children learn to talk? Answering these questions is a major challenge for researchers. LuCiD will bring together researchers from a wide range of different backgrounds to address this challenge.
The LuCiD Centre will be based in the North West of England and will coordinate five streams of research in the UK and abroad. It will use multiple methods to address central issues, create new technology products, and communicate evidence-based information directly to other researchers and to parents, practitioners and policy-makers.
LuCiD's RESEARCH AGENDA will address four key questions in language and communicative development: 1) ENVIRONMENT: How do children combine the different kinds of information that they see and hear to learn language? 2) KNOWLEDGE: How do children learn the word meanings and grammatical categories of their language? 3) COMMUNICATION: How do children learn to use their language to communicate effectively? 4) VARIATION: How do children learn languages with different structures and in different cultural environments?
The fifth stream, the LANGUAGE 0-5 PROJECT, will connect the other four streams. It will follow 80 English-learning children from 6 months to 5 years, studying how and why some children's language development differs from that of others. A key feature of this project is that the children will take part in studies within the other four streams. This will enable us to build a complete picture of language development from the very beginning through to school readiness.
Applying different methods to study children's language development will constrain the types of explanations that can be proposed, helping us create much more accurate theories of language development. We will observe and record children in natural interaction as well as studying their language in more controlled experiments, using behavioural measures and correlations with brain activity (EEG). Transcripts of children's language and interaction will be analysed and used to model how language and interaction are related using powerful computer algorithms.
LuCiD's TECHNOLOGY AGENDA will develop new multi-method approaches and create new technology products for researchers, healthcare and education professionals. We will build a 'big data' management and sharing system to make all our data freely available; create a toolkit of software (LANGUAGE RESEARCHER'S TOOLKIT) so that researchers can analyse speech more easily and more accurately; and develop a smartphone app (the BABYTALK APP) that will allow parents, researchers and practitioners to monitor, assess and promote children's language development.
With the help of six IMPACT CHAMPIONS, LuCiD's COMMUNICATIONS AGENDA will ensure that parents know how they can best help their children learn to talk, and give healthcare and education professionals and policy-makers the information they need to create intervention programmes that are firmly rooted in the latest research findings.
Data description (abstract)
We investigated the influence of animacy on the online processing of semantically reversible subject relative clauses (SRCs) and object relative clauses (ORCs), using lexically inanimate items that were perceptually animate due to motion (e.g., "Where is the tractor that the cow is chasing?"). Across three experiments, 141 children (aged 4;5–6;9) and 64 adults listened to sentences that varied in the lexical animacy of the NP1 head-noun (animate/inanimate) and in relative clause (RC) type (SRC/ORC), always with an animate NP2, while viewing two images depicting opposite actions. As expected, inanimate head-nouns facilitated children's correct interpretation of ORCs; however, the online data revealed that children were more likely to anticipate an SRC as the RC unfolded when an inanimate head-noun was used, suggesting that processing was sensitive to perceptual animacy. Across the experiments, offline measures show that lexical animacy influenced children's interpretation of ORCs, while online measures reveal that, as RCs unfolded, children were sensitive to the perceptual animacy of lexically inanimate NPs, a sensitivity that was not reflected in the offline data.
Data creators: Macdonald, Ross; Serratrice, Ludovica; Theakston, Anna; Lieven, Elena; Brandt, Silke
Sponsors: Economic and Social Research Council
Grant reference: ES/L008955/1
Topic classification: Psychology
Keywords: LANGUAGE DEVELOPMENT, LANGUAGE, LINGUISTICS
Project title: The International Centre for Language and Communicative Development
Alternative title: LuCiD WP10
Grant holders: Elena Lieven, Bob McMurray, Jeffrey Elman, Gert Westermann, Morten H Christiansen, Thea Cameron-Faulkner, Fernand Gobet, Ludovica Serratrice, Sabine Stoll, Meredith Rowe, Padraic Monaghan, Michael Tomasello, Ben Ambridge, Silke Brandt, Anna Theakston, Eugenio Parise, Caroline Frances Rowland, Colin James Bannard, Grzegorz Krajewski, Franklin Chang, Floriana Grasso, Evan James Kidd, Julian Mark Pine, Arielle Borovsky, Vincent Michael Reid, Katherine Alcock, Daniel Freudenthal
Project dates: 1 September 2014 to 31 May 2020
Date published: 26 Aug 2021 16:49
Last modified: 26 Aug 2021 16:49
Collection period: 1 September 2014 to 31 May 2020
Country: United Kingdom
Data collection method:
Across our three experiments, we tested 141 children. Participants were recruited from Reception and Year 1 classes at three primary schools across the North of England, after obtaining ethical approval from the University Research Ethics Committee of the first author's institution. All of the children were monolingual speakers of English and were developing typically according to class teachers' reports. The schools received book tokens as thanks for their participation. Sixty-four adults took part in the eye-tracking task only; they were undergraduate and postgraduate students and administrators at the University of Manchester. No identifying information was retained from any participant.
The eye-tracking experiments used a 2 x 2 within-subjects design. The independent variables were the lexical animacy of the NP1 (animate or inanimate) and the type of relative clause used in the sentence (SRC or ORC). All lexically inanimate nouns were high on the perceptual animacy continuum, as they were depicted in motion: they were paired with just four verbs, "following", "chasing", "bumping" and "hitting". These verbs were chosen because they allow semantically plausible, reversible sentences with a lexically inanimate head.
With six items in each condition, 24 experimental items were used in this experiment. Each item was made up of an audio sentence and a visual display. Each sentence had four versions, one for each cell of the 2 (RC type: SRC, ORC) x 2 (animacy of NP1: animate, inanimate) design.
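To make the four versions of an item concrete, here is a minimal Python sketch. The sentence frames are a plausible reconstruction from the example in the abstract and the deer/cow display described below, not the actual recorded stimuli:

```python
from itertools import product

# The two within-subjects factors of the 2 x 2 design.
RC_TYPES = ["SRC", "ORC"]               # relative clause type
NP1_ANIMACY = ["animate", "inanimate"]  # lexical animacy of the head-noun

def item_versions(animate_head, inanimate_head, np2, verb):
    """Return the four condition versions of one item (illustrative frames only)."""
    head = {"animate": animate_head, "inanimate": inanimate_head}
    frames = {}
    for rc, animacy in product(RC_TYPES, NP1_ANIMACY):
        if rc == "SRC":  # head-noun is the agent
            frames[(rc, animacy)] = f"Where is the {head[animacy]} that is {verb} the {np2}?"
        else:            # ORC: head-noun is the patient
            frames[(rc, animacy)] = f"Where is the {head[animacy]} that the {np2} is {verb}?"
    return frames

for key, sentence in item_versions("deer", "tractor", "cow", "chasing").items():
    print(key, "->", sentence)
```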
The visual displays featured two transitive scenes in which the agent and patient roles were reversed, for example, a deer chasing a cow and a cow chasing a deer. Each item had an associated display with either an animate or an inanimate head.
Hardware, software and eye movement recording (eye-tracking experiment): The eye-tracking procedure was carried out on a Dell Precision M4700 laptop computer and a Dell Latitude E7450 Ultrabook, the latter of which has a 14-inch display that was used for stimulus presentation. The experiment was scripted and run using the SR Research Experiment Builder software. Eye movement behaviour was captured using a desk-mounted SR Research EyeLink 1000 Plus eye-tracker, which uses corneal reflection and pupil position to calculate where a participant is fixating. Participants were positioned approximately 50 cm from the monitor and wore target stickers on their heads so that the tracker could follow head position. Calibration involved the participant fixating on nine markers on the screen; once calibrated, a verification procedure took place. If the verification procedure found the mean spatial accuracy error to be more than 1.5 degrees, or if any single spatial accuracy error was greater than 2 degrees, the calibration and verification procedures were repeated. Before each trial, participants fixated a marker in the middle of the screen. This "drift checking" procedure allowed the experimenter to see the estimated fixation point on their display and required the experimenter to accept the fixation in order to begin the trial. If the error for this procedure exceeded 1.5 degrees of visual angle on three consecutive trials, the calibration procedure was repeated. A Microsoft SideWinder gamepad was used for participant responses.
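The acceptance rules above reduce to a few numeric thresholds. A minimal sketch of that logic follows, with thresholds taken from the text; the function names are hypothetical and are not part of the SR Research software:

```python
MEAN_ERROR_LIMIT = 1.5   # degrees of visual angle, mean over the validation points
MAX_ERROR_LIMIT = 2.0    # degrees, worst single validation point
DRIFT_LIMIT = 1.5        # degrees, per-trial drift check
DRIFT_FAIL_STREAK = 3    # consecutive drift-check failures forcing recalibration

def validation_ok(errors_deg):
    """Accept a nine-point validation if the mean error is at most 1.5 degrees
    and no single point exceeds 2 degrees."""
    mean_error = sum(errors_deg) / len(errors_deg)
    return mean_error <= MEAN_ERROR_LIMIT and max(errors_deg) <= MAX_ERROR_LIMIT

def needs_recalibration(drift_errors_deg):
    """Recalibrate after three consecutive drift checks exceeding 1.5 degrees."""
    streak = 0
    for err in drift_errors_deg:
        streak = streak + 1 if err > DRIFT_LIMIT else 0
        if streak >= DRIFT_FAIL_STREAK:
            return True
    return False
```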
Language Assessment: The Test for the Reception of Grammar (TROG-2; Bishop, 2003) was used to measure children's receptive syntactic skills. The test is a sentence-picture matching task with 20 blocks of 4 sentences each. The assessment was conducted and scored following the guidelines set out in the TROG-2 manual.
Executive Function Assessment: We used two tests from the computer-based Examiner battery (Kramer et al., 2014): the Flanker Task to measure inhibitory control and the N-back Task to measure visual working memory. These tasks were edited to suit the age range of this study: text was removed from the presentation, stimuli were enlarged, and presentation time was slowed. The Flanker and N-back tasks were conducted on a 14" Lenovo laptop using the Examiner battery software and PsychoPy (Version 1.73.2; Peirce, 2007). In addition, we used the forward and backward Digit Span task from the Wechsler Intelligence Scale for Children (WISC-V; Wechsler, 2014) as a measure of verbal working memory. The experimenter read digits from a record sheet and the children responded orally.
Procedure: Children took part in two sessions approximately one week apart. In the first session they were administered the language assessment and the executive function assessments over approximately 45 minutes; in the second session they took part in the 20-minute eye-tracking task. Adults took part in the eye-tracking task only. The order of the assessment tasks was kept constant across children: TROG-2, forward and backward digit span, Flanker Task, and N-back Task.
Eye-tracking task: Testing for the children took place on school premises in a quiet space near their classroom. Adults were tested in a university lab and completed the same task as the children. Each participant was told that they would be playing a word and picture game. They were informed that they would see two pictures on either side of the screen and hear a recording of a lady speaking, after which they would choose the picture she was referring to by using the buttons on the gamepad. The participant then practiced pressing the "left" and "right" buttons on the gamepad. Once the experimenter was satisfied that the participant was comfortable with the gamepad, the eye-tracker was set up and the practice session started.

In each practice trial, as in the experimental and filler trials, the picture was displayed for 2000 ms before the sentence onset. From the onset of the final word in a sentence, the participant was able to press one of the two response buttons on the gamepad; once a button was pressed, the visual display disappeared. In the first three practice trials, the participant was shown a tick or a cross after the display disappeared, indicating whether their response was correct or incorrect. If correct, the participant was congratulated and encouraged to carry on. If incorrect, the experimenter explained why the response was incorrect and encouraged the participant to listen carefully and to press a button only once they knew which picture the lady was speaking about. The final three practice trials did not involve the feedback stage.

After completion of the practice stage, the experimental/filler session began. Participants each carried out 36 randomized trials, using each experimental and filler item once. As there were 16 versions of each experimental item, we used 16 item-lists that were balanced for condition, target location and action direction. Each list was used for four participants, meaning that each version of each item was used four times across all participants.
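The list scheme described above amounts to a Latin-square rotation. Here is a minimal sketch under the stated design; this is hypothetical code, not the authors' stimulus-assignment scripts, and it assumes the 16 versions cross condition, target location and action direction:

```python
from itertools import product

# The 16 assumed versions of each item: 2 RC types x 2 head-noun animacies
# x 2 target locations x 2 action directions.
VERSIONS = list(product(["SRC", "ORC"],
                        ["animate", "inanimate"],
                        ["target-left", "target-right"],
                        ["action-left", "action-right"]))
N_ITEMS, N_LISTS = 24, 16

# Latin-square rotation: each list contains every item exactly once, and
# across the 16 lists every item appears in all 16 of its versions.
lists = {
    lst: [(item, VERSIONS[(item + lst) % N_LISTS]) for item in range(N_ITEMS)]
    for lst in range(N_LISTS)
}

# With 64 adult participants, assigning lists round-robin uses each list four
# times, so each version of each item is seen by four participants overall.
assignment = {participant: participant % N_LISTS for participant in range(64)}
```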
Observation unit: Individual
Kind of data: Numeric, Text
Type of data: Experimental data
Resource language: English
Rights owners:
Contact: Macdonald, Ross, ross.macdonald@manchester.ac.uk, University of Manchester (ORCID: unspecified)
Notes on access: The Data Collection is available to any user without the requirement for registration for download/access.
Publisher: UK Data Service
Available Files
Data
Read me