Macdonald, Ross and Brandt, Silke and Theakston, Anna and Lieven, Elena and Serratrice, Ludovica (2021). International Centre for Language and Communicative Development: Discourse and Morpho-syntactic Effects on Children and Adult's Comprehension of Relative Clauses, 2014-2020. [Data Collection]. Colchester, Essex: UK Data Service. DOI: 10.5255/UKDA-SN-853925
The International Centre for Language and Communicative Development (LuCiD) will bring about a transformation in our understanding of how children learn to communicate, and deliver the crucial information needed to design effective interventions in child healthcare, communicative development and early years education.
Learning to use language to communicate is hugely important for society. Failure to develop language and communication skills at the right age is a major predictor of educational and social inequality in later life. To tackle this problem, we need to know the answers to a number of questions: How do children learn language from what they see and hear? What do measures of children's brain activity tell us about what they know? And how do differences between children, and differences in their environments, affect how children learn to talk? Answering these questions is a major challenge for researchers. LuCiD will bring together researchers from a wide range of different backgrounds to address this challenge.
The LuCiD Centre will be based in the North West of England and will coordinate five streams of research in the UK and abroad. It will use multiple methods to address central issues, create new technology products, and communicate evidence-based information directly to other researchers and to parents, practitioners and policy-makers.
LuCiD's RESEARCH AGENDA will address four key questions in language and communicative development: 1) ENVIRONMENT: How do children combine the different kinds of information that they see and hear to learn language? 2) KNOWLEDGE: How do children learn the word meanings and grammatical categories of their language? 3) COMMUNICATION: How do children learn to use their language to communicate effectively? 4) VARIATION: How do children learn languages with different structures and in different cultural environments?
The fifth stream, the LANGUAGE 0-5 PROJECT, will connect the other four streams. It will follow 80 English learning children from 6 months to 5 years, studying how and why some children's language development is different from others. A key feature of this project is that the children will take part in studies within the other four streams. This will enable us to build a complete picture of language development from the very beginning through to school readiness.
Applying different methods to study children's language development will constrain the types of explanations that can be proposed, helping us create much more accurate theories of language development. We will observe and record children in natural interaction as well as studying their language in more controlled experiments, using behavioural measures and correlations with brain activity (EEG). Transcripts of children's language and interaction will be analysed and used to model how these two are related using powerful computer algorithms.
LuCiD's TECHNOLOGY AGENDA will develop new multi-method approaches and create new technology products for researchers, healthcare and education professionals. We will build a 'big data' management and sharing system to make all our data freely available; create a toolkit of software (LANGUAGE RESEARCHER'S TOOLKIT) so that researchers can analyse speech more easily and more accurately; and develop a smartphone app (the BABYTALK APP) that will allow parents, researchers and practitioners to monitor, assess and promote children's language development.
With the help of six IMPACT CHAMPIONS, LuCiD's COMMUNICATIONS AGENDA will ensure that parents know how they can best help their children learn to talk, and give healthcare and education professionals and policy-makers the information they need to create intervention programmes that are firmly rooted in the latest research findings.
Data description (abstract)
When listening to relative clauses (RCs), children show anticipation for a subject relative clause (SRC) rather than an object relative clause (ORC). Research has suggested that changes to discourse interfere with this SRC bias (Yang, Mo & Louwerse, 2012); however, others have argued that these findings were due to lexical priming rather than true discourse effects (Forster & Sicuro Corrêa, 2017). We investigated discourse effects on RC interpretation using ambiguous RCs and preamble sentences with no direct reference to the agents in the target sentence. For example, the target “The man saw the nurse [NP1] with the boy [NP2] who was very tired” was employed after one of these preambles:
“It was a long day… (1) …at the hospital” [NP1-priming] (2) …at the school” [NP2-priming] (3) …that Tuesday” [Neutral]
Forty-eight children (aged 4-6) and 30 adults saw pictures of NP1 and NP2 as they listened to the target sentence, and their eye movements were monitored. We found no evidence of the preambles influencing online processing, and a strong bias for NP2 anticipation, suggesting that syntax, but not discourse, guided processing for both children and adults. We later used unambiguous sentences with varying morphological cues (“The man saw the nurse(s) [NP1] with the boy(s) [NP2] who was/were very tired”) with adults and found that these cues influenced online interpretation, with interference from syntax but not discourse.
Data creators: Ross Macdonald, Silke Brandt, Anna Theakston, Elena Lieven, Ludovica Serratrice
Sponsors: Economic and Social Research Council
Grant reference: ES/L008955/1
Topic classification: Psychology
Keywords: LANGUAGE DEVELOPMENT
Project title: The International Centre for Language and Communicative Development
Alternative title: LuCiD WP10
Grant holders: Elena Lieven, Bob McMurray, Jeffrey Elman, Gert Westermann, Morten H Christiansen, Thea Cameron-Faulkner, Fernand Gobet, Ludovica Serratrice, Sabine Stoll, Meredith Rowe, Padraic Monaghan, Michael Tomasello, Ben Ambridge, Silke Brandt, Anna Theakston, Eugenio Parise, Caroline Frances Rowland, Colin James Bannard, Grzegorz Krajewski, Franklin Chang, Floriana Grasso, Evan James Kidd, Julian Mark Pine, Arielle Borovsky, Vincent Michael Reid, Katherine Alcock, Daniel Freudenthal
Project dates: 1 September 2014 to 31 May 2020
Date published: 26 Aug 2021 16:49
Last modified: 26 Aug 2021 16:50
Collection period: 1 September 2014 to 31 May 2020
Country: United Kingdom
Data collection method:
Ninety children and 125 adults were tested across four experiments. All participants were monolingual native English speakers. Child participants were recruited from Reception and Year 1 classes at two primary schools in the North of England, and adults were recruited from the undergraduate student population at the first author's institution. Each school received a book token as a thank-you for participating, and the undergraduates received course credit. Ethical approval for these experiments was obtained from the University Research Ethics Committee of the first author's institution.
The first three eye-tracking experiments used a within-subjects design with one independent variable with three levels: the type of preamble used before the target sentence (Neutral, NP1-biasing or NP2-biasing). The fourth experiment used a 3x2x2 within-subjects design, in which we additionally varied which noun phrase in the target sentence was plural (NP1 or NP2) and whether the key verb phrase was singular ("was") or plural ("were").
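How items were rotated through the preamble conditions is not specified here; a minimal sketch of one standard approach, a Latin-square assignment across three presentation lists, might look like the following (the three-list scheme, item count and function names are all assumptions, not the authors' documented procedure):

```python
# Hypothetical sketch: rotate 24 items through the three preamble conditions
# across three presentation lists, so each participant hears every item once
# and eight items per condition.
CONDITIONS = ["Neutral", "NP1-biasing", "NP2-biasing"]

def build_lists(n_items=24):
    lists = []
    for rotation in range(len(CONDITIONS)):
        # Shift each item's condition by one position per list.
        lists.append({
            item: CONDITIONS[(item + rotation) % len(CONDITIONS)]
            for item in range(n_items)
        })
    return lists
```

Under this scheme each list contains eight items per condition, and across the three lists every item appears in all three conditions.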
Target sentences The target sentences were ambiguous sentences in which a relative clause could be attached to either of two noun phrases. Target sentences were consistent across conditions in the first three experiments. These sentences had the following structure:
“The man saw the [NP1] with/of the [NP2] who was very [adjective]”
Twenty-four target sentences were produced. Four different heads were used equally across these sentences (“man”, “woman”, “boy”, and “girl”). The NP1 and NP2 positions were filled with 24 different human characters, each used twice across the 24 sentences, but never twice in the same sentence. Each sentence had a unique NP1 and NP2 pairing. Twelve unique adjectives were each used twice across the target sentences.
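These counterbalancing constraints can be sketched as a small stimulus-generation script; the placeholder character and adjective names, the rejection-sampling approach, and the function name are assumptions for illustration, not the authors' procedure:

```python
import random

# Hypothetical sketch of the constraints described above: 24 target
# sentences, 4 heads used equally (6 each), 24 characters each filling an
# NP slot exactly twice but never twice in the same sentence, a unique
# NP1/NP2 pairing per sentence, and 12 adjectives each used twice.
HEADS = ["man", "woman", "boy", "girl"]
CHARACTERS = [f"character{i}" for i in range(24)]   # placeholders, e.g. "nurse"
ADJECTIVES = [f"adjective{i}" for i in range(12)]   # placeholders, e.g. "tired"

def build_targets(seed=0):
    rng = random.Random(seed)
    # Each character must appear exactly twice across the NP slots, so
    # shuffle a doubled deck and reject layouts that repeat a character
    # within a sentence or repeat an NP1/NP2 pairing.
    deck = CHARACTERS * 2
    while True:
        rng.shuffle(deck)
        pairs = list(zip(deck[:24], deck[24:]))
        if all(a != b for a, b in pairs) and len(set(pairs)) == 24:
            break
    targets = []
    for i, (np1, np2) in enumerate(pairs):
        targets.append({
            "head": HEADS[i % 4],              # 6 sentences per head
            "np1": np1,
            "np2": np2,
            "adjective": ADJECTIVES[i % 12],   # each adjective used twice
        })
    return targets
```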
Preamble sentences Three preamble sentences were constructed for each of the target sentences. Examples of the three preambles used for the target sentence, “The man saw the nurse of the boy who was very tired” are below:
(1) It had been a very long day at the hospital.
(2) It had been a very long day at the school.
(3) It had been a very long day that Wednesday.
These preambles either involved a location linked to NP1 (1) or NP2 (2), or involved a time that was linked to neither NP1 nor NP2 (3). In half of the neutral preambles the time referred to was a day (Wednesday, yesterday, etc.) and in the other half it was a time of day (morning, afternoon, etc.). For each target sentence, all of the preamble sentences described the same event at the different locations or times (e.g. “a very long day”). These events provided a potential cause for the adjective in the target sentence (e.g. “tired”).
Sentence displays The target sentences were accompanied by visual sentence displays. These displays featured visual depictions of the three human characters in the target sentence. The first character in the sentence (man, woman, girl or boy) was positioned in the centre of the top half of the display. The characters matched to NP1 and NP2 were positioned in the centre of the lower-left and lower-right quadrants of the screen. For each item, two sentence displays were made: one with the NP1 image on the right and the NP2 image on the left, and one with the opposite arrangement.
Preamble displays Each preamble sentence was also accompanied by a visual display. The preamble displays for the NP1- and NP2-biasing conditions featured a cartoon-depiction of the location referred to in the preamble sentence. The neutral preambles were always accompanied by one of two preamble displays: A picture of a cartoon-calendar if the preamble referred to a day, and a picture of a cartoon-clock if the preamble referred to a time of day.
Questions A question was associated with each item. This question asked the listener to identify the character the adjective applied to, and took the form “Who was very tired?”
All sentences and questions were recorded and edited using Audacity sound editing software. The speaker was a native British-English speaker, with an English accent familiar to our participants. The experiment was scripted and run using the SR Research Experiment Builder software. Eye movement behaviour was captured using a laptop-mounted SR Research EyeLink Portable Duo eye-tracker. This system uses corneal reflection and pupil position to calculate where a participant is fixating. Participants were positioned approximately 50 cm from the monitor and wore target stickers on their heads so that the tracker could follow head position. Calibration involved the participant fixating on nine markers on the screen. Before each trial, participants fixated a marker in the middle of the screen. This "Drift Checking" procedure allowed the experimenter to see the estimated fixation point on their display and required the experimenter to accept the fixation in order to begin the trial. If the error for this procedure exceeded 2 degrees of visual angle on three consecutive trials, the calibration procedure was repeated. A Microsoft Sidewinder gamepad was used for participant responses.
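The drift-check rule (recalibrate once the error exceeds 2 degrees of visual angle on three consecutive trials) amounts to a small counter; in this sketch the function name and the assumption that the counter resets after a recalibration are mine, not stated in the source:

```python
# Hypothetical sketch of the drift-check rule described above.
MAX_ERROR_DEG = 2.0
MAX_CONSECUTIVE = 3

def drift_check_session(errors_deg):
    """Return 0-based trial indices at which a recalibration is triggered,
    given each trial's drift-check error in degrees of visual angle."""
    recalibrations = []
    consecutive = 0
    for i, err in enumerate(errors_deg):
        consecutive = consecutive + 1 if err > MAX_ERROR_DEG else 0
        if consecutive == MAX_CONSECUTIVE:
            recalibrations.append(i)
            consecutive = 0  # calibration repeated, so the count starts over
    return recalibrations
```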
Testing of the children took place at the primary schools, in a quiet area visible to school staff. Adults were tested in a small room at the Child Study Centre at the University of Manchester. Each participant was told that they would be playing a word and picture game: they would see some pictures and hear a lady saying some sentences, after which the lady would ask them a question about one of the sentences. The experimenter told the participant that the answer to the question was either the picture on the right or the picture on the left. The child participants then practised pressing the “left” and “right” buttons on the gamepad to make sure they understood. Once the experimenter was satisfied that the participant was comfortable with the gamepad, the eye-tracker was set up and the participant moved on to the practice trials.
In each practice trial, as well as in the experimental and filler trials, the preamble display was shown as the preamble sentence was played. Afterwards, a blank screen was displayed for 1000 ms, followed by the sentence display. Two thousand milliseconds after the onset of this display, the target sentence began. From the onset of the final word in a sentence, the participant was able to press the two response buttons on the gamepad. Once a button was pressed, the visual display disappeared and the participant was shown a tick or a cross, indicating whether their response was correct or incorrect. If correct, the participant was congratulated and encouraged to carry on. If incorrect, the experimenter explained why the response was incorrect and encouraged the participant to listen carefully and to press a button only once they knew which picture they wanted to choose. After completion of the two practice trials, the experimental/filler session began.
Participants were told they would no longer get feedback for their responses, and that if they were unsure of an answer, they should just give their best guess. Participants each carried out 30 randomized trials, using each experimental and filler item once.
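The trial timeline above can be sketched as an event schedule; only the 1000 ms blank and the 2000 ms display preview are given in the source, so the audio durations and the function name here are assumptions:

```python
# Hypothetical sketch of one trial's timeline as (time_ms, event) pairs.
def trial_schedule(preamble_ms, target_to_final_word_ms):
    """Build the event schedule for one trial.

    preamble_ms: duration of the preamble audio (assumed).
    target_to_final_word_ms: time from target-sentence onset to the onset
    of its final word (assumed), when responses become available.
    """
    events = [(0, "preamble display + preamble audio")]
    t = preamble_ms
    events.append((t, "blank screen"))
    t += 1000                      # 1000 ms blank (from the source)
    events.append((t, "sentence display"))
    t += 2000                      # 2000 ms preview before the audio (from the source)
    events.append((t, "target sentence onset"))
    t += target_to_final_word_ms
    events.append((t, "final word onset: responses enabled"))
    return events
```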
Observation unit: Individual
Kind of data: Numeric, Text
Type of data: Experimental data
Resource language: English
Rights owners:
Contact:
Notes on access: The Data Collection is available to any user without the requirement for registration for download/access.
Publisher: UK Data Service
Available Files
Data
Read me