Grant holders: |
Elena Lieven, Bob McMurray, Jeffrey Elman, Gert Westermann, Morten H Christiansen, Thea Cameron-Faulkner, Fernand Gobet, Ludovica Serratrice, Sabine Stoll, Meredith Rowe, Padraic Monaghan, Michael Tomasello, Ben Ambridge, Silke Brandt, Anna Theakston, Eugenio Parise, Caroline Frances Rowland, Colin James Bannard, Grzegorz Krajewski, Franklin Chang, Floriana Grasso, Evan James Kidd, Julian Mark Pine, Arielle Borovsky, Vincent Michael Reid, Katherine Alcock, Daniel Freudenthal
|
Collection period: |
Date from: | Date to: |
---|---|
6 February 2017 | 8 June 2018 |
|
Country: |
United Kingdom |
Data collection method: |
A preregistered sample size of 122 children (61 per group, randomly allocated) was determined on the basis of a power calculation with d=0.3 and power=0.5 for a between-subjects t-test (using GPower 3.0). Although our analysis plan specified the use of mixed-effects models, it is not possible to run a power analysis for such models without simulated data, and we were not aware of any findings from studies with sufficiently similar methods to form the basis for such a simulation. Although a power greater than 0.5 would have been desirable, a total sample size of 122 was our maximum in terms of time, funding and personnel. We go some way towards mitigating this problem by also including a supplementary, exploratory Bayesian analysis (the decision to add this analysis was taken after the main results were known). A total of 143 children completed the experiment, but 21 were excluded (9 from the Experimental group and 12 from the Control group) for failing to meet the preregistered training criteria set out below. Children were recruited from UK Reception (aged 4-5 years) and Year 1 (aged 5-6 years) classes. The final sample ranged in age from 4;2 to 6;8 (M=5;6, SD=7.7 months; Experimental group: M=64.85 months, SD=7.93; Control group: M=66.54 months, SD=7.44).
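The reported GPower calculation can be approximated with statsmodels. Note that the text does not state the alpha level or sidedness; alpha = .05 one-tailed is an assumption here, chosen because it is the setting under which d = 0.3 and power = 0.5 yield approximately 61 participants per group.

```python
import math
from statsmodels.stats.power import TTestIndPower

# Per-group sample size for an independent-samples t-test with
# d = 0.3 and power = 0.5. alpha = .05, one-tailed, is an ASSUMPTION
# (not stated in the text); it reproduces the reported 61 per group.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.3, alpha=0.05, power=0.5,
    ratio=1.0, alternative='larger')
print(math.ceil(n_per_group))  # round up to a whole participant
```

A two-tailed test at the same alpha (`alternative='two-sided'`) would instead require roughly 86 children per group, which is why the one-tailed setting is the plausible reconstruction.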
Before training, all participants completed the Word Structure test from the fifth edition of the CELF-Preschool 2 UK (Wiig, Secord & Semel, 2004). This is a production test of morphosyntax, in which children are asked to complete sentences to describe pictures (e.g., Experimenter: This girl is climbing. This girl is… Child: Sleeping). The purpose of this test was to allow us to verify that the Experimental and Control groups were matched for general ability with morphosyntax. This was found to be the case (Experimental: M=19.42, SD=3.05; Control: M=19.95, SD=2.79). We did not include a baseline measure of complex-question production because we did not want to give children practice in producing these questions, since our goal was to investigate the impact of relevant training on children who had previously heard no – or extremely few – complex questions.
All participants completed five training sessions on different days. As far as possible, children were tested on five consecutive days, but sometimes this was not possible due to absence; the total span of training (in days) for each child was therefore included as a covariate in the statistical analysis. Each daily training session comprised two sub-sessions, always in the same order: noun-phrase training followed by simple yes/no-question training. The CELF was administered immediately before the first training session on Day 1, and the complex-question test session immediately after the final training session on Day 5.
Noun-phrase (NP) training. The aim of this part of the session was to train children in the Experimental group on complex noun phrases (e.g., the bird who’s happy), resulting in the formation of a complex-noun-phrase schema (the [THING] who’s [PROPERTY]) that could be combined with a simple question schema (Is [THING] [ACTION]ing?) to yield a complex-question schema (Is [the [THING] who’s [PROPERTY]] [ACTION]ing?). On each day, children in the Experimental group heard the experimenter produce 12 such complex noun phrases, and heard and repeated a further 12 such phrases.
NP training took the form of a bingo game, in which the experimenter and child took turns requesting cards from a talking dog toy in order to complete their bingo grids, with the experimenter helping the child by telling her what to say. The dog’s responses were structured such that the child won the bingo game on Days 1, 3 and 5, and the experimenter on Days 2 and 4, resulting in an overall win for the child. In order to provide pragmatic motivation for the use of complex noun phrases (e.g., the bird who’s sad, as opposed to simply the bird), the bingo grid contained two of each animal, with opposite properties (e.g., the bird who’s happy vs the bird who’s sad; the chicken who’s big vs the chicken who’s small), requested on successive turns by the child and the experimenter. Two different versions of the game were created, with different pairings of animals and adjectives; the first was used on Days 1, 3 and 5, the second on Days 2 and 4. The allocation of NPs to the experimenter versus the child, and the order of the trials, were varied within each version, but were not subject to any between-subjects variation: within a particular group (Experimental/Control), all children received identical training.
Children in the Control group received similar training to the Experimental group, except that instead of complex NPs (e.g., the bird who’s happy), they heard and repeated semantically-matched simple adjectival NPs (e.g., the happy bird).
Simple-question training. The aim of this part of the session was to train children on simple questions (e.g., Is the bird cleaning?), resulting in the formation of a simple question schema (Is [THING] [ACTION]ing?) that children in the Experimental group – but crucially not the Control group – could combine with the trained complex-noun-phrase schema (the [THING] who’s [PROPERTY]) to yield a complex-question schema (Is [the [THING] who’s [PROPERTY]] [ACTION]ing?). Simple-question training was identical for the Experimental and Control groups, and took the form of a game in which the child repeated questions spoken by the experimenter, subsequently answered by the same talking dog toy from the NP training part of the session.
The experimenter first explained that “We are going to ask the dog some questions. We’ll see an animal on the card and try to guess what the animal is doing on the other side of the card”. On each trial, the experimenter first showed the face of the card depicting the animal doing nothing in particular and said, for example, “On this one, here’s a bird. I wonder if the bird is cleaning. Let’s ask the dog. Copy me. Is the bird cleaning?”. After the child had attempted to repeat the question, the dog responded (e.g., “No, he’s having his dinner”), and the experimenter turned the card to show an illustration depicting the answer. As in the NP training, two different versions of the game were created, with different pairings of animals and actions; the first was used on Days 1, 3 and 5, the second on Days 2 and 4, with the order of presentation varied within each version. All children, regardless of group, had identical simple-question training. Note that, in order to encourage schema combination, an identical set of animals featured in the NP training (e.g., the bird who’s sad) and the simple-question training (e.g., Is the bird cleaning?).
Test phase: complex questions. The aim of the test phase was to investigate children’s ability to produce complex questions (e.g., Is the crocodile who’s hot eating?) by combining the trained complex-NP and simple-question schemas (Is [the [THING] who’s [PROPERTY]] [ACTION]ing?). Because we were interested in training an abstract schema, rather than particular lexical strings, the target complex questions for the test phase used only animals, verbs and adjectives that were not featured during training.
The game was very similar to that used in the simple-question training, except that children were told “This time you are not going to copy me. I will tell you what to ask, and you can ask the dog”. For each trial, the experimenter held up the relevant card and said, for example, “Two crocodiles: hot and cold [points to each crocodile; one wearing swimwear on a beach, the other wearing winter clothing in snow]. I wonder if the crocodile who’s hot is eating. Ask the dog if the crocodile who’s hot is eating”. Note that this prompt (equivalent to that used in Ambridge et al., 2008) precludes the possibility of the child producing a well-formed question simply by repeating part of the experimenter’s utterance. As before, the dog then answered (e.g., “Yes, he’s having his breakfast”), and the experimenter turned the card to show the relevant illustration. Each child completed 12 test trials in random order.
In order to ensure that both the Experimental and Control groups were made up of children who had successfully completed the training, we followed our preregistered exclusion criteria, which specified that “any child who does not correctly repeat at least half of the noun phrases and at least half of the questions on all five days will be excluded… All children who complete the training and test to criterion (outlined above) will be included, and any who do not will be replaced”. On this criterion, we excluded 21 children.
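The preregistered inclusion rule (correctly repeating at least half of the noun phrases and at least half of the questions on every one of the five training days) can be sketched as a simple filter. The data layout and the per-day totals of 12 items are illustrative assumptions, not part of the preregistration text.

```python
def meets_training_criterion(np_correct, q_correct, np_total=12, q_total=12):
    """Hypothetical sketch of the preregistered inclusion rule.

    np_correct, q_correct: per-day counts of correctly repeated noun
    phrases / questions (one entry per training day). A child is
    included only if they correctly repeat at least half of the noun
    phrases AND at least half of the questions on ALL days; the
    per-day totals of 12 are an assumed layout, not from the text.
    """
    return (all(c >= np_total / 2 for c in np_correct)
            and all(c >= q_total / 2 for c in q_correct))

# A single day below threshold on either measure excludes the child.
print(meets_training_criterion([8, 7, 9, 6, 10], [6, 6, 7, 8, 6]))   # True
print(meets_training_criterion([8, 7, 5, 6, 10], [6, 6, 7, 8, 6]))   # False
```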
All participants produced scorable responses for all trials, with no missing data (i.e., all responses were clearly some attempt at the target question). Presumably this was due to our extensive training and strict exclusion criteria, which ensured that children were competent and confident in putting questions to the talking dog in response to prompts from the experimenter. Responses were coded according to the coding scheme, which also shows the number of responses in each category for each group.
In order to check reliability, all responses were independently coded by two coders: the first and final authors. On the first pass, the coders showed 100% agreement with regard to the classification of responses as correct (1) or erroneous (0); the only disagreements related to the classification of error types (84 cases, for an overall agreement rate of 94.3%). All of these discrepancies related to ambiguities in the coding scheme and, following discussion, were resolved, yielding 100% agreement. |
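The reported agreement rate is simple percent agreement. The trial total is not stated, but 122 children x 12 test trials = 1464 codable responses is the inference that reproduces the reported 94.3% figure from 84 disagreements.

```python
# Percent agreement between the two independent coders.
# The total of 1464 (122 children x 12 test trials) is INFERRED from
# the figures reported in the text, not stated there directly.
n_trials = 122 * 12          # 1464 codable responses
n_disagreements = 84         # error-subtype disagreements, first pass
agreement = 1 - n_disagreements / n_trials
print(f"{agreement:.1%}")    # 94.3%
```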
Observation unit: |
Individual |
Kind of data: |
Numeric, Text, Still image, Audio, Software |
Type of data: |
Experimental data
|
Resource language: |
English |
|