Data collection method:
We selected six objects denoted by basic-level nouns and their associated sounds (car, cow, dog, sheep, telephone, train), suitable for both the adult and infant experiments. The auditory stimuli comprised the spoken words and the objects’ associated sounds. A female native speaker recorded the words in neutral, adult-directed speech (ADS); the associated sounds were selected from the internet. For the infant experiments, a different native speaker recorded the stimuli in infant-directed speech (IDS). The visual stimuli were images of the objects, selected online.

Adult Experiments (1A & 1B) - 1A: Visual Identification Task; a replication of Lupyan and Thompson-Schill (2012). Thirty healthy adults (20 female; age range: 24;10 y to 42;9 y) sat in front of a 19” CRT monitor and responded via button-press on a gamepad. On each trial, participants heard either a word (e.g. ‘cow’) or an associated sound (e.g. a cow mooing) while fixating a central black fixation cross on a grey screen; an image followed after an inter-stimulus interval (ISI) of 1000 ms. The image matched the auditory stimulus 50% of the time, and trial order was randomised. Each image remained on the screen for 2 seconds, and participants were instructed to respond as fast as possible by pressing a match (e.g. cow) or mismatch (e.g. telephone) button on the gamepad. The side (left or right button) of the correct response was counterbalanced across participants. After every response, participants received auditory feedback for correct (a beep) or incorrect (a buzz) responses. As the image disappeared, another trial began. Across trials, each of the six objects was preceded by a word and a sound, on match and mismatch trials, with four repetitions each, yielding 96 verification trials. The experiment lasted approximately five minutes.

1B: Object Recognition Task - Twenty healthy adults sat 50-70 cm in front of the computer screen.
A Tobii X120 eye tracker (Tobii Pro, Stockholm, Sweden) located beneath the screen recorded their gaze at a 60 Hz sampling rate. The eye tracker was first calibrated using a five-point calibration procedure (a shrinking blue-and-red attention grabber) delivered through Matlab® (v. 2013b). The calibration was controlled with a key press and repeated if necessary. Each trial began with a black fixation cross centred on a grey screen for 1000 ms, after which an auditory stimulus was played, a word (e.g. dog) or a sound (e.g. a dog barking), while the fixation cross remained on the screen. The visual stimulus, depicting two objects simultaneously – a target (e.g. dog) and a distractor (e.g. train) – appeared after a 1000 ms ISI and remained on the screen for 2000 ms while the eye tracker recorded participants’ gaze. After 2000 ms the image disappeared, and another trial began. The sides of the target and distractor were counterbalanced, resulting in one block of 24 trials. The experimental block was repeated 4 times, yielding 96 trials in total. The order of trials within a block was randomised, and varied across participants. The experiment lasted approximately 9 minutes.

Infant Experiments (2A, 2B, 2C) - In Exp. 2A, thirty-two healthy 9-month-old infants (15 girls; age range: 8m13d to 9m28d) took part in the study. In Exp. 2B, there were thirty-two 12-month-olds (18 girls; age range: 11m14d to 12m27d), and in Exp. 2C, twenty-three 18-month-olds (11 girls; age range: 17m14d to 18m21d). An additional forty infants took part in the study but were not included in the final sample due to an insufficient number of trials per condition (word or sound; n=35), no familiarization phase (n=1), participating twice (at 9 and 12 months; n=1), low birth weight (<2500 g; n=2), or premature birth (<37 weeks of gestation; n=1).
We adapted the procedure of Experiment 1B for infants by adding a familiarization phase (a slide presentation (Microsoft Office 2016) on a 7.9” iPad mini tablet) and by increasing the duration of the fixation cross to 3000 ms. During this time, caregivers were encouraged to maintain infants’ attention and interest in the task by saying, for instance, “Oh look!” or “Look ….”. Infants sat on their caregivers’ laps, and caregivers were asked to sit at a 90° angle from their infant to ensure that the eye tracker recorded only the infant’s eye movements, and to facilitate interaction between trials. Caregivers were also instructed to avoid verbal communication, pointing to the screen, or naming the objects while the auditory and visual stimuli were displayed. The visual stimulus remained on the screen for 4.5 seconds while the eye tracker recorded infants’ gaze. After 4.5 seconds, the image disappeared, and another trial began. Infants were presented with one block of 24 trials in total. Breaks were taken when needed, and the experiment lasted approximately 5 minutes.
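The factorial trial structures described above (Exp. 1A: 6 objects × 2 cue types × 2 match conditions × 4 repetitions = 96 trials; Exp. 1B and 2A-2C: 6 targets × 2 cue types × 2 target sides = 24 trials per block, with 4 blocks for adults and 1 for infants) can be sketched as follows. This is an illustrative reconstruction, not the authors' experiment code; in particular, the distractor-assignment rule is an assumption made for the sketch.

```python
# Illustrative sketch of the trial designs described in the text.
# All variable names are hypothetical; only the counts and factors
# (objects, cue types, match/mismatch, target side) come from the text.
import itertools
import random

OBJECTS = ["car", "cow", "dog", "sheep", "telephone", "train"]

def build_trials_1a(seed=None):
    """96 verification trials for Exp. 1A, presented in random order."""
    rng = random.Random(seed)
    trials = []
    for obj, cue, match, _rep in itertools.product(
            OBJECTS, ("word", "sound"), (True, False), range(4)):
        # On mismatch trials the image shows a different object than the cue.
        image = obj if match else rng.choice([o for o in OBJECTS if o != obj])
        trials.append({"cue": obj, "cue_type": cue,
                       "match": match, "image": image})
    rng.shuffle(trials)  # trial order randomised
    return trials

def build_blocks_1b(n_blocks=4, seed=None):
    """Blocks of 24 looking trials for Exp. 1B (n_blocks=1 for Exp. 2A-2C)."""
    rng = random.Random(seed)
    blocks = []
    for _ in range(n_blocks):
        block = []
        for target, cue, side in itertools.product(
                OBJECTS, ("word", "sound"), ("left", "right")):
            # Distractor pairing is an assumption for illustration only.
            distractor = rng.choice([o for o in OBJECTS if o != target])
            block.append({"target": target, "cue_type": cue,
                          "target_side": side, "distractor": distractor})
        rng.shuffle(block)  # order randomised within each block
        blocks.append(block)
    return blocks
```

Calling `build_trials_1a()` yields 96 trials with 48 matches, and `build_blocks_1b(n_blocks=1)` yields the single 24-trial infant block.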