Collection period: |
Date from: | Date to: |
---|
4 January 2011 | 3 October 2014 |
|
Country: |
United Kingdom |
Data collection method: |
The data consist of scores on four core tasks (described below): 1) picture naming as a measure of children’s ability to retrieve and produce words in response to a picture, 2) word-picture verification as a measure of the children’s knowledge of the concept in each picture, 3) picture judgement using a subset of the naming items, designed as a measure of associative semantics, 4) nonword repetition to explore the children’s phonological abilities when lexical processing is not required. Scores on background assessments are also included. The background assessments are simple and choice reaction time (described below), and non-verbal ability and receptive vocabulary, assessed using the standardised Pattern Construction subtest of the British Ability Scales Second Edition (Elliot, Smith & McCullouch, 1996) and British Picture Vocabulary Scale III (Dunn, Dunn & Styles, 1997) respectively.
Details of core tasks and simple and choice reaction time tasks:
1. Picture naming: The stimuli were 72 black and white line drawings of objects from Funnell, Hughes and Woodcock (2006). The objects were from four categories, with 18 items in each. Two categories represent living things and two represent artefacts. The task was programmed using the experimental software DMDX (Forster & Forster, 2003) running on a laptop computer with a 15.4” screen. Naming responses were recorded using an external microphone connected to the laptop. Items were presented in one session divided into three blocks of 24 items each. Children were asked to provide a single word for each picture. Four fixed randomized orders were rotated across children. No more than two objects from the same category appeared in succession. Trials were controlled by the researcher and consisted of presentation of a fixation cross for 500 msecs, then presentation of a picture for a maximum of 5000 msecs in the case of typically developing (TD) children and 10000 msecs in the case of children with word finding difficulty. Three items, not used in the main testing session, were presented for practice. Feedback on accuracy was given only during practice trials. Responses were also noted on a scoresheet at the time of testing and checked later against the recording. The CheckVocal software programme (Protopapas, 2007) was used to obtain naming latencies.
2. Word-Picture Verification Task (WPVT): The stimuli were the pictures from the naming task. On each trial a picture was presented on the computer together with a pre-recorded spoken word. On one occasion the picture was presented with its correct verbal label and on another the picture was presented with a semantically related word. Children were asked to decide if the spoken word they heard corresponded to the picture or not. Seventy-two object names were selected that were semantically related to the objects depicted in the Funnell et al. pictures. The task was run on a laptop computer with a 15.4” screen and was programmed using the software DMDX (Forster & Forster, 2003). There were two testing sessions, with individual target pictures appearing once in each session. In one session the picture appeared with its name, and in the other with the semantically related word. The 72 items in each testing session were split into three blocks of 24 items each with a rest pause between blocks. Children were asked to press designated response buttons on the keyboard. Responses were scored correct only if the child accepted the correct name and rejected the semantically related word. Three practice trials were presented with stimuli that were not included in the main testing session. Feedback was given after the practice trials only. Four fixed random orders of stimuli were rotated across participants. Each trial began with the presentation of a fixation cross in the centre of the screen for 500 msecs. The picture preceded the audio file by 16.62 msecs.
3. Picture judgement task of associative semantics (PJs): On each trial three pictures depicting objects were presented on the computer screen: a target together with two pictures underneath. One of the two pictures presented in the lower part of the screen had an associative semantic relationship to the target, the second came from the same semantic category as the first. Sixty-nine pictures depicting items from the Funnell et al. (2006) and Druks and Masterson (2000) picture sets were selected from the Shutterstock website. The task was administered using a laptop computer with a 15.4” screen and it was programmed using Visual Basic software. There were three practice trials using items that did not appear in the main session and twenty trials in the main task. A fixation point appeared at the start of each trial. Children were asked to choose which of the two items in the lower part of the screen fitted best with the item at the top using designated response keys covered with stickers. Feedback on accuracy was only given during practice trials.
4. Nonword Repetition: The Children's Test of Nonword Repetition (Gathercole & Baddeley, 1996) was used to assess phonological abilities. The nonwords were administered singly for repetition according to manual instructions. Responses were recorded using a scoresheet at the time of testing and were checked later from the audio-recording.
Simple and Choice Reaction Time: Computerized tasks of simple and choice reaction time were adapted from Powell, Stainthorp, Stuart, Garwood and Quinlan (2007) and programmed on a laptop computer with a 15.4” screen using the DMDX software (Forster & Forster, 2003). The simple reaction time task measured the time taken to make a key press response following the appearance of a target on the screen. Target stimuli were six different coloured drawings of monster characters. The pictures and instructions appeared on the screen and the instructions were read out by the researcher. There were six trials for practice followed by two blocks of 18 trials each. Each trial started with the presentation of a fixation cross in the centre of a white screen, followed by a lag and then the appearance of the target stimulus. The duration of the lag varied, and was either 300, 600 or 900 msecs. The lag times were randomised across trials and presentation of the six target stimuli was also randomised across trials. The target stimuli remained on the screen for 1500 msecs.
In the choice reaction time task children were asked to decide which of two stimuli appeared on the computer screen, and to press the appropriate response key on the computer keyboard as quickly as possible. The targets were two dinosaur pictures (one green and one orange) from the Shutterstock website. Children were asked to press the left Ctrl button as soon as the green dinosaur appeared, or the right Ctrl button if the orange dinosaur appeared. Green and orange stickers were placed on the two buttons. There were six practice items, with half containing the orange and half the green dinosaur. A black fixation cross appeared in the middle of the white screen for 500 msecs followed by the target stimuli. Lag times varied in randomised order, as did appearance of either the orange or green dinosaur. The lag times were 300, 600 or 900 msecs. The target stimulus remained on the screen for 1500 msecs. There were two blocks of 18 trials each in the main test session.
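The trial structure shared by the two reaction time tasks (a 500 msec fixation cross, a lag of 300, 600 or 900 msecs chosen at random, then a target displayed for 1500 msecs) could be sketched as follows. This is an illustrative reconstruction only: the stimulus names and the make_trials helper are assumptions for exposition, not part of the original DMDX scripts.

```python
import random

FIXATION_MS = 500          # fixation cross duration
LAGS_MS = (300, 600, 900)  # lag durations, randomised across trials
TARGET_MS = 1500           # target remains on screen for 1500 msecs

def make_trials(stimuli, n_trials):
    """Build one block of trials, pairing a randomly chosen stimulus
    with a randomly chosen lag duration on each trial."""
    return [(random.choice(stimuli), random.choice(LAGS_MS))
            for _ in range(n_trials)]

# Two blocks of 18 trials each, as in the choice reaction time task.
stimuli = ["green_dinosaur", "orange_dinosaur"]
trials = make_trials(stimuli, 18) + make_trials(stimuli, 18)
```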
Sampling procedure:
The participants were children aged four to eight years with typically developing language (TD) and children with word finding difficulty (WFD) aged six to eight years. Group membership is coded in the datafile ‘Bestetal_TD_WFD_2015-08-03’ in the column ‘Group’ as TD=typically developing language, WFD=word finding difficulty. Children were recruited at nine urban mainstream primary schools with mixed catchment areas. The youngest children (four-year-olds) in the TD sample were attending nurseries at the schools. The geographical location comprised two London boroughs and one authority bordering London. Information and consent letters were distributed (examples are included in the folder ‘Bestetal_2015-03-08 (Lexical retrieval difficulties)’) and once parental/carer consent for participation was obtained children were asked to complete the Pattern Construction subtest from the British Ability Scales Edition II (BAS-II, Elliot, Smith & McCullouch, 1996). Children were excluded from the sample if they had a score that fell below the average range in the test. The final sample consisted of 102 TD children aged 4;00 to 8;06, and 24 children with WFD aged from 6;03 to 8;07.
In the TD sample there were 11 children aged 4;00-4;05, 11 aged 4;06-4;11, 12 aged 5;00-5;05, 10 aged 5;06-5;11, 11 aged 6;00-6;05, 12 aged 6;06-6;11, 10 aged 7;00-7;05, 12 aged 7;06-7;11, and 13 aged 8;00-8;06. Approximately half the children in each six-month age band were male and half were female (7 female and 4 male aged 4;00-4;05, 5 female and 6 male aged 4;06-4;11, 6 female and 6 male aged 5;00-5;05, 4 female and 6 male aged 5;06-5;11, 3 female and 8 male aged 6;00-6;05, 7 female and 5 male aged 6;06-6;11, 7 female and 3 male aged 7;00-7;05, 4 female and 8 male aged 7;06-7;11, and 7 female and 6 male aged 8;00-8;06). There were 50 girls and 52 boys in total in the TD group. Gender is coded in the datafile ‘Bestetal_TD_WFD_2015-08-03’ in the column ‘Gender’ as F=female, M=male. In the group of children with WFD there were 5 children aged 6;00-6;05 (1 girl and 4 boys), 5 aged 6;06-6;11 (2 girls and 3 boys), 7 aged 7;00-7;05 (2 girls and 5 boys), 6 aged 7;06-7;11 (4 girls and 2 boys), and one aged 8;07 (male). There were a total of 9 girls and 15 boys in the WFD group.
Children with WFD were referred to the study by the Special Educational Needs Co-ordinators/Inclusion Managers at their schools. Following referral, the same initial recruitment procedure was followed as for the TD children. That is, information and consent letters were distributed (examples are included in the folder ‘Bestetal_2015-03-08 (Lexical retrieval difficulties)’), and once parental/carer consent for participation was obtained children were asked to complete the Pattern Construction subtest from the BAS-II. Children were also asked to complete the Test of Word Finding Second Edition (German, 2000). The criteria for inclusion in the WFD group were that children had a score that was at least in the average range in the nonverbal ability test (as for the TD group), that they demonstrated a word finding standard score of below 90 and comprehension score within the normal range in the Test of Word Finding, and they did not have a diagnosis of dyspraxia, ASD, ADHD or global developmental delay.
Of the TD children, 82 spoke English as their sole or main language at home, and 20 spoke English and regularly spoke one or more additional languages at home. Nineteen of the children with WFD spoke English as the sole/main language at home, and five regularly spoke a language other than English at home. This is coded in the datafile ‘Bestetal_TD_WFD_2015-08-03’ in the column ‘Additional Language’ as 0=English main or sole language, 1=additional language.
All the children were seen by the researchers individually at school for purposes of completing the tasks.
References:
Druks, J. and Masterson, J. (2000). Object and Action Naming Battery. Hove: Psychology Press.
Dunn, L. M., Dunn, D. M. and Styles, B. (1997). British Picture Vocabulary Scale III. Windsor: NFER NELSON.
Elliot, C. D., Smith, P. and McCullouch, K. (1996). British Ability Scales Second Edition. Windsor: NFER NELSON.
Forster, K. I. and Forster, J. (2003). DMDX: A Windows display program with millisecond accuracy. Behavior Research Methods, Instruments, & Computers, 35 (1), 116-124.
Funnell, E., Hughes, D. and Woodcock, J. (2006). Age of acquisition for naming and knowing: A new hypothesis. The Quarterly Journal of Experimental Psychology, 59 (2), 268-295.
Gathercole, S. and Baddeley, A. (1996). The Children's Test of Nonword Repetition. London: Psychological Corporation.
German, D. (2000). Test of Word Finding, Second Edition (TWF-2). Pearson.
Powell, D., Stainthorp, R., Stuart, M., Garwood, H. and Quinlan, P. (2007). An experimental comparison between rival theories of rapid automatized naming performance and its relationship to reading. Journal of Experimental Child Psychology, 98 (1), 46-68.
Protopapas, A. (2007). CheckVocal: A program to facilitate checking the accuracy and response time of vocal responses from DMDX. Behavior Research Methods, 39 (4), 859-862.
|
Observation unit: |
Group, Individual |
Kind of data: |
Numeric |
Type of data: |
Experimental data
|
Resource language: |
English |
|
Data sourcing, processing and preparation: |
Data were anonymised by providing a numerical code for each participant. Data for each participant comprised the following:
1. Picture naming data, analysed using the CheckVocal software (Protopapas, 2007). Data consisted of number of pictures named correctly/72 and average and median naming times calculated across correct responses. Incorrect naming responses were categorised according to five error types: semantic (e.g., lion named as “tiger”), phonological (e.g., caravan named as “carara”), mixed (e.g., tractor named as “truck”), perceptual (e.g., nest named as “hedgehog”), and other (no response, unrelated response, unspecified noun such as “stuff”). The number of each of the error types, as well as the percentage of a child’s total errors represented by each error type, is recorded.
2. Word-picture verification task data, extracted from files constructed using DMDX software (Forster & Forster, 2003). Scores comprised a) a ‘combined’ accuracy score/72, where, for a given picture, accuracy depended on the correct acceptance of the matching picture name and correct rejection of the non-matching picture name, b) total number of trials correct/144, c) average response time across correct responses, d) median response time across correct responses.
3. Picture judgement task data were extracted from files constructed using Visual Basic software and consisted of the number of trials correct/20, the percentage of trials correct, and the average and median response time across correct responses.
4. Nonword repetition data comprised number of nonwords repeated correctly/40.
5. Simple reaction time data were extracted from files constructed using DMDX software (Forster & Forster, 2003). Average response time and median response time across responses were calculated (accuracy was at ceiling in this task).
6. Choice reaction time data were extracted from files constructed using DMDX software (Forster & Forster, 2003). Scores comprised number of trials correct/36 as well as average and median response times across correct responses.
7. Pattern construction ability standard and percentile scores were derived using the test manual instructions from the British Ability Scales Edition II (Elliot, Smith & McCullouch, 1996).
8. Receptive vocabulary standard and percentile scores were derived using the test manual instructions from the British Picture Vocabulary Scales Edition III (Dunn, Dunn & Styles, 1997).
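As an illustration of how the summary scores in items 1–6 above could be derived, the sketch below works from hypothetical trial-level records. The variable names and data layout are assumptions for illustration only; the archived datafile holds per-child summary scores rather than raw trials.

```python
from collections import Counter
from statistics import mean, median

# Hypothetical naming errors for one child: percentage of the child's
# total errors represented by each error type (item 1).
errors = ["semantic", "other", "semantic", "phonological", "mixed"]
counts = Counter(errors)
error_pct = {etype: 100 * n / len(errors) for etype, n in counts.items()}

# Word-picture verification 'combined' score (item 2): a picture counts
# as correct only if its name was accepted AND the semantically related
# word was rejected. Values are (match correct, mismatch correct).
wpvt = {"lion": (True, True), "nest": (True, False), "spoon": (False, True)}
combined = sum(1 for match, mismatch in wpvt.values() if match and mismatch)

# Average and median response time across correct responses only, as
# used for the naming, verification, judgement and reaction time scores.
rts = [(412, True), (635, False), (388, True), (501, True)]
correct_rts = [rt for rt, ok in rts if ok]
avg_rt, med_rt = mean(correct_rts), median(correct_rts)
```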
Datafiles received a final check for accuracy by two of the WORD project team working together between August and October 2014.
Related resources:
Information concerning analyses of the data, computational modelling, and intervention for a subset of the sample of children with word finding difficulty can be found on the website created specifically for the project (http://sites.google.com/site/wordfinding/).
References:
Dunn, L. M., Dunn, D. M. and Styles, B. (1997). British Picture Vocabulary Scale III. Windsor: NFER NELSON.
Elliot, C. D., Smith, P. and McCullouch, K. (1996). British Ability Scales Second Edition. Windsor: NFER NELSON.
Forster, K. I. and Forster, J. (2003). DMDX: A Windows display program with millisecond accuracy. Behavior Research Methods, Instruments, & Computers, 35 (1), 116-124.
Protopapas, A. (2007). CheckVocal: A program to facilitate checking the accuracy and response time of vocal responses from DMDX. Behavior Research Methods, 39 (4), 859-862.
|
Rights owners: |
Name |
Affiliation |
ORCID (as URL) |
Best Wendy |
University College London |
|
Thomas Michael |
Birkbeck, University of London |
|
Masterson Jackie |
UCL Institute of Education |
|
|
Contact: |
Name | Email | Affiliation | ORCID (as URL) |
---|
Masterson, J. | j.masterson@ioe.ac.uk | UCL Institute of Education | Unspecified |
|
Publisher: |
UK Data Archive
|
Last modified: |
06 Apr 2017 09:24
|
|