Lipreading training in deaf children

Principal investigator: Mairéad MacSweeney (UCL)

Funders: Wellcome Trust, Economic and Social Research Council 


Project description:

Many deaf children find learning to read very challenging. This project evaluated a computerised speechreading (lipreading) training programme in a randomised controlled trial to determine (a) whether it is possible to train speechreading in deaf children and (b) whether speechreading training results in improvements in phonological skills and, subsequently, in single word reading skills.

Sixty-six deaf children aged 5 to 7 years took part in the study; all had become deaf before 12 months of age. Children were assigned either to a speechreading intervention or to a maths training intervention (active control group). The interventions consisted of computerised games designed to run across forty-eight 10-minute sessions: one session per day, 4 days a week, for 12 weeks.


Screenshots from the computer games used in the speechreading (a, b) and maths (c, d) interventions.

In the speechreading intervention, children watched a silent video of a model saying a given word and had to choose the corresponding picture from a set of four. They also had to match visual speech patterns with letters and words: for example, watching a video of a model speaking a phoneme (e.g., ‘p’) and choosing the corresponding letter. In the maths training intervention, children were presented with counting and arithmetic exercises.

Results showed a significant benefit of the speechreading intervention on children’s speechreading and on their speech production accuracy. Importantly, the speech production measure took into account both auditory speech and visual speech (i.e., articulation accuracy). The benefits of speechreading training were stronger when only children who completed the full 48 sessions were included in the analyses. These benefits, however, did not filter through to improved single word reading.

Although no effects were seen on reading skills, practitioners and parents of deaf children are likely to be interested in a game that can produce speechreading gains in deaf children. Future research should explore whether the intervention is more effective for children whose speechreading skills are particularly low, and whether a longer training period would lead to subsequent gains in reading.

 

Paper:

Pimperton, H., Kyle, F., Hulme, C., Harris, M., Beedie, I., Ralph-Lewis, A., … & MacSweeney, M. (2019). Computerized Speechreading Training for Deaf Children: A Randomized Controlled Trial. Journal of Speech, Language, and Hearing Research, 62(8), 2882-2894.

See a demonstration of the games here:

http://star-demo.research.sc/

Generating words: semantic processing in bimodal bilingual adults

Principal Investigators: Mairéad MacSweeney (UCL), Eva Gutierrez (UCL) and Chloë Marshall (UCL-IOE)

Funder: IOE/UCL Strategic Partnership Research Innovation Fund

Project description: A ‘semantic fluency task’ requires participants to produce as many words as they can from a specific category (e.g., animals) in a limited period of time. This task has a basic science function in that it indexes the organisation of words in an individual’s semantic network. The task also has an applied function in the diagnosis of neurological disorders and is increasingly used in educational contexts, for example as part of a battery of tests for the diagnosis of dyslexia.

This project investigates semantic fluency in a group who have not previously been investigated: hearing bimodal bilingual adults. These individuals were born to Deaf parents, from whom they learnt British Sign Language (BSL), and they also grew up learning the surrounding spoken language, English. This unique group of people, termed CODAs (Children of Deaf Adults), will allow us to investigate the impact of knowing a sign language (BSL) on the organisation of the spoken lexicon. We will use the semantic fluency task to investigate their semantic organisation at a behavioural level, in terms of the pattern and number of participants’ responses. We will also measure hemispheric lateralisation during performance of the task using functional transcranial Doppler sonography (fTCD).

The study will provide unique information about how the brain deals with two languages – which in this case are delivered in different modalities. Our primary hypothesis is that knowing a sign language will influence word generation in English, and that this will be reflected in the behavioural and neuroimaging data.

Papers

Language lateralization of hearing native signers: A functional transcranial Doppler sonography (fTCD) study of speech and sign production

Multisensory Learning


Our world is noisy and distracting, filled with a multitude of sights and sounds: the television is on while we talk on the phone, street sounds surround us as we navigate a map, and people talk to each other as we try to attend to a specific conversation. To walk into a toy shop is to be overwhelmed with sights, sounds, and even smells. Clearly, children are stimulated and excited by information from multiple sensory modalities.

But what is best for their learning? Research from psychology has shown that adults learn better if they are given information in different sensory modalities at the same time. This finding has been used as the basis for many childhood educational programmes in literacy and numeracy. But there has been no systematic investigation into whether children learn better from information presented in different sensory modalities, or whether there are individual differences in this ability. To take advantage of multimodal stimuli, a learner has to be able to pay attention to one thing and not another, and to switch attention when required. These sophisticated skills – inhibitory control, selective attention and cognitive flexibility – develop slowly throughout childhood, and some children develop more slowly than others. Preliminary results show that, as a consequence, children can struggle to learn from multimodal information.

The ‘Multisensory’ grant, funded by the Economic and Social Research Council, aimed to identify when and where children have difficulty with multimodal information, and to help develop materials that are tailored to their cognitive and perceptual development.

Four main research questions have been addressed by the project, and a summary of the findings can be found in the presentation below.



The team



Publications

Broadbent, H., Osborne, T., Mareschal, D., Kirkham, N.Z. (under review). Are two cues always better than one? The role of multiple intra-sensory cues compared to multi-sensory cues in children’s learning. Cognition.

Broadbent, H., Osborne, T., White, H., Mareschal, D., Kirkham, N.Z. (2019) Touch and look: The role of visual-haptic cues for categorical learning in children. Infant and Child Development, doi:10.1002/icd.2168.

Broadbent, H., Osborne, T., Mareschal, D., Kirkham, N.Z. (2018) Withstanding the test of time: multisensory cues improve the delayed retention of incidental learning. Developmental Science, doi: 10.1111/desc.12726 

Broadbent, H., Osborne, T., Rea, M., Peng, A., Mareschal, D., Kirkham, N. (2018) Incidental category learning and cognitive load in a multisensory environment across childhood. Developmental Psychology, 56(6), 1020-1028. doi: 10.1037/dev0000472.

Broadbent, H., White, H., Mareschal, D., Kirkham, N. (2017) Incidental learning in a multisensory environment across childhood. Developmental Science, 21(2), e12554. doi:10.1111/desc.12554.

Kirkham, N. Z.,  Rea, M., Osborne, T., White, H. & Mareschal, D. (2019) Do cues from multiple modalities support quicker learning in primary school children? Developmental Psychology, 55, 2048-2059. doi: 10.1037/dev0000778

Massonnié, J., Rogers, C. J., Mareschal, D. & Kirkham N. Z. (2019) Is classroom noise always bad for children? The contribution of age and selective attention to creative performance in noise. Frontiers in Psychology, 10, 381.  doi.org/10.3389/fpsyg.2019.00381.

Peng, A., Kirkham, N. Z., & Mareschal, D. (2018) Information processes of task-switching and modality-switching across development. PLoS ONE, 13(6), e0198973.

Peng, A., Kirkham, N. Z., & Mareschal, D. (2018) Task switching costs in preschool children and adults. Journal of Experimental Child Psychology, 172, 59-72. doi: 10.1016/j.jecp.2018.01.019