Dr. Natasha Kirkham (Birkbeck, University of London) gave a seminar updating us on the findings of her current project investigating the impact of multi-sensory approaches to learning in the classroom. Dr. Kirkham’s work investigates what guides attention and supports learning from infancy into early childhood. Recently, she has focused on learning occurring in naturalistic settings, amidst all the noise and distraction of real-life environments.
Children’s formal learning in the classroom takes place in dynamic multi-sensory environments, which can be noisy, distracting and occasionally chaotic. Sometimes the information provided is mutually supportive (e.g., consistent or redundant cues), but at other times it can be de-correlated (independent cues), or even contradictory (conflicting cues). Prior research has shown that multi-sensory information can sometimes facilitate learning in infants (Bahrick & Lickliter, 2000; Lewkowicz, 2000; Richardson & Kirkham, 2004; Wu & Kirkham, 2010) and adults (e.g., Shams & Seitz, 2008; Frassinetti, Bolognini, & Ladavas, 2002).
Consequently, the idea that information received simultaneously from multiple modalities is ‘supportive’ of learning has been used as the basis for educational programs in literacy and numeracy, dealing with both typically and atypically developing children (Bullock, Pierce, & McClelland, 1989; Carbo, Dunn, & Dunn, 1986; Luchow & Sheppard, 1981; Mount & Cavet, 1995).
And yet, beyond its intuitive appeal, there has been no systematic investigation of the effects of multi-sensory stimuli on school-aged children’s basic learning (Barutchu, Crewther, Fifer, Shivdasani, Innes-Brown, Toohey et al., 2011).
Dr. Kirkham presented evidence from her team’s latest work looking at the pros and cons of multimodal information in a learning setting, focusing on the modalities of sight and sound. Thus far, they have used two tasks to tap multi-sensory learning. Both involve learning new categories using audio and visual features.
In the first task (run in collaboration with Prof. Denis Mareschal), the goal is explicit – figure out the categories! In one condition, the clues to the categories lie in auditory features; in a second, in visual features; and in a third, in both auditory and visual features together. The results showed that redundant multi-sensory (audio-visual) information offers only a small learning benefit beyond uni-sensory information (audio or visual alone), and only in the youngest age group. In fact, while 5-year-olds show some benefit from multi-sensory information, by 10 years of age children perform best in the auditory-alone condition.
The second task (run in collaboration with Dr. Hannah Broadbent) is similar in all respects except that it is an incidental learning task: children are asked to press a button every time a frog appears on the screen. There are two categories of frogs, defined, as before, by visual, auditory or audiovisual features, but the categories are irrelevant to the task at hand – the children just have to spot the frog! Afterwards, children were asked to identify the categories. In this study, all age groups (5-, 7-, and 10-year-olds) were significantly better at identifying the (irrelevant) frog categories when the categories were marked by multi-sensory cues rather than by visual or auditory features alone.
So, as the team investigates the possible benefits of multi-sensory learning, a more complex picture is emerging: the benefits depend on the type of learning and on the age of the child. Multi-sensory presentation may be best for incidental learning, whereas for explicit learning it may be advantageous only for younger children. The project is still ongoing.