Alexander Khalil
Victor H. Minces
John Iversen
Gabriella Musacchia
T. Christina Zhao
Andrea A. Chiba
While the prioritisation of science, technology, engineering and mathematics (STEM) is a logical step in the effort to develop curricula that meet the increasing technical demands of society, methods of training broad cognitive and pro-social skills such as communication, cooperation, attention and creativity remain elusive, yet such skills are critical to the development of a dynamic workforce and a healthy society. A growing body of evidence suggests that the practice and study of music may be one such method. The present chapter examines ways in which the practice of music may support education by driving aspects of cognitive development, while also calling attention to the fact that music learning, cognitive development and education are themselves inextricably connected to their socio-cultural context. This fact holds important implications both for scientific research on music and for the appropriate implementation of music in K-12 curricula.
A new focus on improving educational outcomes through a well-rounded education includes a directive for parity between arts education and science, technology, engineering and mathematics (STEM) education in our nation’s schools. Thirty years of vacillating US policy over the inclusion of the arts in K-12 curricula culminated in the Every Student Succeeds Act (ESSA), signed by President Obama in December 2015. The act echoed the sentiment of the 2006 UNESCO conference that the arts – beyond contributing to the creation of a multifaceted individual – are an essential component of education.
The present chapter examines ways in which the practice of music may support education by driving aspects of cognitive development, while also calling attention to the fact that music learning, cognitive development and education are themselves inextricably connected to their socio-cultural context. This fact holds important implications both for scientific research on music and for the appropriate implementation of music in K-12 curricula.
Education primarily takes place in classroom settings, which presents the challenge of unifying individuals into a successful group. Interacting effectively in a learning environment is not trivial: individuals must fluidly regulate attention, integrating the actions under their personal control with the overall dynamics of the group. Fundamental to this process is the individual’s ability to generate predictions based on perceived temporal patterns in, for example, the speech and gestures of others in the group. These temporal predictions allow enhanced processing at key points in time, thereby increasing both the accuracy and the efficiency of taking in and conveying information in a classroom setting (Jones and Boltz, 1989[1]). The ability to integrate into a group is thus an important skill set, one that is refined through interactions that occur countless times throughout the day within a socio-cultural context and is therefore strongly affected by that environment. One uniquely expressive element of any socio-cultural environment – one that is intrinsically temporal and centrally involves group integration – is the practice of music.
Throughout recorded history, music has been a significant and participatory component of human culture. Beyond being a means for expressing emotion, defining ethnic identity and accompanying a variety of activities and ceremonies, music is a form of communication. As such – from nursery rhymes to concert band – music has always been a part of traditional education systems. For example, both Plato in ancient Greece and Confucius in ancient China emphasised music as central to education (reviewed in Park, 2015[2]; Stamou, 2002[3]), and today music is practised in classrooms from Indonesia to Norway.
Recent shifts in educational policy, as manifested in the United States by the ESSA, call for “providing all students access to an enriched curriculum and educational experience” (ESSA, 2015[4]). Whereas this law, and policies like it in other nations, provides some impetus for the inclusion of music in education, the role of music in contemporary education remains unclear and often contentious. Instead, skills that apply directly to the needs of an increasingly technical global workforce, such as computing and mathematics, are readily prioritised. Below, however, we will see that instruction in STEM areas can be augmented by specialised programmes that connect STEM skills and concepts to music (Minces et al., 2016[5]). Nevertheless, a main source of contention is whether musical skills transfer: do cognitive, behavioural and cultural skills and fluencies gained through music practice transfer to domains outside of music? Simply put, does music learning improve academic learning and personal and life outcomes? And, if so, how can this guide its implementation in school curricula?
One of the most prevalent theories of music transfer concerns structural similarities between music and language. Music and language share many features that may enable benefits from music training to carry over to language processing. Briefly, both are systems of communication in which simple sounds are combined according to hierarchical rules to create larger units such as words, phrases and paragraphs. Both use variations in pitch and timing to communicate meaning. In the time domain, music and speech both comprise temporal events, or “rhythms”, that take place at multiple, nested timescales ranging from tens of milliseconds to tens of minutes or even hours. Rhythms at each of these timescales require precise processing, and a high degree of integration across timescales is necessary for successful communication. Below, we present scientific evidence for connections between rhythms at various timescales and particular aspects of learning.
A recent summary of work in the field systematises the rationale for expecting music training to benefit language: it points to the structural similarities between music and speech, while emphasising the intensive, repetitive, and emotionally and socially engaging nature of music practice, all factors thought to lead to greater brain plasticity (Patel, 2014[6]). That is, the intense, repetitive, rewarding attention to nuances of sound involved in music training may place greater demands on brain circuits, thus improving the ability to process sound in general, including the perception of language.
Recent studies on music and language find that more practice and an early beginning to music training (around five years of age) are associated with better analysis of speech sounds in the brain (Musacchia, Strait and Kraus, 2008[7]). With better analysis come faster and sharper brain responses to speech sounds as well (Musacchia and Schroeder, 2009[8]). This means that musicians who start playing early (and continue to practise) have a remarkably robust internal model of the speech sounds they hear. The effects reported in these studies have been replicated and extended across dozens of investigations, at several ages and in many countries (Gordon, Fehd and McCandliss, 2015[9]; Patel, 2014[6]).
A large body of research investigates connections between music, language and literacy. School-age children who participate in music classes, either as part of the standard curriculum or after school hours, show better standardised reading scores (Tierney and Kraus, 2013[10]). The reading benefit is present even among students who had no previous music experience, although the effect is larger in those who participate in music for longer. While the link between oral language and literacy has been exploited through phonics instruction since the turn of the century, these new data suggest that music training provides an additional, or alternative, route to strengthening the development of sound-to-letter correspondence. This may be especially valuable for the large and growing number of children in our school systems who face language-learning difficulties.
In music, at slower timescales, isochronous beats can be organised into units, i.e. groupings of strong and weak beats. These units establish music’s metrical structure. For example, marching music usually follows a duple metre (2 beats per unit: strong-weak), whereas a waltz follows a triple metre (3 beats per unit: strong-weak-weak). Recent neuro-imaging studies have demonstrated that, in addition to the auditory system, the motor system – including the motor cortex, the basal ganglia and the cerebellum – is involved in tracking isochronous beats (Grahn, 2012[11]). The processing of metrical structure has also been described through event-related and oscillatory responses in the brain (Iversen, Repp and Patel, 2009[12]; Zhao et al., 2017[13]).
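To make the idea of metre as repeated groupings of strong and weak beats concrete, the short Python sketch below encodes duple and triple metre as accent patterns. The S/w notation and the function are purely illustrative conventions of ours, not a formalism drawn from the cited studies.

```python
# Illustrative sketch: metre as repeated groupings of strong (S) and weak (w)
# isochronous beats. The notation and function are illustrative only.

def metre_pattern(beats_per_unit, n_units):
    """Return an accent sequence, e.g. duple metre: S w S w ..."""
    unit = ["S"] + ["w"] * (beats_per_unit - 1)
    return unit * n_units

print("duple (march): ", " ".join(metre_pattern(2, 4)))  # S w S w S w S w
print("triple (waltz):", " ".join(metre_pattern(3, 4)))  # S w w S w w S w w S w w
```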
The timescale over which metre is perceived is similar to that of prosody, in which syllables are structured into larger groupings with strong and weak accents. Speech remains intelligible when it is intentionally degraded at faster timescales, provided that the prosodic rhythm is preserved, supporting the idea that prosody plays an important role in speech perception (Shannon et al., 1995[14]).
Learning prosodic and metrical structure begins very early in life. The acoustic information relevant to the prosodic/metrical timescale is available to infants even before birth. It has been demonstrated that newborns can detect violations of musical metre (Winkler et al., 2009[15]) and can also discriminate languages that have different rhythmic characteristics (Nazzi, Bertoncini and Mehler, 1998[16]).
In a study that directly connects prosodic structure in speech with musical metre, Zhao and Kuhl (2016[17]) demonstrated that a one-month music intervention at 9 months of age enhanced not only infants’ neural processing of musical metre, but also their processing of syllable structure in speech. The effect was interpreted as an enhancement of the infants’ ability to extract temporal structure and to predict future events in time, a skill affecting both music and speech processing. Notably, this skill might also be a fundamental building block of the ability to temporally integrate into a group setting.
Despite their parallel structures and many similarities, an important distinction between music and language exists in the tendency to synchronise musical behaviour to a periodic rhythm, a “beat”. From tapping a foot to playing in an ensemble, this tendency to synchronise to music is ubiquitous in human culture, and indicative of special brain circuits not widely shared in the animal kingdom (Iversen, 2016[18]). The phenomenon of beat perception and synchronisation is noteworthy because it involves the interplay of the outside world and an internal rhythmic, predictive process, which creates expectations about future events that may guide attention.
Although there are various forms of attention, one view describes attention as a dynamic process that fluctuates over short periods of time. Dynamic Attending Theory (DAT) (Jones and Boltz, 1989[1]) proposes that greater attention is allocated to points in time at which future events are expected, thus maximising processing at moments of greater importance, while potentially reducing attention at times when informative cues are unlikely to occur. In support of DAT, multiple studies have found facilitated processing of events that occur at expected times (Arnal and Giraud, 2012[19]). This temporal facilitation even extends to the processing of non-auditory events. For example, visual image processing was facilitated when images occurred on (vs. off) the beat of an accompanying auditory pattern (Escoffier, Sheng and Schirmer, 2010[20]).
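As a way of making the core intuition of DAT concrete, the minimal Python sketch below models attention as a periodic pulse that peaks at expected beat times, so that events landing on the beat receive more processing “gain” than events landing between beats. The functional form, parameter names and values are our own illustrative assumptions, not a model taken from Jones and Boltz (1989) or the other studies cited here.

```python
import numpy as np

def attentional_gain(t, period=0.5, sharpness=8.0):
    """Periodic attentional pulse: highest exactly on the expected beat,
    lowest midway between beats. `sharpness` controls how narrowly attention
    is focused around the expected time. All values are illustrative."""
    phase = 2 * np.pi * (t / period)                  # position within the beat cycle
    return np.exp(sharpness * (np.cos(phase) - 1))    # 1.0 on the beat, near 0 off the beat

# Events presented on vs. off the beat of a 120 bpm (0.5 s period) stream
on_beat_times = np.array([1.0, 1.5, 2.0])             # multiples of the period
off_beat_times = np.array([1.25, 1.75, 2.25])         # midway between beats

print("gain on the beat :", attentional_gain(on_beat_times).round(3))
print("gain off the beat:", attentional_gain(off_beat_times).round(3))
# Higher gain at expected times illustrates why processing of on-beat events
# (auditory or visual) tends to be facilitated.
```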
A connection between the ability to synchronise to a beat – a capacity based on the generation of temporal expectations – and the ability to focus or maintain attention has been borne out in several recent studies. Tierney and Kraus (2013[10]) found a correlation between the ability to tap in synchrony with a metronome beat and the ability to focus attention. While tapping with a metronome clearly relates to rhythmic ability, synchronising rhythmically with other individuals – a fundamental aspect of many music cultures worldwide – is a somewhat different task, involving interaction on many levels. Khalil et al. (2013[21]) found a correlation between the ability to synchronise in a group music setting and the ability to maintain attention in a classroom environment. The ability to generate and act on appropriate temporal expectations of a group dynamic may thus be relevant to the ability to function effectively in a classroom setting. Group rhythmic synchrony has been found to increase pro-social behaviour in infants (Cirelli, Einarson and Trainor, 2014[22]), young children (Kirschner and Tomasello, 2010[23]) and adults (Hove, 2009). Significantly, group rhythmic synchrony has also been found to enhance the ability to co-operate (Valdesolo, Ouyang and DeSteno, 2010[24]). These studies point towards an important aspect of the learning environment: the group dynamic. Classroom learning does not depend simply upon each individual’s cognitive skills and aptitudes; rather, it can also be enhanced, or degraded, by the ability of the entire classroom to co-ordinate and co-operate. Integrating music into the classroom, then, may benefit not only each individual but also the group as a whole.
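To illustrate what “ability to tap in synchrony” can mean in practice, the sketch below computes a generic phase-locking index from tap times relative to a metronome: each tap is converted to a phase within the beat cycle, and the length of the mean resultant vector is taken (0 for random timing, 1 for perfectly consistent timing). This is a standard circular-statistics measure offered purely for illustration; it is not necessarily the analysis used by Tierney and Kraus (2013) or Khalil et al. (2013), and the tap data are invented.

```python
import numpy as np

def synchrony_index(tap_times, beat_period):
    """Phase-locking of taps to an isochronous beat with the given period (seconds)."""
    phases = 2 * np.pi * (np.asarray(tap_times) % beat_period) / beat_period
    return float(np.abs(np.mean(np.exp(1j * phases))))

period = 0.6  # 100 bpm metronome (illustrative value)
steady_taps = np.arange(10) * period + 0.02                     # small, consistent offset
variable_taps = np.arange(10) * period + np.random.uniform(-0.2, 0.2, 10)

print("consistent tapper:", round(synchrony_index(steady_taps, period), 2))   # near 1.0
print("variable tapper  :", round(synchrony_index(variable_taps, period), 2)) # noticeably lower
```

A consistent tapper yields an index near 1, whereas highly variable tapping pulls the index towards 0, giving a simple scalar that could, in principle, be related to measures of attention.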
Further, while creativity has often been associated with “defocused” or “diffuse” attentional states, recent research has shown that creativity is more closely related to the capacity to modulate attention, moving dynamically through various states during the creative process (Vartanian, Martindale and Matthews, 2009[25]). In this way, creative problem solving – and creativity in general – may be related to a form of dynamic attending.
Although music is commonly referred to as a “universal language”, and some broadly universal features, such as the use of periodic rhythmic structures, do exist across musics, the consensus in the field of ethnomusicology is that music is not universal (Campbell, 1997[26]). The affect and meaning of any given music are largely culture specific. Even the processing of musical structures such as metre becomes culture specific very early in life, as experience shapes the abilities of the underlying universal neural mechanisms. For example, in a series of studies, Hannon and Trehub (2005[27]) demonstrated that 6-month-old infants growing up in North America could detect violations both of the simple metrical structure (2-beat groupings) common in Western European music and of a complex metre (7-beat groupings) more common in Eastern European music. However, by the age of 12 months, infants in North America could no longer detect metrical violations in the less familiar Eastern European complex metre. This demonstrates that even very early in life, human beings adapt their predictive capabilities to match the temporal dynamics of the world around them, facilitating communication between like-cultured individuals. Yet only two weeks of passive listening to Balkan music at home at this age (20 minutes per day) reversed this narrowing. This supports the intriguing possibility that the practice of music, when implemented in a multi-cultural setting, may be leveraged as a way to facilitate communication between individuals of different cultures.
The fact that music is not a universal language, and so requires an awareness of socio-cultural context in its implementation that other subjects such as mathematics do not require to the same extent, remains a significant challenge in music education worldwide. Many nations continue – either by maintaining education systems developed under colonialism or by having adopted Western education systems – to feature Western music in the classroom, despite the fact that students experience completely different and equally rich music outside the classroom (Bradley, 2012[28]). In Ghana, for example, Western music continues to dominate music curricula, particularly in higher education. This is particularly striking given that Ghana’s traditional music is significant worldwide, having influenced the development of popular music, and in its most traditional form is taught at many universities around the world (Otchere, 2015[29]).
When the implementation of music is attuned to its socio-cultural environment, many direct and indirect benefits may arise. For example, in the Nyanza region of Kenya, traditional songs have been woven into the school curriculum to enhance learning by connecting school learning with material learned outside of school (Akuno, 2015[30]). Cultural relevance is proposed to be of key importance for music in urban classrooms, given the increasingly heterogeneous cultural backgrounds of students in such schools (Doyle, 2014[31]). Beyond direct music learning, it is important to note that music is not a stand-alone activity. Rather, it is ubiquitous in human behaviour and activity, and so may be integrated into curricula in a variety of ways outside of formal music classes, yielding surprising benefits.
Throughout most of human history, music has been an integral part of scientific and academic development. The connections between music and science are ubiquitous, and several scientific concepts have been advanced through their relationship with music (Pesic, 2014[32]). Given these relationships, teenagers’ intrinsic interest in music (North et al., 2000) offers a relevant avenue for students to explore and engage in STEM fields. One example of this educational approach is “Listening to Waves”, a programme developed by Victor Minces with Alexander Khalil that engages students in the physics of sound waves through music (www.listeningtowaves.com). Through this programme, interest in and intuitive knowledge of music motivates students to approach challenging physics concepts. Not only does this approach engage students with physics, it also affords opportunities for STEM engagement to students of diverse backgrounds (Minces et al., 2016[5]). Ultimately, students also use their newfound knowledge of the physics of sound to create instruments and sound installations, allowing for cultural expression through scientific and technological practices.
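As an example of the kind of physics content such a programme can connect to music, the Python sketch below synthesises a tone from a fundamental frequency plus a few harmonics and shows that doubling the fundamental raises the pitch by an octave. It is only an illustration of the underlying physics of sound; it is not code from, and does not reflect the design of, the Listening to Waves software, and the chosen frequencies and amplitudes are arbitrary.

```python
import numpy as np

sample_rate = 44100                       # samples per second
duration = 1.0                            # seconds
t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)

fundamental = 220.0                       # A3, in hertz (illustrative choice)
harmonics = [1, 2, 3, 4]                  # integer multiples of the fundamental
amplitudes = [1.0, 0.5, 0.33, 0.25]       # weaker higher harmonics (assumed values)

# Sum the sine components to synthesise a richer, instrument-like tone
tone = sum(a * np.sin(2 * np.pi * fundamental * h * t)
           for a, h in zip(amplitudes, harmonics))
tone /= np.max(np.abs(tone))              # normalise to the range [-1, 1]

# Doubling the fundamental frequency raises the pitch by one octave
octave_up = np.sin(2 * np.pi * 2 * fundamental * t)
```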
The practice of music has been an intrinsic component of education, both formal and informal, throughout human history and across cultures. While the direct benefits of the study of music may at first appear to be limited to the development of musical skills, recent research identifies ways in which music may be used in the classroom to enhance learning in the humanities and in STEM, improve classroom culture, and even strengthen cognitive skills that support learning in interpersonal settings. Through music practice, students may strengthen their ability to learn both individually and in a group. Therefore, we submit that music can play an important role in curricula across all ages and levels.
While it is difficult to generalise from a limited number of studies to broad policy, whether in music research or in research on learning, some general guidelines might be useful.
For example, appropriate implementation of music learning in schools would be inclusive, and not exclusively centred on elective ensembles focused on performance. This is conceived in much the same way as physical education (PE), which is meant to involve all students regardless of the existence of specialised after-school sports activities. A great deal of research on music has been conducted with small sample sizes, in a limited number of schools, and/or in laboratory settings. We recommend developing research methodologies that integrate the collection of data into the fabric of music learning programmes themselves, allowing researchers to record music classes and track students’ musical and scholastic progress. In this way, the long-term impact of music education may become a research priority supporting the development of evidence-based practices.
A key element, both for the implementation of music programmes and for conducting research, is awareness and integration of socio-cultural context. Perception of temporal patterns in the world, and all of the cognitive processes involved, is refined through interpersonal interaction within a given socio-cultural context. The engagement and rewards of these interactions are social in nature. Appropriate implementation of music learning in school curricula should not only take socio-cultural context(s) into account but also leverage them, thereby improving school culture and even the value of education. In an increasingly interconnected world, scientists who study cultural phenomena such as music must focus their attention not only on the cognitive mechanisms that may underlie these phenomena but also on how these mechanisms, and human perception itself, are mediated by culture.
[30] Akuno, E. (2015), “The singing teacher’s role in educating children’s abilities, sensibilities and sensitivities”, British Journal of Music Education, Vol. 32/03, pp. 299-313, http://dx.doi.org/10.1017/S0265051715000364.
[19] Arnal, L. and A. Giraud (2012), “Cortical oscillations and sensory predictions”, Trends in Cognitive Sciences, Vol. 16/7, pp. 390-398, http://dx.doi.org/10.1016/j.tics.2012.05.003.
[28] Bradley, D. (2012), Good for What, Good for Whom?: Decolonizing Music Education Philosophies, Oxford University Press, http://dx.doi.org/10.1093/oxfordhb/9780195394733.013.0022.
[26] Campbell, P. (1997), “Music, the universal language: Fact or fallacy?”, International Journal of Music Education, Vol. os-29/1, pp. 32-39, http://dx.doi.org/10.1177/025576149702900105.
[22] Cirelli, L., K. Einarson and L. Trainor (2014), “Interpersonal synchrony increases prosocial behavior in infants”, Developmental Science, Vol. 17/6, pp. 1003-1011, http://www.ncbi.nlm.nih.gov/pubmed/25513669.
[31] Doyle, J. (2014), “Cultural relevance in urban music education”, Update: Applications of Research in Music Education, Vol. 32/2, pp. 44-51, http://dx.doi.org/10.1177/8755123314521037.
[20] Escoffier, N., D. Sheng and A. Schirmer (2010), “Unattended musical beats enhance visual processing”, Acta Psychologica, Vol. 135/1, pp. 12-16, http://dx.doi.org/10.1016/j.actpsy.2010.04.005.
[4] ESSA (2015), Every Student Succeeds Act (ESSA) of 2015, S.1177 – 114th Congress, http://www.congress.gov/bill/114th-congress/senate-bill/1177/text (accessed on 14 November 2018).
[9] Gordon, R., H. Fehd and B. McCandliss (2015), “Does music training enhance literacy skills? A meta-analysis”, Frontiers in Psychology, Vol. 6, p. 1777, http://dx.doi.org/10.3389/fpsyg.2015.01777.
[11] Grahn, J. (2012), “See what I hear? Beat perception in auditory and visual rhythms”, Experimental Brain Research, Vol. 220/1, pp. 51-61.
[27] Hannon, E. and S. Trehub (2005), “Metrical categories in infancy and adulthood”, Psychological Science, Vol. 16/1, pp. 48-55, http://dx.doi.org/10.1111/j.0956-7976.2005.00779.x.
[18] Iversen, J. (2016), “In the beginning was the beat: Evolutionary origins of musical rhythm in humans”, in R. Hartenberger (ed.), The Cambridge Companion to Percussion, Cambridge University Press, https://www.researchgate.net/publication/311793780_In_the_Beginning_Was_the_Beat_Evolutionary_Origins_of_Musical_Rhythm_in_Humans.
[12] Iversen, J., B. Repp and A. Patel (2009), “Top-down control of rhythm perception modulates early auditory responses”, Annals of the New York Academy of Sciences, Vol. 1169/1, pp. 58-73, http://dx.doi.org/10.1111/j.1749-6632.2009.04579.x.
[1] Jones, M. and M. Boltz (1989), “Dynamic attending and responses to time”, Psychological Review, Vol. 96/3, pp. 459-91, http://www.ncbi.nlm.nih.gov/pubmed/2756068.
[21] Khalil, A. et al. (2013), “Group rhythmic synchrony and attention in children”, Frontiers in Psychology, Vol. 4, p. 564, http://dx.doi.org/10.3389/fpsyg.2013.00564.
[23] Kirschner, S. and M. Tomasello (2010), “Joint music making promotes prosocial behavior in 4-year-old children”, Evolution and Human Behavior, Vol. 31/5, pp. 354-364, http://dx.doi.org/10.1016/J.EVOLHUMBEHAV.2010.04.004.
[5] Minces, V. et al. (2016), “Listening to waves: Using computer tools to learn science through making music”, in EDULEARN16 Proceedings, https://doi.org/10.21125/edulearn.2016.1919.
[8] Musacchia, G. and C. Schroeder (2009), “Neuronal mechanisms, response dynamics and perceptual functions of multisensory interactions in auditory cortex”, Hearing Research, Vol. 258/1-2, pp. 72-79, http://dx.doi.org/10.1016/j.heares.2009.06.018.
[7] Musacchia, G., D. Strait and N. Kraus (2008), “Relationships between behavior, brainstem and cortical encoding of seen and heard speech in musicians and non-musicians”, Hearing Research, Vol. 241/1-2, pp. 34-42, http://dx.doi.org/10.1016/j.heares.2008.04.013.
[16] Nazzi, T., J. Bertoncini and J. Mehler (1998), “Language discrimination by newborns: Toward an understanding of the role of rhythm”, Journal of Experimental Psychology: Human Perception and Performance, Vol. 24/3, pp. 756-766, http://www.ncbi.nlm.nih.gov/pubmed/9627414.
[29] Otchere, E. (2015), “Music teaching and the process of enculturation: A cultural dilemma”, British Journal of Music Education, Vol. 32/03, pp. 291-297, http://dx.doi.org/10.1017/S0265051715000352.
[2] Park, S. (2015), “Music as a necessary means of moral education: A case study from reconstruction of Confucian culture in Joseon Korea”, International Communication of Chinese Culture, Vol. 2/2, pp. 123-136, http://dx.doi.org/10.1007/s40636-015-0021-2.
[6] Patel, A. (2014), “Can nonlinguistic musical training change the way the brain processes speech? The expanded OPERA hypothesis”, Hearing Research, Vol. 308, pp. 98-108, http://dx.doi.org/10.1016/j.heares.2013.08.011.
[32] Pesic, P. (2014), Music and the Making of Modern Science, MIT Press, Cambridge, MA, https://mitpress.mit.edu/books/music-and-making-modern-science (accessed on 14 November 2018).
[14] Shannon, R. et al. (1995), “Speech recognition with primarily temporal cues”, Science, Vol. 270/5234, pp. 303-304, http://www.ncbi.nlm.nih.gov/pubmed/7569981.
[3] Stamou, L. (2002), “Plato and Aristotle on music and music education: Lessons from ancient Greece”, International Journal of Music Education, Vol. 39/1, pp. 3-16, http://dx.doi.org/10.1177/025576140203900102.
[10] Tierney, A. and N. Kraus (2013), “Music training for the development of reading skills”, in Progress in Brain Research, http://dx.doi.org/10.1016/B978-0-444-63327-9.00008-4.
[24] Valdesolo, P., J. Ouyang and D. DeSteno (2010), “The rhythm of joint action: Synchrony promotes cooperative ability”, Journal of Experimental Social Psychology, Vol. 46/4, pp. 693-695, http://dx.doi.org/10.1016/J.JESP.2010.03.004.
[25] Vartanian, O., C. Martindale and J. Matthews (2009), “Divergent thinking ability is related to faster relatedness judgments”, Psychology of Aesthetics, Creativity, and the Arts, Vol. 3/2, pp. 99-103, http://dx.doi.org/10.1037/a0013106.
[15] Winkler, I. et al. (2009), “Newborn infants detect the beat in music”, Proceedings of the National Academy of Sciences, http://dx.doi.org/10.1073/pnas.0809035106.
[17] Zhao, T. and P. Kuhl (2016), “Musical intervention enhances infants’ neural processing of temporal structure in music and speech”, Proceedings of the National Academy of Sciences, Vol. 113/19, pp. 5212-5217, http://dx.doi.org/10.1073/pnas.1603984113.
[13] Zhao, T. et al. (2017), “Neural processing of musical meter in musicians and non-musicians”, Neuropsychologia, Vol. 106, pp. 289-297, http://dx.doi.org/10.1016/j.neuropsychologia.2017.10.007.