Eliza L. Congdon
Miriam A. Novack
Elizabeth Wakefield
Susan Goldin-Meadow
The actions we produce and observe every day can help us learn new ideas or change the way we think. One type of action – the gestures we produce when we talk – has been shown to support learning when incorporated into instructional contexts. Unlike other forms of action, gestures are not used to physically change the world. Instead, they are movements of the hand that can represent and manipulate ideas. Here we review literature demonstrating the powerful effects that gesture can have on learning, and we discuss findings that explore the mechanisms by which these effects occur. Specifically, we explore whether gesture facilitates learning through its capacity to be integrated with speech, its ability to engage the motor system and its role as a spatial tool. Finally, we discuss implications of these scientific findings for educational practice and policy.
To understand how to optimise learning, researchers often focus on the language of learning – what teachers or students are saying in an instructional setting. But spoken language is only part of the story. Indeed, when we communicate with one another, we spontaneously and prolifically produce hand movements, or gestures, along with spoken language. These gestures are produced by both instructors and learners, and are ubiquitous around the world. And, these gestures have an impact: they can shape the outcome of learning situations. In this chapter, we review how a learner’s own gestures reflect thinking and learning processes not apparent in their spoken explanations, and we show that modifying either a learner’s or an instructor’s gesture production can causally improve learning outcomes. We end by proposing some reasons why gesture is so powerful, and we discuss implications for policy and practice, including specific ways to encourage active learning by increasing children’s gesture production in a classroom setting. The research presented throughout this chapter underscores the idea that gesture complements spoken instruction and promotes learning across age groups, academic content areas and cultures.
Gestures have been categorised by scholars into several broad, descriptive classes (Kendon, 2004[1]). Here we define gestures as meaningful hand movements that are produced off-objects (i.e. in the air) and accompany speech. We focus on the gestures that have been most studied in instructional contexts – deictic gestures, which identify objects or locations in the world (e.g. pointing to the edge of a triangle drawn on the chalkboard), iconic gestures, which convey information through the similarity between their form and their referent (e.g. using one’s hands as the edges of a triangle to talk about the angles of the triangle), and metaphoric gestures, which represent ideas through a metaphoric relation between their form and their meaning (e.g. making one’s hands into a triangle shape to talk about the components of a mediation analysis).
Gestures differ from other types of movement that we see in learning or instructional contexts, such as classroom demonstrations, because they do not create lasting change in the external world. For example, a teacher might gesture a rotating motion next to a model of a molecule to indicate that the molecule needs to be mentally rotated to correctly think about it. This instructor is representing the idea of rotation without physically rotating the molecule. Gestures, as representational actions, are construed by both teachers and students as being categorically different from other kinds of object-directed actions (Novack, Wakefield and Goldin-Meadow, 2016[2]; Wakefield, Novack and Goldin-Meadow, 2018[3]), and this construal has crucial implications for thinking and learning. For example, in one study, children who learned a novel maths concept through gesturing towards objects were able to generalise that concept to novel problem types better than children who learned through directly acting on the objects (Novack et al., 2014[4]). This type of research suggests that it is important to consider gesture as distinct from other types of actions, as gestures can have unique effects on learning outcomes.
Gesture is produced spontaneously by both teachers and students across a wide variety of academic domains, including mathematics (Goldin-Meadow and Singer, 2003[5]), geoscience (Atit, Shipley and Tikoff, 2014[6]), conservation of mass problems (Church and Goldin-Meadow, 1986[7]), chemistry (Stieff, 2011[8]) and physics (Roth, 2000[9]). The prevalence of gesture in instructional contexts has led researchers to ask whether these spontaneous hand movements, particularly those produced by learners, can give us insight into learning processes.
As it turns out, the gestures that students produce while thinking through difficult problems do convey important information about their state of conceptual understanding – information that is often absent from their verbal explanations. For example, a toddler still in the process of learning her numbers says the wrong number when asked how many buttons are shown (e.g. she says “two” when the correct answer is three). Yet, that same child can use her hands to show the correct number of buttons (i.e. holding up three fingers) (Gunderson et al., 2015[10]). Similarly, a third grader solving a missing addend equivalence problem (e.g. 4+6+9 =__+9) says that she solved the problem by adding up all of the numbers (e.g. “I added the 4, 6, 9, and 9, and got 28”, an incorrect solution), while producing gestures that highlight the two sides of the equation, a subtle demonstration that she is beginning to notice that the equation has two parts, a step towards understanding equivalence (Perry, Breckinridge Church and Goldin-Meadow, 1988[11]). Children can thus gesture about ideas that they cannot yet verbalise.
When a learner expresses different (but relevant) information in gesture than in speech, we call this a speech-gesture mismatch (Church and Goldin-Meadow, 1986[7]). Importantly, learners who produce these mismatches on a task are more likely to learn from instruction on that task than learners who do not (Alibali and Goldin-Meadow, 1993[12]). Beyond the domain of mathematics, speech-gesture mismatches have been documented among toddlers on the cusp of producing two-word utterances (Iverson and Goldin-Meadow, 2005[13]), 5- to 7-year-olds learning about conservation of mass (Church and Goldin-Meadow, 1986[7]) and even 9-year-olds discussing moral reasoning dilemmas (Beaudoin-Ryan and Goldin-Meadow, 2014[14]). Across all of these instances, the learner’s spontaneous gestures on a task, and the manner in which those gestures relate to the spoken explanation, serve as a marker that a student is “ready to learn” the task.
Although gesture may seem like a subtle cue, all of us, not just trained laboratory researchers, are sensitive to its presence. Without any specific gesture training, both college undergraduates and experienced elementary school teachers can glean information from children’s gestures. After watching a video of a child explaining his reasoning on a maths problem, adults often mentioned ideas expressed uniquely in gesture (and not in speech) when describing the child’s reasoning (Alibali, Flevares and Goldin-Meadow, 1997[15]). Even more importantly, when teachers were asked to instruct children in solving mathematical equivalence problems after watching them explain their initial (incorrect) reasoning, teachers adapted their instruction in reaction to children’s verbal and gestured explanations. Children who produced speech-gesture mismatches, signalling that they were ready to learn the task, were given more extensive instruction, containing more different types of strategies, than children whose gestures matched their speech (Goldin-Meadow and Singer, 2003[5]).
Spontaneously produced gesture can help experimenters and instructors identify students on the brink of conceptual change. This fascinating observation has led researchers to ask whether gesture simply reflects cognitive change, or whether it also plays an active role in creating or causing that change. In this section, we review studies in which gesture is experimentally manipulated in instructional settings, revealing that gesture not only reflects what learners think, but also affects the learning process.
There is some research showing that simply encouraging learners to move their hands when explaining a problem (i.e. increasing spontaneous gesture production) can lead to improved learning outcomes. In a maths instruction paradigm, children who were encouraged to gesture while explaining their solutions produced a significantly wider variety of problem-solving strategies during their explanations than children who were not told anything about gesture, or who were told specifically not to move their hands (Broaders et al., 2007[16]). Those same children then learned more from subsequent instruction than the children in the other two groups. Similarly, in another study with fifth grade students, children who were told to gesture during a moral dilemma reasoning task showed a willingness to consider issues from multiple perspectives, more so than their peers who were told not to gesture or were not given any instructions about hand movements (Beaudoin-Ryan and Goldin-Meadow, 2014[14]). Thus, experimentally increasing children’s gesture production increases the likelihood that they will explore undiscovered implicit ideas via their own gestures, a process that then boosts their ability to gain further insights from instruction.
Researchers have also been able to improve learning outcomes by asking children to produce specific gestures. For example, asking children to produce a gesture highlighting the two sides of an equivalence problem helped them retain what they had learned about mathematical equivalence four weeks later (Cook, Mitchell and Goldin-Meadow, 2008[17]). In another example, children were more likely to learn how to correctly produce palindromes, words or phrases that read the same forwards and backwards (e.g. “kayak”), if they were taught to produce symmetry gestures along with a speech strategy than if they were taught the symmetry strategy only in speech (Wakefield and James, 2015[18]). The effects of this type of “trained” gesture instruction extend even to very young children. Eighteen-month-old toddlers who were taught to point (i.e. taught to put their finger on a picture of an object as it was verbally labelled) learned to point more in naturalistic interactions with their parents, which also led to greater increases in their spoken vocabulary, compared to children who were not taught to point (LeBarton, Goldin-Meadow and Raudenbush, 2015[19]).
In addition to studies that manipulate child-produced gestures, studies that manipulate instructor-produced gestures show that these, too, can positively influence learning. We review examples of these studies in more detail in the next section, as we address potential mechanisms through which gestures – both produced and observed – support learning.
As we have established, gesture is a powerful tool for reflecting and promoting cognitive change, whether it is spontaneously produced or intentionally incorporated into instruction. In this section, we explore some of the specific mechanisms that may underlie gesture’s wide-ranging effects on learning. Although these mechanisms are not yet fully understood, we believe that elucidating how gesture promotes learning will help us predict the situations in which gesture will be most effective, and create policy-level changes that reflect these findings. We review three potential mechanisms by which gesture may facilitate learning: its ability to spatialise information, its capacity to be integrated with speech and its ability to engage the motor system in the learning process.
One way in which gesture can be used as a spatial tool is by helping to direct a learner’s visual attention to a certain location in the spatial environment. For example, children as young as 4.5 months will shift their visual attention following a dynamic, deictic or pointing gesture (Rohlfing, Longo and Bertenthal, 2012[20]). Adult learners also pay attention to gestures, particularly those that pause in space to emphasise the relevance of a particular spatial location (Gullberg and Holmqvist, 2006[21]). In a chaotic classroom setting with many competing sources of potential information, this ability to direct or capture students’ visual attention has clear consequences for learning. For example, gesturing towards the referent for a new word can facilitate learning a label for that object (Rader and Zukow-Goldring, 2012[22]), tracing an outline of two symmetrical objects highlights the relation between the two objects and facilitates learning the concept of symmetry (Valenzeno, Alibali and Klatzky, 2003[23]) and gesturing towards two sides of a mathematical equivalence problem can clarify the role of the equals sign (Cook, Duffy and Fenn, 2013[24]).
Gesture can also represent spatial information or spatial relations through its form or motion. For example, when explaining mathematical equivalence, instructors often make a v-point gesture to the first two addends of the equation to represent the idea that those two addends should be combined into a single quantity. Similarly, children reasoning through a mental transformation task may use their hands to represent the physical features of the to-be-rotated object, which facilitates their ability to solve these mental transformation problems (Ehrlich, Levine and Goldin-Meadow, 2006[25]). Finally, gestures can be used to represent objects when objects are altogether absent from the environment. Ping and Goldin-Meadow (2008[26]) taught children the concept of conservation, the idea that changing the shape of a substance does not change its mass, through speech alone, speech and gesture in the presence of objects, or speech and gesture in the absence of objects. Children learned more from instruction if presented with speech and gesture, regardless of whether the objects were present or absent. The gestures contained crucial spatial information that had the potential to help the children find meaning in the spoken instruction whether or not the objects themselves were present.
Gestures are predominantly produced with spoken language, and it has been argued that these two streams of communication emerge from a single integrated system (Arnheim and McNeill, 1994[27]). Neuro-imaging data support this claim, finding that processing speech and gesture activates overlapping neural regions (Holle et al., 2010[28]). Furthermore, because speech and gesture occur in different modalities (oral and manual), they have the capacity to provide different, but complementary, information at the same time. Singer and Goldin-Meadow (2005[29]) gave children instruction on a mathematical equivalence task containing either one or two problem-solving strategies, and varied whether these strategies were presented entirely through speech, through speech with ‘matching’ gesture (expressing the same information as speech), or through speech and ‘mismatching’ gesture (expressing two correct, but different strategies in speech and gesture). Children performed best on a post-test if they learned through gesture and speech that expressed different information, suggesting that the integration of the two complementary ideas provided the most comprehensive instruction. In a recent study, Congdon et al. (2017[30]) expanded upon these findings by showing that the temporal simultaneity of the different pieces of information in the mismatching speech and gesture instruction was crucial for this integration effect. Future research will investigate whether simultaneity in speech and gesture production is as important, or perhaps more important, when it is being produced by the learner.
An obvious but often overlooked property of gesture is the fact that it is a type of action. Because gesture engages the motor system, it changes the way information learned through gesture is processed. For example, Alibali et al. (2011[31]) either allowed or prohibited the use of gesture when individuals solved a spatial gear-task. Those who were allowed to gesture persisted in using a perceptual-motor based strategy, whereas those who were not allowed to gesture used an abstract reasoning strategy. Neuro-imaging evidence has also demonstrated that the motor system is deeply involved in processing information learned through gesture. For example, after producing gestures while learning new information, such as musical melodies (Wakefield and James, 2011[32]) or new vocabulary words (Macedonia, Müller and Friederici, 2011[33]), motor areas are reactivated when these stimuli are subsequently encountered. Although it is obvious that producing gesture engages the motor system in the moment, these surprising neuro-imaging results suggest that motor system involvement continues after the participant has ceased gesture production. Gesture can provide learners with a robust representation of newly learned ideas that engages multiple neural systems, and this enrichment may, in part, underlie gesture’s role in improving memory and recall after instruction. Furthermore, when watching co-speech gesture, adults show activation in motor planning regions (Wakefield, James and James, 2013[34]), suggesting that learners themselves need not be producing the gesture to meaningfully engage their motor system.
In sum, gesture’s ability to direct visual attention and highlight spatial relations, integrate with speech, and engage the motor system each contributes to the role gesture plays in learning. Gesture is also produced spontaneously, can be manipulated experimentally, is a special kind of representational action, and is ever-present in educational contexts. So, which of these features is most important in making gesture an effective tool for teaching and learning? An open possibility, and the one we favour here, is that the real power of gesture does not come from any single property, but rather, from a combination of all of them. In other words, gesture may not have a unique, single characteristic that makes it good for learning. Instead, gesture may be a ‘perfect storm’ of properties that allows it to have powerful and wide-ranging effects on cognition.
The goal of this chapter has been to review the ways in which gestures can both reflect and change thinking during the learning process. We have explored some of the potential mechanisms driving these effects, and discussed instances in which gesture may be particularly powerful for promoting cognitive change. There is still much research to be done. Nevertheless, the work we have reviewed here provides solid evidence for incorporating gesture into the classroom, and carries with it implications for education practices.
One clear finding is that student-produced gesture supports learning. Yet, we know that gesture is very unlikely to occur in the absence of speech. That is, if students are not given opportunities to reason aloud through difficult problems or explain their solutions to others, they are unlikely to produce gestures that have the potential to unearth their own implicit ideas. Importantly, students should be given opportunities to produce gesture even before they have mastered a concept, as studies have shown that explanations need not be correct to be useful in promoting cognitive change. A simple way to create a gesture-friendly culture in the classroom is to ensure that students are regularly encouraged to talk aloud with one another and with the instructor in situations that are conducive to gesture production (i.e. with their hands free to move around).
Although it is important to give the learner opportunities to gesture, we also know that teacher-produced gesture can be a very effective instructional tool. In fact, cross-cultural research looking at gesture production in Hong Kong, Japan and the United States shows that teachers in the two higher achieving countries, Hong Kong and Japan, spontaneously use more instructional linking gestures, which help students make connections between various instructional elements, than teachers in the United States (Richland, 2015[35]). Studies like this suggest that teachers should generally be made aware of the power of gesture to direct and engage students’ visual attention, highlight spatial relations and convey important semantic information. Many educators are naturally prolific gesturers; however, being explicitly aware of the power of gesture might further encourage teachers to create learning situations that are conducive to natural gesture. An instructor who is informed of gesture’s potential can even integrate gesture in his or her lesson plans, designing their own hand movements to help students think through complex problems.
The integration of gesture into ever-evolving modern classrooms raises important questions about how this body-based learning tool interacts with technology-based learning interventions. Although there is much work to be done on this front, some recent research has found that integrating gesture into virtual learning environments improves student outcomes. For example, children who watched a maths lesson with a gesturing avatar, a computer-programmed instructor, learned more than the children who watched the same lesson from an avatar that did not gesture (Cook et al., 2017[36]). Other types of visuo-spatial learning technologies, like tablet-based virtual sketchpads, have affordances like touch-screen capabilities that could provide unique methods to promote student-produced gesture (see Chapter 8, by Forbus and Uttal, for examples of visuo-spatial learning technologies). Although these technologies have the potential for exciting revelations in gesture instruction, it should be noted that natural gesture is cost-effective, readily available at all times, and impervious to glitches, software updates and costly repairs.
Finally, to incorporate gestures into everyday classroom instruction, it may be necessary to directly affect policy. For example, standard teacher training and education could include information about the power of gesture, both as an index of a child’s current stage of conceptual development, and as a tool for promoting further conceptual understanding. Teachers could also be given concrete tips on how to create a gesture-friendly classroom environment. In some cases, this type of information may challenge traditional notions of what kinds of behaviour are considered conducive to student learning. Many theories of classroom management, now widely adopted (especially among discipline-oriented charter schools), consider students cognitively “ready to learn” when they are sitting quiet and still with their hands folded across their desks and their feet on the floor. The work we have summarised here provides powerful counter-evidence against this particular classroom culture – a quiet, motionless student is not necessarily the student who is most prepared to learn.
[15] Alibali, M., L. Flevares and S. Goldin-Meadow (1997), “Assessing knowledge conveyed in gesture: Do teachers have the upper hand?”, Journal of Educational Psychology, Vol. 89/1, pp. 183-193, http://dx.doi.org/10.1037/0022-0663.89.1.183.
[12] Alibali, M. and S. Goldin-Meadow (1993), “Gesture-speech mismatch and mechanisms of learning: What the hands reveal about a child’s state of mind”, Cognitive Psychology, Vol. 25/4, pp. 468-523, http://www.ncbi.nlm.nih.gov/pubmed/8243044 (accessed on 17 October 2018).
[31] Alibali, M. et al. (2011), “Spontaneous gestures influence strategy choices in problem solving”, Psychological Science, Vol. 22/9, pp. 1138-1144, http://dx.doi.org/10.1177/0956797611417722.
[27] Arnheim, R. and D. McNeill (1994), “Hand and mind: What gestures reveal about thought”, Leonardo, Vol. 27/4, p. 358, http://dx.doi.org/10.2307/1576015.
[6] Atit, K., T. Shipley and B. Tikoff (2014), “What do a geologist’s hands tell you? A framework for classifying spatial gestures in science education”, in K. Grossner and D.G. Janelle (eds.), Space in Mind: Concepts for Spatial Learning and Education, MIT Press, Cambridge, MA.
[14] Beaudoin-Ryan, L. and S. Goldin-Meadow (2014), “Teaching moral reasoning through gesture”, Developmental Science, Vol. 17/6, pp. 984-990, http://dx.doi.org/10.1111/desc.12180.
[16] Broaders, S. et al. (2007), “Making children gesture brings out implicit knowledge and leads to learning”, Journal of Experimental Psychology: General, Vol. 136/4, pp. 539-550, http://dx.doi.org/10.1037/0096-3445.136.4.539.
[7] Church, R. and S. Goldin-Meadow (1986), “The mismatch between gesture and speech as an index of transitional knowledge”, Cognition, Vol. 23/1, pp. 43-71, http://www.ncbi.nlm.nih.gov/pubmed/3742990 (accessed on 17 October 2018).
[30] Congdon, E. et al. (2017), “Better together: Simultaneous presentation of speech and gesture in math instruction supports generalization and retention”, Learning and Instruction, Vol. 50, pp. 65-74, http://dx.doi.org/10.1016/j.learninstruc.2017.03.005.
[24] Cook, S., R. Duffy and K. Fenn (2013), “Consolidation and transfer of learning after observing hand gesture”, Child Development, Vol. 84/6, pp. 1863-1871, http://dx.doi.org/10.1111/cdev.12097.
[36] Cook, S. et al. (2017), “Hand gesture and mathematics learning: Lessons from an avatar”, Cognitive Science, Vol. 41/2, pp. 518-535, http://dx.doi.org/10.1111/cogs.12344.
[17] Cook, S., Z. Mitchell and S. Goldin-Meadow (2008), “Gesturing makes learning last”, Cognition, Vol. 106/2, pp. 1047-1058, http://dx.doi.org/10.1016/j.cognition.2007.04.010.
[25] Ehrlich, S., S. Levine and S. Goldin-Meadow (2006), “The importance of gesture in children’s spatial reasoning”, Developmental Psychology, Vol. 42/6, pp. 1259-1268, http://dx.doi.org/10.1037/0012-1649.42.6.1259.
[5] Goldin-Meadow, S. and M. Singer (2003), “From children’s hands to adults’ ears: Gesture’s role in the learning process”, Developmental Psychology, Vol. 39/3, pp. 509-520, http://www.ncbi.nlm.nih.gov/pubmed/12760519 (accessed on 17 October 2018).
[21] Gullberg, M. and K. Holmqvist (2006), “What speakers do and what addressees look at: Visual attention to gestures in human interaction live and on video”, Pragmatics & Cognition, Vol. 14/1, pp. 53-82, http://dx.doi.org/10.1075/pc.14.1.05gul.
[10] Gunderson, E. et al. (2015), “Gesture as a window onto children’s number knowledge”, Cognition, Vol. 144, pp. 14-28, http://dx.doi.org/10.1016/j.cognition.2015.07.008.
[28] Holle, H. et al. (2010), “Integration of iconic gestures and speech in left superior temporal areas boosts speech comprehension under adverse listening conditions”, NeuroImage, Vol. 49/1, pp. 875-884, http://dx.doi.org/10.1016/j.neuroimage.2009.08.058.
[13] Iverson, J. and S. Goldin-Meadow (2005), “Gesture paves the way for language development”, Psychological Science, Vol. 16/5, pp. 367-371, http://dx.doi.org/10.1111/j.0956-7976.2005.01542.x.
[1] Kendon, A. (2004), Gesture: Visible Action as Utterance, Cambridge University Press, Cambridge, http://dx.doi.org/10.1017/CBO9780511807572.
[19] LeBarton, E., S. Goldin-Meadow and S. Raudenbush (2015), “Experimentally induced increases in early gesture lead to increases in spoken vocabulary”, Journal of Cognition and Development, Vol. 16/2, pp. 199-220, http://dx.doi.org/10.1080/15248372.2013.858041.
[33] Macedonia, M., K. Müller and A. Friederici (2011), “The impact of iconic gestures on foreign language word learning and its neural substrate”, Human Brain Mapping, Vol. 32/6, pp. 982-998, http://dx.doi.org/10.1002/hbm.21084.
[4] Novack, M. et al. (2014), “From action to abstraction: Using the hands to learn math”, Psychological Science, Vol. 25/4, pp. 903-910, http://dx.doi.org/10.1177/0956797613518351.
[2] Novack, M., E. Wakefield and S. Goldin-Meadow (2016), “What makes a movement a gesture?”, Cognition, Vol. 146, pp. 339-348, http://dx.doi.org/10.1016/j.cognition.2015.10.014.
[11] Perry, M., R. Breckinridge Church and S. Goldin-Meadow (1988), “Transitional knowledge in the acquisition of concepts”, Cognitive Development, Vol. 3/4, pp. 359-400, http://dx.doi.org/10.1016/0885-2014(88)90021-4.
[26] Ping, R. and S. Goldin-Meadow (2008), “Hands in the air: Using ungrounded iconic gestures to teach children conservation of quantity”, Developmental Psychology, Vol. 44/5, p. 1277.
[22] Rader, N. and P. Zukow-Goldring (2012), “Caregivers’ gestures direct infant attention during early word learning: The importance of dynamic synchrony”, Language Sciences, Vol. 34/5, pp. 559-568, http://dx.doi.org/10.1016/J.LANGSCI.2012.03.011.
[35] Richland, L. (2015), “Linking gestures: Cross-cultural variation during instructional analogies”, Cognition and Instruction, Vol. 33/4, pp. 295-321, http://dx.doi.org/10.1080/07370008.2015.1091459.
[20] Rohlfing, K., M. Longo and B. Bertenthal (2012), “Dynamic pointing triggers shifts of visual attention in young infants”, Developmental Science, Vol. 15/3, pp. 426-435, http://dx.doi.org/10.1111/j.1467-7687.2012.01139.x.
[9] Roth, W. (2000), “From gesture to scientific language”, Journal of Pragmatics, Vol. 32/11, pp. 1683-1714, http://dx.doi.org/10.1016/S0378-2166(99)00115-0.
[29] Singer, M. and S. Goldin-Meadow (2005), “Children learn when their teacher’s gestures and speech differ”, Psychological Science, Vol. 16/2, pp. 85-89.
[8] Stieff, M. (2011), “When is a molecule three dimensional? A task-specific role for imagistic reasoning in advanced chemistry”, Science Education, Vol. 95/2, pp. 310-336, http://dx.doi.org/10.1002/sce.20427.
[23] Valenzeno, L., M. Alibali and R. Klatzky (2003), “Teachers’ gestures facilitate students’ learning: A lesson in symmetry”, Contemporary Educational Psychology, Vol. 28/2, pp. 187-204, http://dx.doi.org/10.1016/S0361-476X(02)00007-3.
[18] Wakefield, E. and K. James (2015), “Effects of learning with gesture on children’s understanding of a new language concept”, Developmental Psychology, Vol. 51/8, pp. 1105-1114, http://dx.doi.org/10.1037/a0039471.
[32] Wakefield, E. and K. James (2011), “Effects of sensori-motor learning on melody processing across development”, Cognition, Brain, Behavior: An Interdisciplinary Journal, Vol. 15/4, pp. 505-534, http://www.ncbi.nlm.nih.gov/pubmed/25653926.
[34] Wakefield, E., T. James and K. James (2013), “Neural correlates of gesture processing across human development”, Cognitive Neuropsychology, Vol. 30/2, pp. 58-76, http://dx.doi.org/10.1080/02643294.2013.794777.
[3] Wakefield, E., M. Novack and S. Goldin-Meadow (2018), “Unpacking the ontogeny of gesture understanding: How movement becomes meaningful across development”, Child Development, Vol. 89/3, pp. e245-e260, http://dx.doi.org/10.1111/cdev.12817.