Technology Tools

Educators often take advantage of educational technologies as they make the shifts in instruction, teacher roles, and learning experiences that next gen learning requires. Technology should not lead the design of learning, but when educators use it to personalize and enrich learning, it has the potential to accelerate mastery of critical content and skills by all students.

Learn More

SMALLab Learning is creating immersive and engaging learning experiences for students by integrating mixed reality technologies with learning science.

Editor’s Note: Excerpts from this guest post originally appeared in the Fall 2012 edition of ARVELSIG SUPER NEWS.

The emergence of new educational technologies and interfaces that accept natural physical movement (i.e., gestures, touch, body positioning) as input into interactive digital environments is an exciting development. At SMALLab Learning, we've been at the forefront of integrating these technologies with contemporary research from the learning sciences to create immersive and engaging learning experiences for students. (Imagine students using their own bodies as "cursors" to manipulate digital objects and transform virtual figures.) In the general public, however, there is often confusion about what, exactly, embodied or kinesthetic learning entails and what its potential for education might be.

Embodiment and Mixed Reality. One category of new educational technologies is referred to as "mixed reality" (MR), which involves the "merging of real and virtual worlds" (Milgram & Kishino, 1994). That is, digital components, such as graphics projected on a floor or wall, are merged with real-world tangible objects, e.g., trackable hand-held wands.

For example, Figure 1 shows a student using a rigid-body tracking wand to interact with floor projections (in this case, manipulating the length of a light wave). Our lab, among others, has cited the strong potential of these technologies to engage learners of all types in immersive experiences that enhance education (Birchfield & Johnson-Glenberg, 2010; Chang, Lee, Wang, & Chen, 2010; Hughes, Stapleton, Hughes, & Smith, 2005; Johnson-Glenberg, Birchfield, & Uysal, 2009; Johnson-Glenberg, Birchfield, Megowan-Romanowicz, Tolentino, & Martinez, 2009; Johnson-Glenberg, Koziupa, Birchfield, & Li, 2011; Lindgren & Moshell, 2011).

Embodiment and Cognition. The working hypothesis in our lab is that human cognition is really embodied cognition. This means that cognitive processes are deeply rooted in, and emerge from, the body's interactions with its physical environment (Wilson, 2002). Multiple research areas now support the tenet that embodiment is an underpinning of cognition, including (but not limited to): neuroscience (Rizzolatti & Craighero, 2004), cognitive psychology (Barsalou, 2008; Glenberg & Kaschak, 2002; Glenberg, 2010), math (Lakoff & Nunez, 2000), gesture (Hostetter & Alibali, 2008; Goldin-Meadow, Cook, & Mitchell, 2009), expert acting (Noice & Noice, 2006, with the idea of "active experiencing"), and dance (Winters, 2008). Glenberg (2010) contends that all cognition comes from developmental, embodied interactions with physical environments; it follows that all thought, even the most abstract, is built on the foundation of physical embodiment. Pulvermüller and Fadiga's (2010) review of fMRI experiments demonstrates that when people read words related to action, areas of the brain are activated in a somatotopic manner. For example, reading "lick" activates motor areas that control the mouth, whereas reading "pick" activates areas that control the hand. This activation is part of a parallel network representing "meaning," and it shows that the mappings do not fade once stable comprehension is attained: motoric codes are still activated during linguistic comprehension in adulthood. If these codes remain active in the adult brain, then a good design principle would be one that includes the modality of kinesthetics.

In our labs, at both Arizona State University and SMALLab Learning in Los Angeles, we are very specific about what embodiment means, how we design content to be appropriately embodied, and how we assess learning. When we say that a lesson is highly embodied, we mean the learner is doing more than simply swiping a hand across a touch screen to turn a page; the hand gesture or whole-body movement is aligned, or congruent, with the mediated content to be learned. This alignment, which others call "congruency" (e.g., Segal, 2011, from John Black's lab), must be designed with forethought. If you are designing a lesson on gears in the mouse-driven world, it seems natural to use a click to start the gear train turning. With a KINECT sensor as input, however, students can spin their hands in the direction the input gear should go and intuitively control the speed of the gear train. This "spin" movement is congruent with the material to be learned, whereas a default "push" gesture on the KINECT would not be.
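To make the "spin" mapping concrete, here is a minimal sketch (not our production code) of how a congruent spin gesture could be recovered from tracked hand positions: each frame, the hand's angle around the projected gear center is compared with the previous frame's angle, and the signed change gives both the direction and the speed to apply to the input gear. The function and frame data below are hypothetical.

```python
import math

def spin_update(prev, curr):
    """Return the signed angle (radians) the hand swept around the gear
    center between two frames; the sign gives the rotation direction."""
    a1 = math.atan2(prev[1], prev[0])
    a2 = math.atan2(curr[1], curr[0])
    delta = a2 - a1
    # unwrap so crossing the +/- pi boundary is not read as a huge jump
    if delta > math.pi:
        delta -= 2 * math.pi
    elif delta < -math.pi:
        delta += 2 * math.pi
    return delta

# hand positions (x, y) relative to the gear center, one per frame,
# circling counter-clockwise
frames = [(1.0, 0.0), (0.7, 0.7), (0.0, 1.0)]
total = sum(spin_update(a, b) for a, b in zip(frames, frames[1:]))
direction = "counter-clockwise" if total > 0 else "clockwise"
```

Dividing `total` by the elapsed time would give an angular speed, so the gear train turns as fast as the student's hand circles.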

With this grant funding, we have created multiple supplemental Simple Machine game units on gears and levers, along with several assessment measures. Figure 2 shows a screenshot of our two-player Winching Gears Game. These games are available for free on our website under Products.

The KINECT experience is called Flow (Fluid Learning on the Wall). The goal of the winching game is to bring up boulders from a mine; the direction and speed of the input gear are mapped to the human gesture. Students explore ratios and mechanical advantage in a kinesthetic classroom experience. By altering the size of the input gear in real time, students begin to understand the concept of gear ratios and what they can lift with a fixed amount of input force. The game is very embodied because the size (diameter) of the input gear is determined by the distance of the learner's hand from his or her shoulder joint. We also tackle a misconception many 5th through 12th graders hold about which size of input gear is better for lifting heavier objects.
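The trade-off the game exposes can be sketched with ideal-gear physics. The sketch below is illustrative only (the function names, the fixed-input-torque assumption, and the numbers are ours, not the game's internals): for a two-gear train, the torque multiplication equals the ratio of the pitch radii, so with a fixed effort, shrinking the input gear lifts a heavier load while the winch turns more slowly for the same hand speed.

```python
def mechanical_advantage(r_in, r_out):
    """Torque multiplication of an ideal two-gear train:
    output torque / input torque = r_out / r_in."""
    return r_out / r_in

def max_lift(input_torque, r_in, r_out, r_drum):
    """Heaviest load (in force units) a winch drum of radius r_drum can
    raise, given a fixed torque applied to the input gear."""
    output_torque = input_torque * mechanical_advantage(r_in, r_out)
    return output_torque / r_drum

# Same effort (10 torque units), same output gear and drum; only the
# input gear's radius changes, as it does under the learner's hand.
heavy = max_lift(input_torque=10.0, r_in=0.1, r_out=0.4, r_drum=0.2)
light = max_lift(input_torque=10.0, r_in=0.2, r_out=0.4, r_drum=0.2)
```

Halving the input radius here doubles the liftable load, which is the kind of relationship students can feel directly by moving a hand closer to or farther from the shoulder.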

Two MR Environments. In our labs, we design for two learning platforms: 1) rigid-body motion tracking using multiple infrared cameras, and 2) the skeletal tracking system in Microsoft's KINECT sensor. Both technologies afford locomotion, and we build locomotion components into lessons when appropriate. For example, if the goal is to teach constant velocity in the large-scale, 15-by-15-foot SMALLab (Situated Multimedia Arts Learning lab) platform, we design a scenario in which the student actually walks through the space at a constant velocity. In Figure 3, two students are exploring chemistry titration (the linked video shows a teacher using this scenario). The students use the wands in an embodied, up-and-down motion, like pipettes, to drop molecules into a virtual flask in the center.

In the SMALLab environment, infrared cameras above the students' heads capture the velocity of the handheld tracking wands. The system gives students immediate sonic and visual feedback about position and speed. Because the space is so large and the visual input covers an expanse of the floor (and of the learners' retinas), the experience feels very immersive. In the second environment (Flow), the KINECT sensor serves as the motion-capture input device. There is less sustained locomotion, but there is always sensorimotor engagement of the core, shoulder, arm, and wrist, and all gestures are designed to be congruent with the content being learned. Figure 4 shows two children playing the RGB-matching Flow game with the KINECT as input; the height of the hand maps to the saturation of the color in the center target.
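A height-to-saturation mapping of the kind used in the RGB game can be sketched in a few lines. This is an illustrative assumption about the mapping, not the game's actual code; the parameter names (floor level, full reach) are hypothetical calibration values.

```python
def hand_height_to_saturation(hand_y, floor_y, reach_y):
    """Map the tracked hand height (sensor units) onto a 0-1 color
    saturation: hand at floor level = grey, hand at full reach = vivid."""
    span = reach_y - floor_y
    s = (hand_y - floor_y) / span
    # clamp so poses outside the calibrated range stay valid colors
    return min(1.0, max(0.0, s))
```

Clamping matters in practice: skeletal tracking will happily report a hand above the calibrated reach, and an unclamped value would leave the valid color range.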

Future Questions. We are excited about the new media games we have created with this funding and hope the content is widely distributed. We encourage others who are exploring the KINECT for education to think through meaningful designs that map body gesture in a way that is relevant, or congruent, to the content. For example, to explore friction and heat transfer, we would not design for a one-handed drag-and-drop gesture; that is not congruent. We would design the system to track the students rubbing their hands together. The KINECT could then monitor the duration and vigor of the rubbing, and a graph would display the putative temperature rise. If you wanted to teach young children to understand an analog clock face, you would design a scenario in which their arms, as hour and minute hands, circle around the body (KINECT TIME), rather than using a finger flick to swipe a wheel of numbers.
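The friction example's duration-and-vigor mapping could be driven by a simple update loop like the sketch below, which is one plausible model rather than a real implementation: rubbing vigor (relative hand speed) adds heat each frame, while a Newtonian-cooling term pulls the displayed temperature back toward ambient when the rubbing stops. All coefficients are made-up illustration values.

```python
def update_temperature(temp, hand_speed, dt,
                       heat_coeff=0.5, cool_rate=0.1, ambient=20.0):
    """One simulation step: faster rubbing (hand_speed) adds more heat,
    while Newtonian cooling relaxes the value back toward ambient."""
    heating = heat_coeff * hand_speed * dt
    cooling = cool_rate * (temp - ambient) * dt
    return temp + heating - cooling

temp = 20.0                   # start at ambient (degrees C)
for _ in range(100):          # ~3 seconds of steady rubbing at 30 fps
    temp = update_temperature(temp, hand_speed=2.0, dt=1 / 30)
```

Because heating scales with hand speed and cooling scales with the excess over ambient, the curve a graph would display rises quickly with vigorous rubbing and levels off, which is roughly the behavior students would expect from warming their hands.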

Below are some questions that will guide our future research. We welcome feedback in the comments below or directly to our lab.

  • Are there limits to what can be embodied? Michael Eisenberg asked an intriguing question at the 2012 International Conference for the Learning Sciences—“How come no one has embodied a Laplace transformation?” (Which we are, of course, now trying to embody.)
  • Is adding the modality of kinesthetics/gesture infelicitous for some learners? Perhaps it is more difficult for some low-prior-knowledge students to integrate all the input? Aptitude-by-treatment interactions have yet to be explored in this space.
  • We strive for all our lessons to be game-like and to include collaboration. Is it possible to seamlessly integrate assessment into the games? How might learning gains be influenced by game components and competition? How can we judge the efficacy and quality of whole-classroom collaboration (without poring over endless videotapes)?
  • How can multimedia designers make sure there is time for reflection built into their content? Should we try to carve out time for solitary reflection? How can we do that elegantly without it feeling like dead time to some students?

Learn more about SMALLab’s work on the NGLC website or contact Dr. Johnson-Glenberg directly.
Special thanks to David Birchfield, Colleen Megowan-Romanowicz, Cyndi Boyd and Tim Lang.

Mina C. Johnson-Glenberg


Mina Johnson-Glenberg is a co-inventor of SMALLab, a learning company that started as a research project at Arizona State University with a goal of creating a transformational learning environment for K-12 education. SMALLab is an embodied learning environment where students are up out of their seats, moving as they learn.