Next-Gen Learning Tools Unveiled at LIVE Incubator

Vanderbilt University
By Jennifer Kiilerich and Jenna Somers

Imagine learning about ecosystems by "becoming" a bee, connecting with your child more deeply by discussing a good book, or being inspired by a library of brand-new AI creations right at your fingertips. These experiences are all possible thanks to learning tools being developed, researched, nurtured and shared by Vanderbilt's LIVE Learning Innovation Incubator.

LIVE, an initiative of Vanderbilt Peabody College of education and human development, brings together trans-institutional teams of researchers and strategic partners to develop educational innovations that empower learners, individuals and communities.

"What excites me most is the incredible range of work our teams accomplish each day with the creativity that emerges when people work together across disciplines."

"LIVE is all about fostering groundbreaking ideas and building tools that truly make a difference for learning, wherever it occurs. What excites me most is the incredible range of work our teams accomplish each day with the creativity that emerges when people work together across disciplines," said Alyssa Wise, LIVE director.

With grants, expert advice from LIVE engineers, and opportunities for community and industry connections, LIVE helps Vanderbilt innovators transform inspiration into meaningful research and market-ready tools. Ahead, learn more about three of these cutting-edge tools:

  • STEP, which boosts kids' learning through interactive, embodied simulations;
  • REED, an AI-based agent that augments how caregivers read with children; and
  • The Peabody Hub for Mindful AI Innovation, a dynamic online community resource for responsible AI use.

STEP: LEARNING SCIENCE BY BECOMING IT

The bright lights of a theater stage may not conjure thoughts of the scientific method, but that's where Vanderbilt researcher Noel Enyedy first found inspiration for STEP (Science through Technology Enhanced Play). "I was hanging out with some folks in the theater department," he recalled, "and they were using tracking technology so actors could control their own lighting and sound on stage."

Enyedy, professor of teaching and learning at Peabody, wondered: Why not bring that same technology into the classroom? "We wanted to do things differently in science classes," he said. "We considered how kids could learn at a deeper level and come to love the subject matter."

STEP, launched more than 15 years ago, initially used ceiling-mounted cameras to scan QR codes on students' hats. When kids' QR codes collided, the system could model things like force and motion, such as how a ball might move if hit in a certain way, or what friction would look like if acted out by people. This early design opened the door to a new way of teaching scientific phenomena, grounded in embodiment and perspective-taking.

Now, synced trackers (like those used in mobile devices) allow children to step inside scientific models that are reflected for them on a digital screen. Instead of hearing a lecture on states of matter, they become water molecules.

For example, when first graders hold still and spread out at precise distances, a screen correlating with their trackers signals that they have formed ice: green dots frozen in place. "If they run around the room like kids do, they quickly learn they can form gas," said Enyedy, and their dots turn red. Moving slowly creates liquid, shown in blue on the screen.
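To make the idea concrete, here is a minimal, hypothetical Python sketch of how tracked movement could be mapped to states of matter on screen; the thresholds, class names and colors are illustrative assumptions, not details of the actual STEP system.

```python
from dataclasses import dataclass

@dataclass
class TrackedStudent:
    x: float      # position on the classroom floor, in meters (assumed units)
    y: float
    speed: float  # estimated meters per second between tracker frames

def classify_state(student: TrackedStudent) -> tuple[str, str]:
    """Map a student's movement to a state of matter and a display color:
    holding still reads as ice (green), slow movement as liquid (blue),
    and running as gas (red), mirroring the classroom activity."""
    if student.speed < 0.2:   # hypothetical threshold for "holding still"
        return "ice", "green"
    if student.speed < 1.0:   # hypothetical threshold for "moving slowly"
        return "liquid", "blue"
    return "gas", "red"

# A first grader standing nearly still would be drawn as a frozen green dot.
print(classify_state(TrackedStudent(x=1.0, y=2.0, speed=0.1)))
```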

In another model, children play the role of bees, flying from flower to flower to pollinate. "The role-taking and engagement allow students to identify with the first-person perspective," Enyedy noted.

Integrating AI

Now in its fourth generation, the system has evolved into GEM-STEP: Generalized Embodied Modeling with STEP. Developed within the LIVE Learning Innovation Incubator and supported by a series of federal grants, including a five-year, $2.8 million award from the National Science Foundation that recently concluded, GEM-STEP represents a new chapter for mixed-reality learning.

While earlier models were pre-built to demonstrate specific concepts, GEM-STEP integrates coding directly into the activity. It gives students the power not just to inhabit scientific models, but to shape and reprogram them in real time. Enyedy's Ph.D. students, in collaboration with Cornelius Vanderbilt Professor of Engineering Gautam Biswas's team, are using the approach with teens learning computer science.

For instance, high schoolers role-play as DJs organizing songs for a party. Wearing trackers linked to hit songs, the students alter on-screen avatars through their movements and interactions. Working together, they discover they have been sorted by Spotify popularity.

GEM-STEP then visualizes the underlying algorithm, revealing the science and computational logic behind the simulation. "We want to provide a more expansive way for kids to interact with these technologies," said Joyce Fonteles, a computer science graduate assistant. "Because later on that will help them think outside of the box." In this interdisciplinary partnership, Fonteles and her team are developing multimodal learning analytics for GEM-STEP.
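As a rough illustration of what such a visualization might show, the hypothetical Python sketch below sorts made-up song entries by a popularity score and records each pass, so every intermediate ordering could be displayed as students watch; none of the names or numbers come from the actual GEM-STEP software.

```python
def sort_avatars_by_popularity(avatars: list[dict]) -> list[list[str]]:
    """Selection sort that records a snapshot after each pass, so every
    intermediate ordering can be shown on screen as the algorithm unfolds."""
    snapshots = []
    items = avatars[:]
    for i in range(len(items)):
        # Find the most popular remaining song and move it into position i.
        most = max(range(i, len(items)), key=lambda j: items[j]["popularity"])
        items[i], items[most] = items[most], items[i]
        snapshots.append([entry["song"] for entry in items])
    return snapshots

# Made-up party queue: each entry stands in for one student's tracker.
party_queue = [
    {"song": "Track A", "popularity": 62},
    {"song": "Track B", "popularity": 91},
    {"song": "Track C", "popularity": 48},
]
for step in sort_avatars_by_popularity(party_queue):
    print(step)   # each line is one on-screen arrangement of the avatars
```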

A central question for the researchers is whether new computer-science learners can use plain language to describe models they want to build and then use AI to help them generate the necessary code. At the same time, they hope the AI agents will serve as thought partners, asking key critical thinking questions and aiding in understanding.

GEM-STEP may take AI even further by observing student behavior during simulations and proposing new activities: noticing, for instance, that no one is exploring one corner of the simulation, or that groups cluster around the same "flower." Computer vision and other techniques implemented by Biswas's team can automatically detect and analyze this type of motion as students engage in learning.
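A simple way to imagine that kind of analysis is an occupancy grid over the tracked positions, as in the hypothetical Python sketch below; the room dimensions, grid size, and visit threshold are assumptions rather than details of the team's implementation.

```python
from collections import Counter

ROOM_W, ROOM_H, CELLS = 8.0, 6.0, 4   # assumed room size in meters; 4 x 4 grid

def unexplored_cells(positions: list[tuple[float, float]], min_visits: int = 3):
    """Return grid cells visited fewer than `min_visits` times."""
    visits = Counter(
        (int(x / ROOM_W * CELLS), int(y / ROOM_H * CELLS)) for x, y in positions
    )
    return [
        (cx, cy)
        for cx in range(CELLS)
        for cy in range(CELLS)
        if visits[(cx, cy)] < min_visits
    ]

# If every student lingers near the "flowers" in one corner, the far cells show
# up here, and the system could propose an activity that draws students there.
print(unexplored_cells([(1.0, 1.2), (1.1, 0.9), (0.8, 1.0)]))
```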

"They could see the applicability of it outside of computer science and outside of this specific activity."

During a pilot study at the School for Science and Math at Vanderbilt, teenagers quickly imagined how GEM-STEP might extend into their daily lives. "One of them talked about how she could improve her archery skills, and another wanted to be a doctor and said she could use this for medicine," recalled teaching and learning Ph.D. student Efrat Ayalon. "They could see the applicability of it outside of computer science and outside of this specific activity."

With ten study units already developed, STEP and GEM-STEP connect science to play, collaboration and empathy, helping kids of all interests and abilities participate in STEM.

REED: UNLOCKING DEEPER MEANING IN CHILDREN'S BOOKS

After a hard day of work, a parent reading a bedtime story to their child might feel too tired and stressed to think of questions that could spark insightful conversations about the story. But these conversations, which scholars call dialogic reading, are critical to literacy development. They strengthen children's reading comprehension and vocabulary, as well as their love of reading. What if there were a mobile app that could help these families?

That is what a Vanderbilt research team supported by LIVE is developing. The REED app leverages generative AI to listen to a caregiver reading to a child, track where they are in the story, and suggest real-time, open-ended prompts that encourage dialogue between the caregiver and child. REED stands for read, engage, explore and discuss, which describes the process of dialogic reading. The app generates prompts based on the child's age, preferred language, and whether the book is new to them or an old favorite.
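As a hedged illustration of how such prompts could be assembled for a language model, the Python sketch below builds a request from the passage just read, the child's age, preferred language, and whether the book is new; the function name, fields, and wording are assumptions, since the team's actual prompts and model are not described here.

```python
def build_dialogic_prompt(page_text: str, child_age: int,
                          language: str, is_new_book: bool) -> str:
    """Assemble a hypothetical request for a generative model that suggests
    one open-ended, dialogic-reading question for the caregiver to ask."""
    familiarity = ("reading this book for the first time" if is_new_book
                   else "revisiting a familiar favorite")
    return (
        f"You help a caregiver practice dialogic reading with a {child_age}-year-old "
        f"who is {familiarity}. Respond in {language}. "
        f"Given the passage just read aloud, suggest one short, open-ended question "
        f"the caregiver can ask to spark conversation.\n\nPassage: {page_text}"
    )

# The resulting string would be sent to a generative model, and the suggested
# question shown to the caregiver rather than spoken by the app itself.
print(build_dialogic_prompt("The little bear could not find his red boot.",
                            child_age=4, language="English", is_new_book=True))
```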

Amy Booth, professor of psychology and human development at Peabody College, Abbie Petulante, PhD'22, a post-doctoral fellow at Vanderbilt Data Science, and Margaret Shavlik, PhD'23, a post-doctoral researcher in early literacy interventions, have developed a prototype of REED and are working with the Vanderbilt Center for Technology Transfer and Commercialization to bring the app to market via their new business, COG Learning. The name COG Learning reflects the team's motivation for creating the mobile app: to Close Opportunity Gaps in the availability of support for early learning across the differing experiences and skills of young children.

"Many early literacy interventions are unfortunately not equally effective for kids who need them the most, so we are figuring out ways to close the gap," Booth said. "We know that back-and-forth conversations between a caregiver and a child are a powerful engine for early learning, so we intentionally designed the app to be parent focused and minimally intrusive. Rather than providing a conversational agent, the app shows the prompt to the caregiver who then asks the child the question. There has been a lot of enthusiasm for the app among the 60 or so families who have tried it out so far."

At Vanderbilt Data Science, Petulante leads the development of the large language model that will generate developmentally appropriate conversation prompts for any book. She will serve as the CTO at COG Learning. "The technology behind REED shows how we can leverage domain-specialized AI to make a real impact. By teaching the model to mirror real human expertise, we're able to put that expertise into every caregiver's pocket," Petulante said.

Shavlik is analyzing the effectiveness of the mobile app for fostering dialogic reading and its downstream effects on vocabulary, story comprehension, narrative skill, and other literacy measures. She is also taking the lead in commercialization efforts for COG Learning and will be the company's CEO.

"I'm excited to be part of something that we hope ignites a love of reading for both children and the adults who read with them."

"I'm excited to be part of something that we hope ignites a love of reading for both children and the adults who read with them," Shavlik said. "Interactive book reading doesn't just build vocabulary and comprehension; it also supports attention, self-regulation, and perspective-taking, all while strengthening the caregiver-child bond. With REED, we're working to make those rich interactions easier to spark-and through COG Learning, we're taking this work beyond the lab and into families' everyday lives."

This article includes content originally published on September 3, 2025.

PEABODY HUB FOR MINDFUL AI INNOVATION: OFFERING AN ETHICAL AI RESOURCE

Generative AI is ubiquitous these days, but its use tends to raise both practical and ethical questions. A LIVE-supported online gallery launched this spring, the Peabody Hub for Mindful AI Innovation, provides a space for users to reflect on what it means to create with artificial intelligence.

Through an interactive website featuring student-made AI projects, visitors can tinker with machine-learning and generative AI tools while reading creators' narratives examining the development, potential societal impacts and ethical implications of their work.

Vanderbilt Peabody professors Golnaz Arastoopour Irgens and Alyssa Wise are the driving forces behind the project, which was built by LIVE research engineer Albert Na (BS'21) with support from a Peabody Instructional Innovation Grant. "We feel like it's urgently needed for students right now to have a broad understanding of the social and ethical considerations of AI," said Arastoopour Irgens, assistant professor of human-centered technologies.

"The hub is a source of inspiration about not just what's possible technically, but what can be imagined," said Wise, professor of technology and education and director of LIVE. "We want to spark thoughtfulness about what we do with AI and why-and what are the consequences of that?"

The site welcomes submissions from anybody with an AI creation, but its first round of projects came from undergraduate students in Peabody's new AI Everywhere class, a popular course that will soon be a permanent offering.

"Shadow Waltz," AI-created art and music by student Qwynn Foster, features ghost-like dancers in a dramatically lit ballroom and an accompanying song. "Shadow Waltz is a testament to what AI can do when guided by human emotion," wrote Foster.

Another example is Ashley Kim's AI assistant, "Korea Travel Assistant," which allows users to input English phrases they may need help with while visiting Korea. The chatbot offers not only translations, but also phonetic pronunciations and suggestions for polite interactions, a culturally important aspect of conversing in Korea.

In her commentary, Kim noted that "the chatbot promotes cultural respect and travel confidence, but there are ethical trade-offs, such as oversimplifying cultural nuance or discouraging deeper learning, that I worked to minimize."

"Break Free," by Bryan Zhang
"Break Free," by Bryan Zhang
"Shadow Waltz," by Qwynn Foster

Zhang's project "Break Free," meanwhile, looks a little different. A black square, like an old television set that has lost reception, features fuzzy, static-like dots. Zhang has some background in AI, having served on his previous school's ChatGPT task-force committee, and was eager to dig deeper. He wanted to create chaos with AI-which is antithetical to a technology driven by patterns and algorithms.

"It was an examination of how bias and the algorithm don't actually give you full creative freedom, because AI is trained to try to be as organized and as pattern-seeking as possible," he explained.

The Peabody Hub for Mindful AI Innovation is a jumping-off point and home base for broader AI stewardship and instruction that will take place at Peabody College and across Vanderbilt's campus in the coming months and years. With ongoing leadership and support from LIVE, the site will expand and evolve, say Wise and Arastoopour Irgens. It will serve as a landing spot for students, faculty, K-12 teachers and community members interested in responsible AI use.

This article includes content originally published on September 8, 2025.

EDUCATIONAL TOOLS OF THE FUTURE

LIVE leaders and creators understand that education goes hand in hand with innovation. Whether that takes the form of nurturing a big idea, supporting research and development or empowering researchers to translate innovation into social entrepreneurship, the LIVE Learning Innovation Incubator is dedicated to designing and scaling educational tools that make a difference. Learn more about the initiative here and discover more of its pioneering learning tools here.
