Communication is a fundamental human right, and many individuals need augmentative and alternative communication (AAC) approaches or tools to communicate effectively, such as a notebook or electronic tablet with symbols the user can select to create messages. While access to speech-language therapies and interventions that promote successful communication outcomes can help some individuals, many existing AAC systems are not designed to support the needs of individuals with motor or visual impairments. By integrating movement sensors with artificial intelligence (AI), researchers at Penn State are finding new ways to further support expressive communication for AAC users.
Led by Krista Wilkinson, distinguished professor of communication sciences and disorders at Penn State, and Syed Billah, assistant professor of information sciences and technology at Penn State, researchers developed and tested a prototype application that uses wearable sensors to translate body-based communicative movements into speech output. This initial test included three individuals with motor or visual impairments who served as community advisors to the project. All participants said that the prototype improved their ability to communicate quickly and with people outside their immediate social circle. The theory behind the technology and initial findings were published in Augmentative and Alternative Communication.
Aided and unaided AAC
There are two different types of AAC individuals can use. Aided AAC is typically technology-assisted - pointing at pictures or selecting symbols in a specialized app on an electronic tablet. For example, a person might be presented with three food options as images on their tablet and point to the choice they want, communicating it to their communication partner. While aided AAC can be understood easily, even by individuals not familiar with the user, it can be physically taxing for those with visual or motor impairments, according to Wilkinson.
The other form of AAC is unaided, or body-based, AAC - facial expressions, shrugs or gestures that are specific to the individual. For example, a person with little to no speech who also has motor impairments, but can move their arms and hands, may raise their hand when shown a specific object to signal, "I want."
"Unaided AAC is fast, efficient and often less physically taxing for individuals as the movements and gestures are routinely used in their everyday lives," Wilkinson said. "The downside is these gestures are typically only known by people familiar with the individual and cannot be understood by those they may interact with on a less frequent basis, making it more difficult for AAC users to be independent."
According to Wilkinson, the goal of developing the prototype was to begin breaking down the wall between aided and unaided AAC, giving individuals the tools they need to open more of the world and communicate freely with those outside their immediate circles.
How AI can help
Current technologies have already begun incorporating AI for natural gesture recognition. However, mainstream technologies are based on large numbers of movements produced by people without disabilities. For individuals with motor or visual disabilities, it is necessary to make the technologies capable of learning idiosyncratic movements - movements and gestures with specific meaning to individuals - and mapping them to specific commands. The ability of these systems to adjust to individual movement patterns reduces the potential for error and the demands placed on the individual to perform specific pre-assigned movements, according to Wilkinson.
The utility and user experience of such AI algorithms, however, are largely unexplored. There are gaps in the understanding of how these algorithms are developed, how they can be adapted for AAC users with diverse disabilities and how they can be seamlessly integrated into existing AAC systems, according to Wilkinson.
Building the prototype
When developing and testing the prototype, Wilkinson said it was important to her and her team to gather input and feedback from individuals who would be likely to use, and benefit from, this technology.
Emma Elko is one of the three community advisors the researchers worked with, along with her mother, Lynn Elko - Emma's primary communication partner. Emma has cortical visual impairment - a visual disability caused by damage to the brain's visual pathways rather than the eyes themselves - and uses aided AAC to communicate. She also has specific gestures she makes to say "I want" and "come here."
Using a sensor worn on Emma's wrist, the researchers captured her communicative movements. The sensor recorded the kinematics of each movement - how it travels through space, in terms of position and speed - allowing the system to distinguish between gestures such as an up-and-down motion and a side-to-side motion.
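The article does not describe the sensing hardware or algorithms in detail, but the basic idea can be illustrated with a rough sketch: a wrist-worn accelerometer reports motion along three axes, and comparing how much the signal varies on each axis is one simple way to tell an up-and-down movement from a side-to-side one. The axis labels and array format below are assumptions for illustration, not details from the study.

```python
import numpy as np

def dominant_axis(samples: np.ndarray) -> str:
    """Classify a movement by the axis with the most motion energy.

    `samples` is assumed to be an (N, 3) array of accelerometer
    readings (x, y, z) for one recorded movement window.
    """
    # Variance per axis is a crude proxy for how much movement
    # happened along that axis during the window.
    energy = samples.var(axis=0)
    labels = ["side_to_side_x", "forward_back_y", "up_down_z"]
    return labels[int(np.argmax(energy))]

# Toy example: a mostly vertical (z-axis) motion with a little noise elsewhere.
t = np.linspace(0, 1, 100)
vertical_motion = np.column_stack([0.05 * np.random.randn(100),
                                   0.05 * np.random.randn(100),
                                   np.sin(2 * np.pi * 3 * t)])
print(dominant_axis(vertical_motion))  # -> "up_down_z"
```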
Emma was prompted to repeat a movement three times, with Lynn signaling the beginning and end of each repetition for the algorithm to capture. The researchers found that three repetitions of a gesture gathered sufficient data while minimizing user fatigue.
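The study does not specify how the algorithm generalizes from so few examples, but one common few-shot approach is template matching: store each recorded repetition as a reference signal and label a new movement by the stored gesture it most closely resembles. The sketch below is illustrative rather than the team's implementation; it uses simple resampling and Euclidean distance, whereas practical systems often use elastic measures such as dynamic time warping.

```python
import numpy as np

def resample(signal: np.ndarray, length: int = 50) -> np.ndarray:
    """Linearly resample an (N, 3) movement recording to a fixed length."""
    idx = np.linspace(0, len(signal) - 1, length)
    return np.stack([np.interp(idx, np.arange(len(signal)), signal[:, k])
                     for k in range(signal.shape[1])], axis=1)

class GestureTemplates:
    """Few-shot recognizer: a handful of repetitions per gesture serve as templates."""

    def __init__(self):
        self.templates: dict[str, list[np.ndarray]] = {}

    def add_repetition(self, label: str, recording: np.ndarray) -> None:
        # Called once per repetition, with the communication partner
        # marking the start and end of the movement.
        self.templates.setdefault(label, []).append(resample(recording))

    def classify(self, recording: np.ndarray) -> str:
        query = resample(recording)
        # The label of the closest stored template (smallest distance) wins.
        return min(((np.linalg.norm(query - t), label)
                    for label, reps in self.templates.items()
                    for t in reps))[1]
```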
Once the AI algorithm captured a gesture and an associated communicative output was assigned, a connected smartphone application translated the gesture into speech, producing that output any time the sensor detected the gesture being made. In this way, Emma could communicate directly with someone who was unfamiliar with the specific meaning of her gestures.
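The internals of the smartphone application are not described in the article. As a rough sketch, once a gesture label is recognized, producing speech output can be as simple as looking up the assigned phrase and passing it to a text-to-speech engine. The gesture-to-phrase mapping and the choice of the pyttsx3 library below are illustrative assumptions, not the study's implementation.

```python
import pyttsx3  # offline text-to-speech library; an illustrative choice

# Hypothetical mapping from recognized gesture labels to assigned messages.
GESTURE_TO_SPEECH = {
    "raise_hand": "I want that.",
    "side_to_side_wave": "Come here.",
}

engine = pyttsx3.init()

def on_gesture_recognized(label: str) -> None:
    """Speak the phrase assigned to a gesture whenever the sensor reports it."""
    phrase = GESTURE_TO_SPEECH.get(label)
    if phrase:
        engine.say(phrase)
        engine.runAndWait()

on_gesture_recognized("raise_hand")  # speaks "I want that."
```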
"The idea is that we can create a small dictionary of an individual's most commonly used gestures that have communicative meaning to them," Wilkinson said. "The great thing about it is the sensor technology allows individuals to be disconnected from their computer or tablet AAC, allowing them to communicate with people more freely."