Texting In Virtual World Made As Easy As Real World

Typing a message may be second nature on a smartphone, but in the virtual world it remains a stubborn challenge. In Augmented Reality (AR) and Mixed Reality (MR), text input often relies on mid-air gestures that require users to move their arms extensively, an approach that is slow, tiring, and impractical in crowded spaces.

Now, researchers at Tohoku University may have found a solution. They have developed a new text input system that allows users to type on a compact virtual keyboard using minimal hand movement, opening the door to more practical everyday use of AR and MR devices.

The system is designed around small thumb movements, enabling efficient typing on a miniature virtual keyboard. This makes it particularly suited to mobile environments such as commuting, where space is limited and large gestures are not feasible.

At the core of the approach is a novel interface in which candidate characters appear in a fan-shaped layout around the point of contact. Users can quickly select the intended character, even when tapping imprecisely on the small keyboard.
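To make the idea concrete, the fan-shaped presentation can be sketched as simple arc geometry: candidate characters are spaced evenly along an arc centered on the tap point. This is an illustrative sketch only; the radius, arc span, and parameter names below are assumptions, not values from the paper.

```python
import math

def fan_layout(candidates, contact_x, contact_y,
               radius=1.0, spread_deg=90.0, center_deg=90.0):
    """Place candidate characters in a fan (arc) around a tap point.

    Returns a list of (char, x, y) tuples. The arc is centered on
    `center_deg` (90 = directly above the contact point) and spans
    `spread_deg`. All parameter values here are illustrative.
    """
    n = len(candidates)
    if n == 1:
        angles = [center_deg]
    else:
        start = center_deg - spread_deg / 2
        step = spread_deg / (n - 1)
        angles = [start + i * step for i in range(n)]
    positions = []
    for ch, angle in zip(candidates, angles):
        rad = math.radians(angle)
        positions.append((ch,
                          contact_x + radius * math.cos(rad),
                          contact_y + radius * math.sin(rad)))
    return positions

# Two candidates (as in the paper's "z"/"s" example) end up
# symmetrically above and to either side of the tap point.
layout = fan_layout(["z", "s"], 0.0, 0.0)
```

Because the candidates sit a short, fixed distance from the thumb's contact point, selecting one requires only a small flick rather than a reach across the keyboard.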

Behind the scenes, the system continuously learns from user behavior. By analyzing the relationship between tap positions and selected characters, it builds a personalized model that improves prediction accuracy over time.
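One common way to realize this kind of adaptation, sketched below as an assumption rather than the paper's actual algorithm, is to track each user's tap positions per character and rank candidates by distance to each character's personalized center rather than its nominal key center.

```python
import math
from collections import defaultdict

class TapModel:
    """Toy personalized tap model (illustrative, not the paper's method).

    For each character, accumulate observed tap positions; rank
    candidates by distance to the per-user mean tap position,
    falling back to the nominal key center when no data exists.
    """
    def __init__(self, key_centers):
        self.key_centers = dict(key_centers)  # char -> (x, y) nominal center
        self.obs = defaultdict(list)          # char -> list of (x, y) taps

    def record(self, char, x, y):
        """Learn from a confirmed selection: remember where the tap landed."""
        self.obs[char].append((x, y))

    def _center(self, char):
        taps = self.obs[char]
        if not taps:
            return self.key_centers[char]
        return (sum(t[0] for t in taps) / len(taps),
                sum(t[1] for t in taps) / len(taps))

    def rank(self, x, y, k=2):
        """Return the k most likely characters for a new tap at (x, y)."""
        def dist(c):
            cx, cy = self._center(c)
            return math.hypot(x - cx, y - cy)
        return sorted(self.key_centers, key=dist)[:k]

# Hypothetical usage: a user who habitually taps "a" to the right of
# its nominal center gradually pulls the model toward their behavior.
model = TapModel({"a": (0.0, 0.0), "s": (1.0, 0.0), "z": (2.0, 0.0)})
for _ in range(3):
    model.record("a", 0.5, 0.0)
top = model.rank(0.6, 0.0, k=2)
```

In a full system, the top-ranked characters would be the ones shown in the fan for quick confirmation, so prediction quality directly determines how little the thumb has to move.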

Tests showed that users were able to input text reliably while making only minimal finger movements, suggesting the approach could significantly reduce the physical strain associated with current AR and MR typing methods.

"Our goal was to design a text input method that works even with a very small keyboard and minimal hand movement, so that AR and MR devices can be used more naturally in everyday situations," said Guanghan Zhao, lead researcher and postdoctoral fellow at Tohoku University.

A user wearing an AR device performs text input using thumb interaction within a limited space, in an environment that simulates a crowded public setting. ©Zhao et al.

Yoshifumi Kitamura, a professor at Tohoku University's Research Institute of Electrical Communication and Director of the Interdisciplinary ICT Research Center for Cyber and Real Spaces, said the work highlights a broader shift in interface design. "As AR and MR technologies become more integrated into daily life, intuitive and efficient input methods will be essential. This research shows how predicting user intent can help overcome the physical limitations of compact interfaces," he said.

The findings point to a new direction for human-computer interaction, where systems adapt to users rather than requiring precise input. The team believes the technology could help make AR and MR devices more viable in daily life and bring closer a future where virtual interfaces begin to replace smartphones.

The research results were presented on March 23rd at the 33rd IEEE Conference on Virtual Reality and 3D User Interfaces, held in South Korea from March 21st to 25th.

The FanType input interface. Estimated character candidates (in this example, z and s) are displayed in a fan-shaped layout, allowing users to select and enter characters with minimal thumb movement. ©Zhao et al.
Publication Details:

Title: Intention-Inferring Fan-shaped Thumb Interface for Text Entry on Small XR Keyboards

Authors: Guanghan Zhao, Louis Teys, Gyeonghwan Yang, Shengdong Zhao, and Yoshifumi Kitamura

Conference: 2026 IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR)
