High-Speed Visual BCI: Hybrid Encoding, EEG Decoding

Beijing Institute of Technology Press Co., Ltd

Visual BCIs based on steady-state visual evoked potentials (SSVEPs) have long been the gold standard for high-speed noninvasive brain-computer communication, thanks to their rapid response, minimal user training, and high information transfer rate (ITR). However, progress in boosting ITR has nearly stagnated since 2018, when a landmark 40-target BCI system achieved 325.33 bpm. A key bottleneck has been the underutilization of spatial information in visual perception: traditional BCI systems rely on 64-channel EEG caps with ~3 cm interelectrode spacing, which cannot resolve the fine-grained spatiotemporal dynamics of the visual cortex, where the fundamental functional unit occupies just ~1 mm² of cortical surface. Existing encoding strategies have also been limited. While frequency and phase modulation have been widely used to encode BCI commands, spatial encoding alone has suffered from a small number of encodable targets, large stimulus sizes, and low ITR. To address these gaps, the team designed a hybrid encoding framework that integrates frequency, phase, and spatial information, paired with high-density EEG recording to unlock the full potential of visual spatiotemporal neural signals.

The system's breakthrough performance stems from two tightly integrated technical advances: a novel hybrid frequency–phase–space encoding paradigm, and high-density EEG recording with systematic electrode validation. The hybrid encoding framework creates a compact, large-command-set scheme that multiplicatively expands the number of encodable targets without enlarging the interface. It uses 40 square flickering stimuli, each assigned a unique frequency from 8 to 15.8 Hz in 0.2 Hz steps and a unique initial phase spanning 0 to 2π in 0.35π steps, with 5 cross-shaped fixation points (center, up, down, left, right) embedded within each flicker as independent spatial targets. This hybrid design expands the command set from 40 to 200 targets while shrinking the average stimulus to as little as 1.49° of visual angle, far smaller than the 3.3° stimuli used in classic 40-target systems, improving the user experience for large command sets. Complementing this encoding strategy, the team used a 256-channel EEG cap built to the international 10-5 system, with a mean interelectrode distance of 1.5 cm, and selected 66 parieto-occipital electrodes (the region where SSVEPs are most prominent) for decoding. They systematically compared four electrode configurations derived from standard clinical caps: 66/256 (66 parieto-occipital electrodes from a 256-channel cap), 32/128 (32 parieto-occipital electrodes from a 128-channel cap), 21/64 (21 parieto-occipital electrodes from a 64-channel cap), and 9/64 (the 9-electrode parieto-occipital setup that serves as the conventional baseline in prior BCI research).
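The multiplicative expansion of the command set can be sketched as follows. The frequency and phase assignments below follow the figures quoted above (8–15.8 Hz in 0.2 Hz steps, phases in 0.35π steps wrapping within 0 to 2π); the per-flicker indexing scheme and dictionary layout are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the hybrid frequency-phase-space command grid:
# 40 flickers (each with a unique frequency/phase pair) x 5 fixation
# points per flicker = 200 encodable targets.

def build_command_set():
    positions = ["center", "up", "down", "left", "right"]  # 5 fixation points
    commands = []
    for i in range(40):
        freq = 8.0 + 0.2 * i          # 8.0 ... 15.8 Hz in 0.2 Hz steps
        phase = (0.35 * i) % 2.0      # initial phase in units of pi, wraps in [0, 2)
        for pos in positions:
            commands.append({
                "flicker": i,
                "freq_hz": round(freq, 1),
                "phase_pi": round(phase, 2),
                "fixation": pos,
            })
    return commands

commands = build_command_set()
print(len(commands))  # 40 x 5 = 200 targets
```

Because the fixation points ride on top of the existing frequency–phase grid, the fivefold expansion costs no additional screen area, which is what allows the average stimulus size to shrink rather than grow.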
The results revealed a clear, task-dependent relationship between electrode density and decoding performance. For the classic 40-target frequency–phase encoding paradigm, the 66/256 configuration delivered an 83.66% increase in theoretical ITR over the 9/64 baseline; for the 200-target hybrid encoding paradigm, the improvement soared to 195.56%, with the 32/128 and 21/64 configurations also achieving gains of 153.08% and 103.07%, respectively.

The team validated the system with 15 healthy participants in offline experiments, 10 of whom completed follow-up online tests, and the system delivered record-breaking performance in both settings. With the 66/256 configuration, it reached a peak offline actual ITR of 470.64 ± 8.97 bpm for an 80-target setup, with 92.59% classification accuracy from just 0.2 seconds of stimulus data. Relative to the 9/64 baseline, the 66/256 configuration boosted actual ITR by 23.96% for the 40-target setup and by up to 79.68% for the 200-target setup. After personalizing system parameters for each user, including target number, fixation combinations, and stimulus duration, the online BCI system achieved an average actual ITR of 472.72 ± 15.06 bpm, with the highest individual performance reaching 551.42 bpm, the highest actual ITR ever reported for a noninvasive BCI system. A dynamic window classification algorithm pushed the peak actual ITR higher still, reaching 507.59 bpm for the 80-target setup by adapting the stimulus duration to the confidence of each classification decision.
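For readers unfamiliar with how these bpm figures arise, the standard Wolpaw ITR formula converts target count, accuracy, and trial duration into bits per minute. The sketch below plugs in the reported 80-target operating point (92.59% accuracy, 0.2 s of stimulus); the 0.5 s gaze-shifting interval added to the trial time is an assumption of this example, not a figure stated in the article, though with it the formula lands near the reported ~470 bpm.

```python
import math

def wolpaw_itr(n_targets, accuracy, trial_seconds):
    """Wolpaw information transfer rate in bits per minute."""
    p, n = accuracy, n_targets
    bits = math.log2(n)
    if 0 < p < 1:
        # Penalty terms vanish at p == 1 (perfect accuracy).
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds

# 80 targets, 92.59% accuracy, 0.2 s stimulus + assumed 0.5 s gaze shift.
itr = wolpaw_itr(80, 0.9259, 0.2 + 0.5)
print(f"{itr:.1f} bpm")
```

The formula makes the design trade-off explicit: shrinking the stimulus window raises ITR only as long as accuracy holds up, which is why the dynamic-window algorithm that adapts duration to decision confidence can squeeze out additional speed.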

A critical finding of the study is that decoding spatial information places far stricter demands on electrode density than decoding frequency–phase information. The researchers linked this gap to the retinotopic mapping of the visual system: stimuli at different spatial positions relative to the user's gaze elicit distinct lateralized amplitude and phase topographies in the visual cortex. High-density EEG captures these fine spatial variations, whereas low-density configurations smooth out the critical details, limiting the ability to decode spatial targets. At a 0.5-second data length, the 66/256 configuration improved spatial decoding accuracy by 15.53% over the 9/64 baseline, compared with just a 1.32% gain for frequency decoding. Electrode optimization analysis also revealed diminishing returns: peak performance on the 80-target task was achieved with 52 electrodes, while the more complex 200-target task required 60 electrodes for optimal accuracy. The discarded electrodes were primarily low-signal peripheral channels, providing practical guidance for balancing performance and hardware complexity in future BCI designs.

This breakthrough addresses longstanding barriers in visual BCI development, with far-reaching implications for both research and real-world applications: the hybrid encoding framework enables compact, large-command-set BCI systems without sacrificing speed or accuracy, while the quantitative analysis of electrode density provides data-driven design rules for next-generation BCI hardware. Beyond the laboratory, the technology holds promise for assistive communication for people with severe motor impairments such as amyotrophic lateral sclerosis, as well as high-speed human-computer interaction for consumer electronics, virtual reality, and augmented reality. The team acknowledges key limitations to address in future work: validating performance in more naturalistic environments where head movements and eye blinks introduce artifacts, testing in diverse user populations including older adults and individuals with visual or neurological impairments, and developing more user-friendly high-density EEG systems to reduce setup time and improve accessibility. "Our work demonstrates that integrating spatial information into BCI encoding, paired with high-density EEG decoding, can unlock unprecedented communication speeds for noninvasive BCIs," the authors noted. "These findings not only push the boundaries of current BCI performance but also lay the groundwork for a transformative shift from efficient BCI interaction to truly natural, intuitive brain-computer communication using complex visual stimuli, including natural images."

Authors of the paper include Gege Ming, Weihua Pei, Sen Tian, Xiaogang Chen, Xiaorong Gao, and Yijun Wang.

This work was supported by the National Natural Science Foundation of China under Grants 62071447, 62401325, and 62201321; in part by the National Key Research and Development Program of China under Grants 2022YFF1202303 and 2023YFF1203702; and in part by the Postdoctoral Fellowship Program of CPSF under Grant Number GZC20240864.

The paper, "A High-Speed Visual BCI Based on Hybrid Frequency-Phase-Space Encoding and High-Density EEG Decoding," was published in the journal Cyborg and Bionic Systems on Mar. 26, 2026, at DOI: 10.34133/cbsystems.0555.
