With the development of brain-computer interface (BCI) technology, the application of electroencephalography (EEG) signals in motion decoding has been expanding. Traditional BCI decoding is primarily based on Cartesian coordinates, which are suitable for representing linear movements but less efficient for decoding rotational or circular movements. In contrast, polar coordinates offer a more natural and compact representation of circular movements by directly encoding angular information, making rotational motion more efficient to decode. However, existing polar-coordinate decoding methods have not been widely applied to hand motion decoding. "To address this, we proposed a new circular motion paradigm aimed at continuously decoding hand motion angles from EEG signals and exploring the potential of polar coordinates in motion decoding," said author Xiaohan Lu, a researcher at Southern University of Science and Technology. "Participants were asked to perform bimanual circular tracking with a fixed radius while their EEG signals were recorded. The feasibility of polar coordinate-based decoding of hand motion angles was assessed using six deep learning models, including EEGNet, DeepConvNet, and their combinations with LSTM."
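The compactness argument can be illustrated with a short sketch: a fixed-radius circular trajectory needs two coupled Cartesian targets (x, y), but collapses to a single continuous angle in polar coordinates. The radius, duration, and sampling rate below are illustrative assumptions, not values from the paper (only the 256 Hz rate and 6-second movement phase are mentioned in the text).

```python
import numpy as np

# Illustrative fixed-radius circular trajectory (radius is an assumption;
# 6 s at 256 Hz matches the movement phase and EEG sampling rate in the text).
radius = 0.1                            # metres, assumed
t = np.linspace(0.0, 6.0, 256 * 6)      # time axis, one movement phase
theta_true = 2 * np.pi * t / 6.0        # one full revolution (rotation sign ignored here)
x = radius * np.cos(theta_true)         # two coupled Cartesian targets...
y = radius * np.sin(theta_true)

# ...reduce to a single angular target in polar coordinates:
theta = np.arctan2(y, x)                # wrapped to (-pi, pi]
theta_unwrapped = np.unwrap(theta)      # continuous angle, usable as a regression label
```

Unwrapping removes the ±π discontinuity, which is what makes the angle a smooth target for a regression-style decoder.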
The main goal of this experiment was to decode hand motion angles from EEG signals and explore the effectiveness of polar coordinates. Eight healthy participants (6 males and 2 females, aged 20–26 years) took part, all of whom were right-handed and free of neurological diseases. The task was a continuous visuomotor tracking task in which participants controlled a handle to align a circular cursor with a moving square target on the screen. The target moved clockwise along a fixed-radius circular path, and participants were required to track it. EEG signals were continuously recorded with a 32-channel cap at a sampling rate of 256 Hz using the g.HIAMP 256 bio-signal amplifier. To ensure signal quality, preprocessing included bandpass filtering, independent component analysis (ICA) to remove ocular artifacts, and bad-channel interpolation. Positional data, tracked by a camera using an AprilTag marker, were used to calculate the polar angle coordinates. The experiment consisted of 20 sessions, each including 10 trials with a 1-second preparation phase, a 6-second movement phase, and a 2-second rest phase. Finally, the feasibility of decoding polar-coordinate angles from EEG signals was evaluated on the resulting data with the deep learning models.
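A minimal sketch of the band-pass filtering step in the preprocessing pipeline is shown below. The cut-off frequencies and filter order are assumptions for illustration; the paper's exact filter settings are not given in this summary. Only the channel count (32) and sampling rate (256 Hz) come from the text.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(eeg, fs=256.0, low=1.0, high=40.0, order=4):
    """Zero-phase band-pass filter for multichannel EEG.

    The 1-40 Hz band and 4th-order Butterworth design are illustrative
    assumptions, not the paper's reported parameters.
    """
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)

# Simulated 32-channel recording covering one 6 s movement phase at 256 Hz.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 256 * 6))
filtered = bandpass(eeg)
```

Zero-phase filtering (forward-backward via `sosfiltfilt`) avoids introducing a phase lag between the EEG and the angle labels, which matters for continuous decoding.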
The experimental results demonstrate strong performance of polar coordinate-based hand motion angle decoding across all participants. Six deep learning models, including EEGNet, DeepConvNet, ShallowConvNet, and their LSTM hybrids, were evaluated with 10-fold cross-validation. Across the eight subjects, the best model achieved a mean squared error (MSE) of 1.012 rad², a mean absolute error (MAE) of 0.627 rad, and a Pearson correlation coefficient (CC) of 0.895. Among the models, DeepConvNet + LSTM performed best, reaching the highest R² value of 0.75. Moreover, the LSTM-combined models outperformed their CNN-only counterparts on all metrics, especially the Pearson correlation coefficient, indicating that incorporating LSTM markedly improved decoding performance. All models significantly outperformed a baseline trained on randomized labels (P < 0.001), confirming the feasibility of continuous angular decoding from EEG signals.
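For readers unfamiliar with the reported metrics, the sketch below shows how MSE, MAE, and the Pearson CC compare a decoded angle trace with its ground truth. The error is computed directly on continuous (unwrapped) angles rather than circularly; the toy signals are invented for illustration and do not reproduce the paper's numbers.

```python
import numpy as np

def angle_metrics(y_true, y_pred):
    """MSE (rad^2), MAE (rad), and Pearson CC between true and decoded angles.

    Assumes both traces are continuous unwrapped angles, so a plain
    (non-circular) error is meaningful; this is an illustrative sketch.
    """
    err = y_pred - y_true
    mse = float(np.mean(err ** 2))
    mae = float(np.mean(np.abs(err)))
    cc = float(np.corrcoef(y_true, y_pred)[0, 1])
    return mse, mae, cc

# Toy example: a "decoded" trace that tracks the true angle with small ripple.
theta = np.linspace(0.0, 2 * np.pi, 200)
decoded = theta + 0.1 * np.sin(5 * theta)
mse, mae, cc = angle_metrics(theta, decoded)
```

A high CC indicates the decoded trace follows the shape of the true angle over time, while MSE and MAE quantify the absolute angular error, which is why the paper reports all three.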
Overall, the polar coordinate-based method for decoding hand motion angles demonstrates excellent performance: all six deep learning models surpassed the randomized-label control, verifying the feasibility of polar-coordinate decoding. However, the study also identified some limitations. First, the fixed-radius circular trajectory used in the experiment removes much of the variability of real limb movements and may not fully represent natural hand motion. Second, the study used only the motor execution (ME) paradigm; future research could incorporate the motor imagery (MI) paradigm, which is especially relevant for patients with motor function loss. "We plan to enhance the complexity of the motion paradigm, introduce random-radius trajectories, and explore hybrid movement trajectories. Additionally, temporal models such as GRU and Transformers will be considered to further optimize decoding performance," said Xiaohan Lu.
Authors of the paper include Xiaohan Lu, Yifeng Chen, Zhiying Li, Jinqiu Zhao, Yijie Zhou, Dongrui Wu, and Mingming Zhang.
This work was supported by the National Key R&D Program of China (grant no. 2023YFF1205200), the National Natural Science Foundation of China (grant no. 62303211), the Shenzhen Science and Technology Program (grant nos. JCYJ20220530113811027 and JCYJ20220818103602004), and the Shenzhen Medical Research Fund (grant no. D2402017).
The paper, "Electroencephalography Enables Continuous Decoding of Hand Motion Angles in Polar Coordinates," was published in the journal Cyborg and Bionic Systems on Jan 12, 2026 (DOI: 10.34133/cbsystems.0469).