Motor imagery electroencephalogram (EEG) signals are non-stationary time series with a low signal-to-noise ratio, so single-channel analysis methods struggle to effectively describe the interactions among multi-channel signals. This paper proposed a deep learning network model based on a multi-channel attention mechanism. First, we performed time-frequency sparse decomposition on the pre-processed data, which enhanced the differences in the time-frequency characteristics of the EEG signals. Then we used an attention module to map the data in time and space, so that the model could make full use of the characteristics of the different EEG channels. Finally, an improved temporal convolutional network (TCN) was used for feature fusion and classification. The BCI Competition IV-2a dataset was used to verify the proposed algorithm. The experimental results showed that the proposed algorithm could effectively improve the classification accuracy of motor imagery EEG signals, achieving an average accuracy of 83.03% across 9 subjects and outperforming existing methods. By enhancing the discriminative features between different motor imagery EEG data, the proposed method provides a useful basis for improving classifier performance.
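As an illustration of how channel-wise attention and a temporal convolutional network can be combined for multi-channel motor imagery EEG, the following minimal PyTorch sketch pairs a squeeze-and-excitation style attention over EEG channels with a dilated causal convolution block. The module names, layer sizes and the 22-channel/4-class setup are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention over EEG channels (illustrative)."""
    def __init__(self, n_channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction),
            nn.ReLU(),
            nn.Linear(n_channels // reduction, n_channels),
            nn.Sigmoid(),
        )

    def forward(self, x):              # x: (batch, n_channels, n_samples)
        w = self.fc(x.mean(dim=-1))    # per-channel weights from the temporal average
        return x * w.unsqueeze(-1)     # re-weight channels

class TemporalBlock(nn.Module):
    """One dilated causal convolution block of a TCN (illustrative)."""
    def __init__(self, n_channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(n_channels, n_channels, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv(nn.functional.pad(x, (self.pad, 0)))   # causal left padding
        return self.relu(out) + x                               # residual connection

# Example: 22-channel EEG trials, 1000 samples each, 4 motor imagery classes
x = torch.randn(8, 22, 1000)
model = nn.Sequential(
    ChannelAttention(22),
    TemporalBlock(22, dilation=1),
    TemporalBlock(22, dilation=2),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(22, 4),
)
print(model(x).shape)  # torch.Size([8, 4])
```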
With the development of brain-computer interface (BCI) technology and its translational application in clinical medicine, BCI medicine has emerged, bringing profound changes to the practice of medicine along with a series of related ethical issues. BCI medicine is progressively emerging as a new disciplinary focus, yet to date there has been limited literature discussing it. Therefore, this paper focuses on BCI medicine, first providing an overview of the main potential medical applications of BCI technology. It then defines the discipline and outlines its objectives, methodologies, potential efficacy, and associated translational medical research. Additionally, it discusses the ethics associated with BCI medicine and introduces standardized operational procedures for BCI medical applications and methods for evaluating their efficacy. Finally, it anticipates the challenges and future directions of BCI medicine. In the future, BCI medicine may become a new academic discipline or major in higher education. In summary, it is hoped that this article provides ideas and references for the development of the discipline of BCI medicine.
The bidirectional closed-loop motor imagery brain-computer interface (MI-BCI) is an emerging method for active rehabilitation training of motor dysfunction, extensively tested in both laboratory and clinical settings. However, no standardized method for evaluating its rehabilitation efficacy has been established, and relevant literature remains limited. To facilitate the clinical translation of bidirectional closed-loop MI-BCI, this article first introduced its fundamental principles, reviewed the rehabilitation training cycle and methods for evaluating rehabilitation efficacy, and summarized approaches for evaluating system usability, user satisfaction and usage. Finally, the challenges associated with evaluating the rehabilitation efficacy of bidirectional closed-loop MI-BCI were discussed, aiming to promote its broader adoption and standardization in clinical practice.
Rapid serial visual presentation-brain-computer interface (RSVP-BCI) is one of the most popular technologies for target detection tasks based on the human brain, as it exploits the brain's rapid perception of the environment. However, decoding brain state from single trials of multichannel electroencephalogram (EEG) recordings remains a challenge due to the low signal-to-noise ratio (SNR) and non-stationarity of the signals. To address the low single-trial classification accuracy in RSVP-BCI, this paper presents a new feature extraction algorithm that applies the common spatial pattern (CSP) algorithm in the spatial domain and principal component analysis (PCA) in the temporal domain, forming a spatial-temporal hybrid CSP-PCA (STHCP) algorithm. By maximizing the discrimination distance between target and non-target trials, the feature dimensionality was reduced effectively. The area under the curve (AUC) of the STHCP algorithm was higher than that of three benchmark algorithms (SWFP, CSP and PCA) by 17.9%, 22.2% and 29.2%, respectively. The STHCP algorithm provides a new method for target detection.
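A rough sketch of the spatial-temporal idea behind STHCP is given below: CSP provides the spatial filtering step and PCA the subsequent dimensionality reduction, with a support vector machine as an example classifier. It assumes NumPy, SciPy and scikit-learn, uses synthetic data, and the function names and component counts are illustrative rather than the published implementation.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def csp_filters(X1, X2, n_comp=4):
    """CSP spatial filters from two classes of trials, shape (trials, channels, samples)."""
    C1 = np.mean([np.cov(t) for t in X1], axis=0)
    C2 = np.mean([np.cov(t) for t in X2], axis=0)
    _, evecs = eigh(C1, C1 + C2)                  # generalized eigenproblem, ascending order
    picks = np.hstack([evecs[:, :n_comp // 2], evecs[:, -(n_comp // 2):]])
    return picks.T                                # (n_comp, channels)

rng = np.random.default_rng(0)
target = rng.standard_normal((40, 16, 250))       # synthetic target trials
nontarget = rng.standard_normal((40, 16, 250))    # synthetic non-target trials

W = csp_filters(target, nontarget)                # spatial step (CSP)
S = np.concatenate([target, nontarget])
y = np.r_[np.ones(40), np.zeros(40)]
filtered = np.einsum('fc,tcs->tfs', W, S)         # spatially filtered trials
flat = filtered.reshape(len(S), -1)
feats = PCA(n_components=10).fit_transform(flat)  # temporal reduction step (PCA)
clf = SVC().fit(feats, y)                         # example classifier on STHCP-style features
```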
Control beyond visual range is of great significance for animal-robots with wide-range motion capability. For pigeon-robots, such control can be achieved through onboard pre-programmed stimulation, but this does not yet constitute a closed loop. This study designed a new control system for pigeon-robots that integrated trajectory monitoring with brain stimulation. It achieved closed-loop control of turning or circling by instantaneously estimating the pigeon's flight state and applying the corresponding logical regulation. The stimulation targets were located in the formatio reticularis medialis mesencephali (FRM) of the left and right brain, for left- and right-turn control, respectively. The stimulus waveform mimicked the nerve cell membrane potential and was delivered intermittently. The wearable control unit weighed 11.8 g in total. The results showed a 90% success rate for closed-loop control of the pigeon-robots. By equipping a pigeon-robot with an onboard camera, the wing shape during flight maneuvers could be conveniently recorded, and it was also feasible to regulate the evolution of pigeon flocks through pigeon-robots at different hierarchical levels. All of these lay the groundwork for the application of pigeon-robots in scientific research.
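The logical regulation of the closed loop can be pictured as a simple decision rule that compares the estimated heading with the desired heading and selects which FRM side to stimulate. The sketch below is purely hypothetical: the dead band, angle convention and function names are assumptions, and only the left/right mapping follows the description above.

```python
def heading_error(current_deg, desired_deg):
    """Smallest signed angle from the current to the desired heading, in degrees."""
    return (desired_deg - current_deg + 180) % 360 - 180

def closed_loop_step(current_heading, desired_heading, dead_band=15):
    """Choose which FRM side to stimulate based on the instantaneous flight state.

    Returns 'left', 'right', or None (no stimulation inside the dead band).
    The dead band and its value are illustrative, not the published parameters.
    """
    err = heading_error(current_heading, desired_heading)
    if abs(err) <= dead_band:
        return None
    return 'left' if err < 0 else 'right'

# Example: the pigeon is heading 90 degrees but should turn toward 30 degrees
print(closed_loop_step(90, 30))   # 'left' -> stimulate the left FRM for a left turn
```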
The effective classification of multi-task motor imagery electroencephalogram (EEG) signals is helpful for achieving accurate multi-dimensional human-computer interaction, and exploiting the strong inter-subject specificity in the frequency domain can improve classification accuracy and robustness. Therefore, this paper proposed a multi-task EEG signal classification method based on adaptive time-frequency common spatial pattern (CSP) combined with a convolutional neural network (CNN). Subjects' personalized rhythm characteristics were extracted through adaptive spectrum awareness, spatial characteristics were calculated using the one-versus-rest CSP, and composite time-domain characteristics were then added to construct multi-level spatial-temporal-frequency fusion features. Finally, the CNN was used to perform high-accuracy and high-robustness four-task classification. The proposed algorithm was verified on a self-collected dataset of 10 subjects (33 ± 3 years old, without BCI experience) and on the dataset of the 4th International Brain-Computer Interface Competition (BCI Competition IV-2a). The average accuracy of the proposed algorithm for the four-task classification reached 93.96% and 84.04%, respectively. Compared with other advanced algorithms, the average classification accuracy of the proposed algorithm was significantly improved, and the accuracy range across subjects was significantly reduced on the public dataset. The results show that the proposed algorithm performs well in multi-task classification and can effectively improve classification accuracy and robustness.
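The one-versus-rest CSP step can be sketched as fitting one binary spatial filter bank per task and concatenating the resulting log-variance features, for example with MNE-Python's CSP implementation. The data shapes, component counts and the use of MNE are assumptions for illustration; the adaptive spectrum awareness and CNN stages described above are omitted here.

```python
import numpy as np
from mne.decoding import CSP  # assumes MNE-Python is installed

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 22, 500))          # trials x channels x samples (synthetic)
y = rng.integers(0, 4, size=120)                 # four motor imagery tasks

# One-versus-rest CSP: one binary spatial filter bank per task,
# concatenated into a single feature vector per trial.
features = []
for task in range(4):
    csp = CSP(n_components=4, log=True)          # log-variance features
    features.append(csp.fit_transform(X, (y == task).astype(int)))
features = np.hstack(features)                   # (120, 16) spatial features
print(features.shape)
```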
This study investigates a brain-computer interface (BCI) system based on an augmented reality (AR) environment and steady-state visual evoked potentials (SSVEP). The system is designed to facilitate the selection of real-world objects through visual gaze in real-life scenarios. By integrating object detection technology and AR technology, the system augmented real objects with visual enhancements, providing users with visual stimuli that induced corresponding brain signals. SSVEP technology was then utilized to interpret these brain signals and identify the objects that users focused on. Additionally, an adaptive dynamic time-window-based filter bank canonical correlation analysis was employed to rapidly parse the subjects’ brain signals. Experimental results indicated that the system could effectively recognize SSVEP signals, achieving an average accuracy rate of 90.6% in visual target identification. This system extends the application of SSVEP signals to real-life scenarios, demonstrating feasibility and efficacy in assisting individuals with mobility impairments and physical disabilities in object selection tasks.
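For reference, a standard fixed-window filter bank canonical correlation analysis can be sketched as follows: each sub-band of the trial is correlated with sine/cosine references at every candidate stimulus frequency, and the weighted sub-band correlations are summed. The sampling rate, sub-band edges and weighting constants are common choices assumed for illustration, not the parameters of the adaptive dynamic time-window method used in this study.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

FS = 250  # sampling rate (Hz), illustrative

def references(freq, n_samples, n_harm=3):
    """Sine/cosine reference signals for one stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / FS
    return np.column_stack([f(2 * np.pi * (h + 1) * freq * t)
                            for h in range(n_harm) for f in (np.sin, np.cos)])

def fbcca_score(eeg, freq, bands=((6, 90), (14, 90), (22, 90))):
    """Weighted sum of canonical correlations over sub-bands (generic FBCCA weighting)."""
    score = 0.0
    for k, (lo, hi) in enumerate(bands, start=1):
        b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype='band')
        sub = filtfilt(b, a, eeg, axis=0)                    # sub-band filtered trial
        u, v = CCA(n_components=1).fit_transform(sub, references(freq, len(eeg)))
        rho = np.corrcoef(u[:, 0], v[:, 0])[0, 1]            # canonical correlation
        score += (k ** -1.25 + 0.25) * rho ** 2              # common FBCCA sub-band weighting
    return score

# Example: pick the candidate stimulus frequency with the highest FBCCA score
trial = np.random.randn(FS * 2, 8)                           # 2 s of 8-channel EEG (synthetic)
candidates = [8.0, 10.0, 12.0, 15.0]
print(max(candidates, key=lambda f: fbcca_score(trial, f)))
```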
The brain-computer interface (BCI) based on motor imagery electroencephalography (MI-EEG) enables direct information interaction between the human brain and external devices. In this paper, a multi-scale EEG feature extraction convolutional neural network model based on time-series data augmentation is proposed for decoding MI-EEG signals. First, an EEG signal augmentation method was proposed that could increase the information content of training samples without changing the length of the time series, while completely retaining the original features. Then, multiple holistic and detailed features of the EEG data were adaptively extracted by a multi-scale convolution module, and the features were fused and filtered by a parallel residual module and channel attention. Finally, the classification results were output by a fully connected network. Experimental results on the BCI Competition IV 2a and 2b datasets showed that the proposed model achieved average classification accuracies of 91.87% and 87.85% on the motor imagery task, respectively, with higher accuracy and stronger robustness than existing baseline models. The proposed model does not require complex signal pre-processing and has the advantage of multi-scale feature extraction, giving it high practical application value.
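A minimal PyTorch sketch of a multi-scale temporal convolution stage is shown below: parallel branches with different kernel lengths are concatenated and followed by a spatial convolution across EEG channels. Kernel sizes, channel counts and the small classification head are illustrative assumptions; the data augmentation, parallel residual module and channel attention described above are not reproduced here.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel temporal convolutions with different kernel sizes (illustrative)."""
    def __init__(self, in_ch, out_ch, kernels=(15, 31, 63)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, (1, k), padding=(0, k // 2)),
                nn.BatchNorm2d(out_ch),
                nn.ELU(),
            )
            for k in kernels
        )

    def forward(self, x):                  # x: (batch, 1, channels, samples)
        return torch.cat([b(x) for b in self.branches], dim=1)

# Example: fuse three temporal scales, then classify with a small head
x = torch.randn(4, 1, 22, 1000)            # 4 trials, 22 channels, 1000 samples
net = nn.Sequential(
    MultiScaleBlock(1, 8),                 # -> (4, 24, 22, 1000)
    nn.Conv2d(24, 16, (22, 1)),            # spatial convolution across EEG channels
    nn.AdaptiveAvgPool2d((1, 1)), nn.Flatten(), nn.Linear(16, 4),
)
print(net(x).shape)                        # torch.Size([4, 4])
```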
High-frequency steady-state asymmetric visual evoked potentials (SSaVEP) provide a new paradigm for designing comfortable and practical brain-computer interface (BCI) systems. However, because high-frequency signals have weak amplitude and strong noise, it is of great significance to study how to enhance their signal features. In this study, a 30 Hz high-frequency visual stimulus was used, and the peripheral visual field was equally divided into eight annular sectors. Eight annular sector pairs were selected based on the mapping of visual space onto the primary visual cortex (V1), and three phase conditions (in-phase [0°, 0°], anti-phase [0°, 180°], and anti-phase [180°, 0°]) were designed for each pair to explore response intensity and signal-to-noise ratio under phase modulation. A total of 8 healthy subjects were recruited. The results showed that three annular sector pairs exhibited significant differences in SSaVEP features under phase modulation at the 30 Hz high-frequency stimulation, and spatial feature analysis showed that both types of features were significantly higher for annular sector pairs in the lower visual field than in the upper visual field. This study further used filter bank and ensemble task-related component analysis to calculate the classification accuracy of the annular sector pairs under the three phase modulations, and the average accuracy reached up to 91.5%, demonstrating that phase-modulated SSaVEP features can be used to encode high-frequency SSaVEP. In summary, the results of this study provide new ideas for enhancing the features of high-frequency SSaVEP signals and expanding the instruction set of the traditional steady-state visual evoked potential paradigm.
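As background for the decoding step, a minimal NumPy/SciPy sketch of task-related component analysis (TRCA) for a single stimulus condition is given below; the ensemble and filter bank extensions used in this study essentially stack such filters across conditions and sub-bands. The trial counts, channel number and synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import eigh

def trca_filter(trials):
    """Task-related component analysis spatial filter for one stimulus condition.

    trials: (n_trials, n_channels, n_samples); returns the leading spatial filter w.
    """
    n_trials, n_ch, _ = trials.shape
    centered = trials - trials.mean(axis=-1, keepdims=True)
    concat = np.hstack(centered)                 # (n_ch, n_trials * n_samples)
    Q = concat @ concat.T                        # within-data covariance
    S = sum(centered[i] @ centered[j].T
            for i in range(n_trials) for j in range(n_trials) if i != j)
    _, vecs = eigh(S, Q)                         # generalized eigenproblem S w = lambda Q w
    return vecs[:, -1]                           # eigenvector with the largest eigenvalue

# Example: correlate a test trial with the class template after spatial filtering
rng = np.random.default_rng(2)
train = rng.standard_normal((20, 9, 250))        # 20 training trials, 9 channels (synthetic)
w = trca_filter(train)
template = w @ train.mean(axis=0)                # filtered class template
test = rng.standard_normal((9, 250))
print(np.corrcoef(w @ test, template)[0, 1])     # decision feature for this condition
```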
This paper proposes a motor imagery recognition algorithm based on feature fusion and transfer adaptive boosting (TrAdaboost) to address the low accuracy of motor imagery (MI) recognition across subjects, thereby increasing the reliability of MI-based brain-computer interfaces (BCI) for cross-individual use. Time-frequency domain features of MI were obtained using an autoregressive model, power spectral density and the discrete wavelet transform, while the filter bank common spatial pattern was used to extract spatial-domain features and multi-scale dispersion entropy was employed to extract nonlinear features. The IV-2a dataset from the 4th International BCI Competition was used for the binary classification task, with the pattern recognition model constructed by combining the improved TrAdaboost ensemble learning algorithm with a support vector machine (SVM), k-nearest neighbor (KNN), and a mind evolutionary algorithm-based back propagation (MEA-BP) neural network. The results show that the SVM-based TrAdaboost ensemble learning algorithm performs best when 30% of the target-domain instance data is transferred, with an average classification accuracy of 86.17%, a Kappa value of 0.7233, and an AUC value of 0.8498. These results suggest that the algorithm can be used to recognize MI signals across individuals, providing a new way to improve the generalization capability of BCI recognition models.
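The weight-update logic of TrAdaboost can be sketched as follows, with scikit-learn's SVC as the weak learner: misclassified source-domain instances are down-weighted while misclassified target-domain instances are up-weighted, and the final decision is a weighted vote over the later boosting rounds. This is a generic binary TrAdaboost sketch under assumed parameters, not the improved variant proposed in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def tradaboost(Xs, ys, Xt, yt, n_rounds=10):
    """Minimal TrAdaboost with an SVM base learner (binary labels in {0, 1}).

    Xs, ys: source-domain data (other subjects); Xt, yt: target-domain data.
    Returns the trained weak learners and their per-round beta values.
    """
    n, m = len(Xs), len(Xt)
    X, y = np.vstack([Xs, Xt]), np.r_[ys, yt]
    w = np.ones(n + m)
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / n_rounds))
    learners, betas = [], []
    for _ in range(n_rounds):
        p = w / w.sum()
        clf = SVC(kernel='rbf').fit(X, y, sample_weight=p * len(y))  # weights rescaled to mean 1
        miss = (clf.predict(X) != y).astype(float)
        eps = np.sum(w[n:] * miss[n:]) / np.sum(w[n:])      # error on the target domain only
        eps = min(max(eps, 1e-10), 0.49)                    # keep beta_t well defined
        beta_t = eps / (1.0 - eps)
        w[:n] *= beta_src ** miss[:n]                       # down-weight misclassified source
        w[n:] *= beta_t ** -miss[n:]                        # up-weight misclassified target
        learners.append(clf)
        betas.append(beta_t)
    return learners, betas

def predict(learners, betas, X):
    """Weighted vote over the second half of the boosting rounds (as in TrAdaboost)."""
    half = len(learners) // 2
    votes = sum(np.log(1.0 / b) * l.predict(X) for l, b in zip(learners[half:], betas[half:]))
    thresh = 0.5 * sum(np.log(1.0 / b) for b in betas[half:])
    return (votes >= thresh).astype(int)
```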