Control beyond visual range is of great significance for animal-robots with wide-range motion capability. For pigeon-robots, such control can be achieved with onboard preprogrammed stimulation, but this does not yet constitute a closed loop. This study designed a new control system for pigeon-robots that integrates trajectory monitoring with brain stimulation. It achieved closed-loop control of turning or circling by instantaneously estimating the pigeon's flight state and applying the corresponding logical regulation. The stimulation targets were located in the formatio reticularis medialis mesencephali (FRM) of the left and right brain, for left- and right-turn control, respectively. The stimulus waveform mimicked the nerve cell membrane potential and was delivered intermittently. The wearable control unit weighed 11.8 g in total. The results showed a 90% success rate for closed-loop control of pigeon-robots. By equipping a pigeon-robot with an in vivo camera, the wing shape during flight maneuvers could be conveniently obtained. It was also feasible to regulate the evolution of pigeon flocks using pigeon-robots at different hierarchical levels. All of these lay the groundwork for the application of pigeon-robots in scientific research.
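For illustration only, the following is a minimal sketch of the kind of turn-control decision logic such a closed loop implies. The heading-error formulation, the dead band, and the function names (angle_error, control_step) are assumptions introduced here, not the authors' implementation.

```python
# Minimal sketch of closed-loop turn-control decision logic.
# Hypothetical interface for illustration; not the published system.

def angle_error(target_deg: float, current_deg: float) -> float:
    """Signed heading error in (-180, 180]; positive means a right turn is needed."""
    return (target_deg - current_deg + 180.0) % 360.0 - 180.0

def control_step(target_deg: float, current_deg: float, dead_band: float = 15.0):
    """One monitoring cycle: choose which FRM side (if any) to stimulate."""
    err = angle_error(target_deg, current_deg)
    if abs(err) <= dead_band:
        return None                      # on course: no stimulation this cycle
    return "right_FRM" if err > 0 else "left_FRM"

# Example: bird heading 300 deg, desired heading 10 deg -> stimulate right FRM.
print(control_step(10.0, 300.0))
```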
The bidirectional closed-loop motor imagery brain-computer interface (MI-BCI) is an emerging method for active rehabilitation training of motor dysfunction, extensively tested in both laboratory and clinical settings. However, no standardized method for evaluating its rehabilitation efficacy has been established, and relevant literature remains limited. To facilitate the clinical translation of bidirectional closed-loop MI-BCI, this article first introduced its fundamental principles, reviewed the rehabilitation training cycle and methods for evaluating rehabilitation efficacy, and summarized approaches for evaluating system usability, user satisfaction and usage. Finally, the challenges associated with evaluating the rehabilitation efficacy of bidirectional closed-loop MI-BCI were discussed, aiming to promote its broader adoption and standardization in clinical practice.
Brain-computer interface (BCI) systems identify brain signals by extracting features from them. In view of the limitations of the autoregressive model feature extraction method and traditional principal component analysis in dealing with multichannel signals, this paper presents a multichannel feature extraction method that combines the multivariate autoregressive (MVAR) model with multilinear principal component analysis (MPCA), applied to the recognition of magnetoencephalography (MEG) and electroencephalography (EEG) signals. First, we calculated the MVAR model coefficient matrices of the MEG/EEG signals, and then reduced their dimensionality using MPCA. Finally, we recognized the brain signals with a Bayes classifier. The key innovation of our investigation is that we extended the traditional single-channel feature extraction method to the multichannel case. We then carried out experiments using data sets Ⅳ_Ⅲ and Ⅳ_Ⅰ. The experimental results showed that the proposed method is feasible.
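A minimal sketch of this kind of pipeline (MVAR coefficient estimation, dimensionality reduction, Bayes classification) on synthetic data is shown below. Ordinary PCA stands in for the paper's MPCA, and the model order, trial counts and channel counts are arbitrary assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB

def mvar_coeffs(x, order=4):
    """Least-squares MVAR fit. x: (channels, samples).
    Returns the flattened coefficient matrices A_1..A_p as one feature vector."""
    c, n = x.shape
    Y = x[:, order:].T                                        # (n-order, c)
    X = np.hstack([x[:, order - k:n - k].T for k in range(1, order + 1)])
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)                 # (order*c, c)
    return A.ravel()

# Synthetic example: 40 trials, 10 channels, 200 samples, 2 classes.
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 10, 200))
labels = np.repeat([0, 1], 20)

feats = np.array([mvar_coeffs(t) for t in trials])
feats = PCA(n_components=10).fit_transform(feats)             # stand-in for MPCA
clf = GaussianNB().fit(feats[::2], labels[::2])
# Near chance on random data; printed only to exercise the pipeline.
print("held-out accuracy:", clf.score(feats[1::2], labels[1::2]))
```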
In the present investigation, we studied four blind source separation/independent component analysis (BSS/ICA) methods: AMUSE, SOBI, JADE, and FastICA. We extracted features from electroencephalogram (EEG) signals of a brain-computer interface (BCI) for classifying spontaneous mental activities, covering four mental tasks: imagined movement of the left hand, right hand, foot and tongue. Different methods of extracting physiological components were studied and achieved good performance. Then, three combined methods of SOBI and FastICA for extracting EEG features of motor imagery were proposed. The results showed that combining SOBI and FastICA could not only reduce various artifacts and noise but also localize useful sources and improve the accuracy of the BCI. This would support further study of the physiological mechanisms of motor imagery.
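As a simple illustration of the general BSS/ICA pipeline (unmixing, component selection, back-projection), the sketch below applies scikit-learn's FastICA to synthetic data. SOBI, AMUSE and JADE are not part of scikit-learn, and the kurtosis-based component rejection used here is only a crude stand-in for the component selection used in practice.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Unmix synthetic "EEG" into independent components, drop one noisy
# component, and back-project to channel space.
rng = np.random.default_rng(1)
n_ch, n_samp = 8, 1000
sources = rng.laplace(size=(n_ch, n_samp))          # surrogate sources
mixing = rng.standard_normal((n_ch, n_ch))
eeg = mixing @ sources                               # observed channels

ica = FastICA(n_components=n_ch, random_state=1)
components = ica.fit_transform(eeg.T)                # (samples, components)

# Reject the component with the largest kurtosis (crude artifact heuristic).
kurt = ((components - components.mean(0)) ** 4).mean(0) / components.var(0) ** 2
components[:, np.argmax(kurt)] = 0.0
cleaned = ica.inverse_transform(components).T        # back to channel space
print(cleaned.shape)                                 # (8, 1000)
```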
The rapid serial visual presentation brain-computer interface (RSVP-BCI) is the most popular technology for early target discovery tasks based on the human brain, exploiting the brain's rapid perception of the environment. However, decoding brain state from single trials of multichannel electroencephalogram (EEG) recordings remains a challenge due to the low signal-to-noise ratio (SNR) and nonstationarity. To address the low single-trial classification accuracy in RSVP-BCI, this paper presents a new feature extraction algorithm that applies principal component analysis (PCA) and the common spatial pattern (CSP) algorithm separately in the spatial and temporal domains, yielding a spatial-temporal hybrid CSP-PCA (STHCP) algorithm. By maximizing the discrimination distance between target and non-target, the feature dimensionality was reduced effectively. The area under the curve (AUC) of the STHCP algorithm is higher than that of three benchmark algorithms (SWFP, CSP and PCA) by 17.9%, 22.2% and 29.2%, respectively. The STHCP algorithm provides a new method for target detection.
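The sketch below illustrates a spatial-temporal pipeline in the spirit of STHCP on synthetic data: CSP spatial filters estimated from the two class covariances, followed by PCA for further dimensionality reduction and an LDA classifier. The filter counts, component numbers and classifier are illustrative assumptions, not the published configuration.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(trials_a, trials_b, n_filters=4):
    """trials_*: (n_trials, channels, samples) -> (n_filters, channels) CSP filters."""
    cov = lambda trials: np.mean(
        [x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)                  # generalized eigenproblem
    order = np.argsort(vals)
    pick = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]
    return vecs[:, pick].T

rng = np.random.default_rng(2)
target = rng.standard_normal((30, 16, 100))         # surrogate target trials
nontarget = rng.standard_normal((30, 16, 100))      # surrogate non-target trials

W = csp_filters(target, nontarget)                  # spatial-domain step
X = np.concatenate([target, nontarget])             # (60, 16, 100)
y = np.r_[np.ones(30), np.zeros(30)]
filtered = np.stack([W @ x for x in X])             # (60, 4, 100)
feats = PCA(n_components=8).fit_transform(filtered.reshape(len(X), -1))  # reduction step
clf = LinearDiscriminantAnalysis().fit(feats[::2], y[::2])
# Near chance on random data; printed only to exercise the pipeline.
print("held-out accuracy:", clf.score(feats[1::2], y[1::2]))
```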
The development and potential applications of brain-computer interface (BCI) technology are closely related to the human brain, so the ethical regulation of BCI has become an important issue attracting the attention of society. Existing literature has discussed the ethical norms of BCI technology from the perspectives of non-BCI developers and scientific ethics, while few discussions have been conducted from the perspective of BCI developers. Therefore, there is a great need to study and discuss the ethical norms of BCI technology from the perspective of BCI developers. In this paper, we present user-centered and non-harmful ethical norms for BCI technology, and then discuss them and offer an outlook on their development. This paper argues that human beings can cope with the ethical issues arising from BCI technology and that, as BCI technology develops, its ethical norms will be improved continuously. It is expected that this paper can provide thoughts and references for the formulation of ethical norms related to BCI technology.
The brain-computer interface (BCI) based on motor imagery electroencephalography (EEG) shows great potential in neurorehabilitation due to its non-invasive nature and ease of use. However, motor imagery EEG signals have low signal-to-noise ratios and low spatiotemporal resolution, leading to low decoding recognition rates with traditional neural networks. To address this, this paper proposed a three-dimensional (3D) convolutional neural network (CNN) method that learns spatial-frequency feature maps: the Welch method was used to calculate the power spectrum of EEG frequency bands, converting the time-series EEG into brain topographic maps carrying spatial-frequency information. A 3D network with one-dimensional and two-dimensional convolutional layers was designed to effectively learn these features. Comparative experiments demonstrated that the average decoding recognition rate reached 86.89%, outperforming traditional methods and validating the effectiveness of this approach for motor imagery EEG decoding.
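A toy sketch of the spatial-frequency input construction and a small 3D convolutional model follows. The 8×8 channel grid, frequency bands and layer sizes are assumptions for illustration and do not reproduce the architecture reported here.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import welch

def band_power_maps(eeg, fs=250, bands=((4, 8), (8, 13), (13, 30))):
    """eeg: (64 channels, samples) -> (n_bands, 8, 8) band-power grid."""
    f, pxx = welch(eeg, fs=fs, nperseg=fs)           # Welch power spectrum per channel
    maps = []
    for lo, hi in bands:
        p = pxx[:, (f >= lo) & (f < hi)].mean(axis=1)
        maps.append(p.reshape(8, 8))                 # naive channel-to-grid layout
    return np.stack(maps)

model = nn.Sequential(                                # tiny 3D CNN over (bands, H, W)
    nn.Conv3d(1, 8, kernel_size=(1, 3, 3), padding=(0, 1, 1)), nn.ReLU(),
    nn.Conv3d(8, 16, kernel_size=(3, 3, 3), padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 4),
)

eeg = np.random.randn(64, 1000)                       # one synthetic trial
x = torch.tensor(band_power_maps(eeg), dtype=torch.float32)[None, None]  # (1,1,3,8,8)
print(model(x).shape)                                 # logits for 4 motor imagery classes
```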
Affective brain-computer interfaces (aBCIs) have important application value in the field of human-computer interaction. Electroencephalography (EEG) has attracted wide attention in the field of emotion recognition due to its advantages in temporal resolution, reliability and accuracy. However, the non-stationary characteristics and individual differences of EEG limit the generalization of emotion recognition models across time and across subjects. In this paper, in order to recognize emotional states across different subjects and sessions, we proposed a new domain adaptation method, maximum classifier difference for domain adversarial neural networks (MCD_DA). A neural network emotion recognition model was established in which the shallow feature extractor was trained adversarially against the domain classifier and the emotion classifier, so that the feature extractor produced domain-invariant representations while the classifiers learned task-specific decision boundaries, realizing approximate joint distribution adaptation. The experimental results showed that the average classification accuracy of this method was 88.33%, compared with 58.23% for a traditional general classifier. The method improves the generalization ability of affective brain-computer interfaces in practical applications and provides a new approach for putting aBCIs into practice.
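The following is a highly simplified sketch of the classifier-discrepancy idea on which such a method rests. It collapses the alternating adversarial steps of the full procedure into a single joint update, and the feature dimension (310), network sizes and three emotion classes are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feat = nn.Sequential(nn.Linear(310, 64), nn.ReLU())      # shared feature extractor
c1, c2 = nn.Linear(64, 3), nn.Linear(64, 3)              # two emotion classifiers

def discrepancy(p1, p2):
    """Mean absolute difference between the two classifiers' predicted distributions."""
    return (F.softmax(p1, 1) - F.softmax(p2, 1)).abs().mean()

xs, ys = torch.randn(32, 310), torch.randint(0, 3, (32,))  # labeled source batch
xt = torch.randn(32, 310)                                   # unlabeled target batch

params = list(feat.parameters()) + list(c1.parameters()) + list(c2.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

# Supervised loss on the source domain keeps both classifiers accurate.
fs = feat(xs)
loss_cls = F.cross_entropy(c1(fs), ys) + F.cross_entropy(c2(fs), ys)

# The feature extractor is pushed to reduce the classifiers' disagreement on
# target data; in the full procedure the classifiers are alternately trained
# to maximize this discrepancy, giving the adversarial game.
loss_disc = discrepancy(c1(feat(xt)), c2(feat(xt)))

opt.zero_grad(); (loss_cls + loss_disc).backward(); opt.step()
```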
Brain-computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEP) have attracted much attention in the field of intelligent robotics. Traditional SSVEP-based BCI systems mostly use synchronized triggers without identifying whether the user is in a control or non-control state, resulting in systems that lack autonomous control capability. Therefore, this paper proposed an SSVEP asynchronous state recognition method, which constructs an asynchronous state recognition model by fusing multiple time-frequency domain features of electroencephalogram (EEG) signals and combining them with linear discriminant analysis (LDA) to improve the accuracy of SSVEP asynchronous state recognition. Furthermore, to address the control needs of disabled individuals in multitasking scenarios, a brain-machine fusion system based on SSVEP-BCI asynchronous cooperative control was developed. This system enabled collaborative control of a wearable manipulator and a robotic arm, with the robotic arm acting as a “third hand” that offers significant advantages in complex environments. The experimental results showed that the SSVEP asynchronous control algorithm and brain-computer fusion system proposed in this paper could assist users in completing multitasking cooperative operations. The average accuracy of user intent recognition in online control experiments was 93.0%, which provides a theoretical and practical basis for the practical application of asynchronous SSVEP-BCI systems.
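A minimal sketch of control/idle (asynchronous state) detection by fusing simple time- and frequency-domain features with LDA is given below on simulated data. The stimulation frequencies, the particular features and the sampling rate are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS, STIM_FREQS = 250, (8.0, 10.0, 12.0, 15.0)

def fused_features(trial):
    """trial: (channels, samples) -> fused frequency- and time-domain features."""
    spec = np.abs(np.fft.rfft(trial, axis=1))
    freqs = np.fft.rfftfreq(trial.shape[1], 1.0 / FS)
    freq_feats = [spec[:, np.argmin(np.abs(freqs - f))].mean() for f in STIM_FREQS]
    time_feats = [trial.var(), np.abs(np.diff(trial, axis=1)).mean()]
    return np.array(freq_feats + time_feats)

rng = np.random.default_rng(3)
t = np.arange(500) / FS
ssvep = 0.8 * np.sin(2 * np.pi * 10.0 * t)            # simulated 10 Hz response
control = rng.standard_normal((40, 8, 500)) + ssvep   # "control" trials
idle = rng.standard_normal((40, 8, 500))               # "non-control" trials

X = np.array([fused_features(tr) for tr in np.concatenate([control, idle])])
y = np.r_[np.ones(40), np.zeros(40)]
lda = LinearDiscriminantAnalysis().fit(X[::2], y[::2])
print("held-out accuracy:", lda.score(X[1::2], y[1::2]))
```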
Neurofeedback (NF) technology based on electroencephalogram (EEG) data or functional magnetic resonance imaging (fMRI) has been widely studied and applied. In contrast, functional near-infrared spectroscopy (fNIRS) has only become a tool for NF research in recent years. fNIRS is a neuroimaging technology based on hemodynamics, which has the advantages of low cost, good portability and high spatial resolution, and is more suitable for use in natural environments. At present, there is a lack of comprehensive reviews of fNIRS-based neurofeedback (fNIRS-NF) technology in China. In order to provide a reference for research on fNIRS-NF, this paper first describes its principles and key technologies, and then focuses on its applications. Finally, the future development trends of fNIRS-NF are discussed. In conclusion, this paper summarizes fNIRS-NF technology and its applications, and concludes that fNIRS-NF has potential practicability in neurological diseases and related fields, and that fNIRS can serve as a good modality for NF training. This paper is expected to provide reference information for the development of fNIRS-NF technology.