Eye-computer interaction technology based on the electro-oculogram (EOG) gives users a convenient way to control devices and has considerable social significance. However, eye-computer interaction is often disturbed by involuntary eye movements, which leads to misjudged commands, degrades the user experience, and in severe cases can even cause danger. Starting from the basic concepts and principles of eye-computer interaction, this paper therefore surveys the current mainstream methods for classifying voluntary and involuntary eye movements and analyzes the characteristics of each technique. Their performance is then examined in the context of specific application scenarios, and the remaining open problems are summarized, with the aim of providing a research reference for workers in related fields.
When classifying eye movement patterns across different tasks, the performance of the support vector machine (SVM) is highly sensitive to its parameter settings. To address this problem, we propose an improved whale optimization algorithm (WOA) for tuning the SVM and thereby enhancing the classification of eye movement data. Based on the characteristics of such data, this study first extracts 57 features related to fixations and saccades and then applies the ReliefF algorithm for feature selection. To remedy the standard WOA's low convergence accuracy and its tendency to become trapped in local minima, we introduce an inertia weight that balances local and global search and accelerates convergence, and we adopt a differential mutation strategy that increases individual diversity so the population can escape local optima. Experiments on eight benchmark test functions show that the improved WOA achieves the best convergence accuracy and convergence speed. Finally, the SVM model optimized by the improved WOA is applied to classifying eye movement data related to autism; experimental results on a public dataset show that its classification accuracy is substantially higher than that of the conventional SVM. Compared with the standard WOA and other optimization algorithms, the proposed model also attains higher recognition accuracy, offering a new approach to eye movement pattern recognition. In future work, the method can be paired with eye trackers to acquire eye movement data and assist medical diagnosis.
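As a concrete illustration of the two modifications, the minimal sketch below (Python/NumPy) places a linearly decreasing inertia weight inside the standard WOA position updates and follows each iteration with a greedy differential-mutation step. In the paper's setting the objective would be the cross-validated SVM error as a function of its parameters (e.g. C and gamma); a sphere test function stands in here, and the weight schedule, mutation factor `F`, and all names are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def improved_woa(obj, dim, bounds, n_whales=30, n_iter=200, F=0.5, seed=0):
    """WOA with an inertia weight and differential mutation (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_whales, dim))      # whale positions
    fit = np.apply_along_axis(obj, 1, X)               # fitness of each whale
    best = X[fit.argmin()].copy()                      # best solution so far

    for t in range(n_iter):
        a = 2 * (1 - t / n_iter)                       # shrinks linearly 2 -> 0
        w = 0.9 - 0.5 * t / n_iter                     # inertia weight (assumed schedule)
        for i in range(n_whales):
            A = 2 * a * rng.random() - a               # exploration/exploitation coefficient
            C = 2 * rng.random(dim)
            if rng.random() < 0.5:
                if abs(A) < 1:                         # exploit: encircle the weighted best
                    X[i] = w * best - A * np.abs(C * best - X[i])
                else:                                  # explore: follow a random whale
                    Xr = X[rng.integers(n_whales)]
                    X[i] = Xr - A * np.abs(C * Xr - X[i])
            else:                                      # spiral bubble-net update (b = 1)
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + w * best
        X = np.clip(X, lo, hi)

        # differential mutation: DE/rand/1 perturbation with greedy selection,
        # which keeps a mutant only when it improves the whale's fitness
        for i in range(n_whales):
            p, q, s = rng.choice(n_whales, size=3, replace=False)
            V = np.clip(X[p] + F * (X[q] - X[s]), lo, hi)
            if obj(V) < obj(X[i]):
                X[i] = V

        fit = np.apply_along_axis(obj, 1, X)
        if fit.min() < obj(best):
            best = X[fit.argmin()].copy()
    return best, obj(best)

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))           # stand-in test function
    best, val = improved_woa(sphere, dim=10, bounds=(-5.0, 5.0))
    print(f"best fitness: {val:.3e}")
```

To tune an SVM, `obj` would instead decode a candidate position into (C, gamma), train the classifier, and return the cross-validated error, leaving the search loop itself unchanged.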
Existing emotion recognition research is typically limited to static laboratory settings and does not fully capture how emotional states change in dynamic scenarios. To address this problem, this paper proposes a dynamic continuous emotion recognition method based on electroencephalography (EEG) and eye movement signals. First, an experimental paradigm was designed to cover six dynamic emotion-transition scenarios: happy to calm, calm to happy, sad to calm, calm to sad, nervous to calm, and calm to nervous. EEG and eye movement data were collected simultaneously from 20 subjects, filling a gap in current multimodal dynamic continuous emotion datasets. In the valence-arousal two-dimensional space, the stimulus videos were rated every five seconds on a scale of 1 to 9, and the resulting dynamic continuous emotion labels were normalized. Frequency-band features were then extracted from the preprocessed EEG and eye movement data, and a cascade feature fusion approach combined the two modalities into an information-rich multimodal feature vector. This vector was fed into four regression models (support vector regression with a radial basis function kernel, decision tree, random forest, and K-nearest neighbors) to build the dynamic continuous emotion recognition model. The results show that the proposed method achieves the lowest mean squared error for both valence and arousal across the six dynamic continuous emotions. The approach accurately recognizes emotion transitions in dynamic situations and offers higher accuracy and robustness than either EEG or eye movement signals alone, making it well suited to practical applications.
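To make the fusion and regression stage concrete, the sketch below assumes the cascade fusion is a simple feature-level concatenation and evaluates the four regressor types named above by mean squared error. The feature dimensions, window count, and synthetic data are placeholders for the recorded dataset, which is not reproduced here; the scikit-learn estimators match the model families listed in the abstract, not the paper's exact hyperparameters.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_windows = 600                                   # five-second rating windows (assumed count)
eeg_feats = rng.normal(size=(n_windows, 160))     # e.g. band power per channel/band (assumed dim)
eye_feats = rng.normal(size=(n_windows, 30))      # e.g. pupil/fixation/saccade stats (assumed dim)
valence = rng.uniform(1, 9, size=n_windows)       # continuous labels on the 1-9 scale

# cascade fusion, read here as column-wise concatenation of the two feature blocks
X = StandardScaler().fit_transform(np.hstack([eeg_feats, eye_feats]))
X_tr, X_te, y_tr, y_te = train_test_split(X, valence, test_size=0.2, random_state=0)

models = {
    "SVR (RBF kernel)": SVR(kernel="rbf", C=1.0),
    "Decision tree": DecisionTreeRegressor(random_state=0),
    "Random forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "K-nearest neighbors": KNeighborsRegressor(n_neighbors=5),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: MSE = {mean_squared_error(y_te, pred):.3f}")
```

The same pipeline would be run a second time with arousal ratings as the target, yielding the two per-dimension error scores the abstract compares across modalities.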