West China Medical Publishers
Search results for keyword "Feature fusion": 14 results (page 1 of 2 shown below)
  • A novel approach for assessing quality of electrocardiogram signal by integrating multi-scale temporal features

    During long-term electrocardiogram (ECG) monitoring, various types of noise inevitably mix with the signal, potentially hindering doctors' ability to accurately assess and interpret patient data. Evaluating the quality of ECG signals before analysis and diagnosis is therefore crucial. This paper addresses the limitations of existing ECG signal quality assessment methods, particularly their insufficient attention to multi-scale correlation among the 12 leads. We propose a novel ECG signal quality assessment method that integrates a convolutional neural network (CNN) with a squeeze-and-excitation residual network (SE-ResNet). This approach captures both local and global features of the ECG time series while emphasizing the spatial correlation among the leads. Testing on a public dataset demonstrated that our method achieved an accuracy of 99.5%, a sensitivity of 98.5%, and a specificity of 99.6%. Compared with other methods, our technique significantly improves the accuracy of ECG signal quality assessment by leveraging inter-lead correlation information, which is expected to advance intelligent ECG monitoring and diagnostic technology.

    Release date: 2024-12-27 03:50
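The squeeze-and-excitation (SE) mechanism mentioned in the abstract recalibrates per-channel responses before the residual connection. A minimal NumPy sketch of one SE operation over a (leads, time) feature map, with illustrative weights and sizes that are not the authors' model:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-excitation over a (channels, time) feature map.

    Squeeze: global average pooling per channel.
    Excite: two small dense layers produce per-channel scales in (0, 1),
    which reweight the channels (here, the 12 ECG leads)."""
    s = x.mean(axis=1)                       # squeeze: (C,)
    z = np.maximum(w1 @ s, 0.0)              # reduction layer + ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ z)))  # expansion layer + sigmoid
    return x * scale[:, None]                # reweight each channel

rng = np.random.default_rng(0)
C, T, r = 12, 100, 4                         # 12 leads, reduction ratio 4
x = rng.standard_normal((C, T))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = se_block(x, w1, w2)
print(y.shape)  # (12, 100)
```

Since the sigmoid scales lie strictly in (0, 1), the block can only attenuate channels, never amplify them; in SE-ResNet the residual path restores magnitude.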
  • Research on arrhythmia classification algorithm based on adaptive multi-feature fusion network

    Deep learning methods can automatically analyze electrocardiogram (ECG) data and rapidly classify arrhythmias, which provides significant clinical value for early arrhythmia screening. How to select arrhythmia features effectively under limited abnormal-sample supervision is an urgent issue to address. This paper proposed an arrhythmia classification algorithm based on an adaptive multi-feature fusion network. The algorithm extracted RR-interval features from ECG signals, employed a one-dimensional convolutional neural network (1D-CNN) to extract time-domain deep features, and employed Mel-frequency cepstral coefficients (MFCC) with a two-dimensional convolutional neural network (2D-CNN) to extract frequency-domain deep features. The features were fused using an adaptive weighting strategy for arrhythmia classification. The algorithm was evaluated under the inter-patient paradigm on the arrhythmia database jointly developed by the Massachusetts Institute of Technology and Beth Israel Hospital (MIT-BIH). Experimental results showed an average precision of 75.2%, an average recall of 70.1%, and an average F1-score of 71.3%, demonstrating high classification accuracy and providing algorithmic support for arrhythmia classification in wearable devices.

    Release date: 2025-02-21 03:20
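One common way to realize the adaptive weighting the abstract describes is a softmax over learnable per-branch scores, applied to the RR-interval, time-domain, and frequency-domain feature vectors before concatenation. A minimal sketch under that assumption (the scores and feature sizes are illustrative, not the paper's):

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def adaptive_fuse(features, scores):
    """Weight each branch's feature vector by a softmax over
    learnable scores, then concatenate the weighted vectors."""
    w = softmax(scores)
    return np.concatenate([wi * f for wi, f in zip(w, features)])

rr   = np.ones(4)   # stand-in for RR-interval features
tdom = np.ones(8)   # stand-in for 1D-CNN time-domain features
fdom = np.ones(8)   # stand-in for MFCC + 2D-CNN frequency-domain features
scores = np.array([0.2, 1.0, 0.5])  # would be learned during training
fused = adaptive_fuse([rr, tdom, fdom], scores)
print(fused.shape)  # (20,)
```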
  • Deep learning method for magnetic resonance imaging fluid-attenuated inversion recovery image synthesis

    Magnetic resonance imaging (MRI) can obtain multi-modal images with different contrasts, providing rich information for clinical diagnosis. However, some contrast images are not scanned, or the acquired images cannot meet diagnostic requirements, because of limited patient cooperation or scanning conditions. Image synthesis techniques have become a way to compensate for such missing images, and in recent years deep learning has been widely used in MRI synthesis. This paper proposes a synthesis network based on multi-modal fusion: a feature encoder first encodes the features of each unimodal image separately, a feature fusion module then fuses the features of the different modalities, and finally the target modal image is generated. The similarity measure between the target image and the predicted image is improved by introducing a dynamically weighted combined loss function defined over both the spatial domain and the K-space domain. Experimental validation and quantitative comparison show that the proposed multi-modal fusion deep learning network can effectively synthesize high-quality MRI fluid-attenuated inversion recovery (FLAIR) images. In summary, the proposed method can reduce a patient's MRI scanning time and address the clinical problem of FLAIR images that are missing or of insufficient quality for diagnosis.

    Release date: 2023-10-20 04:48
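A loss combining a spatial-domain term with a K-space term, as the abstract describes, can be sketched as a weighted sum of L1 errors in the image domain and in the 2-D Fourier domain. The L1 form and the fixed weight below are assumptions for illustration; the paper's loss is dynamically weighted:

```python
import numpy as np

def combined_loss(pred, target, alpha=0.5):
    """Weighted sum of an image-domain L1 loss and a k-space
    (2-D FFT) L1 loss; alpha plays the role of the dynamic weight."""
    spatial = np.mean(np.abs(pred - target))
    kspace = np.mean(np.abs(np.fft.fft2(pred) - np.fft.fft2(target)))
    return alpha * spatial + (1.0 - alpha) * kspace

rng = np.random.default_rng(1)
target = rng.standard_normal((32, 32))
pred = target + 0.1 * rng.standard_normal((32, 32))
print(combined_loss(pred, target))
print(combined_loss(target, target))  # 0.0 for a perfect prediction
```

The K-space term penalizes frequency-content errors (e.g. blurring) that a purely spatial loss weights only weakly.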
  • Research on emotion recognition methods based on multi-modal physiological signal feature fusion

    Emotion classification and recognition is a crucial area in affective computing. Physiological signals, such as the electroencephalogram (EEG), provide an accurate reflection of emotions and are difficult to disguise. However, emotion recognition still faces challenges in single-modal signal feature extraction and multi-modal signal integration. This study collected EEG, electromyogram (EMG), and electrodermal activity (EDA) signals from participants under three emotional states: happiness, sadness, and fear. A feature-weighted fusion method was applied to integrate the signals, and both a support vector machine (SVM) and an extreme learning machine (ELM) were used for classification. The results showed that classification accuracy was highest when the fusion weights were set to EEG 0.7, EMG 0.15, and EDA 0.15, achieving accuracy rates of 80.19% and 82.48% for SVM and ELM, respectively. These rates represented improvements of 5.81% and 2.95% over using EEG alone. This study offers methodological support for emotion classification and recognition using multi-modal physiological signals.

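The feature-weighted fusion at the abstract's best setting (EEG 0.7, EMG 0.15, EDA 0.15) can be sketched as scaling each modality's feature vector by its weight before concatenation. The scale-then-concatenate form and the feature sizes are assumptions for illustration:

```python
import numpy as np

def weighted_fusion(eeg, emg, eda, w=(0.7, 0.15, 0.15)):
    """Scale each modality's feature vector by its fusion weight and
    concatenate; weights follow the abstract's best-performing setting."""
    return np.concatenate([w[0] * eeg, w[1] * emg, w[2] * eda])

eeg = np.ones(10)   # stand-in EEG features
emg = np.ones(6)    # stand-in EMG features
eda = np.ones(4)    # stand-in EDA features
fused = weighted_fusion(eeg, emg, eda)
print(fused.shape)  # (20,)
```

The fused vector would then feed an SVM or ELM classifier.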
  • Research progress of multimodal magnetic resonance imaging brain tumor segmentation based on fused neural network model

    In clinical diagnosis of brain tumors, accurate segmentation based on multimodal magnetic resonance imaging (MRI) is essential for determining tumor type, extent, and spatial boundaries. However, differences in imaging mechanisms, information emphasis, and feature distributions among multimodal MRI data have posed significant challenges for precise tumor modeling and fusion-based segmentation. In recent years, fusion neural networks have provided effective strategies for integrating multimodal information and have become a major research focus in multimodal brain tumor segmentation. This review systematically summarized relevant studies on fusion neural networks for multimodal brain tumor segmentation published since 2019. First, the fundamental concepts of multimodal data fusion and model fusion were introduced. Then, existing methods were categorized into three types according to fusion levels: prediction fusion models, feature fusion models, and stage fusion models, and their structural characteristics and segmentation performance were comparatively analyzed. Finally, current limitations were discussed, and potential development trends of fusion neural networks for multimodal MRI brain tumor segmentation were summarized. This review aims to provide references for the design and optimization of future multimodal brain tumor segmentation models.

    Release date: 2026-02-06 02:05
  • Early Alzheimer’s disease recognition via multimodal hand movement quality assessment

    Alzheimer’s disease (AD) is a common illness of the elderly, and patients' hand movement abilities differ from those of healthy individuals. Using RGB, optical flow, and hand skeleton data as tri-modal image information, a method for early AD recognition via multi-modal hand motion quality assessment (EADR) is proposed. First, a hybrid-modality feature encoder incorporating global contextual information was designed to integrate the global context of features from the three modality-specific branches. Then, a fusion-modality feature decoder network incorporating modality-specific features was proposed to recover, from those features, information overlooked in the fusion-modality branch, thereby enhancing feature fusion. Experiments demonstrated that EADR could effectively capture high-quality hand motion features and excelled at hand motion quality assessment tasks, outperforming existing models. On this basis, the action-quality scoring regression model trained with the k-nearest neighbors algorithm achieved the best recognition performance for AD patients, with Spearman’s rank correlation coefficient and Kendall’s rank correlation coefficient reaching 90.98% and 83.44%, respectively. This indicates that assessment of hand motor ability may serve as a potential auxiliary tool for early AD identification.

    Release date: 2026-02-06 02:05
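Spearman's rank correlation, used above to evaluate the scoring regression, is the Pearson correlation of the ranks of the two score sequences. A minimal sketch (simple argsort ranking, no tie handling; the example data are invented):

```python
import numpy as np

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.
    Uses plain argsort ranking, so ties are not averaged in this sketch."""
    def ranks(a):
        r = np.empty(len(a))
        r[np.argsort(a)] = np.arange(1, len(a) + 1)
        return r
    rx, ry = ranks(x), ranks(y)
    return np.corrcoef(rx, ry)[0, 1]

scores = np.array([3.1, 1.2, 4.8, 2.5])      # hypothetical predicted scores
labels = np.array([30.0, 10.0, 50.0, 20.0])  # hypothetical ground truth
print(spearman(scores, labels))  # 1.0: identical ordering
```

Because only the ordering matters, a monotone but nonlinear scoring model can still reach a Spearman coefficient of 1.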
  • Research on motor imagery recognition based on feature fusion and transfer adaptive boosting

    This paper proposes a motor imagery recognition algorithm based on feature fusion and transfer adaptive boosting (TrAdaboost) to address the low accuracy of motor imagery (MI) recognition across subjects, thereby increasing the reliability of MI-based brain-computer interfaces (BCI) for cross-individual use. Time-frequency domain features of MI are obtained using an autoregressive model, power spectral density, and the discrete wavelet transform, while the filter bank common spatial pattern is used to extract spatial-domain features and multi-scale dispersion entropy is employed to extract nonlinear features. The IV-2a dataset from the 4th International BCI Competition was used for the binary classification task, with the pattern recognition model constructed by combining the improved TrAdaboost ensemble learning algorithm with a support vector machine (SVM), k-nearest neighbor (KNN), and a mind evolutionary algorithm-based back propagation (MEA-BP) neural network. The results show that the SVM-based TrAdaboost ensemble learning algorithm performs best when 30% of the target-domain instance data is migrated, with an average classification accuracy of 86.17%, a Kappa value of 0.7233, and an AUC value of 0.8498. These results suggest that the algorithm can be used to recognize MI signals across individuals, providing a new way to improve the generalization capability of BCI recognition models.

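The core of TrAdaBoost is its asymmetric weight update: misclassified source-domain instances are down-weighted (they look unlike the target task), while misclassified target-domain instances are up-weighted as in AdaBoost. A minimal sketch of one update in the style of the original TrAdaBoost formulation (Dai et al., 2007); the paper's improved variant may differ:

```python
import numpy as np

def tradaboost_update(w_src, w_tgt, err_src, err_tgt, eps_t, n_iters):
    """One TrAdaBoost weight update. err_* are 0/1 mistake indicators;
    eps_t is the weighted error on target data (must be < 0.5)."""
    n_src = len(w_src)
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_src) / n_iters))
    beta_tgt = eps_t / (1.0 - eps_t)
    w_src = w_src * beta_src ** err_src   # shrink misclassified source
    w_tgt = w_tgt * beta_tgt ** (-err_tgt)  # grow misclassified target
    total = w_src.sum() + w_tgt.sum()
    return w_src / total, w_tgt / total   # renormalize to sum to 1

w_src = np.full(4, 0.125)
w_tgt = np.full(4, 0.125)
err_src = np.array([1, 0, 0, 0])  # one source-domain mistake
err_tgt = np.array([0, 1, 0, 0])  # one target-domain mistake
ws, wt = tradaboost_update(w_src, w_tgt, err_src, err_tgt,
                           eps_t=0.25, n_iters=10)
print(ws.sum() + wt.sum())  # ≈ 1.0
```

After the update the misclassified target instance carries the largest weight, steering the next base learner (SVM, KNN, or MEA-BP here) toward the target subject.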
  • Research on prediction model of protein thermostability integrating graph embedding and network topology features

    Protein structure determines function, and structural information is critical for predicting protein thermostability. This study proposes a novel method for protein thermostability prediction by integrating graph embedding features and network topological features. By constructing residue interaction networks (RINs) to characterize protein structures, we calculated network topological features and utilized deep neural networks (DNN) to mine their inherent characteristics. Using the DeepWalk and Node2vec algorithms, we obtained node embeddings and extracted graph embedding features through a TopN strategy combined with bidirectional long short-term memory (BiLSTM) networks. Additionally, we introduced the Doc2vec algorithm to replace the Word2vec module in the graph embedding algorithms, generating graph embedding feature vector encodings. By employing an attention mechanism to fuse graph embedding features with network topological features, we constructed a high-precision prediction model, achieving 87.85% prediction accuracy on a bacterial protein dataset. Furthermore, we analyzed the differing contributions of the network topological features in the model and the differences among the graph embedding methods, and found that the combination of DeepWalk features with Doc2vec and all topological features was crucial for identifying thermostable proteins. This study provides a practical and effective new method for protein thermostability prediction, while offering theoretical guidance for exploring protein diversity, discovering new thermostable proteins, and intelligently modifying mesophilic proteins.

    Release date: 2025-08-19 11:47
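DeepWalk, the first embedding algorithm named above, generates truncated random walks over the graph and feeds them to a skip-gram model as if they were sentences. A minimal sketch of the walk-generation step over a toy residue interaction network (the graph and parameters are invented for illustration):

```python
import random

def random_walks(adj, walk_len, n_walks, seed=0):
    """Generate truncated random walks over an adjacency dict,
    as DeepWalk does before skip-gram training on the walks."""
    rng = random.Random(seed)
    walks = []
    for _ in range(n_walks):
        for start in adj:            # n_walks walks starting at each node
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj[walk[-1]]
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# Toy residue interaction network: residues as nodes, contacts as edges
rin = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = random_walks(rin, walk_len=5, n_walks=2)
print(len(walks))  # 8 walks: 2 per node
```

Node2vec differs only in biasing the transition probabilities; the Doc2vec substitution described above replaces the downstream skip-gram (Word2vec) step, not the walks.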
  • ST segment morphological classification based on support vector machine and multi-feature fusion

    ST segment morphology is closely related to cardiovascular disease: it is used not only to characterize different diseases but also to predict their severity. However, short duration, low energy, variable morphology, and interference from various noises make ST segment morphology classification a difficult task. This paper addresses the problems of single-feature extraction and low classification accuracy in ST segment morphology classification, using the gradient of the ST surface to improve multi-class accuracy. We identify five ST segment morphologies: normal, upward-sloping elevation, arch-back elevation, horizontal depression, and arch-back depression. First, we select a candidate ST segment according to the QRS complex location and medical statistical laws. Second, we extract the ST segment's area, mean value, difference from the reference baseline, slope, and mean squared error as features. In addition, the ST segment is converted into a surface, the gradient features of that surface are extracted, and all morphological features are assembled into a feature vector. Finally, a support vector machine classifies the ST segment, yielding the multi-class morphology. The MIT-Beth Israel Hospital database (MITDB) and the European ST-T database (EDB) were used as data sources to validate the algorithm, which achieved average recognition rates of 97.79% and 95.60%, respectively. Based on these results, the method is expected to be introduced in clinical settings to provide morphological guidance for the diagnosis of cardiovascular diseases and improve diagnostic efficiency.

    Release date: 2022-10-25 01:09
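The per-segment features listed above (area, mean, baseline offset, slope, mean squared error) are straightforward to compute from an ST-segment window. A minimal NumPy sketch, assuming a sampling rate and a linear-fit definition of slope and MSE (the abstract does not specify these details):

```python
import numpy as np

def st_features(st, baseline=0.0, fs=360.0):
    """Simple morphological features of an ST-segment window:
    area (rectangle rule), mean, offset from baseline, slope of a
    linear fit, and mean squared error of that fit."""
    t = np.arange(len(st)) / fs
    area = st.sum() / fs                      # rectangle-rule area
    mean = st.mean()
    offset = mean - baseline                  # difference from baseline
    slope, intercept = np.polyfit(t, st, 1)   # least-squares line
    mse = np.mean((st - (slope * t + intercept)) ** 2)
    return np.array([area, mean, offset, slope, mse])

st = np.linspace(0.0, 0.2, 40)  # synthetic upward-sloping segment, 40 samples
f = st_features(st)
print(f.shape)  # (5,)
```

For this perfectly linear synthetic segment the fit residual (MSE) is essentially zero and the slope is positive, consistent with an "upward-sloping elevation" morphology.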
  • Thyroid nodule segmentation method integrating receptance weighted key-value architecture and spherical geometric features

    The Transformer has high computational complexity in the segmentation of ultrasound thyroid nodules, and traditional image sampling techniques lose image details or omit key spatial information on high-resolution two-dimensional ultrasound images with complex texture or uneven density. To address both problems, this paper proposes a thyroid nodule segmentation method that integrates the receptance weighted key-value (RWKV) architecture with spherical geometry feature (SGF) sampling. The method effectively captures the details of adjacent regions through two-dimensional offset prediction and pixel-level adjustment of sampling positions, achieving precise segmentation. Additionally, this study introduces a patch attention module (PAM) that optimizes the decoder feature map using a regional cross-attention mechanism, enabling it to focus more precisely on the high-resolution features of the encoder. Experiments on the thyroid nodule segmentation dataset (TN3K) and the digital database for thyroid images (DDTI) show that the proposed method achieves Dice similarity coefficients (DSC) of 87.24% and 80.79%, respectively, outperforming existing models while maintaining lower computational complexity. This approach may provide an efficient solution for the precise segmentation of thyroid nodules.

    Release date: 2025-06-23 04:09
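The Dice similarity coefficient (DSC) reported above measures overlap between a predicted mask and the ground truth: 2|A ∩ B| / (|A| + |B|). A minimal sketch on toy binary masks:

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), with eps guarding empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1  # 16-pixel square
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1  # same square, shifted
print(round(dice(a, b), 4))  # 0.5625: 9-pixel overlap out of 16 + 16
```

Unlike plain pixel accuracy, DSC is insensitive to the large background region, which is why it is standard for nodule segmentation benchmarks such as TN3K and DDTI.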
