West China Medical Publishers
Keyword search "Multi-scale": 25 results across 3 pages (10 shown on this page)
  • Lung parenchyma segmentation based on double scale parallel attention network

    [Abstract] Automatic and accurate segmentation of lung parenchyma is essential for computer-assisted diagnosis of lung cancer. In recent years, researchers in the field of deep learning have proposed a number of improved lung parenchyma segmentation methods based on U-Net. However, existing segmentation methods ignore the complementary fusion of semantic information between feature maps at different levels and fail to distinguish the importance of different spatial locations and channels within a feature map. To solve this problem, this paper proposes the double scale parallel attention (DSPA) network (DSPA-Net) architecture, which introduces a DSPA module and an atrous spatial pyramid pooling (ASPP) module into the “encoder-decoder” structure. The DSPA module aggregates semantic information from feature maps at different levels while obtaining accurate spatial and channel information with the help of cooperative attention (CA). The ASPP module uses multiple parallel convolution kernels with different dilation rates to obtain feature maps containing multi-scale information under different receptive fields. The two modules handle multi-scale information across feature maps of different levels and within feature maps of the same level, respectively. We conducted experimental verification on the Kaggle competition dataset. The results show that the proposed architecture has clear advantages over current mainstream segmentation networks: the Dice similarity coefficient (DSC) and intersection over union (IoU) reached 0.972 ± 0.002 and 0.945 ± 0.004, respectively. This paper achieves automatic and accurate segmentation of lung parenchyma and provides a reference for applying attention mechanisms and multi-scale information to lung parenchyma segmentation. (An illustrative ASPP sketch appears at the end of this entry.)

    Release date: 2022-10-25 01:09
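
    The ASPP idea described above (parallel atrous convolutions with different dilation rates plus image-level pooling) can be sketched roughly as follows. This is an illustrative PyTorch approximation, not the authors' implementation; the dilation rates (1, 6, 12, 18) and channel handling are assumptions.

      # Illustrative ASPP sketch; dilation rates and channel sizes are assumed, not taken from the paper.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class ASPP(nn.Module):
          def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
              super().__init__()
              # One parallel branch per dilation rate: same input, different receptive field.
              self.branches = nn.ModuleList([
                  nn.Conv2d(in_ch, out_ch, kernel_size=3 if r > 1 else 1,
                            padding=r if r > 1 else 0, dilation=r, bias=False)
                  for r in rates
              ])
              # Image-level context branch (global average pooling).
              self.pool_branch = nn.Sequential(
                  nn.AdaptiveAvgPool2d(1),
                  nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
              )
              # Fuse the concatenated multi-scale features back to out_ch channels.
              self.project = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, kernel_size=1)

          def forward(self, x):
              h, w = x.shape[-2:]
              feats = [branch(x) for branch in self.branches]
              pooled = F.interpolate(self.pool_branch(x), size=(h, w),
                                     mode="bilinear", align_corners=False)
              return self.project(torch.cat(feats + [pooled], dim=1))
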
  • Detection model of atrial fibrillation based on multi-branch and multi-scale convolutional networks

    Atrial fibrillation (AF) is a life-threatening heart condition, and its early detection and treatment have garnered significant attention from physicians in recent years. Traditional AF detection relies heavily on a doctor's reading of electrocardiograms (ECGs), but prolonged analysis of ECG signals is very time-consuming. This paper designs an AF detection model based on the Inception module, constructing multi-branch detection channels to process the raw ECG signal, its gradient signal, and its frequency representation during AF. The model efficiently extracts QRS complex and RR interval features from the gradient signal, extracts P-wave and f-wave features from the frequency signal, and uses the raw signal to supplement missing information. The multi-scale convolutional kernels in the Inception module provide various receptive fields, and comprehensive analysis of the multi-branch results enables early AF detection. Compared with current machine learning algorithms that use only RR interval and heart rate variability features, the proposed algorithm additionally employs frequency features, making fuller use of the information within the signals. Relative to deep learning methods that use raw and frequency signals, this paper introduces an enhancement of the QRS complex that allows the network to extract features more effectively. By using a multi-branch input mode, the model comprehensively considers the irregular RR intervals and the P-wave and f-wave features of AF. Testing on the MIT-BIH AF database showed an inter-patient detection accuracy of 96.89%, a sensitivity of 97.72%, and a specificity of 95.88%. The proposed model demonstrates excellent performance and can achieve automatic AF detection. (A rough sketch of the three-branch input preparation appears at the end of this entry.)

    Release date: 2024-10-22 02:33
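
    As a rough illustration of the three-branch input described above (raw, gradient, and frequency views of the ECG), the sketch below prepares the branch inputs with NumPy/SciPy. The sampling rate and STFT window length are assumptions, and the Inception backbone itself is omitted.

      # Illustrative preparation of multi-branch ECG inputs; parameters are assumed.
      import numpy as np
      from scipy.signal import stft

      def prepare_branches(ecg, fs=250):
          """Return raw, gradient, and frequency representations of one ECG window."""
          raw = np.asarray(ecg, dtype=np.float32)
          # Gradient branch: the first derivative emphasises the steep QRS complex.
          gradient = np.gradient(raw)
          # Frequency branch: magnitude spectrogram highlights P-wave / f-wave content.
          _, _, zxx = stft(raw, fs=fs, nperseg=128)
          frequency = np.abs(zxx)
          return raw, gradient, frequency
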
  • Non-rigid registration for medical images based on deformable convolution and multi-scale feature focusing modules

    Non-rigid registration plays an important role in medical image analysis. U-Net and its variants are widely studied and widely used in medical image registration. However, existing registration models based on U-Net and its variants lack sufficient learning capacity when dealing with complex deformations and do not fully utilize multi-scale contextual information, resulting in insufficient registration accuracy. To address this issue, a non-rigid registration algorithm for X-ray images based on deformable convolution and a multi-scale feature focusing module was proposed. First, residual deformable convolution replaced the standard convolution of the original U-Net to enhance the registration network's ability to express geometric deformations of the image. Then, strided convolution replaced the pooling operation in downsampling to alleviate the feature loss caused by repeated pooling. In addition, a multi-scale feature focusing module was introduced at the bridging layer of the encoder-decoder structure to improve the network's ability to integrate global contextual information. Theoretical analysis and experimental results both showed that the proposed registration algorithm can focus on multi-scale contextual information, handle medical images with complex deformations, and improve registration accuracy. It is suitable for non-rigid registration of chest X-ray images. (An illustrative residual deformable-convolution block appears at the end of this entry.)

    Release date: 2023-08-23 02:45
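
    A residual deformable convolution block of the kind described above can be sketched with torchvision's DeformConv2d. This is a hedged illustration: the channel sizes and block layout are assumptions, not the authors' exact design.

      # Illustrative residual deformable convolution block; layout and sizes are assumed.
      import torch.nn as nn
      from torchvision.ops import DeformConv2d

      class ResidualDeformBlock(nn.Module):
          def __init__(self, channels, kernel_size=3):
              super().__init__()
              pad = kernel_size // 2
              # A plain conv predicts sampling offsets (2 values per kernel tap).
              self.offset_conv = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                           kernel_size, padding=pad)
              self.deform_conv = DeformConv2d(channels, channels, kernel_size, padding=pad)
              self.act = nn.ReLU(inplace=True)
              # Downsampling elsewhere in the network would use a strided conv,
              # e.g. nn.Conv2d(channels, channels, 3, stride=2, padding=1), instead of pooling.

          def forward(self, x):
              offset = self.offset_conv(x)
              # Residual connection keeps the block easy to optimise.
              return self.act(x + self.deform_conv(x, offset))
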
  • Multi-modal physiological time-frequency feature extraction network for accurate sleep stage classification

    Sleep stage classification is essential for clinical disease diagnosis and sleep quality assessment. Most existing methods for sleep stage classification are based on single-channel or single-modal signals and extract features with a single-branch deep convolutional network, which not only limits the capture of diverse sleep-related features and increases the computational cost, but also affects the accuracy of sleep stage classification. To solve this problem, this paper proposes an end-to-end multi-modal physiological time-frequency feature extraction network (MTFF-Net) for accurate sleep stage classification. First, multi-modal physiological signals comprising electroencephalogram (EEG), electrocardiogram (ECG), electrooculogram (EOG) and electromyogram (EMG) are converted into two-dimensional time-frequency images using the short-time Fourier transform (STFT). Then, a time-frequency feature extraction network combining a multi-scale EEG compact convolutional network (Ms-EEGNet) and a bidirectional gated recurrent unit (Bi-GRU) network is used to obtain multi-scale spectral features related to sleep feature waveforms and temporal features related to sleep stage transitions. According to the American Academy of Sleep Medicine (AASM) EEG sleep staging criteria, the model achieved 84.3% accuracy on the five-class task on the third subgroup of the Institute of Systems and Robotics of the University of Coimbra Sleep Dataset (ISRUC-S3), with a macro F1 score of 83.1% and a Cohen's kappa coefficient of 79.8%. The experimental results show that the proposed model achieves higher classification accuracy and promotes the application of deep learning algorithms in assisting clinical decision-making. (An illustrative STFT conversion sketch appears at the end of this entry.)

    Release date: 2024-04-24 09:40
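
    The STFT conversion step described above can be illustrated as follows; the sampling rate and window length are assumptions and would differ per modality in practice.

      # Illustrative conversion of multi-modal signals into time-frequency images; parameters assumed.
      import numpy as np
      from scipy.signal import stft

      def to_time_frequency_images(signals, fs=200, nperseg=256):
          """signals: dict of 1-D arrays, e.g. {'EEG': ..., 'ECG': ..., 'EOG': ..., 'EMG': ...}."""
          images = {}
          for name, sig in signals.items():
              _, _, zxx = stft(sig, fs=fs, nperseg=nperseg)
              # Log-magnitude spectrogram serves as a 2-D "image" for the CNN branch.
              images[name] = np.log1p(np.abs(zxx)).astype(np.float32)
          return images
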
  • Predicting epileptic seizures based on a multi-convolution fusion network

    Current epilepsy prediction methods do not effectively characterize the multi-domain features of complex long-term electroencephalogram (EEG) data, leading to suboptimal prediction performance. Therefore, this paper proposes a novel multi-scale sparse adaptive convolutional network based on a multi-head attention mechanism (MS-SACN-MM) to effectively characterize these multi-domain features. The model first preprocesses the EEG data, constructs multiple convolutional layers to avoid information overload, and uses a multi-layer perceptron and a multi-head attention mechanism to focus the network on critical pre-seizure features. It then adopts a focal loss training strategy to alleviate class imbalance and enhance the model's robustness. Experimental results show that on the public CHB-MIT dataset created by MIT and Boston Children's Hospital, the MS-SACN-MM model achieves a maximum accuracy of 0.999 for seizure prediction 10 to 15 minutes in advance. This demonstrates good predictive performance and is of significant value for early intervention and intelligent clinical management of epilepsy patients. (An illustrative focal loss sketch appears at the end of this entry.)

    Release date: 2025-10-21 03:48
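
    The focal loss training strategy mentioned above can be sketched as follows for a binary pre-seizure vs. interictal task; the gamma and alpha values are common defaults, not the paper's settings.

      # Illustrative binary focal loss; gamma and alpha are assumed defaults.
      import torch
      import torch.nn.functional as F

      def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
          """logits, targets: float tensors of shape (batch,); targets in {0, 1}."""
          bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
          p_t = torch.exp(-bce)                      # probability assigned to the true class
          alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
          # Down-weight easy examples so rare pre-seizure segments dominate the gradient.
          return (alpha_t * (1 - p_t) ** gamma * bce).mean()
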
  • The dual-stream feature pyramid network based on Mamba and convolution for brain magnetic resonance image registration

    Deformable image registration plays a crucial role in medical image analysis. Although various advanced registration models have been proposed, achieving accurate and efficient deformable registration remains challenging. Leveraging the recent strong performance of Mamba in computer vision, we introduced a novel model called MCRDP-Net. MCRDP-Net adopted a dual-stream network architecture that combined Mamba blocks and convolutional blocks to simultaneously extract global and local information from the fixed and moving images. In the decoding stage, we employed a pyramid network structure to obtain high-resolution deformation fields, achieving efficient and precise registration. The effectiveness of MCRDP-Net was validated on the public brain registration datasets OASIS and IXI. Experimental results demonstrated clear advantages of MCRDP-Net in medical image registration, with the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95) and average surface distance (ASD) reaching 0.815, 8.123 and 0.521 on the OASIS dataset and 0.773, 7.786 and 0.871 on the IXI dataset. In summary, MCRDP-Net shows superior performance in deformable image registration and effectively improves both the accuracy and efficiency of registration, providing strong support for subsequent medical research and applications. (An illustrative DSC computation appears at the end of this entry.)

    Release date: 2024-12-27 03:50
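
    The DSC metric reported above measures label overlap after warping; a minimal computation is sketched below (boolean masks for one anatomical structure are assumed as input).

      # Illustrative Dice similarity coefficient (DSC) for evaluating registration overlap.
      import numpy as np

      def dice(label_a, label_b):
          """label_a, label_b: boolean arrays marking the same structure in two images."""
          a, b = np.asarray(label_a, dtype=bool), np.asarray(label_b, dtype=bool)
          inter = np.logical_and(a, b).sum()
          denom = a.sum() + b.sum()
          return 2.0 * inter / denom if denom > 0 else 1.0
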
  • Research on multi-scale convolutional neural network hand muscle strength prediction model improved based on convolutional attention module

    In order to realize quantitative assessment of muscle strength in hand function rehabilitation and thus formulate scientific and effective rehabilitation training strategies, this paper constructs a muscle strength prediction model combining a multi-scale convolutional neural network (MSCNN), a convolutional block attention module (CBAM) and a bidirectional long short-term memory network (BiLSTM). The model fully exploits the spatial and temporal features of the data while suppressing useless features, thereby improving the accuracy of muscle strength prediction. To verify its effectiveness, the proposed model is compared with traditional models such as support vector machine (SVM), random forest (RF), convolutional neural network (CNN), CNN with a squeeze-and-excitation network (CNN-SENet), MSCNN-CBAM and MSCNN-BiLSTM, and the prediction performance of each model is examined as the applied hand force increases from 40% to 60% of the maximum voluntary contraction (MVC). The results show that prediction performance degrades as the applied force increases. An ablation experiment is then used to analyse the contribution of each module to the prediction result, and the CBAM module is found to play a key role in the model. The proposed model therefore effectively improves the accuracy of muscle strength prediction and deepens the understanding of the characteristics and patterns of hand muscle activity, providing support for further exploration of the mechanisms of hand function. (A minimal CBAM sketch appears at the end of this entry.)

    Release date: 2025-02-21 03:20
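
    The CBAM component referred to above applies channel attention followed by spatial attention; a minimal 2-D sketch is given below, with the reduction ratio and spatial kernel size assumed (common defaults), not taken from the paper.

      # Illustrative CBAM block (channel attention then spatial attention); hyperparameters assumed.
      import torch
      import torch.nn as nn

      class CBAM(nn.Module):
          def __init__(self, channels, reduction=16, spatial_kernel=7):
              super().__init__()
              # Channel attention: shared MLP over average- and max-pooled channel descriptors.
              self.mlp = nn.Sequential(
                  nn.Linear(channels, channels // reduction),
                  nn.ReLU(inplace=True),
                  nn.Linear(channels // reduction, channels),
              )
              # Spatial attention: conv over channel-wise average and max maps.
              self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

          def forward(self, x):
              b, c, _, _ = x.shape
              avg = self.mlp(x.mean(dim=(2, 3)))
              mx = self.mlp(x.amax(dim=(2, 3)))
              x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)          # channel attention
              spatial_in = torch.cat([x.mean(dim=1, keepdim=True),
                                      x.amax(dim=1, keepdim=True)], dim=1)
              return x * torch.sigmoid(self.spatial(spatial_in))        # spatial attention
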
  • Automatic epilepsy detection with an attention-based multiscale residual network

    The deep learning-based automatic detection of epileptic electroencephalogram (EEG) activity, which avoids subjective human influence, has attracted much attention, and its effectiveness depends mainly on the deep neural network model. In this paper, an attention-based multiscale residual network (AMSRN) was proposed in consideration of the multiscale, spatio-temporal characteristics of epileptic EEG and the information flow among channels, and it was combined with multiscale principal component analysis (MSPCA) to realize automatic epilepsy detection. Firstly, MSPCA was used for noise reduction and feature enhancement of the original epileptic EEG. Then, the structure and parameters of AMSRN were designed. Within it, an attention module (AM), a multiscale convolutional module (MCM), a spatio-temporal feature extraction module (STFEM) and a classification module (CM) were applied successively to re-express the signal with attention weighting and then to extract, fuse and classify multiscale and spatio-temporal features. On the Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) public dataset, the AMSRN model achieved good results in sensitivity (98.56%), F1 score (98.35%), accuracy (98.41%) and precision (98.43%). The results show that AMSRN can make good use of the brain network information flow caused by seizures to enhance the differences among channels, and can effectively capture the multiscale and spatio-temporal features of EEG to improve the performance of epilepsy detection. (An illustrative multiscale residual block appears at the end of this entry.)

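    The multiscale convolutional module described above can be approximated by parallel 1-D convolutions with different kernel sizes and a residual connection; the kernel sizes here are assumptions.

      # Illustrative multiscale residual block for 1-D EEG; kernel sizes are assumed.
      import torch
      import torch.nn as nn

      class MultiScaleResidualBlock1d(nn.Module):
          def __init__(self, channels, kernel_sizes=(3, 5, 7)):
              super().__init__()
              # Parallel 1-D convolutions capture waveforms of different durations.
              self.branches = nn.ModuleList([
                  nn.Conv1d(channels, channels, k, padding=k // 2) for k in kernel_sizes
              ])
              self.fuse = nn.Conv1d(channels * len(kernel_sizes), channels, kernel_size=1)
              self.act = nn.ReLU(inplace=True)

          def forward(self, x):                       # x: (batch, channels, time)
              multi = torch.cat([branch(x) for branch in self.branches], dim=1)
              return self.act(x + self.fuse(multi))   # residual connection
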
  • Research on the effect of multi-modal transcranial direct current stimulation on stroke based on electroencephalogram

    As an emerging non-invasive brain stimulation technique, transcranial direct current stimulation (tDCS) has received increasing attention in the field of stroke rehabilitation, but its efficacy needs further study. The technique has three stimulation modes: bipolar stimulation, anodal stimulation and cathodal stimulation. Nineteen stroke patients were included in this study (10 with left-hemisphere lesions and 9 with right-hemisphere lesions). Resting-state electroencephalogram (EEG) signals were collected from the subjects before and after bipolar stimulation, anodal stimulation, cathodal stimulation, and sham stimulation, with sham stimulation serving as the control condition. The changes in the multi-scale intrinsic fuzzy entropy (MIFE) of the EEG signals before and after stimulation were compared. In patients with left-hemisphere lesions, MIFE was significantly greater in the frontal and central regions after bipolar stimulation (P < 0.05), in the left central region after anodal stimulation (P < 0.05), and in the frontal and right central regions after cathodal stimulation (P < 0.05). In patients with right-hemisphere lesions, MIFE was significantly greater in the frontal, central and parieto-occipital regions after bipolar stimulation (P < 0.05), in the left frontal and right central regions after anodal stimulation (P < 0.05), and in the central and right occipital regions after cathodal stimulation (P < 0.05). The differences before and after sham stimulation were not statistically significant (P > 0.05). These results show that the bipolar stimulation mode affected the largest range of brain areas, which may provide a reference for clinical studies of post-stroke rehabilitation. (A simplified multi-scale fuzzy entropy sketch appears at the end of this entry.)

    Release date: 2022-12-28 01:34
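
    The MIFE measure used above builds on fuzzy entropy computed over multiple time scales; the sketch below shows coarse-grained fuzzy entropy as a simplified stand-in (the intrinsic-mode decomposition step of MIFE is omitted, and m, r and the scale range are assumed defaults).

      # Simplified multi-scale fuzzy entropy (a stand-in for MIFE); m, r and scales are assumed.
      import numpy as np

      def fuzzy_entropy(x, m=2, r=0.2):
          x = np.asarray(x, dtype=float)
          tol = r * x.std()

          def phi(dim):
              # Embed the signal and remove each template's own mean (fuzzy-entropy convention).
              templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
              templates = templates - templates.mean(axis=1, keepdims=True)
              # Chebyshev distance between template pairs, mapped to a fuzzy similarity.
              d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
              sim = np.exp(-(d ** 2) / tol)
              n = len(templates)
              return (sim.sum() - n) / (n * (n - 1))   # exclude self-matches

          return np.log(phi(m)) - np.log(phi(m + 1))

      def coarse_grain(x, scale):
          """Average consecutive samples to view the signal at a coarser time scale."""
          n = (len(x) // scale) * scale
          return np.asarray(x[:n], dtype=float).reshape(-1, scale).mean(axis=1)

      # Multi-scale profile: entropy of the coarse-grained signal at each scale, e.g.
      # profile = [fuzzy_entropy(coarse_grain(eeg_channel, s)) for s in range(1, 11)]
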
  • Advances in methods and applications of single-cell Hi-C data analysis

    The three-dimensional structure of chromatin plays a key role in cell function and gene regulation. Single-cell Hi-C techniques capture genomic structure information at the level of individual cells, providing an opportunity to study changes in genome structure between different cell types. Recently, a number of computational methods have been developed for single-cell Hi-C data analysis. This paper first reviews the available methods for single-cell Hi-C data analysis, including preprocessing of single-cell Hi-C data, multi-scale structure recognition, generation of bulk-like Hi-C contact matrices from single-cell Hi-C datasets, pseudo-time series analysis, and cell classification. It then describes the application of single-cell Hi-C data to cell differentiation and structural variation, and finally discusses future directions for single-cell Hi-C data analysis. (A minimal bulk-like aggregation sketch appears at the end of this entry.)

    Release date: 2023-10-20 04:48
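
    As a minimal illustration of the bulk-like matrix generation step mentioned above, the sketch below pools per-cell contact matrices; the dense-array input format is an assumption (real pipelines typically work with sparse matrices and tools such as cooler).

      # Illustrative pooling of single-cell Hi-C contact matrices into a bulk-like matrix.
      import numpy as np

      def bulk_like_matrix(cell_matrices):
          """cell_matrices: list of (n_bins x n_bins) contact-count arrays, one per cell."""
          total = np.zeros_like(cell_matrices[0], dtype=float)
          for m in cell_matrices:
              # Normalise each cell by its own total contacts so deeply sequenced
              # cells do not dominate the pooled matrix.
              total += np.asarray(m, dtype=float) / max(float(np.sum(m)), 1.0)
          return total / len(cell_matrices)
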