West China Medical Publishers
Search results for keyword "Deep learning": 70 results
  • Establishment and test of intelligent classification method of thoracolumbar fractures based on machine vision

    Objective To develop a deep learning system for CT images to assist in the diagnosis of thoracolumbar fractures and to assess the feasibility of its clinical application. Methods A total of 1256 CT images of thoracolumbar fractures, collected at West China Hospital of Sichuan University from January 2019 to March 2020, were annotated according to a unified standard using the LabelImg image annotation system. All CT images were classified according to the AO Spine thoracolumbar spine injury classification. For the diagnosis of type A, B, and C fractures, the deep learning system was optimized using 1039 CT images for training and validation (1004 for training, 35 for validation); the remaining 217 CT images served as the test set for comparing the system with clinicians' diagnoses. For subtyping of type A fractures, the system was optimized using 581 CT images for training and validation (556 for training, 25 for validation); the remaining 104 CT images served as the test set for the same comparison. Results The accuracy and Kappa coefficient of the deep learning system in diagnosing type A, B, and C fractures were 89.4% and 0.849 (P<0.001), respectively; for subtyping of type A fractures they were 87.5% and 0.817 (P<0.001), respectively. Conclusions The deep learning system classifies thoracolumbar fractures with high accuracy. It can assist the intelligent diagnosis of CT images of thoracolumbar fractures and streamline the current manual, complex diagnostic process.

    Release date: 2021-11-25 03:04
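
A minimal sketch of how the accuracy and Kappa coefficient reported in the entry above can be computed on a held-out test set, assuming scikit-learn is available; the label encoding and the placeholder predictions are illustrative and are not the authors' data or code.

```python
# Hedged sketch: evaluating a fracture-type classifier with accuracy and Cohen's kappa,
# as reported for the held-out test set in the entry above. Labels are placeholders.
from sklearn.metrics import accuracy_score, cohen_kappa_score

# y_true: AO Spine types from the reference standard (illustrative encoding: 0=A, 1=B, 2=C)
# y_pred: types predicted by the trained deep learning system
y_true = [0, 0, 1, 2, 1, 0]   # placeholder reference labels
y_pred = [0, 0, 1, 1, 1, 0]   # placeholder model predictions

print("accuracy:", accuracy_score(y_true, y_pred))
print("kappa:   ", cohen_kappa_score(y_true, y_pred))
```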
  • Deep learning method for magnetic resonance imaging fluid-attenuated inversion recovery image synthesis

    Magnetic resonance imaging (MRI) can produce multi-modal images with different contrasts, which provides rich information for clinical diagnosis. However, some contrast images are not scanned, or the acquired images do not meet diagnostic requirements, because of difficulties with patient cooperation or limitations of the scanning conditions. Image synthesis has therefore become a way to compensate for such missing images, and in recent years deep learning has been widely used in the field of MRI synthesis. This paper proposes a synthesis network based on multi-modal fusion: a feature encoder first encodes each unimodal image separately, a feature fusion module then fuses the features of the different modalities, and the target modal image is finally generated. The similarity between the target image and the predicted image is improved by introducing a dynamically weighted combined loss function defined over both the spatial domain and the k-space domain. Experimental validation and quantitative comparison show that the proposed multi-modal fusion deep learning network can effectively synthesize high-quality MRI fluid-attenuated inversion recovery (FLAIR) images. In summary, the method can shorten the patient's MRI scanning time and address the clinical problem of FLAIR images that are missing or of insufficient quality for diagnosis.

    Release date: 2023-10-20 04:48
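
A minimal PyTorch sketch of a loss that combines a spatial-domain term with a k-space-domain term, of the kind described in the entry above; the L1 formulation and the fixed weight `alpha` are assumptions standing in for the paper's dynamically weighted combination.

```python
# Hedged sketch of a combined spatial/k-space loss; the fixed alpha replaces the
# paper's dynamic weighting and is an illustrative assumption.
import torch
import torch.nn.functional as F

def combined_loss(pred, target, alpha=0.5):
    """Weighted sum of a spatial-domain L1 term and a k-space L1 term."""
    # spatial-domain term: pixel-wise L1 between synthesized and reference FLAIR images
    spatial = F.l1_loss(pred, target)
    # k-space term: L1 distance between the 2-D Fourier transforms of the two images
    k_pred = torch.fft.fft2(pred)
    k_target = torch.fft.fft2(target)
    kspace = (k_pred - k_target).abs().mean()
    return alpha * spatial + (1.0 - alpha) * kspace

# usage on a batch of single-channel images
loss = combined_loss(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64))
```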
  • A survey on the application of convolutional neural networks in the diagnosis of occupational pneumoconiosis

    Pneumoconiosis ranks first among the newly reported occupational diseases in China each year, and imaging remains one of the main clinical diagnostic methods. However, manual film reading places high demands on physicians, staging pneumoconiosis on imaging is difficult, and, owing to the uneven distribution of medical resources and other factors, misdiagnosis and missed diagnosis occur easily in primary healthcare institutions. Computer-aided diagnosis systems can enable rapid screening of pneumoconiosis, assist clinicians in identification and diagnosis, and improve diagnostic efficiency. As an important branch of deep learning, the convolutional neural network (CNN) excels at visual tasks such as image segmentation, image classification, and object detection because of its local connectivity and weight sharing, and it has been widely used in computer-aided diagnosis of pneumoconiosis in recent years. This literature review is organized into three parts according to the main applications of CNNs (VGG, U-Net, ResNet, DenseNet, CheXNet, Inception-V3, and ShuffleNet) in the imaging diagnosis of pneumoconiosis: CNNs for screening, CNNs for staging, and CNNs for segmentation of pneumoconiosis lesions. It aims to summarize the methods, advantages and disadvantages, and optimization ideas of CNNs applied to pneumoconiosis images, and to provide a reference for further development of computer-aided diagnosis of pneumoconiosis.

  • Advances in the diagnosis of prostate cancer based on image fusion

    Image fusion currently plays an important role in the diagnosis of prostate cancer (PCa). Selecting and developing a good fusion algorithm is the core task of image fusion, because it determines whether the fused image is of good quality and meets the actual needs of clinical application; in recent years this has become one of the research hotspots in medical image fusion. To study medical image fusion methods comprehensively, this paper reviewed the relevant literature published in China and abroad in recent years. Image fusion technologies were classified, and fusion algorithms were divided into traditional algorithms and deep learning (DL) algorithms. The principles and workflows of representative algorithms were analyzed and compared, their advantages and disadvantages were summarized, and relevant medical image datasets were introduced. Finally, future trends in medical image fusion algorithms were discussed, and directions for applying medical image fusion to the diagnosis of prostate cancer and other major diseases were pointed out.

  • Exploration of classical deep learning algorithm in intelligent classification of Chinese randomized controlled trials

    Objectives To explore the performance of the convolutional neural network (CNN), a deep learning algorithm, in screening randomized controlled trials (RCTs) from Chinese medical literature. Methods Literature on the topic of "oral science" published in 2014 was retrieved from CNKI, and citations containing titles and abstracts were exported. RCT screening was conducted by two independent screeners with checking and peer discussion, and the final screening results were used to train the CNN model. After training, a prospective comparative trial was organized: all literature on the topic of "oral science" published in CNKI from January to March 2018 was retrieved to compare the sensitivity (SEN) and specificity (SPE) of the algorithm with those of manual screening. The initial results of a single screener represented the performance of manual screening, and the final results after peer discussion served as the gold standard. The best thresholds of the algorithm were determined from the receiver operating characteristic (ROC) curve. Results A total of 1246 RCTs and 4754 non-RCTs were included for training and testing of the CNN model, and 249 RCTs and 949 non-RCTs were included in the prospective trial. The SEN and SPE of manual screening were 98.01% and 98.82%, respectively. For the algorithm, the SEN of RCT screening decreased and the SPE increased as the threshold increased. After 27 threshold changes, the ROC curve was obtained; the area under the curve was 0.9977, yielding an optimal accuracy threshold (threshold=0.4, SEN=98.39%, SPE=98.84%) and a high-sensitivity threshold (threshold=0.06, SEN=99.60%, SPE=94.10%). Conclusions A CNN model trained on the Chinese RCT classification database established in this study shows excellent performance in screening RCTs from Chinese medical literature, comparable to manual screening in the prospective controlled trial.

    Release date: 2019-12-19 11:19
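
A minimal sketch of choosing operating thresholds from an ROC curve, in the spirit of the entry above; the toy scores are placeholders, and Youden's J statistic is used here as a stand-in for the study's accuracy-based choice of the optimal threshold.

```python
# Hedged sketch: derive a balanced threshold and a high-sensitivity threshold from an
# ROC curve. The labels and classifier scores below are placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([1, 1, 0, 0, 1, 0, 0, 1])                     # 1 = RCT, 0 = non-RCT
scores = np.array([0.9, 0.8, 0.3, 0.1, 0.7, 0.4, 0.2, 0.95])    # CNN output probabilities

fpr, tpr, thresholds = roc_curve(y_true, scores)
print("AUC:", roc_auc_score(y_true, scores))

# balanced threshold: maximize sensitivity + specificity (Youden's J = TPR - FPR)
best = thresholds[np.argmax(tpr - fpr)]
print("balanced threshold:", best)

# high-sensitivity threshold: highest threshold whose sensitivity reaches the target
target_sen = 0.99
idx = int(np.argmax(tpr >= target_sen))
print("high-sensitivity threshold:", thresholds[idx], "SEN:", tpr[idx], "SPE:", 1 - fpr[idx])
```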
  • Research progress on the application of artificial intelligence in the screening and treatment of retinopathy of prematurity

    Retinopathy of prematurity (ROP) is a major cause of vision loss and blindness among premature infants, and timely screening, diagnosis, and intervention can effectively prevent its deterioration. However, ROP diagnosis faces several challenges globally, including high subjectivity, low screening efficiency, regional disparities in screening coverage, and a severe shortage of pediatric ophthalmologists. Applying artificial intelligence (AI) as an assistive tool or as an automated method for ROP diagnosis can improve the efficiency and objectivity of diagnosis, expand screening coverage, and enable automated screening with quantified diagnostic results. In a global environment that emphasizes the development and application of medical imaging AI, developing more accurate diagnostic networks, exploring more effective AI-assisted diagnosis methods, and enhancing the interpretability of AI-assisted diagnosis can accelerate the refinement of ROP-related AI policies and the deployment of AI products, thereby promoting the development of ROP diagnosis and treatment.

    Release date: 2023-12-27 08:53
  • A method for emotion transition recognition using cross-modal feature fusion and global perception

    Current studies on electroencephalogram (EEG) emotion recognition primarily concentrate on discrete stimulus paradigms in controlled laboratory settings, which cannot adequately represent the dynamic transition characteristics of emotional states during multi-context interactions. To address this issue, this paper proposes a novel method for emotion transition recognition based on a cross-modal feature fusion and global perception network (CFGPN). First, an experimental paradigm encompassing six types of emotion transition scenarios was designed, and EEG and eye movement data were simultaneously collected from 20 participants and annotated with dynamic continuous emotion labels. Subsequently, deep canonical correlation analysis integrated with a cross-modal attention mechanism was employed to fuse features from the EEG and eye movement signals, yielding multimodal feature vectors enriched with highly discriminative emotional information. These vectors were then fed into a parallel hybrid architecture combining convolutional neural networks (CNNs) and Transformers: the CNN captures local time-series features, while the Transformer uses its strong global perception capability to model long-range temporal dependencies, enabling accurate dynamic emotion transition recognition. The results demonstrate that the proposed method achieves the lowest mean square error in both valence and arousal recognition on the dynamic emotion transition dataset and on a classic multimodal emotion dataset, and it exhibits superior recognition accuracy and stability compared with five existing unimodal and six multimodal deep learning models. The approach enhances the adaptability and robustness of recognizing emotional state transitions in real-world scenarios, showing promising potential for applications in biomedical engineering.

    Release date: 2025-10-21 03:48
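
A minimal PyTorch sketch of cross-modal attention fusion between EEG and eye-movement feature sequences, loosely following the entry above; the feature dimensions, the single attention layer per direction, and the simple concatenation are assumptions, not the CFGPN architecture itself.

```python
# Hedged sketch: each modality attends to the other, then the attended features are
# concatenated for a downstream CNN/Transformer regressor (not implemented here).
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.eeg_to_eye = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.eye_to_eeg = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, eeg, eye):
        # eeg, eye: (batch, time, dim) feature sequences from each modality
        eeg_att, _ = self.eeg_to_eye(query=eeg, key=eye, value=eye)  # EEG attends to eye movements
        eye_att, _ = self.eye_to_eeg(query=eye, key=eeg, value=eeg)  # eye movements attend to EEG
        return torch.cat([eeg_att, eye_att], dim=-1)                 # fused multimodal features

fused = CrossModalFusion()(torch.randn(8, 100, 64), torch.randn(8, 100, 64))
print(fused.shape)  # torch.Size([8, 100, 128])
```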
  • Research progress of breast pathology image diagnosis based on deep learning

    Breast cancer is a malignancy caused by the abnormal proliferation of breast epithelial cells; it predominantly affects women and is commonly diagnosed from histopathological images. Deep learning techniques have made significant breakthroughs in medical image processing and outperform traditional detection methods in breast cancer pathology classification tasks. This paper first reviews advances in applying deep learning to breast pathology images, focusing on three key areas: multi-scale feature extraction, cellular feature analysis, and classification. It then summarizes the advantages of multimodal data fusion methods for breast pathology images. Finally, it discusses the challenges and future prospects of deep learning in breast cancer pathology image diagnosis, providing guidance for advancing its use in breast cancer diagnosis.

  • Research on prediction model of protein thermostability integrating graph embedding and network topology features

    Protein structure determines function, and structural information is critical for predicting protein thermostability. This study proposes a novel method for protein thermostability prediction that integrates graph embedding features and network topological features. By constructing residue interaction networks (RINs) to characterize protein structures, we calculated network topological features and used deep neural networks (DNNs) to mine their inherent characteristics. Using the DeepWalk and Node2vec algorithms, we obtained node embeddings and extracted graph embedding features through a TopN strategy combined with bidirectional long short-term memory (BiLSTM) networks. Additionally, we introduced the Doc2vec algorithm to replace the Word2vec module in the graph embedding algorithms, generating graph embedding feature vectors. By employing an attention mechanism to fuse graph embedding features with network topological features, we constructed a high-precision prediction model that achieved 87.85% accuracy on a bacterial protein dataset. Furthermore, we analyzed the differences in the contributions of the network topological features in the model and the differences among the graph embedding methods, and found that the combination of DeepWalk features with Doc2vec and all topological features was crucial for identifying thermostable proteins. This study provides a practical and effective new method for protein thermostability prediction and offers theoretical guidance for exploring protein diversity, discovering new thermostable proteins, and intelligently modifying mesophilic proteins.

    Release date: 2025-08-19 11:47
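
A minimal sketch of deriving DeepWalk-style graph-embedding features and network topological features from a residue interaction network and concatenating them per node, loosely following the entry above; the toy graph, the walk settings, and the plain concatenation (in place of the paper's BiLSTM, Doc2vec, and attention-fusion stages) are assumptions.

```python
# Hedged sketch: DeepWalk-style embeddings plus topological features on a toy graph
# standing in for a residue interaction network (RIN).
import random
import networkx as nx
import numpy as np
from gensim.models import Word2Vec

G = nx.erdos_renyi_graph(30, 0.2, seed=0)   # placeholder for a protein's RIN

def random_walks(graph, walks_per_node=10, walk_len=20):
    """Uniform random walks over the graph, returned as 'sentences' of node ids."""
    walks = []
    for _ in range(walks_per_node):
        for start in graph.nodes:
            walk, node = [str(start)], start
            for _ in range(walk_len - 1):
                nbrs = list(graph.neighbors(node))
                if not nbrs:
                    break
                node = random.choice(nbrs)
                walk.append(str(node))
            walks.append(walk)
    return walks

# graph-embedding features: skip-gram (Word2Vec) trained on the random walks
emb = Word2Vec(random_walks(G), vector_size=32, window=5, min_count=0, sg=1)

# network topological features per residue node
deg = nx.degree_centrality(G)
btw = nx.betweenness_centrality(G)
clu = nx.clustering(G)

# concatenate each node's embedding with its topological features
features = np.array([
    np.concatenate([emb.wv[str(n)], [deg[n], btw[n], clu[n]]]) for n in G.nodes
])
print(features.shape)   # (30, 35): 32 embedding dims + 3 topological features
```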
  • The current application state of neural network-based electroencephalogram diagnosis of Alzheimer’s disease

    The electroencephalogram (EEG) signal is a general reflection of the neurophysiological activity of the brain and has the advantages of being safe, efficient, real-time, and dynamic. With the development and advancement of machine learning research, automatic diagnosis of Alzheimer’s disease based on deep learning is becoming a research hotspot. Starting from feedforward neural networks, this paper compared and analyzed the structural properties of neural network models such as recurrent neural networks, convolutional neural networks, and deep belief networks, and their performance in the diagnosis of Alzheimer’s disease. It also discussed the possible challenges and future research trends in this field, aiming to provide a valuable reference for the clinical application of neural networks in the EEG diagnosis of Alzheimer’s disease.

    Release date: 2023-02-24 06:14