Magnetic resonance imaging (MRI) can acquire multi-modal images with different contrasts, providing rich information for clinical diagnosis. However, some contrasts may not be scanned, or the acquired images may fail to meet diagnostic requirements, owing to poor patient cooperation or limited scanning conditions. Image synthesis has therefore become a technique for compensating for such missing or inadequate images. In recent years, deep learning has been widely applied to MRI synthesis. This paper proposes a synthesis network based on multi-modal fusion: a feature encoder first encodes each unimodal image separately, a feature fusion module then fuses the features of the different modalities, and the network finally generates the target-modality image. A dynamically weighted combined loss function defined over both the spatial domain and k-space improves the similarity measure between the target image and the predicted image. Experimental validation and quantitative comparison show that the proposed multi-modal fusion deep learning network can effectively synthesize high-quality MRI fluid-attenuated inversion recovery (FLAIR) images. In summary, the proposed method can shorten patients' MRI scanning time and address the clinical problem of FLAIR images that are missing or of insufficient diagnostic quality.
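Below is a minimal PyTorch sketch of a combined spatial-domain/k-space loss of the kind described above. The L1 form of both terms, the magnitude comparison in k-space, and the single weight `alpha` are assumptions for illustration; the paper's dynamic weighting would update `alpha` during training rather than fixing it.

```python
import torch
import torch.nn.functional as F

def combined_loss(pred, target, alpha=0.5):
    """Weighted sum of a spatial-domain L1 term and a k-space L1 term.

    `alpha` balances the two domains; a dynamic scheme would adjust it
    over the course of training (e.g., per epoch) instead of fixing it.
    """
    # Spatial-domain term: pixel-wise L1 between predicted and target images.
    spatial = F.l1_loss(pred, target)
    # k-space term: L1 between the magnitudes of the 2-D Fourier transforms.
    kspace = F.l1_loss(torch.abs(torch.fft.fft2(pred)),
                       torch.abs(torch.fft.fft2(target)))
    return alpha * spatial + (1.0 - alpha) * kspace
```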
Objective To review the progress of artificial intelligence (AI) and radiomics in the study of abdominal aortic aneurysm (AAA). Methods The literature on AI, radiomics, and AAA published in recent years was collected and summarized in detail. Results AI and radiomics have influenced AAA research and clinical decision-making in feature extraction, risk prediction, patient management, simulation of stent-graft deployment, and data mining. Conclusion The application of AI and radiomics provides new ideas for AAA research and clinical decision-making, and is expected to inform personalized treatment and follow-up protocols that guide clinical practice toward precision medicine for AAA.
In fetal electrocardiogram (ECG) extraction, the single scale of the convolution encoders at each level of U-Net ignores the differences in size and shape between maternal and fetal ECG characteristic waves, and the threshold learning in the encoder's residual shrinkage module does not exploit the temporal information of the ECG signal. This paper proposes a fetal ECG extraction method based on a multi-scale residual shrinkage U-Net model. First, Inception blocks and time-domain attention were introduced into the residual shrinkage module to enhance the multi-scale feature extraction ability of the same-level convolution encoders and the use of the temporal information of the fetal ECG signal. To preserve more local details of the ECG waveform, the max pooling in U-Net was replaced with SoftPool. Finally, a decoder composed of residual modules and up-sampling layers gradually generated the fetal ECG signal. Experiments on clinical ECG signals showed that, compared with other fetal ECG extraction algorithms, the proposed method extracted clearer fetal ECG signals. Its sensitivity, positive predictive value, and F1 score on the 2013 challenge dataset reached 93.33%, 99.36%, and 96.09%, respectively, indicating that the method can effectively extract fetal ECG signals and has application value for perinatal fetal health monitoring.
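Below is a sketch of the SoftPool operation that replaces max pooling in the model above, written for 1-D signals such as ECG. It follows the published softmax-weighted-average formulation; the kernel size, stride, and shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def soft_pool1d(x, kernel_size=2, stride=2):
    """SoftPool for 1-D signals: a softmax-weighted average inside each
    pooling window, which preserves more local waveform detail than max
    pooling. Illustrative 1-D version for the ECG use case.
    """
    # Subtracting the global max before exponentiation avoids overflow;
    # the constant cancels in the ratio below.
    e = torch.exp(x - x.max())
    # A ratio of window averages equals the ratio of window sums.
    num = F.avg_pool1d(x * e, kernel_size, stride)
    den = F.avg_pool1d(e, kernel_size, stride)
    return num / den

# x: (batch, channels, samples), e.g. a 1 s ECG segment at 1 kHz.
x = torch.randn(8, 1, 1000)
y = soft_pool1d(x)  # -> (8, 1, 500)
```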
In recent years, epileptic seizure detection based on electroencephalogram (EEG) signals has attracted widespread academic attention. However, seizure data are difficult to collect, and models trained on few samples are prone to overfitting. To address this problem, this paper took the CHB-MIT epilepsy EEG dataset from Boston Children's Hospital as the research object and applied the wavelet transform for data augmentation, using different wavelet scale factors. In addition, by combining deep learning, ensemble learning, transfer learning, and other methods, a patient-specific epilepsy detection method with high accuracy was proposed for the case of insufficient training samples. In the experiments, wavelet scale factors of 2, 4, and 8 were compared and verified. With a scale factor of 8, the average accuracy, average sensitivity, and average specificity reached 95.47%, 93.89%, and 96.48%, respectively. Comparative experiments with recent related studies verified the advantages of the proposed method. These results may provide a reference for the clinical application of epilepsy detection.
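Below is a minimal sketch of wavelet-based augmentation in the spirit of the method above, using PyWavelets to compute the continuous wavelet transform of an EEG segment at scale factors 2, 4, and 8. The choice of the Morlet wavelet and the use of raw coefficients as augmented samples are assumptions; the abstract does not specify these details.

```python
import numpy as np
import pywt

def wavelet_augment(eeg, scales=(2, 4, 8), wavelet="morl"):
    """Generate augmented copies of a 1-D EEG segment by taking its
    continuous wavelet transform at a few fixed scale factors.
    """
    augmented = []
    for s in scales:
        # pywt.cwt returns (coefficients, frequencies); coefficients
        # has shape (number of scales passed, len(eeg)).
        coeffs, _ = pywt.cwt(eeg, scales=[s], wavelet=wavelet)
        augmented.append(coeffs[0])
    return augmented

# Example: three augmented variants of a synthetic 1 s segment at 256 Hz.
segment = np.random.randn(256)
variants = wavelet_augment(segment)
```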
Heart rate is a crucial indicator of human health with significant physiological importance. Traditional contact methods for measuring heart rate, such as electrocardiographs or wristbands, may not always meet the need for convenient health monitoring. Remote photoplethysmography (rPPG) provides a non-contact method for measuring heart rate and other physiological indicators by analyzing blood volume pulse signals. This approach is non-invasive, requires no direct contact, and allows long-term healthcare monitoring. Deep learning has emerged as a powerful tool for processing complex image and video data and has increasingly been employed to extract heart rate signals remotely. This article reviewed the latest research advances in deep learning-based rPPG heart rate measurement, summarized available public datasets, and explored future research directions and potential advances in non-contact heart rate measurement.
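For context, the classical (non-deep-learning) rPPG pipeline that the reviewed deep models build on can be sketched in a few lines: average the green channel over a face region in each frame, then read the dominant frequency in the heart-rate band. The band limits and the plain FFT peak picking are illustrative choices.

```python
import numpy as np

def estimate_heart_rate(green_trace, fps):
    """Estimate heart rate (bpm) from a mean green-channel trace taken
    over a face region in each video frame: detrend, then pick the
    dominant frequency in the typical heart-rate band via the FFT.
    """
    x = np.asarray(green_trace, dtype=float)
    x = x - x.mean()  # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    # Restrict to a plausible heart-rate band, about 0.7-4 Hz (42-240 bpm).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0
```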
The electroencephalogram (EEG) signal is a general reflection of the neurophysiological activity of the brain and has the advantages of being safe, efficient, real-time, and dynamic. With the development and advancement of machine learning research, automatic diagnosis of Alzheimer's disease based on deep learning is becoming a research hotspot. Starting from feedforward neural networks, this paper compared and analyzed the structural properties of neural network models such as recurrent neural networks, convolutional neural networks, and deep belief networks, and their performance in the diagnosis of Alzheimer's disease. It also discussed the possible challenges and future research trends in this area, with the aim of providing a valuable reference for the clinical application of neural networks in the EEG-based diagnosis of Alzheimer's disease.
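As a concrete anchor for the model families compared above, the following is a minimal 1-D convolutional network for binary EEG classification. It is purely illustrative: the layer sizes, input shape, and two-class head are assumptions, not taken from any study discussed in the review.

```python
import torch
import torch.nn as nn

class EEGConvNet(nn.Module):
    """Minimal 1-D CNN for binary EEG classification (e.g., Alzheimer's
    disease vs. control). All sizes are illustrative assumptions.
    """
    def __init__(self, n_channels=16, n_samples=512, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global pooling over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, n_channels, n_samples)
        return self.classifier(self.features(x).squeeze(-1))
```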
Objective To develop a deep learning system for CT images to assist in the diagnosis of thoracolumbar fractures and to analyze the feasibility of its clinical application. Methods A total of 1256 CT images of thoracolumbar fractures, collected from West China Hospital of Sichuan University between January 2019 and March 2020, were annotated to a unified standard using the LabelImg image annotation system. All CT images were classified according to the AO Spine thoracolumbar spine injury classification. The deep learning system for diagnosing type A, B, and C fractures was optimized with 1039 CT images for training and validation, of which 1004 served as the training set and 35 as the validation set; the remaining 217 CT images served as the test set for comparing the deep learning system with clinicians' diagnoses. The system for subtyping type A fractures was optimized with 581 CT images for training and validation, of which 556 served as the training set and 25 as the validation set; the remaining 104 CT images served as the test set for comparison with clinicians' diagnoses. Results The accuracy and Kappa coefficient of the deep learning system in diagnosing type A, B, and C fractures were 89.4% and 0.849 (P<0.001), respectively; for subtyping type A fractures they were 87.5% and 0.817 (P<0.001), respectively. Conclusions The deep learning system classifies thoracolumbar fractures with high accuracy. It can assist in the intelligent diagnosis of CT images of thoracolumbar fractures and improve the current manual and complex diagnostic process.
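The two agreement statistics reported above, accuracy and the Kappa coefficient, can be computed with scikit-learn as follows; the example labels are hypothetical AO Spine types.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

def agreement_metrics(y_true, y_pred):
    """Accuracy and Cohen's kappa between reference labels and model
    predictions, the two statistics reported for the fracture classifier.
    """
    return accuracy_score(y_true, y_pred), cohen_kappa_score(y_true, y_pred)

# Hypothetical AO Spine type labels for six test cases.
truth = ["A", "B", "C", "A", "B", "A"]
preds = ["A", "B", "C", "A", "A", "A"]
acc, kappa = agreement_metrics(truth, preds)
```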
Synergistic effects of drug combinations are very important for improving drug efficacy or reducing drug toxicity. However, because of the complex mechanisms of action between drugs, screening new drug combinations through trials is expensive, whereas virtual screening with computational models can effectively reduce the cost. Recently, researchers successfully predicted the synergy scores of new drug combinations on cancer cell lines using the deep learning model DeepSynergy. However, DeepSynergy is a two-stage method and uses only one kind of feature as input. In this study, we proposed a new end-to-end deep learning model, MulinputSynergy, which predicts the synergy score of drug combinations by integrating the gene expression, gene mutation, and gene copy number characteristics of cancer cell lines with the chemical characteristics of anticancer drugs. To address the high dimensionality of the features, we used a convolutional neural network to reduce the dimension of the gene features. Experimental results showed that the proposed model outperformed DeepSynergy, with the mean squared error decreasing from 197 to 176, the mean absolute error decreasing from 9.48 to 8.77, and the coefficient of determination increasing from 0.53 to 0.58. The model can learn the latent relationships between anticancer drugs and cell lines from multiple kinds of features and locate effective drug combinations quickly and accurately.
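Below is a sketch of an end-to-end multi-input regressor in the spirit of MulinputSynergy: a 1-D convolutional encoder compresses the high-dimensional gene features, and its output is fused with the chemical features of the two drugs. All layer sizes, the three-channel stacking of the omics features, and the fusion head are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiInputSynergyNet(nn.Module):
    """Illustrative end-to-end multi-input synergy regressor: a 1-D CNN
    compresses gene features, which are fused with drug chemistry features.
    """
    def __init__(self, gene_dim=20000, drug_dim=256):
        super().__init__()
        # 1-D convolutions reduce the dimensionality of the gene features
        # (expression, mutation, copy number stacked as 3 input channels).
        self.gene_encoder = nn.Sequential(
            nn.Conv1d(3, 8, kernel_size=9, stride=4),
            nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=9, stride=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),
            nn.Flatten(),  # -> 16 * 32 = 512 features
        )
        self.head = nn.Sequential(
            nn.Linear(512 + 2 * drug_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),  # predicted synergy score
        )

    def forward(self, genes, drug_a, drug_b):
        # genes: (batch, 3, gene_dim); drug_a, drug_b: (batch, drug_dim)
        g = self.gene_encoder(genes)
        return self.head(torch.cat([g, drug_a, drug_b], dim=1))
```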
Pneumoconiosis ranks first among the newly reported occupational diseases in China each year, and imaging remains one of the main clinical diagnostic methods. However, manual film reading demands highly experienced physicians, staging pneumoconiosis on imaging is difficult, and the uneven distribution of medical resources and other factors readily lead to misdiagnosis and missed diagnosis in primary healthcare institutions. Computer-aided diagnosis systems can rapidly screen for pneumoconiosis, assist clinicians in identification and diagnosis, and improve diagnostic efficacy. As an important branch of deep learning, the convolutional neural network (CNN), with its local connectivity and weight sharing, excels at visual tasks such as image segmentation, image classification, and object detection, and has been widely used in computer-aided diagnosis of pneumoconiosis in recent years. This literature review is organized into three parts according to the main applications of CNNs (VGG, U-Net, ResNet, DenseNet, CheXNet, Inception-V3, and ShuffleNet) in the imaging diagnosis of pneumoconiosis: CNNs in pneumoconiosis screening, CNNs in staging pneumoconiosis, and CNNs in segmenting pneumoconiosis lesions. It aims to summarize the methods, advantages and disadvantages, and optimization ideas of CNNs applied to pneumoconiosis images, and to provide a reference for further development of computer-aided diagnosis of pneumoconiosis.
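As a minimal example of the transfer-learning pattern common to the CNNs reviewed above, the following sketch adapts an ImageNet-pretrained ResNet for pneumoconiosis screening. The choice of ResNet-50 and a binary head are assumptions, not a method from any single reviewed study.

```python
import torch.nn as nn
from torchvision import models

def build_screening_model(n_classes=2):
    """Illustrative transfer-learning setup: a ResNet pretrained on
    ImageNet, with its final layer replaced for pneumoconiosis
    screening (normal vs. pneumoconiosis).
    """
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    # Replace the ImageNet classification head with a task-specific one.
    model.fc = nn.Linear(model.fc.in_features, n_classes)
    return model
```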