This paper presents an automatic white blood cell segmentation method based on information fusion in a corrected HSI colour space. First, the original cell image is converted to the HSI colour space. Because the piecewise transformation formula for the H component is discontinuous, regions of visually uniform cytoplasm in the original image lose their uniformity in this channel. We therefore modified the formula, and then extracted the nucleus, cytoplasm, red blood cell, and background regions according to the distribution characteristics of the H, S, and I channels. Using the theory and methods of information fusion, we built fusion image I and fusion image II, which contain only the cytoplasm and a small amount of interference, and extracted the nucleus and cytoplasm respectively. Finally, we marked the nucleus and cytoplasm regions to obtain the final segmentation result. Simulation results show that the new white blood cell segmentation algorithm has high accuracy, robustness, and generality.
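To illustrate the colour-space step, the standard geometric RGB-to-HSI conversion can be sketched in NumPy as below. This is the textbook formula, including the piecewise H definition whose discontinuity the abstract refers to, not the authors' corrected version; the function name is illustrative.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1]) to HSI, each channel in [0, 1].

    Uses the standard geometric formulas; H is the piecewise definition
    (theta when B <= G, 2*pi - theta otherwise) that is discontinuous.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8  # guard against division by zero for black pixels
    i = (r + g + b) / 3.0
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta) / (2.0 * np.pi)
    return np.stack([h, s, i], axis=-1)
```

For a pure-red pixel this yields H near 0, S = 1, and I = 1/3, matching the usual HSI conventions.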
In this paper, we propose a new active contour algorithm, hierarchical contextual active contour (HCAC), and apply it to automatic liver segmentation from three-dimensional CT (3D-CT) images. HCAC is a learning-based method with two stages. In the first, the training stage, given a set of abdominal 3D-CT training images and the corresponding manual liver labels, we establish a mapping between the automatic segmentation of each round and the manual reference segmentation via context features, obtaining a series of self-correcting classifiers. In the second, the segmentation stage, we first segment the image with a basic active contour and then iteratively apply the contextual active contour (CAC), which combines image information with the current shape model, to improve the segmentation result. The current shape model is produced by the corresponding self-correcting classifier, whose input is the previous automatic segmentation result. The proposed method was evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results showed that segmentation accuracy improved progressively over the iterations, with satisfactory results obtained after about six rounds.
Objective To develop a deep learning neural network architecture to assist automatic segmentation of knee CT images, and to validate its accuracy. Methods A database of knee CT scans was established, and the bony structures were manually annotated. A deep learning neural network architecture was developed independently, and the labeled database was used to train and test the network. The Dice coefficient, average surface distance (ASD), and Hausdorff distance (HD) were calculated to evaluate the accuracy of the network, and the times required for automatic and manual segmentation were compared. Five orthopedic experts were invited to score the automatic and manual segmentation results on a Likert scale, and the scores of the two methods were compared. Results The automatic segmentation achieved high accuracy. The Dice coefficient, ASD, and HD of the femur were 0.953±0.037, (0.076±0.048) mm, and (3.101±0.726) mm, respectively; those of the tibia were 0.950±0.092, (0.083±0.101) mm, and (2.984±0.740) mm, respectively. The time for automatic segmentation was significantly shorter than that for manual segmentation [(2.46±0.45) minutes vs. (64.73±17.07) minutes; t=36.474, P<0.001]. The clinical scores of the femur were 4.3±0.3 in the automatic segmentation group and 4.4±0.2 in the manual segmentation group, and the scores of the tibia were 4.5±0.2 and 4.5±0.3, respectively, with no significant difference between the two groups (t=1.753, P=0.085; t=0.318, P=0.752). Conclusion Deep-learning-based automatic segmentation of knee CT images is highly accurate and enables rapid segmentation and three-dimensional reconstruction. This method should promote the development of new technology-assisted techniques in total knee arthroplasty.
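The evaluation metrics used above are standard and easy to state in code. The NumPy sketch below computes the Dice coefficient on binary masks and the (symmetric, exact) Hausdorff distance on two point sets; it is a minimal illustration, not the evaluation code used in the study, and in practice the distances would be computed on extracted surface points in millimetre coordinates.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between point sets a, b of shape (N, d)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(),  # farthest point of a from b
               d.min(axis=0).max())  # farthest point of b from a
```

The average surface distance (ASD) replaces the two `max` reductions with means over the same nearest-neighbour distances.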
Glaucoma is the leading cause of irreversible blindness, but its early symptoms are subtle and easily overlooked, so early screening for glaucoma is particularly important. The cup-to-disc ratio is an important indicator in clinical glaucoma screening, and accurate segmentation of the optic cup and disc is the key to calculating it. In this paper, a fully convolutional neural network with a residual multi-scale convolution module was proposed for optic cup and disc segmentation. First, the fundus image was contrast-enhanced and a polar transformation was applied. Subsequently, W-Net was used as the backbone network, with the standard convolution unit replaced by the residual multi-scale fully convolutional module; an image pyramid was added at the input to construct a multi-scale input, and side output layers served as early classifiers generating local prediction outputs. Finally, a new multi-label loss function was proposed to guide network segmentation. The mean intersection over union of optic cup and disc segmentation on the REFUGE dataset was 0.9040 and 0.9553 respectively, and the overlap error was 0.1780 and 0.0665 respectively. The results show that this method not only realizes joint segmentation of the cup and disc, but also effectively improves segmentation accuracy, which could help promote large-scale early glaucoma screening.
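The polar transformation mentioned above unrolls the roughly circular disc region into a rectangular image, which tends to balance the cup/disc/background pixel proportions. A minimal nearest-neighbour NumPy sketch, assuming a known disc centre and sampling radius (both hypothetical parameters here), is:

```python
import numpy as np

def polar_transform(img, center, radius, n_theta=360, n_r=100):
    """Sample a 2D image on a polar grid around `center` (row, col).

    Returns an (n_r, n_theta) array: rows index radius, columns index angle.
    Nearest-neighbour sampling for simplicity; real pipelines interpolate.
    """
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rs = np.linspace(0.0, radius, n_r)
    tt, rr = np.meshgrid(thetas, rs)               # shape (n_r, n_theta)
    cols = np.clip(np.round(center[1] + rr * np.cos(tt)).astype(int),
                   0, img.shape[1] - 1)
    rows = np.clip(np.round(center[0] + rr * np.sin(tt)).astype(int),
                   0, img.shape[0] - 1)
    return img[rows, cols]
```

The inverse mapping restores the network's rectangular prediction to Cartesian coordinates before computing the cup-to-disc ratio.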
The skin is the largest organ of the human body, and many visceral diseases are reflected directly on the skin, so accurate segmentation of skin lesion images is of great clinical significance. To address the complex colors, blurred boundaries, and uneven scale information of such images, a skin lesion image segmentation method based on dense atrous spatial pyramid pooling (DenseASPP) and an attention mechanism is proposed. The method is based on the U-shaped network (U-Net). First, the encoder is redesigned to replace plain stacked convolutions with a large number of residual connections, which retains key features even as the network deepens. Second, channel attention is fused with spatial attention, with residual connections added so that the network can adaptively learn the channel and spatial features of images. Finally, the DenseASPP module is introduced and redesigned to enlarge the receptive field and obtain multi-scale feature information. The proposed algorithm achieved satisfactory results on the official public dataset of the International Skin Imaging Collaboration (ISIC 2016): the mean intersection over union (mIOU), sensitivity (SE), precision (PC), accuracy (ACC), and Dice coefficient were 0.9018, 0.9459, 0.9487, 0.9681, and 0.9473, respectively. The experimental results demonstrate that the method improves the segmentation of skin lesion images and is expected to provide auxiliary diagnosis support for professional dermatologists.
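The channel and spatial attention steps can be illustrated with a parameter-free NumPy sketch: each branch pools the feature map, passes the pooled statistics through a sigmoid gate, and rescales the input. This omits the learned weights (MLPs/convolutions) and the exact fusion the authors use; it only shows the gating pattern, with the residual connection applied outside.

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Gate each channel of a (C, H, W) map by its pooled statistics."""
    avg = feat.mean(axis=(1, 2))           # global average pool, shape (C,)
    mx = feat.max(axis=(1, 2))             # global max pool, shape (C,)
    w = _sigmoid(avg + mx)                 # per-channel gate in (0, 1)
    return feat * w[:, None, None]

def spatial_attention(feat):
    """Gate each spatial location of a (C, H, W) map by its pooled statistics."""
    avg = feat.mean(axis=0)                # channel-wise average, shape (H, W)
    mx = feat.max(axis=0)                  # channel-wise max, shape (H, W)
    w = _sigmoid(avg + mx)                 # per-pixel gate in (0, 1)
    return feat * w[None, :, :]

def attention_block(feat):
    """Channel then spatial attention, with a residual connection."""
    return feat + spatial_attention(channel_attention(feat))
```

In a trained network the gates are produced by small learned sub-networks rather than the raw pooled sums used here.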
Positron emission tomography/computed tomography (PET/CT) lung images present several difficulties: lesion regions contain few characteristic pixels, lesion shapes are complex and diverse, and the boundaries between lesions and surrounding tissues are blurred, all of which lead to inadequate extraction of tumor lesion features by the model. To solve these problems, this paper proposes a dense interactive feature fusion Mask RCNN (DIF-Mask RCNN) model. Firstly, a feature extraction network with a cross-scale backbone and auxiliary structures was designed to extract lesion features at different scales. Then, a dense interactive feature enhancement network was designed to enhance lesion detail in the deep feature maps by interactively fusing the shallowest lesion features with neighboring and current features through dense connections. Finally, a dense interactive feature fusion feature pyramid network (FPN) was constructed, in which shallow information was added to the deep features step by step along the bottom-up path with dense connections, further enhancing the model's perception of weak features in the lesion region. Ablation and comparison experiments were conducted on a clinical PET/CT lung image dataset. The APdet, APseg, APdet_s, and APseg_s of the proposed model were 67.16%, 68.12%, 34.97%, and 37.68%, respectively; compared with Mask RCNN (ResNet50), APdet and APseg increased by 7.11% and 5.14%, respectively. The DIF-Mask RCNN model can effectively detect and segment tumor lesions, providing an important reference and evaluation basis for computer-aided diagnosis of lung cancer.
The detection of electrocardiogram (ECG) characteristic waves is the basis of cardiovascular disease analysis and heart rate variability analysis. To address the low detection accuracy and poor real-time performance of ECG analysis during motion, this paper proposes a detection algorithm based on segmented energy and the stationary wavelet transform (SWT). First, the energy of the ECG signal is calculated segment by segment, and candidate energy peaks are obtained after moving averaging to detect the QRS complex. Second, the QRS amplitude is set to zero and the fifth SWT component is used to locate the P wave and T wave. Experimental results show that, compared with other algorithms, the proposed algorithm detects the QRS complex with high accuracy in different motion states. It takes only 0.22 s to detect the QRS complexes of a 30-minute ECG record, a marked improvement in real-time performance. Building on QRS detection, the accuracy of P wave and T wave detection exceeds 95%. These results show that the method can improve the efficiency of ECG signal detection and provides a new approach for real-time ECG classification and cardiovascular disease diagnosis.
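The segmented-energy stage can be sketched as follows: square the signal, sum the energy over fixed-length windows, smooth with a moving average, and keep segments whose energy clearly exceeds the smoothed baseline as QRS candidates. The window lengths and threshold factor below are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

def segment_energy(ecg, win=16):
    """Sum of squared samples over consecutive non-overlapping windows."""
    ecg = np.asarray(ecg, dtype=float)
    n = len(ecg) // win
    return (ecg[:n * win] ** 2).reshape(n, win).sum(axis=1)

def detect_candidate_peaks(energy, ma_win=8, k=1.5):
    """Return segment indices whose energy exceeds k x the moving-average
    baseline; these are the QRS candidate peaks."""
    kernel = np.ones(ma_win) / ma_win
    baseline = np.convolve(energy, kernel, mode="same")
    return np.where(energy > k * baseline)[0]
```

A sharp deflection in an otherwise flat record then stands out as a single candidate segment, which the SWT stage would refine into P/T wave locations.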
Objective To propose an innovative self-supervised learning method for vascular segmentation in computed tomography angiography (CTA) images by integrating feature reconstruction with masked autoencoding. Methods A 3D masked autoencoder-based framework was developed, wherein a 3D histogram of oriented gradients (HOG) was utilized for multi-scale vascular feature extraction. During pre-training, random masking was applied to local patches of CTA images, and the model was trained to jointly reconstruct the original voxels and the HOG features of the masked regions. The pre-trained model was then fine-tuned on two annotated datasets for clinical-level vessel segmentation. Results Evaluated on two independent datasets (30 labeled CTA images each), our method achieved segmentation accuracy superior to the supervised nnU-Net baseline, with Dice similarity coefficients of 91.2% vs. 89.7% (aorta) and 84.8% vs. 83.2% (coronary arteries). Conclusion The proposed self-supervised model significantly reduces manual annotation costs without compromising segmentation precision, showing substantial potential for enhancing clinical workflows in vascular disease management.
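The random patch masking used in pre-training can be sketched in NumPy: partition the volume into non-overlapping cubic patches, hide a fraction of them, and return both the masked volume and the hidden-voxel mask that the reconstruction losses (voxel and HOG) are evaluated on. The patch size and masking ratio are illustrative defaults, and the volume is assumed divisible by the patch size.

```python
import numpy as np

def mask_patches(volume, patch=8, ratio=0.6, rng=None):
    """Randomly zero out a fraction `ratio` of non-overlapping 3D patches.

    Returns (masked_volume, hidden) where `hidden` is a boolean array
    marking the voxels the model must reconstruct.
    """
    rng = np.random.default_rng(rng)
    grid = tuple(s // patch for s in volume.shape)
    keep = rng.random(grid) >= ratio                 # True = patch stays visible
    visible = keep
    for axis in range(3):                            # expand patch grid to voxels
        visible = np.repeat(visible, patch, axis=axis)
    masked = volume.copy()
    masked[~visible] = 0.0
    return masked, ~visible
```

During training, the loss would compare the decoder output against the original voxels and their 3D HOG descriptors only at `hidden` locations.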
Objective To propose a pulmonary artery segmentation method that integrates shape and position prior knowledge, aiming to solve the inaccurate segmentation caused by the high similarity and small size differences between the pulmonary arteries and surrounding tissues in CT images. Methods Based on the three-dimensional U-Net architecture and the image data of the PARSE 2022 database, shape and position prior knowledge was introduced to design feature extraction and fusion strategies that enhance pulmonary artery segmentation. The patient data were divided into three groups: a training set, a validation set, and a test set. The performance metrics for evaluating the model included the Dice similarity coefficient (DSC), sensitivity, accuracy, and the 95th-percentile Hausdorff distance (HD95). Results The study included pulmonary artery imaging data from 203 patients: 100 in the training set, 30 in the validation set, and 73 in the test set. A backbone network performed rough segmentation of the pulmonary arteries to obtain the complete vascular structure, and a branch network integrating shape and position information extracted features of the small pulmonary arteries, reducing interference from the pulmonary trunk and the left and right pulmonary arteries. Experimental results showed that the segmentation model based on shape and position prior knowledge achieved a higher DSC (82.81%±3.20% vs. 80.47%±3.17% vs. 80.36%±3.43%), sensitivity (85.30%±8.04% vs. 80.95%±6.89% vs. 82.82%±7.29%), and accuracy (81.63%±7.53% vs. 81.19%±8.35% vs. 79.36%±8.98%) than the traditional three-dimensional U-Net and V-Net methods. HD95 reached (9.52±4.29) mm, 6.05 mm shorter than the traditional methods, showing excellent performance at segmentation boundaries.
Conclusion The pulmonary artery segmentation method based on shape and position prior knowledge achieves precise segmentation of the pulmonary arteries and has potential application value in tasks such as bronchoscopy and percutaneous puncture surgery navigation.
Lung diseases such as lung cancer and COVID-19 seriously endanger human health and life, so early screening and diagnosis are particularly important. Computed tomography (CT) is one of the important means of screening lung diseases, and lung parenchyma segmentation based on CT images is a key step in such screening; high-quality lung parenchyma segmentation can effectively improve the early diagnosis and treatment of lung diseases. Automatic, fast, and accurate segmentation of lung parenchyma from CT images can compensate for the low efficiency and strong subjectivity of manual segmentation, and has become one of the research hotspots in this field. This paper reviews the research progress in lung parenchyma segmentation based on the literature published at home and abroad in recent years. Traditional machine learning methods and deep learning methods are compared and analyzed, with emphasis on progress in improving the network structures of deep learning models. Unsolved problems in lung parenchyma segmentation are discussed and development prospects outlined, providing a reference for researchers in related fields.