Objective To evaluate the effect of surgical treatment of vertebral artery stenosis and to summarize the experience. Methods The clinical data of 6 patients who underwent surgical treatment from September 2018 to September 2019 were retrospectively analyzed. Results All procedures were completed successfully, with no intraoperative cerebral infarction, injury to the thoracic duct, or inadvertent nerve transection. The operative time ranged from 120 to 270 minutes (median 180 minutes), and blood loss ranged from 50 to 150 mL (median 65 mL). One patient developed Horner's syndrome after the operation, and one patient suffered a cerebral infarction 4 days after the operation. During the 3- to 10-month follow-up, three patients reported relief of dizziness, and no anastomotic stricture or new cerebral infarction occurred. Conclusions Surgical treatment is safe and effective for vertebral artery stenosis. Simultaneous revascularization of the carotid and vertebral arteries should be avoided.
Objective To examine the statistical performance of different rare-event meta-analysis methods. Methods Using Monte Carlo simulation, we constructed a variety of scenarios to evaluate the performance of various rare-event meta-analysis methods. The performance measures included absolute percentage error, root mean square error, and interval coverage. Results Across different scenarios, the absolute percentage error and root mean square error were similar for the Bayesian logistic regression model, the generalized linear mixed-effects model, and continuity correction, but interval coverage was higher with the Bayesian logistic regression model. The statistical performance of the Mantel-Haenszel and Peto methods was consistently suboptimal across scenarios. Conclusions The Bayesian logistic regression model may be recommended as the preferred approach for rare-event meta-analysis.
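To make the simulation design concrete, the sketch below illustrates, with purely assumed scenario parameters and not the authors' simulation program, how two of the compared approaches (the Mantel-Haenszel pooled odds ratio and inverse-variance pooling with a 0.5 continuity correction) can be benchmarked on simulated rare-event 2x2 tables using root mean square error and interval coverage on the log odds ratio scale.

```python
# A minimal Monte Carlo sketch (assumed scenario parameters, not the
# authors' code): benchmark the Mantel-Haenszel pooled odds ratio and
# inverse-variance pooling with a 0.5 continuity correction on rare-event
# 2x2 tables, reporting RMSE and 95% CI coverage on the log-OR scale.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2024)
K, n, p0, true_or = 10, 200, 0.01, 2.0          # studies, patients per arm, control risk, true OR
p1 = true_or * p0 / (1 - p0 + true_or * p0)     # treatment risk implied by the true OR
n_sim, z, log_true = 2000, norm.ppf(0.975), np.log(true_or)

est_mh, est_cc, covered = [], [], 0
for _ in range(n_sim):
    a = rng.binomial(n, p1, K)                  # events in the treatment arms
    c = rng.binomial(n, p0, K)                  # events in the control arms
    b, d = n - a, n - c
    # Mantel-Haenszel pooled odds ratio (no continuity correction)
    num, den = np.sum(a * d / (2 * n)), np.sum(b * c / (2 * n))
    if num > 0 and den > 0:
        est_mh.append(np.log(num / den))
    # Inverse-variance pooling after adding 0.5 to every cell
    a2, b2, c2, d2 = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    log_or = np.log(a2 * d2 / (b2 * c2))
    w = 1 / (1 / a2 + 1 / b2 + 1 / c2 + 1 / d2)
    pooled, se = np.sum(w * log_or) / np.sum(w), np.sqrt(1 / np.sum(w))
    est_cc.append(pooled)
    covered += (pooled - z * se) <= log_true <= (pooled + z * se)

rmse = lambda e: np.sqrt(np.mean((np.asarray(e) - log_true) ** 2))
print(f"true log OR {log_true:.3f}")
print(f"Mantel-Haenszel : mean {np.mean(est_mh):.3f}, RMSE {rmse(est_mh):.3f}")
print(f"Continuity corr.: mean {np.mean(est_cc):.3f}, RMSE {rmse(est_cc):.3f}, coverage {covered / n_sim:.3f}")
```

A full replication of the study design would add the Bayesian logistic regression and generalized linear mixed-effects models and vary the event risks, study counts, and arm sizes across scenarios.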
Repeated measurement quantitative data are a common data type in clinical studies and are frequently used to assess the therapeutic effects of interventions at a single time point in clinical trials. This study clarifies the concepts and calculation methods for sample size estimation with repeated measurement quantitative data, addressing the research question of "comparing group differences at a single time point" from three perspectives: the primary research question of the clinical study, the main statistical analysis method, and the definition of the primary outcome indicator. Discrepancies in the sample sizes calculated by various methods under different correlation coefficients and numbers of repeated measurements were examined. The study shows that the sample size calculation method based on the mixed-effects model or generalized estimating equations accounts for both the correlation coefficient and the number of repeated measurements and yields the smallest estimated sample size. The method based on analysis of covariance considers the correlation coefficient and produces a smaller estimated sample size than the t-test, while the t-test based method requires an appropriate formula to be selected according to the definition of the primary outcome measure. Alignment between the sample size calculation method, the statistical analysis method, and the definition of the primary outcome measure is essential to avoid overestimating or underestimating the required sample size.
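As a rough numerical illustration of why the approaches diverge, the sketch below applies standard textbook formulas (the formulas and the numerical inputs are illustrative assumptions, not taken from this article): the plain two-sample t-test formula, the ANCOVA formula that deflates the variance by (1-ρ²), and a mixed-model/GEE-style formula in which comparing the mean of m correlated measurements scales the variance by (1+(m-1)ρ)/m.

```python
# Illustrative per-group sample sizes under three assumed textbook formulas
# (not the article's own derivations): two-sample t-test, ANCOVA with
# baseline adjustment, and a mixed-model/GEE-style comparison of the mean
# of m correlated post-baseline measurements.
from scipy.stats import norm

def n_ttest(delta, sigma, alpha=0.05, power=0.8):
    """Per-group n for comparing two means at a single time point."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

def n_ancova(delta, sigma, rho, alpha=0.05, power=0.8):
    """Adjusting for the baseline value deflates the variance by (1 - rho^2)."""
    return n_ttest(delta, sigma, alpha, power) * (1 - rho ** 2)

def n_mixed(delta, sigma, rho, m, alpha=0.05, power=0.8):
    """Comparing the mean of m equally correlated measurements (compound
    symmetry) scales the variance by (1 + (m - 1) * rho) / m."""
    return n_ttest(delta, sigma, alpha, power) * (1 + (m - 1) * rho) / m

delta, sigma = 0.5, 1.0                     # assumed standardized difference and SD
for rho in (0.3, 0.5):
    for m in (3, 5):
        print(f"rho={rho}, m={m}: t-test={n_ttest(delta, sigma):.0f}, "
              f"ANCOVA={n_ancova(delta, sigma, rho):.0f}, "
              f"mixed/GEE={n_mixed(delta, sigma, rho, m):.0f}")
```

Under these assumed formulas the required sample size depends on both ρ and m, which is exactly why the calculation method must match the planned analysis and the definition of the primary outcome.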
As real-world evidence matures as a research system, with guideline specifications supporting pre-market registration and post-market regulatory decision-making for clinically urgent drugs and medical devices, how to ensure the high quality and standardization of real-world data and thereby establish a basis for generating real-world evidence is receiving increasing attention from regulatory authorities. Based on the experience of the Boao Hope City real-world data research model and the construction of its ophthalmic data platform, this paper discusses the "source data-database-evidence chain" generation process, data management, and data governance in real-world studies from the perspectives of the multi-source and heterogeneous nature of the data, the variety of research designs, and standardized regulatory requirements, and provides a reference for the future construction of comprehensive research data platforms.
Objective To explore the use of longitudinal data in constructing prediction models for non-time-varying outcomes and to compare the impact of different modeling approaches on prediction performance. Methods Clinical predictors were selected using univariate analysis and Lasso regression. Prediction models for non-time-varying outcomes were developed based on latent class trajectory analysis, the two-stage model, and logistic regression. Internal validation was performed using bootstrap resampling, and model performance was evaluated using ROC curves, PR curves, sensitivity, specificity, and other relevant metrics. Results A total of 49 629 pregnant women were included, with a mean age of 31.42±4.13 years and a mean pre-pregnancy BMI of 20.91±2.62 kg/m². Fourteen predictors were incorporated into the final model. Prediction models utilizing longitudinal data demonstrated high accuracy, with AUROC values exceeding 0.90 and PR-AUC values greater than 0.47. The two-stage model based on late-pregnancy hemoglobin data showed the best performance, achieving an AUROC of 0.93 (95%CI 0.92 to 0.94) and a PR-AUC of 0.60 (95%CI 0.56 to 0.64). Internal validation confirmed robust model performance, and calibration curves indicated good agreement between predicted and observed outcomes. Conclusion For longitudinal data, the two-stage model captures the dynamic trajectory of change well. The predictive value of repeated measurement data differs across clinical outcomes.
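One common way to carry out the bootstrap internal validation described above is Harrell-style optimism correction; the sketch below is a minimal illustration assuming scikit-learn, with synthetic data and a plain logistic model as placeholders for the study's cohort and final model.

```python
# Minimal sketch of optimism-corrected bootstrap internal validation with
# ROC and PR metrics, assuming scikit-learn; the synthetic data and the
# plain logistic model are placeholders, not the study's cohort or model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=14, weights=[0.95],
                           random_state=0)              # rare outcome, 14 predictors

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
apparent_pr = average_precision_score(y, model.predict_proba(X)[:, 1])

opt_auc, opt_pr = [], []
for _ in range(200):                                    # bootstrap resamples
    idx = rng.integers(0, len(y), len(y))
    if y[idx].sum() < 2:                                # need events to fit and score
        continue
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    p_boot, p_orig = m.predict_proba(X[idx])[:, 1], m.predict_proba(X)[:, 1]
    opt_auc.append(roc_auc_score(y[idx], p_boot) - roc_auc_score(y, p_orig))
    opt_pr.append(average_precision_score(y[idx], p_boot)
                  - average_precision_score(y, p_orig))

print("optimism-corrected AUROC :", apparent_auc - np.mean(opt_auc))
print("optimism-corrected PR-AUC:", apparent_pr - np.mean(opt_pr))
```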
With the establishment and development of regional healthcare big data platforms, regional healthcare big data is playing an increasingly important role in health policy evaluation. Such data are usually structured hierarchically. Traditional statistical models have limitations in analyzing hierarchical data, whereas multilevel models are powerful statistical tools for this purpose. The method has been used frequently by healthcare researchers overseas but remains underused in China. This paper introduces the multilevel model and several common application scenarios in medicine policy evaluation, aiming to provide a methodological framework for medicine policy evaluation using regional healthcare big data or other hierarchical data.
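For concreteness, a two-level random-intercept model of the kind used in such evaluations can be sketched as follows, assuming statsmodels; the simulated data and the variable names (outcome, policy, age, region) are hypothetical.

```python
# Minimal sketch of a two-level random-intercept model for policy
# evaluation, assuming statsmodels; the dataset and variable names
# (outcome, policy, age, region) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_region, n_per = 30, 200
region = np.repeat(np.arange(n_region), n_per)            # patients nested in regions
region_effect = rng.normal(0, 0.5, n_region)[region]      # level-2 (region) variation
policy = rng.integers(0, 2, n_region)[region]             # region-level policy exposure
age = rng.normal(50, 10, n_region * n_per)
outcome = 2.0 + 0.8 * policy + 0.02 * age + region_effect + rng.normal(0, 1, n_region * n_per)

df = pd.DataFrame({"outcome": outcome, "policy": policy, "age": age, "region": region})

# The random intercept for region captures clustering of patients within regions
m = smf.mixedlm("outcome ~ policy + age", df, groups=df["region"]).fit()
print(m.summary())
```

Cross-level extensions (for example, region-level covariates or random slopes for the policy effect) follow the same formula interface.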
The use of repeated measurement data from patients to improve the classification ability of prediction models is a key methodological issue in the current development of clinical prediction models. This study aims to investigate the statistical modeling approach of the two-stage model in developing prediction models for non-time-varying outcomes using repeated measurement data. Using the prediction of the risk of severe postpartum hemorrhage as a case study, this study presents the implementation process of the two-stage model from various perspectives, including data structure, basic principles, software utilization, and model evaluation, to provide methodological support for clinical investigators.
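A minimal sketch of the two stages, under assumed data and variable names (repeated hemoglobin measurements and a binary outcome), is shown below: stage 1 fits a linear mixed model to the longitudinal measurements and extracts subject-level random intercepts and slopes; stage 2 uses those summaries as predictors in a logistic regression for the non-time-varying outcome. This illustrates the general two-stage idea rather than the study's exact specification.

```python
# Minimal sketch of a two-stage model, assuming statsmodels; the synthetic
# repeated hemoglobin measurements and the binary outcome are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n, m = 500, 4                                          # subjects, visits per subject
sid = np.repeat(np.arange(n), m)
t = np.tile(np.arange(m), n)
b0 = rng.normal(120, 8, n)                             # subject-specific intercepts
b1 = rng.normal(-2, 1.5, n)                            # subject-specific slopes
hb = b0[sid] + b1[sid] * t + rng.normal(0, 3, n * m)   # repeated hemoglobin values
long = pd.DataFrame({"sid": sid, "t": t, "hb": hb})

# Stage 1: random intercept and slope for each subject
stage1 = smf.mixedlm("hb ~ t", long, groups=long["sid"], re_formula="~t").fit()
re = pd.DataFrame(stage1.random_effects).T.sort_index()  # rows = subjects
re.columns = ["re_intercept", "re_slope"]                 # positional rename

# Stage 2: subject-level logistic regression using the stage-1 summaries
p = 1 / (1 + np.exp(-(-3 - 0.5 * (b1 - b1.mean()))))      # risk linked to hemoglobin decline
y = rng.binomial(1, p)
X = sm.add_constant(re[["re_intercept", "re_slope"]])
stage2 = sm.Logit(y, X).fit(disp=0)
print(stage2.summary())
```

In practice, the stage-1 summaries would be derived only from measurements available at the intended moment of prediction, so that the stage-2 model remains usable prospectively.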
Evidence synthesis is the process of systematically gathering, analyzing, and integrating available research evidence, and its quality depends on the quality of the original studies included. Validity assessment, also known as risk of bias assessment, is an essential method for assessing the quality of these original studies. Numerous validity assessment tools are currently available, but some lack a rigorous development process and evaluation. Applying inappropriate validity assessment tools to the original studies during evidence synthesis may compromise the accuracy of the conclusions and mislead clinical practice. To address this problem, the LATITUDES Network, a one-stop resource website for validity assessment tools, was established in September 2023, led by academics at the University of Bristol, UK. The Network is dedicated to collecting, curating, and promoting validity assessment tools to improve the accuracy of validity assessments of original studies and to increase the robustness and reliability of evidence synthesis results. This study introduces the background of the LATITUDES Network, the validity assessment tools it includes, and its training resources, so that domestic scholars can learn more about the Network, select appropriate validity assessment tools for study quality assessment, and draw on it when developing new validity assessment tools.
High-quality randomized controlled trials (RCTs) are regarded as the gold standard for assessing the efficacy and safety of drugs. However, conducting RCTs is expensive and time-consuming, and providing timely RCT evidence for regulatory agencies and medical decision-makers can be challenging, particularly for new or emerging serious diseases. Additionally, the strict design of RCTs often results in weak external validity, making it difficult to provide evidence on the clinical efficacy and safety of drugs in broader populations. In contrast, large simple clinical trials (LSTs) can expedite the research process and provide more generalizable and reliable evidence at a lower cost. This article presents the development of LSTs, their features, and the distinctions between LSTs and RCTs, as well as special considerations when conducting LSTs, in accordance with the literature and guidance documents from regulatory agencies in China and other countries. Furthermore, it assesses the potential of real-world data to support the development of LSTs, offering researchers insight and guidance on how to conduct LSTs.
Interrupted time series (ITS) analysis is a quasi-experimental design for evaluating the effectiveness of health interventions. By controlling for the pre-intervention time trend, ITS is often used to estimate the level change and slope change after the intervention. However, the traditional ITS modeling strategy may be subject to aggregation bias when the data are collected from different clusters. This study introduces two advanced ITS methods for handling hierarchical data, providing a methodological framework for evaluating population-level health interventions.
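As the starting point on which such hierarchical extensions build, the standard single-series segmented regression can be sketched as follows, assuming statsmodels; the simulated monthly series and intervention point are illustrative. The coefficient on the post-intervention indicator estimates the level change, and the coefficient on time-since-intervention estimates the slope change.

```python
# Minimal sketch of single-series segmented regression for an interrupted
# time series, assuming statsmodels; the simulated monthly outcome and the
# intervention point are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
months = np.arange(48)
post = (months >= 24).astype(int)                       # intervention at month 24
time_since = np.where(post == 1, months - 24, 0)        # months elapsed since intervention
y = 50 + 0.3 * months - 5 * post - 0.4 * time_since + rng.normal(0, 2, 48)

df = pd.DataFrame({"y": y, "time": months, "post": post, "time_since": time_since})

# 'post' estimates the level change; 'time_since' estimates the slope change
fit = smf.ols("y ~ time + post + time_since", df).fit()
print(fit.params)
```

The hierarchical approaches discussed in the article extend this specification when series come from multiple clusters, for example by allowing the level-change and slope-change terms to vary across clusters or by pooling cluster-specific estimates.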