With the increasing availability of clinical and biomedical big data, machine learning is widely used in scientific research and academic publications, integrating diverse types of information to predict individual health outcomes. However, deficiencies in the reporting of key information have gradually emerged, including data bias, model fairness across different population groups, problems with data quality and applicability, and the challenge of maintaining predictive accuracy and interpretability in real-world clinical settings. These deficiencies increase the complexity of applying predictive models safely and effectively in clinical practice. To address these problems, TRIPOD+AI (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis + artificial intelligence), building on TRIPOD, introduces a reporting standard for machine learning models that aims to improve transparency, reproducibility, and health equity, thereby enhancing the quality of machine learning model applications. Research on machine learning-based prediction models is currently increasing rapidly. To help domestic readers better understand and apply TRIPOD+AI, we provide examples and interpretations, in the hope of supporting researchers in improving the quality of their reports.
[Abstract] Expert consensus serves as a crucial supplement to clinical practice guidelines, offering guidance when evidence is insufficient or controversial. However, consensus statements often suffer from low reporting quality, incomplete content, and a lack of transparency in their development processes. Reporting guidelines provide a standardized framework for medical research documentation by prescribing content requirements and structural formats. As reporting guidelines for different types of guideline documents, the RIGHT (Reporting Items for Practice Guidelines in Healthcare) checklist emphasizes evidence quality assessment and recommendation formulation, while the ACCORD (ACcurate COnsensus Reporting Document) checklist focuses on standardizing consensus processes; each has distinct strengths and limitations. To address these gaps, this study proposes an integrated framework (TIMER-DO) that compensates for the deficiencies of the individual checklists and enhances the reporting quality of consensus statements. Future efforts should develop consensus-specific methodological quality assessment tools, streamline and optimize reporting guidelines, strengthen dissemination and personnel training initiatives for consensus reporting standards, and enhance the global impact and recognition of consensus documents.