SurvBenchmark: comprehensive benchmarking study of survival analysis methods using both omics data and clinical data

Abstract

Survival analysis is a branch of statistics that models time-to-event data, treating the follow-up time and the survival (event) status jointly as the dependent response. Current comparisons of survival model performance mostly center on clinical data and classic statistical survival models, with prediction accuracy often serving as the sole metric of model performance. Moreover, survival analysis approaches for censored omics data have not been thoroughly investigated; the common workaround is to binarize the survival time and perform a classification analysis.

Here, we develop a benchmarking design, SurvBenchmark, that evaluates a diverse collection of survival models on both clinical and omics data sets. SurvBenchmark not only covers classical approaches such as the Cox model but also evaluates state-of-the-art machine learning survival models. All approaches were assessed using multiple performance metrics covering model predictability, stability, flexibility, and computational cost. Our systematic comparison design, with 320 comparisons (20 methods over 16 data sets), shows that the performance of survival models varies in practice across real-world data sets and across evaluation metrics. In particular, we highlight that using multiple performance metrics is critical to a balanced assessment of the various models. The results of our study provide practical guidelines for translational scientists and clinicians, and identify possible areas of investigation in both survival techniques and benchmarking strategies.

Article activity feed

  1. Survival

    **Reviewer 2. Animesh Acharjee**

    SurvBenchmark: comprehensive benchmarking study of survival analysis methods using both omics data and clinical data.

    The authors compared many survival analysis methods and created a benchmarking framework called SurvBenchmark. This is one of the most extensive studies of survival analysis methods and will be useful for the translational community. I have a few suggestions to improve the quality of the manuscript.

    1. Figure 1: LASSO, EN and Ridge are regularization methods. So, I would suggest including a new classification category, "regularization" or "penalization" methods, and taking those out of the non-parametric models. Obviously, this also needs to be reflected accordingly in the methodology section and the discussion.
    2. Data sets: please provide a table listing the six clinical and ten omics data sets, with the number of samples, the number of features, and a reference link for each.
    3. Discussion section: How should the method be chosen? What criteria should be used? I understand that one size does not fit all, but some sort of clear guidance would be very useful. Sample-size-related aspects also need to be discussed in more detail. In omics research the number of samples is often very limited, and deep-learning-based survival analysis is not feasible, as the authors mention at lines 328-331. So the question arises: when should we use deep-learning-based methods, and when should we not?

    **Reviewer 3. Xiangqian Guo** Accept

  2. Abstract

    This work has been published in GigaScience under a CC-BY 4.0 license (https://doi.org/10.1093/gigascience/giac071), and the reviews are published under the same license.

    **Reviewer 1. Moritz Herrmann**

    First review: Summary:

    The authors conducted a benchmark study of survival prediction methods. The design of the study is reasonable in principle. The authors base their study on a comprehensive set of methods and performance evaluation criteria. In addition to standard statistical methods such as the CoxPH model and its variants, several machine learning methods including deep learning methods were used. In particular, the intention to conduct a benchmark study based on a large, diverse set of datasets is welcome. There is indeed a need for general, large-scale survival prediction benchmark studies. However, I have serious concerns about the quality of the study, and there are several points that need clarification and/or improvement.

    Major issues:

    1. The method comparison does not seem fair. As far as I can tell from the description of the methods, the method comparison is not fair and/or not informative. In particular, given the information provided in Supp-Table-3 and the code provided in the Github repository, hyperparameter tuning has not been conducted for some methods. For example, Supp-Table-3 indicates that the parameters 'stepnumber' and 'penaltynumber' of the CoxBoost method are set to 10 and 100, respectively. Similarly, only two versions of RSF with fixed ntree (100 and 1000) and mtry (10, 20) values are used. Also, the deep learning methods appear not to have been extensively tuned. On the other hand, judging from the code, methods such as the Cox model variants (implemented via glmnet) and MTLR have been tuned at least a little. Please explain clearly and in detail how the hyperparameters have been specified, and how hyperparameter tuning has been conducted, for the different methods. If, in fact, not all methods have been tuned, this is a serious issue and the experiments need to be rerun under a sound and fair tuning regime.

    2. Description of the study design. Related to the first point, the description of the study design needs to be improved in general, as it does not allow the conducted experiments to be assessed in detail. A few examples that require clarification:

    • as already mentioned, the method configurations and implementations are not described sufficiently. It is unclear how exactly the hyperparameter settings have been obtained, how tuning has been applied, and why only for some methods
    • concerning the methods Cox(GA), MTLR(GA), COXBOOST(GA), MTLR(DE), COXBOOST(DE): have the feature selection approaches been applied to the complete datasets or only to the training sets?
    • Supp-Table-3 lists two implementations of the Lasso, Ridge and Elastic Net Cox methods (via penalized and glmnet); yet, Figure 2 in the main manuscript only lists one version. Which implementations have been used and are reported in Figure 2?
    • l. 221: it is stated that "the raw Brier score" has been calculated. At which time point(s) and why at this/these time point(s)?
    • Supp-Table-2: it is stated that "some methods are not fully successful for all datasets", but only DNNSurv is further examined. Is it just DNNSurv, or are there other methods that have failed in some iterations? Moreover, what has been done about the failing iterations? Have the missing values been imputed? Are the failing iterations ignored?

    I recommend that section 3 be comprehensively revised and expanded, in particular covering the method implementations, how hyperparameters are obtained and how tuning has been conducted, the aggregation of performance results, and the handling of failing iterations. Moreover, I suggest providing summary tables of the methods and datasets in the main manuscript rather than in the supplement.

    3. Reliability of the presented results. In other studies [BRSB20, SCS+20, HPH+20], differences in (mean) model prediction performance have been reported to be small (while variation over datasets can be large). This can also be seen in Figure 3 of the main manuscript. Please include more analyses of the variability of prediction performances, and also include a comparison to a baseline method such as the Kaplan-Meier estimate. Most importantly, if some methods have been tuned while others have not, the reported results are not reliable. For example, the untuned methods are likely to be ill-specified for the given datasets and thus may yield sub-optimal prediction performance. Moreover, if internal hyperparameter tuning is conducted for some methods, for example via cv.glmnet for the Cox model variants, and not for others, the computation times are also not comparable.

    4. Clarity of language, structure and scope. I believe that the quality of the written English is not up to the standard of a scientific publication and consider language editing necessary (although it has to be taken into account that I am not a native speaker). Unlike related studies [e.g., BWSR21, SCS+20], the paper lacks clarity and/or coherence. Although clarity and coherence can be improved with language editing, there are also imprecise descriptions in section 2 that may additionally require editing from a technical perspective. For example:

    • l. 76 - 78: The way censoring is described is not coherent, e.g.: "the class label '0' (referring to a 'no-event') does not mean an event class labelled as '0'". Furthermore, it is not true that "the event-outcome is 'unknown'". The event is known, but the exact event time is not observed for censored observations.
    • The authors aim to provide a comprehensive benchmarking study of survival analysis methods. However, they do not, for example, provide significance tests for performance differences or critical difference plots (it should be noted that the number of datasets included may not provide enough power to do so). This is in stark contrast to the work of Sonabend [Son21]. (A minimal illustration of such a test is sketched below.)

    I suggest revising section 2 using more precise terminology and clearly describing the scope of the study, e.g., what type of censoring is being studied, whether time-dependent variables and effects are of interest, etc. I think this is very important, especially since the authors aim to provide "practical guidelines for translational scientists and clinicians" (l. 32) who may not be familiar with the specifics of survival analysis.
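
    To make the point about significance tests concrete, the following minimal R sketch illustrates one way such a comparison could be carried out: a Friedman test across datasets followed by Holm-adjusted pairwise Wilcoxon signed-rank tests. The performance matrix here is simulated and purely illustrative (it is not taken from the study), and with only 16 datasets such tests will have limited power, as noted above.

    ```r
    ## Hypothetical performance matrix: rows = datasets (blocks),
    ## columns = methods, entries = mean C-index per dataset.
    set.seed(1)
    perf <- matrix(runif(16 * 4, 0.55, 0.75), nrow = 16,
                   dimnames = list(paste0("dataset", 1:16),
                                   c("CoxPH", "CoxBoost", "RSF", "MTLR")))

    ## Global test: do the methods differ in rank across datasets?
    friedman.test(perf)

    ## Pairwise follow-up: Holm-adjusted Wilcoxon signed-rank tests,
    ## pairing the methods within each dataset.
    perf_long <- data.frame(
      cindex  = as.vector(perf),
      method  = rep(colnames(perf), each = nrow(perf)),
      dataset = rep(rownames(perf), times = ncol(perf))
    )
    pairwise.wilcox.test(perf_long$cindex, perf_long$method,
                         paired = TRUE, p.adjust.method = "holm")
    ```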

    Minor issues

    • l. 43: Include references for specific examples
    • l. 60: The cited reference probably is not correct
    • l. 266: "MTLR-based approaches perform significantly better". Was a statistical test performed to determine significant differences in performance? If yes, indicate which test was performed. If not, do not use the term "significant" as this may be misunderstood as statistical significance.
    • Briefly explain what the difference is between data sets GE1 to GE6.
    • It has been shown that omics data alone may not be very useful [VDBSB19]. Please explain why only omics variables are used for the respective datasets.
    • Figure 1: Consider changing the caption to 'An overview of survival methods used in this study' as there are survival methods that are not covered. Moreover, consider referencing Wang et al [WLR19] as Figure 1a resembles Figure 3 presented therein.
    • Figure 2: Please add more meaningful legends (e.g., title of legend; change numbers to Yes, No, etc.).
    • Figure 2 a & b: What do the dendrograms relate to?
    • Figure 2 d: The c-index is not a proper scoring rule [BKG19] (and only measures discrimination); it would be better to use the integrated Brier score (ideally at different evaluation time points), as it is a proper scoring rule and measures discrimination as well as calibration.
    • Figure 3: At which time point is the Brier score evaluated, and why at that time point? Consider using the integrated Brier score instead (a minimal example of computing it is sketched after this list).
    • This is rather subjective, but I find the use of the term "framework", especially the claim that the study contributes "the development of a benchmarking framework" (l. 60), confusing. For example, a general machine learning framework for survival analysis was developed by Bender et al. [BRSB20], while general computational benchmarking frameworks in R are provided, e.g., by mlr3 [LBR+19] or tidymodels [KW20]. The present study conducts a benchmark experiment with specific design choices, but in my opinion it does not develop a new benchmarking framework. Thus, I would suggest not using the term "framework" but rather "benchmark design" or "study design".
    • In addition, the authors speak of a "customizable weighting framework" (l. 241), but never revisit this weighting scheme in relation to the results or provide practical guidance for it. Please explain, with respect to the results, how this scheme can and should be applied in practice.
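
    As an illustration of the suggestion above, the sketch below computes the Brier score over a grid of time points and the integrated Brier score using the R packages survival and pec. The data are simulated and the evaluation times are arbitrary choices for illustration; pec is only one of several possible implementations, and this is not the evaluation code used in the study.

    ```r
    library(survival)
    library(pec)

    ## Simulated illustrative data: one covariate, exponential event times,
    ## independent censoring.
    set.seed(42)
    n <- 300
    x <- rnorm(n)
    event_time <- rexp(n, rate = exp(0.5 * x))
    cens_time  <- rexp(n, rate = 0.3)
    d <- data.frame(time   = pmin(event_time, cens_time),
                    status = as.integer(event_time <= cens_time),
                    x      = x)

    ## Cox model; x = TRUE is needed so pec can compute survival predictions.
    fit <- coxph(Surv(time, status) ~ x, data = d, x = TRUE)

    ## Brier score curves at a grid of evaluation time points. The built-in
    ## "Reference" model is the covariate-free Kaplan-Meier estimate.
    eval_times <- quantile(d$time, probs = seq(0.1, 0.9, by = 0.1))
    pe <- pec(object = list(Cox = fit),
              formula = Surv(time, status) ~ 1,
              data = d, times = eval_times, exact = FALSE,
              cens.model = "marginal")
    print(pe)

    ## Integrated Brier score up to the 90% quantile of follow-up time.
    crps(pe, times = max(eval_times))
    ```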

    The references need to be revised. A few examples:

    • l. 355 & 358: This seems to be the same reference.
    • l. 384: Title missing
    • l. 394: Year missing
    • l. 409: Year missing
    • l. 438: BioRxiv identifier missing
    • l. 441: ArXiv identifier missing
    • l. 445: Journal & Year missing

    Typos:

    • l. 66: . This
    • l. 89: missing comma after the formula
    • l. 93: missing whitespace
    • l. 107: therefore, (no comma)
    • l. 121: where for each, (no comma)
    • l. 170: examineS
    • l. 174: therefore, (no comma)
    • l. 195: as part of A multi-omics study; whitespace on wrong position; the sentence does not appear correct
    • l. 323: comes WITH a

    Data and code availability

    Data and code availability is acceptable. Yet, the ANZDATA and UNOS_kidney data are not freely available and require approval and/or request. Moreover, for better reproducibility and accessibility, the experiments could be implemented with a general purpose benchmarking framework like mlr3 or tidymodels.

    References

    [BKG19] Paul Blanche, Michael W. Kattan, and Thomas A. Gerds. The c-index is not proper for the evaluation of t-year predicted risks. Biostatistics, 20(2):347-357, 2019.
    [BRSB20] Andreas Bender, David Rügamer, Fabian Scheipl, and Bernd Bischl. A general machine learning framework for survival analysis. arXiv preprint arXiv:2006.15442, 2020.
    [BWSR21] Andrea Bommert, Thomas Welchowski, Matthias Schmid, and Jörg Rahnenführer. Benchmark of filter methods for feature selection in high-dimensional gene expression survival data. Briefings in Bioinformatics, 2021. bbab354.
    [HPH+20] Moritz Herrmann, Philipp Probst, Roman Hornung, Vindi Jurinovic, and Anne-Laure Boulesteix. Large-scale benchmark study of survival prediction methods using multi-omics data. Briefings in Bioinformatics, 22(3), 2020. bbaa167.
    [KW20] M. Kuhn and H. Wickham. Tidymodels: Easily install and load the 'tidymodels' packages. R package version 0.1.0, 2020.
    [LBR+19] Michel Lang, Martin Binder, Jakob Richter, et al. mlr3: A modern object-oriented machine learning framework in R. Journal of Open Source Software, 4(44):1903, 2019.
    [SCS+20] Annette Spooner, Emily Chen, Arcot Sowmya, Perminder Sachdev, Nicole A. Kochan, Julian Trollor, and Henry Brodaty. A comparison of machine learning methods for survival analysis of high-dimensional clinical data for dementia prediction. Scientific Reports, 10(1):1-10, 2020.
    [Son21] Raphael Edward Benjamin Sonabend. A theoretical and methodological framework for machine learning in survival analysis: Enabling transparent and accessible predictive modelling on right-censored time-to-event data. PhD thesis, UCL (University College London), 2021.
    [VDBSB19] Alexander Volkmann, Riccardo De Bin, Willi Sauerbrei, and Anne-Laure Boulesteix. A plea for taking all available clinical information into account when assessing the predictive value of omics data. BMC Medical Research Methodology, 19(1):1-15, 2019.
    [WLR19] Ping Wang, Yan Li, and Chandan K. Reddy. Machine learning for survival analysis: A survey. ACM Computing Surveys (CSUR), 51(6):1-36, 2019.

    Re-review:

    Many thanks for the very careful revision of the manuscript. Most of my concerns have been thoroughly addressed. I have only a few remarks left.

    Regarding 1. Fair comparison and parameter selection: The altered study design appears much better suited to this end. Thank you very much for the effort, in particular the additional results regarding the two tuning approaches. Although I think a single simple tuning regime would be feasible here, using the default settings is reasonable and very well justified. I agree that this is much closer to what is likely to take place in practice. However, it should be more clearly emphasized that better performance may be achievable if tuning is performed.
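
    For readers who do want to tune, a minimal sketch of cross-validated hyperparameter selection for two of the penalized methods is given below, using cv.glmnet (penalty for a Lasso-Cox model) and cv.CoxBoost (number of boosting steps). The simulated data and settings are placeholders, not the configuration used in the study.

    ```r
    library(survival)
    library(glmnet)
    library(CoxBoost)

    ## Placeholder training data: X is an n x p feature matrix; time and
    ## status form the right-censored survival outcome.
    set.seed(7)
    n <- 200; p <- 50
    X <- matrix(rnorm(n * p), n, p)
    time   <- rexp(n, rate = exp(0.3 * X[, 1]))
    status <- rbinom(n, 1, 0.7)

    ## Lasso-Cox: cv.glmnet selects the penalty lambda by internal
    ## cross-validation (5-fold, partial-likelihood deviance).
    cv_lasso <- cv.glmnet(X, Surv(time, status), family = "cox",
                          alpha = 1, nfolds = 5)
    cv_lasso$lambda.min

    ## CoxBoost: cv.CoxBoost selects the number of boosting steps by
    ## cross-validation instead of fixing it in advance.
    cv_boost <- cv.CoxBoost(time = time, status = status, x = X,
                            maxstepno = 200, K = 5)
    cv_boost$optimal.step
    ```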

    Regarding 2. Description: Thanks, all concerns properly addressed. No more comments.

    Regarding 3. Reliability: I am aware that Figure 2c provides information to this end. I think additional boxplots that aggregate the methods' performance (e.g., for unoc and bs) over all runs and datasets would provide valuable additional information. For example, from Figure 2c one can tell that MTLR variants obtain overall higher ranks based on mean prediction performance than the deep learning methods. However, it says nothing about how large the differences in mean performance are.
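
    One way to produce such an aggregate view is a simple boxplot of per-run performance grouped by method, as sketched below with ggplot2 on a hypothetical long-format results table (the column names and values are illustrative, not taken from the study).

    ```r
    library(ggplot2)

    ## Hypothetical long-format results: one row per (method, dataset, run)
    ## with a performance value; numbers are illustrative only.
    set.seed(3)
    results <- expand.grid(method  = c("CoxPH", "CoxBoost", "RSF", "MTLR"),
                           dataset = paste0("dataset", 1:16),
                           run     = 1:20)
    results$unoc <- runif(nrow(results), 0.5, 0.8)

    ## Boxplot of Uno's C-index aggregated over all runs and datasets.
    ggplot(results, aes(x = reorder(method, unoc, FUN = median), y = unoc)) +
      geom_boxplot() +
      coord_flip() +
      labs(x = "Method", y = "Uno's C-index (all runs and data sets)")
    ```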

    Kaplan-Meier estimate (KM): I'm not quite sure I understood the authors' answer correctly. The KM does not use variable information to produce an estimate of the survival function, and I think that is why it would be interesting to include it. This would shed light on how valuable the variables are in the different data sets.
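
    To illustrate what such a baseline looks like in R: the covariate-free Kaplan-Meier estimate can be fitted with survfit and compared, e.g. in terms of the Brier score, against any covariate-based model. The sketch below uses the lung dataset shipped with the survival package purely for illustration.

    ```r
    library(survival)

    ## Covariate-free Kaplan-Meier estimate: no covariate information is used.
    km_fit <- survfit(Surv(time, status) ~ 1, data = lung)

    ## The KM baseline predicts the same survival probability for everyone,
    ## here evaluated at the median follow-up time.
    t_eval  <- median(lung$time)
    km_surv <- summary(km_fit, times = t_eval)$surv

    ## A covariate-based model to compare against: if it cannot beat the
    ## constant KM prediction (e.g. in Brier score), the covariates add
    ## little predictive value on that dataset.
    cox_fit <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = lung)
    ```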

    Regarding 4. Scope and clarity: Thanks, all concerns properly addressed. No more comments.

    Minor points:

    • Since the authors decided to change 'framework' to 'design', note that Figure 1b still says 'framework'.
    • l.51 & l.54/55 appear to be redundant
    • Figure 2 a and b:
    • Please elaborate on how the similarity reflected in the dendrograms is defined.
    • Why is the IBS more similar to Begg's and GH C-index than to the Brier score?
    • Why is the IBS not feasible for so many methods, in particular Lasso_Cox, Ridge_Cox, and CoxBoost?