Accessibility and Reproducible Research Practices in Cardiovascular Literature

Curation statements for this article:
  • Curated by eLife


    eLife assessment

    This is a descriptive paper in the field of metascience, which documents levels of accessibility and reproducible research practices in the field of cardiovascular science. As such, it does not make a theoretical contribution, but it argues, first, that there is a problem for this field, and second, it provides a baseline against which the impact of future initiatives to improve reproducibility can be assessed. The study was pre-registered and the methods and data are clearly documented. This kind of study is extremely labour-intensive and represents a great deal of work.


Abstract

Background

Low reproducibility of scientific research and poor accessibility in research reporting practices have been documented in several fields. Although previous reports have investigated the accessible reporting practices that lead to reproducible research in other fields, to date no study has explored the extent of accessible and reproducible research practices in the cardiovascular science literature.

Methods

To study accessibility and reproducibility in cardiovascular research reporting, we screened 400 randomly selected articles published in 2019 in three top cardiovascular science publications: Circulation, the European Heart Journal, and the Journal of the American College of Cardiology (JACC). We screened each paper for accessible and reproducible research practices using a set of accessibility criteria including protocol, materials, data, and analysis script availability, as well as accessibility of the publication itself. We also quantified the consistency of open research practices within and across cardiovascular study types and journal formats.

Results

We identified that fewer than 2% of cardiovascular research publications provide sufficient resources (materials, methods, data, and analysis scripts) to fully reproduce their studies. After calculating an accessibility score as a measure of the extent to which an article makes its resources available, we also showed that the level of accessibility varies across study types (p = 2e-16) and across journals (p = 5.9e-13). We further show that study types differ significantly in which resources they share. We also describe the relationship between accessibility scores and corresponding author nationality.
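
To make the accessibility score and the between-group comparisons concrete, the sketch below shows one way such a score could be computed from per-article screening flags and compared across journals. The column names, the example data, and the choice of a Kruskal-Wallis test are illustrative assumptions only; the authors' published analysis scripts define the actual procedure.

```python
import pandas as pd
from scipy import stats

# Hypothetical screening results: one row per empirical article, with
# True/False flags for each accessibility criterion (illustrative names,
# not the authors' actual variables).
articles = pd.DataFrame({
    "journal": ["Circulation", "EHJ", "JACC", "Circulation", "EHJ", "JACC"],
    "protocol_available":  [True, False, False, True, False, True],
    "materials_available": [False, False, True, True, False, False],
    "data_available":      [True, False, False, True, True, False],
    "script_available":    [False, False, False, True, False, False],
})

criteria = ["protocol_available", "materials_available",
            "data_available", "script_available"]

# Accessibility score: fraction of screened criteria an article satisfies.
articles["accessibility_score"] = articles[criteria].mean(axis=1)

# Compare score distributions across journals; a Kruskal-Wallis test is one
# reasonable option, though the manuscript's own test may differ.
groups = [g["accessibility_score"].values
          for _, g in articles.groupby("journal")]
stat, p_value = stats.kruskal(*groups)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p_value:.3g}")
```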

Conclusion

Although the degree to which reproducible reporting practices are present in publications varies significantly across journals and study types, current cardiovascular science reports frequently do not provide sufficient materials, protocols, data, or analysis information to reproduce a study. In the future, higher standards of accessibility mandated by either journals or funding bodies will help increase the reproducibility of cardiovascular research.

Funding

Authors Gabriel Heckerman, Arely Campos-Melendez, and Chisomaga Ekwueme were supported by an NIH R25 grant from the National Heart, Lung, and Blood Institute (R25HL147666). Eileen Tzng was supported by an AHA Institutional Training Award fellowship (18UFEL33960207).

Article activity feed

  1. Author Response

    Reviewer #1 (Public Review):

    This manuscript reports a study investigating the reporting practices in three top cardiovascular research journals for articles published in 2019. The study was preregistered, which makes the intent and methodology transparent, and the authors also make their materials, data, and code open. While the preregistration and sampling strategy are a strength, the study suffers from a higher than expected number of non-empirical articles, which decreases the sample size and thus the inferences that can be drawn. The authors' focus was mainly on transparency of reporting and not on the actual reproducibility or replicability of the articles; however, the accessibility of data, code, materials, and methods is a prerequisite. While the authors were still able to draw inferences for their main objectives, they could not perform some of their proposed analyses because of a small sample size (due partly to fewer than half of the articles in their sample being empirical, as well as to the low number of papers with accessible information to code). One of the descriptive analyses they performed, the country-level scores (Figure 6), particularly suffers from the small sample size, and while the authors indicate this in their manuscript, I do not think it is reasonable to include, as it has the potential to be misinterpreted since so many of the scores are based on an n=1. Overall, I found the authors' presentation and discussion clear and concise; however, the lack of a more in-depth discussion is an area in which to improve the current manuscript. The manuscript outlines opportunities for researchers, journals, funders, and institutions to improve the way cardiovascular research is reported to enable discovery, reuse, and reproducibility.

    We appreciate the reviewer’s recognition of our pre-registration, methodology, and resource sharing, and also their feedback regarding the small sample size of empirical research articles and the need for a more in-depth discussion of the impacts of our study. We have now increased the number of empirical studies to a total of 393 out of 639 articles screened. We also agree that our study focuses more on transparency than on reproducibility and replicability, and we have changed our title to reflect this. While the sample size of empirical papers has increased, a comparison of accessibility scores across countries continued to suffer from a small sample size, and we have removed it based on the recommendation of the reviewers. We have updated the Materials and Methods section to reflect our updated analyses, and we have included additional paragraphs on Limitations and Future Work in our Discussion to acknowledge future improvements that could be made to the accessibility score used in our study.

    Reviewer #2 (Public Review):

    This is a descriptive paper in the field of metascience, which documents levels of accessibility and reproducible research practices in the field of cardiovascular science. As such, it does not make a theoretical contribution, but it argues, first, that there is a problem for this field, and second, it provides a baseline against which the impact of future initiatives to improve reproducibility can be assessed. The study was pre-registered and the methods and data are clearly documented. This kind of study is extremely labour-intensive and represents a great deal of work.

    I have a major concern about the analysis. It is stated that to be fully reproducible, publications must include sufficient resources (materials, methods, data, and analysis scripts). But what about cases where materials are not required to reproduce the work? In lines 128-129 it is noted that the materials criterion was omitted for meta-analyses, but what about other types of study where materials may be either described adequately in the text, readily available (e.g., published questionnaires), or impossible to share (e.g., experimental animals)?

    To see how valid these concerns might be, I looked at the first 4 papers in the deposited 'EmpricalResearchOnly.csv' file. Two had been coded as 'No Materials availability statement' and for two the value was blank.

    Study 1 used registry data and was coded as missing a Materials statement. The only materials I could think might be useful to have are the 'standardized case report forms' that were referred to. But the authors did note that the Registry methods were fully documented elsewhere (I am not sure whether that is the case).

    Study 2 was a short surgical case report - for this one the Materials field was left blank by the coder.

    Study 3 was a meta-analysis; the Materials field was left blank by the coder.

    Study 4 was again coded as lacking a Materials statement. It presented a model predicting outcome for cardiac arrhythmias. The definitions of the predictor variables were provided in supplementary materials. I am not clear what other materials might be needed.

    These four cases suggest to me that it is rather misleading to treat lack of a Materials statement as contributing to an index of irreproducibility. Certainly, there are many studies where this is the case, but it will vary from study to study depending on the nature of the research. Indeed, this may also be true for other components of the irreproducibility index: for instance, in a case study, there may be no analysis script because no statistical analysis was done. And in some papers, the raw data may all be present in the text already - that may be less common, but it is likely to be so for case studies, for instance.

    A related point concerns the criteria for selecting papers for screening: it was surprising that the requirement for studies to have empirical data was not imposed at the outset, since it should be possible to screen these out early on by specifying 'publication type'; instead, they were included, which means that the numbers used for the actual analysis are well below 400. The large number of non-empirical papers is not of particular relevance for the research questions considered here. In the Discussion, the authors expressed surprise at the large number of non-empirical papers they found; I felt it would have been reasonable for them to depart from their pre-registered plan on discovering this, and to review further papers to bring the number up to 400, restricting consideration to empirical papers only - also excluding case reports, which pose their own problems in this kind of analysis.

    A more minor point is that some of the analyses could be dropped. The analysis of authorship by country had too few cases for many countries to allow for sensible analysis.

    Overall, my concern is that the analysis presented here may create a backlash against metascientific analyses like this because it appears unfair on authors to use a metric based on criteria that may not apply to their study. I am strongly in favour of open, reproducible science, and agree it is important to document the state of the science for different disciplines. But what this study demonstrates to me is that if you are going to evaluate papers on whether they include things like materials/data availability statements, then you need to have an N/A option. Unfortunately, I suspect it may not be possible to rely on authors' self-evaluation of N/A, and that means that metascientists doing an evaluation would need to read enough of the paper to judge whether such a statement should apply.

    We thank the reviewer for the time taken to review our paper, for the appreciation of the work we conducted, and for the suggestions for improving our research methods. To address the initial concern about our analytical approach, the definition of a fully reproducible publication that we used was applicable only to research that utilized empirical research methods. We recognize that publications such as editorials and reviews are not inherently reproducible experimental studies; thus, such papers were not given an accessibility score, were only screened for components such as funding and conflict-of-interest information, and were only compared amongst each other. Additionally, articles such as meta-analyses and systematic reviews that do not include materials had adjusted accessibility scores. We expanded our Methods and Discussion sections to further explain our screening process and our assumption that all empirical research articles contain methods, data, and analysis scripts, and to acknowledge the limitations of our approach. We also agree that screening more empirical research articles is more in line with the intent of our pre-registration, and we expanded the number of empirical research articles screened to 393. We also agree with the reviewer that the analysis by country should be excluded because of the small sample size for most countries, and we have adjusted the manuscript accordingly.
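
    As a minimal sketch of the scoring adjustment described above, the function below drops criteria coded as not applicable (for example, materials for a meta-analysis) from both the numerator and the denominator, so inapplicable items do not lower the score. The criterion names and coding scheme are hypothetical and simply illustrate the idea of an N/A-aware accessibility score, not the authors' actual implementation.

    ```python
    from typing import Optional

    def accessibility_score(criteria: dict[str, Optional[bool]]) -> float:
        """Fraction of applicable criteria that an article satisfies.

        Each criterion is coded True (resource shared), False (not shared),
        or None (not applicable to this study type, e.g. materials for a
        meta-analysis). None values are excluded from the calculation.
        """
        applicable = [met for met in criteria.values() if met is not None]
        if not applicable:
            return float("nan")  # nothing to score, e.g. an editorial or review
        return sum(applicable) / len(applicable)

    # A meta-analysis scored without the materials criterion:
    meta_analysis = {
        "protocol": True,
        "materials": None,          # N/A for meta-analyses
        "data": True,
        "analysis_script": False,
    }
    print(accessibility_score(meta_analysis))  # 2 of 3 applicable -> 0.666...
    ```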

  2. eLife assessment

    This is a descriptive paper in the field of metascience, which documents levels of accessibility and reproducible research practices in the field of cardiovascular science. As such, it does not make a theoretical contribution, but it argues, first, that there is a problem for this field, and second, it provides a baseline against which the impact of future initiatives to improve reproducibility can be assessed. The study was pre-registered and the methods and data are clearly documented. This kind of study is extremely labour-intensive and represents a great deal of work.

  3. Reviewer #1 (Public Review):

    This manuscript reports a study investigating the reporting practices in three top cardiovascular research journals for articles published in 2019. The study was preregistered, which makes the intent and methodology transparent, and the authors also make their materials, data, and code open. While the preregistration and sampling strategy are a strength, the study suffers from a higher than expected number of non-empirical articles, which decreases the sample size and thus the inferences that can be drawn. The authors' focus was mainly on transparency of reporting and not on the actual reproducibility or replicability of the articles; however, the accessibility of data, code, materials, and methods is a prerequisite. While the authors were still able to draw inferences for their main objectives, they could not perform some of their proposed analyses because of a small sample size (due partly to fewer than half of the articles in their sample being empirical, as well as to the low number of papers with accessible information to code). One of the descriptive analyses they performed, the country-level scores (Figure 6), particularly suffers from the small sample size, and while the authors indicate this in their manuscript, I do not think it is reasonable to include, as it has the potential to be misinterpreted since so many of the scores are based on an n=1. Overall, I found the authors' presentation and discussion clear and concise; however, the lack of a more in-depth discussion is an area in which to improve the current manuscript. The manuscript outlines opportunities for researchers, journals, funders, and institutions to improve the way cardiovascular research is reported to enable discovery, reuse, and reproducibility.

  4. Reviewer #2 (Public Review):

    This is a descriptive paper in the field of metascience, which documents levels of accessibility and reproducible research practices in the field of cardiovascular science. As such, it does not make a theoretical contribution, but it argues, first, that there is a problem for this field, and second, it provides a baseline against which the impact of future initiatives to improve reproducibility can be assessed. The study was pre-registered and the methods and data are clearly documented. This kind of study is extremely labour-intensive and represents a great deal of work.

    I have a major concern about the analysis. It is stated that to be fully reproducible, publications must include sufficient resources (materials, methods, data, and analysis scripts). But what about cases where materials are not required to reproduce the work? In lines 128-129 it is noted that the materials criterion was omitted for meta-analyses, but what about other types of study where materials may be either described adequately in the text, readily available (e.g., published questionnaires), or impossible to share (e.g., experimental animals)?

    To see how valid these concerns might be, I looked at the first 4 papers in the deposited 'EmpricalResearchOnly.csv' file. Two had been coded as 'No Materials availability statement' and for two the value was blank.

    Study 1 used registry data and was coded as missing a Materials statement. The only materials I could think might be useful to have are the 'standardized case report forms' that were referred to. But the authors did note that the Registry methods were fully documented elsewhere (I am not sure whether that is the case).

    Study 2 was a short surgical case report - for this one the Materials field was left blank by the coder.

    Study 3 was a meta-analysis; the Materials field was left blank by the coder.

    Study 4 was again coded as lacking a Materials statement. It presented a model predicting outcome for cardiac arrhythmias. The definitions of the predictor variables were provided in supplementary materials. I am not clear what other materials might be needed.

    These four cases suggest to me that it is rather misleading to treat lack of a Materials statement as contributing to an index of irreproducibility. Certainly, there are many studies where this is the case, but it will vary from study to study depending on the nature of the research. Indeed, this may also be true for other components of the irreproducibility index: for instance, in a case study, there may be no analysis script because no statistical analysis was done. And in some papers, the raw data may all be present in the text already - that may be less common, but it is likely to be so for case studies, for instance.

    A related point concerns the criteria for selecting papers for screening: it was surprising that the requirement for studies to have empirical data was not imposed at the outset, since it should be possible to screen these out early on by specifying 'publication type'; instead, they were included, which means that the numbers used for the actual analysis are well below 400. The large number of non-empirical papers is not of particular relevance for the research questions considered here. In the Discussion, the authors expressed surprise at the large number of non-empirical papers they found; I felt it would have been reasonable for them to depart from their pre-registered plan on discovering this, and to review further papers to bring the number up to 400, restricting consideration to empirical papers only - also excluding case reports, which pose their own problems in this kind of analysis.

    A more minor point is that some of the analyses could be dropped. The analysis of authorship by country had too few cases for many countries to allow for sensible analysis.

    Overall, my concern is that the analysis presented here may create a backlash against metascientific analyses like this because it appears unfair on authors to use a metric based on criteria that may not apply to their study. I am strongly in favour of open, reproducible science, and agree it is important to document the state of the science for different disciplines. But what this study demonstrates to me is that if you are going to evaluate papers on whether they include things like materials/data availability statements, then you need to have an N/A option. Unfortunately, I suspect it may not be possible to rely on authors' self-evaluation of N/A, and that means that metascientists doing an evaluation would need to read enough of the paper to judge whether such a statement should apply.