Correction of the scientific production: publisher performance evaluation using a dataset of 4844 PubMed retractions


Abstract

Withdrawal of problematic scientific articles after publication is one of the mechanisms available to publishers for correcting the literature, especially given the ever-increasing volume of publishing activity in the medical field. The market volume and the business model justify publishers’ involvement in the post-publication quality control (QC) of scientific production. The limited information on this subject led us to analyze retractions and the main retraction reasons for publishers with many withdrawn articles. We also propose a score to measure the evolution of their performance. The dataset used for this article consists of 4844 retracted papers indexed in PubMed and published between 1 January 2009 and 31 December 2020.

Methods

We analyzed the retraction notes and retraction reasons, grouping them by publisher. To evaluate performance, we formulated an SDTP score whose calculation formula includes several parameters: speed (article exposure time (ET)), detection rate (percentage of articles whose retraction is initiated by the editor/publisher/institution without the authors’ participation), transparency (percentage of retracted articles available online and clarity of retraction notes), and precision (mention of authors’ responsibility and percentage of retractions for reasons other than editorial errors).
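For illustration, the sketch below shows how such a composite score might be computed from the four dimensions named above. The component names follow the description in this section, but the normalization, the 120-month cap on ET, and the equal weighting are assumptions made for demonstration only, not the exact formula from the paper.

```python
# Illustrative sketch of an SDTP-style score (assumed weighting, NOT the
# paper's exact formula). All rates are fractions in [0, 1].
from dataclasses import dataclass

@dataclass
class PublisherYear:
    median_et_months: float    # speed: median exposure time of retracted articles
    detection_rate: float      # share of retractions initiated without the authors
    notes_available: float     # transparency: share of retracted articles still online
    notes_clear: float         # transparency: share of clear retraction notes
    responsibility_rate: float # precision: share of notes naming responsible authors
    non_editorial_rate: float  # precision: share retracted for non-editorial reasons

def sdtp_score(p: PublisherYear, max_et_months: float = 120.0) -> float:
    """Combine the four dimensions into a single score in [0, 1]; higher is better."""
    speed = 1.0 - min(p.median_et_months, max_et_months) / max_et_months
    detection = p.detection_rate
    transparency = (p.notes_available + p.notes_clear) / 2
    precision = (p.responsibility_rate + p.non_editorial_rate) / 2
    return (speed + detection + transparency + precision) / 4

# Hypothetical publisher-year: 24-month median ET, 40% independent detection, etc.
print(round(sdtp_score(PublisherYear(24, 0.4, 0.9, 0.7, 0.1, 0.8)), 3))  # about 0.612
```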

Results

The 4844 withdrawn articles were published in 1767 journals by 366 publishers, an average of 2.74 withdrawn articles per journal. Forty-five publishers have more than ten withdrawn articles each, accounting for 88% of all papers and 79% of the journals. Combining our data with data from another study shows that less than 7% of PubMed journals withdrew at least one article. Only 10.5% of the withdrawal notes mentioned the individual responsibility of the authors. Nine of the top 11 publishers had their largest number of withdrawn articles in 2020, and the first 11 places include, as expected, some big publishers. Analysis of retraction reasons shows considerable differences between publishers concerning article ET: median values between 9 and 43 months (mistakes), 9 and 73 months (images), and 10 and 42 months (plagiarism & overlap).

Between 2018 and 2020, the SDTP score shows an improvement in the QC of four publishers in the top 11 and a narrowing of the gap between 1st and 11th place. The group of the other 355 publishers also shows a positive evolution of the SDTP score.

Conclusions

Publishers have to get involved actively and measurably in the post-publication evaluation of scientific products. The introduction of reporting standards for retraction notes and of replicable indicators for quantifying publishing QC can help increase the overall quality of the scientific literature.

Article activity feed

  1. Discussion, revision and decision


    Author response


To: Adam Marcus, co-founder of Retraction Watch, and Alison Abritis, PhD, researcher at Retraction Watch

    Major Problems: I found serious deficits in both for this article, and thus I have serious concerns as to the usefulness of this article. Therefore, I have not proceeded in a line-by-line, as I consider the overall problems to be grave enough to require attention and revision before getting to lesser items of clarity.

I would like to point out that the authors show marvelous attention to their work, and they have much to contribute to the field of retraction studies, and I do honestly look forward to their future work. However, in order for the field to move ahead with accuracy and validity, we must no longer just rely on superficial number crunching, and must start including the complexities of publishing in our analyses, as difficult and labor-intensive as it might be.

We do not consider that our article presents serious problems, nor that it is useless.

It is possible that a different view of the subject, an understandable tendency toward forbearance for the difficult life of the publishing industry, and some difficulty in understanding the ideas presented in the article have led to a series of points of view that we comment on below.

We would first like to thank the reviewers for their comments, some of which will allow us to improve and add nuance, using objective elements, to our analysis of the bumpy field represented by the ecosystem of retracted publications. Because we based our study on freely accessible sources of information, we will not insist too much on commenting on this issue.

1. The authors stated that they used the search protocol (and therefore presumably the same dataset) as described in Toma & Padureanu, 2021, and do not indicate any process to compensate for its weaknesses. In the referenced study, the authors (same as for this article) utilized a PubMed search using only “Retracted Publication” in Publication Type. This search method is immediately insufficient, as some retracted articles are not bannered or indexed as retracted in PubMed. This issue is well-understood among scholars who search databases for retractions, and by now one would expect that these searches would strive to be more comprehensive.

A better method, if one insists on restricting the search to PubMed, would have been to use Publication Type to search for “retracted publication,” and then to search for “retraction of publication,” and to compare the output to eliminate duplications. There are even more comprehensive ways to search PubMed, especially since some articles are retitled as “Withdrawn” – Elsevier, for example, uses the term instead of “Retracted” for papers removed within a year of their publication date – but these do not come up in searches for either publication type. Even better would have been to use databases with more comprehensive indexing of retractions.

    In an ideal world, if any effort were to be made, it would be aimed at better indexing and managing existing databases, not at generating query strategies to make up for their shortcomings.

Thank you very much for the suggestions on the search strategy. We do not consider that the use of "Retracted Publication [PT]" needs to be compensated for in any way; but if it did, we would not want to add "Retraction of publication". We consider that using a search protocol more specific to systematic reviews is not very useful in our case: data are added or updated continuously (sometimes late), incorrect indexing can be corrected, and the number of retracted articles increases from month to month, so the same strategy can give different results at different times, regardless of its complexity. Putting extra effort into detecting problematic articles without knowing the benefit, but merely expecting one, only highlights issues that can be improved at the publisher/editor level (content delivery) and at the database level (indexing).

The dataset analyzed is a snapshot of a particular time interval and nothing more. Even during the analysis we found, in the case of one publisher, that details were added to initially incomplete retraction notes; hence the need for follow-up studies. Therefore, in the case of retractions, unlike the reviewer, we prefer an approach based on simple and easily reproducible strategies, widely accessible sources of information, and several steps. The first step in this strategy is the "number crunching" stage, which includes this article.

2. The authors are using the time from publication to retraction, based on the notice dates, to indicate the efficacy of oversight by publishers. However, this approach is seriously problematic. It takes no notice of when the publisher was first informed that the article was potentially compromised. Publishers who respond rapidly to information that affects years- or decades-old publications will inevitably show worse scores than those who are advised of an article’s faults immediately upon its publication, but who drag their heels a few months in dealing with the problem.

Indeed, the article uses the time between publication and retraction (exposure time, ET) as one of the SDTP score components for assessing editorial/publisher performance. Data on when a publisher or editor was informed of problems with an article are relatively rare and, in any case, not a substitute for a retraction note. Moreover, the use of such information may introduce a risk of bias.
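Since ET anchors the speed component, it may help to make the computation explicit. The sketch below is a minimal illustration with hypothetical records; as discussed later in this response, the study takes both dates from PubMed.

```python
# Minimal sketch: exposure time (ET) in whole months between publication
# and retraction, using PubMed-style dates. The records are hypothetical.
from datetime import date
from statistics import median

def exposure_months(published: date, retracted: date) -> int:
    return (retracted.year - published.year) * 12 + (retracted.month - published.month)

records = [
    (date(2015, 3, 1), date(2017, 6, 1)),   # ET = 27 months
    (date(2019, 1, 1), date(2019, 10, 1)),  # ET = 9 months
]
print(median(exposure_months(p, r) for p, r in records))  # 18.0
```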

    We mention in the article the need to use reporting standards for retraction notes, and one element that might be useful is, indeed, the date on which the publisher or editor was informed of problems with an article. Unfortunately, as the author of this review knows very well, information precedes investigation; the retraction note contains (or should contain) much more data than the initial information about the quality problems of an article.

Our article aims to suggest a score for measuring publisher performance in the context of retracted articles, one that also allows an assessment of the dynamics of the activity of correcting the scientific record and, more importantly, of how publishers engage in post-publication quality control. ET is only one component of this score.

It is quite clear from the data presented in the article that a publisher/journal that emphasizes systematic back-checking will have an increasingly long average lifespan of retracted articles, logically higher than that of one that does not do this type of checking. We do not see precisely where the reviewer thinks the problem lies: once the checking is done, the ET will decrease, and a publisher that takes concrete steps to correct the literature will ultimately have a better reputation. This does not mean that a higher ET is laudable; it suggests that post-publication quality control exists, but also that the peer review process let problematic articles through and that the checking of those articles was carried out late. This is an argument for more active involvement of publishers (as potential generators of editorial policies) in post-publication control.

    Second, there is little consistency in dealing with retractions between publishers, within the same publishers or even within the same journal. Under the same publisher, one journal editor may be highly responsive during their term, while the next editor may not be. Most problems with articles quite often are first addressed by contacting the authors and/or journal editors, and publishers – especially those with hundreds of journals – may not have any idea of the ensuing problem for weeks or months, if at all. Therefore, the larger publishers would be far more likely to show worse scores than publishers with few journals to manage oversight.

It is exactly this inconsistency that we highlight in the article. Differing policies, attitudes, and responsiveness do not mean that a publisher cannot or should not ask questions about the effectiveness of internal processes and the resources used for post-publication quality control, or about the implementation of uniform measures across the journals in its portfolio.

Third, the dates on retraction notices are not always representative of when an article was watermarked or otherwise indicated as retracted. Elsevier journals often overwrite the HTML page of the original article with the retraction notice, leaving the original article’s date of publication alone. A separate retraction notice may not be published until days, weeks or even years after the article has been retracted. Springer and Sage have done this as well, as have other publishers – though not to the same extent (yet).

    Historically, The Journal of Biological Chemistry would publish a retraction notice and link it immediately to the original article, but a check of the article’s PDF would show it having been retracted days to weeks earlier. They have recently been acquired by Elsevier, so it is unknown how this trend will play out. And keep in mind, in some ways this is in itself not a bad thing – as it gives the user quicker notice that an article is unsuitable for citation, even while the notice itself is still undergoing revisions. It just makes tracking the time of publication to retraction especially difficult.

We used the same date for all articles in our study (the one listed in PubMed), thus ensuring a uniform criterion for all publishers. If this date was not in PubMed, we used the date from the retraction note on the journal website, but this applied to only a small number of articles. How different publishers handle retraction processes, or the delay with which notices are published, is primarily a matter of internal editorial procedures, and these delays are reflected in the ET. In our experience, most articles retracted by Elsevier remain available online, supplemented by retraction notes rather than replaced by them, which we think is an excellent policy.

3. As best as can be determined, the authors are taking the notices at face value, and that has been repeatedly shown to be flawed. Many notices are written as a cooperative effort between the authors and the journal, regardless of who initiated the retraction and under the looming specter of potential litigation.

Shown to be flawed by whom? Indeed, in our study we refer to the retraction notes published by the journals. The fact that they are incomplete or formulated under the threat of litigation only supports our view that publishers and editors need to make a more significant effort to correct the biomedical literature, including avoiding litigation when the retraction note clearly describes the reasons for retraction. The way the retraction note is worded should be an editorial prerogative and should primarily aim at correcting the scientific literature, not at appeasing egos, careers, or financial interests.

Trying to establish who initiated a retraction process strictly by analyzing the notice language is destined to produce faulty conclusions. Looking just at PubPeer comments, questions about the data quality may be raised days/months/years before a retraction, with indications of having contacted the journal or publisher. And yet, an ensuing notice may state that the authors requested the retraction because of concerns about the data/image – where the backstory clearly shows that the impetus for the retraction was a journal’s investigation of outside complaints. As an example, the recent glut of retractions of papers coming from paper mills often suggests that the authors are requesting the retraction. This interpretation would be false, however, as those familiar with the backstory are aware that the driving force for many of these retractions was independent investigators contacting the journals/publishers for retraction of these manuscripts.

Once again, the author of this review does not seem to fully understand our study, apparently favouring information published on third-party websites over information officially assumed by the journals. The retraction notes represent the material available to a researcher documenting a particular topic. The clarity of, and the information contained in, the note are the editor’s or publisher’s responsibility, reflecting their performance and concern for the integrity of science. Interpretation of a retraction note, or analysis of an article, occurs in this context. Not everyone has time for further investigation, or for searching third-party sites for information that is, with a notable exception, the result of a selection bias.

    Assigning the reason for retraction from only the text of the notice will absolutely skew results. As already stated, in many cases, journal editors and authors work together to produce the language. Thus, the notice may convey an innocuous but unquestionable cause (e.g., results not reproducible) because the fundamental reason (e.g., data/image was fabricated or falsified) is too difficult to prove to a reasonable degree. Even the use of the word “plagiarism” is triggering for authors’ reputations – and notices have been crafted to avoid any suggestion of such, with euphemisms that steer well clear of the “p” word. Furthermore, it has been well-documented that some retractions required by institutional findings of misconduct have used language in the notice indicating simple error or other innocuous reasons as the definitive cause.

We understand your point of view, and the situations presented may be accurate. However, from our point of view, the only valid reference remains the retraction note published on the journal's website. The wording difficulties and the various other problems that may arise have more to do with a tendency of the reviewer to make excuses for journals reluctant to indicate precisely what the reasons for retracting an article are. There are plenty of retraction notes in which problematic images are indicated with great precision (including whether they were plagiarized, reused, manipulated, fabricated, etc.), and there are equally many notes in which the word plagiarism is used without hesitation, indicating the sources, how the journal was informed, and what was plagiarized. No matter how many hesitant publishers/editors there are, it should not be forgotten that there are many journals/publishers who take their role seriously, acknowledge and learn from their mistakes, and thus provide a real service to the scientific community.

    The authors also discuss changes in the quality of notices increasing or decreasing in publishers – but without knowing the backstory. Having more words in a notice or giving one or two specific causes cannot in itself be an indicator of the quality (i.e., accuracy) of said notice.

    "Knowing the backstory" is not part of our objectives, and neither is assessing the quality of the retraction notes. This is also very difficult to do due to the lack of an accepted standard format. We are trying to propose a score composed of several parameters resulting from existing (or non-existing) data in the retraction notes so that we can have a picture of retractions at publisher level. Knowing the backstory is not relevant, reading and interpreting the official retraction note is relevant.

4. The authors tend to infer that the lack of a retraction in a journal implies a degree of superiority over journals with retractions. Although they qualify it a bit (“Are over 90% of journals without a retracted article perfect? It is a question that is quite difficult to answer at this time, but we believe that the opinion that, in reality, there are many more articles that should be retracted (Oransky et al. 2021) is justified and covered by the actual figures.”), the inference is naive. First, they have not looked at the number of corrections within these journals. Even ignoring that these corrections may be disproportionate within different journals and require responsive editorial staff, some journals have gone through what can only be called great contortions to issue corrections rather than retractions.

We believe this is a case of reviewer confusion, generated either by insufficiently precise wording in our text or by a misunderstanding of our study objectives. We are pointing out that more than 90% of the journals in the NLM catalogue PubMed subset have not retracted a single article. We are not trying to say that journals without retracted articles are superior to the others. As explained in the article, we referred to retraction notes, not corrections.

    Second, the lack of retractions in a journal speaks nothing to the quality of the articles therein. Predatory journals generally avoid issuing retractions, even when presented with outright proof of data fabrication or plagiarism. Meanwhile, high-quality journals are likely to have more, and possibly more astute, readers, who could be more adept at spotting errors that require retraction.

    Of course, the quality level of articles in a journal is not determined by the number of articles removed.

    Third, smaller publishers/journals may not have the fiscal resources to deal with the issues that come with a retraction. As an example, even though there was an institutional investigation finding data fabrication, at least one journal declined to issue a retraction for an article by Joachim Boldt (who has more than 160 retractions for misconduct) after his attorneys made threats of litigation.

Threats of lawsuits reflect, rather, a failure of a publisher/journal to adapt to the realities of the publishing business or to the risk of misconduct. This is something that needs to change.

    Simply put, the presence or lack of a retraction in a journal is no longer a reasonable speculation about the quality of the manuscripts or the efficiency of the editorial process.

We have not attempted to suggest this; we have only analyzed the retracted articles and their associated retraction notes. On the other hand, the way a journal/publisher handles the retraction of problematic articles still reflects, to some extent, the quality and performance of its editorial processes.

5. I am concerned that the authors appear to have made significant errors in their analysis of publishers. For example, they claim that neither PLOS nor Elsevier retracted papers in 2020 for problematic images. That assertion is demonstrably false.

This is wrong. In our dataset there are eleven PLOS articles related to human health with publication years 2019 and 2020. None of these has images among its retraction reasons.

Regarding the 21 Elsevier articles published in 2020, there is nothing in the retraction notes to indicate that an article was retracted because of its images. Two retraction notes mention the comments made by Dr. Bik (The Tadpole Paper Mill, Science Integrity Digest), but the text of these notes goes no further than the authors’ inability to provide the raw data underlying the article.

Our study is based only on the content of the retraction notes published and assumed by the journal, not on opinions or comments appearing on other sites which, for unknown or unmentioned reasons, are not officially assumed in the retraction note. We therefore consider the statement in the review questionable at best: using material other than the retraction notes has severe implications for the internal and external validity of the study, and the suggestion to use such methods is, in our opinion, wrong. We would also like to draw attention to the fact that many retraction notes explicitly mention the request to provide raw images and the authors’ inability to provide them.

In any case, as far as images are concerned, our article suggested that some publishers seem to adopt image analysis technologies faster than others. The numbers are not really relevant in this case, but the trend is: it describes the complexity of publishing activity better than the numbers do.

    Reviewer response

    We appreciate the authors’ zeal in standing by their work.

In regard to the deficits in the search process, the authors state, “We do not consider that the use of ‘Retracted Publication [PT]’ needs to be compensated for in any way; but if it did, we would not want to add ‘Retraction of publication’.”

There is a lack of appreciation here for the complexities of indexing retracted materials in an indexing site such as PubMed. To have a comprehensive search, one should not choose between “Retracted Publication [PT]” and “Retraction of Publication [PT]”: one should use both, and then filter out the duplicates, because some retractions are indexed only by their retraction notices, while others only have “Retracted” added to the indexed title and the publication type changed to “Retracted Publication.” Using only one or the other guarantees that the search is far less comprehensive than it should be.
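To make the suggested approach concrete, here is a minimal sketch using the public NCBI E-utilities esearch endpoint (a real API); the two queries and the union-based deduplication follow the description above, while error handling, API keys, and the history server (needed for result sets beyond the esearch retmax limit) are omitted.

```python
# Sketch of the combined search: query both publication types in PubMed
# via NCBI E-utilities and merge the PMID sets to eliminate duplicates.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pmids(term: str, retmax: int = 10000) -> set[str]:
    query = urlencode({"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax})
    with urlopen(f"{ESEARCH}?{query}") as resp:
        return set(json.load(resp)["esearchresult"]["idlist"])

retracted = pmids('"retracted publication"[Publication Type]')
notices = pmids('"retraction of publication"[Publication Type]')
combined = retracted | notices  # union removes any overlap between the two sets
print(len(retracted), len(notices), len(combined))
```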

    The authors state, “In an ideal world, if any effort were to be made, it would be aimed at better indexing and managing existing databases, not at generating query strategies to make up for their shortcomings.”

    There is at least one database (http://retractiondatabase.org) that has a far more comprehensive indexing of retractions and is publicly available for use.

In Item 3, where it is pointed out that retraction notices themselves are inaccurate and cannot be taken at face value as to the reason behind the retraction, the authors responded, “Shown to be flawed by whom?” — By an article cited in the manuscript:

    Fang, Ferric C.; Steen, R. Grant; Casadevall, Arturo (2012): Misconduct accounts for the majority of retracted scientific publications. In Proceedings of the National Academy of Sciences of the United States of America 109 (42), pp. 17028–17033. DOI: 10.1073/pnas.1212247109.

    “To understand the reasons for retraction, we consulted reports from the Office of Research Integrity and other published resources (7, 8), in addition to the retraction announcements in scientific journals. Use of these additional sources of information resulted in the reclassification of 118 of 742 (15.9%) retractions in an earlier study (4) from error to fraud.” Followed by “These factors have contributed to the systematic underestimation of the role of misconduct and the overestimation of the role of error in retractions (3, 4), and speak to the need for uniform standards regarding retraction notices (5).”

    The authors then choose to state that it is the “editorial prerogative” – and that when notices “are incomplete or formulated under the threat of litigation [it] only supports our view that publishers and editors need to make a more significant effort to correct the biomedical literature, including avoiding litigation when the retraction note clearly describes the reasons for retraction.”

    Following our attempt to explain why understanding the real reason behind a retraction is important to study the publication of notices, the authors respond: “Once again, the author of this review does not seem to fully understand our study, apparently favouring information published on third-party websites over that the journals officially assumed.”

    First, yes, we do understand the study. We read a lot of these. Second, the “third-party websites” we prefer include the Office of Research Integrity and the Retraction Watch blog, where background investigations into the causes of retraction notices are described. If the authors are challenging the reference to PubPeer, keep in mind that journals initiate investigations based on comments on that website, and have taken to citing the website in their notices.

Had the authors not chosen to categorize the reasons for retraction, their reasoning might have had more support – but they did, and in doing so, by using only the notice with no further review, their findings address only the notice itself, with no context.

    We recommend that the manuscript be substantially revised with strong attention to the comments we made in our original review.

  2. Peer review report

Reviewer: Adam Marcus, co-founder of Retraction Watch, and Alison Abritis, PhD, researcher at Retraction Watch


    General comments

    Major Problems: I found serious deficits in both for this article, and thus I have serious concerns as to the usefulness of this article. Therefore, I have not proceeded in a line-by-line, as I consider the overall problems to be grave enough to require attention and revision before getting to lesser items of clarity.

I would like to point out that the authors show marvelous attention to their work, and they have much to contribute to the field of retraction studies, and I do honestly look forward to their future work. However, in order for the field to move ahead with accuracy and validity, we must no longer just rely on superficial number crunching, and must start including the complexities of publishing in our analyses, as difficult and labor-intensive as it might be.

    1) The authors stated that they used the search protocol (and therefore presumably the same dataset) as described in Toma & Padureanu, 2021, and do not indicate any process to compensate for its weaknesses. In the referenced study, the authors (same as for this article) utilized a PubMed search using only “Retracted Publication” in Publication Type. This search method is immediately insufficient, as some retracted articles are not bannered or indexed as retracted in PubMed. This issue is well-understood among scholars who search databases for retractions, and by now one would expect that these searches would strive to be more comprehensive.

A better method, if one insists on restricting the search to PubMed, would have been to use Publication Type to search for “retracted publication,” and then to search for “retraction of publication,” and to compare the output to eliminate duplications. There are even more comprehensive ways to search PubMed, especially since some articles are retitled as “Withdrawn” – Elsevier, for example, uses the term instead of “Retracted” for papers removed within a year of their publication date – but these do not come up in searches for either publication type. Even better would have been to use databases with more comprehensive indexing of retractions.

2) The authors are using the time from publication to retraction, based on the notice dates, to indicate the efficacy of oversight by publishers. However, this approach is seriously problematic. It takes no notice of when the publisher was first informed that the article was potentially compromised. Publishers who respond rapidly to information that affects years- or decades-old publications will inevitably show worse scores than those who are advised of an article’s faults immediately upon its publication, but who drag their heels a few months in dealing with the problem.

    Second, there is little consistency in dealing with retractions between publishers, within the same publishers or even within the same journal. Under the same publisher, one journal editor may be highly responsive during their term, while the next editor may not be. Most problems with articles quite often are first addressed by contacting the authors and/or journal editors, and publishers – especially those with hundreds of journals – may not have any idea of the ensuing problem for weeks or months, if at all. Therefore, the larger publishers would be far more likely to show worse scores than publishers with few journals to manage oversight.

Third, the dates on retraction notices are not always representative of when an article was watermarked or otherwise indicated as retracted. Elsevier journals often overwrite the HTML page of the original article with the retraction notice, leaving the original article’s date of publication alone. A separate retraction notice may not be published until days, weeks or even years after the article has been retracted. Springer and Sage have done this as well, as have other publishers – though not to the same extent (yet).

    Historically, The Journal of Biological Chemistry would publish a retraction notice and link it immediately to the original article, but a check of the article’s PDF would show it having been retracted days to weeks earlier. They have recently been acquired by Elsevier, so it is unknown how this trend will play out. And keep in mind, in some ways this is in itself not a bad thing – as it gives the user quicker notice that an article is unsuitable for citation, even while the notice itself is still undergoing revisions. It just makes tracking the time of publication to retraction especially difficult.

3) As best as can be determined, the authors are taking the notices at face value, and that has been repeatedly shown to be flawed. Many notices are written as a cooperative effort between the authors and the journal, regardless of who initiated the retraction and under the looming specter of potential litigation.

Trying to establish who initiated a retraction process strictly by analyzing the notice language is destined to produce faulty conclusions. Looking just at PubPeer comments, questions about the data quality may be raised days/months/years before a retraction, with indications of having contacted the journal or publisher. And yet, an ensuing notice may state that the authors requested the retraction because of concerns about the data/image – where the backstory clearly shows that the impetus for the retraction was a journal’s investigation of outside complaints. As an example, the recent glut of retractions of papers coming from paper mills often suggests that the authors are requesting the retraction. This interpretation would be false, however, as those familiar with the backstory are aware that the driving force for many of these retractions was independent investigators contacting the journals/publishers for retraction of these manuscripts.

    Assigning the reason for retraction from only the text of the notice will absolutely skew results. As already stated, in many cases, journal editors and authors work together to produce the language. Thus, the notice may convey an innocuous but unquestionable cause (e.g., results not reproducible) because the fundamental reason (e.g., data/image was fabricated or falsified) is too difficult to prove to a reasonable degree. Even the use of the word “plagiarism” is triggering for authors’ reputations – and notices have been crafted to avoid any suggestion of such, with euphemisms that steer well clear of the “p” word. Furthermore, it has been well-documented that some retractions required by institutional findings of misconduct have used language in the notice indicating simple error or other innocuous reasons as the definitive cause.

    The authors also discuss changes in the quality of notices increasing or decreasing in publishers – but without knowing the backstory. Having more words in a notice or giving one or two specific causes cannot in itself be an indicator of the quality (i.e., accuracy) of said notice.

4) The authors tend to infer that the lack of a retraction in a journal implies a degree of superiority over journals with retractions. Although they qualify it a bit (“Are over 90% of journals without a retracted article perfect? It is a question that is quite difficult to answer at this time, but we believe that the opinion that, in reality, there are many more articles that should be retracted (Oransky et al. 2021) is justified and covered by the actual figures.”), the inference is naive. First, they have not looked at the number of corrections within these journals. Even ignoring that these corrections may be disproportionate within different journals and require responsive editorial staff, some journals have gone through what can only be called great contortions to issue corrections rather than retractions.

    Second, the lack of retractions in a journal speaks nothing to the quality of the articles therein. Predatory journals generally avoid issuing retractions, even when presented with outright proof of data fabrication or plagiarism. Meanwhile, high-quality journals are likely to have more, and possibly more astute, readers, who could be more adept at spotting errors that require retraction.

    Third, smaller publishers/journals may not have the fiscal resources to deal with the issues that come with a retraction. As an example, even though there was an institutional investigation finding data fabrication, at least one journal declined to issue a retraction for an article by Joachim Boldt (who has more than 160 retractions for misconduct) after his attorneys made threats of litigation.

    Simply put, the presence or lack of a retraction in a journal is no longer a reasonable speculation about the quality of the manuscripts or the efficiency of the editorial process.

    5) I am concerned that the authors appear to have made significant errors in their analysis of publishers. For example, they claim that neither PLOS nor Elsevier retracted papers in 2020 for problematic images. That assertion is demonstrably false.


    Decision

    Requires revisions: The manuscript contains objective errors or fundamental flaws that must be addressed and/or major revisions are suggested.