Controlled experiment finds no detectable citation bump from Twitter promotion
This article has been reviewed by the following groups
Listed in
- Evaluated articles (PREreview)
- Preprint review and curation (mark2d2)
- ASAPbio Meta-Research Crowd PREreviews (prereview)
Abstract
Multiple studies across a variety of scientific disciplines have shown that the number of times a paper is shared on Twitter (now called X) is correlated with the number of citations that paper receives. However, these studies were not designed to answer whether tweeting about scientific papers causes an increase in citations, or whether some papers simply have higher relevance, importance, or quality and are therefore both tweeted about more and cited more. We, the authors of this study, are leading science communicators on Twitter from several life science disciplines, with substantially higher follower counts than the average scientist, making us uniquely placed to address this question. We conducted a three-year-long controlled experiment, randomly selecting five articles published in the same month and journal, and randomly tweeting one while retaining the others as controls. This process was repeated for 10 articles from each of 11 journals, recording Altmetric scores, number of tweets, and citation counts before and after tweeting. Randomization tests revealed that tweeted articles were downloaded 2.6–3.9 times more often than controls immediately after tweeting, and retained significantly higher Altmetric scores (+81%) and number of tweets (+105%) three years after tweeting. However, while some tweeted papers were cited more than their respective control papers published in the same journal and month, the overall increase in citation counts after three years (+7% for Web of Science and +12% for Google Scholar) was not statistically significant (p > 0.15). Therefore, while discussing science on social media has many professional and societal benefits (and has been a lot of fun), increasing the citation rate of a scientist's papers is likely not among them.
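To illustrate the randomization test described in the abstract, here is a minimal sketch in Python. The citation counts, the block structure (one tweeted article and four controls per journal-month set), and the permutation scheme are illustrative assumptions, not the authors' actual data or code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: one set of five articles per (journal, month) block,
# column 0 = the tweeted article, columns 1-4 = its controls.
citations = np.array([
    [12, 8, 15, 6, 9],
    [20, 22, 11, 14, 17],
    [5, 3, 7, 4, 6],
])

def observed_stat(blocks):
    """Mean difference: tweeted article vs. mean of its four controls."""
    return np.mean(blocks[:, 0] - blocks[:, 1:].mean(axis=1))

obs = observed_stat(citations)

# Randomization test: within each block, relabel which of the five
# articles counts as "tweeted", holding journal and month fixed.
n_perm = 10_000
null_stats = np.empty(n_perm)
for i in range(n_perm):
    permuted = np.stack([rng.permutation(row) for row in citations])
    null_stats[i] = observed_stat(permuted)

# One-sided p-value: how often a random labelling beats the observed effect.
p_value = (np.sum(null_stats >= obs) + 1) / (n_perm + 1)
print(f"observed effect = {obs:.2f}, p = {p_value:.3f}")
```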
Article activity feed
This Zenodo record is a permanently preserved version of a PREreview. You can view the complete PREreview at https://prereview.org/reviews/10044712.
This review reflects comments and contributions from Melissa Chim, Allie Tatarian, Martyn Rittman, Pen-Yuan Hsing. Review synthesized by Stephen Gabrielson.
Selected journal articles were tweeted from one of several Twitter accounts with large follower counts. The altmetrics and citations of these papers were compared with those of a set of control papers over a three-year study period. While altmetrics increased immediately after tweeting, there was no statistically significant increase in citations for the study papers versus the controls by the end of the study period.
Major comments:
I would like to see a more explicit acknowledgement that this experiment was conducted with only ecological papers - the results are written as if the conclusions apply to scientific research broadly and not one specific discipline. For example, disciplinary differences in citation politics and mechanisms may have a big impact on the effects of social media dissemination. Would social media affect citations of monographs in the humanities the same as ecology papers? If the authors believe their findings can be generalised to other domains, I'm happy for them to make that argument, too.
I like the authors' methodical approach to the study. It's well-designed and takes into account weaknesses of previous similar studies. I appreciate how thoroughly the authors explained their criteria for choosing articles. It's a shame that it is statistically under-powered to detect citation changes in WoS/Scopus, but that is an interesting result in itself and sets parameters for future studies.
From my perspective, the current discussion section of this paper (1) summarises the key learnings from the experiment, (2) acknowledges that social media engagement is useful beyond paper citation counts, and (3) offers a "wistful" commentary on the value of social media dissemination of research. These points are worthwhile. However, I'd like to see a deeper, constructive dissection of the limitations of this experiment in the discussion section. In addition to mostly ecologist accounts tweeting ecology papers, there are various potential minor issues that could be tackled in future studies. I'd love to see a discussion of them.
While the authors state that the dataset collected from this experiment is shared in the Supplementary Materials, I was not able to find it from reading the paper. Where is the dataset, and can the authors directly cite it in the text? Similarly, the current Acknowledgements section states that the publisher of these journals (John Wiley & Sons) wrote the scripts to collect much of the raw social media data. Where are these scripts published? And what about the statistical analyses? Did the authors also write scripts for those, or were they done in some other way? There is currently very little reporting in this paper on the data and implementation details (e.g. source code). I suggest a dedicated data and code availability section that states which of these artefacts have been published (with full citation and open-source license metadata), along with a discussion of limitations and reproducibility. This is not a box-ticking exercise. For example, this paper describes using classical frequentist statistics, but it may be interesting to apply a different analytical approach (e.g. Bayesian modeling). Any code that has been written should also be published in commented form for others to study, peer review, and build upon. For components that the authors could not publish for any reason, a discussion of these limitations could inform future efforts.
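To make the request concrete, here is a minimal sketch of the kind of data-collection script that could be published alongside the paper. It assumes Altmetric's free public v1 endpoint and uses hypothetical DOIs; it is not the authors' or the publisher's actual code, and the response field names should be checked against the current API documentation.

```python
import time
import requests

# Hypothetical DOIs standing in for the study's 110 articles.
DOIS = ["10.1111/example.12345", "10.1111/example.67890"]

def fetch_altmetric(doi: str) -> dict | None:
    """Query Altmetric's public v1 API for one DOI; None if untracked."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:  # article not tracked by Altmetric
        return None
    resp.raise_for_status()
    return resp.json()

records = []
for doi in DOIS:
    data = fetch_altmetric(doi)
    if data is not None:
        records.append({
            "doi": doi,
            "altmetric_score": data.get("score"),
            "tweeter_count": data.get("cited_by_tweeters_count"),
        })
    time.sleep(1)  # stay well within the free tier's rate limit

print(records)
```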
Not everyone is familiar with English-language social media platforms and how they work. I think this paper would be informative and useful to a wider international audience if the authors could briefly explain how Twitter works and how it compares to other popular platforms. This information would allow a more critical analysis of how much of the effects seen in this experiment can be attributed to Twitter versus social media in general. And because the authors are social media experts, the Discussion section could also address whether the ways Twitter and journal publishers make (or do not make) their data easy to access helped or hindered this experiment. This would be a useful methodological discussion to inform future studies.
Minor comments:
I would have liked to see a bit more detail about the authors' backgrounds since their expertise played such a large role in the study overall.
I agree. It also seemed to me that all of the authors and the journals they targeted were in the ecology/conservation field, but I don't think this was explicitly acknowledged in the text.
Did the authors check whether any of their control articles were tweeted by other scientists during the study period? If this happened, it could weaken the measured effect, and the authors could be drawing false-negative conclusions.
This study was explicitly designed as a hypothesis-based experiment. In line with that, I suggest the authors explicitly state their hypothesis or hypotheses (and the corresponding null hypotheses) in the Materials and Methods section.
The authors acknowledge that in at least one prior study, "Twitter promotion was also associated with 24 hours of free access to the articles." For the experiment reported here, did the authors track and account for the ease of access to the 110 articles in the study? If so, how?
The authors "obtained daily download counts for articles in five of the journals" - Did the publishers of the other journals simply refuse to provide that data? Also, how is "download" defined? Is it literally someone clicking to download the PDF file? If so, did the authors account for the possibility that some of the articles studied can be read online in addition to being downloadable as a PDF file? What are the potential limitations here?
I appreciate the reporting on follower growth for all 11 Twitter accounts used in this experiment. Is it possible that an article tweeted later in the three-year study period would receive a higher Altmetric score or more citations because the account tweeting it had more followers at that time? I suspect the randomisation tests would account for this, but I'd like to double-check.
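One way to double-check would be a quick diagnostic on the released dataset: test whether an account's follower count on the tweet date predicts the tweeted article's subsequent Altmetric gain. A minimal sketch, assuming hypothetical file and column names (the real dataset's schema may differ):

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical columns; one row per tweeted article.
df = pd.read_csv("tweeted_articles.csv")

# Altmetric gain attributable to the tweet.
df["altmetric_gain"] = df["altmetric_after"] - df["altmetric_before"]

# If later tweets benefit from larger audiences, the gain should rise
# with the account's follower count on the tweet date.
rho, p = spearmanr(df["followers_at_tweet_date"], df["altmetric_gain"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```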
Figure(s) with color (e.g. Figure 2) should be checked and, if necessary, edited for accessibility to color-blind readers and legibility in black-and-white printing.
Important point on inclusive terminology: The current text describes the authors as "scientists" who use social media to communicate with a diverse audience, including those in the "general public". There is a growing body of research that critiques this dichotomy. For example, are authors not also members of the "public"? And for those in the "public", can they not be called "scientists" if they happen to perform science in some capacity? Without having to cite the relevant body of peer-reviewed literature, I suggest making the text more inclusive of the diverse ways in which people perform science. For example, the authors could state in the Introduction that they "are professionally employed to conduct scientific research at universities/research institutions, which we will shorten as 'scientists' for the practical purposes of this article. The 'general public' in this text refers to those whose primary vocation is not conducting scientific research."
Under Experimental design, the authors report that "other non-standard article types" were excluded. That is fine, but I suggest removing "non-standard" as it unnecessarily devalues those "other" articles for the purposes of this experiment or paper.
Can the authors please include a statement on contributor roles, such as expressed through the CRediT contributor roles taxonomy? (https://credit.niso.org/) This can be located in the Acknowledgements section, or elsewhere depending on their preference.
In the first paragraph of the introduction, it would be good to spell out "AP" as "Associated Press" for a diverse international audience.
Comments on reporting:
(see comment on data and code availability under major comments)
Suggestions for future studies:
The authors acknowledged that their study focused only on articles from journals published by John Wiley & Sons. I can see further studies being done with a focus on other publishers, journals, and/or disciplines.
+1, and I would also like to see whether the same effect appears with journals that have a broader focus or higher impact factors.
It would be interesting to see if there is an effect based on tweeting multiple times from the same or different accounts. Previous studies took more of a marketing-campaign approach; it would be interesting to see where the boundary lies in how much effort is needed to increase citations (if indeed there is a boundary!).
An interesting future meta-study would be to investigate how the mechanics of different social media platforms and the makeup of their user bases (e.g. Twitter vs Mastodon vs Threads) relate to whether and how they impact the citation metrics and citation politics of academic research across different fields of study (including non-STEM!).
Competing interests
The author declares that they have no competing interests.