Open Science at the generative AI turn: An exploratory analysis of challenges and opportunities
This article has been Reviewed by the following groups
Listed in
- Evaluated articles (PREreview)
- Preprint review and curation (mark2d2)
Abstract
Technology influences Open Science (OS) practices, because conducting science in transparent, accessible, and participatory ways requires tools and platforms for collaboration and sharing results. Due to this relationship, the characteristics of the employed technologies directly impact OS objectives. Generative Artificial Intelligence (GenAI) is increasingly used by researchers for tasks such as text refining, code generation/editing, reviewing literature, and data curation/analysis. Nevertheless, concerns about openness, transparency, and bias suggest that GenAI may benefit from greater engagement with OS. GenAI promises substantial efficiency gains but is currently fraught with limitations that could negatively impact core OS values, such as fairness, transparency, and integrity, and may harm various social actors. In this paper, we explore the possible positive and negative impacts of GenAI on OS. We use the taxonomy within the UNESCO Recommendation on Open Science to systematically explore the intersection of GenAI and OS. We conclude that using GenAI could advance key OS objectives by broadening meaningful access to knowledge, enabling efficient use of infrastructure, improving engagement of societal actors, and enhancing dialogue among knowledge systems. However, due to GenAI’s limitations, it could also compromise the integrity, equity, reproducibility, and reliability of research. Hence, sufficient checks, validation, and critical assessments are essential when incorporating GenAI into research workflows.
Article activity feed
This Zenodo record is a permanently preserved version of a PREreview. You can view the complete PREreview at https://prereview.org/reviews/11320176.
The paper addresses an important and timely issue in open science: the appropriateness of adopting generative AI into open science practices. The authors delve into the benefits and limitations of genAI in the conduct and dissemination of science, using the UNESCO open science recommendations as a type of rubric.
Major issues
By my read, there are no major issues with the manuscript; however, there are a significant number of minor issues the authors should consider addressing in their next draft. The authors should be commended for putting this research and thought leadership together in the manuscript.
Minor issues
The abstract of the paper implies that there is a direct, one-way relationship between genAI and open science: genAI affects open science practices. However, the relationship is better characterized as reciprocal. The section titled "Can OS open up GenAI" is a good start toward acknowledging this, but the authors would serve the scope of the paper more fairly by stating it in the abstract as well. Yes, new technology affects open science practices, but those technologies are often built upon, predicated on, or dependent on open science. The limitations of genAI's model training, for instance, can be greatly mitigated with open data that is transparently communicated to users. Greater explication of this nuance would improve the scope of the paper.
UNESCO's definition of open science is widely adopted and a very good one. The paper, however, does discuss limitations of genAI for equity based upon the UNESCO framework. It would be worthwhile mentioning that the UNESCO definition does not address equity directly, focusing instead on inclusiveness. The official US federal definition, however, does address equity directly, and the authors could help readers make the connection between open science and the equity issues raised by generative AI by referencing that definition, at least as a complementary construct.
One issue in terms of the use of AI in science communication, particularly in publications such as the Frontiers rat-gate paper that the authors cite, is whether AI is the issue here at all. It seems reasonable that accelerated review models, such as those promised by for-profit publishers like Frontiers, might be the real problem. A lack of editorial and reviewer oversight, coupled with reviewers' lack of training to spot genAI output, seems symptomatic of a much larger problem with the review process than with generative AI and its widespread availability. This isn't just a human-in-the-loop problem; it is a market problem, with for-profit publishing being misaligned with the public good of science. Certainly, generative AI magnifies this problem, and it will become increasingly incumbent on reviewers and editors to make more difficult decisions about editorial process throughout all of scholarly communication, but this process will not be helped at all until the incentives for quality review are disentangled from bottom lines. I think it is very clear how this fits into the UNESCO framework, and it would be good for the authors to reflect a bit on the implications of having generative AI interface with a seemingly broken peer review system.
I absolutely love the recommendation section, and I think it is very much needed for the scientific community. It would also be helpful to add a section on the use of AI by researchers, not just as a precondition of using AI but as an active condition. Such recommendations might include:
- Transparently communicate that you have used generative AI, and explain when, where, and how, including what prompts were used and whether any selection among the results was done by the user;
- Describe any known limitations of the generative AI model used;
- Attempt to replicate an approximation of the results using more than one model for robustness.
Competing interests
The author declares that they have no competing interests.