redBERT
This article has been reviewed by the following groups:
Listed in
- Evaluated articles (ScreenIT)
Abstract
A natural language processing (NLP) approach was used to uncover the issues and sentiments surrounding COVID-19 on social media and to better understand how public opinion fluctuates during wide-scale panic, with the aim of guiding improved decision making. A sentiment analyser was built to automatically extract COVID-19-related discussions using topic modelling, and a BERT model was used for sentiment classification of COVID-19 Reddit comments. These findings highlight the value of studying such trends and of applying computational techniques to assess the human psyche in times of distress.
Article activity feed
Please note, not all rigor criteria are appropriate for all manuscripts.
Table 1: Rigor
NIH rigor criteria are not applicable to paper type.

Table 2: Resources

Software and Algorithms

| Sentences | Resources |
| --- | --- |
| "This section breaks down the methods used to achieve this study's main contributions, proposing a topic model based on unsupervised learning with a collaborative deep-learning model that draws on BERT to analyse COVID-19 related comments in various subreddits." | suggested: BERT (RRID:SCR_018008) |
| "BERT-LARGE is trained mainly on raw text data from Wikipedia (3.5B words) and a free book corpus (0.8B words) [2]." | suggested: Wikipedia (RRID:SCR_004897) |
| "BIOBERT [21] and SCIBERT [22] are trained using the same unsupervised training techniques as the main models (MLM/NSP/SOP)." | suggested: BioBERT (RRID:SCR_017547) |

Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).
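The resources above centre on BERT-family models for classifying comment sentiment. A hedged sketch of that step using the Hugging Face `transformers` library follows; the checkpoint name and the example comments are illustrative assumptions, since the paper fine-tunes BERT itself rather than using this off-the-shelf sentiment model.

```python
# Hedged sketch: sentiment classification of Reddit-style comments with a
# pretrained transformer. The checkpoint is a stand-in for the authors'
# fine-tuned BERT model; the comments are invented examples.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

comments = [
    "So relieved the vaccine rollout is finally speeding up!",
    "I'm exhausted and scared by this endless lockdown.",
]
results = classifier(comments)
for comment, r in zip(comments, results):
    print(f"{r['label']:>8}  {r['score']:.2f}  {comment}")
```

Each result is a dict with a `label` and a confidence `score`, which can then be aggregated over time to track fluctuating public sentiment.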
Results from LimitationRecognizer: An explicit section about the limitations of the techniques employed in this study was not found. We encourage authors to address study limitations.

Results from TrialIdentifier: No clinical trial numbers were referenced.
Results from Barzooka: We did not find any issues relating to the usage of bar graphs.
Results from JetFighter: We did not find any issues relating to colormaps.
Results from rtransparent:
- Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
- Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
- No protocol registration statement was detected.