The evaluation of a novel tool to remotely assess visual acuity in chronic uveitis patients during the COVID-19 pandemic
This article has been reviewed by the following groups
Listed in
- Evaluated articles (ScreenIT)
Abstract
Background
Restrictions due to the recent COVID-19 pandemic catalysed the deployment of telehealth solutions. A novel web‐based visual acuity test, validated in a healthy population, may be of great value in the follow‐up of uveitis patients.
Objective
To determine the measurement accuracy of the unsupervised remote Easee web‐based visual acuity test in uveitis patients, when compared to a conventional in‐hospital assessment.
Methods
Cross‐sectional diagnostic accuracy study. Between April 2020 and September 2020, consecutive adult uveitis patients were invited for the web‐based visual acuity test (index test) within two weeks prior to their conventional in‐hospital assessment (reference test).
Results
A total of 269 patients were invited by mail, of whom 84 (31%) visited the website. Ultimately, 98 eyes met the criteria for statistical analysis. The mean difference between the two tests was small and non‐significant: 0.02 logMAR (SD 0.12, P = 0.085). The 95% limits of agreement ranged from ‐0.21 to 0.26 logMAR. No relevant differences in clinical characteristics were identified between subgroups with a good performance (i.e. difference between the tests ≤0.15 logMAR) or underperformance (i.e. difference >0.15 logMAR) on the web‐based test.
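The agreement statistics above (a mean difference, or bias, plus 95% limits of agreement) correspond to the standard Bland–Altman method, where the limits are computed as bias ± 1.96 × SD of the paired differences. A minimal sketch in Python with NumPy, using hypothetical paired logMAR scores rather than the study data:

```python
import numpy as np

def bland_altman(web, hospital):
    """Bland-Altman agreement statistics for paired measurements.

    Returns the mean difference (bias) and the lower/upper 95%
    limits of agreement, computed as bias +/- 1.96 * SD of the
    paired differences.
    """
    diffs = np.asarray(web, dtype=float) - np.asarray(hospital, dtype=float)
    bias = diffs.mean()
    sd = diffs.std(ddof=1)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired logMAR scores (illustration only, not study data)
web      = [0.00, 0.10, 0.20, 0.05, 0.30]
hospital = [0.02, 0.08, 0.18, 0.10, 0.25]
bias, lo, hi = bland_altman(web, hospital)
```

With the values reported above (bias 0.02 logMAR, SD 0.12), bias ± 1.96 × SD gives approximately ‐0.21 to 0.26 logMAR, consistent with the quoted limits of agreement.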
Conclusion
The web‐based visual acuity test is a promising tool for remotely assessing visual acuity in the majority of uveitis patients, which is especially relevant when access to ophthalmic care is limited. No association between patient‐ or uveitis‐related variables and (under)performance on the test was identified. These outcomes underline the potential of remote vision testing in other common ophthalmic conditions. Proper implementation of this web‐based tool in health care could be of great value for teleconsultations.
Article activity feed
- SciScore for 10.1101/2021.04.14.21255457:
Please note, not all rigor criteria are appropriate for all manuscripts.
Table 1: Rigor
Ethics
- Consent: All included patients were registered in the registry ‘Ocular inflammation, METC 17‐363’, which includes written informed consent for the use of data.
- Sex as a biological variable: not detected.
- Randomization: The mean difference can be interpreted as the systematic difference between the assessments (bias), and the 95% LoA as the range within which 95% of the differences between one assessment and the other fall (random error).
- Blinding: The health care providers were blinded to the subjects’ previous web‐based test results.
- Power Analysis: not detected.

Table 2: Resources
No key resources detected.
Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).
Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study: A limitation of the study design was that only patients who were willing to participate, and who were able to successfully complete the web‐based eye exam in their home environment, were included. The study design induced a participation bias towards digitally competent patients, often of a younger age, which might affect the generalizability of the study outcomes. Diagnostic accuracy might be poorer in less digitally competent patients, as remote exams might be performed incorrectly. However, it is important to bear in mind that our inclusion criterion of successful completion of the web‐based eye exam does not imply adequate performance. In other words: a patient can ‘finish’ an exam (at home) without conducting it correctly. In this study, these patients were included in our analyses as well, as information on outliers is important for interpreting the measurement accuracy and identifying patient characteristics that relate to poor performance. Another possible limitation of the design is the interval between the two assessments. Ideally, two compared VA assessments should be conducted within as short a time interval as possible, to prevent clinical changes from impacting the observed differences. We consider the mean interval in our study to be fairly short (mean 5 days ± 3 days). In addition, patients were instructed to redo the web‐based exam or contact the research team if they experienced a change in visual acuity between performing the web‐based eye exam and their hospital appoin...
Results from TrialIdentifier: No clinical trial numbers were referenced.
Results from Barzooka: We did not find any issues relating to the usage of bar graphs.
Results from JetFighter: We did not find any issues relating to colormaps.
Results from rtransparent:
- Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
- Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
- No protocol registration statement was detected.