Evaluating convergence between two data visualization literacy assessments
Abstract
Data visualizations play a crucial role in communicating patterns in quantitative data, making data visualization literacy a key target of STEM education. However, it is currently unclear to what degree different assessments of data visualization literacy measure the same underlying constructs. Here, we administered two widely used graph comprehension assessments (Galesic & Garcia-Retamero, 2011; Lee, Kim, & Kwon, 2016) to both a university-based convenience sample and a demographically representative sample of adult participants in the United States (N = 1,113). Our analysis of individual variability in test performance suggests that overall scores are correlated between assessments and associated with the amount of prior coursework in mathematics. Yet further exploration of individual error patterns suggests that these assessments probe somewhat distinct components of data visualization literacy, and we do not find evidence that these components correspond to the categories that guided the design of either test (e.g., questions that require retrieving values rather than making comparisons). Together, these findings suggest opportunities for the development of more comprehensive assessments of data visualization literacy, organized around components that better account for detailed patterns of behavior.