Investigation of Response Aggregation Methods in Divergent Thinking Assessments

Abstract

Divergent thinking (DT) ability is widely regarded as a central cognitive capacity underlying creativity. Scoring DT performance, however, is challenging, since DT tasks yield a variable number of responses of varying creative quality. Over the years, many different approaches to scoring DT tasks have been proposed, which differ in how individual responses are evaluated and how response scores are aggregated within a task. The present study aimed to identify methods that maximize psychometric quality while also reducing the confounding effect of DT fluency. We compared the traditional approaches of summative scoring and average scoring to more recent methods such as snapshot scoring as well as top- and max-scoring with varying numbers of top/max responses. We further explored the potential moderating roles of task complexity and metacognitive ability. A diverse sample of 300 participants was recruited via Prolific. Reliability evidence for DT scores was assessed in terms of internal consistency, and concurrent criterion validity was assessed in terms of correlations with real-life creative behavior, creative self-beliefs, and openness to experience. Findings confirm that alternative aggregation methods effectively reduce the confounding effect of DT fluency observed in summative scoring. Reliability tended to increase with the number of included responses, with three responses emerging as the minimum required for acceptable reliability. Convergent validity was highest for snapshot scoring and for max-scoring based on a moderate number of about three ideas.
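To make the aggregation rules concrete, the following is a minimal sketch of how per-response creativity ratings might be combined under summative, average, and max-scoring. The function name, interface, and the use of the mean of the k highest ratings for max-scoring are illustrative assumptions, not the article's implementation; snapshot scoring (a holistic rating of the whole idea set) and top-scoring (based on responses the participant selects as their own best) cannot be derived from per-response ratings alone and are therefore omitted.

```python
import numpy as np

def aggregate_dt_scores(scores, method="summative", k=3):
    """Aggregate per-response creativity ratings for a single DT task.

    Hypothetical helper: `scores` is a variable-length list of ratings,
    one per response; `k` is the number of responses used by max-scoring.
    """
    s = np.asarray(scores, dtype=float)
    if method == "summative":
        # Sum over all responses: rewards quantity, hence confounded with fluency.
        return s.sum()
    if method == "average":
        # Mean over all responses: controls for fluency, but adding mediocre
        # ideas to a few good ones lowers the score.
        return s.mean()
    if method == "max":
        # Mean of the k highest-rated responses (falls back to all
        # responses if fewer than k were produced).
        return np.sort(s)[::-1][:k].mean()
    raise ValueError(f"unknown aggregation method: {method!r}")

# Example: five responses rated on a 1-5 creativity scale.
ratings = [2, 4, 3, 5, 1]
print(aggregate_dt_scores(ratings, "summative"))   # 15.0
print(aggregate_dt_scores(ratings, "average"))     # 3.0
print(aggregate_dt_scores(ratings, "max", k=3))    # 4.0
```

Under this sketch, the fluency confound of summative scoring is visible directly: appending a low-rated response always raises the summative score, while it lowers the average score and leaves the max score unchanged.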
