Assessing the Viability of AI Custom Bots as Data Collection Instruments: A Pilot Study

Abstract

This pilot study explores the feasibility of using AI-powered custom chatbots as assessment data collection instruments. A custom bot developed on the Chatbase.co platform was deployed to conduct semi-structured interviews assessing AI competency, collecting 322 user interactions over seven months with a 38.5% completion rate. The study examines the advantages and limitations of this approach relative to traditional assessment methods. AI-driven assessments demonstrated significant cost efficiency (40% less expensive than paper-based assessments and 90% less than oral interviews) while enabling automated analysis of qualitative and quantitative responses. Conversation logs revealed distinct patterns in human-AI interaction, particularly in prompt crafting, iterative refinement, feedback-seeking, and ethical awareness. Users demonstrated varying levels of cognitive engagement, with high performers exhibiting sophisticated prompting strategies and metacognitive adaptability. Preliminary psychometric evaluation showed strong reliability (Cronbach's alpha = 0.94) and construct validity aligned with established frameworks. Key challenges included variability in user engagement, concerns about assessment fairness, and limited transparency in scoring mechanisms. The study contributes to understanding AI's potential in data collection methodologies and offers insights for developing more responsive assessment instruments that foster metacognitive reflection through real-time feedback loops.
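The reliability figure cited in the abstract (Cronbach's alpha = 0.94) is computed from the variance of individual item scores relative to the variance of total scores. The sketch below shows the standard formula applied to a small matrix of hypothetical respondent-by-item ratings; the data are illustrative only and do not reproduce the study's dataset.

```python
import numpy as np

def cronbach_alpha(item_scores) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                               # number of items
    item_variances = item_scores.var(axis=0, ddof=1)       # per-item sample variance
    total_variance = item_scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 1-5 ratings: 5 respondents, 4 items (not the study's data)
scores = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Values above roughly 0.9, as reported in the study, indicate that the instrument's items measure the underlying construct consistently.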
