Design and Evaluation of a Sound-Driven Robot Quiz System with Fair First-Responder Detection and Gamified Multimodal Feedback

Abstract

This paper presents the design and evaluation of a sound-driven robot quiz system that enhances fairness and engagement in educational human–robot interaction (HRI). The system integrates a real-time, sound-based first-responder detection mechanism with gamified multimodal feedback, including verbal cues, music, gestures, points, and badges. Motivational design followed the Octalysis framework, and the system was evaluated using validated scales from the Technology Acceptance Model (TAM), the Intrinsic Motivation Inventory (IMI), and the Godspeed Questionnaire. An experimental study with 32 university students compared the proposed system, which combines sound-driven first-responder detection with multimodal feedback, against a baseline that used sequential turn-taking and verbal-only feedback. Results revealed significantly higher scores for the experimental group on perceived usefulness (M = 4.32 vs. 3.05, d = 2.14), perceived ease of use (M = 4.03 vs. 3.17, d = 1.43), behavioral intention (M = 4.24 vs. 3.28, d = 1.62), and motivation (M = 4.48 vs. 3.39, d = 3.11). The sound-based first-responder detection achieved 97.5% accuracy and was perceived as fair and intuitive. These findings highlight the impact of fairness, motivational feedback, and multimodal interaction on learner engagement. The proposed system offers a scalable model for designing inclusive and engaging educational robots that promote active participation through meaningful and enjoyable interactions.
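The abstract does not specify how first-responder detection is implemented. As a rough illustration only, the sketch below shows one common approach under the assumption that each participant has a dedicated microphone channel: the first channel whose short-time RMS energy crosses a threshold is declared the first responder. The function name, buffer layout, and threshold values are hypothetical and are not taken from the paper.

```python
import numpy as np

def detect_first_responder(frames, threshold=0.05, frame_len=512):
    """Illustrative energy-onset heuristic (not the paper's method).

    frames: multi-channel audio buffer of shape (n_channels, n_samples),
            one channel per participant.
    Returns the index of the channel whose short-time RMS energy first
    exceeds `threshold`, or None if no channel crosses it.
    """
    n_channels, n_samples = frames.shape
    for start in range(0, n_samples - frame_len + 1, frame_len):
        window = frames[:, start:start + frame_len]
        rms = np.sqrt(np.mean(window ** 2, axis=1))  # per-channel energy
        hits = np.flatnonzero(rms > threshold)
        if hits.size > 0:
            # Tie-break within the same frame by picking the loudest channel.
            return int(hits[np.argmax(rms[hits])])
    return None
```

In practice, a deployed system would also need per-microphone gain calibration, debouncing, and cross-talk rejection to reach the accuracy reported in the study; the sketch only conveys the basic onset-detection idea.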
