ChatGPTest: A Mixed-Methods Study on AI-Generated Questionnaires Based on Theoretical Models
Abstract
This study employed a mixed-methods research design combining quantitative expert assessment with qualitative in-depth interviews to systematically examine the performance of AI-generated questionnaires based on single and integrated theoretical models, comparing them with manually created questionnaires. Under a single model, AI-generated questionnaires differed significantly from manually created ones in three dimensions: accuracy, comprehensiveness, and clarity. The AI showed limitations when handling complex and abstract variables, lacked the flexibility of human-designed questions, and tended to satisfy the requirements of the model rather than those of the research topic. Under integrated models, significant differences between AI-generated and manually created questionnaires appeared in four dimensions: accuracy, comprehensiveness, clarity, and redundancy. As model complexity increased, the likelihood of problems in the AI-generated questionnaires increased accordingly. However, regardless of whether single or integrated models were used, AI-generated questionnaires showed advantages in language fluency and generation efficiency and surpassed manually created questionnaires in objectivity. Researchers may therefore benefit from using AI to generate initial questionnaire drafts and then revising them manually. This study not only reveals the advantages and limitations of AI-generated questionnaires but also provides theoretical and practical guidance for researchers. Future research should improve AI's contextual understanding and generation logic to further enhance the effectiveness of AI-assisted questionnaire design.