Practices of studying with AI chatbots: How university students actually use ChatGPT and co
Abstract
This study addresses a significant gap in our understanding of how university students actually use AI chatbots such as ChatGPT. While existing literature has explored students’ attitudes, the acceptance and adoption of AI tools, and potential implications for academic integrity, little is known about students’ concrete practices of using AI chatbots. We conducted an exploratory, qualitative study involving focus group interviews with 61 students at a German university. Drawing on sociological practice theory, particularly structuration theory, we show how students use (and do not use) AI chatbots very reflexively and often in highly elaborate ways. We identify five generalized practices of usage: students use AI chatbots as a support tool (e.g., for language editing), as a learning facilitator, as sole author (students simply copy-paste AI-generated text), as first author (students substantially modify AI-generated text), and as second author (students outline their own ideas and arguments and let the AI write them up). Hence, beyond mere copy-pasting, students often co-author their works with AI chatbots, raising the question of originality. Furthermore, we reveal the related norms and interpretative schemes. Further findings include that the non-usage of AI chatbots has become a source of self-validation and pride for some students. We also find a continued necessity for students to acquire academic competencies. Our findings have significant implications for university policies. Measures such as mandatory disclosure of AI use, reliance on AI detection tools, or outright bans may prove ineffective and problematic, and might even exacerbate existing inequalities.