Inclusion or Illusion: A Philosophical Reflection on the Use of Gender-Inclusive Language in ChatGPT
Listed in
This article is not in any list yet, why not save it to one of your lists.Abstract
This paper explores how ChatGPT-4.0 addresses gender inclusivity through a comparative analysis of English and Filipino text generation. Three types of prompts (neutral, biased, and explicitly gender-inclusive) were used to test whether AI-generated educational content reinforces or challenges gender stereotypes in depictions of teaching professionals. Findings show that while ChatGPT-4.0 reliably uses gender-neutral pronouns (e.g., “they/them” in English, “sila” and “siya” in Filipino), subtle stereotypes remain in culturally shaped descriptors, particularly those portraying female teachers as nurturing and caregiving. This reveals a gap between surface-level neutrality and deeper inclusivity, suggesting that true gender inclusiveness requires more than grammatical precision; it demands critical attention to cultural assumptions embedded in language. The paper highlights practical implications for educators, noting that meaningful inclusivity in AI-generated content can be achieved only through culturally sensitive prompting and ongoing evaluation to ensure fair representation of all genders in educational contexts.