Addressing Gender Bias in Generative Large Language Models


Abstract

The examination of gender bias, alongside other demographic biases such as race, nationality, and religion, in generative large language models (LLMs) is increasingly capturing the attention of both the scientific community and industry stakeholders. These biases often permeate generative LLMs, influencing widely used products and potentially compromising user experiences. A growing body of research is dedicated to improving gender representation in natural language processing (NLP) across a spectrum of generative LLMs. This paper surveys current research on identifying and evaluating gender bias in generative LLMs. A comprehensive investigation is conducted to assess and mitigate gender bias across five distinct generative LLMs. The implemented mitigation strategies improve gender bias scores by up to 46% compared to zero-shot text generation. Additionally, we explore how different levels of LLM precision and quantization affect gender bias, providing insights into how technical factors influence bias mitigation strategies. By tackling these challenges and suggesting areas for future research, we aim to contribute to the ongoing discussion about gender bias in language technologies, promoting more equitable and inclusive NLP systems.
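To make the kind of evaluation described above concrete, the following is a minimal, self-contained sketch of how a gender bias score might be compared between zero-shot and mitigated generations. The metric (a simple gendered-pronoun imbalance), the prompt, and the example completions are all illustrative assumptions for this sketch and are not the evaluation protocol or mitigation method used in the paper.

```python
# Illustrative sketch only: a toy pronoun-imbalance bias score applied to
# model completions. The metric, prompt, and completions are assumptions
# for demonstration, not the paper's actual evaluation protocol.

import re
from collections import Counter

MALE_TERMS = {"he", "him", "his", "man", "men"}
FEMALE_TERMS = {"she", "her", "hers", "woman", "women"}

def gender_bias_score(texts: list[str]) -> float:
    """Return a score in [0, 1]: 0 = balanced gendered-term usage, 1 = fully skewed."""
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token in MALE_TERMS:
                counts["male"] += 1
            elif token in FEMALE_TERMS:
                counts["female"] += 1
    total = counts["male"] + counts["female"]
    if total == 0:
        return 0.0
    return abs(counts["male"] - counts["female"]) / total

# Hypothetical completions of the prompt "The engineer said that ..."
zero_shot_outputs = [
    "he would finish the design review by Friday.",
    "he had already tested the prototype.",
    "she wanted more time to validate the results.",
]

# Hypothetical completions after appending a debiasing instruction to the
# prompt (e.g. "use gender-neutral language"), again purely illustrative.
mitigated_outputs = [
    "they would finish the design review by Friday.",
    "they had already tested the prototype.",
    "they wanted more time to validate the results.",
]

print("zero-shot bias score:", gender_bias_score(zero_shot_outputs))   # ~0.33
print("mitigated bias score:", gender_bias_score(mitigated_outputs))   # 0.0
```

In practice, a study like this one would replace the hard-coded completions with outputs sampled from each of the evaluated LLMs (at different precision and quantization settings) and use a more robust bias metric, but the comparison structure would be the same.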
