Monitoring Racial Disparities in the Era of Artificial Intelligence


Abstract

This paper uses primary findings from an AI face-image generator to analyse how racially inclusive the program is. Of 100 rendered results, 77% of the faces generated were white, while the program failed to render a single black face. The paper calls for algorithmic auditing at the state level to ensure algorithmic inclusivity. Failure to rectify this problem may yield AI algorithms that are not sufficiently suited to the South African context and its applications. A further consideration is whether these algorithms are safe for public dissemination. The paper rests on the position that strong policy and state instruments are required to prevent the encroachment of technology that has not been sufficiently vetted for public use. It critically engages with the already prevalent discourse on bias mitigation and examines ways people could bypass regulatory mechanisms such as the European Union’s Artificial Intelligence Act. This critical perspective helps assess whether the proposed measures are sufficient.