Looking Outward 11 – Racial Biases in Artificial Intelligence

In this article, Meilan Solly discusses ImageNet Roulette, a project by Trevor Paglen and Kate Crawford created to expose the deeply flawed and derogatory nature of AI-driven human categorization. The project took the form of an AI-driven identification tool that, when supplied with an image of a person, would return the category to which that image belonged (according to the algorithm). Categories, or identifiers, ranged from neutral terms to deeply problematic ones like ‘pervert’, ‘alky (alcoholic)’, and ‘slut’.
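
To make the mechanism concrete, here is a minimal Python sketch of the kind of pipeline such a tool sits on top of. It uses a stock, off-the-shelf torchvision classifier as a stand-in, not Paglen and Crawford's actual model (which drew on ImageNet's "person" categories); the file path "photo.jpg" and the model choice are illustrative assumptions, not details from the article.

```python
# Sketch of the general mechanism ImageNet Roulette exposes: a
# pretrained classifier maps an input image to exactly one of the
# labels it was trained on. Stand-in model, not the project's own.
import torch
from torchvision import models, transforms
from PIL import Image

# Off-the-shelf ImageNet classifier (assumption for illustration)
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

# Standard ImageNet preprocessing
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "photo.jpg" is a placeholder path for the uploaded image
img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    logits = model(img)

# The single "best guess" category -- the same kind of flat,
# context-free label ImageNet Roulette returned for photos of people
label_idx = logits.argmax(dim=1).item()
print(weights.meta["categories"][label_idx])
```

The point of the sketch is that the output is always a forced choice from a fixed, human-authored label set: whatever categories the training data contains, offensive or not, are the only answers the system can give.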

While the category names are disturbing in and of themselves, the trends of categorization were far more so. The algorithm generally sorted people of color and women into extremely offensive categories at a disproportionately high rate. While no type of person was entirely safe from a harmful identifier, the disparity was clear. Solly describes a trial of the algorithm by a Twitter user who uploaded photos of himself in various contexts and was only ever returned the tags “Black, Black African, Negroid, and Negro” (Solly). Paglen and Crawford have since removed ImageNet Roulette from the internet, given that it has “made its point” (Solly); however, it remains on view as an art installation in Milan.

[Image: ImageNet Roulette installation]

The implications of this project run deep. Arbitrary categorization of people may seem inconsequential on its own, but the underlying system it alludes to is the same one that runs in the background of active AI processes with real-world applications. Beyond this, the project also comments on online privacy, having used thousands of images sourced from various locations without consent.
