Blog 11

This week’s reading was related to AI. Something interesting arising from AI-based assistive technology is the influence of our own biases. Because AI learns from the data we feed it, both the data itself and the process used to decide which data to use allow biases to seep in. An upsetting example I remember from a few years ago: a Black couple took a picture, and their image came up when people searched for gorilla pictures. Another example is that security cameras and face detection technology have far higher error rates when identifying the faces of people of color.

These weren’t intentional choices made by some asshole in some random cubicle, but rather a combination of the numerous errors and biases baked into our society, which we then fed to the algorithm. In one of my design classes we learned that even cameras themselves were designed to capture and reproduce lighter skin better, which meant darker skin tones required overcorrection and faces were harder to photograph well.

While it’s easy to be upset at Google, the algorithm, or technology for having these “biases,” even a quick glance makes it obvious that they are only reflecting what we as a society have already created and enforced. AI is almost like a child: we teach it everything by feeding it what we know, yet when it comes out a certain way we get mad, even though we were its direct creators.
