Clearview AI really stood out to me since it's a technology that can be so easily abused. The unpreparedness of its CEO also concerned me, since this technology seems extremely dangerous in the wrong hands. This point really emphasizes the importance of ethical computing.
Algorithmic bias also really stood out to me, and it highlights the urgent need for diversity in the technology we develop.
One idea I found very interesting from the John Oliver video, though it was touched upon in a few other texts too, is that AI is commonly seen as purely objective. Often people will view the output of some AI as proof of a statement rather than as a product of inherent bias.
Another interesting point made in quite a few texts was the idea of universality in machine learning and AI. Many models are not trained on data from a wide range of cultural or racial backgrounds. The resulting systems then favor people of certain races or cultures over others, making their lives much easier when it comes to that piece of technology.
I didn’t know about Clearview.ai until I watched “Face Recognition” from Last Week Tonight with John Oliver. I was extremely surprised that a great number of people aren’t aware of the critical problems with these face recognition programs and with collecting data from online sources. Although I had thought about the possibility that strangers could access the photos on my SNS page, for example, and use them for purposes I never consented to, knowing that a company actively collects and analyzes photos from the internet really concerns me. I think I’ll be thinking twice before I upload anything to my SNS…
- Inaccuracy in face recognition leading to social justice issues
I learned from Joy Buolamwini’s TED Talk that inaccuracy in face recognition software could lead to severe consequences. I too remember laughing at inaccurately tagged names on social media, but as Buolamwini said, it is no longer something trivial to laugh over when it comes to crimes and suspects. People could be blamed for something they weren’t even aware of, at the most unexpected moments. I strongly agree that face recognition failing to recognize the faces of people of color reinforces the pre-existing racism in the country, and that this problem should be brought to the surface even more and fixed soon.
“Some judges use machine-generated risk scores to determine how long an individual is going to spend in prison.”
This is from Joy Buolamwini’s TED Talk and is the most shocking thing I encountered in the readings. It’s terrifying that the length of a person’s prison sentence can be determined by a machine-generated score based on factors they do not control. How is this even legal?
Face recognition is used extensively by law enforcement despite its inaccuracies.
This is a general idea I got from a couple of the readings. Why has face recognition been used by law enforcement and the government when its algorithms are known to be biased and inaccurate?
From watching John Oliver’s video, I was shocked by Clearview.ai’s use of facial recognition. The audacity of the company to undermine basic human rights and weaponize technology against civilians was horrifying. After reading another article about it, I found that Clearview.ai claims only an unverified “75% accuracy” in detecting faces. Not only is the company capable of scraping 3 billion photos against people’s will, but it also takes no responsibility for potentially misidentifying people and further perpetuating racial profiling. I wonder, then, whether the federal government is even capable of regulating these technologies, as previous hearings with tech giants have shown that lawmakers have little to no understanding of how technology operates.
After reading Nabil Hassein’s response to the Algorithmic Justice League, I found it interesting that Hassein would rather see anti-racist technology efforts put into resisting machine learning systems than into filling the gap in identifying Black faces. I wonder whether facial detection should never have been invented in the first place, since more effort needs to go toward combating the growing technology rather than helping to develop it further.