Clearview AI really stood out to me since it is a technology that can be so easily abused. The unpreparedness of its CEO also made me really concerned, since this technology seems extremely dangerous in the wrong hands. This point also emphasizes the importance of ethical computing.

Algorithmic bias also really stood out to me, and it highlights the urgent need for diversity in the technology we develop.


  1. Facial recognition algorithms used in policing are entirely shaped by their training data. Preexisting racial bias in police-provided data contributes to weaponizing facial recognition against poor people and minorities. Furthermore, because computer algorithms are perceived as “logical” and “correct,” they canonize those biases.
  2. I liked the example of toast fooling facial recognition algorithms. It reminds me of “adversarial machine learning,” where structured noise can completely fool a facial recognition model. It exposes the flaws in these systems and makes it clear they are far from perfect.
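The adversarial-noise idea mentioned above can be sketched in a few lines. This is a minimal toy, not the attack from the video: it uses a linear classifier on synthetic data (all dimensions, cluster centers, and the "identity" labels are illustrative assumptions), but the core trick is the same FGSM-style move of perturbing the input in the direction of the loss gradient, which for a linear model is just the weight vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two synthetic "identities": 16-pixel vectors clustered around -1 and +1.
X = np.vstack([rng.normal(-1.0, 1.0, (50, 16)),
               rng.normal(1.0, 1.0, (50, 16))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Fit a logistic-regression "recognizer" by plain gradient descent.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# Take the sample the model is MOST confident belongs to identity 0.
x = X[np.argmin(X @ w + b)]
margin = -(x @ w + b)          # distance from the decision boundary

# FGSM-style step: nudge every "pixel" by eps in the sign of the gradient.
# For a linear model, the smallest flipping L-inf step is margin / ||w||_1.
eps = 1.05 * margin / np.abs(w).sum()
x_adv = x + eps * np.sign(w)

print(f"before: {sigmoid(x @ w + b):.3f}  after: {sigmoid(x_adv @ w + b):.3f}")
```

Even the model's most confident prediction flips. In this 16-dimensional toy the per-pixel step is not small, but for real image-sized inputs (tens of thousands of pixels) the same budget spreads out into a perturbation too faint for a human to notice, which is why structured noise can fool a model while looking like nothing at all.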


One idea that I found very interesting from the John Oliver video, although it was touched on in a few other texts too, is that AI is commonly seen as purely objective. Often, people will treat the output of some AI as proof of a statement rather than as the product of inherently biased data.

Another interesting point made in quite a few texts was the idea of universality in machine learning and AI. Many models are not trained on data from a wide range of cultural or racial backgrounds. This favors people of a certain race or culture who would otherwise not be favored, and makes their lives much easier when it comes to this kind of technology.
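The lack-of-universality point above is easy to demonstrate numerically. The sketch below is entirely synthetic (the group names, the 95/5 split, and the feature shift are all illustrative assumptions): a simple threshold classifier is fit on data dominated by one group, then evaluated per group, and the underrepresented group sees clearly worse accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_group(n, shift):
    """Binary labels; feature = label signal plus a group-specific shift,
    modeling e.g. different capture conditions for different groups."""
    y = rng.integers(0, 2, n)
    x = rng.normal(2.0 * y - 1.0 + shift, 1.0, n)
    return x, y

# Group A dominates the training set (95%); group B is underrepresented
# and is captured under different conditions (a constant feature shift).
xa_tr, ya_tr = make_group(950, shift=0.0)
xb_tr, yb_tr = make_group(50, shift=2.0)
x_tr = np.concatenate([xa_tr, xb_tr])
y_tr = np.concatenate([ya_tr, yb_tr])

# "Training": pick the threshold that maximizes accuracy on the pooled data.
thresholds = np.linspace(x_tr.min(), x_tr.max(), 200)
accs = [((x_tr > t).astype(int) == y_tr).mean() for t in thresholds]
t_best = thresholds[int(np.argmax(accs))]

# Evaluate on fresh samples from each group SEPARATELY.
xa, ya = make_group(10_000, shift=0.0)
xb, yb = make_group(10_000, shift=2.0)
acc_a = ((xa > t_best).astype(int) == ya).mean()
acc_b = ((xb > t_best).astype(int) == yb).mean()
print(f"threshold={t_best:.2f}  accuracy A={acc_a:.2%}  accuracy B={acc_b:.2%}")
```

The pooled "best" threshold sits where it serves the majority group, so the model looks accurate on aggregate benchmarks while quietly failing the minority group, which is exactly why per-group evaluation (as in Buolamwini's audits) matters.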



I didn’t know about these issues until I watched “Face Recognition” from Last Week Tonight with John Oliver. I was extremely surprised that so many people aren’t aware of the critical problems with these face recognition systems and with collecting data from online sources. Although I had thought about the possibility that strangers might access my photos on my SNS page, for example, and use them for purposes I never consented to, knowing that there exists a company that actively collects and analyzes photos from online really concerns me. I think I’ll be thinking twice before I upload anything to my SNS…

  • Inaccuracy in face recognition leading to social justice issues

I learned from Joy Buolamwini’s TED Talk that inaccuracy in face recognition software could lead to severe results. I too remember laughing at inaccurately tagged names on social media, but just as Joy Buolamwini said, it is no longer something trivial to laugh over when it comes to crimes and suspects. People could be blamed for something they weren’t even aware of, at the most unexpected moments. I strongly agree that face recognition failing to recognize the faces of people of color reinforces the pre-existing racism in the country, and that this problem should be brought to the surface even more and fixed soon.


  1. Currently, there are little to no regulations around how facial recognition is used in law enforcement. As a result, facial recognition algorithms that might not even be very accurate are being used to identify suspects. Algorithmic bias makes this extremely dangerous.
  2. Inclusive code shouldn’t be an afterthought. The who, how, and why are extremely important when we code, because things like algorithmic bias can potentially ruin people’s lives.


  • From Brian Reverman’s Mask Around the World video, the narrator mentions the idea of masks used to celebrate the coming of seasons. The arrival of seasons is a concept so familiar to me that I completely neglected how special it is in some cultures. It’s also interesting to see masks try to portray something that isn’t tangible and make it wearable.
  • “Asians and African-Americans are 100 times more likely to be misidentified than white men.” In John Oliver’s video, he mentions this extremely alarming fact about the failures within facial recognition technology. Not only that, but these systems also struggle to distinguish between male and female faces. When this technology is used in ways it wasn’t designed for, the public is surely in danger. It worries me that people may use it in serious situations while misunderstanding its accuracy, resulting in harm to an innocent person.


“Some judges use machine-generated risk scores to determine how long an individual is going to spend in prison.”

This is from Joy Buolamwini’s TED Talk and is the most shocking thing I’ve heard from the readings. It’s terrifying that the length of people’s prison sentences can be influenced by machine-generated scores based on factors they do not control. How is this even legal?

Face recognition is used extensively by law enforcement despite its inaccuracies.

This is a general idea I got from a couple of the readings. Why has face recognition been used by law enforcement and the government when its algorithms are known to be biased and inaccurate?


  1. From John Oliver’s facial recognition video, I found facial recognition technology to be concerning and even frightening in the ways it could be exploited and used outside of entertainment. The facial recognition program is well developed, having collected over 3 billion photos of people from the internet. However, despite supposedly being used only by “law enforcement” agencies, the service is said to be used by businesses such as Kohl’s and Walmart, and has been implemented secretly in many other areas and countries. The number of ways facial recognition could be exploited is frightening, and it makes me question the functionality and legality of facial recognition technology.
  2. Why should we desire our faces to be legible for efficient automated processing by systems of their design? There’s no doubt that the development of this kind of technology has been made possible by data from millions, even billions, of people’s faces, and the idea that these systems have access to our faces could potentially result in the creation of new, oppressive technologies and new ways in which our faces could be exploited.


From watching John Oliver’s video, I was shocked by Clearview AI’s use of facial recognition. The audacity of the company to undermine basic human rights and weaponize technology against civilians was horrifying. After reading another article related to it, I found that the company has only an unverified “75% accuracy” in detecting faces. Not only is it capable of scraping 3 billion photos against people’s will, but it also takes no responsibility for potentially misidentifying people, further perpetuating racial profiling. I wonder, then, whether the federal government is even capable of regulating these technologies, as previous hearings with tech giants have shown that lawmakers have little to no understanding of how technology operates.

After reading Nabil Hassein’s response to the Algorithmic Justice League, I found it interesting that Hassein would rather see anti-racist technology efforts put into meddling with machine learning models than into filling the gap in identifying Black faces. I wonder whether facial detection should ever have been invented in the first place, as more effort needs to go into combating the growing technology rather than helping develop it further.