gregariosa-facereadings

From watching John Oliver’s video, I was shocked by Clearview.ai’s use of facial recognition. The audacity of the company to undermine basic human rights and weaponize technology against civilians was horrifying. After reading another article related to it, I found that Clearview.ai claims only an unverified “75% accuracy” in detecting faces. Not only are they capable of scraping 3 billion photos against people’s will, but they also take no responsibility for potentially misidentifying people, further perpetuating racial profiling. I wonder, then, whether the federal government is even capable of regulating these technologies, as previous hearings with tech giants have shown that lawmakers have little to no understanding of how technology operates.
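To make the scale of that error rate concrete, here is a hypothetical back-of-envelope in Python; the query volume is entirely invented, and the math naively treats the unverified 75% figure as a per-query match accuracy:

```python
# Hypothetical back-of-envelope: how many bad matches would an unverified
# "75% accuracy" produce at scale? The query volume below is invented.
queries_per_day = 1_000_000          # assumed, purely illustrative
claimed_accuracy = 0.75              # Clearview.ai's unverified figure
wrong_matches = queries_per_day * (1 - claimed_accuracy)
print(f"~{wrong_matches:,.0f} potential misidentifications per day")
```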

After reading Nabil Hassein’s response to the Algorithmic Justice League, I found it interesting that Hassein would rather see anti-racist technology efforts put into meddling with machine learning models than into closing the gap in identifying Black faces. I wonder whether facial detection should never have been invented in the first place, since more effort needs to go toward combating the growing technology rather than helping develop it further.

pinkkk-facereadings

"expression alone is sufficient to create marked changes in the autonomic nervous system. "

I'm most intrigued by this idea that a simple movement of our facial muscles actually influences our biological state. Yet overdoing an expression can leave trauma: if you smile too much for unnatural reasons, i.e. smiling because your boss told you to, you can develop a condition called smile mask syndrome. I'm very interested in exploring ways to create motivations for people to make certain facial expressions that leave them with a changed mental state.
From the talk by Joy Buolamwini:

The theme that what you do not see can matter more than what you do see is echoed throughout the talk. This is something I definitely had not recognized before, and I am very glad that I am now aware of this issue.

lampsauce-facereadings

  1.  At the end of the Kyle McDonald reading, there were images of people’s faces burned onto toast as examples of how facial recognition software can be fooled. I think it is interesting to explore these edge cases because it helps put the power of facial recognition in perspective: sure, it’s cool that this code can tell me from you, but it can’t tell me from a picture of me burned onto a piece of bread (a minimal detection sketch follows this list).
  2. An idea that struck me from Joy Buolamwini’s TED Talk is that as facial recognition gets more and more powerful, the edge cases mentioned before become a lot less funny; the implication of a piece of code not working for people with darker skin is that those people do not exist, which furthers inherent and existing biases in society.
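For anyone curious how easily a stock detector can be probed with edge cases like the toast images, here is a minimal sketch using OpenCV’s pretrained Haar cascade; the image filename is hypothetical, and this is classic face *detection*, not the identity-matching recognition Buolamwini discusses:

```python
# Minimal face-detection sketch with OpenCV's stock Haar cascade.
# Haar cascades match coarse light/dark patterns, which is why face-like
# textures (a face burned onto toast, say) can still trigger detections.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(image_path: str):
    """Return bounding boxes for everything the cascade calls a face."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    # Lowering minNeighbors makes the detector more eager, i.e. easier to fool
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(detect_faces("toast.jpg"))  # hypothetical test image
```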

junebug-facereadings

One thing I found super interesting from the readings was the conflict between wanting facial recognition programs to be more inclusive, removing the algorithmic bias that exists within technology, and the problems that will arise when better facial recognition technology is in the hands of an oppressive and racist society. Maybe the color bias in existing facial recognition programs is a good thing under our current racially biased justice and prison system, because it provides a temporary advantage to those who will be targeted with this technology. I’m just disappointed that our reality is one where we have to debate whether to fight for inclusivity in technology that could potentially endanger the lives of the very people we are fighting for.

The second thing I found interesting is the whole concept of how we (as a human civilization) are now moving into uncharted territory with our innovations and improvements in technology. Whether something is moral or not is a debate we have about many new technologies. In my Designing Human-Centered Software course, we discussed the morals and ethics of A/B testing. Generically, the idea of A/B testing in no way violates anyone’s privacy or morals – for example, Google’s 41 shades of blue test for their new Google search button. However, depending on how the test is conducted or what it is testing, it can quickly become immoral (for example, Facebook’s “Mood Manipulation Study”). It just makes me wonder what the future will look like when we imagine how much further we plan on incorporating technology into our lives; will our boundaries for privacy change in about 30 years? Will our definitions of moral/ethical change too?
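As a concrete reference for what the mechanics of a test like 41-shades-of-blue look like, here is a minimal sketch of deterministic variant assignment; the variant names and experiment label are hypothetical:

```python
# Minimal A/B bucketing sketch: hash each user ID into a stable bucket so
# the same user always sees the same variant across visits.
import hashlib

VARIANTS = ["blue_01", "blue_21", "blue_41"]  # hypothetical shade IDs

def assign_variant(user_id: str, experiment: str = "link_color") -> str:
    # Hash user + experiment name so buckets differ across experiments
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("user-123"))  # same output every call for this user
```

The ethics question lives outside code like this: the assignment step is identical whether the variants are button colors or emotionally manipulated news feeds.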

marimonda-facereadings

I tried to keep this somewhat short since I have a bad habit of writing essays on these blog posts:

  1. In law enforcement: I think one main topic covered in most of the readings was the use of facial recognition in law enforcement. The reason this topic is super interesting to me is that it’s literally one of the most dystopian applications of AI. It is not just seen with facial recognition; algorithms are even being used in decisions about who gets a longer sentence.
  2. Is being recognized a good thing? I think the whole section of the John Oliver video that focused on Clearview.ai was INSANE. It makes me wonder how this doesn’t already violate federal regulations or any sort of fair-use copyright, especially when multiple corporations like Facebook have requested that Clearview stand down. I was under the impression that this type of application of AI would be illegal, and it’s so terrifying to know that it’s not.