junebug-facereadings

One thing I found super interesting from the readings was the tension between wanting facial recognition programs to be more inclusive by removing the algorithmic bias that exists within the technology, and the problems that arise when better facial recognition lands in the hands of an oppressive and racist society. Maybe the color bias in existing facial recognition programs is a good thing under our current racially biased justice and prison system: it provides a temporary advantage to the very people this technology will be used to target. I'm just disappointed that our reality is one where we have to debate whether to fight for inclusivity in a technology that could endanger the lives of the very people we are fighting for inclusivity for.

The second thing I found interesting is the whole concept of how we (as a human civilization) are now moving into uncharted territory with our innovations and improvements in technology. Whether something is moral or not is a debate we have about many new technologies. In my Designing Human-Centered Software course, we discussed the morals and ethics of A/B testing. Generically, the idea of A/B testing violates no one's privacy or morals; Google's 41 shades of blue test for its search button is a classic example. However, depending on how a test is conducted or what it is testing, it can quickly become immoral (Facebook's "Mood Manipulation Study," for example). It just makes me wonder what the future will look like given how much further we plan on incorporating technology into our lives; will our boundaries for privacy change in about 30 years? Will our definitions of moral and ethical change too?
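To make concrete why a generic A/B test feels so benign, here is a minimal sketch of the mechanics, assuming a common hash-based bucketing approach; the function name, IDs, and the 41-shade example are my own illustration, not something from the readings or from Google's actual system:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    """Deterministically assign a user to one variant of an experiment.

    Hashing (experiment, user_id) together means each user always sees
    the same variant, and different experiments bucket independently.
    Hypothetical sketch; real systems add logging, ramp-up, etc.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Toy version of a shade-of-blue test: 41 variants of one color.
shades = [f"blue_{i:02d}" for i in range(41)]
print(assign_variant("user-123", "link-color", shades))
```

Mechanically it is just random-looking assignment of a cosmetic variant, which is why it reads as harmless; the ethics only enter through what the variants are and what the experiment measures.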