miniverse-facereadings

  1. Facial recognition algorithms used in policing are only as good as their training data. Preexisting racial bias in police-provided data contributes to weaponizing facial recognition against poor people and minorities. Furthermore, because computer algorithms are perceived as “logical” and “correct,” they canonize those biases.
  2. I like the toast fooling the facial recognition algorithms. It reminds me of “adversarial machine learning,” where structured noise can completely fool a facial recognition model (a rough sketch of the idea is below). It points out the flaws in these systems and makes it clear that they are far from perfect.
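
A minimal sketch of one classic adversarial-ML attack, the Fast Gradient Sign Method (FGSM), just to illustrate how “structured noise” can flip a model's prediction. This is not the method from the reading; the pretrained ResNet-18 and the random tensor standing in for a face photo are placeholder assumptions.

```python
# FGSM sketch: perturb each pixel a small step in the direction that
# increases the model's loss, producing "structured noise" that can
# change the prediction while looking nearly identical to a human.
import torch
import torchvision.models as models

# Placeholder classifier (pretrained ImageNet ResNet-18), not a real face-recognition system.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for an input photo
true_label = torch.tensor([0])                           # stand-in for the correct class

# Compute the loss against the assumed correct label and backpropagate to the pixels.
loss = torch.nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# FGSM step: sign of the pixel gradients, scaled by a small epsilon.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("original prediction:   ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The perturbation is bounded by epsilon per pixel, so the adversarial image is visually almost indistinguishable from the original, which is exactly why these attacks undermine confidence in deployed recognition systems.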