From Joy Buolamwini’s talk: “1 in 2 adults in the U.S. have their face in facial recognition networks”… a terrifying fact because, as she says, these networks are very often wrong. Misidentifying someone in the context of policing and the justice system takes this to an entirely new level of terrifying. There are many people out there who, because they do not know how these systems work (or do, but know that others don’t), take them to be foolproof and factual, using these “facts” to leverage their goals.
In Kyle McDonald’s Appropriating New Technologies: Face as Interface, he describes how “Without any effort, we maintain a massively multidimensional model that can recognize minor variations in shape and color,” going further to reference a theory that “color vision evolved in apes to help us empathize.” I found this super interesting and read the article it linked to. The paper, published by a team of California Institute of Technology researchers, “[suggested] that we primates evolved our particular brand of color vision so that we could subtly discriminate slight changes in skin tone due to blushing and blanching.” This is just so funny to me: we are such emotional and empathetic creatures.
One interesting fact I came across while reading Kyle McDonald’s lecture is that researchers found that when people merely simulated the expressions of certain emotions, their bodies physiologically reacted as if they were truly experiencing those emotions. It goes to show that the phrase “fake it till you make it” really has some truth to it.
A link in this lecture points to a Microsoft site that encourages users to submit facial data to enable “a seamless and highly secured user experience”. Allowing facial recognition and tracking is an interesting trade-off: while it does ease the use of certain technologies, it also adds a more diverse set of faces to a larger dataset, making the dataset more reliable and less biased. This helps advance the inclusion of a wider range of faces, which would lower the discrimination that is currently an issue in much facial recognition software. However, a lack of recognition can also protect people in situations such as policing. Having unbiased datasets for facial recognition is both a good and a bad thing depending on what the set is used for, so it is interesting to see arguments for both the benefits and disadvantages of a more advanced and robust dataset.
The most striking and disheartening thing about “Against Black Inclusion in Facial Recognition” is the realization that the system itself is so broken that people would rather face the racism of machines unable to detect their faces than face the racism that would occur if the machines were able to detect their faces. The fact that people would rather not be included at all to protect themselves… it really makes you think.
I really enjoy Last Week Tonight, so any excuse to watch it I will take! In one part, there’s a Russian TV presenter demonstrating the app FindFace, under the scenario where you see a pretty girl at a coffee shop and are too nervous to approach her. Apparently, all you have to do is take a photo of her and wait for the FindFace results that will bring up her “profile.” Whether that’s Instagram, Facebook, or FindFace’s own hypothetical social media platform (I don’t know), it is TERRIFYING! I mean, what a way to empower the creeps of this world!
As technology advances in these ways, it should really only be in the hands of ethical people… who I don’t really think exist among the elites who would be making and accessing this technology. In fact, this reminds me of a demonstration of deepfakes, where whichever company had developed it (someone recognizable like Microsoft or Sony, but I can’t remember) showed how, after gathering a bunch of samples of someone’s voice, you could type in whatever text you wanted and have it said perfectly in that person’s voice. They said that, though they had developed the technology well, they would not be releasing it to the public… for obvious reasons!
The reading Against Black Inclusion in Facial Recognition was very interesting to me because it is the first time I have encountered an opinion against the inclusion of nonwhite faces in facial recognition software. After reading, the opinion only makes sense – if the tool is made by and for the oppressor, then it makes sense not to want to be included in the software’s scope. This stuck out to me because it is easy to get caught up in the hype of modern technology and want to be part of it, but this reading raises the very important point that we should stay aware that the effects of this modern technology can be very harmful.
In Last Week Tonight’s Face Recognition video, it is mentioned that facial recognition no longer applies only to humans. As an example, it demonstrates how sensors can scan fish and apply image processing to identify the fish as well as any relevant symptoms. This struck me because I had never thought about this idea. I believe it opens a myriad of opportunities, not only for creative practices but also for real-life problem solving, such as protecting endangered animals. Personally, as a pet owner, it would be helpful to have pet recognition technology: it could detect and share a pet’s location with its owner to minimize pet loss.
The idea of algorithmic bias itself was very striking to me, as I hadn’t deeply thought about the negative sides of face technology before. More specifically, Against Black Inclusion in Facial Recognition by Nabil Hassein discusses facial recognition’s racial bias and its use by law enforcement. Though the technology can be used for good, the article raises the point that it could strengthen police control and thereby deepen ongoing racial bias. This made me wonder if facial recognition technology is “really” good and if there are ways to ensure that it is used solely for its benefits.
Kyle McDonald – Appropriating New Technologies
“What we discovered is that that expression alone is sufficient to create marked changes in the autonomic nervous system” (Paul Ekman)
I thought this was interesting because it seems like human emotion is usually dictated by some internal mechanism within the brain, and being able to induce emotion purely through physical motion was not something I’d considered before.
Nabil Hassein – Against Black Inclusion in Facial Recognition
“Machine learning researchers have even reinvented the racist pseudoscience of physiognomy, in a study claiming to identify criminals with approximately 90% accuracy based on their faces alone — using data provided by police.”
I thought the arguments the author discussed were very interesting; the quote above reminded me how important it is to understand the societal impact of any piece of technology, especially when it can perpetuate racist ideas, rather than simply focusing on innovation or technical mastery.
Against Black Inclusion & How I’m Fighting Bias
The discussion around algorithmic justice (especially with facial recognition) has been at the center of attention for a while now. I’ve thought a lot about it and concluded the following:
- I don’t think we’ll solve the problem by eliminating the technology. Technological advancement has been a constant in human history, except in a few cases involving mass destruction (biological, chemical, nuclear weapons) or major ethical challenges (human cloning). Facial recognition and its applications are not qualitatively different from many other biometric systems, such as fingerprinting.
- Though facial recognition is more powerful, it is not as qualitatively disruptive as technologies such as human cloning. Thus, the technology itself should not be the subject of concern: it merely reveals and amplifies existing systemic injustice and oppression. For example, one of the articles mentioned that the U.S. is planning to use facial recognition to track everyone who leaves the country. However, the same tracking system (implemented with fingerprinting) has long existed in the U.S. for non-citizens and in China for all citizens. With qualitatively different and destructive technologies, it would make more sense to control the technology itself, as its destructive quality is independent of its ownership and usage. In this case, it makes more sense to direct the fight for justice towards those who wield the weapon (the state, the government, capital, etc.) than towards the weapon itself (the technology).
- I’m by no means downplaying the potential harm of misusing technology as a tool of oppression. As someone who has lived in China and seen how technology is fast-tracking the blatant violation of basic human rights in Xinjiang and Tibet (and many other places around the world), I’ve witnessed the destructive power of information technology. This is not new: IBM’s punch-card and computer systems found some of their earliest uses in the Holocaust and greatly improved the efficiency of ethnic cleansing. However, compared to other technologies, information technology does provide hope for a more open-source, democratic distribution of power. Unlike military and nuclear power, which are held exclusively by the state, we see more individuals and NGOs leveraging information technology to exert influence on a previously unimaginable scale (a recent example is the involvement of civilian hackers in the Russian invasion of Ukraine; decentralized cryptocurrency is another). Though alarmed by its potential harm, I also remain optimistic about the future of technology and democracy.
Happy Things by Kyle McDonald & Theo Watson (link)
Dug a bit into this project and I love the concept – it posts a screenshot of your computer whenever you smile (with the implicit assumption that something on your screen made you smile). Sadly, most of the screenshots showcased are of people testing out the app 🙁 I’d want to “live” with it for a while.
Useful tip: Aligning eye position to do face averaging.
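The eye-alignment tip can be sketched in code. This is a minimal illustration, not taken from the lecture: it assumes we only have 2-D landmark coordinates (two eye centers plus one extra point per face), and it aligns each face with a similarity transform that maps both eye centers onto fixed template positions before averaging. All names and coordinates here are hypothetical.

```python
import numpy as np

def eye_align_transform(left_eye, right_eye,
                        template_left=(0.35, 0.4), template_right=(0.65, 0.4)):
    """Build a similarity transform (scale + rotation + translation) that maps
    the detected eye centers onto fixed template positions in a [0,1] frame."""
    # Treat 2-D points as complex numbers: a similarity transform is z -> a*z + b.
    s = complex(*left_eye), complex(*right_eye)
    d = complex(*template_left), complex(*template_right)
    a = (d[1] - d[0]) / (s[1] - s[0])   # rotation + uniform scale
    b = d[0] - a * s[0]                 # translation

    def apply(points):
        p = np.asarray(points, dtype=float)
        z = p[..., 0] + 1j * p[..., 1]
        w = a * z + b
        return np.stack([w.real, w.imag], axis=-1)
    return apply

# Two hypothetical faces: eye centers plus a mouth landmark, in pixel coordinates.
face1 = {"left": (100, 120), "right": (160, 118), "mouth": (130, 180)}
face2 = {"left": (80, 90), "right": (150, 95), "mouth": (115, 150)}

# Align each face so its eyes land on the template, then average the landmarks.
aligned = []
for f in (face1, face2):
    transform = eye_align_transform(f["left"], f["right"])
    aligned.append(transform([f["left"], f["right"], f["mouth"]]))
avg = np.mean(aligned, axis=0)  # rows: left eye, right eye, mouth
```

Because every face’s eyes are pinned to the same two template points, the averaged eye rows come out exactly at the template positions, and only the remaining features (here, the mouth) actually blend. In a real face-averaging pipeline the same transform would be used to warp the images themselves, not just the landmarks.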
Last Week Tonight / Against Black Inclusion … / How I’m Fighting Bias …
Reflecting on why exactly we need to be wary of applying facial recognition / “coptech.”
1) As we rely on facial recognition more, the times it is wrong become disastrous. An infrastructure (a technology so embedded/hidden in the way we do things that it’s invisible) becomes visible precisely at the points where it breaks/fails.
2) The benefit is not worth the invasion of privacy. There’s always going to be some terrible human who has the power to abuse access to personal information and will do it (like the Russian app demo’d for creeping on women, or finding excuses to arrest BLM protesters).
3) It reinforces existing privileges in our society. Lack of inclusion/recognition of black faces is proposed as a form of privacy protection by Hassein, but as long as law enforcement uses facial recognition, exclusion only increases the risk of misidentification.
Agree with John Oliver then that we need laws that require a person’s permission to use facial recognition on them, or stop developing facial recognition entirely. And instead, let’s use it to make art! (I think someone notable said that it’s the best use of surveillance tech.)
Joy Buolamwini of the Algorithmic Justice League spoke about having to wear a white mask to get her own highly imaginative “Aspire Mirror” project, which relies on facial recognition, to perceive her existence. I didn’t realize that biases in AI would compound into so many projects that never intended to introduce any bias.
“In the early 1960s, Paul Ekman set out to discover whether facial expressions for communicating emotions are universal or cultural. He travelled the world with photographs of distinct faces, even traveling to remote locations, and found that they were interpreted consistently”. It seems obvious today that facial expressions are rooted in nature as opposed to nurture; I hadn’t realized that someone most likely had to prove it before it became common knowledge.
Zach Lieberman, Más Que La Cara Overview
I liked how their process went above and beyond what I was expecting. They hosted workshops with high schoolers to create cardboard masks, connecting with the community while also gathering ideas for the masks. I think this was really thoughtful and took into account the local culture’s perception of masks.
Last Week Tonight: Face Recognition
It is interesting how the governments of many large countries are investing in facial recognition technology to identify criminals, but the idea is terrifying when you consider a corrupt government, because the technology is extremely powerful: it can gather information about one individual or many similar individuals, and it could be abused in the wrong hands. I think that even if some countries implement laws and regulations on the technology, other countries will not, and the tools will still be abused.