bookooBread – Readings on Faces

From Joy Buolamwini’s talk: “1 in 2 adults in the U.S. have their face in facial recognition networks”… a terrifying fact because, as she says, these networks are very often wrong. Misidentifying someone in the context of policing and the justice system takes this fact to an entirely new level of terrifying. There are many people out there who, because they do not know how these systems work (or do, but know that others don’t), take them to be foolproof and factual, using these “facts” to leverage their goals.

In Kyle McDonald’s Appropriating New Technologies: Face as Interface, he describes how “Without any effort, we maintain a massively multidimensional model that can recognize minor variations in shape and color,” going on to reference a theory that “color vision evolved in apes to help us empathize.” I found this super interesting and read the article it linked to. The paper, published by a team of California Institute of Technology researchers, “[suggested] that we primates evolved our particular brand of color vision so that we could subtly discriminate slight changes in skin tone due to blushing and blanching.” This is just so funny to me; we are such emotional and empathetic creatures.

spingbing-facereadings

One interesting fact I came across while reading Kyle McDonald’s lecture is that researchers found that when people merely simulated the expressions of certain emotions, their bodies physiologically reacted as if they were truly experiencing those emotions. It goes to show that the phrase “fake it till you make it” really has some truth to it.

A link in this lecture points to a Microsoft site which encourages users to submit facial data to enable “a seamless and highly secured user experience.” Allowing facial recognition and tracking is an interesting trade-off: while it does ease the use of certain technologies, it also adds a more diverse set of faces to a larger dataset, making that dataset more reliable and less biased. This helps advance the inclusion of a wider range of faces, which in turn lowers the discrimination that is currently an issue in much facial recognition software. However, a lack of recognition can also protect people in situations such as policing. An unbiased facial recognition dataset is both a good and a bad thing depending on what it is used for, so it is interesting to see arguments for both the benefits and disadvantages of a more advanced and robust dataset.

bumble_b-AugmentedBody

Scotty Dog Simulator


I started this project pretty late, so I knew I had to stick with something pretty simple. I had a few ideas about making filters of School of Drama professors and their iconic caricatures, but I don’t really like them and hate the idea of idolizing them. Plus, it wouldn’t really land with our class, most of whom don’t know them. I was pretty set on doing a filter, since I had a feeling it’d be simple, so I thought a little longer and landed on our adorable Scotty dog mascot, keeping with the Carnegie Mellon theme I originally had!

Since I really want to learn how to make games (and also needed to add some complexity to a very simple idea), I added a little start screen and some bagpipe background music (that ended up stuck in my head for like a whole freaking day).

I also decided to add a bark sound every time the user opened their mouth, which I accomplished by calculating the distance between a point on the top lip and a point on the bottom lip. That’s definitely my favorite part of the project now.
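For anyone curious, here is a minimal sketch of how such a mouth-open check might look (an illustration, not my exact code): it assumes ml5.js facemesh predictions, where keypoints 13 and 14 sit on the inner upper and lower lip in the MediaPipe Face Mesh topology, and a hypothetical bark.mp3 sound file.

```javascript
// Sketch of a mouth-open bark trigger, assuming ml5.js facemesh predictions.
// Keypoints 13 and 14 are the inner upper/lower lip in the MediaPipe mesh.
let barkSound;
let mouthWasOpen = false;
const OPEN_THRESHOLD = 12; // pixels; tune for your camera distance

function preload() {
  barkSound = loadSound('bark.mp3'); // requires the p5.sound library
}

// call this once per frame with the detected face
function maybeBark(face) {
  const [topX, topY] = face.scaledMesh[13]; // top lip
  const [botX, botY] = face.scaledMesh[14]; // bottom lip
  const mouthIsOpen = dist(topX, topY, botX, botY) > OPEN_THRESHOLD;
  if (mouthIsOpen && !mouthWasOpen) barkSound.play(); // bark once per opening
  mouthWasOpen = mouthIsOpen;
}
```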

Something I didn’t think about, and therefore did not give myself enough time to implement, was how to scale the filter based on the user’s distance from the camera. Also, if they turn their head, the filter does not rotate with them. When I got the feeling something was wrong, I opened Snapchat and experimented with their filters, seeing how they scaled and rotated perfectly! That’s definitely where this project has fallen short, and I wish I had started earlier to give myself that time.
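For reference, a common way to get that scaling and rotation (a sketch of the general technique, not Snapchat’s actual method) is to derive both from two eye keypoints: the distance between the eyes gives a scale factor, and the angle between them gives the head tilt.

```javascript
// Sketch: scale the filter with apparent face size and rotate it with head
// tilt, using two eye keypoints from a face tracker. BASE_EYE_DIST is a
// hypothetical calibration constant.
const BASE_EYE_DIST = 90; // eye spacing in pixels at a "normal" distance

function drawFilter(leftEye, rightEye, filterImg) {
  const [lx, ly] = leftEye;
  const [rx, ry] = rightEye;
  const s = dist(lx, ly, rx, ry) / BASE_EYE_DIST; // bigger when closer
  const angle = atan2(ry - ly, rx - lx);          // in-plane head tilt
  push();
  translate((lx + rx) / 2, (ly + ry) / 2);        // anchor between the eyes
  rotate(angle);
  scale(s);
  imageMode(CENTER);
  image(filterImg, 0, 0);
  pop();
}
```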

Here is an early process photo where my scotty dog kind of looked more like a cat than a dog…


starry – AugmentedBody

My project utilized the limbs and body to control the movement of a forest. I wanted to explore movement in inanimate objects, not just a single tree, and I think natural subjects make it easier to build visual complexity, since I could copy-paste the trees to create a forest. By moving their arms back and forth, the user can simulate the movement of branches, and their distance from the camera determines the size of the sun and the visibility of the ground.
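One plausible way to wire up that distance mapping (a sketch assuming ml5.js PoseNet keypoints, not necessarily how the project did it): use the shoulder spacing as a proxy for how close the user is, and map() it to the sun’s size and the ground’s opacity.

```javascript
// Sketch using ml5.js PoseNet keypoints: shoulder spacing stands in for
// how close the user is, and map() converts it into scene parameters.
function sceneParams(pose) {
  const l = pose.leftShoulder;  // {x, y, confidence}
  const r = pose.rightShoulder;
  const shoulderDist = dist(l.x, l.y, r.x, r.y); // grows as the user approaches
  // clamp with map()'s final argument so extremes don't blow up the scene
  const sunRadius   = map(shoulderDist, 50, 300, 20, 150, true);
  const groundAlpha = map(shoulderDist, 50, 300, 255, 0, true);
  return { sunRadius, groundAlpha };
}
```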

I liked how it turned out visually, but I didn’t like the user interaction, since the frame rate was pretty bad and made the movement look very laggy. I also wanted to process the user’s movement somehow to create smoother-looking motion, similar to wind moving through trees, but couldn’t figure out how to do so.
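A common fix for exactly this is to smooth each tracked point with an exponential moving average; a minimal sketch using p5’s lerp():

```javascript
// Smooth each tracked coordinate with an exponential moving average so
// jittery pose estimates turn into slower, wind-like sway.
let smoothX = null;
let smoothY = null;
const EASE = 0.15; // 0..1; lower = smoother but laggier

function smoothPoint(rawX, rawY) {
  if (smoothX === null) { smoothX = rawX; smoothY = rawY; }
  smoothX = lerp(smoothX, rawX, EASE);
  smoothY = lerp(smoothY, rawY, EASE);
  return [smoothX, smoothY];
}
```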

video

bumble_b-facereadings

The most striking and disheartening thing about “Against Black Inclusion in Facial Recognition” is the realization that the system itself is so broken that people would rather face the racism of machines unable to detect their faces than face the racism that would occur if the machines were able to detect their faces. The fact that people would rather not be included at all to protect themselves… it really makes you think.

I really enjoy Last Week Tonight, so any excuse to watch it I will take! In one part, there’s a Russian TV presenter demonstrating the app FindFace, under the scenario where you see a pretty girl at a coffee shop and are too nervous to approach her. Apparently, all you have to do is take a photo of her and wait for the FindFace results that will bring up her “profile.” Whether that’s Instagram, Facebook, or FindFace’s own hypothetical social media platform (I don’t know), it is TERRIFYING! I mean, what a way to empower the creeps of this world!

As technology advances in these ways, it should really only be in the hands of ethical people… who I don’t really think exist among the elites who would be making and accessing this technology. In fact, this reminds me of a demonstration of deep fakes, where whichever company had developed it (someone recognizable like Microsoft or Sony or something, but I can’t remember) showed how, after gathering a bunch of samples of someone’s voice, you could type in whatever text you wanted and it would be said perfectly in that person’s voice. They said that, though they had developed the technology well, they would not be releasing it to the public… for obvious reasons!

merlerker-AugmentedBody

My project is quite simple and silly: a “nose isolator” that finds your nose and masks everything else. Bodies are strange, and I appreciate projects that acknowledge that universally-felt, awkward but intimate relationship we have with our bodies, like Dominic Wilcox’s “Tummy Rumbling Amplification Device” [link] and Daniel Eatock’s “Draw Your Nose” [link]. Isolating the nose has the effect of forcing you to confront a body part that you’ve probably felt self-conscious about at some point in your life, and allowing it to become an endearing little creature in itself. Though it’s a simple project and treatment, I feel it’s successful in creating a delightful and different relationship with your body. I’m proud of the conceptual bang-for-buck: it’s an important exercise for me to let go of perfection and overambitious projects that never end.

Originally I was trying to apply the nose isolator to scenes from films, but I got frustrated trying to get handsfree.js to run on a <video> source and to do the correct mirroring and translating to get everything to line up. Instead, I created a performance using my own nose that leans into the nose-as-creature idea.
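For what it’s worth, the usual p5.js mirroring pattern looks something like this (a generic sketch, assuming a p5 video capture and a tracked nose position, not the project’s actual code):

```javascript
// Typical p5.js webcam mirroring: flip the canvas to draw the video, then
// flip tracked x-coordinates to match. 'video', 'noseX', and 'noseY' are
// assumed to come from createCapture() and a face tracker.
function draw() {
  push();
  translate(width, 0);
  scale(-1, 1); // draw the video mirrored, like a real mirror
  image(video, 0, 0, width, height);
  pop();
  // any keypoint from the tracker must be mirrored the same way:
  const mirroredX = width - noseX;
  circle(mirroredX, noseY, 20);
}
```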

kong-AugmentedBody

My initial idea was to represent the homesickness I was feeling by utilizing the distance between one’s face and hand: the face would represent self and the hand would represent home. I wanted to play with the distance between the face and the hand to foster different interactions. 

While playing around with the connections, I came across the idea of recreating the period cramps I feel, as I am currently on my period. Based on one’s face and hand movement, various lines stretch, shrink, and tangle with one another. When another hand enters, the lines switch over to the other side, representing the sudden pains that occur.
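A minimal sketch of that face-to-hand line idea (one possible reading, assuming tracker keypoints already mapped into canvas coordinates):

```javascript
// Draw lines from face landmarks to hand landmarks so they stretch and
// shrink as either moves. Assumes keypoints ({x, y}) already mapped into
// canvas coordinates by the tracker.
function drawCramps(facePts, handPts) {
  stroke(200, 30, 60);
  for (let i = 0; i < facePts.length; i += 30) { // subsample the face mesh
    const f = facePts[i];
    const h = handPts[i % handPts.length];
    line(f.x, f.y, h.x, h.y); // length follows the face-hand distance
  }
}
```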

As this was a quick project, I believe there is a myriad of ways to extend it. For instance, I could bring in points from the body to not only create more interactions but also present more diverse forms. Further, instead of using lines, I could incorporate shapes such as triangles and make use of gradients to build a 3D object.

duq-AugmentedBody


In this project I tried to make a program that would track the positions of your hands (using the code given to us) and attach generative fire to your fingertips on the screen. I used several different parameters to try to generate realistic fire, determining how rapidly it would ascend, how spread out it would get, and how much smoke it produced. Overall, I am happy with how I was able to turn the idea behind my project into reality, but the program runs very slowly, making the fire move upwards far more slowly than real fire would. I tried to fix this issue by using pixels instead of circles, but it continued to run slowly. Something I wish I could have added was the option to have fire come out of only your index finger when you close your other fingers, as this would allow you to draw with the fire, but I really did not know how to begin implementing this.
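A minimal version of that kind of fingertip fire (a sketch of the general particle-system technique in p5.js, not this project’s implementation) might look like:

```javascript
// Each frame, spawn particles at a fingertip; particles rise, spread, and
// fade from flame orange toward smoke gray as they age.
let particles = [];

function emitFire(tipX, tipY) {
  for (let i = 0; i < 5; i++) {
    particles.push({
      x: tipX + random(-3, 3),
      y: tipY,
      vx: random(-0.5, 0.5), // horizontal spread
      vy: random(-3, -1),    // ascent speed
      life: 255,
    });
  }
}

function updateFire() {
  noStroke();
  for (const p of particles) {
    p.x += p.vx;
    p.y += p.vy;
    p.life -= 6;
    const smokiness = map(p.life, 255, 0, 0, 1);
    const c = lerpColor(color(255, 120, 0), color(80), smokiness);
    c.setAlpha(p.life);
    fill(c);
    circle(p.x, p.y, 8);
  }
  particles = particles.filter((p) => p.life > 0); // drop dead particles
}
```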

Sneeze-AugmentedBody

My concept for the Augmented Body project was to have an altered way of speaking. I wanted people to type in their words and have them come out of their mouth without having to actually voice them. I wanted to make this because sometimes writing or typing words is easier than saying them. When you open your mouth and press a letter on the keyboard, multiple copies of that letter float out of your mouth.

I wanted the things floating out of your mouth to be words, but I ended up doing single letters instead. As I started coding for words, I ran into lots of problems (which probably could be fixed if I had more time). I was not able to get full words as input, since I had a difficult time combining input text boxes with the webcam display (every time I had the input text box on screen, it would cover the webcam, and vice versa). I settled for detecting single key presses and outputting those instead of full words. Now if the user wants to type a word, they must read the floating letters from farthest from the mouth to closest (so, disjointed letters).
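A minimal sketch of that floating-letter mechanic (one possible implementation, assuming p5.js and a face tracker that supplies a mouth position):

```javascript
// keyPressed() spawns copies of the typed letter at the mouth position;
// each copy drifts away from the mouth every frame. mouthX/mouthY are
// assumed to be updated from the face tracker.
let letters = [];
let mouthX = 320;
let mouthY = 240;

function keyPressed() {
  if (key.length === 1) {         // printable characters only
    for (let i = 0; i < 3; i++) { // "multiple copies of that letter"
      letters.push({
        ch: key,
        x: mouthX,
        y: mouthY,
        vx: random(0.5, 2),
        vy: random(-1, 1),
      });
    }
  }
}

function drawLetters() {
  textSize(24);
  fill(0);
  for (const l of letters) {
    l.x += l.vx;
    l.y += l.vy;
    text(l.ch, l.x, l.y);
  }
}
```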