There are two components to this Assignment:
- Readings on Faces (Due Monday 3/14)
- The Augmented Body: Interactive Software (Due Wednesday 3/16)
1. Readings (Due 3/14)
Please watch/read the following media for Monday 3/14. This should take under an hour:
- Zach Lieberman, Más Que La Cara Overview (~12 minute read)
- Kyle McDonald, Appropriating New Technologies: Face as Interface (~15 minute read)
- Last Week Tonight: Face Recognition (21 minutes)
- Joy Buolamwini: How I’m fighting bias in algorithms (9 minutes)
- Nabil Hassein, Against Black Inclusion in Facial Recognition (~5 minutes)
In a blog post, please jot down two ideas, facts, claims, or images from the above media that struck you, and briefly write a sentence about why you found each of them interesting. Title your blog post nickname-facereadings, and categorize it 06-FaceReadings.
2. The Augmented Body (Due 3/16)
For this project, you are asked to write software that creatively interprets, uses, or responds to the actions of the body. Put another way, you are asked to develop an interesting treatment of real-time motion-capture data, obtained with a camera, from a person's face, hands, and/or other body parts.
Consider whether your treatment is a kind of entertainment, or whether it serves a ritual purpose, a practical purpose, or operates to some other end. It may be a game. It may be a tool for puppeteering, or a musical instrument, or a tool for drawing. It could visualize or obfuscate your personal information. It could be a costume that allows you to assume a new identity, inhabiting something nonhuman or even inanimate. It may have articulated parts and dynamic behaviors. It may blur the line between self and others, or between self and not-self. This list is not exhaustive.
To share your project in documentation, you will record a short video in which you use it. Design your software for a specific “performance”, and plan your performance with your software in mind; be prepared to explain your creative decisions. Rehearse and record your performance.
Now:
- Create your project using p5.js and Handsfree.js, and post it in Exercise #28 on our OpenProcessing classroom. Some links to code for template projects are provided below. Alternatively, you may use ShaderBooth if you would like to learn GLSL shader programming.
- Create a blog post for your project on this website. Title it nickname-AugmentedBody and categorize it 06-AugmentedBody.
- Enact a brief performance-demo that makes use of your software. Be deliberate about how you perform, demonstrate, or use your software. Consider how your performance should be tailored to your software, and your software should be tailored to your performance.
- Document your “performance” with a video and/or animated GIF. Embed these media in the blog post.
- Document your project with one or more screenshots; embed these in the blog post.
- Embed a photo of any paper sketches you made for your project.
- Write a paragraph or two (100-200 words) that explains and evaluates your project.
Template code for accessing hands, face, and body pose information can be found in the following sketches:
- Face+Hands+Body at Editor.p5js.org (https://editor.p5js.org/golan/sketches/0Sho5V1KY)
- Face+Hands+Body at OpenProcessing (https://openprocessing.org/sketch/1503759)
- Gesture Recognition (Rock-Paper-Scissors) at Editor.p5js.org (https://editor.p5js.org/golan/sketches/4UggchtU-)
- Crappy Blink Detection at Editor.p5js.org (https://editor.p5js.org/golan/sketches/d-JfFcGws)
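For orientation, here is a small illustrative sketch (separate from the templates above) showing the kind of data these trackers expose. It assumes Handsfree.js 8.x is loaded via a script tag, as in the templates; the multiHandLandmarks field mirrors MediaPipe's results object, and field names may differ in other versions, so verify them against the template code:

```js
// Minimal p5.js + Handsfree.js hand-tracking sketch.
// Assumes Handsfree.js 8.x is loaded via a <script> tag, as in the
// templates above. The multiHandLandmarks field mirrors MediaPipe's
// results object; verify the field names against your library version.
let handsfree;

function setup() {
  createCanvas(640, 480);
  handsfree = new Handsfree({ hands: true }); // enable the hand tracker
  handsfree.start(); // starts the webcam and the model
}

function draw() {
  background(0);
  const hands = handsfree.data.hands;
  if (hands && hands.multiHandLandmarks) {
    // Each detected hand is an array of 21 landmarks, normalized to 0..1.
    for (const hand of hands.multiHandLandmarks) {
      for (const pt of hand) {
        circle(pt.x * width, pt.y * height, 8);
      }
    }
  }
}
```

The same pattern applies to face and full-body pose data; consult the template sketches above for the exact field names each tracker provides.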
Some Creative Opportunities
The following suggestions, which are by no means comprehensive, are intended to prompt you to appreciate the breadth of the conceptual space you may explore.
You could make a body-controlled game. An example is shown above; Lingdong Huang made this “Face Powered Shooter” in 60-212 in 2016, when he was a sophomore. Another example, Face Pinball, is shown below.
Face Pinball: Use your face to play our new pinball-inspired game! Raise and lower your eyebrows to control the flippers, then open your mouth to swallow the ball and reach the next level. Download the first iPhone X only game here: https://t.co/fnLSIJ8eVe
— Nexus Studios (@nexusstories) July 30, 2018
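As a sketch of how such a face-controlled game mechanic might be wired up, here is a Pong-like toy under the same Handsfree.js 8.x assumptions as the template code above; landmark index 1 is assumed to be the nose tip, so verify it against the MediaPipe FaceMesh landmark map:

```js
// Hedged sketch of a face-controlled game: the horizontal position of the
// (assumed) nose tip steers a paddle that bounces a ball.
let handsfree;
let ball = { x: 320, y: 100, vx: 3, vy: 3 };
let paddleX = 320;

function setup() {
  createCanvas(640, 480);
  handsfree = new Handsfree({ facemesh: true });
  handsfree.start();
}

function draw() {
  background(0);

  // Steer the paddle with the face, if one is detected.
  const faces = handsfree.data.facemesh;
  if (faces && faces.multiFaceLandmarks && faces.multiFaceLandmarks[0]) {
    const nose = faces.multiFaceLandmarks[0][1]; // assumed nose-tip index
    paddleX = (1 - nose.x) * width; // mirror so motion feels natural
  }

  // Move and bounce the ball.
  ball.x += ball.vx;
  ball.y += ball.vy;
  if (ball.x < 0 || ball.x > width) ball.vx *= -1;
  if (ball.y < 0) ball.vy *= -1;
  if (ball.vy > 0 && ball.y > height - 30 && abs(ball.x - paddleX) < 60) {
    ball.vy *= -1; // bounced off the paddle
  }
  if (ball.y > height) { ball.x = width / 2; ball.y = 100; } // missed: reset

  circle(ball.x, ball.y, 20);
  rect(paddleX - 60, height - 20, 120, 10);
}
```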
You could make a mask. Costumes, masks, cosmetics, and digital face filters allow the wearer to fit in or act out. We dress up, hide or alter our identity, play with social signifiers, or express our inner fursona. We use them to ritualistically mark life events or spiritual occasions, or simply to obtain “temporary respite from more explicit and determinate forms of sociality, freeing us to interact more imaginatively and playfully with others and ourselves.”
You could make a sound-responsive costume. For example, you could develop a piece of interactive, real-time audiovisual performance software (perhaps similar to Setsuyakurotaki, 2016, by Zach Lieberman + Rhizomatiks).
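A minimal sound-responsive starting point uses p5.AudioIn from the p5.sound library. Here the microphone level simply drives a circle's size; in a costume, you might instead map that level onto a shape anchored to a face landmark:

```js
// Minimal sound-responsive sketch using the p5.sound library:
// the microphone's amplitude drives the diameter of a circle.
let mic;

function setup() {
  createCanvas(640, 480);
  mic = new p5.AudioIn();
  mic.start();
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture to enable audio
}

function draw() {
  background(0);
  const level = mic.getLevel();                 // 0..1 mic amplitude
  const d = map(level, 0, 0.3, 20, 400, true);  // loudness to diameter
  circle(width / 2, height / 2, d);
}
```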
You could make a creativity tool, like a drawing program. In 2019, Design junior Eliza Pratt built this eye-tracking drawing program in 60-212. Similarly, Mary Huang made a project that uses her face to control the parameters of a typeface.
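A hedged sketch of a face-as-pen drawing tool, under the same Handsfree.js and assumed nose-tip-index conventions as the examples above:

```js
// Face-as-pen drawing sketch: the (assumed) nose-tip landmark leaves a
// trail. Other face measurements (mouth openness, head tilt) could be
// mapped to color or stroke weight.
let handsfree;
let prev = null;

function setup() {
  createCanvas(640, 480);
  background(255);
  handsfree = new Handsfree({ facemesh: true });
  handsfree.start();
}

function draw() {
  const faces = handsfree.data.facemesh;
  if (faces && faces.multiFaceLandmarks && faces.multiFaceLandmarks[0]) {
    const nose = faces.multiFaceLandmarks[0][1]; // assumed nose-tip index
    const x = (1 - nose.x) * width; // mirrored for natural movement
    const y = nose.y * height;
    if (prev) line(prev.x, prev.y, x, y);
    prev = { x, y };
  } else {
    prev = null; // lift the pen when the face is lost
  }
}

function keyPressed() {
  if (key === ' ') background(255); // spacebar clears the canvas
}
```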
You may capture more than one person. Your software doesn't have to be limited to just one body. Instead, it could visualize the relationship (or create a relationship) between two or more bodies (as in Scott Snibbe's Boundary Functions or this sketch by Zach Lieberman). It could visualize or respond to a duet. It could visualize the interactions of multiple people's bodies, even across the network (for example, one of Char's templates transmits shared skeletons, using PoseNet in a networked Glitch application).
You may focus on just part of the body. Your software doesn’t need to respond to the entire body; it could focus on interpreting the movements of a single part of the body (as in Emily Gobeille & Theo Watson’s prototype for Puppet Parade, which responds to a single arm).
You may focus on how an environment is affected by a body. Your software doesn’t have to re-skin or visualize the body. Instead, you can develop an environment that is affected by the movements of the body (as in Theo & Emily’s Weather Worlds).
You may control the behavior of something non-human. Just because your data was captured from a human doesn't mean you must control a human. Just because your data is from a hand doesn't mean it has to control a representation of a hand. Consider using your data to puppeteer an animal, monster, plant, or even a non-living object (as in this research on "animating non-humanoid characters with human motion data" from Disney Research, and in this "Body-Controlled Head" (2018) by 60-212 student Nik Diamant). Here's a simple sketch for a quadruped that is puppeteered by your hand (here).
You could make software that is analytic or expressive. While some of your peers may choose to "interpret the actions of the human body" by developing a character animation or an interactive software mirror, you might instead elect to create an "information visualization" that presents an ergonomic analysis of the body's movements over time. Your software could present comparisons of different people making similar movements, or could track the accelerations of a violinist's movements.
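As a starting point for that kind of analysis, here is a hedged sketch that estimates the frame-to-frame speed of the wrist (MediaPipe hand landmark 0) and plots it as a scrolling graph; differencing the speed samples would give acceleration. The same Handsfree.js 8.x assumptions apply:

```js
// Simple movement-analysis sketch: plot the per-frame speed of the wrist.
let handsfree;
let prev = null;
let speeds = [];

function setup() {
  createCanvas(640, 480);
  handsfree = new Handsfree({ hands: true });
  handsfree.start();
}

function draw() {
  background(0);
  const hands = handsfree.data.hands;
  if (hands && hands.multiHandLandmarks && hands.multiHandLandmarks[0]) {
    const wrist = hands.multiHandLandmarks[0][0]; // landmark 0 = wrist
    const x = wrist.x * width;
    const y = wrist.y * height;
    if (prev) speeds.push(dist(x, y, prev.x, prev.y)); // pixels per frame
    prev = { x, y };
    circle(x, y, 12);
  } else {
    prev = null;
  }
  if (speeds.length > width) speeds.shift(); // one sample per pixel column

  // Plot speed over time along the bottom of the canvas.
  stroke(255);
  for (let i = 1; i < speeds.length; i++) {
    line(i - 1, height - speeds[i - 1] * 4, i, height - speeds[i] * 4);
  }
  noStroke();
}
```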
You could make something altogether unexpected. Above is a project, What You Missed (2006), by CMU student Michael Kontopoulos. Michael built a custom blink detector, and then used it to take photos of the world that he otherwise missed when blinking.
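In the spirit of the "Crappy Blink Detection" template linked above, a naive blink detector can simply threshold the vertical distance between two eyelid landmarks. The indices below (159 for the upper lid and 145 for the lower lid of one eye) are commonly cited for the MediaPipe FaceMesh, but treat them, and the threshold, as assumptions to verify against the landmark map and your own camera:

```js
// Naive blink detection: threshold the gap between two (assumed) eyelid
// landmarks of the MediaPipe FaceMesh, in normalized coordinates.
let handsfree;

function setup() {
  createCanvas(640, 480);
  handsfree = new Handsfree({ facemesh: true });
  handsfree.start();
}

function draw() {
  background(0);
  const faces = handsfree.data.facemesh;
  if (faces && faces.multiFaceLandmarks && faces.multiFaceLandmarks[0]) {
    const f = faces.multiFaceLandmarks[0];
    const openness = dist(f[159].x, f[159].y, f[145].x, f[145].y);
    const blinking = openness < 0.012; // threshold: tune per camera/face
    textSize(32);
    fill(255);
    text(blinking ? 'blink!' : 'eyes open', 20, 40);
  }
}
```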
Cheese by Christian Moeller is an experiment in the "architecture of sincerity". On camera, six actresses each tried to hold a smile for as long as they could, up to one and a half hours. Each ongoing smile was scrutinized by an emotion-recognition system, and whenever the display of happiness fell below a certain threshold, an alarm alerted them to show more sincerity. The performance of sincerity is hard work.