Student Area

tale-mask

A little bit off from what I expected, but I’m satisfied with the amount of new things I’ve learned & tried.

Before motion (mouth open):

After motion (mouth open):

Original render image:

(Somehow one of the texture files looked different in Lens Studio than in Blender…)

mokka – mask

I tried to find an interesting way to make a fire ant look cool despite my irrational fear of them. God, I hate them so much.

I drew a design (painted in Procreate) for the clear mask that the ant rests on.

marimonda-mask

LINK TO PROJECT

marimonda face tracking

(The real gif)

It’s me!~

I am really homesick for a country I haven’t lived in for 9 years and this piece is about that.

The background is a drawing of La Iglesia de San Nicolas de Tolentino. 

My face is a marimonda.

My process/context about my guy is below the cut, since it’s mostly reflection on this body of work for myself.


pinkkk-07mask

My idea was to create a filter that requires a collaborative effort from the user and the user’s pet. There are two objects in the 3D space: a Minecraft pig that is fixed to the user’s head, and a duck that moves along with the pig but sits further forward in the z direction. The goal was to have both the user and the pet move their heads in the same direction at the same time, which I thought would be a small challenge but turned out to be a big one. So that was that.
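The Lens Studio setup itself isn’t shown here; as a rough sketch of how the duck could be kept in front of the head-bound pig, assuming a per-frame script (the object names and the z offset are placeholders, not from the project):

```js
// Lens Studio script sketch; object names and the z offset are placeholders,
// not taken from the project.
// @input SceneObject pig
// @input SceneObject duck
// @input float zOffset = 40.0

script.createEvent("UpdateEvent").bind(function () {
    // Follow the head-bound pig every frame, but stay further forward in z.
    var pigPos = script.pig.getTransform().getWorldPosition();
    script.duck.getTransform().setWorldPosition(
        new vec3(pigPos.x, pigPos.y, pigPos.z + script.zOffset));
});
```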

If you want to know how hard it is with a dog, you can watch this chaotic video:


gregariosa-mobiletelematic

SCRUB LOVE is a virtual scrubbing experience for two people to connect nonverbally, based on Korean bath house culture.

Going to a public bath house is fairly common in Korea. Oftentimes at a bath house, people scrub the backs of their loved ones, as it’s difficult to reach your back with your own hands. I wanted to emulate this affectionate experience on a digital platform, to help people connect in remote circumstances.

Experience it for yourself here.

Full Code

For this project, I created a (2) synchronous and (3) equal experience for two people. As my project was a fictional interpretation of a very tangible, physical experience, I had to revise quite a bit to capture the essence of the experience within the affordances of digital media.

I achieved this by having two people scrub each other’s backs synchronously and at equal positions, rather than having one scrub the other’s back asynchronously. As synchronous back-scrubbing between two people is impossible in real life, I think the digital experience became something unique in its own right, going beyond a digital replica of the physical experience.
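The post doesn’t describe the networking layer, so purely as an illustrative sketch, assuming a socket.io-style connection, the synchronous and equal structure could look something like this, with each client broadcasting its own scrub position and drawing its partner’s:

```js
// Illustrative only: the project's actual networking code isn't shown here.
// Assumes a socket.io-style server that relays "scrub" messages to the partner.
const socket = io();

let partnerScrub = null;   // where the partner is currently scrubbing my back

function mouseDragged() {
  // Both players perform the same action at the same time (synchronous, equal):
  // my drag scrubs the partner's back, and theirs scrubs mine.
  socket.emit('scrub', { x: mouseX / width, y: mouseY / height });
}

socket.on('scrub', (pos) => {
  partnerScrub = pos;      // normalized coordinates from the partner
});

function draw() {
  if (partnerScrub) {
    // Draw the partner's scrubbing on my side of the screen.
    circle(partnerScrub.x * width, partnerScrub.y * height, 30);
  }
}
```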

Overall, I’m satisfied with the final product, as I was able to maintain the original intention from start to finish. Figuring out the right animation for the noodles was really difficult, but the experimentation paid off in the end.

My initial ideas: I had four ideas in mind, based on what I thought would make a delightful experience:

  1. “Don’t look at me!” –  Every user is a person in a bath. If your eyes dart towards someone else’s body, you get some sort of punishment. Based on the awkward experience of being naked and being surrounded by other naked people in a Korean bath house.
  2. “Flashlight cursors” – Every cursor carries a yellow ball of light. Once enough cursors cluster, they reveal an image hidden behind a black screen.
  3. “Bike together” – A collaborative, side-scroller biking experience. Every user bikes on a “chain of bicycles” (like an extended tandem bike), jumping and avoiding obstacles by angling the phone. If one person fails to avoid the obstacle, the entire group fails the game.
  4. “Scrub together”* – A collaborative scrubbing experience. Based on the culture of scrubbing loved ones’ backs in a Korean bath house.

I ended up following through with the fourth direction, after receiving some positive feedback from my peers on the idea.

Screenshots from my process

I tried out various ways to animate the noodles. I ultimately chose to keep the noodles short and squiggly, striking the right balance between noodle and dead skin (a rough sketch of this approach appears after the reference images below).

Too noodle-like:

Too dead skin/worm-like:

*For reference, this is what the back scrubbing would look like in real life:
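As mentioned above, the noodles ended up short and squiggly; here is a minimal p5.js sketch of one way to draw a noodle like that (the segment count, length, and wobble values are guesses, not the project’s actual parameters):

```js
// Minimal sketch of a short, squiggly noodle drawn near a scrub point.
// All parameters are guesses; the project's real values aren't in the post.
function drawNoodle(x, y, seed) {
  noFill();
  beginShape();
  for (let i = 0; i < 8; i++) {                          // a handful of segments keeps it short
    const wobble = (noise(seed + i * 0.3) - 0.5) * 12;   // small sideways wiggle
    curveVertex(x + wobble, y + i * 4);
  }
  endShape();
}
```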

sweetcorn-mask

Children’s Stories

You are your own virtual teacher as you generate your own children’s stories. Open your mouth to show each new word.

I determine if the user is opening their mouth by comparing the distance between their lips to the distance between the top of their head and the bottom of their head.

Above is a very early demonstration of this in my debug-mode, which also shows the indices of the vertices that I use to draw the face. I am using Beyond Reality Face for face tracking.
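In code, that check looks roughly like this (a sketch, assuming the tracked landmarks arrive as an array of {x, y} points; the indices and the threshold are placeholders, not the real ones from the debug view):

```js
// Hypothetical landmark indices; the real ones come from the BRF debug view.
const UPPER_LIP = 62, LOWER_LIP = 66, CHIN = 8;

function isMouthOpen(points, headTop) {
  // headTop is the estimated top of the head (see the estimation described below).
  const lipGap     = dist(points[UPPER_LIP].x, points[UPPER_LIP].y,
                          points[LOWER_LIP].x, points[LOWER_LIP].y);
  const headHeight = dist(headTop.x, headTop.y, points[CHIN].x, points[CHIN].y);
  return lipGap / headHeight > 0.08;   // threshold is a guess
}
```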

Below, I had started filling in features using the given vertices. Most features were very simple to turn into shapes by wrapping beginShape() and endShape(CLOSE) around a for loop over the feature’s points, creating a curveVertex() at the coordinates of each point. A few features had to be estimated or otherwise fiddled with.
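The pattern for a typical feature is only a few lines (sketch; `points` stands for the array of landmark coordinates belonging to that feature):

```js
// Draw one facial feature as a closed curve through its landmark points.
function drawFeature(points) {
  beginShape();
  for (const p of points) {
    curveVertex(p.x, p.y);
  }
  endShape(CLOSE);
}
```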

For example, Beyond Reality does not supply the top of the head, so I estimated it using the width of the face and drew an ellipse. I had to angle it to match the angle of the face with atan2(). At those same angles, I added a couple hair arcs for bangs and a couple triangles for pigtails.
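A sketch of that estimation (the two side-of-face points and the ellipse proportions below are guesses):

```js
// Estimate the head as an ellipse sized from the face width and
// angled to match the face; proportions here are guesses.
function drawHeadTop(leftSide, rightSide) {
  const faceWidth = dist(leftSide.x, leftSide.y, rightSide.x, rightSide.y);
  const angle = atan2(rightSide.y - leftSide.y, rightSide.x - leftSide.x);
  push();
  translate((leftSide.x + rightSide.x) / 2, (leftSide.y + rightSide.y) / 2);
  rotate(angle);
  ellipse(0, 0, faceWidth, faceWidth * 1.2);
  pop();
}
```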

The nose vertices, as you can see in the debug-view, make up two curves: the bridge and the underside of the nose. I could have made the nose simply those two strokes, but I thought it would be kind of cute to create a triangle from the top of the bridge to the two ends of the nostrils.

There were no pupils given either, so I had to guess where those would be. Luckily that’s pretty predictable if I assume the character will be looking straight ahead. The center of each circle was placed at the average of the coordinates of two diagonal vertices in each eye. The radius of the pupil is some fraction of the distance between those two points. It looked kind of boring without eyelashes, so I added a line anchored at each eye vertex and angled it away from the pupil using atan2() and some simple math.
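The pupil placement reads almost directly as code (sketch; the radius fraction and lash length are guesses), and the blush circles described next reuse the same average-and-fraction pattern with different points:

```js
// Pupil: centered between two diagonal eye vertices, sized as a fraction
// of their distance; a lash line angled away from the pupil at one vertex.
function drawPupilAndLash(a, b) {
  const cx = (a.x + b.x) / 2;
  const cy = (a.y + b.y) / 2;
  const r  = dist(a.x, a.y, b.x, b.y) * 0.25;      // fraction is a guess
  circle(cx, cy, r * 2);
  // Lash anchored at eye vertex `a`, pointing away from the pupil center.
  const angle = atan2(a.y - cy, a.x - cx);
  line(a.x, a.y, a.x + cos(angle) * r, a.y + sin(angle) * r);
}
```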

The face still seemed a little bare at this point, so I added cheeks using calculations similar to those of the pupils. The center of the blush circle is the average of the coordinates of the side of the face and the bridge of the nose. The radius is some fraction of the distance between the two—this one mattered more than with the pupils, since as you turn your face, the size of one side becomes a lot larger than that of the other side.

The last thing I did was add a simple neck and body using a rectangle and a quadrilateral, with proportions relative to the head width. I had to use the width, rather than the height (the more intuitive option), as the height of the head changes greatly when you open your mouth and it would be silly to have the size of your neck and body change with it.

I then focused on text generation using RiTa.js, which I fed a bunch of children’s stories from this page of short stories for children. When you open your mouth, the next word of a generated sentence (stripped of punctuation) appears. I wrote these sentences with Lingdong’s p5.hershey.js. Originally, I had these words simply appearing, but it would have been a shame if I hadn’t utilized hershey fonts to at least some of their potential.
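A minimal sketch of that generation step, assuming RiTa’s markov-model API (the n-gram order and the variable holding the scraped stories are placeholders):

```js
// Sketch of the story generation; assumes RiTa's markov API and a
// `storiesText` string holding the scraped children's stories.
const markov = RiTa.markov(3);
markov.addText(storiesText);

let words = [];
let wordIndex = 0;

function newSentence() {
  const sentence = markov.generate();
  words = sentence.replace(/[^\w\s']/g, '').split(/\s+/);   // strip punctuation
  wordIndex = 0;
}

// Called whenever the mouth-open check fires: reveal the next word.
function revealNextWord() {
  if (wordIndex >= words.length) newSentence();
  return words[wordIndex++];
}
```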

Animating the text was one of the trickiest parts of this project. I handled it by modifying Lingdong’s code to add an extra putChar()-style function called putAnimatedChar(), which takes a time parameter and draws vertices only if the time elapsed is over a certain multiple of their index. With this, I could successfully draw words, but only by drawing all the words at the same time. To solve this, I made a boolean parameter for putText(), which determines if the text passed to it should be animated. I broke the text into two strings in an array: one of previous words and one of the current word. The former passes false and the latter passes true. I use estimateTextWidth() to determine the leftward translation of the previous words.
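The gist of that time-gating, separate from p5.hershey.js’s actual internals (the per-vertex delay is a made-up value, and the glyph is assumed to be available as an array of [x, y] stroke points):

```js
// Draws only the vertices whose index has "unlocked" given the elapsed time.
// `strokePoints` is assumed to be an array of [x, y] pairs for one character.
function putAnimatedChar(strokePoints, startMillis) {
  const elapsed = millis() - startMillis;
  beginShape();
  for (let i = 0; i < strokePoints.length; i++) {
    if (elapsed > i * 30) {                 // 30 ms per vertex is a guess
      vertex(strokePoints[i][0], strokePoints[i][1]);
    }
  }
  endShape();
}
```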

Following the advice of a surprising number of people, I made it my last endeavor to synthesize a voice for this character. I was reminded of the time Everest Pipkin told my EMS class about how Nintendo created Animalese in Animal Crossing, and also of the teacher’s voice in Charlie Brown, which Golan mentioned in office hours. I figured I could make something nice using Tone.js and formants for vowels, which I would obtain through RiTa’s getPhonemes(). I was somewhat successful in this, though the resulting sound is far less charming and far more sinister than I’d hoped. Below is a video that demonstrates this result.
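Mechanically, the vowel idea can be sketched with Tone.js like this (the formant frequencies below are textbook values for /a/, not the ones used in the project, and routing the formant filters in parallel is just one plausible shape):

```js
// A buzzy source run through parallel band-pass filters at formant
// frequencies; the /a/ values below are textbook estimates, not the project's.
const source = new Tone.Oscillator(110, "sawtooth");
const f1 = new Tone.Filter(730, "bandpass").toDestination();   // first formant of /a/
const f2 = new Tone.Filter(1090, "bandpass").toDestination();  // second formant of /a/
source.connect(f1);
source.connect(f2);

function speakVowel(seconds) {
  source.start();
  source.stop(Tone.now() + seconds);
}
```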

To be more in line with the affect I had hoped for, I implemented a slightly modified version of Josh Simmons’ Animalese.js project, in which he uses JavaScript audio and a .wav asset to create Animal Crossing-style speech based on text input. I slowed the voice down a bit and had to change a few things to make it compatible with my current program, but the conversion was simple enough. Each time your mouth opens, I feed the Animalese object the current word. It has a lot of charm, is very cute, and makes me much happier. Below is a video which contains the generated sound and a gif of one generated story.

Link to code

tale-facereadings

  • Clearview.ai

I didn’t know about Clearview.ai until I watched “Face Recognition” from Last Week Tonight with John Oliver. I was extremely surprised that a great number of people aren’t aware of how critical the problems with these face recognition systems, and with accessing and collecting data from online, really are. Although I had thought about the possibility that people I don’t know might access the photos on my SNS page, for example, and use them for purposes I haven’t consented to, knowing that there is a company that actively collects and analyzes photos from across the web really concerns me. I think I’ll be thinking twice before I upload anything to my SNS…

  • Inaccuracy in face recognition leading to social justice issues

I learned from Joy Buolamwini’s TED Talk that inaccuracy in face recognition software can lead to severe consequences. I too have a memory of laughing at inaccurately tagged names on social media, but just as Joy Buolamwini said, it is no longer something trivial that we can laugh over when it comes to crimes and suspects. People could be blamed for something they weren’t even aware of, at the most unexpected moments. I strongly agree that face recognition failing to recognize the faces of people of color reinforces the pre-existing racism in this country, and that this problem should be brought to the surface even more and fixed as soon as possible.

shoez-facereadings

  1. Currently, there are little to no regulations around how facial recognition is used in law enforcement. As a result, facial recognition algorithms that might not even be very accurate are being used to identify suspects. Algorithmic bias makes this extremely dangerous.
  2. Inclusive code shouldn’t be an afterthought. The who, how, and why are extremely important when we code, because things like algorithmic bias can potentially ruin people’s lives.