sweetcorn-mask

Children’s Stories

You are your own virtual teacher as you generate your own children’s stories. Open your mouth to show each new word.

I determine if the user is opening their mouth by comparing the distance between their lips to the distance between the top of their head and the bottom of their head.
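The comparison above can be sketched as a pair of small helpers. The vertex names here are hypothetical; Beyond Reality Face supplies a flat list of tracked face vertices, from which the lip and head points would be picked out by index:

```javascript
// Ratio of lip gap to head height, as described above.
// All four arguments are {x, y} points from the face tracker.
function mouthOpenness(upperLip, lowerLip, headTop, headBottom) {
  const lipGap = Math.hypot(lowerLip.x - upperLip.x, lowerLip.y - upperLip.y);
  const headHeight = Math.hypot(headBottom.x - headTop.x, headBottom.y - headTop.y);
  return lipGap / headHeight;
}

// Treat the mouth as open once the gap passes some fraction of head
// height (the 0.08 threshold is an illustrative guess, not the
// project's actual value).
function isMouthOpen(openness, threshold = 0.08) {
  return openness > threshold;
}
```

Using a ratio rather than the raw lip distance keeps the test stable as the face moves closer to or farther from the camera.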

Above is a very early demonstration of this in my debug mode, which also shows the indices of the vertices I use to draw the face. I am using Beyond Reality Face for face tracking.

Below, I had started filling in features using the given vertices. Most features were simple to turn into shapes: wrap beginShape() and endShape(CLOSE) around a for loop that creates a curveVertex() at the coordinates of each point. A few features had to be estimated or otherwise fiddled with.
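That loop might look like the following sketch, assuming a p5.js setting where `face` is the array of tracked `{x, y}` vertices and `indices` lists which vertices outline one feature (both names are mine, not the tracker's):

```javascript
// Draw one facial feature as a closed curve through its tracked vertices,
// in the beginShape()/curveVertex()/endShape(CLOSE) style described above.
function drawFeature(face, indices) {
  beginShape();
  for (const i of indices) {
    curveVertex(face[i].x, face[i].y);
  }
  endShape(CLOSE);
}
```

Each feature (eye, lip, jawline) then just needs its own list of vertex indices.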

For example, Beyond Reality does not supply the top of the head, so I estimated it using the width of the face and drew an ellipse. I had to angle it to match the angle of the face with atan2(). At those same angles, I added a couple hair arcs for bangs and a couple triangles for pigtails.
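A sketch of that estimate, assuming `leftTemple` and `rightTemple` are the outermost face vertices (hypothetical names; the proportions are illustrative guesses, not the project's values):

```javascript
// Angle of the face, from the line between the two outermost vertices.
function faceAngle(leftTemple, rightTemple) {
  return Math.atan2(rightTemple.y - leftTemple.y, rightTemple.x - leftTemple.x);
}

// Estimate the top of the head from face width and draw it as a
// rotated ellipse (p5.js drawing calls).
function drawHead(leftTemple, rightTemple) {
  const w = Math.hypot(rightTemple.x - leftTemple.x, rightTemple.y - leftTemple.y);
  const cx = (leftTemple.x + rightTemple.x) / 2;
  const cy = (leftTemple.y + rightTemple.y) / 2;
  push();
  translate(cx, cy);
  rotate(faceAngle(leftTemple, rightTemple));
  ellipse(0, -w * 0.4, w, w * 0.9); // sits above the face line
  pop();
}
```

The bangs and pigtails can reuse the same `faceAngle()` so they stay attached as the head tilts.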

The nose vertices, as you can see in the debug view, make up two curves: the bridge and the underside of the nose. I could have made the nose simply those two strokes, but I thought it would be kind of cute to create a triangle from the top of the bridge to the two ends of the nostrils.

There were no pupils given either, so I had to guess where those would be. Luckily that’s pretty predictable if I assume the character will be looking straight ahead. The center of each circle was placed at the average of the coordinates of two diagonal vertices in each eye. The radius of the pupil is some fraction of the distance between those two points. It looked kind of boring without eyelashes, so I added a line anchored at each eye vertex and angled it away from the pupil using atan2() and some simple math.
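The pupil and eyelash math reduces to a midpoint, a distance, and an atan2, roughly as below (function names and the 0.25 fraction are mine, for illustration):

```javascript
// Pupil from two diagonal eye vertices: center at their average,
// radius a fraction of the distance between them.
function pupil(a, b, fraction = 0.25) {
  return {
    x: (a.x + b.x) / 2,
    y: (a.y + b.y) / 2,
    r: fraction * Math.hypot(b.x - a.x, b.y - a.y),
  };
}

// Each eyelash is a short line anchored at an eye vertex and angled
// away from the pupil center via atan2; this returns its far endpoint.
function lashEnd(vertex, pupilCenter, length = 8) {
  const theta = Math.atan2(vertex.y - pupilCenter.y, vertex.x - pupilCenter.x);
  return {
    x: vertex.x + length * Math.cos(theta),
    y: vertex.y + length * Math.sin(theta),
  };
}
```

In the draw loop, a `line(vertex.x, vertex.y, end.x, end.y)` per eye vertex then fans the lashes outward.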

The face still seemed a little bare at this point, so I added cheeks using calculations similar to those for the pupils. The center of each blush circle is the average of the coordinates of the side of the face and the bridge of the nose. The radius is some fraction of the distance between the two—this fraction mattered more than it did for the pupils, since as you turn your face, one cheek becomes a lot larger than the other.

The last thing I did was add a simple neck and body using a rectangle and a quadrilateral, with proportions relative to the head width. I had to use the width, rather than the height (the more intuitive option), as the height of the head changes greatly when you open your mouth and it would be silly to have the size of your neck and body change with it.
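Keying everything to head width might look like this (the ratios are illustrative guesses, not the project's values; `chin` is a hypothetical name for the bottom-of-face vertex):

```javascript
// Neck as a rectangle hanging from the chin, sized by head width
// rather than head height, so it doesn't grow when the mouth opens.
function neckRect(chin, headWidth) {
  const w = headWidth * 0.3;
  const h = headWidth * 0.25;
  return { x: chin.x - w / 2, y: chin.y, w, h };
}

// Body as a quadrilateral below the neck, narrower at the shoulders
// than at the bottom; returned in p5's quad(x1,y1,...,x4,y4) order.
function bodyQuad(chin, headWidth) {
  const neck = neckRect(chin, headWidth);
  const topY = neck.y + neck.h;
  const shoulder = headWidth * 0.8; // half-width at the shoulders
  const hip = headWidth * 1.1;      // half-width at the bottom
  const h = headWidth * 1.5;
  return [
    chin.x - shoulder, topY,
    chin.x + shoulder, topY,
    chin.x + hip, topY + h,
    chin.x - hip, topY + h,
  ];
}
```

A `rect()` and a `quad(...bodyQuad(chin, w))` in the draw loop then finish the character.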

I then focused on text generation using RiTa.js, which I fed a bunch of children’s stories from this page of short stories for children. When you open your mouth, the next word of a generated sentence (stripped of punctuation) appears. I rendered these sentences with Lingdong’s p5.hershey.js. Originally, I had the words simply appear, but it would have been a shame not to use Hershey fonts to at least some of their potential.
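The "stripped of punctuation" step is just a small helper on top of whatever sentence RiTa generates (this is my own helper, not RiTa's API):

```javascript
// Split a generated sentence into punctuation-free words, one to be
// revealed per mouth-open event. Apostrophes are kept so contractions
// like "don't" survive.
function toWords(sentence) {
  return sentence
    .replace(/[^\w\s']/g, "")
    .split(/\s+/)
    .filter((w) => w.length > 0);
}
```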

Animating the text was one of the trickiest parts of this project. I managed it by modifying Lingdong’s code to add a variant of putChar() called putAnimatedChar(), which takes a time parameter and draws each vertex only if the elapsed time is over a certain multiple of its index. With this, I could successfully draw words, but only by drawing all the words at the same time. To solve this, I added a boolean parameter to putText() that determines whether the text passed to it should be animated. I broke the text into an array of two strings: one of the previous words and one of the current word. The former passes false and the latter passes true. I use estimateTextWidth() to determine the leftward translation of the previous words.
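The time-gating idea behind putAnimatedChar() boils down to one line of arithmetic (the function and parameter names below are my paraphrase, not Lingdong's actual API):

```javascript
// How many of a glyph's stroke vertices should be visible after
// `elapsedMs`, with one new vertex revealed every `msPerVertex`.
function visibleVertexCount(totalVertices, elapsedMs, msPerVertex = 20) {
  return Math.min(totalVertices, Math.floor(elapsedMs / msPerVertex));
}

// Inside putAnimatedChar(), the stroke loop would then stop early, e.g.:
// for (let i = 0; i < visibleVertexCount(verts.length, millis() - t0); i++) { ... }
```

Once the count reaches the total, the glyph is fully drawn and stays static, which is why already-revealed words can be redrawn with the animation flag set to false.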

Following the advice of a surprising number of people, I made it my last endeavor to synthesize a voice for this character. I was reminded of the time Everest Pipkin told my EMS class about how Nintendo created Animalese in Animal Crossing, and also of the teacher’s voice in Charlie Brown, which Golan mentioned in office hours. I figured I could make something nice using Tone.js and formants for vowels I would obtain through RiTa’s getPhonemes(). I was somewhat successful in this, though the resulting sound is far less charming and far more sinister than I’d hoped. Below is a video that demonstrates this result.
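One way to wire that up in Tone.js is to run a buzzy oscillator through band-pass filters tuned to each vowel's first two formants. The table below uses classic average formant values keyed by ARPAbet phonemes (the symbols RiTa's phoneme output uses); the exact numbers, Q value, and note length are my approximations, not the project's:

```javascript
// Approximate first and second formant frequencies (Hz) for some
// English vowels, keyed by ARPAbet phoneme.
const FORMANTS = {
  iy: [270, 2290], ih: [390, 1990], eh: [530, 1840], ae: [660, 1720],
  aa: [730, 1090], ao: [570, 840],  uh: [440, 1020], uw: [300, 870],
};

// Play one vowel: a sawtooth source (roughly a glottal buzz) split
// through two band-pass filters at the vowel's formants.
function playVowel(phone, duration = 0.15) {
  const [f1, f2] = FORMANTS[phone];
  const osc = new Tone.Oscillator(110, "sawtooth");
  for (const f of [f1, f2]) {
    const band = new Tone.Filter(f, "bandpass");
    band.Q.value = 10; // narrow band emphasizes the formant
    osc.connect(band);
    band.toDestination();
  }
  osc.start();
  osc.stop(`+${duration}`);
}
```

Sweeping between formant pairs rather than jumping would likely sound less robotic, which may be part of why the raw version came out sinister.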

To be more in line with the affect I had hoped for, I implemented a slightly modified version of Josh Simmons’ Animalese.js project, in which he uses JavaScript audio and a .wav asset to create Animal Crossing–type speech from text input. I slowed the voice down a bit and had to change a few things to make it compatible with my current program, but the conversion was simple enough. Each time your mouth opens, I feed the Animalese object the current word. It has a lot of charm, is very cute, and makes me much happier. Below is a video that contains the generated sound and a gif of one generated story.

Link to code