sweetcorn-arSculpture

Proposal for the Removal and Burial of “Turner’s Made Fast”

A project I made for Turner’s is still installed in the basement of Doherty Hall. I mourn the loss of the relationship, but cannot seem to healthily move on from it while this monument to our disordered affection still stands.

Pictured above is a plan in Unity for the grave of the sculpture, which should be superimposed in AR over the small patch of grass in my backyard here in St. Louis, over 600 miles away from Pittsburgh. It will be put into action once I receive a reply to the following email:

To whom it may concern,

[Image of Sculpture]

I’ve made a grave mistake regarding the installation of this sculpture in Doherty Hall B-Level right outside of B302. It has been installed for almost a year and I wish for it to be promptly removed and shipped to the following address:

[My Address in St. Louis]

Please speak to me if funding is an issue.

Thank you,

Benford Krummenacher

I will bury the sculpture once it arrives, like a family pet. Maybe then I can finally move on to healthier relationships.

 

I was previously working with videos, but found them distracting and not tied to the concept of planning/blueprinting.

sweetcorn-LookingOutwards05

In this project, by speaking or singing, you produce objects that fly outward from your mouth. This is entertaining in the context of the pandemic as a visualization of possibly disease-carrying objects flying towards others from your mouth. Otherwise, I think it is a nice investigation into the power of speech, singing, and the voice. It’s kind of like the abracadabra thing—speaking things into existence. It also reminds me of what I was thinking about before with sweetmail: the idea that saying the words “I love you” is vastly different than typing them. Perhaps this is a hyperbole of that act of speaking.

sweetcorn-GPT2

Whenever I can’t think of anything to do, I ‘m always looking for something to do next week. My problem is that when I have ideas, I have a hard time staying awake long enough to sit down and do it. It takes me forever to do anything. I blame the sleep deprivation, and some bad medication. It’s the kind of day that you don’t want to do anything. The sky is cloudy and threatening rain. I think the chances of rain are 50 / 50.

Listen to your friends and listen to them closely. They will tell you what you need to hear. Trust your instinct and if the scene appears to be superficial or it’s just a cop-out, then it’s time to put the brakes on it. This is when you should act prudently. 2. Do not spend your own money on women. It’s a sin. – Paul 3. Do not flirt with all of the women at a club or party. Most men in the U.S. and Europe flirt with at least 10 women at a party or bar. It’s easy to fall into a number of traps here. 4. Do not offer women money, gifts or favors to get a date.

^(what is this result noooooo)

sweetcorn-ArtBreeder

I really enjoyed the resulting landscapes, but the process of “general” creation with genes was the most enjoyable part—seeing how each minor change affects the outcome and trying to piece together some logic behind it.

sweetcorn-LookingOutwards04

Grannma MagNet

This project by Mehmet Selim Akten creates morphs between two given audio samples. Below is a compilation of examples.

The transitions were really interesting to me. I could hear the “notes,” or whatever the sound elements were, slowly change qualities like length, timbre, and pitch, and finally resolve into the second sound. I can see a lot of potential here for creating music, as music I’ve made doesn’t really transition from one thing to another all that much. I wonder what morphs between two samples of music produced by the same person, with all their musical quirks and tendencies, would sound like. Still, as the artist said, it isn’t about creating “realistic” transitions; it’s about the novelty and the potential for modification.

 

sweetcorn-mask

Children’s Stories

You are your own virtual teacher as you generate your own children’s stories. Open your mouth to show each new word.

I determine if the user is opening their mouth by comparing the distance between their lips to the distance between the top of their head and the bottom of their head.
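A minimal sketch of that test, as a pure function (the vertex points would come from the Beyond Reality Face mesh; the 0.1 threshold is an assumed value, not necessarily the one used in the project):

```javascript
// Distance between two 2D face-mesh points.
function dist2d(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// The mouth counts as "open" when the gap between the lips is large
// relative to the overall head height, so the test is scale-invariant:
// it works whether the face is near or far from the camera.
function isMouthOpen(upperLip, lowerLip, headTop, headBottom, threshold = 0.1) {
  const lipGap = dist2d(upperLip, lowerLip);
  const headHeight = dist2d(headTop, headBottom);
  return lipGap / headHeight > threshold;
}
```

Dividing by head height rather than using a fixed pixel distance is what keeps the trigger consistent as the user moves toward or away from the camera.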

Above is a very early demonstration of this in my debug-mode, which also shows the indices of the vertices that I use to draw the face. I am using Beyond Reality Face for face tracking.

Below, I had started filling in features using the given vertices. Most features were simple to turn into shapes: wrap beginShape() and endShape(CLOSE) around a for loop that creates a curveVertex() at each point’s coordinates. A few features had to be estimated or otherwise fiddled with.

For example, Beyond Reality does not supply the top of the head, so I estimated it using the width of the face and drew an ellipse. I had to angle it to match the angle of the face with atan2(). At those same angles, I added a couple hair arcs for bangs and a couple triangles for pigtails.
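The angle-matching step can be sketched as a small helper (the choice of which two outline vertices to use is hypothetical; any pair of opposite points along the jawline would work):

```javascript
// Estimate the head's tilt from two opposite points on the face outline,
// e.g. the leftmost and rightmost jaw vertices. The resulting angle can be
// used to rotate the estimated head ellipse, bangs, and pigtails so they
// stay attached as the face tilts.
function faceAngle(leftSide, rightSide) {
  return Math.atan2(rightSide.y - leftSide.y, rightSide.x - leftSide.x);
}
```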

The nose vertices, as you can see in the debug-view, make up two curves: the bridge and the underside of the nose. I could have made the nose simply those two strokes, but I thought it would be kind of cute to create a triangle from the top of the bridge to the two ends of the nostrils.

There were no pupils given either, so I had to guess where those would be. Luckily that’s pretty predictable if I assume the character will be looking straight ahead. The center of each circle was placed at the average of the coordinates of two diagonal vertices in each eye. The radius of the pupil is some fraction of the distance between those two points. It looked kind of boring without eyelashes, so I added a line anchored at each eye vertex and angled it away from the pupil using atan2() and some simple math.
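The pupil placement described above reduces to a few lines; here is a sketch (the 0.25 fraction is an assumption, not the project's actual value):

```javascript
// Place a pupil at the midpoint of two diagonal eye-corner vertices,
// with a radius that is some fraction of the distance between them.
function pupil(cornerA, cornerB, fraction = 0.25) {
  const d = Math.hypot(cornerB.x - cornerA.x, cornerB.y - cornerA.y);
  return {
    x: (cornerA.x + cornerB.x) / 2,
    y: (cornerA.y + cornerB.y) / 2,
    r: d * fraction,
  };
}
```

Because the radius is derived from the eye's own corner-to-corner distance, the pupil scales automatically with the face. The blush circles described below use the same midpoint-and-fraction idea with different anchor vertices.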

The face still seemed a little bare at this point, so I added cheeks using calculations similar to those of the pupils. The center of the blush circle is the average of the coordinates of the side of the face and the bridge of the nose. The radius is some fraction of the distance between the two—this one mattered more than with the pupils, since as you turn your face, the size of one side becomes a lot larger than that of the other side.

The last thing I did was add a simple neck and body using a rectangle and a quadrilateral, with proportions relative to the head width. I had to use the width, rather than the height (the more intuitive option), as the height of the head changes greatly when you open your mouth and it would be silly to have the size of your neck and body change with it.

I then focused on text generation using RiTa.js, which I fed a bunch of children’s stories from this page of short stories for children. When you open your mouth, the next word of a generated sentence (stripped of punctuation) appears. I rendered these sentences with Lingdong’s p5.hershey.js. Originally, I had the words simply appear, but it would have been a shame not to use Hershey fonts to at least some of their potential.
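The open-mouth reveal mechanic can be sketched as a closure over a generated sentence (the RiTa generation itself is not shown; the punctuation-stripping regex is an assumption about how the project cleans its text):

```javascript
// Given a generated sentence, return a function to call on each
// mouth-open event. Each call reveals the next word; it returns null
// once the sentence is exhausted.
function makeStoryTeller(sentence) {
  const words = sentence
    .replace(/[^\w\s']/g, '') // strip punctuation, keep apostrophes
    .split(/\s+/);
  let index = 0;
  return function onMouthOpen() {
    if (index >= words.length) return null; // sentence finished
    return words[index++];
  };
}
```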

Animating the text was one of the trickiest parts of this project. I managed it by modifying Lingdong’s code to add a variant of putChar() called putAnimatedChar(), which takes a time parameter and draws each vertex only once the elapsed time exceeds a certain multiple of that vertex’s index. With this, I could successfully draw words, but only by animating all the words at the same time. To solve this, I added a boolean parameter to putText() which determines whether the text passed to it should be animated. I broke the text into an array of two strings: the previous words and the current word. The former passes false and the latter passes true. I use estimateTextWidth() to determine the leftward translation of the previous words.
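The per-vertex gating inside putAnimatedChar() boils down to one comparison; a sketch of the idea (msPerVertex is an assumed pacing constant, not the project's):

```javascript
// How many of a glyph's stroke vertices should be visible after
// elapsedMs milliseconds, if one new vertex appears every msPerVertex ms.
// Drawing only the first visibleVertexCount(...) vertices each frame
// produces the "stroke being written" effect.
function visibleVertexCount(elapsedMs, totalVertices, msPerVertex = 20) {
  return Math.min(totalVertices, Math.floor(elapsedMs / msPerVertex));
}
```

Equivalently, a vertex at index i is drawn when elapsedMs exceeds i * msPerVertex, which matches the "only if time elapsed is over a certain multiple of their index" rule above.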

Following the advice of a surprising number of people, I made it my last endeavor to synthesize a voice for this character. I was reminded of the time Everest Pipkin told my EMS class about how Nintendo created Animalese in Animal Crossing, and also of the teacher’s voice in Charlie Brown, which Golan mentioned in office hours. I figured I could make something nice using Tone.js and formants for the vowels I would obtain through RiTa’s getPhonemes(). I was somewhat successful in this, though the resulting sound is far less charming and far more sinister than I’d hoped. Below is a video that demonstrates this result.
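The formant approach amounts to a lookup from phoneme to target frequencies; a sketch of such a table (the values are rough textbook averages for adult speech, not the ones used in the project, and the ARPAbet keys are an assumption about getPhonemes() output):

```javascript
// Approximate first and second formant frequencies (Hz) for a few vowels.
// In a Tone.js setup, each formant could be the center frequency of a
// bandpass filter applied to a buzzy source oscillator.
const FORMANTS = {
  aa: { f1: 730, f2: 1090 }, // as in "father"
  iy: { f1: 270, f2: 2290 }, // as in "see"
  uw: { f1: 300, f2: 870 },  // as in "boot"
};

// Look up formant targets for a phoneme; null for consonants/unknowns.
function formantsFor(phoneme) {
  return FORMANTS[phoneme] || null;
}
```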

To be more in line with the affect I had hoped for, I implemented a slightly modified version of Josh Simmons’ Animalese.js project, in which he uses JavaScript audio and a .wav asset to create Animal Crossing-type speech from text input. I slowed the voice down a bit and had to change a few things to make it compatible with my program, but the conversion was simple enough. Each time your mouth opens, I feed the Animalese object the current word. It has a lot of charm, is very cute, and makes me much happier. Below is a video which contains the generated sound and a gif of one generated story.
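The core of the Animalese trick is mapping each letter of the word to a slice of one recorded .wav of letter sounds; a sketch of that mapping (sliceMs and the a-to-z layout of the asset are assumptions, not details of Simmons’ actual file):

```javascript
// For each letter in the word, compute the start time (ms) of its slice
// in a hypothetical .wav laid out as consecutive a-z letter sounds.
// Playing these short slices back-to-back yields the babbling speech.
function letterSliceOffsets(word, sliceMs = 150) {
  return word.toLowerCase().split('')
    .filter(c => c >= 'a' && c <= 'z')       // skip punctuation/digits
    .map(c => (c.charCodeAt(0) - 97) * sliceMs); // 'a' -> 0, 'b' -> 150, ...
}
```

Slowing the voice down, as described above, would correspond to playing each slice at a lower playback rate.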

Link to code