For this project, Alec and I decided to merge our efforts of audiovisualization and pitch correction. We realized that the pitch correction system Alec created could be used to pinpoint a user’s pitch and use that value to manipulate variables within the visualization, so we decided to make a Rock Band-style game where the player sings along to none other than Twinkle Twinkle Little Star.
My contribution to the project was primarily on the visual end. I took in the variables that Alec’s patcher gave me and represented them using jit.gl.sketch and jit.gl.text within a js object. In addition to the point cloud that expands whenever the player sings, I modified the particle system to change the hue of the particles to match the note being sung. At the bottom of the screen, I added a player cursor – its y-position determined by the note being sung, with a fixed-length tail showing the previously sung notes – and a scrolling bar of the song’s upcoming notes. I then added a score counter and a way of switching state between the gameplay and game-over screens.
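To give a feel for the note-to-visual mapping, here’s a tiny Python sketch of the idea (the function names, MIDI range, and screen height are my own illustration, not values from the actual patch): the sung pitch drives both the particle hue and the cursor’s y-position.

```python
def note_to_hue(midi_note):
    """Map a MIDI note to a hue in [0, 1) by pitch class, so octaves share a color."""
    return (midi_note % 12) / 12.0

def note_to_y(midi_note, lo=48, hi=72, height=480):
    """Map a MIDI note in [lo, hi] to a screen y-coordinate (higher note = higher on screen)."""
    t = (midi_note - lo) / (hi - lo)
    t = min(max(t, 0.0), 1.0)   # clamp so out-of-range notes stay on screen
    return (1.0 - t) * height

print(note_to_hue(60))  # middle C -> 0.0
print(note_to_y(72))    # top of the range -> 0.0 (top of screen)
```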
Informing these two systems is an amalgamation of concepts we’ve covered in class. Starting with an audio signal, I used cascade~ objects to filter it into two frequency bands: one roughly representing the bass of the song, and the other meant to represent the vocals but, in reality, just vaguely capturing the treble. Once separated, I fed the two signals into FFTs, packed the bins into a matrix, and used the average values to calculate the parameters for the point cloud (radius and line-drawing threshold) and the particle jet (rate of movement/emission and color). The point cloud grows whenever there’s a bass kick and the particle jet spins in circles around it – it’s all quite fun. Here’s the gist!
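The band-energy idea can be sketched in a few lines of NumPy. Note that this is a simplification of the patch: instead of biquad filtering with cascade~ and then FFTing, it just averages FFT magnitudes within each band, and the band edges here are assumptions of mine.

```python
import numpy as np

def band_energy(signal, sr, lo_hz, hi_hz):
    """Average FFT magnitude within [lo_hz, hi_hz) -- a crude 'energy' control value."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    mask = (freqs >= lo_hz) & (freqs < hi_hz)
    return float(spectrum[mask].mean())

# A toy signal: a loud 80 Hz "bass" plus a quiet 4 kHz "treble"
sr = 44100
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 80 * t) + 0.2 * np.sin(2 * np.pi * 4000 * t)

bass = band_energy(sig, sr, 20, 200)       # could drive point-cloud radius
treble = band_energy(sig, sr, 2000, 8000)  # could drive particle rate/color
```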
And that’s it! Because there are a bunch of classes and scripts that go along with my patch, I’ve uploaded the whole thing to a GitHub repository here – but beware!! There are a bunch of values that are woefully hardcoded to make the visual match Blood Brother by Zed’s Dead, DISKORD, and Reija Lee, and no shiny GUI to change them as of yet. But I did include the audio file in the repository (I hope that’s not illegal), so there’s that.
And finally, here are my dots dancing to the aforementioned song! Please excuse the audio quality; it was early in the morning.
I was really intrigued by the idea of mapping video to meshes, so I iterated on the patch we built in class to feed webcam video into the color array. I then created a sawtooth wave, used the mean luma value of the webcam video to control a bandpass filter on it, and used pfft~ to convolve the wave with the input signal (either from the mic or an audio file). The result is an interactive video of… yourself!
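Two of the moving parts here are easy to sketch outside of Max (the mapping range and function names below are my own assumptions, not the patch’s): mean luma scales a filter frequency, and "convolving with pfft~" amounts to multiplying spectra in the FFT domain.

```python
import numpy as np

def luma_to_cutoff(mean_luma, lo=200.0, hi=4000.0):
    """Linearly map mean luma in [0, 1] to a filter frequency in Hz (range is illustrative)."""
    return lo + mean_luma * (hi - lo)

def spectral_convolve(a, b):
    """Circular convolution of two equal-length signals via FFT-domain multiplication."""
    n = len(a)
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n)

print(luma_to_cutoff(0.5))  # 2100.0 -- a mid-gray frame sits mid-range
```

Convolving with a unit impulse returns the input unchanged, which is a handy sanity check for the spectral route.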
Here’s a video demonstration of the patch. I used a lamp to light the scene up, and covered it with a book to alter the cutoff of the filter.
Bruce Lee’s said a lot of cool stuff. My favorite quotes of his are not about fighting, but about individuality, inclusivity, and self-improvement. I used a quote of his about developing yourself in a way that is uniquely your own, free from formal doctrine.
The four impulse responses I chose are as follows:
Balloon pop in a music practice room, which gave a completely dry output.
Balloon pop in a Gates stairwell, which gave a wetter output.
A single bell chime, which started to obscure the input signal while giving it the bell’s tone throughout.
A clip of wind chimes – I actually used two different length clips for each of the channels, which made the sound travel a bit in stereo. The input signal was completely lost in the process, but the result sounds pretty relaxing in a lost-in-a-mystical-world sorta way.
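The wet/dry behavior above falls straight out of convolution: the dry signal is convolved with the impulse response, so a longer, denser IR smears the input more. Here’s a minimal NumPy illustration of my own (a toy decaying IR, not one of the recordings above):

```python
import numpy as np

def convolve_ir(dry, ir):
    """Linear convolution of a dry signal with an impulse response, via zero-padded FFTs."""
    n = len(dry) + len(ir) - 1          # full length of the linear convolution
    return np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)

dry = np.array([1.0, 0.0, 0.0, 0.0])    # a single click
ir = np.array([1.0, 0.5, 0.25])         # a toy decaying "room"
print(convolve_ir(dry, ir))             # the click picks up the IR's decay tail
```

Using a different IR per channel, as in the wind-chime example, is just this operation applied once per channel with different `ir` arrays.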
What I wanted to do with this assignment was to use delay to isolate movement within a video, and show only the subject on an empty background. I got something pretty cool – but only by accident.
I started with the first video, which is from Generate by Rasmus Ott (on YouTube). By delaying the initial matrix and then subtracting the original matrix from the delay, I got the second iteration. Pretty cool, and I sort of isolated the subject, but it wasn’t what I wanted. Then, when I unlinked the delayed matrix from the jit.expr object, the delayed frame froze and left only the (anti-?)silhouette of the original behind. I really like the aesthetic of a moving subject revealing the background, but I couldn’t figure out how to replicate the effect in a non-janky way. Anyway, here’s the gist:
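The delay-and-subtract step is classic frame differencing, which is easy to sketch in NumPy (the patch itself used jit.matrix delays and jit.expr; the threshold value here is an assumption of mine): subtracting a delayed frame from the current one leaves only the pixels that changed, i.e. the movement.

```python
import numpy as np

def motion_mask(current, delayed, threshold=10):
    """Return a boolean mask of pixels whose value changed by more than `threshold`."""
    diff = np.abs(current.astype(int) - delayed.astype(int))  # int to avoid uint8 wraparound
    return diff > threshold

prev = np.zeros((2, 2), dtype=np.uint8)
curr = np.array([[0, 200], [0, 0]], dtype=np.uint8)  # one "moving" pixel
print(motion_mask(curr, prev))  # True only where the frame changed
```

The frozen-frame accident corresponds to `delayed` never updating, so every pixel that has moved since that one frame stays lit.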
Audacity is a good program! Among many other things, it provides fun effects like Paulstretch! I’m not quite sure who Paul is, but his stretching algorithm sure does a good job of making things sound sloooowwwwmooooooooooooooooo
Or, you can set the “Stretch Factor” to 1 and forgo the stretching of the clip, instead simply making it sound more… Paul-y. So that’s what I did. With a “Stretch Factor” of 1 and a “Time Resolution” of .25 seconds, I fed the same clip into the Paulstretch effect 30 times until it was a) pretty quiet and b) quite eerie. If I had to guess, I’d say Paul is a cute, timid ghost – like Casper, but an audio engineer. Thanks Paul!
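The eeriness at stretch factor 1 comes from the phase-randomization step at the heart of Paulstretch. Here’s a toy sketch of that one step (my simplification, not Audacity’s implementation): keep each window’s FFT magnitudes but replace the phases with random ones, which smears transients while preserving the overall spectrum – so repeated passes keep scrambling the sound without changing its tone much.

```python
import numpy as np

def randomize_phases(window, rng):
    """Resynthesize a window with its original FFT magnitudes but uniformly random phases."""
    mags = np.abs(np.fft.rfft(window))
    phases = rng.uniform(0, 2 * np.pi, len(mags))
    return np.fft.irfft(mags * np.exp(1j * phases), len(window))

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 5 * np.arange(256) / 256)  # a clean sine
y = randomize_phases(x, rng)
# y sounds like the same spectrum, but its waveform shape is scrambled
```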