
Project 2 – You’re a Twinkle Star!

For this project, Alec and I decided to merge our efforts in audiovisualization and pitch correction. We realized that the pitch correction system Alec created could be used to pinpoint the user’s pitch and use that value to drive variables within the visualization, so we decided to make a Rock Band-style game where the player sings along to none other than Twinkle Twinkle Little Star.

My contribution to the project was primarily on the visual end. I took the variables that Alec’s patcher gave me and represented them using jit.gl.sketch and jit.gl.text within a js object. In addition to the point cloud that expands whenever the player sings, I modified the particle system to change the hue of the particles to match the note being sung. At the bottom of the screen, I added a player cursor – whose y-position is determined by the sung note, trailed by a fixed-length tail showing the most recently sung notes – and a scrolling bar of upcoming notes in the song. I then added a score counter and a method of switching state between the gameplay and game-over screens.
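To give a sense of the note-to-visuals mapping, here’s a simplified sketch of what that step looks like inside a [js] object. The note range and outlet layout are assumptions for illustration, not the project’s actual class files:

```javascript
// Simplified sketch of the note-to-visuals mapping; NOTE_LO/NOTE_HI and
// the outlet layout are assumptions, not the project's actual values.
var NOTE_LO = 48; // lowest displayed MIDI note (assumed)
var NOTE_HI = 72; // highest displayed MIDI note (assumed)

function note(midinote) {
    // cursor y-position: map the note range onto GL coordinates (-1..1)
    var t = (midinote - NOTE_LO) / (NOTE_HI - NOTE_LO);
    t = Math.max(0, Math.min(1, t));
    outlet(0, "cursor_y", -1 + 2 * t);

    // particle hue: wrap the pitch class (0..11) around the color wheel
    outlet(1, "hue", (midinote % 12) / 12);
}
```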

This Drive folder has my contributions, including all of the javascript class files, and this Drive folder holds all the files for our project as a whole.

Here’s a gist for the visualization patcher, although it won’t be of much use without the js files:


Project 1: KeyboardJohnny

For a while, I’ve wanted to get some more experience in Javascript. Whether it’s because I see money in web development or whether it’s because I am drawn to masochistic scoping and global variable practices… I’m not quite sure. Regardless, I saw this project as a good opportunity to flex my fledgling js muscles and make some dots fly around.

The product is an audio visualizer composed of two visual systems: a point cloud (a bunch of dots floating around according to a noise function generator, connecting to one another with line segments whenever they are within a certain distance) and a particle generator, called a particle jet because I want it to be. These systems are, with the exception of the basis function generator and the matrix used to calculate the point-cloud positions, entirely contained within Javascript classes. Heavy thanks to Amazing Max Stuff for teaching me how to make these systems.
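The core of the point cloud is just a distance check between every pair of dots. As a rough sketch (not the repository’s actual classes – the names and the drawing calls’ parameters here are illustrative):

```javascript
// Rough sketch of the point-cloud idea, assuming positions arrive from
// the jit.bfg/jit.matrix chain described below; class and method names
// are illustrative, not the repository's actual ones.
function PointCloud(sketch, threshold) {
    this.sketch = sketch;        // a JitterObject("jit.gl.sketch")
    this.threshold = threshold;  // max distance for drawing a segment
    this.points = [];            // [[x, y, z], ...]
}

PointCloud.prototype.draw = function () {
    this.sketch.reset();
    for (var i = 0; i < this.points.length; i++) {
        var a = this.points[i];
        this.sketch.moveto(a[0], a[1], a[2]);
        this.sketch.circle(0.01); // draw the dot itself
        // connect nearby points with line segments
        for (var j = i + 1; j < this.points.length; j++) {
            var b = this.points[j];
            var dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
            if (Math.sqrt(dx * dx + dy * dy + dz * dz) < this.threshold) {
                this.sketch.linesegment(a[0], a[1], a[2], b[0], b[1], b[2]);
            }
        }
    }
};
```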

Informing these two systems is an amalgamation of concepts we’ve covered in class. Starting with an audio signal, I used cascade~ objects to filter it into two frequency bands, one roughly representing the bass of the song and the other supposedly representing the vocals but, in reality, just vaguely representing the treble portion of the song. Once separated, I fed the two signals into FFTs, then packed the bins into a matrix and used the average values to calculate the parameters for the point cloud (radius and line-drawing threshold) and the particle jet (rate of movement/emission and color). The point cloud grows whenever there’s a bass kick and the particle jet spins in circles around it – it’s all quite fun. Here’s the gist!
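Separate from the gist, here’s roughly what the bin-averaging step looks like in [js]. The matrix layout and scaling constants are placeholders, not the hardcoded values I confess to below:

```javascript
// Sketch of the bin-averaging step; the matrix layout and scaling
// constants are placeholders, not the patch's actual values.
function jit_matrix(name) {
    var bins = new JitterMatrix(name); // FFT magnitudes packed upstream
    var n = (bins.dim instanceof Array) ? bins.dim[0] : bins.dim;
    var sum = 0;
    for (var i = 0; i < n; i++) {
        sum += bins.getcell(i)[0];
    }
    var avg = sum / n;
    // bass average -> point-cloud radius; the treble band would feed
    // the particle jet's emission rate and color the same way
    outlet(0, "radius", 0.5 + 2.0 * avg);
}
```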

And that’s it! Because there are a bunch of classes and scripts that go along with my patch, I’ve uploaded the whole thing to a github repository here – but beware!! There are a bunch of values that are woefully hardcoded to make the visual match Blood Brother by Zed’s Dead, DISKORD, and Reija Lee, and no shiny GUI to change them as of yet. But I did include the audio file in the repository (I hope that’s not illegal), so there’s that.

And finally, here are my dots dancing to the aforementioned song! Please excuse the audio quality – it was early in the morning.

Assignment 4: Luma Convolution

I was really interested in the idea of mapping video to meshes, so I iterated on the patch we built in class to feed webcam video into the color array. I then created a sawtooth wave, used the mean luma value of the webcam video to control a bandpass filter on it, and used pfft~ to convolve the filtered wave with the input signal (either from the mic or an audio file). The result is an interactive video of… yourself!
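The luma-to-cutoff mapping itself is simple. As a sketch, assuming the mean luma arrives as a 0–1 float (e.g. from jit.3m) and with a guessed frequency range rather than the patch’s actual one:

```javascript
// Sketch of the luma -> bandpass-cutoff mapping; the 0..1 input and
// the frequency range are assumptions, not the patch's actual values.
function luma(mean) {
    var lo = 100, hi = 4000; // Hz, placeholder range
    mean = Math.max(0, Math.min(1, mean));
    // exponential scaling suits filter cutoffs better than linear
    outlet(0, "cutoff", lo * Math.pow(hi / lo, mean));
}
```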

Here’s a video demonstration of the patch. I used a lamp to light the scene up, and covered it with a book to alter the cutoff of the filter.

And here’s the gist:

Assignment 3: On Style

Bruce Lee said a lot of cool stuff. My favorite quotes of his are not about fighting, but about individuality, inclusivity, and self-improvement. I used a quote of his about developing yourself in a way that is uniquely your own, free from formal doctrine.

The four impulse responses I chose are as follows (a toy convolution sketch follows the list):

  • Balloon pop in a music practice room, which gave a completely dry output.
  • Balloon pop in a Gates stairwell, which gave a wetter output.
  • A single bell chime, which started to obscure the input signal while giving it the bell’s tone throughout.
  • A clip of wind chimes – I actually used two clips of different lengths, one for each channel, which made the sound travel a bit in stereo. The input signal was completely lost in the process, but the result sounds pretty relaxing in a lost-in-a-mystical-world sorta way.
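For the curious, the process underneath all four is convolution – every input sample gets smeared by the whole impulse response. A toy, non-realtime illustration in plain js (not the actual patch, which used Max’s FFT-based tools):

```javascript
// Toy direct convolution, for illustration only; real patches use
// FFT-based convolution, which is far faster.
function convolve(input, ir) {
    var out = [];
    for (var i = 0; i < input.length + ir.length - 1; i++) out[i] = 0;
    for (var n = 0; n < input.length; n++) {
        for (var k = 0; k < ir.length; k++) {
            out[n + k] += input[n] * ir[k];
        }
    }
    return out;
}
// A dry IR is nearly a single impulse, so convolve(signal, [1]) returns
// the signal untouched; a long, dense IR (the stairwell or the chimes)
// smears and eventually buries it.
```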

Assignment 2: Negative Thinking

What I wanted to do with this assignment was to use delay to isolate movement within a video, and show only the subject on an empty background. I got something pretty cool – but only by accident.

I started with the first video, which is from Generate by Rasmus Ott (on YouTube). By delaying the initial matrix and then subtracting the original matrix from the delay, I got the second iteration. Pretty cool, and I sort of isolated the subject, but it wasn’t what I wanted. Then, when I unlinked the delayed matrix from the jit.expr object, the delayed frame froze and left only the (anti-?)silhouette of the original behind. I really like the aesthetic of a moving subject revealing the background, but I couldn’t figure out how to replicate this in a non-janky way. Anyway, here’s the gist:
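That gist aside, here’s a rough sketch of the delay-and-subtract step expressed in a [js] object. The matrix sizes and object names are placeholders, and the actual patch used jit.expr rather than jit.op:

```javascript
// Rough sketch of the delay-and-subtract step; sizes and names are
// placeholders (the actual patch used jit.expr).
var prev = new JitterMatrix(4, "char", 320, 240); // the "delayed" frame
var diff = new JitterMatrix(4, "char", 320, 240);
var sub = new JitterObject("jit.op");
sub.op = "-";

function jit_matrix(name) {
    var cur = new JitterMatrix(name);
    // subtract the original (current) frame from the delayed one
    sub.matrixcalc([prev, cur], diff);
    prev.frommatrix(cur); // store this frame as the next delay
    outlet(0, "jit_matrix", diff.name);
}
```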

Assignment 1: Stretching with Paul

Audacity is a good program! Among many other things, it provides fun effects like Paulstretch! I’m not quite sure who Paul is, but his stretching algorithm sure does a good job of making things sound sloooowwwwmooooooooooooooooo

Or, you can set the “Stretch Factor” to 1 and forgo the stretching of the clip, instead simply making it sound more… Paul-y. So that’s what I did. With a “Stretch Factor” of 1 and a “Time Resolution” of .25 seconds, I fed the same clip through the Paulstretch effect 30 times until it was a) pretty quiet and b) quite eerie. If I had to guess, I’d say Paul is a cute, timid ghost – like Casper, but an audio engineer. Thanks Paul!