My contribution to the project was primarily on the visual end. I took the variables that Alec's patcher gave me and represented them using jit.gl.sketch and jit.gl.text within a js object. In addition to the point cloud that expands whenever the player sings, I modified the particle system to change the hue of the particles to match the note being sung. At the bottom of the screen, I added a player cursor, whose y-position is determined by the note sung by the player and which trails a fixed-length tail showing the past sung notes, along with a scrolling bar of upcoming notes in the song. I then added a score counter and a method of switching state between the gameplay and game-over screens.
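The real logic lives in the class files linked below, but as a rough illustration of the cursor idea, here is a minimal js sketch of how a sung note could drive the cursor's y-position and the particle hue. The note range, the render context name "game", the tail spacing, and the hue ramp are all assumptions for the example, not values from the project.

// cursor_sketch.js -- minimal sketch of the note-to-visual mapping, not the
// actual class file from the project. Assumes the detected pitch arrives as
// a MIDI note number in an assumed range of 48-72.
inlets = 1;
outlets = 0;

var NOTE_MIN = 48;           // assumed lowest singable note
var NOTE_MAX = 72;           // assumed highest singable note
var TAIL_LENGTH = 60;        // fixed-length tail of past sung notes
var tail = [];               // recent cursor y-positions, newest last

// hypothetical jit.gl.sketch bound to a render context named "game"
var sketch = new JitterObject("jit.gl.sketch", "game");

// incoming pitch from the analysis patcher
function note(midiPitch) {
    // normalize pitch into 0..1, then into GL y coordinates (-1..1)
    var norm = (midiPitch - NOTE_MIN) / (NOTE_MAX - NOTE_MIN);
    norm = Math.max(0, Math.min(1, norm));
    var y = norm * 2 - 1;

    tail.push(y);
    if (tail.length > TAIL_LENGTH) tail.shift();

    draw(norm);
}

function draw(norm) {
    sketch.reset();

    // hue follows the sung note; quick red -> green -> blue ramp
    var r = Math.max(0, 1 - 2 * norm);
    var g = 1 - Math.abs(2 * norm - 1);
    var b = Math.max(0, 2 * norm - 1);
    sketch.glcolor(r, g, b, 1);

    // cursor: a small circle at a fixed x, y driven by the latest note
    sketch.moveto(-0.8, tail[tail.length - 1], 0);
    sketch.circle(0.03);

    // tail: line segments through the past sung notes, trailing to the left
    for (var i = 1; i < tail.length; i++) {
        var x0 = -0.8 - (tail.length - i) * 0.01;       // older point
        var x1 = -0.8 - (tail.length - 1 - i) * 0.01;   // newer point
        sketch.linesegment(x0, tail[i - 1], 0, x1, tail[i], 0);
    }
}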
This Drive folder has my contributions, including all of the JavaScript class files,
and this Drive folder holds all of the files for our project as a whole.
Here’s a gist for the visualization patcher, although it won’t be of much use without the js files:
The product is an audio visualizer made up of two visual systems: a point cloud (a bunch of dots floating around according to a noise function, connecting to one another with line segments whenever they come within a certain distance of each other) and a particle generator, called a particle jet because I want it to be. These systems are, with the exception of the basis function generator and the matrix used to calculate the point cloud positions, entirely contained within JavaScript classes. Heavy thanks to Amazing Max Stuff for teaching me how to make these systems.
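Here is a minimal sketch of the point cloud's line-drawing pass, not the project's actual class: it loops over every pair of points and draws a segment with jit.gl.sketch whenever two points are close enough. The context name "viz", the threshold value, and the idea that the positions arrive as a flat list are assumptions for illustration.

// pointcloud_sketch.js -- minimal sketch of the line-drawing pass described
// above. Assumes the point positions have already been computed upstream
// (e.g. from the basis function generator) and arrive as a flat x y z list.
inlets = 1;
outlets = 0;

var sketch = new JitterObject("jit.gl.sketch", "viz");  // hypothetical context name
var points = [];        // filled by the positions() message below
var threshold = 0.3;    // line-drawing distance threshold (driven by the audio)

// assumed message from the patcher: new threshold whenever the bass hits
function setthreshold(t) {
    threshold = t;
}

// assumed message: flat list of x y z triples
function positions() {
    var a = arrayfromargs(arguments);
    points = [];
    for (var k = 0; k + 2 < a.length; k += 3) {
        points.push([a[k], a[k + 1], a[k + 2]]);
    }
}

function draw() {
    sketch.reset();
    sketch.glcolor(1, 1, 1, 0.6);

    // connect every pair of points closer than the threshold
    for (var i = 0; i < points.length; i++) {
        for (var j = i + 1; j < points.length; j++) {
            var dx = points[i][0] - points[j][0];
            var dy = points[i][1] - points[j][1];
            var dz = points[i][2] - points[j][2];
            if (Math.sqrt(dx * dx + dy * dy + dz * dz) < threshold) {
                sketch.linesegment(points[i][0], points[i][1], points[i][2],
                                   points[j][0], points[j][1], points[j][2]);
            }
        }
    }
}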
Informing these two systems is an amalgamation of concepts we've covered in class. Starting with an audio signal, I used cascade~ objects to filter it into two frequency bands: one roughly representing the bass of the song, and the other supposedly representing the vocals but in reality just vaguely representing the treble portion. Once separated, I fed the two signals into FFTs, packed the bins into a matrix, and used the average values to calculate the parameters for the point cloud (radius and line-drawing threshold) and the particle jet (rate of movement/emission and color). The point cloud grows whenever there's a bass kick and the particle jet spins in circles around it; it's all quite fun. Here's the gist!
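As a rough sketch of that averaging-to-parameters step (not the patch logic itself), something like the following js could take the two averaged band levels and scale them into the point cloud and particle jet parameters. The ranges and the hue mapping here are placeholders; the real values were hand-tuned to the song, as noted below.

// params_sketch.js -- rough sketch of the bin-averaging-to-parameters step.
// Assumes the two averaged band levels arrive as a list (bass, treble),
// already normalized to roughly 0..1 upstream.
inlets = 1;
outlets = 1;   // outputs [radius, threshold, emitRate, hue] for the visual classes

// assumed scaling ranges; in the real patch these were hand-tuned to the song
var RADIUS_RANGE = [0.5, 2.0];     // point-cloud radius
var THRESH_RANGE = [0.15, 0.45];   // line-drawing threshold
var RATE_RANGE = [5, 60];          // particle-jet emission rate

function list(bass, treble) {
    var radius    = scale(bass,   RADIUS_RANGE);
    var threshold = scale(bass,   THRESH_RANGE);
    var emitRate  = scale(treble, RATE_RANGE);
    var hue       = Math.max(0, Math.min(1, treble));  // crude hue from the treble band

    outlet(0, [radius, threshold, emitRate, hue]);
}

function scale(v, range) {
    v = Math.max(0, Math.min(1, v));
    return range[0] + v * (range[1] - range[0]);
}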
And that's it! Because there are a bunch of classes and scripts that go along with my patch, I've uploaded the whole thing to a GitHub repository here. But beware! There are a bunch of values that are woefully hardcoded to make the visual match Blood Brother by Zed's Dead, DISKORD, and Reija Lee, and no shiny GUI to change them as of yet. But I did include the audio file in the repository (I hope that's not illegal), so there's that.
And finally, here are my dots dancing to the aforementioned song! Please excuse the audio quality; it's early in the morning.
Here's a video demonstration of the patch. I used a lamp to light the scene and covered it with a book to alter the cutoff of the filter.
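The patch itself isn't reproduced here, but one plausible reading of that brightness-to-cutoff mapping, assuming the light level comes in as a camera matrix, is sketched below. The jit.3m analysis, the cutoff range, and the char-value scaling are my assumptions, not details taken from the patch.

// brightness_to_cutoff.js -- a guess at the general idea, not the actual
// patch: read the mean brightness of an incoming camera matrix and scale it
// to a filter cutoff frequency, which would then drive something like lores~.
inlets = 1;
outlets = 1;

var CUTOFF_MIN = 200;     // assumed cutoff range in Hz
var CUTOFF_MAX = 8000;

var analyzer = new JitterObject("jit.3m");   // reports min/mean/max of a matrix

function jit_matrix(name) {
    var frame = new JitterMatrix(name);
    analyzer.matrixcalc(frame, frame);

    // mean is per-plane for a 4-plane char matrix; average the RGB planes,
    // assuming jit.3m reports char values in 0-255
    var m = analyzer.mean;
    var brightness = (m[1] + m[2] + m[3]) / (3 * 255);

    // brighter scene (lamp uncovered) -> higher cutoff
    outlet(0, CUTOFF_MIN + brightness * (CUTOFF_MAX - CUTOFF_MIN));
}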
And here’s the gist:
The four impulse responses I chose are as follows:
I started with the first video, which is from Generate by Rasmus Ott (on YouTube). By delaying the initial matrix and then subtracting the original matrix from the delayed one, I got the second iteration. Pretty cool, and it sort of isolated the subject, but it wasn't what I wanted. Then, when I unlinked the delayed matrix from the jit.expr object, the delayed frame froze and left only the (anti-?)silhouette of the original behind. I really like the aesthetic of a moving subject revealing the background, but I couldn't figure out how to replicate it in a non-janky way. Anyway, here's the gist:
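For reference, a simplified one-frame version of that delay-and-subtract step could look roughly like the following js (the original patch did this with jit.expr in the patcher rather than in a js object); the absdiff operator and the single-frame delay are assumptions for the sketch.

// frame_diff_sketch.js -- rough reconstruction of the delay-and-subtract
// step: keep one previous frame and subtract it from the current one.
inlets = 1;
outlets = 1;

var previous = null;
var subtract = new JitterObject("jit.op");
subtract.op = "absdiff";   // assumed; plain "-" gives the signed version

function jit_matrix(name) {
    var current = new JitterMatrix(name);

    if (previous === null) {
        // first frame: start from an empty "delayed" matrix of the same shape
        previous = new JitterMatrix(current.planecount, current.type, current.dim);
    }

    var out = new JitterMatrix(current.planecount, current.type, current.dim);
    subtract.matrixcalc([current, previous], out);

    // store the current frame as the delayed matrix for next time
    previous.frommatrix(current);

    outlet(0, "jit_matrix", out.name);
}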
Or, you can set the "Stretch Factor" to 1 and forgo the stretching of the clip, instead simply making it sound more… Paul-y. So that's what I did. With a "Stretch Factor" of 1 and a "Time Resolution" of 0.25 seconds, I fed the same clip through the Paulstretch effect 30 times until it was a) pretty quiet and b) quite eerie. If I had to guess, I'd say Paul is a cute, timid ghost: like Casper, but an audio engineer. Thanks Paul!