For this project, I used spectral analysis along with machine learning to create a chord recognition system. The system works by writing FFT frequency bin amplitudes into a matrix, taking “snapshots” of that matrix, outputting each snapshot as a list, and sending these lists to the ml.svm object for classification. While the system could work with any audio source, for this demonstration I made a simple polyphonic synth using sawtooth oscillators and a MIDI controller to play chords for the system to analyze. The challenge with this project was devising a way to process the data from the FFT matrix and mold it into a form that is usable by the SVM but still contains enough information to identify specific chord spectra. Given enough training data, the system can recognize, for example, the difference between a C major chord and a C minor chord.
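To make the snapshot idea concrete, here is a minimal sketch in Python/NumPy standing in for the Max patch. The frame and FFT sizes are arbitrary choices, not the values from my patch, and the normalization is just an assumption to keep loudness from swamping the chord shape.

```python
import numpy as np

FFT_SIZE = 1024  # illustrative size; one snapshot = the list of bin magnitudes

def chord_snapshot(frame):
    """Return normalized FFT bin magnitudes for one audio frame."""
    windowed = frame * np.hanning(len(frame))
    mags = np.abs(np.fft.rfft(windowed, n=FFT_SIZE))
    peak = mags.max()
    return mags / peak if peak > 0 else mags  # normalize so level doesn't dominate
```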
In this demonstration I show how to train the SVM and how to map new chords. At the end I show that a chord played an octave higher is not recognized. This could be fixed easily by training each chord in several octaves (for example, play C major chords with roots C3, C4, and C5 as state 1; D major chords with roots D3, D4, and D5 as state 2; and so on).
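As a sketch of that fix, using scikit-learn's SVC in place of ml.svm and assuming snapshots have already been collected per chord, octave-robust training just labels every octave's snapshots with the same chord class:

```python
from sklearn.svm import SVC

def train_chord_svm(snapshots_by_chord):
    """snapshots_by_chord: chord name -> list of snapshots taken in several octaves."""
    X, y = [], []
    for chord_name, snapshots in snapshots_by_chord.items():
        for snap in snapshots:        # e.g. C3, C4, and C5 voicings of C major
            X.append(snap)
            y.append(chord_name)      # same label regardless of octave
    clf = SVC(kernel="rbf")
    clf.fit(X, y)
    return clf

# clf.predict([new_snapshot]) then returns the chord class for an unseen snapshot.
```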
I decided to use the MIDI Fighter 3D MIDI controller, with its built-in accelerometer, to control the lighting system in the Media Lab. The Fighter serves as a drum machine with kick, snare, hi-hats, crash, and a series of bass notes. The accelerometer channels control the lights as well as some digital effects on the drum machine, including distortion and reverb. When sent MIDI notes on the correct channel, the MIDI Fighter lights up the appropriate button with a color dependent on the velocity of the note. I used Ableton Live to send these MIDI messages as well as to play the samples for the drum machine. To get control-change and note messages into Max, I used the ctlin and notein objects. When the device is tilted left or right, the MIDI Fighter and the overhead lights turn red, with intensity based on how far it is tilted. The same is true for forward and backward tilt, but in blue instead of red. The snare drum triggers a flash from the UV LEDs, and the bass notes trigger a green flash.
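The tilt-to-color mapping is simple enough to sketch outside of Max. Here it is in Python; the controller numbers are placeholders (the real CCs depend on how the MIDI Fighter 3D is configured), and the idea is just scaling distance from the resting position into red or blue intensity.

```python
CC_TILT_LR = 1   # hypothetical CC number for left/right tilt
CC_TILT_FB = 2   # hypothetical CC number for forward/back tilt

def tilt_to_rgb(cc_number, cc_value):
    """Map a 0-127 tilt CC value to an (r, g, b) triple in 0.0-1.0."""
    amount = abs(cc_value - 64) / 64.0      # 64 is roughly the resting position
    if cc_number == CC_TILT_LR:
        return (amount, 0.0, 0.0)           # left/right tilt -> red
    if cc_number == CC_TILT_FB:
        return (0.0, 0.0, amount)           # forward/back tilt -> blue
    return (0.0, 0.0, 0.0)
```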
I had trouble screen capturing audio and video at the same time, so here is a short audio example of a drumbeat I played on the MIDI Fighter.
And here is a video of what the patch looks like, with added pwindows to visualize how the lights in the Media Lab would react.
For this assignment, I created my take on a spectrum visualizer using jit.poke~. The visualizer performs spectral analysis on a number of different audio samples and writes the bin amplitudes into the red, green, or blue channel of a matrix, one cell per frequency bin. The result is a visualization of three audio signals simultaneously. I added a rudimentary spectral degrader to demonstrate how different effects are displayed on the visualizer. The audio used in the examples consists of short loops that I wrote. (My favorite is the 8-bit lead synth on the blue channel, due to the pitch bending.)
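As a rough Python/NumPy analogue of what the jit.poke~ writes are doing (the matrix size and frame length here are arbitrary, and frames are assumed to be at least 1024 samples so there are enough bins to fill a column):

```python
import numpy as np

HEIGHT, WIDTH = 512, 640                          # frequency bins x time columns
display = np.zeros((HEIGHT, WIDTH, 3), dtype=np.float32)

def write_column(frames):
    """frames: three audio frames, one per color channel (red, green, blue)."""
    global display
    display = np.roll(display, -1, axis=1)         # scroll old columns left
    for channel, frame in enumerate(frames):
        mags = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        mags = mags[:HEIGHT] / (mags.max() + 1e-9)  # keep HEIGHT bins, normalize
        display[:, -1, channel] = mags              # newest column on the right
```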
I would like to make a project involving the use of a MIDI Fighter 3D to control lights in the Media Lab as well as trigger samples. This project is inspired by the work of Shawn Wasabi, an electronic artist and performer known for his work with MIDI Fighter controllers. Here is a link to one of his works using the MIDI Fighter 64.
I decided to write a short drum+synth loop in Ableton Live to use as the original audio to be convolved. Here is the loop (2 iterations).
To get my impulse responses, I traveled into Schenley Park. This impulse was recorded on the trail underneath Panther Hollow Road.
Here is what the original sounds like under the bridge:
The next impulse was recorded near Panther Hollow Lake. Notice the background insect/bird noises.
Here is the original played next to the lake:
For this “impulse” I recorded the sound of the stream that flows into the lake.
The convolution (my favorite of all the convolutions):
Finally, I intended to convolve the original with the sound of wind, but I didn't feel like finding a good sample. So instead I used the original recording as the IR and convolved it with itself. Here is the audio:
(Shout out to my trusty assistant Ben for being the balloon popper.)
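For anyone curious about the convolution step itself, here is a minimal offline sketch, assuming the loop and an impulse response have been loaded as mono float arrays at the same sample rate. Swapping the IR for the loop gives the self-convolution experiment above.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_with_ir(dry, ir):
    """Convolve a dry signal with an impulse response and normalize the peak."""
    wet = fftconvolve(dry, ir, mode="full")
    return wet / (np.max(np.abs(wet)) + 1e-9)

# Self-convolution: use the loop as its own impulse response.
# wet = convolve_with_ir(loop, loop)
```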
As in the in-class demonstration of time-shifting video with Jitter, I created a simple video delay (without feedback) using a webcam. However, each color channel in ARGB has its own delay control, creating an effect where each layer of color echoes differently. I was unsatisfied with how “regular” the echoes appeared, so I added the ability to dynamically change the delay time of each channel independently using the microphone: as the received amplitude increases, so does the delay time. These abrupt changes to the delay time create some stuttering in the echoes, which lets the user make trippy music videos on the fly.
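Conceptually, the patch behaves like the sketch below, with NumPy arrays standing in for Jitter matrices. The buffer length and the scaling of mic level into extra delay are made-up values, not what the patch actually uses.

```python
import numpy as np
from collections import deque

MAX_DELAY = 60                        # frames of history to keep
history = deque(maxlen=MAX_DELAY)

def delayed_frame(frame, base_delays, mic_level):
    """frame: HxWx3 array; base_delays: per-channel delays in frames; mic_level: 0.0-1.0."""
    history.append(frame.copy())
    out = np.empty_like(frame)
    for ch in range(3):
        # Louder input pushes this channel further back in the buffer.
        d = int(base_delays[ch] * (1.0 + mic_level * 3.0))
        d = min(d, len(history) - 1)
        out[..., ch] = history[-1 - d][..., ch]
    return out
```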
Here is a video of me messing with the patch while Bassnectar serenades me.
For this assignment I took inspiration from Alvin Lucier's I am sitting in a room by iteratively re-recording audio. The readymade system I used was Ableton Live. I took a default drum loop, played it through my computer speakers and recorded it with a condenser mic, then applied a built-in audio effect called Redux, which is essentially a bitcrusher/downsampler. I did this 11 times (including the original loop). It became somewhat painful to listen to, since certain high frequencies were amplified with each pass.
An interesting and unintended side effect of the setup I used was the latency of the system. Each recorded section is exactly 3 measures at 70 BPM, but as you can hear, the recordings become increasingly late and eventually cut off the end of the original loop.
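Here is a rough offline stand-in for one Redux-style pass and the iteration around it. The speaker/mic coloration and the latency described above aren't modeled, and the hold factor and bit depth are guesses rather than Ableton's actual settings.

```python
import numpy as np

def redux_pass(audio, hold=4, bits=8):
    """Crude downsample (sample-and-hold) plus bit-depth reduction."""
    held = np.repeat(audio[::hold], hold)[:len(audio)]   # sample-and-hold
    levels = 2 ** bits
    return np.round(held * levels) / levels              # quantize amplitude

def iterate(audio, passes=10):
    generations = [audio]
    for _ in range(passes):
        generations.append(redux_pass(generations[-1]))  # feed each result back in
    return generations   # original plus ten degraded generations (11 total)
```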