
Robert Keller Project 2 – Project sPIral

Hello! For our final project, we created an installation using the Media Lab’s 8 speakers, a Raspberry Pi, an accelerometer, DMX floor lights, and a ROLI Seaboard. The idea of our project was to play audio around the room on the speakers and use the lights to cue the listener on where the audio is actually coming from. We used 3 distinct “voices,” each of which can be any audio file. These voices rotate around the room in a Lissajous pattern. The positions of the voices and additional audio synthesis can be controlled with the ROLI Seaboard and an accelerometer hooked up to the Raspberry Pi. As a group member, I assisted with the lighting and helped get the Raspberry Pi operational. My main role was creating a Max patch that incorporated the accelerometer data into our project’s audio synthesis and provided control signals to the lights and the speakers. The patch implements granular synthesis, altered playback speed, downsampling, spectral delay, and band filtering to create a unique slew of sounds that are ultimately played on our 8 speakers. This project was an excellent exercise in debugging, control flow, and sound synthesis. Below is a demonstration of our system in action; you should be able to hear 3 distinct “voices,” one of which is controlled by the Raspberry Pi.
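First, though, to make the Lissajous idea concrete: below is a minimal MATLAB sketch of the path a single voice traces around the room. The frequency ratio and phase offset here are illustrative, not the values our patch actually uses.

```matlab
% Trace one voice's path around the room as a Lissajous figure.
% a, b, and delta are illustrative; each voice gets its own set.
t = linspace(0, 2*pi, 1000);
a = 3; b = 2; delta = pi/2;   % frequency ratio and phase offset
x = sin(a*t + delta);         % voice position in the room plane,
y = sin(b*t);                 % normalized to [-1, 1]
plot(x, y); axis equal;
xlabel('x'); ylabel('y'); title('Lissajous path of one voice');
```

In the patch, each voice’s (x, y) position drives the per-speaker gains and the light cues, which is what lets the lights point at the sound.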

Below is a video of me walking through my Max patch in detail and showing how to use it (save for the subpatches, which can be viewed in the attached zip file at the bottom; audio is included in the video):

Below is a gist of the main patch:

Finally, I’ve attached a zip file with all the files you’ll need to use the patch:

seaboardfinal

Project 1: Twisted Rhythm Game

Hello! For my first project this semester, I decided to make a spooky, Halloween-themed rhythm game built around a Leap Motion controller. The main object of the game is to score as few or as many points as possible. The player accumulates points by positioning the pumpkins above the skulls right before the skulls “pop” and fade into the background. The pumpkins are controlled by the Leap Motion hand sensor, which in this context tracks the palm position of each hand, so the player can use both hands to guide the pumpkins separately. There are a couple of Easter eggs hidden in the game that trigger during certain sections of the streamed music and when the player exceeds a certain number of points. Below is a video in which I demo the game.
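First, though, the scoring mechanic in a nutshell: a hit registers when a pumpkin sits above a skull during a short window right before the pop. Here is that check as a MATLAB sketch; the window length, alignment threshold, and names are my assumptions, not the patch’s actual values.

```matlab
% Award a point if the pumpkin (palm) is horizontally aligned with
% the skull during the short window before the skull pops.
% All thresholds below are illustrative assumptions.
function pts = score_hit(palm_x, skull_x, t_now, t_pop)
    window  = 0.25;                          % seconds before the pop
    aligned = abs(palm_x - skull_x) < 0.05;  % pumpkin above the skull
    in_time = (t_pop - t_now) > 0 && (t_pop - t_now) < window;
    pts = double(aligned && in_time);
end
```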

Below are the two main Max patches I wrote to create the game:

Here is the second patch:

Below is a link to a zip file with all the relevant files for this project:

https://drive.google.com/open?id=1QKsCDLECiuAogAaK-4MKYV6uXcCKUMBw

Assignment 4 – Nasally Controlled Filter

Hello!

For my pfft~ project, I wanted to experiment with facial tracking and use information from my webcam to control the parameters of a pfft~. The x and y coordinates of my nose provided high and low cutoffs for the frequency bins of the original signal. As an additional point of control, I added an adc~ object to take in audio and alter the magnitude and phase of the original signal in the frequency domain. Below is a clip of the original audio that was used.
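First, though, here is a rough MATLAB stand-in for what happens inside the pfft~ on each frame; the nose coordinates are assumed to be normalized to [0, 1], and the frame here is just noise.

```matlab
% One frame of spectral gating: the nose's (x, y) position picks a
% band of FFT bins to keep. Coordinates are assumed normalized.
N = 1024;                                 % FFT size of the pfft~
frame  = randn(N, 1);                     % stand-in for one audio frame
nose_x = 0.8; nose_y = 0.1;               % assumed webcam nose position
w = 0.5 - 0.5*cos(2*pi*(0:N-1)'/N);       % Hann window
X  = fft(w .* frame);
lo = max(2,   round(nose_y * N/2));       % low cutoff bin
hi = min(N/2, round(nose_x * N/2));       % high cutoff bin
mask = zeros(N, 1);
mask(lo:hi) = 1;                          % keep bins between the cutoffs
mask(N+2-hi : N+2-lo) = 1;                % matching negative-frequency bins
out = real(ifft(X .* mask));              % resynthesized, band-limited frame
```

The adc~ control path operates on the same frames, altering the magnitudes and phases before resynthesis.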

Now here is a clip of the audio after I used my nose to alter it (the adc~ picked up a bit of the feedback from my speakers as well):

Here’s a video of me using the patch:

Below is the gist for the top-level patch:

And finally, here is the pfft~ I created for use in this project:

Robert Keller Assignment 3: Convolutional Cover

Hello! For this assignment, I took several impulse recordings and convolved them with different sources of audio. For my original signal, I used the opening of Jethro Tull’s Thick as a Brick. I picked this recording because it is an acoustically “dry” signal but still contains many different musical elements; the excerpt is below.
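First, though, here’s how every clip that follows was produced, in sketch form (MATLAB, with placeholder file names):

```matlab
% Convolve the dry excerpt with an impulse response via the FFT.
% File names are placeholders for the actual recordings.
[x, fs] = audioread('thick_as_a_brick_intro.wav');  % dry signal
[h, ~]  = audioread('stairwell_ir.wav');            % impulse response
n = length(x) + length(h) - 1;                % full convolution length
y = real(ifft(fft(x(:,1), n) .* fft(h(:,1), n)));   % fast convolution
y = y / max(abs(y));                          % normalize to avoid clipping
audiowrite('tull_convolved.wav', y, fs);
```

Multiplying the two spectra is equivalent to time-domain convolution, just far faster for impulse responses this long.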

My first impulse recording was recorded in the bottom of the Doherty stairwell:

The resulting audio sounded exactly as if it had been played from the bottom of a stairwell:

My second impulse recording was taken from the inside of a grand piano:

Ian Anderson’s voice reacted strangely to this impulse, probably because of the harmonics produced:

The next impulse response I used was actually a recording of a handpan that I found on freesound.org:

Convolving this signal with Tull provided some interesting results:

For my final impulse response, I recorded the impulse of my room at 4am. Needless to say, my roommate wasn’t pleased.

I had to doctor the impulse signal a bit to produce an interesting response:

The resulting convolution produced an interesting echo effect:

For my convolution piece, I recorded a cover of Skyhill’s “City as You Walk” and split the recording into 5 pieces. The first chunk was convolved with the IR from the Doherty stairwell, the second with the IR from the inside of the grand piano, and the third with an IR from the top floor of Margaret Morrison. The fourth chunk (and my favorite part of the song) was convolved with the IR of my room at 4am. The trailing bit was convolved with the handpan IR, because the handpan destroys most of the vocals. You should be able to make out when the impulse response changes from section to section.
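In sketch form, the assembly looked something like this (MATLAB; the file names and the even split are placeholders, since the real cut points follow the song’s structure):

```matlab
% Split the cover into sections and convolve each with its own IR,
% then concatenate. Each section keeps its reverb tail.
[x, fs] = audioread('city_as_you_walk_cover.wav');
irs  = {'stairwell_ir.wav', 'piano_ir.wav', 'morrison_ir.wav', ...
        'room_4am_ir.wav', 'handpan_ir.wav'};
cuts = round(linspace(1, length(x) + 1, numel(irs) + 1));
y = [];
for k = 1:numel(irs)
    seg    = x(cuts(k):cuts(k+1) - 1, 1);
    [h, ~] = audioread(irs{k});
    n = length(seg) + length(h) - 1;
    c = real(ifft(fft(seg, n) .* fft(h(:,1), n)));
    y = [y; c / max(abs(c))];             % normalize each section
end
audiowrite('convolution_piece.wav', y, fs);
```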

Finally, here is the Max patch I used to generate most of my audio signals:

Assignment 2 – Trippy Time Machine

Hello! For the time machine assignment, I decided to experiment with several different effects. I messed with the color of the video feed and split the Jitter video into two halves to work with. Each half was passed through a feedback time loop, and the feedback in each loop was the delayed video from the other half. I also used a motion detector to give the moving elements of the final frame a yellow tinge. I had to tinker with the feedback, transparency, and color values to get the intended effect.
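Here’s the gist of the signal flow as a MATLAB sketch, since the Jitter patch is hard to show inline; the feedback strength, motion threshold, and tint amount are assumptions, not the patch’s exact values:

```matlab
% Split each frame in two, blend each half with the previous output
% of the *other* half, and tint moving pixels yellow.
v  = VideoReader('input.mov');          % placeholder file name
fb = 0.6;                               % feedback strength (assumed)
prev = [];
while hasFrame(v)
    f  = double(readFrame(v)) / 255;
    w2 = 2 * floor(size(f, 2) / 2);
    f  = f(:, 1:w2, :);                 % even width so the halves match
    if isempty(prev), prev = f; end
    half = w2 / 2;
    L = f(:, 1:half, :);  R = f(:, half+1:end, :);
    % cross-feedback: each half mixes with the other half's last output
    outL = (1 - fb) * L + fb * prev(:, half+1:end, :);
    outR = (1 - fb) * R + fb * prev(:, 1:half, :);
    out  = cat(2, outL, outR);
    % crude motion detector: difference against the previous output
    moving = mean(abs(f - prev), 3) > 0.05;
    out(:,:,1) = out(:,:,1) + 0.3 * moving;   % push moving pixels
    out(:,:,2) = out(:,:,2) + 0.3 * moving;   % toward yellow (R and G up)
    prev = min(out, 1);
    image(prev); drawnow;
end
```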

Robert Keller – Feedback using a phase vocoder

Hi! For my feedback-with-found-systems assignment, I used a phase vocoder that I improved (starting from http://sethares.engr.wisc.edu/vocoders/matlabphasevocoder.html). A phase vocoder is essentially a system that slows down or speeds up audio without altering its pitch. Designing an industry-standard phase vocoder is rather difficult, and the implementation below is not without its shortcomings: slowing audio down with this vocoder introduces a “phasey” quality to the result. The artifacts are initially passable, but if one were to, say, slow the audio down by a factor of 2, speed it back up by a factor of 2, and repeat the process 20+ times, one would drastically degrade the quality of the original audio. The final result is incredibly phasey. The initial piece of audio I used was from the song “Message in a Bottle” by the Police. Below are the resulting audio and code from MATLAB.
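First, though, here’s the core idea as a bare-bones sketch (a simplification, not the actual script, which follows below):

```matlab
% Minimal phase-vocoder time stretch: r > 1 slows mono input x down
% without changing its pitch; r < 1 speeds it up.
function y = pv_stretch(x, r)
    N = 1024; hop_a = N / 4;              % analysis frame and hop
    hop_s = round(hop_a * r);             % synthesis hop sets the stretch
    w = 0.5 - 0.5*cos(2*pi*(0:N-1)'/N);   % Hann window
    x = x(:);
    nframes = floor((length(x) - N) / hop_a);
    y   = zeros(nframes * hop_s + N, 1);
    phi = angle(fft(w .* x(1:N)));        % accumulated synthesis phase
    prev  = phi;
    omega = 2 * pi * hop_a * (0:N-1)' / N;   % expected advance per hop
    for k = 1:nframes
        X    = fft(w .* x(k*hop_a + (1:N)));
        dphi = angle(X) - prev - omega;          % phase deviation
        dphi = mod(dphi + pi, 2*pi) - pi;        % wrap to [-pi, pi)
        phi  = phi + (omega + dphi) * r;         % rescale to the new hop
        prev = angle(X);
        out  = real(ifft(abs(X) .* exp(1i * phi))) .* w;
        idx  = (k - 1) * hop_s + (1:N);
        y(idx) = y(idx) + out;                   % overlap-add
    end
end
```

In theory, stretching by 2 and then by 1/2 is a round trip; in practice each pass smears the phases a little more, which is exactly the buildup you can hear over the twenty- and thirty-pass clips below.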

Here is the script I used to generate the audio:

Original Audio:

Audio passed through phase vocoder once:

Audio passed through twice:

Audio passed through three times:

Audio passed through nine times:

Audio passed through fifteen times:

Audio passed through twenty times:

Audio passed through thirty times: