
One-handed DAW

For this project I wanted to build a digital audio workstation that one could control entirely with one hand, with as little keyboard/mouse interaction as possible. The Leap Motion controller seemed like the easiest way to do this, but I was stuck for a bit trying to figure out how to control so many different parameters with just one hand. The gate object was the key here, in conjunction with the machine learning patch we looked at in class. Upon recognizing a certain hand position, the gate object would cycle through its outlets and activate different subpatches, each of which recognized its own set of hand gestures. For example, the hand gestures in mode 1 would control audio playback: pause, play, forward/reverse, etc. The same hand gestures in mode 2 would control a spectral filter, in mode 3 they might control a MIDI drum sequencer, and so on through the various modes. At least that was the idea in theory…
I definitely bit off more than I could chew with this undertaking. There was far more I wanted to do than I had time for, and only a couple of modes worked semi-reliably. I intend to smooth out the process and eventually make the DAW as fluid and intuitive as possible. One day, maybe, a disabled music producer could perform an entire set with one hand and not a single click.
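To make the mode-switching idea concrete, here is a rough Python sketch of the logic (the real project lives in a Max patch; the mode names, gestures, and handlers below are made-up placeholders):

MODES = ["playback", "spectral_filter", "drum_sequencer"]

class OneHandDAW:
    def __init__(self):
        self.mode = 0  # index of the currently open "gate outlet"

    def on_gesture(self, gesture):
        if gesture == "mode_switch":                   # the special hand position
            self.mode = (self.mode + 1) % len(MODES)   # cycle like the gate's outlets
            print("switched to", MODES[self.mode])
        elif MODES[self.mode] == "playback":
            print("playback control:", gesture)        # play, pause, scrub, etc.
        elif MODES[self.mode] == "spectral_filter":
            print("filter control:", gesture)          # same gesture, new meaning
        elif MODES[self.mode] == "drum_sequencer":
            print("sequencer control:", gesture)

daw = OneHandDAW()
for g in ["fist", "mode_switch", "fist"]:  # "fist" does different things per mode
    daw.on_gesture(g)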

Here’s the (WIP) patch:

Thanks!

Assignment 4: What have I done?

I thought it would be cool to have an effect triggered by a certain frequency or amplitude in a song, so I recreated the frequency crossover patch from class and used it as the manipulated variable in my attempt at an effect automation patch. I added an amplitude gate to the pfft~ subpatch that only let through sounds above a certain amplitude and sent the resulting 1/0 back to the main patch, where it then activated a random number generator and a counter that, in turn, controlled the gain on each of the signals coming out of the frequency crossover processing. This sounds like it doesn’t make any sense at all as I’m typing it, so here’s the finished product:

https://soundcloud.com/user186794567/coby-spectral

And my code:

https://gist.github.com/anonymous/ef7aab70161be0846284623791ac6616

https://gist.github.com/anonymous/ac489ab3534398d4ce3f3acb4756d73d

It’s not quite the level of control I would like, but it’s a start and I definitely feel more comfortable working in the frequency domain now.
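For anyone who would rather read the logic than open the patch, here is a loose Python/NumPy sketch of the idea (this is not the actual Max code; the threshold, the number of bands, and which band drives the gate are invented for illustration, and the counter that steps through the gains in the patch is simplified away):

import numpy as np

rng = np.random.default_rng()
THRESHOLD = 0.5   # hypothetical amplitude-gate level
NUM_BANDS = 3     # e.g. low / mid / high outputs of the frequency crossover
gains = np.ones(NUM_BANDS)

def process_block(bands):
    # bands: a list of NUM_BANDS signal blocks from the crossover
    global gains
    if np.abs(bands[-1]).max() > THRESHOLD:        # stand-in for the 1/0 the gate sends back
        gains = rng.uniform(0.0, 1.0, NUM_BANDS)   # re-roll every band's gain
    return [g * b for g, b in zip(gains, bands)]   # scaled bands, summed later in the patch

# quick test with noise standing in for the crossover outputs
blocks = [rng.standard_normal(512) * 0.4 for _ in range(NUM_BANDS)]
out = process_block(blocks)
print(gains)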

Baker Hall Shenanigans

I recorded 2 balloon pops in Baker Hall, because I’ve always loved the incredible reverb you can get if you stand in just the right spots. The first one was recorded directly under the center of the dome just inside the main Baker doors, with the Zoom recorder about 2 feet from the balloon, pointed directly at it (which probably wasn’t the best orientation in retrospect). The second one was recorded from all the way down at the other end of the building in Porter Hall, with the balloon in the same spot. The sounds I got were surprisingly different, so I decided to use both. The third sound is actually me tapping directly on the microphone of the media lab computer, recorded straight into Audacity. The fourth IR is an excerpt from a piece by one of my favorite composers, Hector Berlioz: the Symphonie fantastique. Here they are in order:

The signal I convolved was an audio clip of me attempting to play a staccato version of Mac Miller’s ROS on the piano:

Here’s a playlist of the audio clip convolved with the 4 different IR recordings:
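The convolution itself was done in Max, but for the curious, the same operation in Python looks roughly like this (the file names are placeholders, both files are assumed to be at the same sample rate, and everything is mixed to mono first):

import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("piano_staccato.wav")     # the staccato piano clip
ir, _ = sf.read("baker_dome_balloon.wav")   # one of the four IRs

# mix down to mono so the convolution stays one-dimensional
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

wet = fftconvolve(dry, ir)        # every sample of the dry signal excites the IR
wet /= np.max(np.abs(wet))        # normalize to avoid clipping
sf.write("piano_convolved.wav", wet, sr)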


Frequency-to-color converter

For this assignment, I wanted to create a system that would draw colored lines on an lcd object based on a sound’s frequency and its transposed feedback frequencies. Unfortunately, due to time and knowledge constraints, I only managed to achieve half of my plan. I took a frequency-to-color converter (borrowed from my good friend Matt Hova, https://www.youtube.com/watch?v=R7vuAlkvKGQ) and combined it with the feedback system we made in class, so that the patch displays a frequency’s assigned color and then, in another panel, the colors of its transposed frequencies. My plan is to combine these values and somehow automate a line-drawing process based on the initial sound’s amplitude and length… or something. Here’s the patch so far:
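As a point of reference, here is one toy way to write a frequency-to-color mapping in Python (this is just a sketch of the general idea of mapping pitch class to hue; it is not Hova’s actual conversion or the mapping in my patch):

import colorsys
import math

def freq_to_rgb(freq, ref=440.0):
    # semitones above A440, wrapped to one octave, becomes the hue
    semitones = 12 * math.log2(freq / ref)
    hue = (semitones % 12) / 12.0
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

for f in (220.0, 440.0, 523.25):   # A3, A4, C5 (octaves share a color)
    print(f, freq_to_rgb(f))

Wrapping to pitch class means octave-transposed feedback frequencies land on the same color; a linear frequency-to-hue mapping would spread them out instead.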

Snapchat Feedback

For this assignment, I used the Snapchat app to record a video, then re-recorded it 40 times, alternating between my iPhone and iPad, until both the video and audio were unrecognizable. The original video was taken on the iPad, and it was of the music video for Tame Impala’s Feels Like We Only Go Backwards. Here’s how it turned out: