For my project, I adapted the convolution patch shown in class to convolve an audio clip with 4 impulse response clips, each at its own gain level. The gains are controlled by the x and y coordinates of a 2D slider: each impulse response's gain is a sinusoidal function of the distance from the slider position to one of the slider's corners. I also added a delay line. (I will upload the patches once I figure out how to do that. Sorry!)
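To make the corner mapping concrete, here is a rough Python sketch of one way the gains could be computed. This is not the actual patch logic: it assumes a unit-square slider with one impulse response anchored at each corner and a raised-cosine falloff as the "sinusoidal function of distance"; the exact curve in the Max patch may differ.

```python
import math

# Assumed layout: unit-square slider, one impulse response per corner.
CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
D_MAX = math.sqrt(2.0)  # farthest any point can be from a given corner

def corner_gains(x, y):
    """Return one gain per impulse response for slider position (x, y)."""
    gains = []
    for cx, cy in CORNERS:
        d = math.hypot(x - cx, y - cy)
        # 1.0 at the corner, fading sinusoidally to 0.0 at the opposite corner.
        gains.append(0.5 * (1.0 + math.cos(math.pi * d / D_MAX)))
    return gains

print(corner_gains(0.5, 0.5))  # center: all four gains equal (0.5)
```

The nice property of a curve like this is that moving the slider toward any corner smoothly emphasizes that corner's impulse response while the other three fade out.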
The source sound is Ter Vogormia by Diamanda Galás. My 4 impulse responses are a bottle cap falling, a whistle, and a couple of other sounds I had lying around on my computer. Here are all the original files:
Finally, I used a Leap Motion to obtain gestural data from my fingers to control the inputs to the convolution patch. To do this, I downloaded a third-party Max object that processes the raw Leap data. I used this object to extract the 75-dimensional vector containing all the finger data at any given moment and send it via an OSC message to Wekinator, a GUI that lets you build regression models using neural networks. I then trained a model mapping the 75 inputs to 3 outputs, which I sent back to my original Max patch (via OSC), scaled, and smoothed logarithmically. These 3 outputs controlled the x and y values of my slider along with my delay time.
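To show how the pieces talk to each other, here is a minimal Python sketch of the OSC plumbing, assuming the python-osc library and Wekinator's default addresses and ports (/wek/inputs into port 6448, /wek/outputs out of port 12000). The Max receive port, the delay-time scaling range, and the smoothing coefficient are made-up placeholders; in the real project the scaling and smoothing happen inside the Max patch itself.

```python
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# --- Send side: forward the 75-value finger vector to Wekinator. ---
# Wekinator listens for "/wek/inputs" on port 6448 by default.
wek = SimpleUDPClient("127.0.0.1", 6448)

def send_features(features):
    """features: list of 75 floats from the Leap Max object (called per frame)."""
    wek.send_message("/wek/inputs", features)

# --- Receive side: Wekinator sends "/wek/outputs" to port 12000 by default. ---
# The Max receive port (7400), output scaling, and smoothing coefficient
# below are hypothetical stand-ins, not values from the actual patch.
max_patch = SimpleUDPClient("127.0.0.1", 7400)
smoothed = [0.0, 0.0, 0.0]
ALPHA = 0.2  # one-pole (exponential) smoothing factor

def on_wek_outputs(address, *outputs):
    for i, v in enumerate(outputs[:3]):
        # One plausible reading of "smoothed logarithmically": each value
        # decays exponentially toward the newest Wekinator output.
        smoothed[i] += ALPHA * (v - smoothed[i])
    x, y = smoothed[0], smoothed[1]        # slider coordinates in [0, 1]
    delay_ms = 20.0 + 980.0 * smoothed[2]  # scale third output to 20-1000 ms
    max_patch.send_message("/slider/xy", [x, y])
    max_patch.send_message("/delay/time", delay_ms)

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_wek_outputs)
# serve_forever() blocks; in practice the send side would run from the
# Leap callback in a separate thread or process.
BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()
```

Here is the final outcome: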