Project 1: Convolution Kinect

For my first project, I used a Microsoft Kinect to track a performer's movement within a space and map the tracking data to the parameters of a granular synthesizer. At the end of the pipeline, the resulting sound was processed into a Lissajous pattern and projected onto a wall behind the performance space.
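A Lissajous figure is what you get when you plot one signal against another, X against Y. In the piece the two signals were the live stereo output of the synth; as a rough sketch of the idea (my own illustration, not from the project files), here it is with pure sine waves at a 3:2 frequency ratio:

```java
public class Lissajous {
    // Return n points of the curve x = sin(a*t + delta), y = sin(b*t),
    // sampled over one full period. a and b set the frequency ratio,
    // delta the phase offset between the two "channels".
    public static double[][] points(int a, int b, double delta, int n) {
        double[][] pts = new double[n][2];
        for (int i = 0; i < n; i++) {
            double t = 2 * Math.PI * i / n;
            pts[i][0] = Math.sin(a * t + delta);
            pts[i][1] = Math.sin(b * t);
        }
        return pts;
    }

    public static void main(String[] args) {
        // A 3:2 ratio with a quarter-period phase offset gives a classic
        // woven figure; in performance the channels were the granular
        // synth's stereo output rather than clean sines.
        for (double[] p : points(3, 2, Math.PI / 2, 8)) {
            System.out.printf("%.3f %.3f%n", p[0], p[1]);
        }
    }
}
```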

To harvest the information from the Kinect, I used Processing, an open-source programming environment, which I programmed to collect data from the Kinect and parse out only the X-Y coordinates of the left hand, right hand, and head. Then, using OSC (Open Sound Control), I sent this bundle of coordinates to Max/MSP.
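In the actual sketch, an OSC library handles the wire format, but it is worth seeing what actually travels over UDP: an OSC message is an address pattern, a type-tag string (one `f` per float), and the arguments as big-endian 32-bit floats, with each string null-padded to a 4-byte boundary. A minimal encoder (the `/head` address and coordinate values are made up for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public class OscDemo {
    // Null-pad a string to the next 4-byte boundary; OSC requires at
    // least one trailing null byte after every string.
    static byte[] padded(String s) {
        int len = s.length() + 1;          // content + terminating null
        int total = (len + 3) / 4 * 4;     // round up to a multiple of 4
        byte[] out = new byte[total];
        System.arraycopy(s.getBytes(), 0, out, 0, s.length());
        return out;
    }

    // Encode one OSC message: address, ",ff..." type tags, then the
    // float arguments as big-endian 32-bit values.
    public static byte[] encode(String address, float... args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] a = padded(address);
        out.write(a, 0, a.length);
        StringBuilder tags = new StringBuilder(",");
        for (int i = 0; i < args.length; i++) tags.append('f');
        byte[] t = padded(tags.toString());
        out.write(t, 0, t.length);
        for (float f : args) {
            byte[] fb = ByteBuffer.allocate(4).putFloat(f).array();
            out.write(fb, 0, 4);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // Hypothetical joint coordinates for illustration.
        byte[] msg = encode("/head", 0.5f, 0.25f);
        System.out.println(msg.length + " bytes");
        // To actually deliver it to Max's [udpreceive 12000], the bytes
        // would go out in a DatagramPacket to port 12000.
    }
}
```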

In Max, I received the coordinate data from Processing with the [udpreceive] object.

Next, I scaled these numbers into ranges usable by the granular synthesis parameters.

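In Max this kind of rescaling is typically done with the [scale] object; conceptually it is just a linear map from the Kinect's coordinate range to a parameter range. A sketch of the math (the specific ranges here are invented for illustration, not the values in my patch):

```java
public class ScaleDemo {
    // Linearly rescale v from [inMin, inMax] to [outMin, outMax] — the
    // same job Max's [scale] object (or Processing's map()) performs.
    static float scale(float v, float inMin, float inMax,
                       float outMin, float outMax) {
        return outMin + (v - inMin) * (outMax - outMin) / (inMax - inMin);
    }

    public static void main(String[] args) {
        // Hypothetical mapping: hand X position in Kinect pixels (0..640)
        // to a grain-size parameter in milliseconds (10..500).
        float handX = 320f;
        System.out.println(scale(handX, 0f, 640f, 10f, 500f));
    }
}
```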

Originally, I started with a rather bland audio file to be processed; however, I soon realized that I would need a more complex sound file to really accentuate the differences in the granular synthesis. I switched to a recording of wind chimes, which proved to be much more textural and rewarding.

Below is a video of the piece in action, as well as the Max patch and Processing files.

IMG_1881

https://gist.github.com/jsantillo/9441d9acfa79fbc7548cc4aeaf5d0150.js