Roles:
Computer vision programming – Dan Moore
Max Patch Programming and Sound Design – Kaitlin Schaer
Percussion Patch and Documentation – Arnav Luthra
Overview:
Our goal for the project was to build a live performance system that generates music from the act of drawing. To do this, we used computer vision to recognize shapes being drawn on a piece of paper and generated sounds in response to them. We had three main “instruments”: one was melodic, while the other two produced “whooshey” sounds.
Technical Details:
For the project, we ended up using two Max patches and a separate instance of OpenCV. The computer vision processing was done on Dan’s laptop and gave us the color of each blob, the location of the blob’s centroid (the blob’s central point), and the velocity at which the blob was growing. We then sent these parameters to Kaitlin’s laptop using OSC (Open Sound Control). On Kaitlin’s laptop, we used these parameters to control an arpeggiator as well as resonant filters on the sounds. The arpeggiator added different notes within a fixed key depending on the location of the blob and then triggered two different MIDI instruments (the melodic saw wave and one of the whooshey noises). The third instrument took white noise and applied resonant filters at a rhythmic interval to create a percussive effect. Parts of this patch were pieced together from various sources online, then compiled and modified to suit the needs of our project.
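The two blob measurements we relied on, the centroid and the growth velocity, are simple to state in code. The sketch below is illustrative only (the function names and the mask-as-pixel-list representation are assumptions, not the actual OpenCV code Dan used):

```python
# Hypothetical sketch of the blob measurements, assuming a blob is given
# as a list of (x, y) pixel coordinates from a thresholded mask.

def centroid(pixels):
    """Centroid (mean position) of a blob from its pixel coordinates."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    return cx, cy

def growth_velocity(area_prev, area_curr, dt):
    """Rate of change of blob area between two frames (pixels per second)."""
    return (area_curr - area_prev) / dt

blob = [(0, 0), (2, 0), (0, 2), (2, 2)]
print(centroid(blob))                   # (1.0, 1.0)
print(growth_velocity(100, 160, 0.5))   # 120.0
```

In practice OpenCV computes the centroid from image moments rather than an explicit pixel list, but the quantity is the same.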
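OSC itself is a small binary format: a null-padded address string, a type-tag string, then big-endian arguments. A minimal encoder for a float-only message like the blob parameters might look like this (the `/blob` address is a made-up example, not the address we actually used):

```python
import struct

def osc_pad(b):
    """Null-terminate and pad a byte string to a multiple of 4, per OSC."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, *floats):
    """Encode an OSC message whose arguments are all 32-bit floats."""
    msg = osc_pad(address.encode())                    # address pattern
    msg += osc_pad(("," + "f" * len(floats)).encode()) # type tags, e.g. ",fff"
    for f in floats:
        msg += struct.pack(">f", f)                    # big-endian float32
    return msg

# e.g. centroid x, centroid y, growth velocity:
packet = osc_message("/blob", 0.42, 0.77, 120.0)
print(len(packet))  # 28 bytes: 8 (address) + 8 (tags) + 12 (floats)
```

In the actual project this encoding was handled for us (Max has native `udpsend`/`udpreceive` OSC objects), but seeing the wire format makes the laptop-to-laptop link less mysterious.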
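The arpeggiator’s “different notes within a fixed key” idea can be sketched as a mapping from a normalized blob position to a scale degree. The choice of C major, the base note, and the two-octave range below are assumptions for illustration; the real note selection lived in Kaitlin’s Max patch:

```python
# Illustrative note selection: map a blob's normalized position (0..1)
# to a MIDI note constrained to a fixed key. All constants are assumptions.

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale
BASE_NOTE = 60                     # middle C in MIDI

def note_for_position(x_norm, octaves=2):
    """Pick a scale degree across `octaves` octaves from a 0..1 position."""
    degrees = len(C_MAJOR) * octaves
    idx = min(int(x_norm * degrees), degrees - 1)
    octave, degree = divmod(idx, len(C_MAJOR))
    return BASE_NOTE + 12 * octave + C_MAJOR[degree]

print(note_for_position(0.0))  # 60 (C4)
print(note_for_position(0.5))  # 72 (C5)
```

Quantizing to a fixed key like this is why arbitrary drawing gestures still produced consonant melodic material.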
Video of final performance:
https://drive.google.com/open?id=0BzxqdpE9VUgJR04xMjlsN0U3MHc
Closing remarks:
Overall, our presentation went well despite a few technical difficulties at the beginning (we had trouble getting Kaitlin’s laptop to receive the information from Dan’s). We were limited in what we could do with the computer vision aspect, but if we were to continue this project we could find other interesting parameters to extract from the drawings.