Users interact with an emulated theremin by moving colorful objects in front of a webcam. A brightly colored object is tracked in the video feed, and its position is translated into tones of varying frequency.
I had a few different concepts in mind going into this final project, but I decided to implement a tool I’ve been wanting to make for a while: one that uses physical motion in real space to generate tones, so the program can be played and sampled like any other digital instrument.
The program has a few basic functions. First, the user selects a color by clicking on the area of the video feed where their brightly colored object appears. Once the color is calibrated, a tracking function estimates the object's position as the weighted average of all pixel coordinates in the frame, with each pixel weighted by how closely its color matches the calibrated one. The user can then move the object around in front of the camera, and the tone output from the program reflects its position through pitch and volume.
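A simplified sketch of that tracking-and-mapping logic might look like this in Python with NumPy (the exponential distance-to-weight falloff, the function names, and the 220–880 Hz range here are illustrative choices for the sketch, not necessarily what the actual program uses):

```python
import numpy as np

def weighted_centroid(frame_rgb, target_rgb, softness=30.0):
    """Estimate the object's position as the average of all pixel coordinates,
    weighted by each pixel's color similarity to the calibrated target.

    frame_rgb: (H, W, 3) uint8 image; target_rgb: calibrated (r, g, b).
    Returns (x, y) normalized to [0, 1]."""
    h, w, _ = frame_rgb.shape
    # Euclidean color distance from every pixel to the calibrated color.
    dist = np.linalg.norm(frame_rgb.astype(np.float32) - np.float32(target_rgb), axis=2)
    # Turn distance into weight: close colors dominate, distant colors vanish.
    weights = np.exp(-dist / softness)
    ys, xs = np.mgrid[0:h, 0:w]
    total = weights.sum()
    x = (weights * xs).sum() / total
    y = (weights * ys).sum() / total
    return x / (w - 1), y / (h - 1)

def position_to_tone(x, y, f_lo=220.0, f_hi=880.0):
    """Map horizontal position to pitch (exponentially, so equal hand motion
    gives equal musical intervals) and vertical position to volume."""
    freq = f_lo * (f_hi / f_lo) ** x
    volume = 1.0 - y  # top of the frame = loudest
    return freq, volume

if __name__ == "__main__":
    # Synthetic test frame: gray background with a bright green patch.
    frame = np.full((240, 320, 3), 128, dtype=np.uint8)
    frame[60:90, 200:240] = (40, 220, 60)
    x, y = weighted_centroid(frame, target_rgb=(40, 220, 60))
    print(position_to_tone(x, y))  # patch sits right of center -> higher pitch
```

In a live version, the frame would come from the webcam each tick (e.g. via OpenCV) and the frequency and volume would drive an oscillator, but the weighted-average step itself is just the few lines above.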
I wasn’t able to experiment with the final setup as much as I would have liked, but I put together a short edit compiling some of the sounds I recorded.
I think this piece functions best when the user is focused only on the object they’re manipulating, without having to worry about how it looks on the screen. It would help to create some physical frame or boundary the user can operate within, rather than making them consider the video feed itself. I also think the layering of sounds is an important element of the interaction. In developing this further, I would add features for live looping or more finely controlled sampling and compositing.