The initial concept of my project was that people would draw sound: their pictures would be translated directly into sound. As the project developed, though, it transformed into a way to edit samples, since I found that translating the color data straight into sound wasn't very interesting. The Max patch uses a webcam to capture the image in which the user wants to track colors.
The beginning of the patch is a color tracker that the user can interact with. The user clicks inside the pwindows to choose the colors they want to track, and the color data and location found by jit.findbounds are sent to the bottom of the patch.
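To make the tracking step concrete, here is a rough Python analogue of what jit.findbounds does: scan an image for pixels within a color range and report their bounding box. The image representation and per-channel tolerance here are my own assumptions for illustration, not how Max implements it internally.

```python
def find_bounds(image, target, tol=30):
    """image: 2D list of (r, g, b) rows; target: (r, g, b) color to track;
    tol: per-channel tolerance. Returns (min_x, min_y, max_x, max_y) of
    matching pixels, or None if nothing matches."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            if (abs(r - target[0]) <= tol and
                    abs(g - target[1]) <= tol and
                    abs(b - target[2]) <= tol):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

# Tiny 3x3 test image: one reddish pixel at (x=2, y=1).
img = [[(0, 0, 0)] * 3 for _ in range(3)]
img[1][2] = (250, 10, 10)
print(find_bounds(img, (255, 0, 0)))  # (2, 1, 2, 1)
```

The bounding-box coordinates returned here play the role of the location data that jit.findbounds sends down to the rest of the patch.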
I used the bottom coordinate of the bounding box received from jit.findbounds as my control for pitch. I had explored the groove~ object and controlling samples, and I used it in this project to control not only the pitch but also the delay and gain of a sample. While the bottom location controls pitch, the RGB values control delay and gain. Initially, I also controlled feedback, but when playing around with the comb~ object I found it gave more interesting results when I kept feedback high and changed only gain and delay. Red and blue control delay, while green controls gain. Below are examples of the difference, using stems from a band named Pomplamoose.
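The mapping described above can be sketched as a couple of small functions. The specific ranges and scaling constants below are made up for illustration; the real values live in the Max patch.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear map from one range to another, like Max's [scale] object
    (no clamping)."""
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def controls_from_tracker(bottom_y, r, g, b, height=480):
    """Turn tracker data into playback controls (illustrative ranges).
    bottom_y: bottom of the bounding box in pixels; r, g, b: 0..255."""
    pitch = scale(bottom_y, 0, height, 2.0, 0.5)   # lower in frame = lower pitch
    delay_ms = scale(r + b, 0, 510, 1.0, 50.0)     # red + blue drive delay time
    gain = scale(g, 0, 255, 0.0, 1.0)              # green drives gain
    return pitch, delay_ms, gain

print(controls_from_tracker(240, 128, 255, 64, height=480))
```

With feedback pinned high, varying only the delay time and gain like this is what produces the comb-filter differences in the audio examples.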
Another aspect of the project is that the sound is binaural: the top-right region of the picture controls the amplitude and phase of a cycle~ object, which drives the path of the sound through the stereo field. Initially, the top-right values set a fixed position in the soundscape, but I thought it would be more interesting if the sound moved through space rather than sitting still, since the image itself is still.
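The moving-pan idea can be sketched like this: a slow sine oscillator sweeps the sound across the stereo field, with the top-right pixel values (assumed here to be normalized to 0..1) setting the oscillator's amplitude (sweep width) and phase offset. The equal-power gain curve is a common panning technique I am using for illustration, not necessarily what the patch does internally.

```python
import math

def pan_position(t, depth, phase, rate_hz=0.25):
    """Pan position in [-1, 1] at time t (seconds). depth and phase
    would come from the tracked top-right color values."""
    return depth * math.sin(2 * math.pi * rate_hz * t + phase)

def equal_power_gains(pan):
    """Map pan in [-1, 1] to (left, right) gains with constant power."""
    angle = (pan + 1) * math.pi / 4  # 0 .. pi/2
    return math.cos(angle), math.sin(angle)

# At t = 0 with zero phase the sound sits in the center,
# giving equal gains of about 0.707 in each channel.
left, right = equal_power_gains(pan_position(0.0, depth=0.8, phase=0.0))
print(round(left, 3), round(right, 3))  # 0.707 0.707
```

Because the pan position is a function of time, the sound keeps circling through the soundscape even though the source image never changes.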
For the last example I didn't use a drawing, to show that since it's a color tracker it works with both still images and moving pictures. For this example, I used stock nature sounds as my sample, and a picture of my cat and a Kirby plush that I moved around as my controls.
https://drive.google.com/drive/folders/1HrYdWD5CCZ6iBj7FTXnzaYILSPrvPsi2?usp=sharing