For our final project, Ken and I originally wanted to control a video game with a Kinect sensor. But we ran into problems getting it to work with emulators, so we decided to keep the Kinect and make a human theremin with it!
My contribution was getting the data, while Ken was in charge of the sound synthesis.
My first step was to use the dp.kinect2 external, which pulls all kinds of data from the Kinect into a Max patch.
We made sure that we could get the skeleton data for whoever is in front of the Kinect, and I then picked out the separate data for the head, feet, and hands. (The data itself is each body part’s position on XYZ axes.) I then sent the data over to Ken’s computer, and we used that to make the rest of the project!
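To give a rough idea of the theremin part, here’s a hedged Python sketch (not our actual Max patch; the joint values and the frequency range are stand-ins I picked for illustration) of how a hand’s height can be mapped to pitch:

```python
# A rough sketch (not our actual Max patch) of a theremin-style mapping,
# assuming a normalized hand height in the range -1..1 from the sensor.

def hand_to_pitch(hand_y, low_hz=110.0, high_hz=880.0):
    """Map the hand's vertical position to a frequency, like a theremin."""
    t = (hand_y + 1.0) / 2.0                   # normalize -1..1 -> 0..1
    t = min(max(t, 0.0), 1.0)                  # clamp in case of sensor noise
    return low_hz * (high_hz / low_hz) ** t    # exponential mapping sounds more musical

# Example: a hand held at mid-height lands halfway (in pitch) between the extremes
print(hand_to_pitch(0.0))  # ~311 Hz
```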
Overall I like how it turned out, because if you’re precise you can actually make some music with our project. We didn’t have enough time to add more to it, but if we had, we probably would have done something with the video footage from the Kinect as well.
For this project, I wanted to use the game Mario Kart: Double Dash, and to create ‘music’ with it. More specifically, I wanted to use color to determine what frequencies would play.
Main patch:
This is my overall patch. I first had my computer capture only the part of my screen where the emulator would be, so I could then segment the capture into four parts with a 2×2 matrix.
The matrix takes the average color of each of the four quadrants, which is how I got the color values.
I then took a sub-patch that I helped make last year for a project in Experimental Sound Synthesis (pictured at the bottom) and used it to parse the messages and turn them into smaller numbers. I then multiplied those numbers by five so the frequencies would be easier to hear. After that, I fed the numbers into either a phasor~, rect~, or cycle~, so that you get contrasting sounds when you play the game.
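For anyone curious, here’s a small Python sketch of that color-to-frequency step. The ×5 scaling matches the patch; everything else (array shapes, the way the RGB average collapses to one number) is an assumption for illustration, not exactly what the Max objects do:

```python
import numpy as np

def quadrant_frequencies(frame: np.ndarray, scale: float = 5.0):
    """Average each quadrant's color and turn it into a frequency."""
    h, w, _ = frame.shape
    quads = [frame[:h//2, :w//2], frame[:h//2, w//2:],
             frame[h//2:, :w//2], frame[h//2:, w//2:]]
    freqs = []
    for q in quads:
        avg = q.reshape(-1, 3).mean(axis=0)  # average RGB, like a 2x2 downsampled matrix
        freqs.append(avg.mean() * scale)     # collapse to one number, scale it up
    return freqs

# Example: a fake 480x640 screen capture of random colors
print(quadrant_frequencies(np.random.randint(0, 256, (480, 640, 3))))
```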
To get four people to play, I took a few Xbox controllers and got them to work on my Mac with an extension that I got a few years ago: https://github.com/360Controller/360Controller/releases
Here’s also a recording of me trying it out: (Turn down your volume in case it’s too loud!)
Overall I’m extremely happy with how this project turned out, and I’m really excited to show it to you all tomorrow!
For this assignment I wanted to mess around more with music and how I could change the output of a song. I took the song Distant Past by my favorite band, Everything Everything, to use for the project.
First, I took the music video for the song and connected it to two submatrix windows, which turned the footage into the fragmented lines you see in the patch below:
I then made a multislider that was connected to the audio of the song. The multislider graphed the movement of the video, so I used it to EQ the song’s frequencies while both the video and the song played at the same time. I also added another window so you can watch the music video as it plays.
Here is a picture of the subpatch inside the pfft~:
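The basic idea inside the pfft~ can be sketched outside Max, too. Here’s a hedged Python version of per-band spectral EQ, where slider values scale groups of FFT bins; the block size and slider count are made up for the example, not my patch’s actual settings:

```python
import numpy as np

def spectral_eq(block: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """EQ one block of audio by scaling its FFT bins with slider gains."""
    spectrum = np.fft.rfft(block)
    # stretch the slider values across all bins
    per_bin = np.interp(np.linspace(0, 1, len(spectrum)),
                        np.linspace(0, 1, len(gains)), gains)
    return np.fft.irfft(spectrum * per_bin, n=len(block))

# Example: a 1024-sample block, 16 sliders that duck the high end
block = np.random.randn(1024)
sliders = np.linspace(1.0, 0.1, 16)
print(spectral_eq(block, sliders)[:4])
```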
Basically all that’s left is a recording of the audio and video that I took on the patch, I hope you enjoy it!
Heyo. For this assignment I decided to play around with the first few seconds of Alt-J’s song Hunger of the Pine. I used GarageBand to edit my sample clips, and I uploaded them to iTunes to use them for this project. The video link is right below.
(If you decide to watch all of it, just know that the guy gets shot with arrows at the end, in case you don’t want to see that!)
I then recorded four samples out of Max and put them on SoundCloud.
For the first sample, the IR is actually the first note of Hunger of the Pine. This creates a smoother version of the piece that blends the notes together.
The next sample used one of the balloon-pop IRs recorded in the back stairwell behind the CFA practice rooms. This one also mutes the sound a bit, but it’s blended together less than the first sample.
The third sample is a voice clip of Link from Nintendo’s Breath of the Wild. You can hear his signature ‘hyah!’ in the beginning of the clip, but it slowly blends in with the rest of the sample, and also creates a sound that reminds me of a rubber ball bouncing on the street.
The last sample is a loud cymbal crash. This sound is the least blended together, and it sounds like it’s playing from a few rooms over.
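All four samples come from the same trick: convolving the song excerpt with a different impulse response. Here’s a minimal Python sketch of that process (the toy signals below are placeholders, not my actual files):

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_with_ir(dry: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Convolution reverb: the IR's character gets stamped onto the dry sound."""
    wet = fftconvolve(dry, ir)
    return wet / np.max(np.abs(wet))  # normalize so it doesn't clip

# Example with toy signals: a click played through a decaying "room"
dry = np.zeros(100); dry[0] = 1.0
ir = np.exp(-np.linspace(0, 5, 2000))
print(convolve_with_ir(dry, ir)[:5])
```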
Overall a few of my sounds turned out to be pretty similar, but I like how most of it turned out!
For the second assignment I decided that I wanted to play around with rampsmooth~ and randomly generated tones.
And here’s the overview of my patch:
I tried to create a patch that would randomly generate tones out of the speaker. I then used tapin~ and tapout~ to create a delay on the final signal. I also multiplied the signals together, and used rampsmooth~ and kink~ to help get rid of the pops in the speaker. What ends up being made is a much smoother version of the first thing we made in class, which I think sounds pretty chill!
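For a rough sense of the signal flow, here’s a Python analogue (not the patch itself; the rates, glide speed, and delay time are guesses): random target pitches, a rampsmooth~-style glide to kill the pops, and a tapin~/tapout~-style delay:

```python
import numpy as np

sr = 44100
dur, delay_s = 2.0, 0.25
n = int(sr * dur)

# a new random target frequency every 100 ms, like a metro driving a random object
targets = np.repeat(np.random.uniform(200, 800, int(dur * 10)), sr // 10)[:n]

# rampsmooth: move toward the target by a fixed step per sample instead of jumping
freq = np.empty(n); freq[0] = targets[0]
step = 2.0  # Hz per sample; bigger = faster glide
for i in range(1, n):
    diff = targets[i] - freq[i - 1]
    freq[i] = freq[i - 1] + np.clip(diff, -step, step)

tone = np.sin(2 * np.pi * np.cumsum(freq) / sr)

# tapin~/tapout~: mix in a copy of the signal from delay_s seconds ago
d = int(sr * delay_s)
out = tone.copy()
out[d:] += 0.5 * tone[:-d]
print(out[:5])
```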
When I decided to do this project I wanted to use my printer, but I didn’t have any ink. What I ended up doing instead was scanning a sheet of music and uploading it to my computer. After that I photographed the music about 50 times, AirDropping each photo back to my computer before taking the next. Below, every fifth picture is shown to track the progress of the music changing.
I didn’t expect it to turn green as quickly as it did. Overall I’m pretty happy with how it turned out!