For this project, I continued my work on the first project and added tap-for-BPM control and pose recognition using machine learning and the Leap Motion controller.
I kept the same overall layout, with a video feed being stripped into separate RGB color planes that are then moved against each other, but instead of having a single looping video I created a playlist of videos that can be switched by making a fist. I also altered the playback speed of the video using the position of the right palm over the sensor, as sketched below.
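The palm-to-speed control is essentially a clamped linear scaling, much like Max's [scale] object. A minimal sketch of that mapping, assuming the Leap Motion reports palm height in millimeters; the height range and rate limits here are placeholder values, not the patch's actual settings:

```python
def palm_to_rate(palm_y, y_min=100.0, y_max=400.0, rate_min=0.25, rate_max=2.0):
    # Clamp the palm height to the tracked range, then scale linearly
    # into a playback-rate multiplier.
    t = min(max((palm_y - y_min) / (y_max - y_min), 0.0), 1.0)
    return rate_min + t * (rate_max - rate_min)
```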
Instead of using the problematic beat detection object from the first version, I built a simple tap-for-BPM mechanism using a timer and some zl objects.
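The logic is roughly as follows, sketched in Python: time each tap, keep a short rolling window of inter-tap intervals (the list handling that zl does in the patch), and convert the average interval to BPM. The window size of four is an arbitrary choice for illustration:

```python
import time

class TapTempo:
    def __init__(self, window=4):
        self.window = window      # how many recent intervals to average
        self.last_tap = None      # timestamp of the previous tap
        self.intervals = []       # rolling list of inter-tap intervals (seconds)

    def tap(self):
        now = time.monotonic()
        if self.last_tap is not None:
            self.intervals.append(now - self.last_tap)
            # Keep a fixed-size window, like zl stream in the patch.
            self.intervals = self.intervals[-self.window:]
        self.last_tap = now
        return self.bpm()

    def bpm(self):
        if not self.intervals:
            return None
        avg = sum(self.intervals) / len(self.intervals)  # mean seconds per beat
        return 60.0 / avg                                # convert to beats per minute
```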
If I were to continue this further, I would look into more interesting parameters to tweak, as well as ways to add more visual diversity.
For this project I tried making reactive visuals for rave music. There are two main components: a video of a 3D model dancing and an actual 3D model jumping around in space. The video is separated into R, G, and B planes, which are then moved around in sync with the music. The 3D model is distorted by jit.catch~ on the signal and is bounced around on beat with the incoming audio.
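For illustration, the plane-moving effect amounts to something like this NumPy sketch, assuming frames arrive as (H, W, 3) arrays; in the actual patch this happens in Jitter, with the offsets driven by the music:

```python
import numpy as np

def shift_planes(frame, dx_r=0, dx_g=0, dx_b=0):
    # Shift each color plane horizontally by its own offset (np.roll wraps around).
    out = np.empty_like(frame)
    out[..., 0] = np.roll(frame[..., 0], dx_r, axis=1)  # red plane
    out[..., 1] = np.roll(frame[..., 1], dx_g, axis=1)  # green plane
    out[..., 2] = np.roll(frame[..., 2], dx_b, axis=1)  # blue plane
    return out
```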
The beat/BPM detection was done by an external object that was rather inaccurate but still created some cool visual effects.
One thing I really wanted to add to this project, and that would have made it a lot more interesting, is having it fade in and out of these separate visual components based on the activity of the low end. Since rave music is largely driven by kick drums, moments in a song where the kick drum or bass is absent are generally tense and dramatic, and having the visuals correspond to those moments would have been key. I started by measuring the difference in peak amplitudes on beat from a low-pass-filtered signal, but couldn't find a meaningful delta. I then tried to simply map the amplitude of the filtered signal to the alpha channels of the layers, but the 3D model wouldn't respond to changes in alpha values.
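For reference, the fade idea boils down to something like the following sketch: low-pass the audio to isolate the kick/bass region, take a windowed peak amplitude (roughly what peakamp~ reports), and normalize it into an alpha value. The 150 Hz cutoff and window size are assumptions for illustration, not values from the patch:

```python
import numpy as np
from scipy.signal import butter, lfilter

def lowend_alpha(audio, sr=44100, cutoff=150.0, window=1024):
    # Second-order low-pass to isolate kick/bass energy.
    b, a = butter(2, cutoff / (sr / 2), btype="low")
    low = lfilter(b, a, audio)
    # Peak amplitude per window of samples.
    n = len(low) // window
    peaks = np.abs(low[: n * window]).reshape(n, window).max(axis=1)
    # Normalize to 0..1 so it can drive an alpha channel.
    return np.clip(peaks / (peaks.max() + 1e-9), 0.0, 1.0)
```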
Overall, I think I could greatly improve on this project by measuring beats/BPM more accurately and getting the triggering/fading working. Below is a low-res recording of the visuals as well as the pasted patch.
Similar to my project in Experimental Sound Synthesis, I want to work on making rave visuals. Now that I have a better understanding of Jitter and video processing, I think I can make even more interesting visual patterns. I also want to make these visuals responsive to a sound input, similar to our in-class FFT examples, instead of relying on MIDI clock data. And instead of taking videos and altering their playback like I did in the last project, for this project I want to create more generative, original visual patterns.
For this assignment I took a MIDI mapping of the Gravity Falls theme song and ran it through an electric piano instrument in Ableton to create a totally reverb-less audio track. I then fed it through the convolution reverb with an IR taken from my bedroom, an IR taken from popping a balloon in my backpack while recording from the outside, a church bell, and an accordion. I then took each of these tracks and cross-faded between them to create a piece. The IRs and individual tracks can be downloaded here.
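Convolution reverb itself comes down to convolving the dry signal with the impulse response. A minimal sketch, assuming mono WAV files with placeholder names:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

sr, dry = wavfile.read("dry_piano.wav")    # the reverb-less track
_, ir = wavfile.read("bedroom_ir.wav")     # one of the impulse responses

# Convolving the dry signal with the IR stamps the space's response onto it.
wet = fftconvolve(dry.astype(np.float64), ir.astype(np.float64))
wet /= np.abs(wet).max()                   # normalize to avoid clipping
wavfile.write("wet_piano.wav", sr, (wet * 32767).astype(np.int16))
```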
For this assignment I took the patch from this video and made some changes so that each frame it saves to the buffer contains only R, G, or B values. The result is an effect that looks less like pixelated noise and more like multicolored noise.
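The change amounts to zeroing out two of the three color planes on each frame before it goes into the buffer, cycling through which one survives. A rough NumPy sketch, assuming (H, W, 3) frames:

```python
import numpy as np

def isolate_channel(frame, frame_index):
    keep = frame_index % 3                 # cycle R -> G -> B across frames
    out = np.zeros_like(frame)
    out[..., keep] = frame[..., keep]      # keep one plane, zero the other two
    return out
```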
The first idea I had for this assignment was to feed my name into the Wu-Tang Clan Name Generator and then feed the output back into itself over and over. I did this five times and got the following:
I didn't really get any meaningful insight into the system, so I decided to do something else.
Similar to I Am Sitting in a Room, I recorded myself briefly speaking, then fed it through Max for Live's convolution reverb (set with the impulse response of a PVC pipe) and repeated the process to yield the following:
It sounded harsher overall compared to the original I Am Sitting in a Room, which could be due to the resonant frequencies of a PVC pipe. There was also a certain warmth in the original that likely stemmed from some form of tape distortion.
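The process behind this is a simple feedback loop: each pass convolves the previous output with the same IR, so the pipe's resonances compound. A hedged sketch, with placeholder filenames and an arbitrary number of passes:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

sr, voice = wavfile.read("speech.wav")
_, ir = wavfile.read("pvc_pipe_ir.wav")
signal = voice.astype(np.float64)
ir = ir.astype(np.float64)

for i in range(8):                          # number of repetitions is arbitrary here
    signal = fftconvolve(signal, ir)        # re-convolve the previous output
    signal /= np.abs(signal).max()          # renormalize each pass
    wavfile.write(f"pass_{i + 1}.wav", sr, (signal * 32767).astype(np.int16))
```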