In its final form, the patch controls reverb, a multi-voiced vocoder, and an LFO that controls both the amount of pitch bend applied to the incoming audio and how fast it shifts. The patch is slightly finicky in that you need to activate the LEAP data in all the sub-patches before any audio will come through. Parts of the sub-patches are patches I found online or in tutorials, but all the LEAP data tracking is my own. I also decided to use Luis Fonsi’s “Despacito,” since we used that track in class several times.
When first opened, the patch presents two input options, a hand-tracking toggle, and access to the sub-patches that contain the individual effects. The mp3 input is by far the easier to work with; the adc~ input works, but not as effectively as I would have liked.
Once you decide the input and hit the desired effect’s number key, you can open up the corresponding sub-patch.
The reverb patch is fairly straightforward. It uses the reverb patch found in Max > Help > Examples. The hand gestures directly affect the basic parameters of a reverb effect. The most noticeable are Decay and Size, while Diffusion and High-Frequency Cutoff are more subtle.
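The gesture-to-parameter idea can be sketched in plain Python. The coordinate ranges, axis assignments, and parameter ranges below are my illustrative assumptions, not the patch’s exact values:

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from [in_lo, in_hi] into [out_lo, out_hi], clamped."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

def hand_to_reverb(x_mm, y_mm):
    # Hypothetical mapping: hand height drives decay, lateral position drives size.
    return {
        "decay": scale(y_mm, 100, 400, 0.1, 10.0),  # seconds
        "size":  scale(x_mm, -200, 200, 0.0, 1.0),  # normalized room size
    }
```

In the actual patch this scaling happens with Max objects rather than code, but the principle is the same: clamp the raw LEAP coordinate and rescale it into each parameter’s useful range.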
The vocoder is named after GIR from Invader Zim. The dry/wet part of the patch is from https://www.youtube.com/watch?v=mi9CjZxk8zs, and the vocoder effect is from https://www.youtube.com/watch?v=4feOFLX6238. I used these because both were easily controlled by the LEAP data, while other pre-made versions required the mouse to control the various parameters.
Last is the LFO device. The base effect is loosely based on this video: https://www.youtube.com/watch?v=uyzY_ZP54pA. However, I altered it to get a different effect. I was trying to replicate an effect I had heard on a soundtrack, but I ended up with more of a whammy-bar/warbly effect.
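A whammy-bar/warble effect like this is commonly built as a delay line whose length an LFO sweeps, which bends the pitch up and down as the read position moves. Assuming that structure (the rate and depth values here are placeholders, not the patch’s settings), a minimal sketch:

```python
import math

def vibrato(samples, sr=44100, rate_hz=5.0, depth_samples=30):
    """Pitch-warble by reading the signal through a delay whose length is
    swept by a sine LFO, with linear interpolation between samples."""
    out = []
    for n in range(len(samples)):
        # LFO sweeps the delay between 0 and depth_samples.
        delay = depth_samples * (1 + math.sin(2 * math.pi * rate_hz * n / sr)) / 2
        pos = n - delay
        i = int(math.floor(pos))
        frac = pos - i
        if i < 0:
            out.append(0.0)          # before the start of the buffer
        else:
            a = samples[i]
            b = samples[i + 1] if i + 1 < len(samples) else a
            out.append(a + frac * (b - a))   # linear interpolation
    return out
```

In the patch, the LEAP data stands in for the fixed `rate_hz`/`depth_samples` values, so the hand controls how far and how fast the pitch bends.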
Overall, I am very happy with this. It is very fun to use and can easily be modified to control different effects or to add new ones. I plan to keep developing it because I think it could eventually turn into a very useful tool, and this is a good point to be at for a first iteration.
https://drive.google.com/open?id=1xMT-od471FLCn6pOWAYY9XWCGeTdMlIO
The patch works by tracking the x-y-z data of the fingertip: X data controls pitch, Y data acts as an on/off switch, and Z data controls volume. I designed it to behave as closely to a piano keyboard as possible, which is why the Y data controls the on/off of the sound; lowering a finger simulates pressing down a key.
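The piano-keyboard mapping described above can be sketched like this. The thresholds, coordinate ranges, and one-octave quantization are my illustrative assumptions; the patch’s actual scaling lives in Max objects:

```python
def finger_to_note(x_mm, y_mm, z_mm, press_threshold_mm=150):
    """Map a LEAP fingertip position to a piano-like note event.
    X selects the pitch, Y (height) gates note-on like a key press,
    Z (depth) sets the volume."""
    note_on = y_mm < press_threshold_mm          # lowering the finger "presses the key"
    # Quantize x (-200..200 mm) onto one octave of MIDI notes starting at C4 (60).
    step = max(0, min(11, int((x_mm + 200) / 400 * 12)))
    pitch = 60 + step
    # Nearer to the sensor (smaller z) = louder, clamped to MIDI 0..127.
    velocity = max(0, min(127, int(127 * (1 - z_mm / 300))))
    return note_on, pitch, velocity
```

The key design point survives the simplification: pitch and volume vary continuously with X and Z, but the note only sounds while Y crosses the “key press” threshold.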
https://drive.google.com/drive/folders/1gxu0LY6NpyEr03aOj5RZeaOhtDbv_HCa?usp=sharing
The combination of objects I used to convert the frequency into integers was found on the Cycling ’74 forums; I will attach a link to the thread. The exact post is near the bottom and was made by Jean-Francois Charles. I only used part of the patch, and I colored the parts I used blue. The audio I used to generate everything is the “Anton” file that can be found in the Max audio tab.
https://cycling74.com/forums/converting-real-time-audio-into-frequency
https://drive.google.com/drive/folders/1GlmiPqBMVEvIyXNLXFlqrgDLZtQ5MVez?usp=sharing
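I can’t speak for every object in the forum patch, but the underlying math for turning a detected frequency into a note integer is the standard frequency-to-MIDI formula (what Max’s `ftom` object computes):

```python
import math

def freq_to_midi(freq_hz):
    """Round a frequency in Hz to the nearest MIDI note number.
    A440 is MIDI note 69, and each doubling of frequency adds 12 semitones."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))
```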
For my two regular IRs, I went to North Park in Allison Park, to an old well called the Fountain of Youth that has a small domed entryway before you go further inside to the well. I recorded my first IR in that entryway. For the second, I wanted something completely different from the Fountain IR, so I recorded it in the CFA stairwell.
For my first experimental convolution, I recorded Lucier’s “I Am Sitting in a Room” being played in the Fountain of Youth entryway. I then took the last cycle of his recording (around the 45-minute mark) and recorded that, ran it through Audacity to normalize it, and plugged it into the Max patch.
My second experimental convolution was made from the A-Game Synth audio loop in Logic Pro X. I took the loop, cut out about 20 different sections of the audio, then stitched them back together as similarly sized regions to create a new waveform. I exported it from Logic, loaded it into Audacity, and ran the new audio file through Paulstretch, keeping the stretch factor at 10 seconds but reducing the time resolution to 0.05 seconds. Finally, I added a fade-out to better simulate this “room”.
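Conceptually, applying any of these IRs to a dry signal is just convolution: every input sample triggers a scaled copy of the impulse response, and the copies sum. A direct (slow but transparent) sketch of the operation the Max convolution patch performs:

```python
def convolve(signal, ir):
    """Direct-form convolution of a dry signal with an impulse response.
    Each input sample adds a scaled, time-shifted copy of the IR to the output."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for n, s in enumerate(signal):
        for k, h in enumerate(ir):
            out[n + k] += s * h
    return out
```

Real convolution reverbs do this with FFTs for speed, but the result is the same: the output “rings” with whatever space (or fabricated “room”) the IR captured.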
I uploaded all the IRs and all the convolutions as two separate files to Soundcloud. On both, the order of playback is: CFA Stairwell, Fountain of Youth, A-Game Synth, and Lucier. All the separate files, as well as the original Lucier file are in the shared Google Drive folder.
https://drive.google.com/open?id=1ksIiY-WbGaDIanr9L9jDofBgw-HUMIj4
To use it, make sure all the mp3 files are on the “loop” setting, then toggle them all on. The integer boxes affect the delay length and the amount of pitch shift. Each of the lower patches affects only the delay length. Several message boxes are attached to the tapin~ inlet: “freeze 1” stops all incoming data and keeps only the data that has already gone through, while “freeze 0” resumes the input as it was. The “clear” button wipes any saved data that has already gone through, essentially resetting that particular audio route to a just-started state. The additional delay patches affect pairs of the larger patch sections, pairing 1/2, 1/3, and 2/3 from left to right; adjusting the delay length here affects both connected signals in the same way. This allows three separate delay lengths for any given signal and can create rather complex layers of delay, while the “freeze” and “clear” buttons allow for some creative expression and more control over the output.
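The freeze/clear behavior described above can be sketched as a tiny software delay line. This is my simplification of what tapin~/tapout~ plus the message boxes do, not the patch itself:

```python
class DelayLine:
    """Minimal circular-buffer delay with 'freeze' and 'clear' messages:
    freeze stops writing new input but keeps playing back what is already
    in the buffer; clear wipes the buffer back to a just-started state."""

    def __init__(self, delay_samples):
        self.buf = [0.0] * delay_samples
        self.write = 0
        self.frozen = False

    def freeze(self, on):              # "freeze 1" / "freeze 0"
        self.frozen = bool(on)

    def clear(self):                   # "clear"
        self.buf = [0.0] * len(self.buf)

    def tick(self, sample):
        out = self.buf[self.write]     # read the oldest sample
        if not self.frozen:
            self.buf[self.write] = sample   # overwrite only when not frozen
        self.write = (self.write + 1) % len(self.buf)
        return out
```

While frozen, the buffer keeps cycling its stored audio as a loop, which is what makes the freeze button useful for performance.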
https://drive.google.com/open?id=1U6VTqGjUWpQrW5suiFGNvUgMeyAtKqBz
https://drive.google.com/open?id=17iIhon3EUlHm9qhTsiu0gADy7dm0e6-t