The SPE consists of three parts. The first is the subtractive synth from my last project, with some quality-of-life and functionality improvements; this serves as the lead of the SPE.
The second is a set of four probabilistic sequencers. These let the SPE play four separate samples, each with a probability specified for every sixteenth note of a four-beat measure; this serves as the rhythm of the SPE.
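Conceptually, each sequencer just rolls a die on every step. Here is a minimal Python sketch of that logic; the sample names and probability values are made up for illustration, and the real patch is built from Max objects rather than code:

```python
import random

# One probability per sixteenth note in a four-beat measure (16 steps).
# These values are illustrative, not taken from the actual patch.
patterns = {
    "kick":  [0.9, 0.0, 0.1, 0.0, 0.8, 0.0, 0.2, 0.0,
              0.9, 0.0, 0.1, 0.0, 0.7, 0.0, 0.3, 0.1],
    "snare": [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.2,
              0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.1, 0.3],
    "hat":   [0.6] * 16,
    "clap":  [0.0, 0.1, 0.0, 0.1, 0.0, 0.1, 0.0, 0.5,
              0.0, 0.1, 0.0, 0.1, 0.0, 0.1, 0.0, 0.5],
}

def step(step_index):
    """Return the samples that should fire on this sixteenth note."""
    return [name for name, probs in patterns.items()
            if random.random() < probs[step_index % 16]]

# Walk through one measure and print which samples trigger on each step.
for i in range(16):
    print(i, step(i))
```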
Finally, the third part is an automated bass line. It plays a sample at a regular (user-defined) interval, and it also detects the key the lead is playing in and pitch-shifts the sample to match.
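The key tracking in the patch relies on the Modal Analysis library credited below; the sketch here is only a rough stand-in for the idea, with my own (illustrative, not the library's) root-guessing heuristic and a naive resampling transposition:

```python
import numpy as np

def guess_root(recent_midi_notes):
    """Pick the most common pitch class among recently played notes."""
    counts = np.bincount([n % 12 for n in recent_midi_notes], minlength=12)
    return int(np.argmax(counts))  # 0 = C, 1 = C#, ...

def transpose(sample, semitones):
    """Naive transposition by resampling (changes duration as well as pitch)."""
    ratio = 2 ** (semitones / 12)
    idx = np.arange(0, len(sample), ratio)
    return np.interp(idx, np.arange(len(sample)), sample)

# Example: bass sample recorded in C, lead has mostly been playing around A.
bass_in_c = np.sin(2 * np.pi * 65.41 * np.arange(44100) / 44100)  # C2 sine stand-in
root = guess_root([69, 69, 71, 72, 76])    # A wins the pitch-class vote
shifted = transpose(bass_in_c, root)       # shift up from C to the detected root
```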
The SPE also includes equalization for the bass and drums (jointly), as well as for the lead. In addition, many controls can be adjusted via the knobs on a MIDI keyboard. A demonstration of the SPE is below.
The code for the main section of the patch can be found here. Its pfft~ subpatch is here.
The embedded sequencer can be found here.
The embedded synth can be found here. Its poly~ subpatch is here.
Thanks to V.J. Manzo for the Modal Analysis library, An0va for the bass guitar samples, Roland for making the 808 (and whoever first extracted the samples I downloaded), and Jesse for his help with the probabilistic sequencers.
Here’s a video of me noodling around with it:
(sorry about the audio in some places; that's my capture, not the patch itself)
The main patch can be found here.
The patch used inside poly~ can be found here.
In this project, I connect a multislider object to a pfft~ instance to allow real-time volume adjustment across the frequency spectrum. The multislider updates a matrix, which is read via the jit.peek object inside the pfft~ subpatch. The subpatch looks up the value of this matrix for the current bucket and scales the amplitude accordingly. The resulting amplitude is written into a separate matrix, which is in turn read by the main patch to draw a visualization of the amplitude across the frequency spectrum.
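An offline Python equivalent of this signal path, using an STFT in place of pfft~ and placeholder FFT sizes, might look like this:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_eq(audio, gains, sr=44100, nfft=1024):
    """Offline sketch of the pfft~ idea: scale each FFT bin ("bucket")
    by a user-controlled gain, then resynthesize."""
    f, t, spec = stft(audio, fs=sr, nperseg=nfft)
    # One gain per bin; in the patch these come from the multislider matrix.
    spec *= gains[:, None]
    _, out = istft(spec, fs=sr, nperseg=nfft)
    return out

# Example: 513 bins for a 1024-point FFT, gently roll off the top half.
gains = np.ones(513)
gains[256:] = np.linspace(1.0, 0.2, 513 - 256)
noise = np.random.randn(44100)
filtered = spectral_eq(noise, gains)
```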
At first, the multislider had as many sliders as there were buckets. However, this was too cumbersome to manipulate easily, so I reduced the number of sliders and had each slider control the equalization for multiple buckets. At first I divided the buckets equally among the sliders, but this led to the first few sliders controlling the majority of the sound and the last few controlling almost none. This stems from the fact that humans are better at distinguishing low frequencies from one another. Approximating the psychoacoustic curve from class as a logarithmic curve, I assigned sliders to buckets based on the logarithm of the bucket index. After doing this, I was happy with the portion of the frequency spectrum controlled by each slider.
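The bucket-to-slider assignment works out to something like the following sketch; the slider and bucket counts here are placeholders, not the patch's actual values:

```python
import numpy as np

N_BUCKETS = 512   # FFT buckets (placeholder count)
N_SLIDERS = 16    # multislider size (placeholder count)

def slider_for_bucket(bucket):
    """Map an FFT bucket to a slider index on a log scale, so the low
    (perceptually busier) end of the spectrum gets more sliders."""
    # +1 avoids log(0); normalizing by log(N_BUCKETS) sends the last
    # bucket to the last slider.
    frac = np.log(bucket + 1) / np.log(N_BUCKETS)
    return min(int(frac * N_SLIDERS), N_SLIDERS - 1)

# Compare against an equal split: the log mapping spends most sliders
# on the low buckets, matching how we hear.
for b in [0, 1, 4, 16, 64, 256, 511]:
    print(b, slider_for_bucket(b), b * N_SLIDERS // N_BUCKETS)
```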
(Also interesting: to visualize the spectrum, I plotted the logarithm of the absolute value of the amplitude. The points you see in the graph are mostly negative, which means the original amplitudes were less than one. I took the logarithm to tame the peaks; the lowest frequencies always registered as far louder than the rest and tended to wreck the visualization.)
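In code terms, the plotted value for each bucket is roughly:

```python
import numpy as np

def display_value(amplitude, floor=1e-9):
    """Log of the absolute amplitude; negative whenever |amplitude| < 1,
    which is why most points in the graph sit below zero."""
    return np.log(max(abs(amplitude), floor))

print(display_value(0.05))   # quiet bucket -> about -3.0
print(display_value(8.0))    # loud low-frequency bucket -> about 2.1
```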
This is the code for the subpatch running inside pfft~.
And this is the code for the main patch.
The original:
My first two impulse responses were created by popping a balloon on the other side of a door from the recorder, and by recording the pop through Snapchat and then playing it back.
Here’s the IR and the song from the other side of a door:
And here’s the IR and the song through Snapchat:
Next, I used as my IR the sound of me knocking on my desk with the recorder pressed against it. There was a plate on the desk, and a fork rattling against the plate added a pitched ring.
Here are the IR and the song convolved with this IR:
I was a bit disappointed that it sounded similar to the first two, but it is cool to note that the pitch from the fork and plate causes a resonance in the song.
Finally, I recorded a short clip of myself eating yogurt and used that as the IR. I’d like to thank my roommate for donating his yogurt for the sake of art. Here’s that IR and the resulting song:
Sorry that this IR is so gross, but the separate spikes in the yogurt IR do create a cool preverb effect in the song.
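For reference, each of these experiments is plain convolution of the dry song with the recorded IR. An offline Python sketch of the process would look something like this (the file names are placeholders, not the actual recordings):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

def to_mono(x):
    """Convert integer samples to float and mix stereo down to one channel."""
    x = x.astype(np.float64)
    return x.mean(axis=1) if x.ndim > 1 else x

# Placeholder file names.
sr, song = wavfile.read("original_song.wav")
_, ir = wavfile.read("balloon_behind_door_ir.wav")
song, ir = to_mono(song), to_mono(ir)

# Convolve the dry song with the impulse response, then normalize
# so the result doesn't clip when written back to disk.
wet = fftconvolve(song, ir)
wet /= np.max(np.abs(wet))
wavfile.write("convolved_song.wav", sr, (wet * 32767).astype(np.int16))
```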
For my second attempt, I abandoned user-defined transforms and instead used Max's built-in Sobel edge-detection kernel as the transform. However, applying the convolution to the feedback signal meant the edge detection was repeatedly run on its own output, causing the values in the video to explode. I solved this by applying the edge detection to the camera input instead, and then adding the original camera footage back in before the final output. (It arguably looks cooler without the original image added, depending on the light, so I included both outputs in the final patch.)
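The fix boils down to where the convolution sits in the chain. Here is a rough single-frame NumPy sketch of the same idea; the patch itself uses Max's built-in Sobel kernel on live camera input, not Python:

```python
import numpy as np
from scipy.signal import convolve2d

# Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def edges(frame):
    """Sobel edge magnitude of a grayscale frame (values clipped to 0..1)."""
    gx = convolve2d(frame, SOBEL_X, mode="same", boundary="symm")
    gy = convolve2d(frame, SOBEL_Y, mode="same", boundary="symm")
    return np.clip(np.hypot(gx, gy), 0.0, 1.0)

def process(camera_frame, add_original=True):
    """Run edge detection on the *input* frame (not on fed-back output,
    which is what made the values blow up), optionally mixing the
    original footage back in before output."""
    out = edges(camera_frame)
    if add_original:
        out = np.clip(out + camera_frame, 0.0, 1.0)
    return out

# Example with a random stand-in for a camera frame.
frame = np.random.rand(240, 320)
with_original = process(frame, add_original=True)
edges_only = process(frame, add_original=False)
```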
The above video shows the result of a beautifying filter from a photo-retouching app (Meitu) applied to the same picture (of myself) about 120 times. The filter was the one the app calls 'natural', set to be as subtle as possible. (A slider in the app controls how drastic the changes are.)
Following the themes presented in the in-class examples, this project was meant to surface the artifacts and assumptions embedded in the process itself: in this case, what does the app consider beautiful? The final result shows those artifacts extracted and pushed to their extremes.
The music on the video doesn't have anything to do with feedback loops; I just thought having it in the background was funny.