The basic premise of the patch is to capture and play back gestural input. Max outputs these gestures as MIDI CC data, which are then converted to CV signals that control a hardware synthesizer.
Three Max objects play a major role in this patch:
The core engine of this patch can be extended or augmented to support many types of gestural input. In this implementation, the graphical interface consists of eight faders whose values can be set manually or automated.
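As a rough sketch of the record/playback idea (a Python illustration, not the Max implementation; the `GestureRecorder` class and its timing scheme are assumptions made for the example), a gesture can be stored as a list of timestamped controller values and replayed later:

```python
import time

class GestureRecorder:
    """Record and replay timestamped controller values (0-127, like MIDI CC)."""

    def __init__(self):
        self.events = []          # list of (elapsed_seconds, value) pairs
        self._start = None

    def record(self, value):
        """Store a controller value with the time elapsed since recording began."""
        if self._start is None:
            self._start = time.monotonic()
        self.events.append((time.monotonic() - self._start, value))

    def play(self, send):
        """Replay the gesture, calling send(value) at the recorded times."""
        start = time.monotonic()
        for t, value in self.events:
            time.sleep(max(0.0, t - (time.monotonic() - start)))
            send(value)

# Example: record a short fade on one "fader", then replay it to a print output
rec = GestureRecorder()
for v in range(0, 128, 16):
    rec.record(v)
    time.sleep(0.05)
rec.play(print)
```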
The embedded video demonstrates the patch with a live performance.
Project Resources: https://drive.google.com/drive/u/0/folders/1htNu8UGfB6_NB_QtNnzEGnrOOXTfRYm2
The basic concept of spectral delay is to implement a delay line in the frequency domain, enabling independent control over delay and feedback parameters for each FFT bin. I experimented with a number of extensions to this basic concept and settled on a few that I thought produced the most musically interesting results.
In Cycling ’74’s original patch, the delay time and feedback parameters for each FFT bin are configured with multislider objects. Programming these parameters dynamically instead can add movement and texture to the overall effect. One way I extended the original patch was to allow these parameters to be generated by performing an FFT analysis on a secondary input signal and scaling the resulting spectral magnitude values into delay coefficients. When this option is engaged, the delay time for each FFT bin is updated at the rate of the FFT analysis, creating movement.
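As a minimal offline sketch of that mapping (a numpy illustration assuming one analysis frame at a time and an arbitrary maximum delay; the actual patch does this per frame inside its FFT processing), the magnitudes of the secondary signal can be normalized and scaled into per-bin delay times:

```python
import numpy as np

def magnitudes_to_delay_frames(secondary_frame, max_delay_frames=32):
    """Map one FFT frame of a secondary signal to per-bin delay times.

    secondary_frame: time-domain samples for one analysis window.
    Returns an integer delay (in FFT frames) for each bin.
    """
    window = np.hanning(len(secondary_frame))
    spectrum = np.fft.rfft(secondary_frame * window)
    mags = np.abs(spectrum)
    mags /= (mags.max() + 1e-12)            # normalize to 0..1
    return np.round(mags * max_delay_frames).astype(int)

# Example: louder bins of a noisy test frame are assigned longer delays
frame = np.random.randn(1024)
print(magnitudes_to_delay_frames(frame)[:8])
```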
In my implementation, the user can route either the primary input or an auxiliary signal to the secondary FFT analysis that generates the delay time parameters. While considering how else I could exploit the output of the secondary analysis, I experimented with substituting its phase information into the re-synthesis of the main analysis, a technique known as cross synthesis. By swapping the magnitude or phase spectrum of one sound with that of another, one can impart qualities of one sound onto another. This method produces especially interesting results when the magnitude spectrum of a percussive sound is re-synthesized with the phase spectrum of a harmonically rich tonal sound. In my implementation I also allow the user to re-synthesize the spectrally delayed signal with the original signal’s phase information, which can produce interesting timbral results.
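A single-frame sketch of the cross-synthesis idea (again a numpy illustration under the assumption of one windowed frame, not the patch itself): take the magnitude spectrum of one sound, the phase spectrum of another, and invert the combined spectrum.

```python
import numpy as np

def cross_synthesize(mag_source, phase_source):
    """Combine the magnitude spectrum of one frame with the phase of another.

    Both inputs are equal-length time-domain frames; returns a time-domain frame.
    """
    window = np.hanning(len(mag_source))
    mags = np.abs(np.fft.rfft(mag_source * window))
    phases = np.angle(np.fft.rfft(phase_source * window))
    return np.fft.irfft(mags * np.exp(1j * phases))

# Example: magnitudes of a click combined with the phases of a sine tone
n = 1024
click = np.zeros(n)
click[0] = 1.0
tone = np.sin(2 * np.pi * 10 * np.arange(n) / n)
out = cross_synthesize(click, tone)
print(out.shape)
```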
I later reworked the UI for the patch, adding controls for the new functionality. In the end I believe I expanded the repertoire and utility of the original patch; the processing can subtly augment a signal but is easily driven to sonic extremes if that is desired.
I’ve embedded a recording of myself manipulating the patch live, feeding it a drum loop as the primary input and a synth pad as the auxiliary input. The result sounds like a field recording in an electrified rainforest.
Project Resources: https://drive.google.com/drive/u/0/folders/1bAm9KaBNgVL0uBodDaZ_xvt6eol5jGei
The patch allows audio to be harmonized with the following intervals in just intonation tuning: M/m3, P5, and M/m7. A simple UI allows the user to mix the incoming audio with its harmonization, to select the quality of the 3rd and 7th intervals, and to mute the 5th.
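For reference, one plausible set of just intonation ratios for those intervals (the patch’s exact ratio choices aren’t stated here, so these specific ratios are assumptions), together with their sizes in cents:

```python
import math

# Assumed just intonation ratios; the patch's exact choices may differ
ratios = {
    "minor 3rd": 6 / 5,
    "major 3rd": 5 / 4,
    "perfect 5th": 3 / 2,
    "minor 7th": 9 / 5,    # other common choices: 16/9 or 7/4
    "major 7th": 15 / 8,
}

for name, r in ratios.items():
    cents = 1200 * math.log2(r)             # interval size in cents
    print(f"{name:12s} ratio {r:.4f} = {cents:7.2f} cents")
```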
The embedded video demonstrates real-time manipulation of the patch processing a monophonic recording of a cello.
The code for this patch can be found at the following link: https://drive.google.com/drive/u/0/folders/1O0wm-jHsYGvnDhIR8q4OJUjityO-iqMK
I chose my trusty Amen break sample to serve as convolution guinea pig for this assignment.
I went hunting for acoustical spaces of note around my neighborhood. The first interesting IR I captured was in a hallway in an apartment building. There was a nice twangy high-frequency resonance to this hallway, evident in the tail of the captured IR.
The next IR I captured was in the basement of a building. The basement in question was solid concrete, and I was expecting a decent IR from the space, but the positioning of the microphone turned out to dampen the perceived spaciousness, and alas I was out of balloons …
Next I turned to Audacity in an attempt to mangle my captured IRs into something more experimental. I stumbled upon the idea of applying an envelope filter effect to the hallway IR; the resulting convolution was satisfyingly “funky”.
An hour of experimentation later, I was at a loss for what else to convolve my lovely drum break with. Then it hit me … throw a test signal at it! I generated a logarithmic sine sweep between 500 Hz and 20 kHz, and sure enough, the resulting convolution was weird.
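For completeness, here is a hedged sketch of that workflow in scipy/numpy (the filenames are placeholders and the tools I actually used for the convolution aren’t shown here): convolve the dry break with a captured IR, and generate a logarithmic sweep to use as an alternative “IR”.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve, chirp

# Placeholder filenames; assumes mono 16-bit WAV files
sr, dry = wavfile.read("amen_break.wav")
_, ir = wavfile.read("hallway_ir.wav")
dry = dry.astype(float)
ir = ir.astype(float)

wet = fftconvolve(dry, ir)                      # convolve the break with the IR
wet /= np.max(np.abs(wet))                      # normalize to avoid clipping
wavfile.write("amen_hallway.wav", sr, (wet * 32767).astype(np.int16))

# A logarithmic sine sweep (500 Hz to 20 kHz) as an alternative "impulse response"
t = np.linspace(0, 2.0, int(sr * 2.0), endpoint=False)
sweep = chirp(t, f0=500, t1=2.0, f1=20000, method="logarithmic")
```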
All audio files: https://drive.google.com/open?id=1DACjS4VHza50fQDN9yKv5htPZ3NMyx-7
Mangled sound:
Code:
https://drive.google.com/file/d/1yr-0V6TYl60hNG7rHSMOf_A63oJH6-EI/view?usp=sharing
I then resampled the result of this exercise and executed the same steps again: dividing the sample into slices of equal length, randomly distributing the slices among evenly spaced musical time divisions, resampling the result, and so on.
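A compact sketch of one such iteration (a Python illustration; the slice count of 16 and the use of a plain shuffle to assign slices to the time grid are assumptions about the process described above):

```python
import numpy as np
import random

def chop_and_shuffle(samples, num_slices=16):
    """Cut a sample into equal-length slices and redistribute them randomly
    among evenly spaced time divisions of the same total length."""
    slice_len = len(samples) // num_slices
    slices = [samples[i * slice_len:(i + 1) * slice_len] for i in range(num_slices)]
    random.shuffle(slices)                      # random assignment to the grid
    return np.concatenate(slices)

# Five "iterations": shuffle, feed the result back in, and repeat
audio = np.random.randn(16 * 1024)              # stand-in for the drum break
for _ in range(5):
    audio = chop_and_shuffle(audio)
```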
I stopped at iteration 5 because … the process was time-consuming. At any rate, without human intervention at each step, the results grew progressively more repetitive and less musically satisfying. A composite .WAV file detailing the evolution of the “chopped” break is linked: