I’ve used this assignment as an opportunity to continue my explorations of the writings of Virginia Woolf as transformed by digital media, as well as to better understand how to use a spectral system to create audio-responsive visuals.
I began by reviewing Jesse’s original shapes/ducks/text patch and reconstructing the components of the PFFT~ one by one in order to understand how they work together to write spectral data into a matrix. I then created a system outside the PFFT~ subpatch which randomly pulls a series of lines from Woolf’s novels and renders them as 3D text in a jit.gl sequence.
The only extant recording of Woolf, in which she speaks about the identities of words, activates the PFFT~ process, and the resulting signals control the scale of the text. The movement of the text uses Jesse’s original Positions subpatch, filtered through a new matrix that controls the number of lines which appear at any given time.
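As a rough sketch of that signal-to-scale mapping outside of Max: one audio frame’s spectral energy can drive a scale value. The function name, the windowing, and all the numbers below are placeholders, not the actual pfft~ logic in the patch.

```python
import numpy as np

def frame_to_scale(frame, min_scale=0.5, max_scale=3.0):
    """Map one audio frame's spectral energy to a text-scale value.

    A hypothetical stand-in for the patch's matrix-based scaling:
    louder spectra push the scale toward max_scale.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    energy = min(1.0, float(spectrum.mean()))  # crude 0..1 loudness proxy
    return min_scale + energy * (max_scale - min_scale)

# Silence leaves the text at its minimum scale; a tone scales it up.
silence = np.zeros(1024)
tone = np.sin(2 * np.pi * 440 * np.arange(1024) / 44100.0)
```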
At the top of her recording, Woolf says, “Words…are full of echoes, memories, associations…” and I aimed to create a visual experience which reflects this statement as an interplay between her own spoken and written words.
I spent some time altering various parameters (the size of the matrices, the size of the text, the amount of time allotted to the trail, swapping the placement and scaling matrices, etc.) in order to achieve different effects. Some examples of those experiments are below.
For Assignment 4, I took the piece “Pa Pa Papageno” from the opera The Magic Flute and separated the frequencies using a PFFT~ so that all frequencies within the human vocal range were allocated to one matrix and all other frequencies were allocated to another. I took these two matrices and used an altered version of the patch from class to create two groups of shapes: green polyhedrons and red cubes. The red cubes fluctuate in size with the orchestra, and the green polyhedrons fluctuate with the opera singers.
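Outside of Max, the same vocal/non-vocal split can be sketched by masking FFT bins. The 80–1100 Hz band below is my guess at a vocal range, not the cutoffs actually used in the patch:

```python
import numpy as np

def split_vocal_band(signal, sr=44100, lo=80.0, hi=1100.0):
    """Split a signal into a rough 'vocal range' band and everything
    else by zeroing FFT bins -- a stand-in for the pfft~ routing."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    in_band = (freqs >= lo) & (freqs <= hi)
    vocal = np.fft.irfft(np.where(in_band, spectrum, 0.0), n=len(signal))
    rest = np.fft.irfft(np.where(in_band, 0.0, spectrum), n=len(signal))
    return vocal, rest

# A 220 Hz "voice" and a 3 kHz partial separate cleanly,
# and the two bands sum back to the original mix.
sr = 44100
t = np.arange(4410) / sr
mix = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
vocal, rest = split_vocal_band(mix, sr)
```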
I managed to smash a few piano keys to put together this piano improvisation. The original piano sound was very boring and uninteresting, so I decided to throw my piano into a metal box. Here’s what it sounds like… if I’m playing the piano in a metal box!
The metal box is not interesting enough, because I want the whole building to hear me. So I played in the stairwell.
Hm… the elevator started going up and down…
Actually, I want to play the piano in a flushing toilet, because it should sound even greater. I love toilets!
Piano is too boring. I want a VIBRAPHONE instead!!!!!!!!
The vibraphone sounds GREAT!!!!!! I ended up flanging the sound around using an LFO, and made this trippy wobbly vibraphone version of my piano improvisation. Enjoy!!!!!!!
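That LFO flanging boils down to a delay line whose length sweeps slowly up and down, blended with the dry signal. Here is a minimal sketch, with made-up rate and depth values rather than the settings actually used on the track:

```python
import numpy as np

def flange(signal, sr=44100, rate_hz=0.5, depth_ms=3.0, mix=0.5):
    """LFO flanger sketch: read the signal back through a delay whose
    length sweeps sinusoidally, then blend with the dry signal.
    Indices before the start are clamped to the first sample."""
    n = np.arange(len(signal))
    depth = depth_ms * 1e-3 * sr                       # max delay, in samples
    delay = depth * 0.5 * (1 + np.sin(2 * np.pi * rate_hz * n / sr))
    idx = n - delay                                    # fractional read position
    lo = np.clip(np.floor(idx).astype(int), 0, len(signal) - 1)
    hi = np.clip(lo + 1, 0, len(signal) - 1)
    frac = idx - np.floor(idx)
    delayed = (1 - frac) * signal[lo] + frac * signal[hi]
    return (1 - mix) * signal + mix * delayed

x = np.sin(2 * np.pi * 330 * np.arange(44100) / 44100.0)
wobbly = flange(x)
```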
Metal Box IR was created by putting the mic in a metal box and hitting the box with a drumstick very hard.
Stairwell IR was created by dropping the extraordinarily heavy computer music tutorial book down a whole floor.
ElevatorStairwell IR was created by dropping the extraordinarily heavy computer music tutorial book down a whole floor while the elevator right next to the stairwell was moving.
Toilet Flush IR was created by flushing a toilet.
Vibraphone IR was a recording of me playing the vibraphone.
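All of the versions above come from the same operation: convolving the dry recording with one of these IRs. A minimal FFT-based sketch of that convolution (the function name is mine, not from the patch):

```python
import numpy as np

def convolve_ir(dry, ir):
    """Convolve a dry signal with an impulse response via the FFT.

    The output is len(dry) + len(ir) - 1 samples long: the IR's tail
    (the room's ring-out) extends past the end of the dry signal.
    """
    n = len(dry) + len(ir) - 1
    return np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)

# A one-sample unit-impulse "IR" is a perfectly dead room:
# the signal comes back unchanged.
dry = np.array([0.5, -0.25, 0.125])
wet = convolve_ir(dry, np.array([1.0]))
```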
Gotta love those zany ladies from Miami! They’re fun, they’re flirty, you might even call them golden.
But what if those same Golden Girls were right in your own backyard – in Baker Hall!
(This was made by recording myself throwing a large book at the ground at the end of Baker Hall as my impulse – see below. Gotta love literature!)
Think halls are boring? What if those same ladies were in a stairwell!
(This was made in a similar fashion to the first, but the recorder was at the top of a winding stairwell whereas the book throwing was at the bottom of that stairwell – see below.)
Now for the fun stuff.
Do you love Bojack Horseman? Talk about a cross-over episode!
(This was made by using the Bojack Horseman theme song as the impulse – see below. I like what the warped sounds at the beginning of the song/impulse did to the lyric “thank you, thank you, thank you…” Overall, the whole thing swells in a pretty beautiful way.)
More of a Rick and Morty fan? In honor of the season finale last night, thank you for being a… fan.
(This was made by using the Rick and Morty Theme song as the impulse – see below. The beginning is pretty soft, but the middle is more audible and it is very eerie.)
Here’s a fun lil video that convolved the golden gals’ lovely faces through the Bojack theme song video. If there is one thing to learn from this project, it’s that you can never have too much Golden Girls. Enjoy!
For my Project 1, I am thinking of creating a patch that could detect different audio features in an audio file (reverb, delay, high/low frequency content, etc.) and trigger a proportional amount of visual effects, manipulating a video input. If I have the ability, I would also like to include other features, such as generating certain patterns across the video that also depend on the audio changes.
One source of inspiration for this project is the following:
This is a lot more advanced than what I’m hoping to achieve, but definitely includes certain artistic styles that I’d like to imitate in my project as well.
I really liked the sound of this, and decided to make a track through Ableton Live.
Track explanation as follows:
Percussion (kick, clap, claves, woodenruffle, drip, underwater-twirl sound, chimes): except for the kick and chimes, all of the reverb effects were created by running the original sample through the Max patch with IR1 as the convolution impulse.
The track starts off with the original riff, adding the IR1, IR2, IR3, and IR4 riffs one at a time each loop. IR3 is pitched down two octaves to provide a bassier feel.
Once all are included, the original riff and the IR1, IR2, and IR4 riffs fade away in order, one per loop. IR3 is kept until the very end (I wanted to end the track with the distant “kalimba” voice inside the kalimba reverb).
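The two-octave drop on the IR3 riff can be approximated by simple resampling: playing a sample back at quarter speed lowers it by two octaves and stretches it to four times the length. This is only a sketch of the idea, not the actual Ableton processing:

```python
import numpy as np

def pitch_down(signal, octaves=2):
    """Crude pitch shift: resample at 1 / 2**octaves speed using
    linear interpolation. Drops the pitch by `octaves` octaves and
    stretches the duration by the same factor."""
    factor = 2 ** octaves
    idx = np.arange(0, len(signal) - 1, 1.0 / factor)  # fractional positions
    lo = idx.astype(int)
    frac = idx - lo
    return (1.0 - frac) * signal[lo] + frac * signal[lo + 1]

# Linear interpolation of a ramp reproduces the ramp at quarter steps.
ramp = np.arange(5.0)
slowed = pitch_down(ramp, octaves=2)
```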
Apart from a tiny bit of compression (Ableton’s built-in) and a limiter (George Yohng’s W1), not much other mixing was done (my apologies).
I’d like to make a playable 3x Oscillator in Max. The basic functionality will be three separate oscillators with switchable waveforms which can be (de)tuned and volume adjusted separately. On top of hooking this up to a keyboard (and maybe functionality to have it read from USB Midi input?) I could also implement a bunch of user-customizable options like hi/lo pass filters, panning, reverb, and EQ options. I could also add some visualizations of the resulting waveform.
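As a starting point, the oscillator core of that 3x Osc might look like the sketch below. The waveform set, detune amounts, and gains are placeholder choices, not a spec:

```python
import numpy as np

def osc(freq, sr, n, shape="sine"):
    """One oscillator: sine, saw, or square at the given frequency."""
    phase = (freq * np.arange(n) / sr) % 1.0
    if shape == "sine":
        return np.sin(2 * np.pi * phase)
    if shape == "saw":
        return 2.0 * phase - 1.0
    if shape == "square":
        return np.where(phase < 0.5, 1.0, -1.0)
    raise ValueError(shape)

def three_osc(freq, sr=44100, n=44100,
              shapes=("saw", "saw", "square"),
              detune=(0.0, -7.0, 7.0),   # per-oscillator detune in cents
              gains=(0.5, 0.3, 0.2)):
    """Sum of three independently shaped, detuned, gain-scaled
    oscillators -- the core of the proposed 3x Osc."""
    out = np.zeros(n)
    for shape, cents, gain in zip(shapes, detune, gains):
        out += gain * osc(freq * 2 ** (cents / 1200.0), sr, n, shape)
    return out
```

Filters, panning, reverb, and EQ would then be stages applied to `three_osc`’s output, and a MIDI note-on would just set `freq`.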
For this assignment, I made a patch utilizing a sample from the intro vamp of an old radio play narrated by Vincent Price.
The recording I provided uses all four convolutions at once. Three of them are played only once; the fourth, a simple IR from a stairwell, is offset from the others and then put through a large set of delays to generate a cascade of sound, as if it were coming from multiple sources placed close together in the same room.
The two impulse recordings which are not actual impulses were chosen by how they fit together, and were edited for length so that they could be played simultaneously and build to a wall of sound before tapering off.
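The offset-plus-delays cascade on the stairwell IR can be sketched as a multi-tap delay: the signal repeats at evenly spaced offsets with decaying gain. The tap count, spacing, and decay below are invented values, not the patch’s settings:

```python
import numpy as np

def delay_cascade(signal, sr=44100, taps=8, spacing_ms=90.0, decay=0.7):
    """Multi-tap delay: sum decaying copies of the signal at evenly
    spaced offsets, approximating a 'cascade of sound'."""
    step = int(spacing_ms * 1e-3 * sr)          # tap spacing in samples
    out = np.zeros(len(signal) + taps * step)
    for i in range(taps + 1):                   # tap 0 is the dry signal
        out[i * step : i * step + len(signal)] += (decay ** i) * signal
    return out

# Feeding in a single unit impulse shows the decaying echo train.
impulse = np.array([1.0])
echoes = delay_cascade(impulse, sr=1000, taps=3, spacing_ms=10.0, decay=0.5)
```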
The end result is below.
I think even further narrative content could be developed through carefully made audio cues. However, these may be better triggered using a launchpad rather than programming each one in, so that there is a greater element of indeterminacy.