
Ambisonic Template with Generative Features (Project 2) — Jonathan Cavell

For this project, I wanted to create a template for use in a performance setting, for an upcoming project I am developing that combines electronics and live vocals.

The patch acts as a template that loads a set of audio files into a polybuffer~ and generates an 8-channel ambisonic audio signal from the imported files. In addition, a series of parameters allows the output of the patch to be customized both beforehand and live (using a Leap Motion controller).

These parameters include the volume for each of the 8 channels, a biquad filter, an svf~ filter, and the positioning of sound sources within three-dimensional space (using both generative and manually controlled movement).
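To make the idea of positioning a source in three-dimensional space concrete, below is a minimal Python sketch of first-order ambisonic (B-format) encoding of a mono signal at a given azimuth and elevation. It only illustrates the underlying math; the patch itself uses the HOA library objects, and the function name and example values here are hypothetical.

```python
import numpy as np

def encode_first_order(mono, azimuth, elevation):
    """Encode a mono signal into first-order B-format (W, X, Y, Z).

    azimuth/elevation are in radians. This is the textbook encoding,
    not the HOA library's internal implementation.
    """
    w = mono * (1.0 / np.sqrt(2.0))                   # omnidirectional component
    x = mono * np.cos(azimuth) * np.cos(elevation)    # front/back
    y = mono * np.sin(azimuth) * np.cos(elevation)    # left/right
    z = mono * np.sin(elevation)                      # up/down
    return np.stack([w, x, y, z])

# Example: a 440 Hz tone placed 45 degrees to the left and slightly raised
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
bformat = encode_first_order(tone, np.deg2rad(45), np.deg2rad(10))
```

Moving the azimuth and elevation over time, whether generatively or from the Leap Motion data, is what animates a source's position.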

The primary benefit of this template is that it auto-generates a multichannel audio playback object and automatically connects it to the objects from the HOA library, so that the focus of any project built on it is the customization of parameters rather than building an ambisonic patch from the ground up. In its current form, the patch can generate a sound installation for instant playback from only a handful of audio files (within a particular set of bounds), with various parameters of the sound controlled live as it plays.

Given more time, I hope to revise this patch further so that it is more flexible and allows more complex ambisonic installations to be generated automatically (such as up to the 64 channels currently supported by the Higher Order Ambisonics library).

Patch Available Below (Requires Higher Order Ambisonics Library and Leap Motion Object by Jules Françoise):

Dropbox

Reactive Visuals in Max for Live (Project 1) — Jonathan Cavell

For our first longer-term project, I created a patch that produces a more stylistic visual which reacts to MIDI and audio data from an Ableton Live session. Within the patch, GL objects are set to render and erase based on MIDI information from each instrument and are manipulated by the amplitude of the audio signal that instrument generates. The end result is a set of shapes/objects, each assigned to its own instrument, which are turned on and off by that instrument and manipulated by its audio signal.

The patch automates the movement, transparency, and rotation of objects within the video window in direct proportion to the amplitude of each MIDI instrument in Live. For the shapes associated with the synth, I created an image in Adobe Illustrator, which was then imported into Max and layered to create a new object. The drum kit uses clearly defined geometric shapes to contrast with the more amorphous shape generated by the synth.
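As a rough sketch of the kind of mapping involved, the snippet below shows an amplitude value driving transparency, rotation, and position, with a simple one-pole smoother standing in for the smoothing the Max objects provide. The ranges and names are made up for illustration; in the actual device this is done with Max for Live objects rather than code.

```python
def visual_params(amplitude, smooth_state, smoothing=0.9):
    """Map an instrument's amplitude (0..1) to visual parameters.

    The smoothing coefficient and output ranges are arbitrary choices
    for illustration, not values taken from the device.
    """
    smooth_state = smoothing * smooth_state + (1 - smoothing) * amplitude
    alpha = smooth_state                     # transparency follows level
    rotation = smooth_state * 180.0          # degrees of rotation
    y_offset = smooth_state * 2.0 - 1.0      # drift across the window
    return smooth_state, {"alpha": alpha, "rotate": rotation, "y": y_offset}

state = 0.0
for amp in [0.1, 0.8, 0.4, 0.0]:             # stand-in per-frame amplitudes
    state, params = visual_params(amp, state)
```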

MIDI Send Portions of the Patch

Synth Visual Portion of the Patch

Drum Visual Portion of the Patch

The patch itself, while rather large (and divided into a set of four patches within the Live session), is built from a series of smaller patches that execute a simple but polished concept. My personal goal for this project was to become familiar with a set of techniques I had not used in a previous project and to familiarize myself with the Max for Live environment, which operates with a unique set of limitations. I wanted to create a stylistic visual element that could be replicated in live performance and that came across as more polished, with smoother transitions, than what I was able to achieve in previous projects.

Of these techniques, the ones I was most concerned about ended up being the easiest (e.g., creating a unique shape/image in Adobe Illustrator and converting it into an .obj for use by jit.gl, since I do not have a visual media background), and the ones I thought I would complete easily proved more complicated in the Max for Live environment (e.g., automating the transparency of different jit.gl objects and creating smooth movement across the video window).

I would like to experiment further with the automation to create a much more experimental version of the visual elements, but I am pleased with how this first version turned out.

Drum Visualization Patch

Synth Visualization Patch

MIDI Send Patches

Assignment 4 — Jonathan Cavell

This assignment had us look at signal processing using the pfft~ object.

I wanted to do something interesting with the amplitude and phase data that the pfft~ object provides, something we had not yet tried. The end result was a fairly straightforward use of signal information to create a reactive video, similar to what we achieved in class, in which the amplitude and phase act as an external effect on a noise matrix.

I chose to use noise because I found it easier to produce a particle effect using the points draw mode of the jit.gl.mesh object.

The patch takes the amplitude and phase data from the incoming audio (in this case, from a microphone) and captures each as a number value (using snapshot~ rather than poltocar~ to convert the signal information after some minor processing). The number values are then used as parameters defining the location of an attracting force on the particles, causing them to move around the screen.
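The same flow can be sketched outside Max: take a frame of audio, pull amplitude and phase from the spectrum, and treat them as a 2D attractor that pulls the particles around. The bin choice, scaling, and force model below are illustrative assumptions, not the values used in the patch.

```python
import numpy as np

def attractor_from_frame(frame, bin_index=8):
    """Return an (x, y) attractor position from one frame of audio.

    Amplitude and phase of a single bin are mapped to coordinates in
    [-1, 1]; bin_index and the scaling are placeholders.
    """
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    amp = np.abs(spectrum[bin_index])
    phase = np.angle(spectrum[bin_index])
    x = np.clip(amp / (len(frame) / 4), 0.0, 1.0) * 2 - 1   # amplitude -> x
    y = phase / np.pi                                        # phase -> y
    return np.array([x, y])

def attract(particles, target, strength=0.05):
    """Nudge each particle toward the attractor (a crude force model)."""
    return particles + strength * (target - particles)

particles = np.random.uniform(-1, 1, size=(200, 2))
frame = np.random.randn(1024)          # stand-in for a microphone buffer
particles = attract(particles, attractor_from_frame(frame))
```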

I also added a more extreme set of attraction forces which use the amplitude and phase information to govern how strongly the particles are attracted to or repelled from the center of the video window. When turned on, the particles become more erratic due to the constantly changing values, which limits its applications, but I like it as a way to instantly intensify the drama of the visual.

I am interested in developing this patch further with a set of filters and gates to create a combination audio/visual instrument. I would also like to refine the way in which the particles are acted on by different forces to create a more fine-tuned reactive effect.

 

Assignment 3 — Jonathan Cavell

For this assignment, I made a patch utilizing a sample from the intro vamp of an old radio play narrated by Vincent Price.

The recording I provided uses all four convolutions at once. Three of them are played only once; the fourth, a simple IR from a stairwell, is offset from the others and then put through a large set of delays to generate a cascade of sound, as if it were coming from multiple sources placed close together in the same room.
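Conceptually, each layer is just the sample convolved with an impulse response, and the layers are summed with their own time offsets. Here is a hedged Python sketch of that structure, with placeholder offsets rather than the ones used in the piece.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_layer(dry, impulse_response):
    """One convolution layer: the sample filtered through an impulse response."""
    return fftconvolve(dry, impulse_response)

def mix_layers(layers, sr, offsets_ms):
    """Sum several convolved layers, each shifted by its own offset (ms)."""
    length = max(len(l) + int(sr * o / 1000) for l, o in zip(layers, offsets_ms))
    out = np.zeros(length)
    for layer, offset in zip(layers, offsets_ms):
        start = int(sr * offset / 1000)
        out[start:start + len(layer)] += layer
    return out
```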

The two impulse recordings that are not actual impulses were chosen for how they fit together and were edited for length so that they could be played simultaneously, building to a wall of sound before tapering off.

The end result is below.

I think further narrative content could be developed with carefully made audio cues. However, these may be better triggered using a launchpad rather than programmed in, so that there is a greater element of indeterminacy.

Assignment 2 — Jonathan Cavell

Video Synth

Below is a patch which utilizes color information from a video to generate synth sounds through a set of delay channels.

The patch starts by sending the RGB matrix from a video source to a series of jit.findbounds objects, which locate color values within each plane of the matrix (this includes the alpha plane, but it is not utilized here since I am interested in capturing red, green, and blue color values specifically).

There is a series of poly synthesizers, one for each set of bounds the jit.findbounds objects send, with one channel for the x value and one for the y value. The synthesizers “see” these number values as MIDI data and turn them into frequencies.

The values that are actually turned into frequencies are scaled to a less jarring range of possible pitches using a scale object, and the transitions between frequencies are smoothed using the line~ object.
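The chain from jit.findbounds to pitch can be sketched as: find the bounding box of bright pixels in a color plane, scale its position into a pitch range (the job of the scale object), and convert that to a frequency. The threshold, ranges, and helper names below are illustrative assumptions, not the patch's exact settings.

```python
import numpy as np

def find_bounds(plane, threshold=200):
    """Rough stand-in for jit.findbounds: bounding box of bright pixels in one plane."""
    ys, xs = np.nonzero(plane >= threshold)
    if len(xs) == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear mapping, like Max's scale object."""
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def midi_to_hz(note):
    return 440.0 * 2 ** ((note - 69) / 12)

# Example: map the x position of the red plane's bounds into a MIDI pitch range
red = np.random.randint(0, 256, size=(240, 320))     # stand-in for the red plane
bounds = find_bounds(red)
if bounds is not None:
    note = scale(bounds[0], 0, red.shape[1], 48, 72)  # illustrative pitch range
    freq = midi_to_hz(note)
```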

Finally, the frequencies are sent into a delay patch with two different controls. One is the static set of delay times for each channel, shown in the tapout~ object; the other is a changeable integer value that adds additional delay to the preset amounts in the tapout~.
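The delay stage amounts to a multi-tap delay: several fixed taps (the preset tapout~ times) plus an adjustable amount added on top. Below is a minimal Python sketch of that idea; the tap times and gain are placeholders, not the patch's settings.

```python
import numpy as np

def multitap_delay(signal, sr, taps_ms=(150, 300, 450), extra_ms=0, gain=0.6):
    """Sum fixed taps plus an adjustable extra delay, echoing tapin~/tapout~.

    taps_ms plays the role of the preset delay times; extra_ms is the
    changeable integer added to each of them.
    """
    total_ms = max(taps_ms) + extra_ms
    out = np.zeros(len(signal) + int(sr * total_ms / 1000))
    out[:len(signal)] += signal                          # dry signal
    for ms in taps_ms:
        start = int(sr * (ms + extra_ms) / 1000)
        out[start:start + len(signal)] += gain * signal  # delayed copy
    return out
```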

Since I wanted to use a clear video to create an additional feedback effect (an extra layer of delay using a randomly generated number value), I added a control to adjust the saturation levels. This is helpful because, depending on what is in frame, the change between values can be too subtle to produce a recognizable variation in pitch. Manipulating the saturation gets around this issue. Additionally, this control can produce sweeping effects by changing the set of values sent to the synths.

The final product provides a crude sonic impression of the colors moving within the digital image.