Category Archives: Assignments

Two-person synth – JEENA

I built a system that uses two Kinects to track the movements of two dancers, a digital synthesizer that generates sound based solely on the skeleton data, and particle-pattern visuals that change based on both the skeleton data and the sound itself.

For the Kinect part, I use the head height of each user, the distance between their hands, and their left-right body positions. To keep the performance stable, if any of those values from one of the Kinects stops changing, which indicates that the person may have moved out of the Kinect's range, I reset the sliders that send MIDI data to the synthesizer, so that the filters are not accidentally left at a very low point.
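
As a rough sketch of that reset logic (in Python rather than Max; the timing threshold, joint names, and the "fully open" reset value are assumptions, not what the patch actually uses):

    import time

    STALE_SECONDS = 2.0      # assumed: how long a value may stay frozen before we assume the dancer left
    last_values = {}         # joint name -> (value, timestamp of last change)

    def update_joint(name, value, send_midi):
        """Forward a skeleton value as a MIDI slider, resetting it if the data has gone stale."""
        now = time.time()
        prev = last_values.get(name)
        if prev is None or abs(value - prev[0]) > 1e-6:
            last_values[name] = (value, now)       # value is still changing: pass it through
            send_midi(name, value)
        elif now - prev[1] > STALE_SECONDS:
            last_values[name] = (value, now)
            send_midi(name, 1.0)                   # assumed reset: open the slider so filters aren't left closed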

For the synthesizer part, I split the sound into two parts, one manipulated by the filters and one left untouched, to decrease the chance that the sound is completely silenced during the performance. The synthesizer has 13 presets that performers can choose from as starting points.

In the particle-pattern visuals, the pattern is distorted by the sound, and the size of the pattern is controlled by one of the dancers. Also, depending on where the two dancers are, the particles move left or right with them.

Code

Ambisonic Template with Generative Features (Project 2) — Jonathan Cavell

For this project, I wanted to create a template for use in a performance setting for an upcoming project I am developing which combines electronics and live vocals.

In this case, the patch acts as a template that loads a set of audio files into a polybuffer~ and generates an 8-channel ambisonic audio signal from the imported files. In addition, a series of parameters has been added that allows the output of the patch to be customized both beforehand and live (using a Leap Motion controller).

These parameters include the volume of each of the 8 channels, a biquad filter, an svf~ filter, and the positioning of sound sources within three-dimensional space (using both generative and manually controlled movement).

The primary benefit of this template is that it auto-generates a multichannel audio playback object and automatically connects it to the objects from the hoa object library, so that the primary focus of any project using this template is on customizing parameters rather than building an ambisonic patch from the ground up. In its current form, the patch can generate a sound installation for instant playback using only a handful of audio files (within a particular set of bounds), with various parameters of the sound controllable as it is played live.
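
As a conceptual sketch only (the real patch leaves the encoding to the hoa library objects), distributing a mono source across the 8 outputs by azimuth might look something like this; the cosine panning law here is an assumption for illustration, not the library's actual math:

    import math

    NUM_CHANNELS = 8  # one gain per output channel, as in the 8-channel template

    def channel_gains(azimuth_rad):
        """Rough stand-in for ambisonic panning: cosine gains toward each of 8
        equally spaced virtual speakers (the patch itself uses the hoa objects)."""
        gains = []
        for ch in range(NUM_CHANNELS):
            speaker_angle = 2 * math.pi * ch / NUM_CHANNELS
            # similarity between source direction and speaker direction, floored at 0
            g = max(0.0, math.cos(azimuth_rad - speaker_angle))
            gains.append(g)
        total = math.sqrt(sum(g * g for g in gains)) or 1.0
        return [g / total for g in gains]   # normalize so overall level stays constant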

Given more time, I hope to further revise this patch so that it is more flexible and allows more complex ambisonic installations to be generated automatically (such as using up to the 64 channels currently supported by the Higher Order Ambisonics library).

Patch Available Below (Requires Higher Order Ambisonics Library and Leap Motion Object by Jules Françoise):

Dropbox

Final Project – Generative Music Soundscape – Matthew Xie

For the final project, I decided to further explore Max MSP's self-generating music, a step beyond what I created for Project 1. For this project, 8 designed sounds are available: 5 are main sounds while 3 act as special effects. The patch acts almost as a sequencer, with inputs for tempo and ‘beats per bar’. Each bar, a new sound is triggered completely at random. However, both the frequency and volume of the sound are derived from analyzing the user’s input through the piano keyboard and other settings. The user is also able to change the sound design of 5 of the 8 sounds through graphs. The piano keyboard also acts as a slider, as both the frequency and volume are set based on where the user clicks it. Other sliders for the 7 different sounds set the range of possible octaves. From there, the 5 main sounds are selected randomly. The 3 FX sounds are also played by chance, but that chance is computed within the subpatch. The sounds are processed through reverb and delay effects. Furthermore, a stuttering effect is also available, which splits each bar up into 16 distinct ‘steps’ (an inspiration from Autechre).
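
A minimal sketch of the per-bar logic described above, in Python rather than Max; the sound names, the FX probability, and the octave range are placeholders, not the values used in the patch:

    import random

    MAIN_SOUNDS = ["pad", "pluck", "bell", "noise", "sub"]   # hypothetical names for the 5 main sounds
    FX_SOUNDS = ["sweep", "glitch", "drop"]                  # the 3 special-effect sounds
    FX_CHANCE = 0.2                                          # assumed probability per bar

    def next_bar(base_freq, base_gain):
        """Pick the sounds for one bar: one main sound always, FX only by chance."""
        events = [(random.choice(MAIN_SOUNDS), base_freq * 2 ** random.randint(-1, 1), base_gain)]
        if random.random() < FX_CHANCE:
            events.append((random.choice(FX_SOUNDS), base_freq, base_gain * 0.5))
        return events

    def stutter(bar_length_ms):
        """Stutter effect: split the bar into 16 equal steps."""
        return [i * bar_length_ms / 16 for i in range(16)]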

I originally wanted to do a generative music project based on probability and input from the mic, but after researching online, and especially after finding out about the music group Autechre, I changed my mind. I mainly got my inspiration from their patches. The sound designs were learned through the DeliciousMaxTutorials YouTube channel and http://sounddesignwithmax.blogspot.com/. The reverb subpatch is based on https://cycling74.com/forums/reverb-in-max-msp.

Here is a recording sample of the piece being played:

 

Code as follows:

Small Production Environment – Will Walters – Final Project

For my final project, I created what I’m calling a Small Production Environment, or SPE. Yes, it’s a bad name.

The SPE consists of three parts. The first is the subtractive synth from my last project, with some quality-of-life and functionality improvements; this serves as the lead of the SPE.

The second is a series of four probabilistic sequencers. These give the SPE the ability to play four separate samples, with a probability specified for each sixteenth note in a four-beat measure. This serves as the rhythm of the SPE.
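
A minimal sketch of how such a probabilistic sequencer can work (the sample names and probability values here are illustrative, not the patch's actual data):

    import random

    STEPS = 16  # sixteenth notes in one measure

    # One probability per step, per sample.
    probabilities = {
        "kick":  [0.9, 0.0, 0.1, 0.0] * 4,
        "snare": [0.0, 0.0, 0.8, 0.0] * 4,
        "hat":   [0.6] * 16,
        "clap":  [0.05] * 16,
    }

    def play_step(step):
        """Return which samples fire on this sixteenth note, each decided by its own coin flip."""
        return [name for name, probs in probabilities.items() if random.random() < probs[step]]

    for step in range(STEPS):
        print(step, play_step(step))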

Finally, the third part is an automated bass line. This plays a sample at a regular (user-defined) interval. It also detects the key the lead is playing in and shifts the sample to match.
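
The key detection itself comes from the Modal Analysis library; the sketch below only shows the standard semitone-ratio math one could use to transpose the sample once the key is known (the MIDI note numbers are just an example):

    def playback_ratio(sample_root_midi, target_root_midi):
        """Playback-speed ratio needed to shift the bass sample from its recorded root
        to the detected key (key detection itself is handled elsewhere)."""
        semitones = target_root_midi - sample_root_midi
        return 2 ** (semitones / 12)

    # e.g. a sample recorded in C (60) shifted up to E (64) plays back ~1.26x faster
    print(playback_ratio(60, 64))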

The SPE also contains equalization for the bass and drums (jointly), as well as for the lead. In addition, many controls can be altered via MIDI keyboard knobs. A demonstration of the SPE is below.

The code for the main section of the patch can be found here. Its pfft~ subpatch is here.

The embedded sequencer can be found here.

The embedded synth can be found here. Its poly~ subpatch is here.

Thanks to V.J. Manzo for the Modal Analysis library, An0va for the bass guitar samples, Roland for making the 808 (and whoever first extracted the samples I downloaded), and Jesse for his help with the probabilistic sequencers.

 

Air-DJ, A Final Project by Anish Krishnan

As a pretty heavy music listener, I have always wondered whether it would be possible to mix a few songs together and create a mashup of my own. After eagerly surfing the web for an app that would let me do just that, I quickly realized that a mouse and keyboard are not the right interface for working with music. This is exactly why DJs use expensive instruments with knobs and dials: so they can quickly achieve the effect they are going for. For my final project, I made an Air-DJ application in Max so that you can manipulate your music in a variety of ways using your hands, without ever touching the mouse or keyboard. Using a Leap Motion sensor, I mapped various gestures to different aspects of a song.

After selecting a song to play, you can use your left hand to add beats. You can add 3 different types of beats by moving your hand forward, backward, or to the left. Lifting your hand up and down changes the volume/gain of the beat.

Your right hand controls the main track. Again, lifting it up and down controls the volume/gain of the song. With a pinch of your fingers, you can decrease the cut-off frequency of a low-pass filter. I also implemented a phase multiplier controlled by moving your right hand towards and away from the screen (on the z-axis). Finally, moving your right hand sideways increases the delay time.
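
A rough sketch of how those right-hand mappings could be expressed; the parameter ranges and the assumption that the Leap Motion values arrive normalized to 0..1 are mine, not the patch's:

    def clamp(x, lo, hi):
        return max(lo, min(hi, x))

    def right_hand_controls(palm_x, palm_y, palm_z, pinch):
        """Map normalized Leap Motion values (0..1, names hypothetical) to song parameters."""
        gain = clamp(palm_y, 0.0, 1.0)                            # height -> volume/gain
        cutoff_hz = 20000 * (1.0 - clamp(pinch, 0.0, 1.0)) + 100  # tighter pinch -> lower cutoff
        phase_mult = 1 + int(clamp(palm_z, 0.0, 1.0) * 7)         # depth (z-axis) -> phase multiplier
        delay_ms = clamp(palm_x, 0.0, 1.0) * 1000                 # sideways -> longer delay
        return gain, cutoff_hz, phase_mult, delay_ms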

Here are a few screenshots of the patch:


And here is the video of the whole thing!

Original song:

https://drive.google.com/open?id=1Z7nWcNn6fCZ3dw5nnWZ5tU52breicnIr

Air-DJ’d Song:

https://drive.google.com/open?id=1KseRhpuURgx3AZ6PN6Z14abrB5dDtoBS


All the important files are below:

Google Drive link containing all files: https://drive.google.com/open?id=1FmMiDLyB4gIbOK6bx0KgIbESSKyNBcA1

Github Gist: https://gist.github.com/anonymous/4570d6ae97e13fe29337a57a97fb81e5

Final Project – Isha Iyer

For my final project I decided to explore more ways of using the Leap Motion sensor to control different elements of drawing. I made a game in which the coordinates of a hand are tracked and translated into both a rotation and a resizing of a square, which the player tries to match to a target square. When the squares match closely enough, the game moves on to a new target. I have attached a demo of me playing it.
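
A minimal sketch of the matching check; the angle and size tolerances here are placeholders, not the values used in the patch:

    import random

    ANGLE_TOL = 5.0   # degrees; assumed tolerance
    SIZE_TOL = 0.05   # fraction of the target size; assumed tolerance

    def is_match(angle, size, target_angle, target_size):
        """True when the player's square is close enough to the target square."""
        return (abs(angle - target_angle) <= ANGLE_TOL and
                abs(size - target_size) <= SIZE_TOL * target_size)

    def new_target():
        """Pick the next target square once the current one is matched."""
        return random.uniform(0, 90), random.uniform(0.5, 2.0)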

I was also very interested in learning more about different uses for the machine learning patch. I trained the Leap Motion to detect three different hand gestures: a palm facing down, a fist, and a “C” for camera. As shown in the demo below, when I make a “C” with my hand, I am able to take pictures with the camera. I can then use my left hand to distort the image taken. This distortion was influenced by this tutorial.

Here is a link to all the final files I used for this project, including the machine learning data and model for convenience. I have also included all the gists below.

Draw Game:

ML patch:

Distortion Patch:

Adam J. Thompson – Final Project – Body Paint

Body Paint is the visual component of a commission from the Pittsburgh Children’s Museum in collaboration with three sound artists from the School of Drama.

The project is an interactive experience that uses the Kinect 2 to transform each participant’s head, hands, and feet into paintbrushes for drawing colored lines and figures in space. Each session lasts for one minute, after which the patch clears the canvas, allowing a new user to take over and begin again.

Participants might attempt to draw representational shapes, or perhaps dance in space and see what patterns emerge from their movements.

The user’s head draws in green, the left hand in magenta, the right hand in red, the left foot in orange, and the right foot in blue.
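
A tiny sketch of that joint-to-color mapping and the one-minute reset; the RGB values are illustrative, and the real patch works on live Kinect joint data rather than a simple callback like this:

    import time

    # Joint-to-color mapping described above (RGB values are illustrative).
    JOINT_COLORS = {
        "head":       (0, 255, 0),     # green
        "left_hand":  (255, 0, 255),   # magenta
        "right_hand": (255, 0, 0),     # red
        "left_foot":  (255, 165, 0),   # orange
        "right_foot": (0, 0, 255),     # blue
    }

    SESSION_SECONDS = 60
    session_start = time.time()
    strokes = []   # list of (color, position) points drawn so far

    def on_joint(joint, position):
        """Add a stroke point for a tracked joint, clearing the canvas every minute."""
        global session_start, strokes
        if time.time() - session_start > SESSION_SECONDS:
            strokes = []                       # clear the canvas for the next participant
            session_start = time.time()
        strokes.append((JOINT_COLORS[joint], position))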

Body Paint will be installed in the Museum in late January for a currently undefined period of time, free for participants to wander up to, discover, and experience during their visit.

Visual documentation of the patch in presentation and patcher modes and a video recording of the results of my body drawing in space are below.

The Gist is here.

Project 1 – Matthew Xie

For Project 1, I created a self-generating melody & drone patch.

First off, a WAV file of single piano notes played consecutively is analyzed. While Max randomly selects portions of the WAV file to play as snippets, the frequency of the audio being played is analyzed and triggers the first, higher-pitched drone at intervals. Meanwhile, the second drone patch can be triggered by using the keyboard as a MIDI keyboard.

The drone is achieved via subtractive synthesis: a pink noise generator is sent through filters that only let certain frequency bands pass. The subtractive synthesis is done with a handful of inline reson~ objects.
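
A rough stand-in for that signal chain in Python: a two-pole resonator approximating reson~, with white noise standing in for the pink noise source; the band frequencies are illustrative, not the patch's settings:

    import math, random

    SR = 44100  # sample rate

    def resonator(signal, freq, r=0.995):
        """Two-pole resonator, a rough stand-in for Max's reson~ band-pass filter."""
        w = 2 * math.pi * freq / SR
        b1, b2 = 2 * r * math.cos(w), -r * r
        y1 = y2 = 0.0
        out = []
        for x in signal:
            y = (1 - r) * x + b1 * y1 + b2 * y2
            out.append(y)
            y1, y2 = y, y1
        return out

    # White noise as a stand-in for the pink noise source in the patch.
    noise = [random.uniform(-1, 1) for _ in range(SR)]
    # Sum a handful of narrow bands to form the drone.
    drone = [sum(vals) for vals in zip(*(resonator(noise, f) for f in (110, 220, 330, 440)))]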

The ‘analyzer~’ object is referenced from the maxobject.com website.

Delay is added to all sound effects. The piano melody can also be sent through a noise gate at will. The speed of the piano sampling can also be manipulated, which immediately affects the speed of the self-generated higher-pitched drones as well.

Here is an example of the music being played:

Code is Here:

Reactive Visuals in Max for Live (Project 1) — Jonathan Cavell

For our first longer-term project, I created a patch that produces a more stylized visual reacting to MIDI and audio data from an Ableton Live session. Within the patch, gl objects are set to render and erase based on MIDI information from each instrument and are manipulated by the amplitude of the audio signal generated by each instrument. The end result is a set of shapes/objects, each assigned to its own instrument, which are turned on and off by that instrument and manipulated by its audio signal.

The patch automates the movement, transparency, and rotation of objects within the video window in direct proportion to the amplitude of each MIDI instrument in Live. For the shapes associated with the synth, I created an image in Adobe Illustrator, which was then imported into Max and layered to create a new object. The drum kit uses clearly defined geometric shapes to contrast with the more amorphous shape generated by the synth.
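
A minimal sketch of that mapping, assuming the amplitude has already been smoothed into a 0..1 envelope; the ranges are illustrative, not the values used in the Live set:

    def visual_params(amplitude, note_on):
        """Map an instrument's state to its shape's parameters (ranges are illustrative)."""
        return {
            "visible": bool(note_on),            # MIDI note on/off renders or erases the shape
            "alpha":   min(1.0, amplitude),      # louder -> more opaque
            "rotate":  amplitude * 360.0,        # louder -> faster rotation
            "offset":  amplitude * 0.5,          # louder -> larger movement across the window
        }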

Midi Send Portions of the Patch

Synth Visual Portion of the Patch

Drum Visual Portion of the Patch

The patch itself, while rather large (and divided into a set of four patches within the Live session), is built on a series of smaller patches to execute a simple but polished concept. My personal goal for this project was to become familiar with a set of techniques I had not used in previous projects and with the Max for Live environment, which operates under a unique set of limitations. I wanted to create a stylistic visual element that could be replicated for live performance and that came across as more polished, with smoother transitions, than what I was able to achieve in previous projects.

Of these techniques, the ones I was most concerned about ended up being the easiest (e.g., creating a unique shape/image in Adobe Illustrator and converting it into an .obj for use by jit.gl, since I do not have a visual media background), while the ones I thought I should be able to complete easily proved more complicated in the Max for Live environment (e.g., automating the transparency of different jit.gl objects and creating smooth movement across the video window).

I would like to experiment further with the automation to create a much more experimental version of the visual elements, but I am pleased with how this first version turned out.

Drum Visualization Patch

Synth Visualization Patch

Midi Send Patches

Project 1 – 3x Oscillator – Will Walters

For this project, I created a synthesizer instrument called a 3x Oscillator. It does more or less what it says on the tin: the user controls three oscillators which can be played via MIDI input. When a note is played, the oscillators produce sound in tandem, creating a far fuller sound than a single tone. The oscillators can be tuned and equalized relative to each other, and the waveform of each can be selected – sinusoid, sawtooth, or square. Other options for customization include the total gain of the instrument; independent control of the attack, decay, sustain, and release; and a filter with both its type and parameters customizable.
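
A bare-bones sketch of the core idea of summing three oscillators; the waveforms, detune amounts, and gains here are arbitrary, and the actual instrument adds ADSR, filtering, and MIDI handling on top:

    import math

    SR = 44100

    def osc(shape, freq, n):
        """One oscillator sample: sinusoid, sawtooth, or square."""
        phase = (freq * n / SR) % 1.0
        if shape == "sine":
            return math.sin(2 * math.pi * phase)
        if shape == "saw":
            return 2 * phase - 1
        return 1.0 if phase < 0.5 else -1.0       # square

    def three_osc(note_freq, n, detunes=(0.0, -0.1, 0.1),
                  shapes=("saw", "saw", "square"), gains=(0.5, 0.3, 0.2)):
        """Sum three detuned oscillators for one sample (detunes in Hz, values illustrative)."""
        return sum(g * osc(s, note_freq + d, n) for s, d, g in zip(shapes, detunes, gains))

    samples = [three_osc(220.0, n) for n in range(SR)]   # one second of a fuller, layered tone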

Here’s a video of me noodling around with it:

(sorry about the audio in some places, it’s my capture, not the patch itself)

The main patch can be found here.

The patch used inside poly~ can be found here.