Category Archives: Assignments

Two-person synth – JEENA

I built a system that uses two Kinects to track the movements of two dancers, a digital synthesizer that generates sound based solely on the skeleton data, and a particle-pattern visual that changes based on both the skeleton data and the sound itself.

For the Kinect part, I use each user's head height, the distance between their hands, and their left-right body position. To keep the performance reliable, if any of these values from one of the Kinects stops changing, which indicates that the person may have moved out of the Kinect's range, I reset the sliders that send MIDI data to the synthesizer, so that the filters are not accidentally left at a very low setting.
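To make the reset logic concrete, here is a minimal Python sketch of stale-value detection, written outside of Max; the timeout, default slider value, and parameter names are assumptions for illustration, not the patch's actual settings.

```python
# Sketch: detect when a skeleton-derived value stops changing and reset the
# corresponding MIDI slider. Timeout, default value, and names are hypothetical.
import time

STALE_SECONDS = 2.0          # how long a value may stay unchanged before we assume
                             # the dancer left the Kinect's range
DEFAULT_SLIDER_VALUE = 64    # neutral midpoint so filters are not left nearly closed

class TrackedParam:
    def __init__(self, name):
        self.name = name
        self.last_value = None
        self.last_change = time.time()

    def update(self, value):
        """Record a new skeleton-derived value and remember when it last changed."""
        if value != self.last_value:
            self.last_value = value
            self.last_change = time.time()

    def is_stale(self):
        return time.time() - self.last_change > STALE_SECONDS

def check_and_reset(params, send_midi_cc):
    """If any parameter from one Kinect has stopped changing, reset its slider."""
    for p in params:
        if p.is_stale():
            send_midi_cc(p.name, DEFAULT_SLIDER_VALUE)

# Example usage with a stand-in MIDI sender:
params = [TrackedParam("head_height"), TrackedParam("hand_distance"), TrackedParam("body_x")]
check_and_reset(params, lambda name, val: print("reset %s -> %d" % (name, val)))
```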

For the synthesizer part, I split the sound into two layers, one manipulated by the filters and one left untouched, to reduce the chance that the sound is completely cut off during the performance. The synthesizer has 13 presets that people can choose from as starting points.

In the particle-pattern visual, the pattern is distorted by the sound, and its size is controlled by one of the dancers. Also, depending on where the two dancers are, the particles move left or right with them.

code

Ambisonic Template with Generative Features (Project 2) — Jonathan Cavell

For this project, I wanted to create a template for use in a performance setting for an upcoming project I am developing which combines electronics and live vocals.

In this case, the patch acts as a template that loads a set of audio files into a polybuffer~ and generates an 8-channel ambisonic audio signal from the imported files. In addition, a series of parameters allows the output of the patch to be customized both beforehand and live (using a Leap Motion controller).

These parameters include the volume of each of the 8 channels, a biquad filter, an svf~ filter, and the positioning of sound sources within three-dimensional space (using both generative and manually controlled movement).

The primary benefit of this template is that it auto-generates a multi-channel audio playback object and automatically connects it to the objects from the hoa object library, so that the primary focus of any project using this template is on the customization of parameters rather than on building an ambisonic patch from the ground up. In its current form, the patch can generate a sound installation for instant playback using only a handful of audio files and, within a particular set of bounds, various parameters of the sound as it is played live.
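As an aside on what the generative source movement might look like conceptually, here is a small Python sketch that random-walks a source's azimuth and elevation over time; the step sizes and bounds are invented for illustration and are not the patch's settings.

```python
# Sketch: generate a slowly drifting azimuth/elevation trajectory for one source.
# The random-walk approach, rates, and ranges are illustrative assumptions.
import math
import random

def generate_trajectory(steps=100, azimuth_step=0.05, elevation_limit=math.pi / 4):
    """Random-walk a source around the listener in azimuth, bounded in elevation."""
    azimuth, elevation = 0.0, 0.0
    trajectory = []
    for _ in range(steps):
        azimuth = (azimuth + random.uniform(-azimuth_step, azimuth_step)) % (2 * math.pi)
        elevation = max(-elevation_limit,
                        min(elevation_limit,
                            elevation + random.uniform(-0.02, 0.02)))
        trajectory.append((azimuth, elevation))
    return trajectory

for az, el in generate_trajectory(steps=5):
    print("azimuth %.2f rad, elevation %.2f rad" % (az, el))
```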

Given more time, I hope to further revise this patch so that it is more flexible and allows more complex ambisonic installations to be generated automatically (such as up to the 64 channels currently supported by the Higher Order Ambisonics library).

Patch Available Below (Requires Higher Order Ambisonics Library and Leap Motion Object by Jules Françoise):

Dropbox

Final Project – Generative Music Soundscape – Matthew Xie

For the final project, I decided to further explore self-generating music in Max/MSP, a step beyond what I created for Project 1. Eight designed sounds are ready for this project: 5 are main sounds while 3 act as special effects. The patch acts almost like a sequencer, with inputs for tempo and 'beats per bar'. Each bar, a new sound is triggered completely at random; however, both the frequency and volume of the sound come from analyzing the user's input through the piano keyboard and other settings. The user is also able to change the sound design of 5 of the 8 sounds through graphs. The piano keyboard also acts as a slider, as both frequency and volume are set based on where the user clicks it. Other sliders for the 7 different sounds set the range of possible octaves. From there, the 5 main sounds are selected randomly, and the 3 FX sounds are also played by chance, though that chance is computed within the subpatch. The sounds are processed through reverb and delay effects. Furthermore, a stuttering effect is also available, which splits each bar into 16 distinct 'steps' (an inspiration from Autechre).
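As a rough outline of that bar-level logic, here is a Python sketch (not the Max patch itself); the sound names, the FX probability, and the stutter behaviour are placeholder assumptions.

```python
# Sketch of the bar-level logic: pick one of the designed sounds each bar and
# optionally stutter the bar into 16 steps. Names, the FX chance, and the
# stutter behaviour are illustrative assumptions, not the patch's actual values.
import random

MAIN_SOUNDS = ["pad", "bell", "pluck", "drone", "noise_sweep"]   # 5 main sounds
FX_SOUNDS = ["riser", "glitch", "impact"]                        # 3 special effects
FX_CHANCE = 0.2                                                  # assumed probability
STEPS_PER_BAR = 16                                               # stutter resolution

def next_bar(stutter=False):
    sound = random.choice(MAIN_SOUNDS)              # a new main sound every bar
    events = [sound]
    if random.random() < FX_CHANCE:                 # FX sounds fire by chance
        events.append(random.choice(FX_SOUNDS))
    if stutter:
        # retrigger the chosen sound on each of the 16 subdivisions of the bar
        events = ["%s@step%d" % (sound, i) for i in range(STEPS_PER_BAR)]
    return events

for bar in range(4):
    print("bar %d:" % bar, next_bar(stutter=(bar == 3)))
```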

I originally wanted to do a generative music project based on probability and input from the mic, but after researching online, and especially after finding out about the music group Autechre, I changed my mind; I got most of my inspiration from their patches. The sound design was learned through the YouTube channel DeliciousMaxTutorials and http://sounddesignwithmax.blogspot.com/. The reverb subpatch is taken from https://cycling74.com/forums/reverb-in-max-msp.

Here is a recording sample of the piece being played:

 

Code as follows:

Final Project – Chaos Symphony – Kun Peng

In this project, I explored the concept of combining semi-generated music with noise. The initial signals in the piece are MIDI chords and head position. The patch combines these two signals, which then trigger other noises that contribute to the final piece.

Initially, I planned to work with finished MIDI sequences but ended up using only chords, since the goal of the project is to generate both noise and MIDI notes. The generated MIDI notes vary in velocity, which is controlled by the vertical head angle. The facial landmark positions are generated by a Python script using dlib (an accessible introduction is here: https://www.pyimagesearch.com/2017/04/03/facial-landmarks-dlib-opencv-python/); the output looks like the figure below. The arrays of x, y coordinates are sent over UDP from Python using the pyOSC library (https://github.com/ptone/pyosc), which makes the process easier (the native socket library in Python does not work very well with Max).
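A minimal sketch of that pipeline is below, assuming the standard 68-point dlib predictor file, a webcam at index 0, and a hypothetical OSC address and port (pyOSC targets Python 2, so a Python 2 environment is assumed); the real script's details may differ.

```python
# Sketch: extract 68 facial landmarks with dlib and send them to Max over OSC.
# The model path, camera index, OSC address, and port are assumptions.
import cv2
import dlib
import OSC  # pyOSC (https://github.com/ptone/pyosc), Python 2

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

client = OSC.OSCClient()
client.connect(("127.0.0.1", 7400))   # hypothetical port; a [udpreceive 7400] would match

cap = cv2.VideoCapture(0)
while True:                           # run until the process is killed
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray, 0):
        shape = predictor(gray, face)
        coords = []
        for i in range(68):           # flatten to x0, y0, x1, y1, ...
            coords += [shape.part(i).x, shape.part(i).y]
        msg = OSC.OSCMessage("/landmarks")
        msg.append(coords)
        client.send(msg)
```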

The first part of the patch handles parameter processing and calculations. After the facial landmark identification script is loaded, you can press A in Max to set up the reference coordinates on which further processing and calculations are based. Part of the patch is shown in the figure below. For all parameters, the average over 4 incoming values, rather than the raw value, is used for a more stable implementation. Note that the calculated values and distances are not what you would get by measuring your face with a ruler: the calculations are kept as simple as possible and no normalization is applied, so the output represents relative scales rather than exact values.
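The smoothing-and-reference idea boils down to something like the following Python sketch; the 4-value window matches the description, while everything else is illustrative.

```python
# Sketch: smooth each incoming landmark parameter over the last 4 values and
# express it relative to the reference captured when 'A' is pressed.
from collections import deque

class SmoothedParam:
    def __init__(self, window=4):
        self.buffer = deque(maxlen=window)   # last 4 incoming values
        self.reference = None

    def update(self, value):
        self.buffer.append(value)
        return sum(self.buffer) / len(self.buffer)

    def capture_reference(self):
        """Call when the user presses A: store the current smoothed value."""
        if self.buffer:
            self.reference = sum(self.buffer) / len(self.buffer)

    def relative(self):
        """Unnormalized scale relative to the reference, as described above."""
        if self.reference is None or not self.buffer:
            return 0.0
        return (sum(self.buffer) / len(self.buffer)) - self.reference
```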

The second part is note processing. In the current version, the patch is comfortable with 3-note chords. I will keep working on the patch so that it can deal with chords more flexibly.

 

You can control the following sound effects using these parameters:

  • Landmark 28, horizontal position: pitch-shifts the background noise with granular synthesis and triggers a crow noise effect. I implemented a manual fade-in and fade-out for the sound sample for better ambiance:

  • Landmarks 28 + 1, horizontal angle: triggers a gunfire background noise with granular synthesis; greater variance corresponds to a greater random pitch rate. Since calculating variance can be a bit difficult, I used another value of a similar nature (picking the first and last values from the bucket and calculating their distance) to approximate the concept. (My granular synthesis implementation in this project is relatively naive, since the goal is to produce sound effects.)
  • Landmarks 28 + 20, eyebrow raise: triggers an argument sound effect.
  • Landmarks 48 + 44, eye close: stops the generated notes from playing.
  • Landmarks 9 + 58, vertical head angle: changes note velocity. Raising your head produces louder notes, assuming you start by looking down at the screen.
  • Landmarks 49 + 55, smile/smirk: triggers a wicked laugh. The laugh is split into multiple pieces at different pitches using phasor so that you can hear the smile from multiple people (witches and wizards?). A custom pan is implemented so that you can pan the laughter by moving your head sideways. In the full demo I forgot to move my head in the direction that triggers the pitch-shifted voices, so here is a sample of this sound effect alone.

  • There is also a background thunder effect, triggered by low-pitched notes generated in the note-processing patch.

You can also make changes manually while performing if you are not happy with the sound.

 

Due to my computer’s inability to process all the data while recording both audio and video, only the audio file is included.

Small Production Environment – Will Walters Final Project

For my final project, I created what I’m calling a Small Production Environment, or SPE. Yes, it’s a bad name.

The SPE consists of three parts: the first being the subtractive synth from my last project, with some quality of life and functionality improvements. This serves as the lead of the SPE.

The second is a series of four probabilistic sequencers. This gives the SPE the ability to play four separate samples, with a probability specified for each sixteenth note in a four-beat measure. This serves as the rhythm of the SPE.
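The idea behind the probabilistic sequencers reduces to each step carrying its own chance of firing its sample; here is a Python sketch with arbitrary example probabilities (the sample names and values are assumptions, not the patch's).

```python
# Sketch of one probabilistic sequencer bank: each sixteenth-note step has its
# own probability of firing its sample. The probability tables are examples only.
import random

STEPS = 16

# One probability per step, one row per sample (sample names assumed).
patterns = {
    "kick":  [0.9, 0.0, 0.1, 0.0] * 4,
    "snare": [0.0, 0.0, 0.8, 0.1] * 4,
    "hat":   [0.5] * STEPS,
    "clap":  [0.0, 0.2, 0.0, 0.2] * 4,
}

def play_bar(trigger):
    """Walk the 16 steps once; fire each sample according to its step probability."""
    for step in range(STEPS):
        for sample, probs in patterns.items():
            if random.random() < probs[step]:
                trigger(sample, step)

play_bar(lambda sample, step: print("step %2d: %s" % (step, sample)))
```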

Finally, the third part is an automated bass line. This plays a sample at a regular (user-defined) interval. It also detects the key the lead is playing in and shifts the sample accordingly to match.

It also contains equalization controls for the bass and drums (jointly), as well as for the lead. In addition, many controls are alterable via MIDI keyboard knobs. A demonstration of the SPE is below.

The code for the main section of the patch can be found here. Its pfft~ subpatch is here.

The embedded sequencer can be found here.

The embedded synth can be found here. Its poly~ subpatch is here.

Thanks to V.J. Manzo for the Modal Analysis library, An0va for the bass guitar samples, Roland for making the 808 (and whoever first extracted the samples I downloaded), and Jesse for his help with the probabilistic sequencers.

 

Air-DJ, A Final Project by Anish Krishnan

As a pretty heavy music listener, I have always wondered whether it would be possible to mix a few songs together and create a mashup of my own. After eagerly surfing the web for an app that would let me do just that, I quickly realized that a mouse and keyboard are not the proper interface for working with music. This is exactly why DJs use expensive instruments with knobs and dials: so they can quickly achieve the effect they are going for. For my final project, I made an Air-DJ application in Max that lets you manipulate your music in a variety of ways with your hands, without ever touching the mouse or keyboard. Using a Leap Motion sensor, I mapped various gestures to control different aspects of a song.

After selecting a song to play, you can use your left hand to add beats. You can add 3 different types of beats by moving your hand forward, backward, or to the left. Lifting your hand up and down changes the volume/gain of the beat.

Your right hand controls the main track. Again, lifting it up and down controls the volume/gain of the song. With a pinch of your fingers, you can decrease the cutoff frequency of a low-pass filter. I also implemented a phase multiplier driven by moving your right hand towards and away from the screen (on the z-axis). Finally, moving your right hand sideways increases the time of an incorporated delay.
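Conceptually, these gesture mappings come down to scaling Leap Motion values into audio parameter ranges; the Python sketch below illustrates the idea with assumed input and output ranges rather than the patch's actual numbers.

```python
# Sketch: map Leap Motion right-hand data to audio parameters.
# All input and output ranges are assumptions, not the patch's exact values.
def scale(value, in_lo, in_hi, out_lo, out_hi):
    value = max(in_lo, min(in_hi, value))
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def map_right_hand(palm_height_mm, pinch_strength, palm_z_mm, palm_x_mm):
    return {
        # hand height (roughly 100-500 mm above the sensor) -> track gain 0..1
        "gain": scale(palm_height_mm, 100, 500, 0.0, 1.0),
        # pinching lowers the low-pass cutoff from wide open down to 200 Hz
        "lowpass_cutoff_hz": scale(pinch_strength, 0.0, 1.0, 18000, 200),
        # distance from the screen (z axis) drives the phase multiplier
        "phase_multiplier": scale(palm_z_mm, -200, 200, 1, 8),
        # sideways movement increases the delay time
        "delay_ms": scale(palm_x_mm, -200, 200, 0, 500),
    }

print(map_right_hand(palm_height_mm=300, pinch_strength=0.7, palm_z_mm=50, palm_x_mm=-100))
```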

Here are a few screenshots of the patch:

 

 

 

 

 

And here is the video of the whole thing!

Original song:

https://drive.google.com/open?id=1Z7nWcNn6fCZ3dw5nnWZ5tU52breicnIr

Air-DJ’d Song:

https://drive.google.com/open?id=1KseRhpuURgx3AZ6PN6Z14abrB5dDtoBS


All the important files are below:

Google Drive link containing all files: https://drive.google.com/open?id=1FmMiDLyB4gIbOK6bx0KgIbESSKyNBcA1

Github Gist: https://gist.github.com/anonymous/4570d6ae97e13fe29337a57a97fb81e5

Final Project – Isha Iyer

For my final project I decided to explore more ways of using the Leap Motion sensor to control different elements of drawing. I made a game in which the coordinates of a hand are tracked and translated into both a rotation and a resizing of a square, which the player tries to match to a target square. When the squares match closely enough, the game moves on to another target. I have attached a demo of me playing this.
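The matching check amounts to mapping hand coordinates to a rotation and a size and comparing them to the target within a tolerance; here is a Python sketch of that logic, with made-up ranges and thresholds.

```python
# Sketch: map hand coordinates to a square's rotation and size, then check
# whether it matches the target closely enough. Ranges and tolerances are assumed.
import random

def hand_to_square(hand_x, hand_y):
    rotation = (hand_x / 200.0) * 90.0      # sideways motion -> degrees of rotation
    size = 50 + max(0.0, hand_y) * 0.5      # hand height -> square size in pixels
    return rotation, size

def matches(rotation, size, target_rotation, target_size,
            rot_tol=5.0, size_tol=10.0):
    return (abs(rotation - target_rotation) < rot_tol and
            abs(size - target_size) < size_tol)

target = (random.uniform(0, 90), random.uniform(60, 150))
rot, size = hand_to_square(hand_x=120, hand_y=130)
if matches(rot, size, *target):
    # matched closely enough: move on to a new target square
    target = (random.uniform(0, 90), random.uniform(60, 150))
```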

I was also very interested in learning more about different uses for the machine learning patch. I trained it on Leap Motion data to detect three different hand gestures: a palm facing down, a fist, and a "c" for camera. As shown in the demo below, when I make a "C" with my hand, I am able to take pictures with the camera. I can then use my left hand to distort the image taken. This distortion was influenced by this tutorial.

Here is a link to all the final files I used for this project including the machine learning data and model for convenience. I also have included all the gists below.

Draw Game:

ML patch:

Distortion Patch:

Adam J. Thompson – Final Project – Body Paint

Body Paint is the visual component of a commission from the Pittsburgh Children’s Museum in collaboration with three sound artists from the School of Drama.

The project is an interactive experience which uses the Kinect 2 to transform each participant’s head, hands, and feet into paintbrushes for drawing colored lines and figures in space. Each session lasts for one minute, following which the patch clears the canvas allowing a new user to take over and begin again.

Participants might attempt to draw representational shapes, or perhaps dance in space and see what patterns emerge from their movements.

The user’s head draws in green, the left hand in magenta, the right hand in red, the left foot in orange, and the right foot in blue.
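In essence, each tracked joint appends points to its own colored trail; the short Python sketch below shows that bookkeeping (the colors follow the description above, while the data structure and clearing behaviour are assumptions).

```python
# Sketch: accumulate a colored trail of points for each tracked Kinect joint.
# Joint names and colors mirror the description; everything else is assumed.
JOINT_COLORS = {
    "head":       (0, 255, 0),     # green
    "left_hand":  (255, 0, 255),   # magenta
    "right_hand": (255, 0, 0),     # red
    "left_foot":  (255, 165, 0),   # orange
    "right_foot": (0, 0, 255),     # blue
}

trails = {joint: [] for joint in JOINT_COLORS}

def add_frame(joint_positions):
    """joint_positions: dict of joint name -> (x, y) from the Kinect skeleton."""
    for joint, pos in joint_positions.items():
        if joint in trails:
            trails[joint].append(pos)

def clear_canvas():
    """Called once per minute so the next participant starts with a blank canvas."""
    for joint in trails:
        trails[joint].clear()
```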

Body Paint will be installed in the Museum in late January for a currently undefined period of time, free for participants to wander up to, discover, and experience during their visit.

Visual documentation of the patch in presentation and patcher modes and a video recording of the results of my body drawing in space are below.

The Gist is here.

Project 1 – Kun Peng

In this project, I explored the harmony and intervals in MIDI files and visualized these qualities. Each MIDI note is represented by a cube, which is pressed down or pulled up when the note is turned on or off. A noise value related to the dissonance of the currently held chord is generated and applied to the position attribute of the cubes. The rendered mesh object changes its color mode when a root-note change is detected in the chord. Unfortunately, the visualization is pretty crude and I'm still very far from what I wanted to do. I had some trouble trying to manipulate each cube independently in a more creative way in the jit.gl.mult context, for example, applying a glow effect to a specific cube when a note is turned on. My main plan is to improve my method of generating the cubes so that ultimately they can be manipulated independently.

This part of the patch evaluates the intervals in the currently held chord and assigns a dissonance value to it. The current evaluation is subjective and cannot accommodate inversions or the subtle differences in complex chords. For future work, I will integrate the material discussed in this note (http://www.oneonta.edu/faculty/legnamo/theorist/density/density.html) and produce more robust evaluations.
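One simple way to express that kind of subjective scoring is to sum a per-interval weight over every pair of notes in the chord; the Python sketch below does this with an illustrative weight table that is not the patch's actual mapping.

```python
# Sketch: assign a dissonance value to a held chord by summing per-interval scores.
# The interval weights are an illustrative, subjective table.
from itertools import combinations

DISSONANCE = {0: 0.0, 1: 1.0, 2: 0.8, 3: 0.3, 4: 0.3, 5: 0.2,
              6: 0.9, 7: 0.1, 8: 0.4, 9: 0.4, 10: 0.7, 11: 1.0}

def chord_dissonance(midi_notes):
    """Sum the dissonance of every pair of notes, folded into one octave."""
    total = 0.0
    for a, b in combinations(midi_notes, 2):
        total += DISSONANCE[abs(a - b) % 12]
    return total

print(chord_dissonance([60, 64, 67]))   # C major triad -> low value
print(chord_dissonance([60, 61, 62]))   # chromatic cluster -> high value
```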

This part of the patch generates the noise value and applies it to the position matrix.

 

A short demonstration of the patch (I know the visualization still looks too simple; I will work on more ways to integrate the processed signal into the rendered objects).

 

Project 1 – Matthew Xie

For Project 1, I created a self-generated melody & drone patch.

First off, a WAV file of single piano notes played consecutively is analyzed. While Max randomly selects portions of the WAV file to play as snippets, the frequency of the audio being played is analyzed and triggers the first, higher-pitched drone at intervals. Meanwhile, the second drone patch can be triggered by using the keyboard as a MIDI keyboard.

The drone is achieved via subtractive synthesis: a pink noise generator is sent through filters that only let certain frequency bands pass, done with a handful of inline reson~ objects.
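Outside of Max, the same subtractive idea can be sketched in a few lines of Python with NumPy and SciPy; the band frequencies, bandwidths, and the use of white rather than pink noise are simplifying assumptions.

```python
# Sketch of the subtractive-synthesis idea: filter noise so that only a few
# narrow frequency bands pass, analogous to a chain of reson~ objects.
# Band frequencies, bandwidths, and duration are illustrative assumptions.
import numpy as np
from scipy import signal

SR = 44100
noise = np.random.randn(SR * 2)             # two seconds of white noise
                                            # (stand-in for the pink noise source)
bands_hz = [110, 220, 330]                  # assumed drone partials
drone = np.zeros_like(noise)

for f in bands_hz:
    # narrow band-pass around each partial, roughly what a reson~ would do
    b, a = signal.butter(2, [f * 0.98, f * 1.02], btype="bandpass", fs=SR)
    drone += signal.lfilter(b, a, noise)

drone /= np.max(np.abs(drone))              # normalize before playback or writing
```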

The ‘analyzer~’ object is referenced from the maxobject.com website.

Delay is added to all sound effects. The piano melody can also be sent through a noise gate at will. The speed of the piano sampling can also be manipulated, which immediately affects the speed of the self-generated higher-pitched drones as well.

Here is an example of the music being played:

Code is Here: