Two-person synth – JEENA

I built a system that uses two Kinects to track the movements of two dancers, a digital synthesizer that generates sound based solely on the skeleton data, and a particle-pattern visual that changes based on both the skeleton data and the sound itself.

For the Kinect part, I use each user's head height, the distance between their hands, and their left-right body position. To keep the performance stable, if any of these values from one of the Kinects stops changing, which indicates that the person may have moved out of that Kinect's range, I reset the sliders that send MIDI data to the synthesizer, so that the filters are not accidentally left at a very low point.
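
To illustrate that reset logic outside of Max, here is a minimal sketch in Python: if a tracked value has not changed for a while, the sliders feeding MIDI to the synth are pushed back to safe defaults. The value names, thresholds, and default values are hypothetical, not the ones used in the patch.

```python
import time

# Hypothetical safe defaults for the sliders that drive the synth filters (0-127).
SAFE_CC_VALUES = {"head_height": 100, "hand_distance": 100, "body_x": 64}
STALE_SECONDS = 2.0  # how long a value may stay frozen before we assume the dancer left

class KinectWatchdog:
    def __init__(self, send_cc):
        self.send_cc = send_cc          # callback: send_cc(name, value)
        self.last_value = {}
        self.last_change = {}

    def update(self, name, value):
        """Call with each new skeleton-derived value."""
        now = time.time()
        if self.last_value.get(name) != value:
            self.last_value[name] = value
            self.last_change[name] = now

    def check(self):
        """Reset any slider whose source value has stopped changing."""
        now = time.time()
        for name, default in SAFE_CC_VALUES.items():
            if now - self.last_change.get(name, now) > STALE_SECONDS:
                self.send_cc(name, default)   # park the filter at a safe point
```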

For the synthesizer part, I split the sound into two paths: one is shaped by the filters and one is left dry, which reduces the chance that the sound is completely silenced during the performance. The synthesizer has 13 presets that performers can choose from as starting points.
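
The two-path idea can be sketched numerically: a dry path is summed with a filtered path, so even when the filter is closed down the dry path keeps sounding. The one-pole low-pass below is only a stand-in for the patch's filters.

```python
import numpy as np

def one_pole_lowpass(x, coeff):
    """Very simple low-pass stand-in for the patch's filters (0 < coeff <= 1)."""
    y = np.zeros(len(x))
    for i in range(1, len(x)):
        y[i] = y[i - 1] + coeff * (x[i] - y[i - 1])
    return y

def two_path_output(signal, filter_coeff, wet_gain=0.5, dry_gain=0.5):
    # Parallel paths: even with wet_gain at 0, the dry path keeps the sound audible.
    return dry_gain * signal + wet_gain * one_pole_lowpass(signal, filter_coeff)
```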

In the particle visuals, the pattern is distorted by the sound, and its size is controlled by one of the dancers. Depending on where the two dancers are, the particles also move left or right with them.

Code:

Project 2: Rave Visuals Continued – Arnav Luthra

For this project, I continued my work from the first project and added tap-for-BPM control and pose recognition using machine learning and the Leap Motion controller.

I kept the same overall layout, with a video feed split into separate RGB color planes that are shifted against each other, but instead of a single looping video I created a playlist of videos that can be switched by making a fist. I also altered the playback speed of the video using the position of the right palm over the sensor.
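
A rough sketch of that control logic in Python, using hypothetical Leap-style inputs (a 0-1 grab strength and the palm height in millimeters): a fist advances the playlist once per closure, and palm height is scaled to a playback rate.

```python
class VideoController:
    def __init__(self, playlist):
        self.playlist = playlist
        self.index = 0
        self.fist_closed = False    # debounce so one fist = one switch

    def on_frame(self, grab_strength, palm_y_mm):
        # Switch videos on the rising edge of a fist gesture.
        if grab_strength > 0.9 and not self.fist_closed:
            self.fist_closed = True
            self.index = (self.index + 1) % len(self.playlist)
        elif grab_strength < 0.5:
            self.fist_closed = False

        # Map palm height (roughly 100-400 mm over the sensor) to a 0.25x-2x rate.
        norm = min(max((palm_y_mm - 100) / 300.0, 0.0), 1.0)
        rate = 0.25 + norm * 1.75
        return self.playlist[self.index], rate
```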

Instead of the problematic beat-detection object from the first version, I built a simple tap-for-BPM control using a timer and some zl objects.
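
The tap-for-BPM logic amounts to timing the gaps between successive taps and averaging the last few intervals, which is roughly what the timer and zl objects do in the patch. A sketch of the same idea in Python:

```python
import time

class TapTempo:
    def __init__(self, history=4, timeout=2.0):
        self.taps = []
        self.history = history     # how many intervals to average (like a zl window)
        self.timeout = timeout     # forget taps older than this

    def tap(self):
        now = time.time()
        if self.taps and now - self.taps[-1] > self.timeout:
            self.taps = []         # stale taps: start a new measurement
        self.taps.append(now)
        self.taps = self.taps[-(self.history + 1):]
        if len(self.taps) < 2:
            return None            # need at least two taps for an interval
        intervals = [b - a for a, b in zip(self.taps, self.taps[1:])]
        return 60.0 / (sum(intervals) / len(intervals))   # BPM
```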

If I were to continue this further, I would look into more interesting parameters to tweak, as well as ways to add more visual diversity.

Patch:

Ambisonic Template with Generative Features (Project 2) — Jonathan Cavell

For this project, I wanted to create a template for use in a performance setting, for an upcoming project I am developing that combines electronics and live vocals.

The patch loads a set of audio files into a polybuffer~ and generates an 8-channel ambisonic audio signal from the imported files. In addition, a series of parameters allows the output of the patch to be customized both beforehand and live (using a Leap Motion controller).

These parameters include the volume for each of the 8 channels, a biquad filter, an svf~ filter, and the positioning of sound sources within three-dimensional space (using both generative and manually controlled movement).

The primary benefit of this template is that it auto-generates a multi-channel audio playback object and automatically connects it to the objects from the HOA library, so any project built on the template can focus on customizing parameters rather than building an ambisonic patch from the ground up. In its current form, the patch can generate a sound installation for instant playback from only a handful of audio files (within a particular set of bounds) and the various parameters of the sound as it is played live.

Given more time, I hope to revise this patch further so that it is more flexible and allows more complex ambisonic installations to be generated automatically (up to the 64 channels currently supported by the Higher Order Ambisonics library).

Patch Available Below (Requires Higher Order Ambisonics Library and Leap Motion Object by Jules Françoise):

Dropbox

Final Project – Willow Hong

For the final project I decided to further explore the connection between motion and sound. I incorporated data from the Myo armband into a music synthesizer that uses several techniques I learned in this class.

The synthesizer is composed of two main parts: the motion-data reading section and the music control section. I used an existing myo-osc bridge application (https://github.com/samyk/myo-osc) and UDP messaging to read the armband data, which gives me normalized quaternion values as well as several gesture readings. These data provide a solid foundation for a stable translation from motion to sound.
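
For reference, the same stream could be read outside of Max with a small OSC listener. The sketch below uses the python-osc package; the port and the /myo/orientation and /myo/pose addresses are assumptions and should be checked against the actual myo-osc output.

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

PORT = 7777  # assumed port; match whatever myo-osc is configured to send to

def on_orientation(address, *quat):
    # Expecting a normalized quaternion (w, x, y, z) from the armband.
    print("orientation:", quat)

def on_pose(address, *pose):
    # Gesture readings such as fist / rest / wave.
    print("pose:", pose)

dispatcher = Dispatcher()
dispatcher.map("/myo/orientation", on_orientation)  # assumed address
dispatcher.map("/myo/pose", on_pose)                # assumed address

server = BlockingOSCUDPServer(("127.0.0.1", PORT), dispatcher)
server.serve_forever()
```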

I selected pitch, playback speed, timbre, and reverberation as the manipulation parameters. I downloaded the music as separate instrument stems so that I could play with the parameters of an individual track without interfering with the overall musical flow. After many trials, I settled on the following mapping relationships (sketched in code after the list):

  1. The up/down motion of the arm changes the pitch of the timpani part.
  2. The left/right motion of the arm changes the playback speed of both the timpani and percussion parts.
  3. The fist/rest gesture switches between the piano-based and bass-based core melody.
  4. The rotation of the arm changes the reverberation delay time of the piano melody.
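
The mapping itself boils down to converting the quaternion into arm angles and scaling those angles (plus the gesture state) into parameter ranges. A rough sketch, with illustrative ranges rather than the ones used in the patch:

```python
import math

def quat_to_euler(w, x, y, z):
    """Convert a normalized quaternion to (roll, pitch, yaw) in radians."""
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw

def scale(value, in_lo, in_hi, out_lo, out_hi):
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + max(0.0, min(1.0, t)) * (out_hi - out_lo)

def map_motion(w, x, y, z, pose):
    roll, pitch, yaw = quat_to_euler(w, x, y, z)
    return {
        # 1. arm up/down (pitch) -> timpani pitch shift, in semitones (illustrative)
        "timpani_pitch": scale(pitch, -math.pi / 2, math.pi / 2, -12, 12),
        # 2. arm left/right (yaw) -> playback speed of timpani and percussion stems
        "playback_speed": scale(yaw, -math.pi / 2, math.pi / 2, 0.5, 2.0),
        # 3. fist/rest -> choose the piano- or bass-based core melody
        "melody": "bass" if pose == "fist" else "piano",
        # 4. arm rotation (roll) -> reverb delay time in ms for the piano melody
        "reverb_delay_ms": scale(roll, -math.pi, math.pi, 10, 300),
    }
```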

I recorded a section of the generated music, which is shown below:

The code for the project is as follows:


Project 2: Object Generator – Tanushree Mediratta

For my second project, I decided to continue using the Leap Motion device, but for visual purposes. I created an object generator whose position, size, and color are all manipulated through gestures and positions of the hands. I was also able to incorporate topics we learned in class, such as machine learning and OpenGL, into the project.

Here is a short demonstration:


As in project 1, I created my main patch from scratch:

I modified the visual subpatch of the Leap Motion help file:

This is a modified version of the 'sam' machine learning starter and training patch:

Final Project: Generative Music Soundscape – Matthew Xie

For the final project, I decided to further explore self-generating music in Max/MSP, a step beyond what I created for Project 1. Eight designed sounds are available: five main sounds and three special effects. The patch essentially acts as a sequencer, with tempo and 'beats per bar' as inputs. Each bar, a new sound is triggered completely at random, but both the frequency and the volume of that sound come from analyzing the user's input through the piano keyboard and other settings. The user can also change the sound design of five of the eight sounds through graphs. The piano keyboard doubles as a slider: both frequency and volume are set based on where the user clicks it, and other sliders for the seven different sounds set the octave possibility range. From there, the five main sounds are selected at random; the three FX sounds are also triggered by chance, but that chance is computed inside the subpatch. The sounds are processed through reverb and delay effects. There is also a stuttering effect, which splits each bar into 16 distinct 'steps' (an inspiration from Autechre).
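
A stripped-down sketch of the sequencing logic (not the Max patch itself): each bar, one of the five main sounds is chosen at random, each FX sound fires by chance, and the note's frequency comes from the last keyboard input plus a random octave drawn from a per-sound range. All names and numbers here are placeholders.

```python
import random

MAIN_SOUNDS = ["pad", "pluck", "bell", "drone", "noise"]   # placeholder names
FX_SOUNDS = ["sweep", "glitch", "drop"]
FX_CHANCE = 0.2                                            # chance per bar for each FX sound

def next_bar(base_freq, base_gain, octave_range):
    """Decide what plays this bar.

    base_freq / base_gain come from where the user last clicked the keyboard slider;
    octave_range is (lowest, highest) octave offset allowed for the chosen sound.
    """
    sound = random.choice(MAIN_SOUNDS)
    octave = random.randint(*octave_range)
    events = [(sound, base_freq * (2 ** octave), base_gain)]
    for fx in FX_SOUNDS:
        if random.random() < FX_CHANCE:
            events.append((fx, base_freq, base_gain * 0.5))
    return events

# Example: one bar with the keyboard set near middle C and octaves -1..+1 allowed.
print(next_bar(base_freq=261.63, base_gain=0.8, octave_range=(-1, 1)))
```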

I originally wanted to do a generative music project based on probability and input from the mic, but after researching online, and especially after discovering the group Autechre, I changed my mind. I mainly got my inspiration from their patches. I learned the sound design techniques through the DeliciousMaxTutorials YouTube channel and http://sounddesignwithmax.blogspot.com/. The reverb subpatch is taken from https://cycling74.com/forums/reverb-in-max-msp.

Here is a recording sample of the piece being played:


Code as follows:

Final Project: Small Production Environment – Will Walters

For my final project, I created what I’m calling a Small Production Environment, or SPE. Yes, it’s a bad name.

The SPE consists of three parts. The first is the subtractive synth from my last project, with some quality-of-life and functionality improvements; this serves as the lead of the SPE.

The second is a series of four probabilistic sequencers. These give the SPE the ability to play four separate samples, with a probability specified for each sixteenth note in a four-beat measure. This serves as the rhythm of the SPE.
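
The idea of each sequencer lane, sketched in Python: every sixteenth-note step has its own probability, and on each step the sample either fires or stays silent according to that probability. The step values below are only illustrative.

```python
import random

class ProbabilisticSequencer:
    """One sequencer lane: a sample plus a firing probability per sixteenth-note step."""

    def __init__(self, sample_name, probabilities):
        assert len(probabilities) == 16          # 16 sixteenth notes per measure
        self.sample_name = sample_name
        self.probabilities = probabilities
        self.step = 0

    def tick(self):
        """Advance one sixteenth note; return the sample name if it should play."""
        fire = random.random() < self.probabilities[self.step]
        self.step = (self.step + 1) % 16
        return self.sample_name if fire else None

# Four lanes could be built this way, e.g. a kick locked to the downbeats and a sparse hat.
kick = ProbabilisticSequencer("kick", [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0])
hat = ProbabilisticSequencer("hat", [0.3] * 16)
```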

Finally, the third part is an automated bass line. It plays a sample at a regular (user-defined) interval, detects the key being played by the lead, and shifts the sample to match.
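
The transposition step is essentially a playback-rate ratio: once the lead's key is known, the bass sample is shifted by the corresponding number of semitones. A sketch with a simple note-name lookup; the actual patch relies on the Modal Analysis library for key detection.

```python
NOTE_OFFSETS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def transposition_ratio(sample_key, detected_key):
    """Playback-rate ratio that shifts a sample recorded in sample_key into detected_key."""
    semitones = NOTE_OFFSETS[detected_key] - NOTE_OFFSETS[sample_key]
    if semitones > 6:      # take the shorter path around the octave
        semitones -= 12
    elif semitones < -6:
        semitones += 12
    return 2 ** (semitones / 12)

# Example: a bass sample in C with the lead detected in E -> play ~1.26x faster.
print(transposition_ratio("C", "E"))
```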

The SPE also contains equalization for the bass and drums (jointly), as well as for the lead. In addition, many controls can be altered via MIDI keyboard knobs. A demonstration of the SPE is below.

The code for the main section of the patch can be found here. Its pfft~ subpatch is here.

The embedded sequencer can be found here.

The embedded synth can be found here. Its poly~ subpatch is here.

Thanks to V.J. Manzo for the Modal Analysis library, An0va for the bass guitar samples, Roland for making the 808 (and whoever first extracted the samples I downloaded), and Jesse for his help with the probabilistic sequencers.


Air-DJ, A Final Project by Anish Krishnan

As a pretty heavy music listener, I have always wondered whether I could mix a few songs together and create a mashup of my own. After eagerly surfing the web for an app that would let me do just that, I quickly realized that a mouse and keyboard are not the right interface for working with music. This is exactly why DJs use expensive equipment with knobs and dials: so they can quickly achieve the effect they are going for. For my final project, I made an Air-DJ application in Max that lets you manipulate your music in a variety of ways with your hands, never touching the mouse or keyboard. Using a Leap Motion sensor, I mapped a range of gestures to different aspects of a song.

After selecting a song to play, you can use your left hand to add beats. You can add three different types of beats by moving your hand forward, backward, or to the left. Raising or lowering your hand changes the volume/gain of the beat.

Your right hand controls the main track. Again, raising or lowering it controls the volume/gain of the song. Pinching your fingers decreases the cutoff frequency of a low-pass filter. Moving your right hand toward or away from the screen (on the z-axis) drives a phase multiplier, and moving it sideways increases a delay time.
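
Taken together, the right hand is a bundle of continuous controls read from the Leap frame and scaled into audio parameter ranges. A sketch with hypothetical ranges; the pinch strength is assumed to be the 0-1 value the Leap reports.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + max(0.0, min(1.0, t)) * (out_hi - out_lo)

def right_hand_controls(palm_x_mm, palm_y_mm, palm_z_mm, pinch_strength):
    return {
        # Hand height -> track gain.
        "gain": scale(palm_y_mm, 100, 400, 0.0, 1.0),
        # Pinch closes down a low-pass filter (stronger pinch = lower cutoff).
        "lowpass_cutoff_hz": scale(pinch_strength, 0.0, 1.0, 18000, 300),
        # Toward/away from the screen (z-axis) -> phase multiplier amount.
        "phase_multiplier": scale(palm_z_mm, -150, 150, 1, 8),
        # Sideways motion -> delay time in ms.
        "delay_ms": scale(palm_x_mm, -200, 200, 0, 500),
    }
```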

Here are a few screenshots of the patch:


And here is the video of the whole thing!

Original song:

https://drive.google.com/open?id=1Z7nWcNn6fCZ3dw5nnWZ5tU52breicnIr

Air-DJ’d Song:

https://drive.google.com/open?id=1KseRhpuURgx3AZ6PN6Z14abrB5dDtoBS


All the important files are below:

Google Drive link containing all files: https://drive.google.com/open?id=1FmMiDLyB4gIbOK6bx0KgIbESSKyNBcA1

Github Gist: https://gist.github.com/anonymous/4570d6ae97e13fe29337a57a97fb81e5

Project 2: Swiss Design Poster Installation – Sarika Bajaj

This past semester, I have been conducting a research project under Prof. Susan Finger to install projection systems around the IDeATe Hunt basement, creating a platform for students in the animation, game design, and intelligent environments minors to publicly display their work. My projects for Twisted Signals therefore revolved around creating demos for the interactive projection system using Max. My first project, a virtual ball pit, was a good exercise in learning how to use the Kinect, but it was not a conceptually heavy demo. For my second project, I wanted to make a system that would actually teach users something.

The concept I settled on was a system that lets users interact with the Hunt Swiss Poster collection, an extensive set of extraordinary Swiss design posters housed in the Hunt Library that very few students know exists. Originally, I had planned to use the Kinect to let users “draw something” with a colored depth map, which would then be processed to display the closest Swiss design poster. However, in my early prototyping it became apparent that the interaction was not as obvious as it could be, which was leading to a weaker installation. Moreover, because I have had to borrow all of my equipment from IDeATe for every project, I ran into the issue that every Kinect, as well as my specific computer, was checked out for the time span I needed to work on this project. Therefore, I had to pivot.

While planning the projection installation, we were hit with the news that the Kinect would no longer be produced. Since I was forced to work without a Kinect anyway, I decided to create an interesting interaction with just an RGB camera, which thankfully will probably always be produced. I also realized that, although it is a far more difficult path, the best possible way for users to interact with these Swiss posters is to become a literal part of them, which means every single poster has to be designed uniquely. However, this direction also opens an avenue for other students to participate in the project if they are looking for project ideas.

Therefore, for my Project 2, I created two different Swiss poster exhibits, as well as a very simple UI that an IDeATe staff member would use when turning on the projection system each morning. Each exhibit has an interactive display that mimics a Swiss poster design and is placed next to the original Swiss poster, some information about the poster, and some information about the project.

First Exhibit: 

Second Exhibit: 

UI Snapshot: 

Gist of Code: 

Final Project – Jonathan Namovic

I decided to build on my patch from project 1 and continue my journey of sound synthesis and instrument creation. I cleaned up the patch, made a presentation view for easier interaction, and added three new instruments. The first is a take on FM synthesis with clipping effects that results in a harmonized growling sound. For the second instrument, I applied the same clipping effect to a long resonant filter over a click to create a kind of rounded square wave. For the last instrument, I filtered the resonant click in a new way to create a sound similar to a church organ.
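
The growl recipe (FM synthesis followed by clipping) can be sketched numerically: a modulator frequency-modulates a carrier and the result is hard-clipped to add harmonics. The frequencies, index, and clip threshold below are placeholders, not the patch's values.

```python
import numpy as np

def fm_growl(carrier_hz=110.0, ratio=0.5, index=4.0, clip=0.4,
             duration=1.0, sr=44100):
    """FM pair followed by hard clipping: a rough stand-in for the growl instrument."""
    t = np.arange(int(duration * sr)) / sr
    modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
    fm = np.sin(2 * np.pi * carrier_hz * t + index * modulator)
    return np.clip(fm, -clip, clip) / clip   # clip, then normalize back to +/-1

signal = fm_growl()   # write to a wav file or play back to hear the result
```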

Here is a sample loop I created using the new instruments and the drums from my old patch.

And here is the code:

Main Patch:

Instruments from Patch 1:

Drum Patch:

Basic Notes:

New Instruments

Growl:

Square Wave:

Church Organ: