I built a system that uses two Kinects to track the movements of two dancers, a digital synthesizer that generates sound based solely on the skeleton data, and particle-pattern visuals that change based on both the skeleton data and the sound itself.
For the Kinect part, I use each user's head height, the distance between their hands, and their left-right body position. To keep the performance stable, if any of these values from one of the Kinects stops changing, which indicates that the person may have moved out of that Kinect's range, I reset the sliders that send MIDI data to the synthesizer so that the filters are not accidentally left at a very low setting.
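A rough sketch of that reset logic, written in Python rather than Max, with hypothetical names (the send_midi_cc callback, the two-second threshold, and the default slider value are all assumptions for illustration):

```python
import time

STALE_SECONDS = 2.0       # assumed time without change before the tracker counts as lost
DEFAULT_CC_VALUE = 100    # assumed "safe" slider value so the filters are not left nearly closed

class KinectChannel:
    """Watches one skeleton-derived stream (e.g. head height or hand distance)."""

    def __init__(self, name, send_midi_cc):
        self.name = name
        self.send_midi_cc = send_midi_cc  # hypothetical callback that drives one synth slider
        self.last_value = None
        self.last_change = time.time()

    def update(self, value):
        # Remember when the value last actually changed.
        if value != self.last_value:
            self.last_value = value
            self.last_change = time.time()
        # If it has been frozen too long, assume the dancer left the Kinect's range
        # and reset the slider to a safe default instead of leaving a filter stuck low.
        if time.time() - self.last_change > STALE_SECONDS:
            self.send_midi_cc(self.name, DEFAULT_CC_VALUE)
        else:
            self.send_midi_cc(self.name, value)
```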
For the synthesizer part, I split the sound into two paths, one processed by the filters and one left dry, to reduce the chance that the sound is completely silenced during the performance. The synthesizer has 13 presets to choose from as starting points.
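The split is essentially a dry/wet mix; here is a minimal sketch of the idea outside Max, with the equal 0.5/0.5 balance chosen arbitrarily:

```python
def process_block(block, filter_fn, dry_gain=0.5, wet_gain=0.5):
    """Mix an unfiltered copy of the signal with the filtered copy.

    Even if filter_fn closes down completely (e.g. the cutoff is driven to zero),
    the dry path keeps some sound audible during the performance.
    """
    return [dry_gain * s + wet_gain * f for s, f in zip(block, filter_fn(block))]
```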
In the particle-pattern visuals, the pattern is distorted by the sound and its size is controlled by one of the dancers. Depending on where the two dancers are, the particles also move left or right with them.
I kept the same overall layout, with a video feed split into separate RGB color planes that are shifted against each other, but instead of a single looping video I created a playlist of videos that can be switched by making a fist. I also altered the playback speed of the video using the position of the right palm over the sensor.
Instead of the problematic beat-detection object from the first version, I built a simple tap tempo for BPM, using a timer and some zl objects.
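The tap-tempo logic just times the gaps between taps, averages the last few, and converts to BPM. A minimal sketch of the same idea outside Max (the four-interval window is an assumption):

```python
import time

class TapTempo:
    def __init__(self, window=4):
        self.window = window  # how many recent intervals to average
        self.taps = []

    def tap(self):
        """Call on every tap; returns the current BPM estimate, or None until there are two taps."""
        self.taps.append(time.time())
        self.taps = self.taps[-(self.window + 1):]  # keep only the most recent taps
        if len(self.taps) < 2:
            return None
        intervals = [b - a for a, b in zip(self.taps, self.taps[1:])]
        return 60.0 / (sum(intervals) / len(intervals))
```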
If I were to continue this further, I would look into more interesting parameters to tweak, as well as ways to add more visual diversity.
Patch:
This patch acts as a template for loading a set of audio files into a polybuffer~ and generating an 8-channel ambisonic audio signal from the imported files. In addition, a series of parameters allows the output of the patch to be customized both beforehand and live (using a Leap Motion controller).
These parameters include the volume for each of the 8 channels, a biquad filter, an svf~ filter, and the positioning of sound sources within three-dimensional space (using both generative and manually controlled movement).
The primary benefit of this template is that it auto-generates a multichannel audio playback object and automatically connects it to the objects from the hoa library, so any project built on the template can focus on customizing parameters rather than building an ambisonic patch from the ground up. In its current form, the patch can produce a sound installation ready for immediate playback from only a handful of audio files (within a particular set of bounds), with various parameters of the sound controllable as it plays live.
Given more time, I hope to revise this patch further so that it is more flexible and allows more complex ambisonic installations to be generated automatically (up to the 64 channels currently supported by the Higher Order Ambisonics library).
Patch Available Below (Requires Higher Order Ambisonics Library and Leap Motion Object by Jules Françoise):
The synthesizer is composed of two main parts: the motion-data reading section and the music control section. I used an open-source myo-osc communication application (https://github.com/samyk/myo-osc) and UDP messaging to read the armband data. I am able to obtain normalized quaternion values as well as several gesture readings. These data laid a solid foundation for a stable translation from motion to sound.
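For reference, receiving those UDP/OSC messages outside Max could look roughly like the sketch below, using the python-osc package; the /myo/orientation and /myo/pose addresses and port 7777 are assumptions about how myo-osc is configured, not values taken from this project.

```python
# Minimal OSC receiver sketch (pip install python-osc).
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_orientation(address, *args):
    # Interpret the first four floats as a normalized quaternion (w, x, y, z).
    w, x, y, z = args[:4]
    print(f"orientation w={w:.3f} x={x:.3f} y={y:.3f} z={z:.3f}")

def on_pose(address, *args):
    # Gesture readings (fist, wave in/out, fingers spread, ...).
    print("pose:", args)

dispatcher = Dispatcher()
dispatcher.map("/myo/orientation", on_orientation)  # assumed address pattern
dispatcher.map("/myo/pose", on_pose)                # assumed address pattern

server = BlockingOSCUDPServer(("127.0.0.1", 7777), dispatcher)  # assumed port
server.serve_forever()
```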
I selected pitch, playback speed, timbre, and reverberation as the manipulation parameters. I downloaded the music as separate instrument stems so that I could play with the parameters of an individual track without interfering with the overall flow of the music. After many trials, I eventually settled on the following mapping:
I recorded a section of the generated music, which is shown below:
The code for the project is as follows:
Here is a short demonstration:
As in Project 1, I created my main patch from scratch:
I modified the visual subpatch of the Leap Motion help file:
This is a modified version of the machine learning sam starter and training patch:
I originally wanted to do a generative music project based on probability and input from the mic, but after researching online, and especially after discovering the music group Autechre, I changed my mind. I got my inspiration mainly from their patches. The sound design was learned from the DeliciousMaxTutorials YouTube channel and http://sounddesignwithmax.blogspot.com/. The reverb subpatch is taken from https://cycling74.com/forums/reverb-in-max-msp.
Here is a recording sample of the piece being played:
Code as follows:
The SPE consists of three parts. The first is the subtractive synth from my last project, with some quality-of-life and functionality improvements; this serves as the lead of the SPE.
The second is a set of four probabilistic sequencers, which let the SPE play four separate samples with a probability specified for each sixteenth note in a four-beat measure. This serves as the rhythm of the SPE.
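The core of each sequencer is just a weighted coin flip per sixteenth-note step. A hedged sketch of that idea (the sample names and probability values are placeholders, not the patch's actual settings):

```python
import random

# One probability per sixteenth note in the measure (16 steps), each between 0 and 1.
kick_probs = [0.9, 0.0, 0.2, 0.0, 0.8, 0.0, 0.1, 0.0,
              0.9, 0.0, 0.2, 0.0, 0.7, 0.3, 0.1, 0.0]

def on_clock_tick(step_index, probs, trigger_sample):
    """On each sixteenth-note tick, fire the sample with the probability set for that step."""
    if random.random() < probs[step_index % len(probs)]:
        trigger_sample()  # e.g. play the 808 sample assigned to this sequencer

# Four independent sequencers would each hold their own probability list and sample.
```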
Finally, the third part is an automated bass line, which plays a sample at a regular (user-defined) interval. It also detects the key the lead is playing in and shifts the sample accordingly to match.
The SPE also contains equalization for the bass and drums (jointly) as well as for the lead. In addition, many controls can be altered via the knobs of a MIDI keyboard. A demonstration of the SPE is below.
The code for the main section of the patch can be found here. Its pfft~ subpatch is here.
The embedded sequencer can be found here.
The embedded synth can be found here. Its poly~ subpatch is here.
Thanks to V.J. Manzo for the Modal Analysis library, An0va for the bass guitar samples, Roland for making the 808 (and whoever first extracted the samples I downloaded), and Jesse for his help with the probabilistic sequencers.
After selecting a song to play, you can use your left hand to add beats. You can add three different types of beats by moving your hand forward, backward, or to the left, and raising or lowering your hand changes the volume/gain of the beat.
Your right hand controls the main track. Again, raising or lowering it controls the volume/gain of the song. Pinching your fingers decreases the cutoff frequency of a low-pass filter. I also implemented a phase multiplier controlled by moving your right hand toward and away from the screen (on the z-axis). Finally, moving your right hand sideways increases the time of an incorporated delay.
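All of these controls come down to scaling a hand coordinate into a parameter range, much like Max's scale object. A sketch of that kind of mapping (the coordinate ranges and parameter bounds below are illustrative guesses, not the values used in the patch):

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear mapping similar to Max's [scale] object, clamped to the output range."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = min(max(t, 0.0), 1.0)
    return out_lo + t * (out_hi - out_lo)

def right_hand_controls(palm_x_mm, palm_y_mm, palm_z_mm, pinch_strength):
    """Map right-hand tracking data to the four controls described above (assumed ranges)."""
    gain       = scale(palm_y_mm, 100, 500, 0.0, 1.0)              # height -> volume/gain
    cutoff_hz  = scale(1.0 - pinch_strength, 0.0, 1.0, 200, 8000)  # pinch closes the low-pass
    phase_mult = scale(palm_z_mm, -150, 150, 1, 8)                 # depth -> phase multiplier
    delay_ms   = scale(palm_x_mm, -200, 200, 0, 500)               # sideways -> delay time
    return gain, cutoff_hz, phase_mult, delay_ms
```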
Here are a few screenshots of the patch:
And here is the video of the whole thing!
Original song:
https://drive.google.com/open?id=1Z7nWcNn6fCZ3dw5nnWZ5tU52breicnIr
Air-DJ’d Song:
https://drive.google.com/open?id=1KseRhpuURgx3AZ6PN6Z14abrB5dDtoBS
All the important files are below:
Google Drive link containing all files: https://drive.google.com/open?id=1FmMiDLyB4gIbOK6bx0KgIbESSKyNBcA1
Github Gist: https://gist.github.com/anonymous/4570d6ae97e13fe29337a57a97fb81e5
The concept that I settled on was to make a system that lets users interact with the Hunt Swiss Poster collection, an extensive set of extraordinary Swiss design posters housed in the Hunt Library that very few students know exists. Originally, I had planned on using the Kinect to let users “draw something” with a colored depth map that would then be processed to display the closest-matching Swiss design poster. However, during my early prototyping it became apparent that the interaction was not as obvious as it could be, which was leading to a weaker installation. Moreover, since I have had to borrow all of my equipment from IDeATe for every project, I ran into the issue that every Kinect, and my specific computer, was checked out for the time span I needed to work on this project. Therefore, I had to pivot.
While planning the projection installation, we were hit with the news that the Kinect was no longer going to be produced. Since I was forced to work without a Kinect anyway, I decided to create an interesting interaction with just an RGB camera, which thankfully will probably always be produced. Additionally, I realized that, although it was a far more difficult path, the best possible way for users to interact with these Swiss posters was to become a literal part of them, which meant every single poster would have to be designed uniquely. However, this direction also opens an avenue for several other students to participate in this project if they are short on ideas for their own.
Therefore, for my Project 2, I created two different Swiss poster exhibits, as well as a very simple UI that an IDeATe staff member would use when turning on the projection system each morning. Each exhibit has an interactive display that mimics a Swiss poster design and is placed next to the original Swiss poster, some information about the poster, and some information about the project.
First Exhibit:
Second Exhibit:
UI Snapshot:
Gist of Code:
Here is a sample loop I created using the new instruments and the drums from my old patch.
And here is the code:
Main Patch:
Instruments from Patch 1
Drum Patch:
Basic notes:
New Instruments
Growl:
Square Wave:
Church Organ: