I decided to build on my patch from Project 1 and continue my journey of sound synthesis and instrument creation. I cleaned up the patch, made a presentation view for easier interaction, and added three new instruments. The first is a take on FM synthesis and clipping effects that results in a harmonized growling sound. For the second instrument, I took the same clipping effect and applied it to a long resonant filter over a click, creating a kind of rounded square wave. For the last instrument, I filtered the resonant click in a new way to create a sound similar to a church organ.
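To give a rough idea of the growl instrument's technique (not the actual patch), here is a minimal numpy sketch of an FM pair run through a hard clipper; the carrier frequency, ratio, index, and clip level are illustrative stand-ins:

```python
import numpy as np

SR = 44100  # sample rate

def fm_growl(carrier_hz, ratio, index, clip_level, dur=1.0):
    """One FM pair (carrier + modulator) followed by a hard clipper."""
    t = np.arange(int(SR * dur)) / SR
    modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
    tone = np.sin(2 * np.pi * carrier_hz * t + index * modulator)
    # Overdrive the FM output into the clipper for the buzzy "growl".
    return np.clip(tone * 2.0, -clip_level, clip_level) / clip_level

note = fm_growl(carrier_hz=110.0, ratio=2.01, index=4.0, clip_level=0.6)
```

The slightly detuned modulator ratio is what makes the harmonics beat against each other; an exact integer ratio gives a cleaner, more static tone.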
Here is a sample loop I created using the new instruments and the drums from my old patch.
For this project, I used spectral analysis along with machine learning to create a system for chord recognition. The system works by writing FFT frequency-bin amplitudes into a matrix, taking “snapshots” of that matrix, and sending each snapshot as a list to the ml.svm object for classification. While the system could easily work with any audio source, for this demonstration I made a simple polyphonic synth using sawtooth oscillators and a MIDI controller to play chords for the system to analyze. The challenge with this project was devising a way to process the data from the FFT matrix and mold it into a form the SVM can use while still retaining enough information to identify specific chord spectra. Given enough training data, the system can recognize, for example, the difference between a C major chord and a C minor chord.
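As a rough illustration of the pipeline (FFT snapshot → list → SVM), here is a minimal Python sketch using numpy and scikit-learn in place of the Max objects; the snapshot size, bin pooling, and toy sawtooth synth are all assumptions, not the patch's actual parameters:

```python
import numpy as np
from sklearn.svm import SVC

SR, N_FFT, N_BINS = 44100, 4096, 64   # 64 pooled bins per "snapshot"

def snapshot(audio_frame):
    """Reduce one FFT frame to a short list of averaged bin amplitudes."""
    mags = np.abs(np.fft.rfft(audio_frame, n=N_FFT))
    pooled = mags[: N_BINS * 32].reshape(N_BINS, 32).mean(axis=1)
    return pooled / (pooled.max() + 1e-9)   # normalize away overall level

def saw_chord(midi_notes, dur=0.2):
    """Toy sawtooth synth standing in for the polyphonic Max synth."""
    t = np.arange(int(SR * dur)) / SR
    freqs = 440.0 * 2 ** ((np.asarray(midi_notes) - 69) / 12)
    return sum(2 * ((f * t) % 1.0) - 1.0 for f in freqs)

# Train on noisy takes of two chords, then classify a new snapshot.
chords = {0: [60, 64, 67], 1: [60, 63, 67]}   # 0 = C major, 1 = C minor
X, y = [], []
for label, notes in chords.items():
    for _ in range(10):
        take = saw_chord(notes) + 0.01 * np.random.randn(int(SR * 0.2))
        X.append(snapshot(take))
        y.append(label)
clf = SVC().fit(X, y)
print(clf.predict([snapshot(saw_chord([60, 63, 67]))]))   # -> [1]
```

Pooling neighboring FFT bins keeps the feature list short enough for the SVM while preserving the overall spectral shape that distinguishes one chord quality from another.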
In this demonstration I show how to train the SVM and how to map new chords. At the end I show that the system does not recognize a chord played an octave higher. This can be fixed easily by training one chord played in several octaves under the same label (for example, C major chords with roots C3, C4, and C5 as state 1; D major chords with roots D3, D4, and D5 as state 2; and so on).
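Continuing the sketch above, that fix could look like this (MIDI numbers assume the C3 = 48 convention):

```python
# Train each chord quality in several octaves under the same label, so a
# transposed voicing still lands in the same class.
C_MAJOR, D_MAJOR = 0, 1
for root in (48, 60, 72):                                  # C3, C4, C5
    X.append(snapshot(saw_chord([root, root + 4, root + 7])))
    y.append(C_MAJOR)
for root in (50, 62, 74):                                  # D3, D4, D5
    X.append(snapshot(saw_chord([root, root + 4, root + 7])))
    y.append(D_MAJOR)
clf = SVC().fit(X, y)
```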
For my final project I decided to explore more ways of using the Leap Motion sensor to control different elements of drawing. I made a game in which the tracked coordinates of a hand are translated into both the rotation and the size of a square, which the player tries to match to a target square. When the squares match closely enough, the game moves on to a new target. I have attached a demo of me playing it.
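A minimal sketch of the matching logic, with made-up coordinate ranges and tolerances (the real patch's mapping may differ):

```python
import numpy as np

def hand_to_square(palm_pos, x_range=(-200, 200), y_range=(80, 400)):
    """Map Leap palm coordinates (mm) to a square's rotation and size.

    Hypothetical mapping: horizontal position -> rotation (degrees),
    hand height -> side length in pixels.
    """
    rot = np.interp(palm_pos[0], x_range, (0, 360))
    size = np.interp(palm_pos[1], y_range, (20, 300))
    return rot, size

def matched(square, target, rot_tol=10.0, size_tol=15.0):
    """Advance to the next target when rotation and size are close enough."""
    rot_err = abs((square[0] - target[0] + 180) % 360 - 180)  # wrap-around
    return rot_err < rot_tol and abs(square[1] - target[1]) < size_tol

print(matched(hand_to_square((10.0, 210.0)), target=(189.0, 130.0)))
```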
I was also very interested in learning more about different uses for the machine learning patch. I trained the Leap Motion to detect three different hand gestures: a palm facing down, a fist, and a “c” for camera. As shown in the demo below, when I make a “C” with my hand, I can take pictures with the camera. I can then use my left hand to distort the image taken. This distortion was influenced by this tutorial.
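The gesture side could be sketched like this, with invented feature vectors standing in for the Leap hand data and scikit-learn's SVC standing in for the machine learning patch:

```python
from sklearn.svm import SVC

# Toy feature vectors standing in for Leap hand data
# (e.g. palm-normal y, grab strength, finger spread) -- assumed features.
X = [[-0.9, 0.1, 0.8],   # palm facing down
     [-0.1, 0.95, 0.1],  # fist
     [-0.3, 0.4, 0.5]]   # "c" for camera
y = ["palm_down", "fist", "c"]
clf = SVC().fit(X, y)

if clf.predict([[-0.25, 0.45, 0.55]])[0] == "c":
    print("snap!")  # stands in for triggering the camera in the patch
```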
Body Paint is the visual component of a commission from the Pittsburgh Children’s Museum in collaboration with three sound artists from the School of Drama.
The project is an interactive experience that uses the Kinect 2 to transform each participant’s head, hands, and feet into paintbrushes for drawing colored lines and figures in space. Each session lasts one minute, after which the patch clears the canvas, allowing a new user to take over and begin again.
Participants might attempt to draw representational shapes, or perhaps dance in space and see what patterns emerge from their movements.
The user’s head draws in green, the left hand in magenta, the right hand in red, the left foot in orange, and the right foot in blue.
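A minimal sketch of the drawing loop, assuming hypothetical joint names and a simple per-joint trail; the colors are the ones listed above:

```python
import time

# Assumed joint names, with the colors from the patch (RGB 0-1).
JOINT_COLORS = {
    "head":       (0.0, 1.0, 0.0),   # green
    "left_hand":  (1.0, 0.0, 1.0),   # magenta
    "right_hand": (1.0, 0.0, 0.0),   # red
    "left_foot":  (1.0, 0.5, 0.0),   # orange
    "right_foot": (0.0, 0.0, 1.0),   # blue
}

SESSION_SECONDS = 60
trails = {joint: [] for joint in JOINT_COLORS}
session_start = time.time()

def on_frame(joint_positions):
    """Append each tracked joint to its colored trail; reset every minute."""
    global session_start
    if time.time() - session_start > SESSION_SECONDS:
        for trail in trails.values():
            trail.clear()          # wipe the canvas for the next participant
        session_start = time.time()
    for joint, pos in joint_positions.items():
        trails[joint].append(pos)  # drawn later as a line in JOINT_COLORS[joint]

on_frame({"head": (0.0, 1.7, 2.1), "left_hand": (-0.4, 1.2, 2.0)})
```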
Body Paint will be installed in the Museum in late January for a currently undefined period of time, free for participants to wander up to, discover, and experience during their visit.
Visual documentation of the patch in presentation and patcher modes and a video recording of the results of my body drawing in space are below.
For this project I wanted to explore the possibility of using voice-to-text to control Max. After a lot of research and trial and error, I found that speech processing is better suited to other programs, like Processing or Google APIs. So the project transformed into a small “performance” piece where, with a little behind-the-scenes magic, the user can ask to know their future.
For this project I explored the connection between movement and music, and essentially created my own theremin, which is an instrument that controls the frequency and amplitude of sounds using hand movement.
I used the Leap Motion sensor to read the absolute position of my left hand along the z (vertical) axis, and the range of that data stream is translated into 8 MIDI notes from C3 to C4. The velocities of my right ring finger are normalized and then mapped onto the computer system’s volume scale, so the faster my right hand moves, the higher the volume.
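A minimal sketch of both mappings, with assumed hand-height and finger-speed ranges (the actual calibration in the patch may differ):

```python
import numpy as np

# One octave of C major, C3..C4 (MIDI 48..60, using the C3 = 48 convention).
SCALE = [48, 50, 52, 53, 55, 57, 59, 60]

def height_to_note(z, z_min=50.0, z_max=450.0):
    """Quantize the left hand's height (mm, assumed range) to one of 8 notes."""
    idx = int(np.clip((z - z_min) / (z_max - z_min) * len(SCALE),
                      0, len(SCALE) - 1))
    return SCALE[idx]

def speed_to_volume(finger_velocity, v_max=1000.0):
    """Normalize right-ring-finger speed (mm/s) to a 0..1 volume."""
    return float(np.clip(np.linalg.norm(finger_velocity) / v_max, 0.0, 1.0))

print(height_to_note(260.0), speed_to_volume((300.0, -150.0, 40.0)))
```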
I also added a slowly rotating noise point cloud to create some visual atmosphere. Note changes are reflected in the color of the visualization, and volume changes alter the size of the cloud.
For my project I wanted to see what I could do by modifying mesh points in a 3D model. For me this was an exploration of how Max reads 3D models, and I ended up with a crashing patch that took in a model and distorted it using a .mov file. It used the normals of the model to extrude the vertices, and each point was modified in accordance with the video. I eventually tried to use a pfft~ to get it to react to sound. It created really cool textures, but unfortunately I can’t get the patch to open anymore. Below are screenshots of what I had.
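Since the patch itself is lost, here is a rough numpy sketch of the extrusion idea, with an arbitrary pixel-sampling scheme standing in for whatever lookup the patch actually used:

```python
import numpy as np

def extrude(vertices, normals, frame, amount=0.2):
    """Push each vertex out along its normal, scaled by a video pixel.

    vertices, normals: (N, 3) arrays; frame: (H, W) grayscale video frame.
    Each vertex samples one pixel (here just by index, a stand-in for a
    real UV lookup) and moves along its normal by that brightness.
    """
    h, w = frame.shape
    idx = np.arange(len(vertices))
    brightness = frame[idx % h, (idx * 7) % w]        # arbitrary sampling
    return vertices + normals * (brightness * amount)[:, None]

verts = np.random.rand(100, 3)
norms = verts / np.linalg.norm(verts, axis=1, keepdims=True)
frame = np.random.rand(240, 320)                      # one .mov frame, 0..1
print(extrude(verts, norms, frame).shape)             # (100, 3)
```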
I based this off of the duck distortion patch and this one I found on the internet:
For this project I tried making reactive visuals for rave music. There are two main components: a video of a 3D model dancing, and an actual 3D model jumping around in space. The video is separated into R, G, and B planes, which are then moved around in sync with the music. The 3D model is distorted by a jit.catch on the signal and bounced around on beat with the incoming audio.
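A minimal sketch of the plane-separation effect, with an invented per-plane offset pattern standing in for the patch's sync logic:

```python
import numpy as np

def shift_planes(frame, beat_phase, max_shift=12):
    """Split a frame into R/G/B planes and offset each one differently.

    frame: (H, W, 3) uint8 image; beat_phase: 0..1 position within the beat.
    """
    out = np.empty_like(frame)
    for c, direction in enumerate((1, -1, 2)):        # per-plane drift
        shift = int(max_shift * np.sin(2 * np.pi * beat_phase) * direction)
        out[..., c] = np.roll(frame[..., c], shift, axis=1)
    return out

frame = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)
print(shift_planes(frame, beat_phase=0.25).shape)     # (120, 160, 3)
```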
The beat/bpm detection was done by an external object that was rather inaccurate but still created some cool visual effects.
One thing I really wanted to add that would have made this project a lot more interesting is having it fade in and out of these separate visual components based on the activity of the low end. Since rave music is largely driven by kick drums, moments in a song where the kick drum or bass is absent are generally tense and dramatic, and having the visuals correspond to those moments would have been really key. I started by measuring the difference in peak amplitudes on each beat of a low-pass-filtered signal but couldn’t find a meaningful delta. I then tried to simply map the amplitude of the filtered signal to the alpha channels of the layers, but the 3D model wouldn’t respond to a change in alpha values.
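For what it's worth, a sketch of the low-end-to-alpha idea might look like this, with a one-pole low-pass and an RMS envelope; the cutoff and scaling values are guesses:

```python
import numpy as np

SR = 44100

def lowpass(x, cutoff_hz=120.0):
    """One-pole low-pass, keeping roughly the kick-drum band."""
    a = np.exp(-2 * np.pi * cutoff_hz / SR)
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = a * acc + (1 - a) * s
        y[i] = acc
    return y

def low_end_alpha(block, floor=0.02):
    """Map low-end energy of one audio block to a 0..1 alpha for the layer."""
    env = np.sqrt(np.mean(lowpass(block) ** 2))       # RMS envelope
    return float(np.clip((env - floor) / (0.2 - floor), 0.0, 1.0))

kick = np.sin(2 * np.pi * 60 * np.arange(4410) / SR)  # fake 60 Hz thump
print(low_end_alpha(kick))
```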
Overall, I think I could greatly improve on this project by more accurately measuring beats/bpm and getting the triggering/fading working. Below is a low-res recording of the visuals as well as the pasted patch.
In this project, I explored the harmony and intervals in MIDI files and visualized these qualities. Each MIDI note is represented by a cube, which is pressed down or pulled up when the note is turned on or off. A noise value related to the dissonance of the currently held chord is generated and applied to the position attribute of the cubes. The rendered mesh object changes its color mode when a root-note change is detected in the chord. Unfortunately, the visualization is pretty crude and I’m still very far from what I wanted to do. I had some trouble trying to manipulate each cube independently in a more creative way in the jit.gl.multiple context, for example applying a glow effect to a specific cube when a note is turned on. My major plan is to improve my method of generating the cubes so that ultimately they can be manipulated independently.
This part of the patch evaluates the intervals in the currently held chord and assigns a dissonance value to it. The current evaluation is subjective and cannot accommodate inversions or the subtle differences between complex chords. For future work, I will integrate the material discussed in this note and produce more robust evaluations: http://www.oneonta.edu/faculty/legnamo/theorist/density/density.html
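A minimal sketch of this kind of pairwise interval scoring, with weights that are just as subjective as the patch's:

```python
# Rough dissonance weights for pitch-class intervals (0 = unison ... 6 = tritone).
DISSONANCE = {0: 0.0, 1: 0.9, 2: 0.6, 3: 0.25, 4: 0.2, 5: 0.1, 6: 1.0}

def chord_dissonance(midi_notes):
    """Sum a weight for every pair of notes in the currently held chord."""
    total = 0.0
    notes = sorted(midi_notes)
    for i in range(len(notes)):
        for j in range(i + 1, len(notes)):
            interval = (notes[j] - notes[i]) % 12
            interval = min(interval, 12 - interval)   # fold to 0..6
            total += DISSONANCE[interval]
    return total

print(chord_dissonance([60, 64, 67]))  # C major triad -> low value
print(chord_dissonance([60, 61, 66]))  # cluster with a tritone -> higher
```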
This part of the patch generates the noise value and applies it to the position matrix.
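Sketched in Python, with the scale factor as an assumption:

```python
import numpy as np

def jitter_positions(positions, dissonance, scale=0.05):
    """Offset each cube's position by noise scaled by the chord's dissonance."""
    noise = np.random.uniform(-1.0, 1.0, positions.shape)
    return positions + noise * dissonance * scale

cubes = np.zeros((12, 3))                  # e.g. one cube per pitch class
print(jitter_positions(cubes, dissonance=0.55)[0])
```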
A short demonstration of the patch (I know the visualization still looks too simple; I will work on more ways to integrate the processed signal into the rendered objects).