In this project, I explored harmony and intervals in MIDI files and visualized those qualities. Each MIDI note is represented by a cube, which is pressed down when the note turns on and pulled back up when it turns off. A noise value tied to the dissonance of the currently held chord is generated and applied to the cubes' position attribute. The rendered mesh changes its color mode when a root-note change is detected in the chord. Unfortunately, the visualization is still crude and far from what I wanted to do. I had some trouble manipulating each cube independently in a more creative way within the jit.gl.multiple context, for example applying a glow effect to a specific cube when its note is turned on. My main plan is to improve how I generate the cubes so that ultimately they can be manipulated independently.
This part of the patch evaluates the intervals in the currently held chord and assigns it a dissonance value. The current evaluation is subjective and cannot accommodate inversions or the subtle differences between complex chords. For future work, I will integrate the material discussed at the link below to produce more robust evaluations. http://www.oneonta.edu/faculty/legnamo/theorist/density/density.html
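As a rough illustration of the idea (not the actual weighting the patch uses), here is a minimal Python sketch that scores a held chord by averaging a hand-assigned dissonance rank over every interval between note pairs; the rank table is an assumption for demonstration:

```python
# Hand-assigned dissonance ranks per interval class (illustrative values,
# not the table the patch uses). Intervals are in semitones mod 12.
DISSONANCE_RANK = {
    0: 0.0,            # unison / octave
    7: 0.1, 5: 0.2,    # perfect fifth / fourth
    4: 0.3, 3: 0.4,    # major / minor third
    9: 0.3, 8: 0.4,    # major / minor sixth
    2: 0.6, 10: 0.6,   # major second / minor seventh
    6: 0.9,            # tritone
    11: 0.8, 1: 1.0,   # major seventh / minor second
}

def chord_dissonance(notes):
    """Average the rank over every pair of held MIDI notes."""
    pairs = [(a, b) for i, a in enumerate(notes) for b in notes[i + 1:]]
    if not pairs:
        return 0.0
    return sum(DISSONANCE_RANK[abs(a - b) % 12] for a, b in pairs) / len(pairs)

print(chord_dissonance([60, 64, 67]))  # C major triad -> low score
print(chord_dissonance([60, 61, 66]))  # cluster with a tritone -> high score
```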
This part of the patch generates the noise value and applies it to the position matrix.
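Conceptually, that step works like the following sketch (the function name and the 0.05 scale are assumptions for illustration): the dissonance score sets the amplitude of a random displacement added to each cube's position.

```python
import numpy as np

def jitter_positions(positions, dissonance, scale=0.05):
    """positions: (n_cubes, 3) array; dissonance in 0..1.
    More dissonant chords produce larger random displacement."""
    noise = np.random.uniform(-1.0, 1.0, positions.shape)
    return positions + scale * dissonance * noise
```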
A short demonstration of the patch (I know the visualization still looks too simple; I will work on more ways to integrate the processed signal into the rendered objects).
For Project 1, I created a self-generated melody & drone patch.
First off, a wav file of single piano notes played consecutively is analyzed. While Max randomly selects portions of the wav file to play back as snippets, the frequency of the audio being played is analyzed and triggers the first, higher-pitched set of drones at intervals. Meanwhile, the second drone patch can be triggered by using the computer keyboard as a MIDI keyboard.
The drone is achieved via subtractive synthesis: a pink noise generator is sent through filters that let only certain frequency bands pass. The subtractive synthesis is done with a handful of inline reson~ objects.
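In offline form, the idea looks roughly like this Python sketch (the filter centers, Q, and the pink-noise approximation are all placeholders; reson~ itself is a resonant band-pass):

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100

def pink_noise(n):
    """Cheap pink-ish noise: gently low-passed white noise (approximation)."""
    b, a = butter(1, 0.05)
    return lfilter(b, a, np.random.randn(n))

def resonant_band(x, center_hz, q=30):
    """Rough stand-in for reson~: a narrow band-pass around center_hz."""
    bw = center_hz / q
    low = (center_hz - bw / 2) / (SR / 2)
    high = (center_hz + bw / 2) / (SR / 2)
    b, a = butter(2, [low, high], btype='band')
    return lfilter(b, a, x)

# Stack a few resonant bands on the same noise source to form the drone.
noise = pink_noise(SR * 2)
drone = sum(resonant_band(noise, f) for f in (110, 220, 330, 440))
```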
The ‘analyzer~’ object is referenced from the maxobject.com website.
Delay is added to all sound effects. The piano melody can also be routed through a noise gate at will. The speed of the piano sampling can also be manipulated, which immediately affects the speed of the self-generated higher-pitched drones as well.
For our first longer-term project, I created a patch that produces a more stylistic visual reacting to MIDI and audio data from an Ableton Live session. Within the patch, gl objects are set to render and erase based on MIDI information from each instrument and are manipulated by the amplitude of the audio signal that instrument generates. The end result is a set of shapes/objects, each assigned to its own instrument, which are turned on and off by that instrument and manipulated by its audio signal.
The patch automates the movement, transparency, and rotation of objects within the video window in direct proportion to the amplitude signal of each MIDI instrument in Live (a sketch of the mapping follows below). For the shapes associated with the synth, I created an image in Adobe Illustrator, which was then imported into Max and layered to create a new object. The drum kit uses clearly defined geometric shapes to contrast with the more amorphous shape generated by the synth.
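As a hypothetical sketch of that kind of mapping (the parameter names and constants are mine, not the patch's):

```python
def visual_params(amplitude, t):
    """amplitude in 0..1; t in seconds. Returns alpha, rotation (degrees),
    and a vertical offset, all driven by the instrument's level."""
    alpha = 0.2 + 0.8 * amplitude               # louder -> more opaque
    rotation = (t * 15 + amplitude * 90) % 360  # spin faster on hits
    y_offset = 0.5 * amplitude                  # bob upward with level
    return alpha, rotation, y_offset
```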
Midi Send Portions of the Patch
Synth Visual Portion of the Patch
Drum Visual Portion of the Patch
The patch itself, while rather large (and divided into a set of four patches within the Live session), is built from a series of smaller patches that execute a simple but polished concept. My personal goal for this project was to become familiar with a set of techniques I had not used in a previous project, and to familiarize myself with the Max for Live environment, which operates under a unique set of limitations. I wanted to create a patch generating a stylistic visual element that could be replicated in live performance, and that came across as a more polished visualization with smoother transitions than I was able to achieve in previous projects.
Of these techniques, the ones I was most concerned about ended up being the easiest (e.g., creating a unique shape/image in Adobe Illustrator and converting it into an .obj for use with jit.gl, since I do not have a visual media background), while the ones I thought I could complete easily proved more complicated in the Max for Live environment (e.g., automating the transparency of different jit.gl objects and creating smooth movement across the video window).
I would like to experiment further with the automation to create a much more experimental version of the visual elements, but I am pleased with how this first version turned out.
For this project, I created a synthesizer instrument called a 3x Oscillator. It does more or less what it says on the tin: the user controls three oscillators that can be played via MIDI input. When a note is played, the oscillators sound in tandem, creating a far fuller sound than a single tone. The oscillators can be tuned and equalized relative to each other, and the waveform of each can be selected: sine, sawtooth, or square. Other customization options include the total gain of the instrument; independent control of the attack, decay, sustain, and release; and a filter with customizable type and parameters.
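A minimal offline sketch of the core idea, assuming illustrative detune/waveform settings and a simple linear ADSR (none of these values come from the actual instrument):

```python
import numpy as np

SR = 44100

def osc(freq, n, shape="sine"):
    """One oscillator: sine, sawtooth, or square, as in the instrument."""
    phase = (freq * np.arange(n) / SR) % 1.0
    if shape == "sine":
        return np.sin(2 * np.pi * phase)
    if shape == "saw":
        return 2.0 * phase - 1.0
    return np.where(phase < 0.5, 1.0, -1.0)  # square

def adsr(n, a=0.01, d=0.1, s=0.7, r=0.2):
    """Linear ADSR envelope; a/d/r in seconds, s as the sustain level."""
    a_n, d_n, r_n = int(a * SR), int(d * SR), int(r * SR)
    s_n = max(n - a_n - d_n - r_n, 0)
    return np.concatenate([
        np.linspace(0.0, 1.0, a_n),
        np.linspace(1.0, s, d_n),
        np.full(s_n, s),
        np.linspace(s, 0.0, r_n),
    ])[:n]

# Three oscillators summed: two slightly detuned saws over a square sub.
n = SR  # one second
note = sum(osc(f, n, w) for f, w in [(220.0, "saw"), (220.7, "saw"), (110.0, "square")])
note = note * adsr(n) / 3.0  # normalize and shape with the envelope
```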
Here’s a video of me noodling around with it:
(sorry about the audio in some places, it’s my capture, not the patch itself)
This project was an exploration of how the Kinect might be used to map pre-rendered and generative audio-responsive projections onto the faces of instruments.
The patch uses adjustable maximum and minimum Kinect depth map thresholds to isolate the desired object/projection surface, and subsequently uses the created depth map as a mask. This forces the video content to show through only in the shape of the isolated surface.
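The thresholding step amounts to something like this numpy sketch (the variable names and depth limits are placeholders):

```python
import numpy as np

def depth_mask(depth, near, far):
    """depth: (h, w) Kinect depth map; keep pixels between the limits."""
    return (depth >= near) & (depth <= far)

def masked_video(frame, depth, near=800, far=1200):
    """frame: (h, w, 3) video frame; content shows through only in the
    shape of the isolated surface."""
    return frame * depth_mask(depth, near, far)[..., None]
```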
The patch is less precise on instruments that expose a large part of the body: legs, for example, tend to occupy a similar depth range to the face of a guitar. It is better suited to instruments with larger faces that obscure most of the body, such as cellos and upright basses.
While attempting to map to the surface of a guitar, I also toyed around with other uses for the patch, including this animated depth shadow, which places a video-mapped shadow of the performer on the wall and creates the potential for visual duets between any number of performers and mediated versions of their bodies.
I plan to continue exploring how to make this patch more precise on a variety of instruments, possibly by pairing this existing process with computer vision, motion tracking, and/or IR sensor elements.
For my Project 1, I decided to try to make an instrument out of my computer. I separated the keys into distinct regions and assigned them MIDI values based on where they sit on the keyboard. I then used these note values in different modes to produce different sounds. I also included a boomerang effect that lets the user record a short piece of audio, which the patch then loops and plays repeatedly. I created ten drum sound effects by filtering noise in different ways. The main instrument portion is a square wave filtered in a similar way to make the note sound less harsh. The last mode is a sawtooth tremolo that repeatedly plays the same note as long as it is held. The launchpad is polyphonic and can export the sound in the loop buffer.
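The region-to-pitch idea looks roughly like this sketch (the row layout and base note are hypothetical, just to show the mapping):

```python
# Each keyboard row acts as one region spanning an octave.
ROWS = ["zxcvbnm", "asdfghjkl", "qwertyuiop"]

def key_to_midi(key, base=48):
    """Lower rows map to lower notes; position in the row sets the pitch."""
    for row_idx, row in enumerate(ROWS):
        if key in row:
            return base + 12 * row_idx + row.index(key)
    return None  # key not mapped to a note

print(key_to_midi("a"))  # -> 60 (middle C in this made-up layout)
```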
A short example piece that has been layered three times
For project 1, I used a Kinect to control the motion of a particle system using my hand. I am very interested in different applications of motion tracking and I think this was a good introduction to help me learn how the Kinect works. Here is a download link to a video showing my project in real time: IMG_1479
I used this tutorial to help me create particles from an image, which I would then control using input from the Kinect. To read the Kinect I used output from the dp.kinect2 object, which took me a while to set up initially. I wanted the system to use real-time image input from the Kinect as the source image; that did not end up working quite like I wanted, so I stuck with one preset image.
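The control loop reduces to something like this sketch (the pull and damping constants are made up; the hand position would come from dp.kinect2's tracking output):

```python
import numpy as np

def step(positions, velocities, hand, pull=0.02, damping=0.95):
    """positions, velocities: (n, 2) arrays; hand: (2,) tracked point.
    Each frame, particles accelerate toward the hand and coast with damping."""
    velocities = damping * velocities + pull * (hand - positions)
    return positions + velocities, velocities
```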
This patch analyzes audio in three ways and represents the information through an LED light pattern.
The 30 LED lights are grouped into three subgroups: an inner layer consisting of two lights, a middle ring of 10 lights, and an outer layer of 18 lights. Changes in color or brightness move from the inner layer to the outer layer, so the light propagates outwards.
The audio amplitude controls the brightness (saturation) of the colors. The ratio of low-frequency to high-frequency energy controls a color picker, which determines the RGB values. The values are then sent to the three layers with different amounts of delay.
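In sketch form, the mapping might look like this (the hue formula and per-layer delays are illustrative assumptions, not the patch's settings):

```python
import colorsys

def led_color(low_energy, high_energy, amplitude):
    """Band ratio picks the hue; amplitude drives saturation/brightness."""
    hue = low_energy / (low_energy + high_energy + 1e-9)  # 0..1
    level = min(amplitude, 1.0)
    r, g, b = colorsys.hsv_to_rgb(hue, level, level)
    return int(r * 255), int(g * 255), int(b * 255)

# The same color reaches each layer later, so it propagates outward.
LAYER_DELAY_MS = {"inner": 0, "middle": 60, "outer": 120}
```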
For Assignment 4, I decided to use the fft~ object to create an audio effect patch that mimics the sound of a recent favorite music genre of mine, 'Vaporwave'.
I created a noise-reduction subpatch, plus a degrading FFT subpatch that is linked to the output and plays along with the other signal. After these two subpatches, the audio signal is run through the original patch, where it is stretched out in real time (slowed down) using the delay effect.
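The two spectral steps boil down to something like this per-frame sketch (the gate threshold and quantization depth are placeholders, not the patch's settings):

```python
import numpy as np

def process_frame(spectrum, gate=0.01, levels=8):
    """spectrum: one frame of complex FFT bins.
    Zero quiet bins (noise reduction), then coarsen the surviving
    magnitudes (degradation) before resynthesis."""
    mag, phase = np.abs(spectrum), np.angle(spectrum)
    mag[mag < gate] = 0.0
    mag = np.round(mag * levels) / levels
    return mag * np.exp(1j * phase)
```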
I also added another simple visual presentation, very similar to the one we made in class. A Japanese city pop song was used in the demonstration to achieve that 'vaporwave aesthetic'.
Another demo: https://soundcloud.com/thewx/assignment4-demo/s-K8iMc
This project is based on what we did in class: visualizing audio using pfft~. I parsed the pfft~ output into four bands by bin index. Each band covers a frequency range (low, mid-low, mid-high, and high, respectively), represented by blue, green, pink, and white. This video shows the visualization of Morton Gould's Interplay: IV. Very Fast, With Verve and Gusto, a piece with very beautiful orchestration. The piano, which dominates most of the piece, lies in the green and blue bands, while woodwinds, brass, and percussion occasionally pop up in pink and white. I also tried running the patch with pop music, which seems to have a larger pink/white presence.
The video quality seems to be embarrassingly bad… I will work on it next time…
The patch is similar to the class one, with the addition of more matrices in the main patch and band filters in the pfft~ subpatch. Currently the bin filters are hard-coded; I'll see if I can improve the model to adjust them on the fly.
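For reference, the band split amounts to something like this sketch, with the bin boundaries exposed as a parameter instead of hard-coded (the boundary values are placeholders):

```python
import numpy as np

BOUNDS = [0, 8, 32, 128, 512]  # bin indices separating the four bands
COLORS = ["blue", "green", "pink", "white"]

def band_energies(bins, bounds=BOUNDS):
    """bins: per-frame FFT magnitudes; returns energy per color-coded band."""
    return {color: float(np.sum(bins[lo:hi]))
            for color, lo, hi in zip(COLORS, bounds, bounds[1:])}
```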