Markov Chains in Cm

For our project (Joshua Brown, Garrett Osbourne, Stone Butler), we created a system of Markov chains that learned from various inputs: MIDI information imported and analyzed through the following subpatch, as well as live performance on the Ableton pad.

Our overall concept was to create a patch which would take in a MIDI score, analyze it, and utilize the Markov systems to learn from that score and output something similar, using the captured and edited sounds we created.

Markov chains work in the following way:

Say there are three states: A, B, and C.
When you occupy state A, you have a 30% chance of moving to state B, a 60% chance of moving to state C, and a 10% chance of remaining in A. A Markov chain is an expression of this probability distribution.
In a MIDI score, each "state" is a pitch in the traditional Western tuning system. The patch analyzes the MIDI score and counts each instance of a given note, say C. It then examines each note that follows C in the score. If there are 10 C's in the score, and the next note is Bb 9 of those times and D the other time, the Markov chain posits that after a C is played, the next note will be Bb 90% of the time and D the other 10%. The Markov chain does this for each note in the score and develops a probabilistic concept of the overall movement patterns among pitches. Here is the overall patch:
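The counting-and-sampling procedure above can be sketched in a few lines of Python (the note names and the 9-to-1 split are taken from the C/Bb/D example; this is an illustration, not the Max patch itself):

```python
import random
from collections import Counter, defaultdict

def build_chain(notes):
    """Count, for each note, how often each successor follows it."""
    transitions = defaultdict(Counter)
    for current, following in zip(notes, notes[1:]):
        transitions[current][following] += 1
    return transitions

def next_note(transitions, current):
    """Sample the next note in proportion to the observed counts."""
    counts = transitions[current]
    notes, weights = zip(*counts.items())
    return random.choices(notes, weights=weights)[0]

# The example from the text: 10 C's, 9 followed by Bb and 1 by D.
score = ["C", "Bb"] * 9 + ["C", "D"]
chain = build_chain(score)
# chain["C"] now holds {"Bb": 9, "D": 1}, i.e. a 90%/10% split.
```

Repeatedly calling `next_note` then produces a melody with the same note-to-note movement statistics as the analyzed score.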

The sounds were sampled from CMU voice students. We then chopped up the files so that each sound file had one or two notes; these notes corresponded to the notes of the C harmonic minor scale. The Markov chain system used the data it had analyzed from the MIDI score (Silent Hill 2's Promise Reprise, which is essentially an arpeggiation of the C minor scale) to influence the order and frequency of these files' activations. Some percussive and atmospheric sounds were triggered via the chain system as well. We also played with a simple spatialization element, taking care not to overwhelm the piece.

This is what we created:

The audio-visuals patch was made by taking the patch we made in class (the cube visualization that responded to the loudness of a mic input) and expanding it to include a particle system programmed to change position when a certain peak was reached. The position changes were executed using the Ease package from Cycling '74.
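The trigger logic can be sketched outside of Max as follows; the threshold value, the three-coordinate position range, and the function names are all illustrative assumptions, not details of our Jitter patch:

```python
import random

def make_peak_trigger(threshold=0.8):
    """Return a callback that fires once per upward threshold crossing."""
    state = {"above": False}

    def on_amplitude(level):
        fired = level >= threshold and not state["above"]
        state["above"] = level >= threshold
        if fired:
            # Pick a new random 3D target; an easing function (as in the
            # Ease package) would then interpolate the particles toward it.
            return tuple(random.uniform(-1.0, 1.0) for _ in range(3))
        return None

    return on_amplitude

trigger = make_peak_trigger()
positions = [trigger(a) for a in (0.1, 0.9, 0.95, 0.2, 0.85)]
# Only the upward crossings (0.9 and 0.85) produce new positions.
```

Tracking the "above" state keeps a sustained loud passage from retriggering on every frame, so the particles jump once per peak rather than continuously.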

Personal Research Project

Due: Wed, March 29

This shall be an experimental sound synthesis project that you execute independently. It may take the form of a live performance, an audio/video/audio-video recording that is presented in class, an installation that is set up in the Media Lab (or some nearby location), or a research presentation. You will have five minutes total in which to present so make sure you are ready to rock at the drop of a hat. If you choose to present research you will be expected to present a tight, compelling, informative, and insightful slideshow and discussion.

Project 3: Sound of Go

Due: Wed, March 22

In this project you will sonify the ancient game of Go (a/k/a "Weiqi," a/k/a "Baduk"). You may use any type of input you like from the board and stones. For example, you could use a webcam and computer vision to track the black and white stones and use their positions to control musical parameters such as pitch, timbre, tempo, etc. You could add sensors to the board or stones and use the sensor values to control sounds. You could also use the sound of the game itself as the input to your musical system. Whatever route you choose, your goal is to make a compelling listening/viewing experience that demonstrates insight into the game.
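As a sketch of what one such position-to-parameter mapping might look like (the pentatonic scale, root note, and instrument names here are arbitrary choices for illustration, not part of the assignment):

```python
PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees, semitones above the root

def stone_to_event(col, row, color, root=48, board_size=19):
    """Map a stone at (col, row) to a (MIDI pitch, beat offset, instrument)."""
    degree = PENTATONIC[col % len(PENTATONIC)]
    octave = col // len(PENTATONIC)
    pitch = root + 12 * octave + degree
    beat = row / board_size            # row spreads events across the bar
    instrument = "pluck" if color == "black" else "bell"
    return pitch, beat, instrument

# A black stone at column 6, row 3 of a 19x19 board:
event = stone_to_event(6, 3, "black")
```

Quantizing columns to a scale keeps any board position consonant, while letting stone color choose the timbre preserves the black/white structure of the game in the sound.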

Some example starter patches for a computer-vision route have been added to the course Github repo. Have a good game!

ESS Project 2 – Music from Blobs

Computer vision programming – Dan Moore
Max Patch Programming and Sound Design – Kaitlin Schaer
Percussion Patch and Documentation – Arnav Luthra

Our goal for the project was to create a live performance system that would make music from the act of drawing. To do this, we utilized computer vision to recognize shapes being drawn on a piece of paper and generate sounds in response. We had three main "instruments," one of which was melodic while the other two produced "whooshey" sounds.

Technical Details:
For the project, we ended up using two Max patches and a separate instance of OpenCV. The computer vision processing was done on Dan's laptop and allowed us to get the color of each blob, the location of the blob's centroid (its central point), and the velocity at which the blob was growing. We then sent these parameters over to Kaitlin's laptop using OSC (Open Sound Control). On Kaitlin's laptop, we used them to control an arpeggiator as well as resonant filters on the sounds. The arpeggiator added different notes within a fixed key depending on the location of the blob and then triggered two different MIDI instruments (the melodic saw wave and one of the whooshey noises). The third instrument took white noise and applied resonant filters at a rhythmic interval to create a percussive effect. Parts of this patch were pieced together from various sources online, then compiled and modified to suit the needs of our project.
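A rough sketch of the kind of blob-to-note mapping described above (the scale, frame size, and scaling constants are illustrative assumptions; the real mapping lived in the Max patch):

```python
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # one octave of MIDI notes

def blob_to_note(centroid_y, frame_height, growth_velocity, max_growth=50.0):
    """Map a blob's centroid height and growth rate to (pitch, velocity)."""
    # Higher on screen (smaller y) means a higher scale degree.
    degree = int((1.0 - centroid_y / frame_height) * (len(C_MAJOR) - 1))
    degree = max(0, min(degree, len(C_MAJOR) - 1))
    velocity = int(min(abs(growth_velocity) / max_growth, 1.0) * 127)
    return C_MAJOR[degree], velocity

note, velocity = blob_to_note(120, 480, 25.0)
```

Clamping the scale degree and normalizing the growth rate keeps out-of-range computer-vision values from producing invalid MIDI data.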

Video of final performance:

Closing remarks:
Overall, our presentation went well despite a few technical difficulties at the beginning (we had trouble getting Kaitlin's laptop to receive the information from Dan's). We were limited in what we could do with the computer vision aspect, but if we were to continue this project we could find other interesting parameters to extract from the drawings.

A Cyborg Duet in Ode to Bach

To accomplish the goal of having a computer perform in real-time with a human, we “faked” a violin duet (Abby & Nick) in accompaniment of a MIDI keyboard running through Ableton Live (Joey).

With this, we produced a version of Bach's Partita in D Minor, No. 1 that was a truly unique addition to the many alterations and remixes of classical music that have come before.


Our ideation naturally started with how we could combine live human performance with computer performance. From the start, we were lucky enough to have both Abby and her electric violin; Nick is always excited to put his Max abilities to the test, and Joey was quick to be interested in getting on a MIDI keyboard to fill in any empty space that was naturally going to exist in our piece. After our quick decisions on the real-time human performance, we went through a few ideas concerning the real-time performance from the computer.

At first, we considered using a pedal board to allow Abby to create, play, and pause loops, but quickly realized this would be a lot of strain on her and could be much more “computerized” anyways. We ultimately decided we would “fake” a duet using the Max patch Nick made with Joey on the MIDI keyboard connected to Live.

Max Patch
The pitch and volume of the incoming signal control both the playback position and the volume of grainstretch: gbr.yin tracks the pitch of Abby's violin, while meter tracks the incoming amplitude. After audio is recorded into Silo#0, a timer sends the length of the recorded audio to a scale object attached to the tracked pitch of the violin, so that the pitch can accurately control the position of the grain playback.
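The pitch-to-position mapping works like Max's scale object, a linear remap from one range to another. A sketch, assuming a violin pitch range of roughly 196 Hz (G3) to 1760 Hz (A6) and a 4-second recording (both ranges are assumptions for illustration, not the patch's actual settings):

```python
def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linear remap of x from [in_lo, in_hi] to [out_lo, out_hi]."""
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

record_length_ms = 4000.0  # as reported by the timer after recording
# A tracked violin pitch of 440 Hz lands proportionally inside the buffer:
position_ms = scale(440.0, 196.0, 1760.0, 0.0, record_length_ms)
```

Because the timer supplies the output range after each recording, the same tracked pitch always maps onto the full length of whatever was just captured.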
As a side-note worth mentioning, Nick built in a lot of extra functionality that we didn’t use (e.g. the transport control & the ability to record and loop the data from the violin).
[source code]

Presentation and Closing Remarks
The presentation went extremely well apart from a few technical difficulties at the beginning; sadly, these came down to simply not turning the audio on. Other than this hiccup, the volume on the keyboard for Joey's side of the performance could have been slightly higher, but we received great feedback from the class.


Technology and production by Nick Erickson (max programming), Abby Adams (live violinist), and Joey Santillo (live synth); documented by Jack Kasbeer.

Sound Squad

Arcade Jungle

Dry song:

Steven: For the section of our project where we generated the music, Luigi and I used Ableton Link to tempo-sync our real-time performances. Luigi used Propellerheads Reason to create his sounds, while I used Ableton Live 9/Max for Live and a hardware synthesizer; Ty then ran our separate mixes through his grain system. My sound generation system was tempo-locked to Luigi's via Ableton Link: in my session I had some subtle drum loops alongside a cowbell playing in tempo with Luigi's tracks. I was also using a hardware synthesizer called the Pocket Operator Arcade (PO-20).

To tempo-lock the hardware synthesizer with my Live set and Luigi's instruments, I created a secondary CV clock sync system. It started with a second USB audio interface for more I/O on my machine; I then created an aggregate device using both my built-in sound card and the USB sound card. The output of the USB interface sent the analog pulse to my Pocket Operator, and the input of the USB interface received the Pocket Operator's output, which was monitored as an audio track in Live. The Pocket Operator has a sync setting (sync mode 2) in which, once it receives the analog pulse, it immediately starts playing its patterns in time with the sync. With sync mode 2 active, the analog sync system played perfectly in time with Ableton Link. To keep all of my audio in time, I lowered the sample buffer size to only 64 samples, so the audio from the Pocket Operator had only ~5 milliseconds of round-trip latency.

I also created a drunk algorithmic pitch shifter that was used in the second half of the piece. It used the drunk object in Max for Live, raising or lowering the pitch by randomly selecting a new pitch close to its previous step. Here is my Max for Live code in normal Max patch form:
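The drunk behavior is a bounded random walk: each new value stays within a maximum step of the previous one. A loose Python sketch of that idea (the range and step size are assumptions; this is not the Max for Live device itself):

```python
import random

def drunk_walk(current, max_value=24, max_step=3):
    """One step of a bounded random walk, loosely modeled on Max's drunk
    object: the next value stays within max_step of the previous value
    and within [0, max_value]."""
    step = random.randint(-max_step, max_step)
    return max(0, min(current + step, max_value))

# Walk a pitch-shift offset (in semitones) a few steps at a time:
offset = 12
history = []
for _ in range(16):
    offset = drunk_walk(offset)
    history.append(offset)
```

Because each step is small, the pitch drifts smoothly rather than jumping arbitrarily, which is what gives the shifter its "drunk" character.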


In short, I had a great time creating music with Luigi and Ty and learning about how Ableton Link works across DAWs to make real-time music.

Luigi: My contribution came in the form of a couple of synthesizers meant to be played by a person but whose output was heavily algorithmically influenced. The two most prominently featured in the piece were a modified dulcimer sampler that was triggered using key locked chord arpeggiation, and a graintable synth with as much movement as I could pull out of one synth short of generating white noise.

The first made use of Reason's Scales and Chords rack extension paired with a velocity arpeggiator, allowing me to input the root note of a chord in the key of A minor and get random-velocity arpeggiation of that chord, as played by the dulcimer sampler.

The second main synthesizer started with two didgeridoo grain oscillators with a 4-beat repeating oscillation on the sample index. This was routed through a saturation shaper unit followed by a keyboard-tracked resonant comb filter, which, along with the final volume of the synth, was partially controlled by a random square-type LFO. The overall pitch of the synth was modulated over a range of two octaves in a triangle wave pattern at 0.22 Hz. The output of this synthesizer was run through a delay module whose delay time and right-channel offset were inversely controlled by a triangle-type LFO at 0.53 Hz. All of this was passed through a reverb, the output of which was EQ'd to accentuate the high end and the harmonics of the root note of our key, A minor.
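The triangle-wave pitch modulation can be written out explicitly; this sketch assumes the two-octave range means ±12 semitones around a center pitch, which is one interpretation of the description above:

```python
def triangle_lfo(t, freq_hz=0.22):
    """Triangle wave in [-1, 1] evaluated at time t (seconds)."""
    phase = (t * freq_hz) % 1.0
    return 1.0 - 4.0 * abs(phase - 0.5)

def pitch_offset_semitones(t):
    # +/-12 semitones spans the two-octave sweep described above.
    return 12.0 * triangle_lfo(t)
```

At 0.22 Hz, one full up-and-down sweep of the pitch takes about 4.5 seconds, slow enough to read as a gradual drift rather than vibrato.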

A third bass synth was formed by compressing the 50/50 dry/wet reverb signal from two sawtooth multi-oscillators with a long reverb tail, then gating the resulting sound using Reason's Alligator triple filtered gate. All of this was played over a set of percussion loops pulled from the Reason factory library, including bongos, congas, timbales, and club beat loops, which were brought into the mix using MIDI-controlled volume sliders during the live performance.

Ty: The output streams from Luigi and Steven flowed through a MIDI-controlled grainstretch patch. The MIDI board was a Launchpad using Automap to control Max. Each column of buttons controlled the amount of pitch shift, stretch, and grain size. Using buttons to apply the effects, versus a slider, was not ideal: when jamming, the quick application of the effect sounded fine, but when listening to a recorded playback, it was easy to tell the effects came in off-beat.

My original patch uses grainstretch~ to affect each of the other team members' outputs. In addition to setting effect levels for the two, the patch had an effect crossfader. This crossfader applied effects to one person's track but not the other, letting me quickly juggle back and forth between effects and dry signals. This inverse dry/wet crossfader added an interesting dynamic to the sound, but it was also one of the causes of our rough performance.
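The inverse dry/wet idea can be sketched as a pair of complementary linear mixes (a simplified scalar version for illustration; the actual patch worked on audio signals via grainstretch~):

```python
def inverse_crossfade(x, dry_a, wet_a, dry_b, wet_b):
    """At x = 0, track A is fully wet while track B stays dry; at x = 1
    the roles reverse, so the effect sits on exactly one track at a time
    and blends between them in the middle."""
    out_a = (1.0 - x) * wet_a + x * dry_a
    out_b = x * wet_b + (1.0 - x) * dry_b
    return out_a, out_b

# Scalar example: halfway, each track is an even dry/wet blend.
mid_a, mid_b = inverse_crossfade(0.5, 0.2, 0.8, 0.4, 0.5)
```

The complementary weights are what make the fader "inverse": pushing the effect further onto one performer's track simultaneously pulls it off the other's.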

Our in-class performance was a giant learning curve for me. The main thought going through my head was "I really hope this won't break Jesse's speakers."

For the performance, I did not compare the levels while my teammates were playing together. There were level meters inside the Max patch, but I only checked to see whether sound was coming in. The previous day we had issues with routing the sound to the speakers at an audible volume, so that became my focus (instead of checking whether it was too loud).

The group decided to re-record the performance with the errors corrected. As I mentioned above, my effect patch was not working well, and it felt jumpy. To correct this, we recorded dry signals of Luigi and Steven playing together, then I re-recorded while applying effects. I tried adding a ramp to my patch to ease the jumpiness but could not figure out the implementation, so I used a knobbed MIDI controller and Serato Scratch Live's built-in effects.


Live Performance Project

A live, audience-participation-based rhythmic composition that evolves in real time via filtering and effects done in Ableton Live.

As we were conceiving our idea for the project, we knew two things: we wanted to involve people, giving them the chance to familiarize themselves with our project on a deeper level, and we wanted to incorporate different rhythms. For our project we had four different rhythms that could all be combined to complement one another. The rhythms increased in difficulty from number one to number four. We chose to do this because rhythmic knowledge and performance ability vary widely between people: some are very good at reading and performing difficult rhythms, while others are less adept. We knew that adding these different rhythms would challenge the participants and further enhance their engagement with the performance.

Our main objective for the piece was to get people to have fun. We accepted the risk of human error and imperfection beforehand and wanted to focus on getting people involved as much as possible. After organizing people into four groups, we recorded everyone through Ableton Live. We then edited the recording and sent it to specific speakers around the room.

The DAW of choice for this project was Ableton Live.
The microphone was routed from input #10 of the UTRACK32.
Pre-amp on and gain sufficiently turned up.

Ambisonic Speaker Spread:

Technical Details:
The goal was to record the group after a 4-beat click and continuously build the rhythmic layers as we progressed through the groups. As this happened, live effects such as ping-pong delay, reverb, and various equalizations were applied to the recorded clips.

Underpinning the piece was to be a guitar improvisation.

The original recording can be listened to here:

During our actual performance we ran into issues with the equipment. The main problem was the sound console settings: gains were too low, it was set to 'surround' instead of 'ambisonic,' and certain inputs were in the wrong spots. We were able to set up and test all the other equipment, but because every group used the console differently we were forced to configure it during the performance. This led to delays in the performance and a lot of waiting around. We didn't realize how much we relied on the console for our performance, and now realize that next time we should save our settings to a thumb drive. Our performance can be found here:

or here:

A zip file of the project and the recordings can be accessed here:
[hyperlink zip file download]

Dan – Musician, Music Writer,

Nick – Ableton Engineer, Guitarist, Sound Editor

Kayla – Board Operator, Recording Engineer, Documenter, MC

Project 2: Mbira – Omnichord Duo


With our project, we wanted to use the unique instruments that some of the group members owned in a live performance setting. We discovered that both the Omnichord and the Mbira have peaceful, dream-like qualities, so we decided to pair up these two instruments to make a dream-like live performance.



Max Patch:

The max patch is structured to allow the user to create loops of live instrumentals in real time, while projecting a granularized version of the loop through a speaker across the room from the dry sound. We decided to use granular synthesis to further the trancelike direction we wanted our piece to have. The user can control grain parameters such as duration, rate and slope by adjusting the range for each parameter. For every grain produced, the parameters are randomized based on the current ranges selected. This gives the piece a sense of movement and change. The user can also control the levels of all speakers within the patch, and create crossfades between the dry and wet sounds for each loop. We used the grooveduck object to handle the loops, and the 03h granular synthesis library along with the poly~ object to create the grains. We also utilized the parallel parameter of the poly~ object to parallelize the computation of each grain in order to reduce CPU load.
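Per-grain randomization of this kind can be sketched as follows; the parameter names and range values are illustrative assumptions, not the settings from our patch:

```python
import random

# Hypothetical ranges like those exposed in the patch's UI; the performer
# adjusts these, and every new grain draws its values from within them.
ranges = {
    "duration_ms": (40.0, 120.0),  # grain length
    "rate": (0.5, 2.0),            # playback rate (1.0 = original pitch)
    "slope": (0.1, 0.9),           # envelope attack/decay balance
}

def make_grain(ranges):
    """Draw one grain's parameters uniformly from the current ranges."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

grain = make_grain(ranges)
```

Because every grain is drawn independently, narrowing a range freezes that parameter while widening it immediately adds movement, which is what lets the performer steer the texture in real time.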

In terms of changes to make in the future, one change that we would make would be to utilize a high-pass filter on the Mbira to remove the physical sounds that might have been generated from the way it is played. We could also utilize movement between the speakers and a more solid sense of sound direction to prevent the piece from becoming stagnant.

Link to the Max Files:

Project 2: Real Time is Real

Project 2 will be a performance-based project that combines real-time human performance with real-time computer performance.

The human performance aspect may be using a traditional music instrument or any other kind of sound-making object. It may also be a human performance of an electronic instrument or system.

The computer performance aspect should use some kind of generative/algorithmic/stochastic process to generate or process sound in realtime. This may be processing the sounds of the human performer, or producing some kind of accompaniment.

Project 2 will be presented in class on Monday, Feb 27. Each group should be completely ready to present at the start of class so make sure you are well-rehearsed before then. Your performance can be anywhere from 3 to 15 minutes in duration, whatever is appropriate for your work. As always, carefully consider every aspect of how the work is experienced including the placement of the audience relative to performers/gear, lighting, entrances/exits, etc.

For inspiration I include here the art for an album by one of my former teachers, in which the artist is seated atop a very large cake, holding a synthesizer, with a giant clock on the wall behind him. There also seems to be another giant clock underneath the cake. The title of the album is "Real Time," so this is definitely #relevant.

Golan Levin’s Presentation: The Stomach Song

When Golan gave his lecture, he showed us a multitude of different experimental methods to produce and experience audio as well as visuals.

The entire lecture was captivating in the sense that it gave us a means of understanding what has been done, what can be done, and where to go from there: how can we expand on experimental methods of producing and experiencing? One specific video that Golan shared with us caught my attention: William Wegman's Stomach Song.

It caught my attention because I believe in creating experiences that expand a guest's understanding of what it is to be a human being, in order to expand our experience as human beings.

The average person watching this video can see there is a face made up on the stomach. We grow to humanize it and personify it. During the video we forget that it's not a face but a stomach; that stomach doesn't have feelings, nor does it have desires. But this doesn't matter to us at the time, because in order to suspend our disbelief and serve the narrative, we unconsciously understand that we have to think of this as a face in order to gain anything from what we are watching.

Life is full of defeating moments that are hard for us to bear. But humans are survivors: through thousands of years of evolution we have learned to endure, and that is how we have made it this far. Perhaps, though, the next step in human evolution isn't continuing to fight back from these defeating moments but rather to surrender to them. To find a balance and say, "It's okay for me to feel right now. It's okay for this to be happening right now."

We can find a harmony between fighting and surrendering in our experiences as we fill them with both reliability and experimental aspects: giving to the listener and urging the listener to give back to us.

Perhaps, this is the next step in the evolution of understanding how to expand our experiences as human beings and I think we can do this through music.

I feel comfortable saying that most people listen to music because music makes them feel something. If a piece of music makes me feel nothing then I, personally, will be less inclined to listen to it, unless I am listening to it for the purpose of study. Using this approach of creating experiences that we might not understand but can learn to surrender to and appreciate is one method that will change how we experience listening and perhaps, life in general.