Project 3: Sound of Go

Due: Wed, March 22

In this project you will sonify the ancient game of Go (a/k/a “Weiqi” a/k/a “Baduk”). You may use any type of input you like from the board and stones. For example, you could use a webcam and computer vision to track the black and white stones and use their positions to control musical parameters such as pitch, timbre, tempo, etc. You could add sensors to the board or stones and use the sensor values to control sounds. You could also use the sound of the game itself as the input to your musical system. Whatever route you choose, your goal is to make a compelling listening/viewing experience that demonstrates insight into the game.

Some example starter patches for a computer-vision route have been added to the course GitHub repo. Have a good game!

Sound Squad

Arcade Jungle

Dry song:

Steven: For the section of our project where we generated the music, Luigi and I used Ableton Link to tempo-sync our real-time performances. Luigi used Propellerhead Reason to create his sounds, while I used Ableton Live 9 with Max for Live and a hardware synthesizer. Ty then ran our separate mixes through his grain system.

My sound-generation system was tempo-locked to Luigi's via Ableton Link; in my session I had some subtle drum loops alongside a cowbell playing in tempo with Luigi's tracks. I was also using a hardware synthesizer, the Pocket Operator Arcade (PO-20). To tempo-lock the hardware synthesizer with my Live set and Luigi's instruments, I created a second CV clock-sync system. This secondary analog sync system started with a second USB audio interface for more I/O on my machine. I then created an aggregate device from my built-in sound card and the USB sound card. The output of the USB interface sent the analog pulse to my Pocket Operator, and the input of the USB interface received the Pocket Operator's output, which was monitored as an audio track in Live. The Pocket Operator has a sync setting (sync mode 2) in which, once it receives the analog pulse, it immediately starts playing its patterns in time with the sync. With sync mode 2 active, the analog sync system played perfectly in time with Ableton Link. To keep all of my audio in time, I also lowered the sample buffer size to only 64 samples, so the audio from the Pocket Operator had only ~5 milliseconds of round-trip latency.

I also created a drunk algorithmic pitch shifter that was used in the second half of the piece. The pitch shifter used the drunk object in Max for Live: it would raise or lower the pitch by randomly selecting a new pitch close to its previous step. Here is the code to my Max for Live device in normal Max patch form:
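The drunk-walk idea behind the pitch shifter can also be sketched outside Max. This is a rough Python illustration, not the patch itself; the pitch range and maximum step size here are assumptions for demonstration:

```python
import random

def drunk_walk(lo=-12, hi=12, max_step=3, start=0):
    """Random walk over semitone offsets, like Max's [drunk] object:
    each new pitch is chosen close to the previous one, then clamped
    to the allowed range."""
    pitch = start
    while True:
        pitch = min(hi, max(lo, pitch + random.randint(-max_step, max_step)))
        yield pitch  # semitone offset to feed the pitch shifter
```

Each call to `next()` on the generator yields the next offset, so successive values never jump more than a few semitones at a time.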


In short, I had a great time creating music with Luigi and Ty and learning about how Ableton Link works across DAWs to make real-time music.

Luigi: My contribution came in the form of a couple of synthesizers meant to be played by a person but whose output was heavily algorithmically influenced. The two most prominently featured in the piece were a modified dulcimer sampler that was triggered using key locked chord arpeggiation, and a graintable synth with as much movement as I could pull out of one synth short of generating white noise.

The first made use of Reason’s Scales and Chords rack extension paired with a velocity arpeggiator, allowing me to input the root note of a chord in the key of A minor and get random velocity arpeggiation of that chord, as played by the dulcimer sampler.

The second main synthesizer started with two didgeridoo grain oscillators with a 4-beat repeating oscillation on the sample index. This was routed through a saturation shaper unit, followed by a keyboard-tracked resonant comb filter, which, along with the final volume of the synth, was partially controlled by a random square-type LFO. The overall pitch of the synth was modulated over a range of two octaves in a triangle-wave pattern at 0.22 Hz. The output of this synthesizer was run through a delay module whose delay time and right-channel offset were inversely controlled by a triangle-type LFO at 0.53 Hz. All of this was passed through a reverb, the output of which was EQ’d to accentuate the high end and the harmonics of the root note of our key, A minor.
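The triangle-wave pitch modulation described above can be sketched as a simple function of time. This is a minimal Python illustration, assuming a unipolar sweep of 24 semitones (two octaves) at the 0.22 Hz rate mentioned:

```python
def triangle_lfo(t, freq=0.22, depth=24.0):
    """Unipolar triangle wave: ramps 0 -> depth -> 0 once per cycle.
    freq=0.22 Hz matches the sweep rate described above; depth=24
    semitones stands in for the two-octave pitch range."""
    phase = (t * freq) % 1.0
    return (1.0 - abs(2.0 * phase - 1.0)) * depth
```

Sampling this at control rate and adding the result to the oscillator pitch reproduces the slow two-octave rise and fall.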

A third, bass synth was formed by compressing the 50/50 dry/wet reverb signal from two sawtooth multi-oscillators with a long reverb tail and gating the resulting sound using Reason’s Alligator triple filtered gate. All of this was played over a set of percussion loops pulled from the Reason factory library, including bongos, congas, timbales, and club beat loops, which were brought into the mix using MIDI-controlled volume sliders during the live performance.

Ty: The output streams from Luigi and Steven flowed through a MIDI-controlled grainstretch patch. The MIDI board was a Launchpad, using Automap to control Max. Each column of buttons controlled the amount of pitch shift, stretch, or grain size. Using buttons to apply the effects, rather than a slider, was not ideal: when jamming, the quick application of the effect sounded fine, but when listening to a recorded playback it was easy to tell the effects came in off-beat.

My original patch uses grainstretch~ to affect each of the other team members’ outputs. In addition to setting effect levels for the two, the patch had an effect crossfader, which applied effects to one person’s track but not the other, letting me quickly juggle back and forth between effects and dry signals. This inverse dry/wet crossfader added an interesting dynamic to the sound, but was one of the causes of the rough performance.

Our in-class performance was a giant learning curve for me. The main thought going through my head was “I really hope this won’t break Jesse’s speakers.”

For the performance, I did not compare the levels while my teammates were playing together. There were level meters inside the Max patch, but I only checked to see that sound was coming in. The previous day we had issues routing the sound to the speakers at an audible volume, so that became my focus (rather than checking whether it was too loud).

The group decided to re-record the performance with the errors corrected. As I mentioned above, my effect patch was not working well and felt jumpy. To correct this, we recorded dry signals of Luigi and Steven playing together, then I re-recorded while applying effects. I tried adding a ramp to my patch to ease the jumpiness, but could not figure out the implementation, so I used a knobbed MIDI controller and Serato Scratch Live’s built-in effects.


Live Performance Project

A live audience participation based rhythmic composition which evolves in real-time via filtering and effects done in Ableton Live.

As we were conceiving our idea for the project, we knew two things: we wanted to involve people, giving them the chance to familiarize themselves with our project on a deeper level, and we wanted to incorporate different rhythms. For our project we had four different rhythms that could all be combined to complement each other. The rhythms increased in difficulty from number one to number four. We chose to do this because rhythmic knowledge and performance ability vary widely between people: some are very good at reading and performing difficult rhythms, while others are less adept. We knew that these different rhythms would challenge the participants and further enhance their engagement with the performance.

Our main objective for the piece was to get people to have fun. We accepted the risk of human error and imperfection beforehand and wanted to focus on getting people involved as much as possible. After organizing people into four groups, we recorded everyone through Ableton Live. We then edited the recording and sent it to specific speakers around the room.


The DAW of choice for this project was Ableton Live.
The microphone was routed from input #10 of the UTRACK32.
Preamp on, with gain sufficiently turned up.

Ambisonic Speaker Spread:

Technical Details:
The goal was to record each group after a 4-beat click and continuously build the rhythmic layers as we progressed through the groups. As this happened, live effects such as ping-pong delay, reverb, and various equalizations were applied to the recorded clips.

Rounding out the piece was to be a guitar improvisation.

The original recording can be listened to here:

During our actual performance we ran into issues with the equipment. The main problem was the sound console settings: gains were too low, it was set to ‘surround’ instead of ‘ambisonic’, and certain inputs were in the wrong spots. We were able to set and test all the other equipment, but because every group used the console differently, we were forced to configure it during the performance. This led to delays and a lot of waiting around. We didn’t realize how much we relied on the console for our performance, and now realize that next time we should save our settings to a thumb drive. Our performance can be found here:

or Here:

A zip file of the project and the recordings can be accessed here:
[hyperlink zip file download]

Dan – Musician, Music Writer,

Nick – Ableton Engineer, Guitarist, Sound Editor

Kayla – Board Operator, Recording Engineer, Documenter, MC

Project 2: Real Time is Real

Project 2 will be a performance-based project that combines real-time human performance with real-time computer performance.

The human performance aspect may be using a traditional music instrument or any other kind of sound-making object. It may also be a human performance of an electronic instrument or system.

The computer performance aspect should use some kind of generative/algorithmic/stochastic process to generate or process sound in realtime. This may be processing the sounds of the human performer, or producing some kind of accompaniment.

Project 2 will be presented in class on Monday, Feb 27. Each group should be completely ready to present at the start of class so make sure you are well-rehearsed before then. Your performance can be anywhere from 3 to 15 minutes in duration, whatever is appropriate for your work. As always, carefully consider every aspect of how the work is experienced including the placement of the audience relative to performers/gear, lighting, entrances/exits, etc.

For inspiration, I include here the art for an album by one of my former teachers, in which the artist is seated atop a very large cake, holding a synthesizer, with a giant clock on the wall behind him. There also seems to be another giant clock underneath the cake. The title of the album is “Real Time,” so this is definitely #relevant.

Sound Squad 5 Group Project

The ambisonic Squad Five project focuses on letting a user control the sounds they experience. We accomplished this by linking TouchOSC to a Max patch running sounds each group member recorded. An iPhone, running TouchOSC, was passed around the class, allowing other students to control the location of pre-recorded sounds. Each group member created sounds using a technique that interests them. We decided on a fundamental structure for each of us to follow: tempo, key, and meter.

Abby’s Section, Violin loops:
For the violin section of the controller, I created a set of 7 loops. Using Logic and a Studio Projects microphone, I set the tempo at 120 bpm and played a repeating G octave rhythm. I then recorded melodic lines, percussive lines, and different chord progressions over an 8-bar period. I originally recorded 14 separate tracks, but we were only able to fit 7 on a page. I also recorded odd violin noises that I meant to edit together into a melodic line, but it was a little tricky because of all the variables in our other loops. If we were able to continue this project, I would reconsider the length of the loops, because playing them too many times can get a bit repetitive. While the diversity of available loops helps, there could be parameters so that after a certain number of loops are triggered you can’t add more, or so that triggering certain loops excludes others. I also think designing loops with basic shared parameters makes working on each of our sections easier, but it limits how well the piece works as a whole.

Ty’s section, percussion:
For the percussion page of the TouchOSC controller, I created a Max patch that reads two Photoshop files and uses the images to sequence sounds. The first file is an RGB image; each color value determines the volume level of a pre-recorded percussive sound. The second image file is a black-and-white bitmap; this image triggers the sounds to play on black pixels. Originally I used a drum synth rather than pre-recorded files, but found using RGB to control the synths difficult because of my limited synth knowledge. As a next step for this project, I would use my TouchOSC page to let the user build their own sequencer. Additionally, I would change the sounds I recorded: it would be interesting to find four sounds that fit well with the other two musicians’ pages, and let the user control the sequence and ambisonic location of those sounds.
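The image-to-sequence idea can be sketched outside Max. This hypothetical Python version stands in for the patch, with tiny inline pixel arrays in place of the Photoshop files (the array contents are purely illustrative):

```python
# rgb[row][col] -> per-sound volumes; bitmap[row][col] -> trigger on black (0).
rgb = [
    [(255, 0, 128), (64, 200, 30)],   # each tuple: volumes for 3 sounds, 0-255
]
bitmap = [
    [0, 255],                          # 0 = black pixel = trigger this step
]

def sequence(rgb_row, bitmap_row):
    """Walk one row of both images; on each black bitmap pixel,
    emit an event carrying the RGB-derived volumes (0.0-1.0)."""
    events = []
    for step, (pixel, trig) in enumerate(zip(rgb_row, bitmap_row)):
        if trig == 0:
            events.append((step, [v / 255.0 for v in pixel]))
    return events
```

Here `sequence(rgb[0], bitmap[0])` fires only at step 0, with the red, green, and blue channels scaled into volume levels for the three percussive sounds.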


For the section by Steven:

I made seven other recordings following Abby’s lead. We wanted these recordings to harmonize with each other, so we all stayed in G minor; we also wanted to stay in time, so we all recorded at 120 BPM. For my recordings I played my electric guitar through my AC30 guitar amp. The electric guitar recordings were chords: the first was just Gm repeated, and the second was Gm and Dm repeated. I also did a Gm arpeggio on the electric guitar, and one recording on acoustic guitar. The other three recordings were done with the Korg Monologue synthesizer: a Gm arpeggio, an analog drum patch I made, and a bass Gm patch.
On the Max side, I used the HOA library, and I did all of my panning in polar form. Doing everything in polar made it very straightforward and easy to visualize. From the udpreceive object I routed the magnitude and phase information into the HOA map message. The magnitude information mapped directly: a 0-to-1 float straight into the HOA map message. The phase information, however, had to be converted from a 0-to-1 float to that value times 2*pi, giving the HOA map a value from 0 to ~6.28. I had to repeat these steps for all 21 recordings, so I created an encapsulation called “pie_n_pak” that does the magnitude and phase calculation and packs all of the proper outputs together.

All of the recordings are triggered from one toggle box. The udpreceive object also gets the state of each recording’s trigger from TouchOSC: the state is a toggle box in TouchOSC that sends a 1 when active and a 0 when not. The fader encapsulation sub-patch fades the audio track in or out depending on the toggle. It is important to note that all of the audio files are always playing in the background and are just faded in and out depending on the current state sent from TouchOSC.
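The conversion the encapsulation performs is simple enough to show directly. This is a hypothetical Python equivalent; only the name “pie_n_pak” comes from the patch, the rest is illustrative:

```python
import math

def pie_n_pak(mag, phase_norm):
    """Convert TouchOSC's normalized (0-1) values into the form the
    HOA map message expects: magnitude passes through unchanged,
    phase is scaled from 0-1 up to radians (0 to ~6.28)."""
    return (mag, phase_norm * 2 * math.pi)
```

A fader at half travel and an encoder at full rotation, for example, become a magnitude of 0.5 and a phase of 2π radians.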
In TouchOSC, I made three different pages, one for each of our recording sets. The top gray bar selects each of our pages. The toggle buttons at the top fade each recording in or out. The long fader is the magnitude of the panning in the HOA panner, and the encoder underneath is the phase information sent to the HOA panner.

Here is a link to the code:

Here is a link to all the supporting audio files:

Project 1: Leap Motion Controlled Ambisonics

Presentation setup in Media Lab

For our project, we incorporated a Leap Motion into a field-recording soundscape composition in order to create an effect of organic control of spatial motion. Our project was created in Max/MSP using the Higher Order Ambisonics library to place sounds in space over an 8-speaker setup. We also used the aka.leapmotion library to capture palm position data in the X and Y plane. In the patch, the user can dynamically move the source of each sound clip around the room by moving their hand over the Leap Motion.
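The palm-to-position mapping can be sketched as a simple normalization. This is an illustrative Python version: the tracking ranges and the normalized [-1, 1] output range are assumptions for demonstration, not values from our patch:

```python
# Rough palm-tracking bounds over the sensor, in millimeters (assumed).
X_RANGE = (-200.0, 200.0)   # left-right
Y_RANGE = (50.0, 450.0)     # height above the sensor

def palm_to_source(x_mm, y_mm):
    """Clamp a palm position to the tracking bounds and normalize
    each axis to [-1, 1] for use as a source position."""
    def norm(v, lo, hi):
        v = min(hi, max(lo, v))
        return 2.0 * (v - lo) / (hi - lo) - 1.0
    return norm(x_mm, *X_RANGE), norm(y_mm, *Y_RANGE)
```

A palm centered over the sensor maps to (0, 0), and out-of-range readings are pinned to the edges of the room rather than throwing the source outside it.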

The sound clips featured in our project include field recordings of moving robots, ticking clocks, and cooking sounds such as boiling, sizzling, chopping, and pouring. In our sound design we sought to take these everyday sounds and manipulate them into something alien and sci-fi to create an ambient soundscape.

Compressed stereo mix of our composition:

Max Patch in Presentation Mode
Full Max Patch

Instructions for Use
To begin recording palm data, simply connect a Leap Motion to your computer and press the purple button in the Max patch. To select a sound clip, press the corresponding number on your computer keyboard. Once a sound clip is selected it will begin to play, and you can move your hand in the XY plane above the Leap Motion to control the position of the sound in the room. To lock a sound you are moving in place, press the space bar. To lock a sound in place and begin moving another, press the number of the new sound you would like to move. You can start and stop clips by pressing the corresponding green and red buttons (fade in and fade out are built in), and adjust the level of each sound clip by moving the corresponding slider.

Screenshot recording of the Leap Motion moving sound clips (silent)

Click here for Max patcher code and supporting sound clips

Sara Adkins: Recording, assisting sound design, main max patch development, documentation
Dan Moore: Recording, main sound design, assisting max patch development, performance
Estella Wang: Recording

Ambisonic Sound Installation

For the field recordings assignment, our group chose to focus first on the little sounds of everyday life. We recorded small moments that might ordinarily fade into the background but, when focused on, reveal satisfying textural experiences. Some of these were the babbling of a stream in a park, the snap of a camera, the squeak of a hand on a rail, and the droning hums of a refrigerator and stove top. Though our chosen sounds all came from very different environments, we sought to combine them into a single immersive space, leveraging the possibilities of the 8.1 system to create a soundscape that was both familiar and surprising. An important aspect of our approach was changing the “scale” of the sounds as we heard them. We quickly realized that the ambisonics system would allow us to make these tiny sounds feel absolutely enormous; this would place these familiar sounds into a sonically unfamiliar space, and even make the listener feel improbably small.

We used the HOA library to create an ambisonic experience on an 8.1 speaker system, using 8 sounds: a fridge hum, a coin drop, a burner click, a small brook in Schenley, a CFA handrail, a camera click, a chip ‘CRUNCH’, and tape ripping.

We split them into two groups and ambisonically spun one group clockwise and the other counter-clockwise. We played the fridge hum on all 8 speakers equally, because its frequency was so low that you couldn’t tell where it was coming from. The video shows the way both groups rotated and changed throughout the course of the piece. Video link:
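The counter-rotation can be sketched as two azimuth angles advancing in opposite directions, assuming a constant spin rate (the rate below is illustrative, not taken from the patch):

```python
import math

RATE = 0.05  # revolutions per second (assumed for illustration)

def azimuths(t):
    """Azimuth in radians for each group at time t: group A spins
    one way, group B the other, both wrapped into [0, 2*pi)."""
    a = (2 * math.pi * RATE * t) % (2 * math.pi)    # clockwise group
    b = (-2 * math.pi * RATE * t) % (2 * math.pi)   # counter-clockwise group
    return a, b
```

Sampling this each control tick and feeding the angles to the panner keeps the two groups mirrored about their starting positions as they spin.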

The installation had a small amount of live performance added to it. We kept the gain of each sound effect ‘gooey’, open to change as the piece went on based on the reactions of the audience and the environment. The piece started by slowly increasing the gain of one sound effect, then slowly adding more over time. The end of the piece was a slow manual fade of each sound. We did this because we wanted each experience of the installation to be unique and different.

We chose to present the piece with all of the lights (or as many as the room would allow) turned off. This helped further disengage the listeners from the Media Lab environment, removing nearly all visual input and thus heightening the sonic experience. It’s interesting to note that the first cue of the piece was a lighting effect rather than an aural one.

We uploaded our code so that anyone can open and use our Max patch. Max code link:

Group Contribution:

Kayla – Sound Recorder, Max-Patch Coder, Lighting Engineer, and Ambience Engineer.

Julian – Sound Recorder, Audio Editor, Audience Prompter, and Ambience Engineer.

Kaitlin – Sound Recorder, Performance Documenter, and Ambience Engineer.

Joey – Sound Recorder, Audio Editor, Lighting Engineer, and Ambience Engineer.

Golan Levin Response

Sound is complex, complicated, and convoluted (and those are just the “C’s”). It consists of pressure waves that travel through the air as vibrations; when they reach your ear, they cause your eardrum to move, ‘knocking’ a small chain of bones together to make the liquid in your inner ear (the cochlea) move up and down. Your brain takes this tiny bit of moving liquid and interprets it as sound. THAT IS INSANE. No really, just think about how completely insane that is. When I first became interested in acoustics, I was constantly looking for little shortcuts or rules about sound that always hold. But sound isn’t so simple. It is because the concept of sound is so extraordinary that I am still interested in it.

At the beginning of Golan Levin’s lecture, he said that he hasn’t worked in the aural realm in over 9 years, which is extremely surprising after hearing him speak. That class was an hour and a half of fast-paced, exciting theory, experiments, toys, scientific research, and sound concepts. We jumped from YouTube video to YouTube video; there was barely enough time for questions between his excitement to share other installations and experiments. And the best part: 80% of what he showed us I had seen before. But instead of getting bored or tired, my passion for sound grew as I remembered all of the different ways you can express sound.

Like I said at the beginning, sound is a complicated thing, and I think it takes repeated explanation from multiple points of view before you can truly grasp everything that sound is. As an architect, I come from a different background and viewpoint than most people in the field. I can’t read music or think of sound in terms of tempo and pitch; I’m not an engineer who can calculate decibels or reverberation time; I’m not a coder or computer scientist who thinks in scripts and numbers, nor a sound designer who works with speakers, mixers, wires, and hardware. But I have taken classes in each of these departments, and it’s only when the same ideas begin to overlap that I truly understand how deep sound’s complexity goes. Golan spoke to me from the point of view of an artist and a designer, and re-watching and re-thinking old concepts brought me a greater understanding of sound.

Reflection: Golan Levin

I believe Golan Levin sought to flip our perception of how we can be creative as artists: not so much by pushing the boundaries of the obscure/avant-garde realm via technology, but by delving into the origins of how we traditionally visualize sound, to isolate what truly draws us as humans toward the experience of nature, work, or a piece of sound art. I will discuss some of the ramifications of this overarching theme and present some examples to support my claim.

Golan first started off his presentation by referencing a work of art which stunningly portrayed projected facial reactions of music critics to a live performance. One can strive for automation or robotic manipulation of sound, but one of the most powerful mediums for enhancing the emotional experience of music is the human face. Witnessing facial expressions convey their myriad emotions will always have a direct and profound impact on the viewer. Vice versa, it has been shown that what we see in a face can shape what we hear: there is a strong symbiotic relationship between the visual and auditory cortex, and the illusion of what we think we see in a facial expression can override what we are hearing. This has been demonstrated experimentally and is called the McGurk effect.


Project 1: A Pittsburgh Soundfield

The above was recorded in the Media Lab’s 8-speaker arrangement with a Zoom H1, to recreate the sense of space experienced when listening to the piece.

With our project, we created a narrative of different Pittsburgh residents performing mundane tasks throughout their day, highlighting the beauty that can be overlooked during these events. Conversations in different languages or ATM sounds are things the average person may experience, but little attention is given to them, something that we, as a group, wanted to highlight.
We further wanted to add a degree of movement and expressiveness to our recordings. Through spatialization, we created a sense of motion by physically rotating the sound around the audience using the 8-speaker arrangement. This also forced the audience to pay particular attention to sounds that might otherwise have felt stationary. We also added a drone sound in the background of our piece to lend a sense of cohesiveness to the range of sounds.

audience listening to piece in 8 speaker setting
Daniel John editing recordings
Josh Brown working on Max patch

screenshot of Max Patch
Max patch can be downloaded here.


Daniel John – Recording, Sound Editing
Josh Brown – Recording, Max Patch Creation, Performance
Brooke Ley – Recording, Discussion Leader, Documentation

Higher Order Ambisonics – Multichannel Max Patch
Nick Ericson – Rotation Max Patch