For this piece, I tried to recreate some of the noise music I’ve listened to, based on techniques Alexander showed us last week. I took two recordings from Freesound: one of a man screaming and the other of soda being poured into a cup. I used the soda sound for the bass rumble underneath the piece, mainly by pitching the sound downward and adding reverb. I then distorted the scream to create the static layer on top, editing it with time stretching and various filters. I worked in Ableton Live and also ran the sound through a noise vocoder to distort it even further.
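The basic treatments are easy to sketch outside a DAW too. Below is a minimal example, with placeholder file names, that pitches the soda recording down and time-stretches the scream using librosa; the reverb, filtering, and vocoder stages happened in Ableton Live and aren’t reproduced here.

```python
# Minimal sketch of the pitch-down / time-stretch steps using librosa.
# File names and the exact shift/stretch amounts are placeholders;
# reverb and vocoding were done in Ableton Live.
import librosa
import soundfile as sf

soda, sr = librosa.load("soda.wav", sr=None)
scream, sr2 = librosa.load("scream.wav", sr=None)

# Pitch the soda down an octave for the bass rumble
rumble = librosa.effects.pitch_shift(soda, sr=sr, n_steps=-12)

# Stretch the scream to quarter speed for the static layer
static = librosa.effects.time_stretch(scream, rate=0.25)

sf.write("rumble.wav", rumble, sr)
sf.write("static.wav", static, sr2)
```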
I originally created these videos for an installation piece for Augmented Sculpture, which is where the opening picture is from. The video is found footage of cactus and mushroom timelapses distorted through datamoshing. More information about the method of datamoshing I use can be found at this link. I used I-frame destruction on the left and P-frame duplication on the right.
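For readers who just want the gist without the old GUI tools: datamoshing comes down to byte surgery on the compressed stream. The sketch below is an illustration of the idea rather than my actual workflow. It assumes an Xvid-encoded AVI, naively splits the file on its “00dc” video-chunk tags, classifies each frame by the two type bits after the MPEG-4 VOP start code, and then either deletes I-frames or duplicates P-frames. Players tend to tolerate the broken index, and re-encoding the result afterwards bakes the glitches in.

```python
# Toy datamosh, assuming an Xvid-encoded AVI (e.g. made with
#   ffmpeg -i input.mp4 -c:v libxvid -q:v 5 input.avi).
# In MPEG-4 Part 2, a frame (VOP) starts with 00 00 01 B6, and the next
# two bits give its type: 00 = I-frame, 01 = P-frame, 10 = B-frame.
VOP_START = b"\x00\x00\x01\xb6"

def frame_type(chunk):
    i = chunk.find(VOP_START)
    if i == -1 or i + 4 >= len(chunk):
        return None
    return chunk[i + 4] >> 6  # 0 = I, 1 = P, 2 = B

def datamosh(src, dst, dup_p=0):
    data = open(src, "rb").read()
    parts = data.split(b"00dc")    # naive split on the video chunk tag
    out = [parts[0]]
    kept_first_i = False
    for p in parts[1:]:
        t = frame_type(p)
        if t == 0:                 # I-frame
            if not kept_first_i:
                kept_first_i = True    # keep one so playback can start
                out.append(p)
            continue                   # destroy every later I-frame
        out.append(p)
        if t == 1 and dup_p:
            out.extend([p] * dup_p)    # optional P-frame duplication
    with open(dst, "wb") as f:
        f.write(b"00dc".join(out))

datamosh("input.avi", "moshed.avi")             # I-frame destruction
# datamosh("input.avi", "boiled.avi", dup_p=25) # P-frame duplication
```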
Note of caution: datamoshing is really annoying to do. I like this method the most because it gives you the most freedom over how the video is distorted. However, you have to use this very ancient software, since you’re not supposed to be able to corrupt video files in this way. If you want to get into datamoshing, feel free to contact me for some tips and techniques I’ve found helpful.
Audio
The original piece I used was a PaulStretched version of this song by my friend from Pitt. For this project, I wanted to recreate what I did with my video by distorting the audio in an artistic way. I used the Vinesauce ROM Corruptor, which is typically used for corrupting ROM files for video game emulators. It was named after this channel, which plays corrupted video games for entertaining results. Since it works through bit corruption, the tool can be used on more than just ROM files. I sent a WAV of the PaulStretched track through the corruptor twice, using bit-shifting both times. Afterwards I edited the track a bit in Ableton Live to make it a little more listenable. I hope to experiment more with this tool in the future.
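As a rough illustration of what bit-shift corruption does (not the ROM Corruptor’s actual algorithm), the sketch below rotates the bits of randomly chosen bytes in a WAV’s sample data, skipping what it assumes is a standard 44-byte PCM header so the file still opens:

```python
# Toy bit-shift corruption for a WAV: rotate the bits of random bytes in
# the sample data, leaving the (assumed 44-byte) PCM header intact.
# This is a sketch of the general idea, not the Vinesauce tool's code.
import random

def bitshift_corrupt(src, dst, shift=1, rate=0.001, header=44):
    data = bytearray(open(src, "rb").read())
    for i in range(header, len(data)):
        if random.random() < rate:
            data[i] = ((data[i] << shift) | (data[i] >> (8 - shift))) & 0xFF
    with open(dst, "wb") as f:
        f.write(bytes(data))

bitshift_corrupt("paulstretched.wav", "pass1.wav")
bitshift_corrupt("pass1.wav", "pass2.wav")   # two passes, as in the piece
```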
For the synthesizer engine of the piece, we used Max/MSP. Dan’s computer sent OSC data over to Steven’s machine, which synthesized the gameplay. A video of this process can be found above. We used the udpreceive object to grab the data.
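The sending side (the OpenCV machine) can be sketched with the python-osc library. The host, port, and whether the nine digits ride inside the address or as a string argument are assumptions here; the quoted messages below show the format the patch expected.

```python
# Hypothetical sketch of the sender: push the board state to Max's
# udpreceive object as OSC messages. Host and port are placeholders.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.10", 7400)   # Steven's machine / port

# One binary digit per board column, as described below
client.send_message("/white", "100011101")
client.send_message("/black", "011100010")
```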
The OpenCV information was sent in the form of two arrays, each a string of 9 binary digits, one per column of the go board. For example, the white array would send: “/white/100011101”. A ‘1’ represented a piece on that section of the go board, and a ‘0’ represented no piece. The Max code routed the white and black arrays to different sections and parsed/unpacked them into individual messages. The combination of the unpack object and a simple if statement sent a “bang” message whenever a ‘1’ was received. There were 9 indices in each array, so we needed a total of 18 bang systems. Each bang was sent out to an individual synthesizer. In the code we named this abstraction “little_synth”: a small synthesizer with an oscillator and an ADSR envelope. The inputs to the abstraction were a bang message (note on), the pitch number, the attack of the filter, and the decay of the filter.
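The patch itself is visual, but the routing logic is simple enough to mirror in a few lines of Python. This is purely illustrative; the real version is built from Max’s route, unpack, and if objects.

```python
# Text-mode mirror of the Max routing: split the message by colour, then
# emit a "bang" for every '1' in the 9-digit column string.
def handle(address, digits):
    colour = address.strip("/")          # "white" or "black"   (route)
    for col, bit in enumerate(digits):   #                      (unpack)
        if bit == "1":                   #  (if $i1 == 1 then bang)
            bang(colour, col)

def bang(colour, col):
    print(f"bang -> {colour} little_synth #{col}")

handle("/white", "100011101")
```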
Each “little_synth” has an output that is set when the patch loads. The difference between the white group of little synths and the black group is the oscillator waveshape each produces. The white little synths generate their sound with a square wave, using the “rect~” object in Max. The black little synths generate theirs with a saw wave, using the “phasor~” object in Max. We thought this would make a fun fight between the two waveshapes throughout the Go battle.
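For anyone without Max handy, the two waveshapes can be approximated in a few lines of numpy (ignoring the anti-aliasing that rect~ provides):

```python
# Naive (non-antialiased) versions of the two waveshapes:
# square for the white synths (rect~), saw for the black synths (phasor~).
import numpy as np

SR = 44100
t = np.arange(SR) / SR            # one second of samples

def square(freq):
    return np.sign(np.sin(2 * np.pi * freq * t))

def saw(freq):
    return 2.0 * ((freq * t) % 1.0) - 1.0
```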
The little synths also played different, yet harmonically related, pitches: each group voiced a single chord over several octaves, with each synth assigned one chord tone. The white synthesizers played an A minor 7 chord (A C E G) using MIDI pitch numbers 45, 48, 52, 55, 57, 60, 64, 67, and 69. The black synthesizers played a C7 chord (C E G Bb) using MIDI pitch numbers 48, 52, 55, 58, 60, 64, 67, 70, and 72. Ideally, the piece would sound more major or more minor depending on who was winning the game, i.e., who had more pieces on the board. The sound was then all sent to an effects section containing a reverb (using cverb~) and a delay (using delay~). These objects help liven up the dry synthesizers; the delay in particular helps create space between incoming OSC messages. After the effects section, everything is sent to the dac~ object for output.
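Putting the mapping in one place (a reconstruction for illustration, not the patch itself), each board column indexes into one of these voicings, and the standard MIDI-to-Hz formula gives the oscillator frequency:

```python
# Column-to-pitch tables from the text, plus the standard MIDI-to-Hz formula.
WHITE = [45, 48, 52, 55, 57, 60, 64, 67, 69]   # A minor 7 voicing
BLACK = [48, 52, 55, 58, 60, 64, 67, 70, 72]   # C7 voicing

def mtof(m):
    return 440.0 * 2 ** ((m - 69) / 12)

for col in range(9):
    print(col, round(mtof(WHITE[col]), 1), round(mtof(BLACK[col]), 1))
```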
In the future, one improvement would be a method of actually calculating the current winner of the game, since the winner does not necessarily correlate with the number of pieces a player has on the board. It might also be effective to add samples from video games to reinforce the theme created by the chiptune-like synths.
During Golan’s lecture, he showed us this video depicting a “talking piano”. I’m sure a lot of technical effort went into both constructing the mechanical piano-playing robot and converting speech so it could be played in a way that would still be recognized. However, it reminded me of a certain horrifying video.
They both rely on the same concept of layering sound waves, including the overtones and undertones, to produce something that sounds like speech. I find it pretty comedic that one is praised as a technological advancement while the other has become a meme.
I also think it’s interesting that a MIDI converter, in probably its simplest form, is able to extract the notes needed to recreate these vowel sounds. Another part of recognizing the speech in this song comes from how well the listeners already know it. Considering that it has become one of the most popular Christmas songs, it is no surprise that our brains can make out the original lyrics despite the strange presentation.
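As a sketch of what that “simplest form” might look like (my own guess, not the method used in either video), you can take an STFT of the speech, keep the loudest few bins per frame, and quantize them to piano keys:

```python
# Guess at a minimal speech-to-piano converter: STFT, loudest bins per
# frame, quantized to the 88-key range. File name is a placeholder.
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=22050)
S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))
freqs = librosa.fft_frequencies(sr=sr, n_fft=2048)

notes_per_frame = []
for frame in S.T:
    top = np.argsort(frame)[-5:]                 # 5 loudest bins
    hz = freqs[top]
    hz = hz[hz > 0]                              # drop the DC bin
    midi = np.round(librosa.hz_to_midi(hz))
    midi = midi[(midi >= 21) & (midi <= 108)]    # clamp to piano range
    notes_per_frame.append(sorted({int(m) for m in midi}))
```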
The above was recorded in the Media Lab’s 8-speaker arrangement with a Zoom H1 to recreate the sense of space experienced when listening to the piece.
With our project, we created a narrative of different Pittsburgh residents performing mundane tasks throughout their day, highlighting the subtle beauty that can be overlooked during these events. Conversations in different languages or the sounds of an ATM are things the average person may experience but rarely gives attention to, and that is exactly what we, as a group, wanted to bring forward.
We further wanted to add a degree of movement and expressiveness to our recordings. Through spatialization, we were able to create a sense of motion by having the sound physically rotate around the audience using the 8-speaker arrangement. This also pushed the audience to pay particular attention to sounds that would otherwise have stayed stationary. We also added a drone in the background of the piece to lend a sense of cohesion to the range of sounds.
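The rotation itself can be sketched as equal-power panning across eight equally spaced speakers. This is a generic illustration of the idea, not the exact tool we used:

```python
# Equal-power rotation around a ring of 8 speakers: at any angle, only
# the two nearest speakers get signal, with a cosine crossfade between them.
import numpy as np

N_SPK = 8
spk_angles = np.arange(N_SPK) * 2 * np.pi / N_SPK   # speakers on a circle

def gains(theta):
    """Per-speaker gains for a source at angle theta (radians)."""
    d = (theta - spk_angles + np.pi) % (2 * np.pi) - np.pi  # wrapped distance
    g = np.cos(2 * d)                    # fades to 0 at the neighbouring speaker
    g[np.abs(d) > np.pi / 4] = 0.0       # anything farther stays silent
    return g / np.linalg.norm(g)         # normalize for constant power

print(gains(0.0))          # source sitting exactly at speaker 0
print(gains(np.pi / 8))    # source halfway between speakers 0 and 1
```

Sweeping theta over time and multiplying each speaker feed by its gain produces the sense of the sound circling the audience.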