For this week’s looking outwards, I decided to focus on a piece of algorithmic sound art called “I am sitting in a machine” by Martin Backes. The work begins with a recording of an artificial human voice reciting a text, which is then run through an MP3 encoder over and over again. With each iteration of the loop, the artifacts of the encoding process reinforce themselves and gradually distort the artificial voice, revealing its data format. The piece is a homage to composer Alvin Lucier’s 1969 sound art piece “I am sitting in a room,” reimagined computationally. “I am sitting in a room” is built on a similar idea: a recording is played back and re-recorded over and over again, and because the room emphasizes certain frequencies, the words slowly become unintelligible, replaced by the pure resonant harmonies and tones of the room itself.
Alvin Lucier’s work explores the physical properties of sound, the resonance of spaces, and the transmission of sound through physical media, whereas Backes’ work is about digitized information and its artifacts, hearing science, and telecommunications. He wanted to show how digitized information produces unexpected phenomena in the same way physical environments do. He explains how he achieved this phenomenon through computational techniques: “I have rewritten the original lyrics from the perspective of a machine. As a next step, I used the artificial human voice of a text-to-speech function and recorded the text via a script. I then wrote another script and ran the recording into a MP3 encoder automatically, over and over again. By the help of this recursive algorithm, I produced 3000 successive iterations of the 128 kbps 44.1 kHz MP3 encoding.”
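To make the recursive re-encoding idea concrete, here is a minimal sketch of how such a loop could be scripted. This is only an illustration under my own assumptions (it uses ffmpeg as the MP3 encoder and hypothetical file names), not Backes’ actual script, though it follows the parameters he describes: 128 kbps, 44.1 kHz, 3000 iterations.

```python
# Sketch of a recursive MP3 re-encoding loop, assuming ffmpeg is installed.
# File names and the choice of ffmpeg are illustrative, not Backes' own setup.
import subprocess

ITERATIONS = 3000          # Backes reports 3000 successive encodings
current = "voice.wav"      # hypothetical text-to-speech recording

for i in range(1, ITERATIONS + 1):
    nxt = f"iteration_{i:04d}.mp3"
    # Re-encode the previous result as a 128 kbps, 44.1 kHz MP3;
    # each pass compounds the lossy-compression artifacts.
    subprocess.run(
        ["ffmpeg", "-y", "-loglevel", "error",
         "-i", current,
         "-codec:a", "libmp3lame", "-b:a", "128k", "-ar", "44100",
         nxt],
        check=True,
    )
    current = nxt
```

Because MP3 compression is lossy, every pass discards a little more detail, so the encoding artifacts accumulate rather than cancel out, which is what gradually exposes the “sound” of the data format itself.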
I admire this project because it creates a connection between the computational and physical worlds, revealing that similar phenomena can occur in both. There is also a web version of this sound art online: I am sitting in a machine