Looking Outwards-04: Sound Art

Weather Thingy ECAL/Adrien Kaeser from ECAL on Vimeo.

One piece of art that stands at the intersection of sound and computation is “Weather Thingy” by Adrien Kaeser, created in 2018. This invention takes real-time weather data as input and converts it into computational data that varies the sound controls of connected instruments. It consists of two main components: a weather station mounted on a tripod microphone stand, and a custom-built sound controller connected to the weather station. The controller interface allows the device to assign the parameters it receives to different audio effects. What inspires me about this piece is that it interprets data across different platforms – it takes weather stimuli, such as rainfall or wind speed, translates them into computational data, and then reapplies that data to generate specific sounds. I find this intriguing in its interdisciplinarity. The artist’s sensibilities come through in the way he assigns the different audio effects to the inputs – there is a distinct, ethereal quality to the noises produced.
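
A minimal sketch of the underlying idea – scaling live weather readings into MIDI-style control values that could drive audio effects – might look like this (the sensor names, ranges, and effect assignments are my own assumptions, not Kaeser’s actual code):

```python
# Sketch: map weather readings onto MIDI-style controller values (0-127).
# Sensor names, ranges, and effect assignments are invented for illustration.

def scale(value, lo, hi, out_lo=0, out_hi=127):
    """Linearly map a sensor reading into a controller range, clamped."""
    value = max(lo, min(hi, value))
    return int(out_lo + (value - lo) / (hi - lo) * (out_hi - out_lo))

def weather_to_controls(wind_speed_ms, rainfall_mm, brightness_lux):
    # Hypothetical assignments: each weather input drives one audio effect.
    return {
        "delay_feedback": scale(wind_speed_ms, 0.0, 20.0),  # windier = longer echoes
        "filter_cutoff":  scale(brightness_lux, 0, 10000),  # brighter = brighter tone
        "reverb_mix":     scale(rainfall_mm, 0.0, 5.0),     # wetter = wetter sound
    }

print(weather_to_controls(wind_speed_ms=7.5, rainfall_mm=1.2, brightness_lux=3000))
```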

LookingOutwards-04

The work I chose to discuss is Soundmachines, a creative sound production device from 2011, made in collaboration with producer Yannick Labbé. It consists of three discs, spinning similarly to vinyl records. Each disc is encoded with different arrangements of colors that correspond to individual tracks in Ableton Live, a digital audio workstation. I particularly enjoyed this project because I am fascinated by analog synthesizers, a musical tool that relates deeply to programming. I also enjoy that there is a degree of human interaction with the piece: the user can shift the placement of the readers along the radius of each disc to change the output.
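
A rough sketch of the disc-reading concept, assuming hypothetical rings, colors, and a made-up color-to-event mapping (the real piece drives tracks in Ableton Live):

```python
# Sketch: each disc is a set of concentric rings of color segments, and a
# movable reader samples the segment under it as the disc spins.
# Colors, rings, and the event mapping below are invented for illustration.

DISC = [
    # ring 0 (innermost) .. ring 2 (outermost); each list is one rotation
    ["red", "black", "red", "red"],
    ["blue", "blue", "black", "blue", "black", "blue"],
    ["green", "black", "green", "black"],
]

COLOR_TO_EVENT = {"red": "kick", "blue": "hat", "green": "chord", "black": None}

def read_disc(ring, steps):
    """Yield the event under the reader for each step of rotation."""
    segments = DISC[ring]
    for i in range(steps):
        yield COLOR_TO_EVENT[segments[i % len(segments)]]

# Moving the reader to a different ring changes the pattern it plays.
for ring in range(3):
    print(f"ring {ring}:", list(read_disc(ring, 8)))
```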

youtube.com/watch?v=_gk9n-2lBb8

LO-04: Blue Jeans and Bloody Tears

Blue Jeans and Bloody Tears is an AI-generated Eurovision song sung by Izhar Cohen, produced by Avshalom Ariel, and published online by Sweaty Machines. The creators fed hundreds of Eurovision song lyrics and melodies to a neural network, which then generated thousands of lines, licks, melodies, beats, and more. From these, a few elements were carefully selected and “welded” together to create the final piece.
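
The creators used a neural network; as a loose stand-in, the same generate-then-curate pipeline can be sketched with a toy word-level Markov chain (the corpus lines here are invented):

```python
# Sketch of the generate-then-curate pipeline: a machine proposes many
# candidate lines, and humans pick the catchiest. The real project used a
# neural network; this toy Markov chain only illustrates the workflow.

import random

CORPUS = [
    "blue jeans and bloody tears",
    "dancing through the neon night",
    "bloody tears in the neon rain",
]

def build_chain(lines):
    """Record which word follows which across the training lines."""
    chain = {}
    for line in lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            chain.setdefault(a, []).append(b)
    return chain

def generate_line(chain, length=6):
    """Random-walk the chain to produce one candidate lyric line."""
    word = random.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

chain = build_chain(CORPUS)
candidates = [generate_line(chain) for _ in range(10)]  # the machine proposes...
print("\n".join(candidates))                            # ...humans curate.
```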

The project, amazingly, produced an eerily catchy song despite its nonsensical lyrics. Commenters even inferred deep meanings from the lyrics, despite the fact that they were generated at random. Through the insanity, the producers’ instinct for selecting the catchiest beats shines through in a song that makes you suspect that maybe AI really is coming for all of our jobs after all.

-Robert

LO-04: Sound Art

One piece I found particularly interesting was Keiko Uenishi’s LandFilles. It is a structure made out of recyclables, with an “instrument” made of used plastic bottles that Uenishi moves to interact with the piece. An algorithm (she doesn’t describe it) then interprets her movements as though the bottles were a tube or resonance chamber to create sound, while a separate person uses video cameras, lights, and monitors to change the environment of the instrument. It’s a great example of finding a use for anything, and I find the concept of natural resonance extremely interesting. It was fascinating to see how the algorithm read Uenishi’s movements with the structure of bottles to create these hollow, airy sounds. The instrument also had an interesting range and tone, spanning from a lower brass instrument such as a French horn to the upper limits of a flute, while incorporating piano-like sounds as well.

The program involved most likely relies on either a set of sensors throughout the bottles or a motion-tracking system that follows their movement, and then models a chamber whose sound is played through speakers. The piece presents Uenishi’s personal views of the environment as conservation-focused: the entire structure is built from recycled materials, and the sounds coming from the instrument aren’t your typical beautiful notes. It produces eerie, unsettling pitches that I can only read as a commentary on our treatment of the environment.
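
The speculation above – movement data driving a modeled resonance chamber – could be sketched with a Karplus-Strong-style feedback loop, where a tube’s length sets the delay and therefore the resonant pitch. This is purely illustrative, not Uenishi’s actual algorithm:

```python
# Sketch: model a bottle/tube as a damped feedback delay line
# (Karplus-Strong style). The tube length sets the resonant frequency.
# Illustrative only; not the algorithm used in LandFilles.

import random

def tube_resonance(length_m, sample_rate=44100, duration_s=0.5, damping=0.996):
    speed_of_sound = 343.0                        # m/s in air
    freq = speed_of_sound / (2 * length_m)        # open-tube fundamental
    delay = int(sample_rate / freq)               # feedback delay in samples
    buf = [random.uniform(-1, 1) for _ in range(delay)]  # noise excitation burst
    out = []
    for i in range(int(sample_rate * duration_s)):
        # averaging adjacent samples = crude low-pass in the feedback path
        sample = damping * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
        buf[i % delay] = sample
        out.append(sample)
    return freq, out

freq, samples = tube_resonance(length_m=0.3)      # a 30 cm bottle
print(f"modeled fundamental: {freq:.1f} Hz, {len(samples)} samples")
```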

http://soundleak.org/works/landfilles

A small section of the audiovisual performance

LO 4

I have decided to focus on Christina Kubisch’s Cloud, one piece of the collection I was able to experience at SFMOMA. Kubisch’s Cloud sculpture is a snarl of vivid red wires and cable ties suspended in the gallery space. Visitors experience the audio of the piece by wearing headphones that intercept the electromagnetic transmissions from the piece, with each movement changing the audio that is transmitted. Nearby areas carry sounds recorded from the same space, but each is ultimately still a different sound. In essence, the viewers become human mixers, moving around or turning their heads to create what they want to hear from Cloud, composing in real time.

Kubisch’s Cloud project started in 2011, and it has become even more sophisticated since then. The sounds of the Cloud are programmed into different sections of the physical sculpture, creating the interaction of moving around the Cloud to experience different sounds.
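
One way to model that interaction is a distance-weighted mix: each region of the wire tangle carries its own signal, and the headphone mix is weighted by the listener’s proximity to each region. The zone positions and track names below are invented for illustration:

```python
# Sketch: the headphone mix as an inverse-distance blend of per-zone audio.
# Zone coordinates and track names are hypothetical.

import math

ZONES = {                     # invented (x, y) centers of wire regions
    "street_recording": (0.0, 0.0),
    "birdsong":         (3.0, 1.0),
    "radio_static":     (1.5, 4.0),
}

def headphone_mix(listener_xy, falloff=1.5):
    """Return per-zone gains that rise as the listener approaches a zone."""
    lx, ly = listener_xy
    gains = {}
    for name, (zx, zy) in ZONES.items():
        dist = math.hypot(lx - zx, ly - zy)
        gains[name] = 1.0 / (1.0 + dist) ** falloff  # inverse-distance falloff
    total = sum(gains.values())
    return {name: g / total for name, g in gains.items()}  # normalize to 1

# Walking from one zone toward another crossfades the composition.
for pos in [(0.0, 0.0), (1.5, 0.5), (3.0, 1.0)]:
    print(pos, {k: round(v, 2) for k, v in headphone_mix(pos).items()})
```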

LookingOutwards-04

Laetitia Sonami’s signature instrument, the Lady’s Glove, was particularly interesting to me. It is fitted with a vast array of sensors that track the motions of her dance, and those movements shape the music. It is interesting and admirable that she takes physical movements, processes them digitally, and projects the resulting sound back into the physical world. It is very unconventional and innovative. The signals go through STEIM’s SensorLab and are mapped onto Max/MSP software, and the mapping and sonic material change with every composition. The signals control sound parameters and processes; they can also control motors, light bulbs, and video. It is fascinating how intuitive body movement results in digitally processed sound art.
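
A sketch of the kind of sensor-to-parameter mapping described above, with the key point being that the mapping table itself is swappable per composition. The sensor names, ranges, and targets are assumptions, not Sonami’s actual patch:

```python
# Sketch: translate raw glove sensor readings into control targets via a
# per-composition mapping table. All names and ranges are invented.

def normalize(v, lo, hi):
    return max(0.0, min(1.0, (v - lo) / (hi - lo)))

# Each composition supplies its own mapping table.
COMPOSITION_A = {
    "finger_bend":  lambda v: ("grain_density", normalize(v, 0, 90)),   # degrees
    "hand_height":  lambda v: ("filter_cutoff", normalize(v, 0, 2.0)),  # meters
    "wrist_switch": lambda v: ("motor_on", bool(v)),   # a non-sound target
}

def process(sensor_frame, mapping):
    """Translate one frame of raw glove readings into control messages."""
    return dict(mapping[name](value) for name, value in sensor_frame.items())

frame = {"finger_bend": 45, "hand_height": 1.2, "wrist_switch": 1}
print(process(frame, COMPOSITION_A))
```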

LO 4 Sound art

Longplayer
Longplayer is a thousand-year-long “musical piece” created by composer and artist Jem Finer. It started playing at midnight on the 31st of December 1999 and won’t repeat itself until 2999. The piece is in reality a re-interpretation of a source piece that is 20 minutes and 20 seconds in length. After being reprocessed by a simple computer algorithm, the variations and combinations can play for a runtime of 1,000 years. The algorithm is written in the SuperCollider language and is itself written to plan for the eventual failure and obsolescence of current technology; thus, the musical score was designed to be playable on any instrument with any technology. Finer produced the piece after spending several years studying musical systems, but the piece itself presents an exploration on a far larger scale. Lasting 1,000 years, the piece is a reflection on and exploration of time, scale, ephemerality, and our traditional ideas about music. Although the score is predetermined, the exact way the piece will play out is a constant unknown to a human listener because of the vast scale on which it exists. For Finer, the piece is an exploration into the unknown, something that will capture the human imagination on an unfathomable level.
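
One way to see how a short source can yield an enormously long non-repeating runtime is to layer loops whose period lengths are pairwise coprime, so the combined state only realigns at the least common multiple of all the periods. This is a loose illustration of the principle, not Finer’s actual SuperCollider score (which is tuned to exactly 1,000 years):

```python
# Sketch: non-repetition from layered loops with pairwise coprime periods.
# The loop lengths below are hypothetical.

from math import lcm

# Hypothetical loop lengths in seconds, chosen to be pairwise coprime
# so that no shorter realignment point exists.
loop_periods = [1213, 1217, 1219, 1220, 1221, 1223]

total = lcm(*loop_periods)                # seconds until all loops realign
years = total / (60 * 60 * 24 * 365.25)
print(f"combined cycle: {total} s ≈ {years:.2e} years")
```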

Longplayer

LO-04-Sound Art

Multiverse is an audio-visual installation, created by fuse, that attempts to show the eternal birth and death of infinite parallel universes. I was attracted to this artwork not only because of its astonishing visuals but also because of its concept. I’ve been fascinated with the idea of the multiverse, or just the universe in general, and time. However, I’d never thought about expressing that concept through digital art and sound installation; I didn’t even know it was possible to physically show something so abstract. In the installation, two large surfaces mirroring each other generate an infinite reflection of the image towards the sky and the center of the earth. The creators succeeded in combining the theory with algorithmic art and created a memorable artwork that is both visually and audibly stunning.

An application developed in openFrameworks manages the various scenes and interacts with Ableton Live and Max/MSP to produce the soundtrack through a generative sound system. In order to display “realistic” and infinite scenes, the program applies small random changes to the parameter values of the physical laws during each bounce, leading to a new universe with slightly different fundamental properties. In addition, to prevent it from replaying the same scene, every thirty minutes the program enters an evolutionary transition derived from the previous genetic information.
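
The generative logic described above might be sketched like this: small random perturbations to the physical constants at every bounce, plus a larger evolutionary step on the thirty-minute transition. The parameter names and ranges are invented; fuse’s actual system runs in openFrameworks with Ableton Live and Max/MSP:

```python
# Sketch: per-bounce jitter of physical constants plus a larger periodic
# "evolutionary" mutation. Parameters and magnitudes are hypothetical.

import random

universe = {"gravity": 9.8, "elasticity": 0.9, "expansion_rate": 1.0}

def bounce(u, jitter=0.02):
    """Small random drift in the physical laws at every collision."""
    return {k: v * random.uniform(1 - jitter, 1 + jitter) for k, v in u.items()}

def evolve(u, mutation=0.25):
    """Larger mutation applied on the thirty-minute transition."""
    return {k: v * random.uniform(1 - mutation, 1 + mutation) for k, v in u.items()}

for _ in range(5):
    universe = bounce(universe)
universe = evolve(universe)   # the half-hour transition
print({k: round(v, 3) for k, v in universe.items()})
```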

video of Multiverse

LO-04- Sound Art

Don Ritter’s Intersection, 2019 (reconstruction).

This installation takes place in a completely dark room, entered through a light-lock door, and plays the audio of 8 lanes of moving traffic. The installation interacts with visitors walking through the room: when a visitor walks into a lane, a car screeches to a stop in front of them. If the visitor then stands still for a few seconds, they will hear cars smashing into the stopped car. The installation uses infrared sensors, feedback technology, and 8- or 16-channel audio to create the sounds of traffic.
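
The interaction reads like a small per-lane state machine: a sensor reports whether a visitor is blocking the lane, a screech plays on entry, and standing still escalates to a crash. A sketch under those assumptions (the timings and sound names are guesses, not Ritter’s system):

```python
# Sketch: one traffic lane's interaction logic as a tiny state machine.
# Timings, event names, and the sensor interface are hypothetical.

CRASH_DELAY = 3.0   # seconds of standing still before the crash plays

def update_lane(occupied, entered_at, now, play):
    """One step of a single lane's logic; returns the updated entry time."""
    if occupied:
        if entered_at is None:
            play("screech")          # visitor just stepped into the lane
            return now
        if now - entered_at >= CRASH_DELAY:
            play("crash")            # visitor froze: cars pile into them
        return entered_at
    return None                      # lane clear again: traffic resumes

# Simulated visitor: steps into the lane at t=1 s and stands still.
entered = None
for t in [0.0, 1.0, 2.0, 4.5]:
    entered = update_lane(occupied=(t >= 1.0), entered_at=entered, now=t,
                          play=lambda s: print(f"t={t:.1f}s -> {s}"))
```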

This work is inspiring in how it addresses the power of simulation with technology and the hypersensitivity to sound it creates through the subtraction of sight. Arthur Kroker mentions how “we are always being smashed by the freeway traffic of high technology”. By having the high-intensity audio interact with the visitor’s walking, Ritter creates a fully immersive space that reacts to human motion. Ritter offers a commentary on the accelerating pace of technology, portrayed through the speeding traffic, while also rendering visitors blind in the dark space. The dichotomy of the installation lies in its dependency on these same advancing technologies to portray the sounds and the interaction with visitors, while at the same time depicting the intimidating pace of technological growth.

Looking Outwards 4

Project Name: Sorting

Created by Ren Yuan in 2018, Sorting attempts to transform data into an auditory system through various algorithmic methods of sorting a data set. Yuan implemented seven different sorting algorithms, which produce a different set of sounds each time even when the same data set is run through each sorting system. The auditory and sorting output is also visualized through circular webs and shaded sectors that fluctuate depending on how the data is sorted, compared, swapped, and historically related to other data. Yuan lists the seven algorithms used to sort a data set: “insertion sort, shell sort, bubble sort, quick sort, selection sort, heap sort, [and] merge sort”. Each visual and auditory output depends on the data inputted, the algorithm used, and the random initial condition. However, Yuan does define an initial color palette and web base through the aforementioned circular form, which sets some visual boundaries within which the final piece is contained.
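
The sonification idea can be sketched by instrumenting a sorting algorithm so that every comparison and swap emits an event, here mapped to a pitch derived from the values involved (the value-to-note mapping is my assumption, not Yuan’s code):

```python
# Sketch: a sorting algorithm instrumented to emit sound events. Running a
# different algorithm over the same data yields a different event stream,
# which is the core of the piece. The pitch mapping is invented.

def sonified_bubble_sort(data, emit):
    a = list(data)
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            emit("compare", a[j], a[j + 1])
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                emit("swap", a[j], a[j + 1])
    return a

def emit(kind, x, y):
    midi_note = 48 + (x + y) % 36           # map the pair to a pitch
    print(f"{kind:7s} values=({x},{y}) note={midi_note}")

print(sonified_bubble_sort([5, 2, 9, 1], emit))
```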