This week I am doing my Looking Outwards on Martin Backes, a technologist, hacker, DJ, and composer based in Berlin, Germany.
This piece, I am Sitting in a Machine, is a custom-made algorithmic work that begins with a recording of an artificial human voice reciting the text shown in the video. The voice is then run through an MP3 encoder many times over, progressively distorting it and exposing the artifacts of the data format. It’s interesting how this work strips away the more “human” parts of the speaking voice, leaving the automated voice and the screeching sound that emerges toward the end.
Game of Skill 2.0 by Christine Sun Kim is an interactive installation resembling a zipline: a text about the future, written by Kim and voiced by a museum intern (Kim has been deaf since birth), plays at different volumes and speeds depending on the direction and speed of the participant. I find this project compelling because it reframes listening, often regarded as a passive activity, as something requiring physical labor to “acquire.” The project also communicates her personal struggles, which makes it especially meaningful. On the technical side, the project uses Velcro, magnets, custom electronics, and the intern’s voice.
[A snapshot of how users interact with the installation]
Link | http://christinesunkim.com/work/game-of-skill-2-0/
Work | Christine Sun Kim. Game of Skill 2.0. October 11, 2015 – March 7, 2016
Example of the Continuator (2000) by François Pachet in use, with Albert van Veenendaal on piano
The Continuator (2000) is a computational music-generation system by François Pachet. It works in conjunction with a musician to improvise musical sequences in real time, creating melodies that may be indistinguishable from those played by the human musician (in the video shown, it is being used as part of a “Musical Turing Test” to see whether listeners can tell the computer-generated music apart from the human-generated music).
I’m really interested in the way the Continuator works in conjunction with a musician, as it uses the musician’s playing as a reference point to come up with new but stylistically similar tunes. I like the idea of using algorithms in partnership with human creativity, as that seems to open up more possibilities than just trying to create an algorithm that will recreate human abilities perfectly.
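A drastically simplified sketch of this kind of style continuation, assuming nothing about Pachet’s actual implementation: a first-order Markov model learns which pitches tend to follow which in the musician’s phrase, then samples from those statistics to extend it. (The real Continuator reportedly uses much richer, variable-order models.)

```python
import random

def build_model(notes):
    """First-order Markov model: for each pitch, which pitches followed it."""
    model = {}
    for a, b in zip(notes, notes[1:]):
        model.setdefault(a, []).append(b)
    return model

def continue_phrase(notes, length, seed=0):
    """Continue a phrase in the style of the input, one note at a time."""
    rng = random.Random(seed)
    model = build_model(notes)
    current = notes[-1]
    out = []
    for _ in range(length):
        choices = model.get(current)
        if not choices:                 # dead end: restart from the input
            current = rng.choice(notes)
            choices = model.get(current, notes)
        current = rng.choice(choices)
        out.append(current)
    return out

# A made-up input phrase as MIDI pitches (C4 D4 E4 C4 E4 G4 ...).
phrase = [60, 62, 64, 60, 64, 67, 64, 62, 60]
print(continue_phrase(phrase, 8))
```

Because every transition is learned from the input, the continuation stays stylistically tied to what the musician just played, which is the property I find so appealing here.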
This series of “sound objects,” created by installation artist Zimoun, consists of architectural soundscapes constructed from simple, functional components. These components have ranged from ping pong balls, chains, and cardboard boxes to springs and slats of wood, usually “activated” or displaced by an array of simple servo motors. While some of his more elaborate collaborative pieces incorporate plotters and hot plates, the majority of these sound objects call attention not to their complex construction but to the sonically resonant qualities of the commonplace materials used, especially in mass quantities. The architectural systems constructed from these individual sound objects articulate the tension between the orderly and the chaotic (or the chaotic within the orderly). Rather than producing a distinct sound, these “sound objects” emit more of an acoustic hum that feels industrial, yet elegant.
I am extremely interested in the fabrication and facilitated performance of large-scale patterns: how intimate sonic and visual experiences can unfold from an environment, rather than from an encapsulated piece of media. I also love how, through analog materials and motion, the “computational” aspects of these pieces are downplayed in a very elegant way. To me, this really emphasizes what computational art is actually about: not depictions of technology, but depictions of ideas. That being said, large-scale installations with functionality like these “sound objects” could not easily be accomplished without computation. The vastness of these patterns plays to the computer’s ability to process large quantities of information and execute many outputs simultaneously.
For this week, I decided to look into the work of David Cope. A music professor at the University of California, Santa Cruz, Cope writes algorithms and programs that take in music and output new compositions in the style of the original input.
The example above uses his Experiments in Musical Intelligence to recreate the style of Antonio Vivaldi. His program relies on processes such as deconstruction, analyzing signatures, and finding compatibility between works in order to create new compositions. In the case of this example, the music video accompanying the piece was also algorithmically created.
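A toy sketch of the recombination idea, not Cope’s actual EMI code: split source phrases into small overlapping cells, then chain together cells whose boundary pitches agree, so the output stays locally idiomatic to the input. The source phrases here are made up, standing in for Vivaldi excerpts.

```python
import random

def recombine(phrases, n_segments, seed=0):
    """Chain short cells from the sources, preferring cells that
    begin on the pitch where the previous cell ended."""
    rng = random.Random(seed)
    segments = []
    for phrase in phrases:
        for i in range(0, len(phrase) - 1, 2):
            segments.append(phrase[i:i + 3])    # overlapping 3-note cells
    result = list(rng.choice(segments))
    for _ in range(n_segments - 1):
        last = result[-1]
        compatible = [s for s in segments if s[0] == last]
        if compatible:
            result.extend(rng.choice(compatible)[1:])  # splice, drop overlap
        else:
            result.extend(rng.choice(segments))        # fall back: any cell
    return result

# Two made-up source phrases as MIDI pitches.
sources = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
           [67, 69, 71, 72, 71, 69, 67, 65, 64]]
print(recombine(sources, 5))
```

The compatibility check is the interesting part: it is a crude analog of how recombinant composition keeps spliced fragments from sounding arbitrary at the seams.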
I think the concept of using new technologies to recreate the styles of old composers is incredibly interesting, and it works surprisingly well. However, one flaw of this sort of technique is that algorithms can only analyze how each note is treated; they cannot take into account the emotions a composer feels while creating.
Porter Robinson is an EDM producer specializing in electro house, dubstep, synthpop, progressive house, and experimental genres. In his album Worlds, Porter used heavy sampling of reproduced Vocaloid vocals as well as original SoundFonts and sound-capturing methods. As an album project, Worlds influenced me significantly in my approach and philosophy toward performance art and the role of sound and music in experiential design.
Computationally, Porter developed many of his ambient sounds using programmatic learning to simulate improvisation. He also drew on music databases, sampling from them and incorporating material into each song on Worlds.
My example of computer music is interesting in its creativity and its alternative use of a computer-based music program. Using a MIDI editor, Andrew Huang drew a unicorn on the digital score sheet and recorded a video of the music it played. He used MIDI as a drawing device and literally gave a voice to a drawing he made, allowing it to sing for itself, in a sense. The only date available is the date the artist posted the video to YouTube, February 20th, 2017, so this is a very recent piece.
There’s not a lot I can say about it, considering it’s not really meant to be viewed as a formal art piece, aside from mentioning that I found the whole concept quite funny and clever, and the music itself sounds surprisingly good. It’s easy to imagine how something like this could turn into somebody’s art practice, and in fact it has inspired many other people to create drawings that double as musical compositions, using MIDI as a medium/interface; there are even tutorials that help you learn how to do it yourself.
In terms of the algorithms associated with the work, I’m not sure how large a role they play in the creation of these sound drawings; the algorithms lie mostly in the MIDI program itself.
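The core mapping behind such sound drawings is simple enough to sketch. This is a hypothetical reconstruction, not Huang’s actual file: in a piano-roll editor, a drawn point’s horizontal position becomes a note’s start time and its vertical position becomes its pitch, clamped to a playable range.

```python
def points_to_notes(points, lowest_pitch=36, highest_pitch=96):
    """Map sketch points on a piano-roll grid to MIDI-style note events.
    x becomes the note's start time (in ticks), y becomes its pitch.
    The sketch data below is invented for illustration."""
    notes = []
    for x, y in sorted(points):
        pitch = max(lowest_pitch, min(highest_pitch, y))
        notes.append({"start": x, "pitch": pitch, "duration": 1})
    return notes

# A few points tracing a crude shape on the grid.
sketch = [(0, 60), (1, 64), (2, 67), (3, 64), (4, 110), (5, 20)]
for note in points_to_notes(sketch):
    print(note)
```

Everything after this mapping (timing resolution, instrument choice, how extreme strokes get clamped) is where the editor’s own behavior shapes how the drawing actually sounds.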