LO – 10

Pazhutan Ateliers is a computational music education and production project by the duo M. Pazhutan and H. Haq Pazhutan. The course topics listed on the website include (but are not limited to) electronic/computational music, music appreciation, and sound art.

The particular project I looked at was “Cy-Ens,” short for cybernetic ensemble. To quote the project page, “Cy-Ens is our computer music project with the ambition of expanding the potentials of understanding the aesthetics of computational sound and appreciation of logic, math and art.” The album consists of 15-to-30-minute tracks of ambient, computer-generated noise. The creation of the work involved open-source audio software and programming languages, as well as various physical MIDI controllers such as knobs, sliders, and percussion pads. The concept of the project is to create abstract sound compositions by letting them arise from mathematical patterns.
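The project page doesn’t publish any code, but to get a feel for the core idea, here is a rough Python sketch of a composition arising from a mathematical pattern. Everything in it (the logistic-map recurrence, the frequency range, the file name) is my own assumption for illustration, not Cy-Ens’s actual method.

```python
# A rough sketch (my own construction, not Cy-Ens's actual code): a
# chaotic mathematical pattern -- the logistic map -- chooses the
# frequencies of a slowly shifting ambient drone.
import math
import struct
import wave

RATE = 44100            # samples per second
STEP_SECONDS = 2        # how long each pattern value sustains

# Logistic map: x -> r * x * (1 - x), a classic chaotic recurrence.
r, x = 3.9, 0.5
frequencies = []
for _ in range(8):
    x = r * x * (1 - x)
    frequencies.append(100 + 400 * x)   # map [0, 1] onto 100-500 Hz

frames = bytearray()
phase = 0.0
for freq in frequencies:
    for _ in range(RATE * STEP_SECONDS):
        phase += 2 * math.pi * freq / RATE
        frames += struct.pack("<h", int(12000 * math.sin(phase)))

with wave.open("pattern_drone.wav", "wb") as out:
    out.setnchannels(1)     # mono
    out.setsampwidth(2)     # 16-bit samples
    out.setframerate(RATE)
    out.writeframes(bytes(frames))
```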

Looking Outwards – 10 – Computer Music

The Stanford Laptop Orchestra was founded in 2008 by Ge Wang, and it was born out of ChucK, a programming language for music in which the programmer can code frequencies to create different sequences, and repetitions of sequences, generating computer music.
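Actual ChucK code is beyond the scope of this post, but the basic idea it enables, stepping through a sequence of frequencies and repeating it in a tightly timed loop, can be approximated in Python. The notes and durations below are arbitrary choices of mine, not anything from the Laptop Orchestra’s repertoire.

```python
# A rough Python analogue (not actual ChucK code) of a timed loop:
# step through a frequency sequence, repeating it, and render each
# step as a short sine tone.
import math

RATE = 8000  # a low sample rate keeps the example small

def tone(freq_hz, dur_s):
    """Return dur_s seconds of a sine wave at freq_hz as float samples."""
    n = int(RATE * dur_s)
    return [math.sin(2 * math.pi * freq_hz * i / RATE) for i in range(n)]

sequence = [220.0, 277.2, 329.6, 440.0]  # an A-major-ish arpeggio (assumed)
samples = []
for repetition in range(4):              # repeat the sequence, ChucK-style
    for freq in sequence:
        samples.extend(tone(freq, 0.25))

print(f"rendered {len(samples)} samples ({len(samples) / RATE:.1f} s)")
```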

The orchestra itself is an ensemble of laptops, humans, controllers, and special hemispherical speaker arrays. I really admire how the speakers were created through a nonstandard use of an everyday object: they are actually IKEA bowls turned upside down, with six holes drilled in each, a base plate added, and car speakers and amplifiers mounted inside. The finished speakers are placed near the orchestra and project the sound coming from the laptops, emulating real instruments, where the music comes from the artifact itself. The controllers were originally created for gaming purposes; Wang decided to use them for prototyping instruments because of their ability to track the positions of players’ hands.

Wang’s artistic sensibilities manifest in the Laptop Orchestra through his passion for transforming the conventional orchestra with computer music and technology. The computer’s precision allows for the creation of new sounds and for automation in performing music.

LO-10: 1935 by Florian Hecker

“What do machines hear that humans cannot?”

For this week’s Looking Outwards, I looked up several artists and composers who work with sound as a part of their artwork. For all the artists I came across, sound is a material to experiment with and to create artistic expression from. Several sound artists, such as Rie Nakajima, build objects as part of their installations that create sound effects the audience can experience. Sound artists design sound for different spaces and environments, which determines the quality and style of the sound art.

While reflecting on the differences between electronic music and sound art, I learnt that many sound artists use environments and physical objects to make sounds; computer music is unique in that sense. I researched the computer musician Florian Hecker, specifically his work 1935. I chose this work because its overall sound does justice to expressing the medium it is created in.

1935 by Florian Hecker:

The final project is a soundscape whose modulation varies as different data is used as input.

The description of the video explains how the sound itself embodies the listening behavior of machines. Hecker exhibits how the sounds generated by feeding different inputs to the computer can show measures of abstraction and scales of resolution. He also tends to personify machines, asking, “What do machines hear that humans cannot?” He thereby truly creates the effect of a different, non-human type of listening, suggesting that computers listen differently than we do.
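Hecker’s actual process isn’t documented here, but as a sketch of the general idea of data-dependent sound, here is a small Python illustration in which the same synthesis routine “listens to” different byte streams and produces different modulations. All the constants and inputs are my own assumptions.

```python
# Sketch of data-driven modulation (my own illustration, not Hecker's
# process): any byte stream can steer the frequency of a carrier tone,
# so different inputs yield audibly different soundscapes.
import math

RATE = 16000

def modulate(data: bytes, seconds_per_byte=0.05):
    """Map each input byte to a frequency around a 300 Hz carrier."""
    samples, phase = [], 0.0
    for b in data:
        freq = 300 + (b / 255) * 600          # bytes choose 300-900 Hz
        for _ in range(int(RATE * seconds_per_byte)):
            phase += 2 * math.pi * freq / RATE
            samples.append(math.sin(phase))
    return samples

# Two different inputs produce two different "listenings" of the data.
out_a = modulate(b"machine listening")
out_b = modulate(bytes(range(0, 256, 16)))
print(len(out_a), len(out_b))
```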

Florian Hecker is a media artist affiliated with the Edinburgh College of Art and the MIT art program, and he showcases his work and installations at leading art galleries.

Blog link: http://florianhecker.blogspot.com/

Looking Outwards 10

Since my Looking Outwards 04 was about a new musical instrument, I decided to focus on computational music for this Looking Outwards. A piece that I found interesting is called Blue Jeans and Bloody Tears. It is a Eurovision-style song sung by Izhar Cohen and produced by Avshalon Ariel, created using an AI that was fed hundreds of Eurovision songs, whose outputs were shaped into the final song. I admire this piece because an AI algorithm was able to produce a genuinely catchy song with nothing but other songs as input. This shows how capable AI is becoming: if it can already create a good, catchy song, it has the potential for even more in the future.

Looking Outwards 10 – Computational Music

The project I am looking at for this week is Charlie Puth’s Attention (it’s a song). Charlie Puth is not a classically trained musician, so a lot of his self-produced music is made through computational software. For instance, for the verse of Attention, he recorded himself humming the melody into the voice memos app on his phone. After uploading it to Pro Tools, he could choose the instrument he wanted to play that melody, then go into a graph editor and change how the melody sounded without ever needing to pick up an instrument. He was also able to change the quality of the sound to achieve a different emotional effect on a subconscious level; he added tape crackle in the background to give the impression of analog music. I admire this project because it widens the sphere of accessibility for making music. As a person who has always picked up instruments without ever truly succeeding at them, this is very appealing to me.
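For a sense of what editing a melody as data rather than as a recording looks like, here is a hypothetical sketch using the third-party Python library mido. The melody and the edits are invented; this only illustrates the kind of manipulation a graph editor performs, not Puth’s actual session.

```python
# Hypothetical sketch: a hummed melody stored as MIDI note events can be
# reassigned to a new instrument and reshaped without re-recording.
# Uses the third-party `mido` library; the melody itself is invented.
from mido import Message, MidiFile, MidiTrack

hummed = [60, 62, 64, 62, 60]   # the captured melody, as MIDI note numbers

mid = MidiFile()
track = MidiTrack()
mid.tracks.append(track)

track.append(Message("program_change", program=4, time=0))  # pick an instrument
for note in hummed:
    edited = note + 2                       # a "graph editor" edit: shift up
    track.append(Message("note_on", note=edited, velocity=80, time=0))
    track.append(Message("note_off", note=edited, velocity=0, time=240))

mid.save("melody_edit.mid")
```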

http://www.avid.com/pro-tools#Music-Creation

LO: Computer Music

Iamus: Hello World! (first piece composed by Iamus)

While I was doing some exploring in Computer Music, I stumbled upon Iamus. Iamus is a computer that can write contemporary classical music scores, needing only eight minutes to create a full composition in different musical formats. Iamus is taught the basics of human composition, such as the limitations of what can be played, and it is constantly evolving as more source material is added to the software, similar to how a musician only grows with more practice. Iamus is inspired by evolution, as it picks and alters the source material to create complex music pieces. So far, it can only compose contemporary classical music, but it has the potential to evolve and compose other genres. It is so weird to think that a computer can compose music at the same level as the composers we admire. However, there is the question of whether artificial intelligence can match the authenticity of humans, especially the drive and passion of composers translated into music.
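Melomics hasn’t published Iamus’s internals, but the evolutionary idea, mutating material and keeping the fitter variants, can be sketched in a few lines of Python. The fitness rule below (preferring small melodic steps) is purely an assumption for illustration, not Iamus’s actual criteria.

```python
# Toy evolutionary composer (my own sketch, not Iamus's algorithm):
# mutate a melody and keep variants that a simple fitness rule prefers.
import random

def fitness(melody):
    # Assumed rule: smoother melodies (small steps between notes) score higher.
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

def mutate(melody):
    child = melody[:]
    i = random.randrange(len(child))
    child[i] += random.choice([-2, -1, 1, 2])   # nudge one note
    return child

melody = [random.randint(60, 72) for _ in range(8)]  # random starting material
for generation in range(200):
    child = mutate(melody)
    if fitness(child) >= fitness(melody):        # selection pressure
        melody = child

print("evolved melody (MIDI notes):", melody)
```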

Iamus: http://melomics.com/iamus

LO: Sonic Computational Art

Game of Skill 2.0 is a piece created by deaf sound artist Christine Sun Kim that explores the relationship between labor and listening. With the use of magnets, velcro strips, and custom electronic pieces driven by a sound/position tracking code, Kim is able to represent the subjectivity in an activity that seems so instinctual and simple for those without hearing disabilities. 

As the computational electronic system produces a radio sound that responds to the players’ direction and speed, Kim’s piece lends another perspective to the world in which we exist. This begs the question: how much harder do those who are hard of hearing or deaf need to work in order to reap an experience that the hearing community takes for granted? Through an interactive installation experience, Kim is able to create a space in which everyone has to work just as hard to produce radio noise, all while drawing parallels between a game of skill and a ‘natural’ daily activity.

Custom electronic radio box.
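The installation’s electronics aren’t documented in code, but the mapping the piece implies, tracked position and speed driving the clarity of a radio signal, might look something like this Python sketch. Every constant here is my own assumption, not Kim’s actual system.

```python
# Hypothetical sketch of the mapping in Game of Skill 2.0: a player's
# tracked position and speed along the velcro strip drive the volume
# and clarity of a radio signal. All constants are assumptions.
import random

def radio_output(position_m, speed_m_s):
    """Return (volume, static_ratio) for a tracked player state."""
    # Assumed: a steady pace near 0.5 m/s gives the clearest signal.
    clarity = max(0.0, 1.0 - abs(speed_m_s - 0.5) * 2.0)
    volume = min(1.0, position_m / 10.0)   # louder as the player progresses
    return volume, 1.0 - clarity

def render(volume, static_ratio):
    """Mix a 'voice' sample with noise according to the static ratio."""
    voice, noise = 0.8, random.uniform(-1, 1)
    return volume * ((1 - static_ratio) * voice + static_ratio * noise)

print(render(*radio_output(position_m=3.0, speed_m_s=0.5)))  # clear signal
print(render(*radio_output(position_m=3.0, speed_m_s=1.5)))  # mostly static
```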

While exploring Kim’s work, and specifically her piece Game of Skill 2.0, I began to consider our ability to use computational art, diving into a wide range of mediums and elements, to transform our artwork into tools for creating a more equitable society. Although the coding behind the scenes may be far more complicated than the activity of dragging a stemmed radio machine along a velcro strap makes it appear, it creates an opportunity to offer a range of perspectives and open minds in an approachable environment.

LO 10

EMMY
Composer: David Cope

“EMMY” is a program developed by David Cope, a classical composer with a deep interest in the role which computers and AI can play in music. His experiments in music intelligence allowed him to create EMMY, a program which can be fed examples of work from specific musicians and composers and in turn produce new pieces based on the characteristics of the music it studies. I find it fascinating that EMMY is able to break down and learn aspects of what makes a specific composer’s work unique and recreate it in a way that is not necessarily copying, but familiar and new. Moreover, Cope theorized that “what made a composer properly understandable, properly “affecting”, was in part the fact of mortality.” Interestingly, this led him to unplug EMMY in 2004, after it had produced over 11,000 pieces.

A Bach-Style Chorale written by EMMY
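Cope’s real system recombines analyzed fragments of a composer’s scores; a heavily simplified stand-in for that idea is a Markov chain over note transitions, sketched below in Python. This conveys only the flavor of “learning a style from examples,” not EMMY’s actual method, and the source melodies are invented.

```python
# Heavily simplified stand-in for EMMY (not Cope's actual method):
# learn note-to-note transition tendencies from "source" melodies,
# then generate a new melody in a similar style.
import random
from collections import defaultdict

source_melodies = [                 # stand-in corpus (invented)
    [60, 62, 64, 65, 64, 62, 60],
    [60, 64, 62, 65, 64, 60],
]

transitions = defaultdict(list)
for melody in source_melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)   # record what tends to follow

note, generated = 60, [60]
for _ in range(10):
    note = random.choice(transitions[note] or [60])
    generated.append(note)

print("new melody in the learned style:", generated)
```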

Looking Outwards 10: Computer Music

I am interested in Ryoji Ikeda’s live set of his project “test patterns,” originally developed in 2008. In this work, Ikeda leverages a real-time computer program written by Tomonaga Tokuyama that converts analog audio signals into digital black-and-white patterns resembling a barcode. The music generated by the computer, combined with the unique and contrasting visuals, showcases the interplay between the analog and the digital as well as between the audio and the visual.

Ryoji Ikeda performs Test Patterns as a live set in 2013.

The rapidity and synchronicity of the final work give the code a performative effect, allowing the viewer to become immersed within a generative soundscape. I think, too, that the visuals help to highlight the dissonance of the computer-generated music.
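The exact program is Tokuyama’s, but the underlying conversion can be guessed at: threshold the loudness of successive blocks of an audio signal into black or white stripes. The Python below is my own sketch of that concept, using an invented test tone rather than Ikeda’s audio.

```python
# Sketch of the audio-to-barcode idea (an assumption about the concept,
# not Tokuyama's real-time program): threshold an audio signal into
# black/white stripes, one stripe per block of samples.
import math

RATE = 8000
signal = [math.sin(2 * math.pi * 440 * i / RATE) * math.sin(i / 800)
          for i in range(RATE)]    # one second of a pulsing test tone

BLOCK = 100   # samples per stripe
stripes = ""
for start in range(0, len(signal), BLOCK):
    block = signal[start:start + BLOCK]
    level = sum(abs(s) for s in block) / len(block)   # block loudness
    stripes += "#" if level > 0.3 else " "            # black or white stripe

print(stripes)   # a barcode-like strip that tracks the audio's envelope
```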

LookingOutwards-10

The project that I’m discussing is “Travis Bott” by the creative agency space150. The project uses artificial intelligence and machine learning to create a completely generated song that sounds as if it was created and performed by the American hip-hop artist Travis Scott (hence the name “Travis Bott”). Different teams analyzed the music separately, breaking it down into beats, melodies, lyrics, and more. As the algorithm learned likely sequences and patterns in note and lyrical structure, they could then guide it in reconstructing a completely new song. Not only did they have to reconstruct the instrumentals and lyrics, but Travis Scott’s vocal information was reconstructed as well. In some ways, Travis’s artistic sensibilities are the main focus here, since everything the algorithm produced came from his content and influence, but the packaging of the final song and video had some input from the studio behind it.
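As a toy illustration of “learning likely sequences” for lyrics (nothing like space150’s actual pipeline), a word-level chain can learn which word tends to follow which and then string together new lines. The corpus below is invented for the example.

```python
# Toy word-level sequence model (not space150's actual pipeline):
# learn which word tends to follow which, then reconstruct "new" lyrics.
import random
from collections import defaultdict

corpus = "it's lit it's lit yeah we ride we ride all night yeah".split()

follows = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word].append(nxt)       # record observed word transitions

word, line = corpus[0], [corpus[0]]
for _ in range(8):
    options = follows.get(word)
    if not options:                 # dead end: no learned continuation
        break
    word = random.choice(options)
    line.append(word)

print(" ".join(line))
```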

Travis Bott (2020)
https://www.space150.com/work/v45-travisbott
Feb 2020 by space150