Xu Xu – Looking Outwards – 10

For this week’s Looking Outwards, I decided to focus on a piece of algorithmic sound art called “I am sitting in a machine” by Martin Backes. The work begins with a recording of an artificial human voice reciting a text, which is then run through an MP3 encoder over and over again. With each iteration of the loop, the artifacts of the encoding process reinforce themselves and gradually distort the artificial voice, thus revealing its data format. The piece is an homage to composer Alvin Lucier’s 1969 sound art piece “I am sitting in a room,” reinterpreted computationally. “I am sitting in a room” is built on a similar idea: a recording is played back and re-recorded over and over, and because the room emphasizes certain frequencies, the words slowly become unintelligible, replaced by the pure resonant harmonies and tones of the room itself.

Alvin Lucier’s work explores the physical properties of sound, the resonance of spaces, and the transmission of sound through physical media, whereas Backes’ work is about digitized information and its artifacts, hearing science, and telecommunications. He wanted to show that digitized information produces unexpected phenomena in the same way physical environments do. He explains how he achieved this effect through computational techniques: “I have rewritten the original lyrics from the perspective of a machine. As a next step, I used the artificial human voice of a text-to-speech function and recorded the text via a script. I then wrote another script and ran the recording into a MP3 encoder automatically, over and over again. By the help of this recursive algorithm, I produced 3000 successive iterations of the 128 kbps 44.1 kHz MP3 encoding.”
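
Backes’ exact script is not published in this post, but the re-encoding loop he describes can be sketched in a few lines. The sketch below is only an illustration, assuming Node.js and the lame command-line encoder; the file names and starting recording are invented for the example.

// Minimal sketch of iterative MP3 re-encoding (not Backes’ actual script).
// Assumes Node.js and the "lame" command-line encoder are installed;
// "voice.wav" is a hypothetical starting recording.
const { execSync } = require("child_process");

const ITERATIONS = 3000;       // Backes reports 3000 successive encodings
let currentWav = "voice.wav";

for (let i = 1; i <= ITERATIONS; i++) {
    // encode the current WAV to a 128 kbps, 44.1 kHz MP3 ...
    execSync(`lame -b 128 --resample 44.1 ${currentWav} pass_${i}.mp3`);
    // ... then decode it back to WAV so the next pass re-encodes the artifacts
    execSync(`lame --decode pass_${i}.mp3 pass_${i}.wav`);
    currentWav = `pass_${i}.wav`;
}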

I admire this project because it creates a connection between the computational and physical worlds, revealing that similar phenomena can occur in both. There is also a web version of this sound art online: I am sitting in a machine

Xiaoyu Kang – Looking Outwards – 10

The project that I looked at is named Data Peluda. It is a performance by Jorge Chikiar and Luis Conde at Roseti in Buenos Aires on August 11, 2017. Jorge Chikiar is a composer and sound artist from Argentina. He has worked at many places, such as the Colón Theater, CETC, and Michell Maccarone’s art gallery. He has been experimenting with different ways of presenting music for many years, and many of his projects involve the use of different kinds of computer technologies.

The performance itself used a combination of saxophone and computer technologies. The music the audience heard was the sound of the saxophone, modified electronically by a computer. The processed sound comes across as a combination of classical instrumental music and contemporary electronic music. The most impressive part of the performance is that the music was produced live, meaning the saxophone’s sound was processed at the same time as it was played. I found this to be a great example of how computers are used in live music performances.
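
The post does not describe Chikiar’s actual signal chain, but the general idea of processing a live instrument in real time can be sketched with p5.js and the p5.sound library: a microphone input routed through a delay effect while the instrument plays. The effect choice and settings below are assumptions made for illustration.

// Illustrative p5.js sketch of live input processing, not the performers’ setup:
// the microphone signal (e.g. a saxophone) is routed through a delay effect.
let mic;
let delayFx;

function setup() {
    createCanvas(200, 200);
    mic = new p5.AudioIn();   // live input
    mic.start();
    delayFx = new p5.Delay();
    // process(source, delay time in seconds, feedback 0-1, lowpass cutoff in Hz)
    delayFx.process(mic, 0.25, 0.6, 2300);
}

function draw() {
    background(0);
    // show the input level so the performer can see the signal
    fill(255);
    ellipse(width / 2, height / 2, 20 + mic.getLevel() * 400);
}

function mousePressed() {
    userStartAudio();   // browsers require a user gesture before audio starts
}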

Jamie Park – Project 10

sketch

//Jamie Park (jiminp)
//Project 10
//Section E

//global variables of the picture and sound files
var instrumentPic;
var bellSound;
var pianoSound;
var drumSound;
var guitarSound;

//counters that determine whether each sound file gets played or paused
var pianoPlaying = 1;
var drumPlaying = 1;
var guitarPlaying = 1;

function preload() {
    //preloads the image
    var instrumentURL = "https://i.imgur.com/dX3rHBT.jpg";
    instrumentPic = loadImage(instrumentURL);

    //preloads the sound files
    bellSound = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/Bell.wav");
    pianoSound = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/Piano.wav");
    drumSound = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/Drum.wav");
    guitarSound = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/guitar.mp3");
}

function setup() {
    //setup the canvas and prepare for the sound system
    createCanvas(450, 450);
    background(0);
    useSound();
}

function soundSetup() {
    //set the sound volume
    bellSound.setVolume(0.3);
    pianoSound.setVolume(0.3);
    drumSound.setVolume(0.3);
    guitarSound.setVolume(0.3);
}

function draw() {
    // draw the image
    image(instrumentPic, 0, 0);
}

function mousePressed(){
    //the sound files are triggered when mouse is pressed
    if (mouseX > width / 2 && mouseY > height / 2
    && mouseX < width && mouseY < height){
    //if a specific part of a canvas is clicked, add 1 to variable drumPlaying
        drumPlaying = drumPlaying + 1;
        if (drumPlaying % 2 === 0){
            //if drumPlaying is divisible by 2, play the sound file
            //if it is not divisible by 2, pause the sound file
            drumSound.play();
        } else {
            drumSound.pause();
        }
    }

    if (mouseX > width / 2 && mouseY < height / 2 && mouseX < width){
        pianoPlaying = pianoPlaying + 1;
        if (pianoPlaying % 2 === 0){
            pianoSound.play();
        } else {
            pianoSound.pause();
        }
    }

    if (mouseX < width / 2 && mouseY > height / 2 && mouseY < height){
        /*+1 play/pause does not apply to the bell,
        because the sound is relatively short and does not create a melody*/
        bellSound.play();
    }

    if (mouseX < width / 2 && mouseY < height / 2){
        guitarPlaying = guitarPlaying + 1;
        if (guitarPlaying % 2 === 0){
            guitarSound.play();
        } else {
            guitarSound.pause();
        }
    }
}

I created a sonic sketch: an interactive canvas that plays sounds when the corresponding icons are clicked. When you click on the guitar icon, the sketch plays a guitar sound, and the same idea applies to the piano, drum, and bell. The sketch can play several sounds at the same time. I am happy that I figured out a logical way to make a sound stop when its icon is clicked again.

Monica Chang – Looking Outwards – 10

MI-MU Gloves

by Imogen Heap

Anatomy of a Mi-Mu Glove (image captions): communication over WiFi; a breathable palm designed for the stage; batteries that can be changed between soundcheck and the show; a vibration motor for haptic feedback; and flex sensors that measure the bend of the fingers.

I first discovered the Mi-Mu Gloves through a participating music artist: Ariana Grande. Having been a fan of her music for a long time, I became aware of this new, technological way of expressing and performing music through the performer’s movement during her 2015 tour.

These gloves were created by an award-winning musician and technology innovator, Imogen Heap. With these gloves, a wide variety of musicians have explored different ways of performing. For instance, vocalists, classical pianists, pop artists, beatboxers, and guitarists have worked with earlier versions of the gloves since they were first released in 2010.

Once Heap began collaborating with a range of musical artists, the MiMu design team expanded to include engineers, designers, and artists specializing in software, textiles, electronics, sensors, UX/UI, and music. With this team and these gloves, she continues to explore the relationship between music software and hardware as a musical tool.

Steven Fei – Looking Outwards 10 – Sound Art


Bridging a connection between music and digital art, computational tools have created a new genre: sound art.

Inspired by the heritage of the Polish Radio Experimental Studio, the design studio panGenerator created a project called Apparatum: a digital interface, written in JavaScript, that drives a purely analogue sound-generating machine. Magnetic tape and optical components are controlled via a graphic score composed on the digital interface, so the user can flexibly shape sounds at various levels and, both graphically and musically, compose a piece of electronic music.

Meanwhile, the physical equipment is designed in a modular fashion inside two steel frames. The two tape loopers, the optical generators producing sinusoidal tones, and the noise generators are all presented visually, so the user has a direct understanding of how, and which aspects of, the sound they are manipulating. The most inspiring feature of the project is its human interaction: the printout of the user’s graphic score, with a link to an MP3 file of the recording, gives the user a much clearer and easier understanding of the sound art and of how they can control and play with the sound levels, amplitudes, frequencies, noise, and pitches. The artistic sensibility is manifested both in the acoustic flexibility and in the visual appearance, as well as in the recordings of the variations of all the variables the users are playing with. The project inspires me to give users more variables to control and to design a clear, elegant-looking interface that arouses the interest of the audience.
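
Apparatum’s actual output is analogue (tape loops and optical generators), but the idea of a graphic score stepping through tone and noise settings can be sketched digitally with p5.js and p5.sound. The score values, timing, and layout below are assumptions invented for the illustration.

// Illustrative sketch only: a tiny "graphic score" drives a sine tone and a
// noise generator in p5.js (the real Apparatum drives analogue hardware).
let osc;
let noiseGen;
// hypothetical score: each step holds a tone frequency and a noise level
let score = [
    { freq: 220, noise: 0.00 },
    { freq: 330, noise: 0.10 },
    { freq: 440, noise: 0.30 },
    { freq: 262, noise: 0.05 },
];
let step = 0;

function setup() {
    createCanvas(400, 100);
    frameRate(2);                      // advance the score twice per second
    osc = new p5.Oscillator('sine');   // sinusoidal tone generator
    noiseGen = new p5.Noise('white');  // noise generator
    osc.start();
    noiseGen.start();
    osc.amp(0);
    noiseGen.amp(0);
}

function draw() {
    background(230);
    let s = score[step % score.length];
    osc.freq(s.freq);
    osc.amp(0.3, 0.1);                 // ramp amplitude over 0.1 s
    noiseGen.amp(s.noise, 0.1);
    fill(0);
    rect((step % score.length) * 100, 0, 100, 100);  // highlight current step
    step = step + 1;
}

function mousePressed() {
    userStartAudio();                  // audio needs a user gesture to start
}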

The elegant physical appearance of the sound art equipment

Click here to visit the report about the project

Click here to view the Apparatum Project

Angela Lee – Looking Outwards – 10

A video introducing the installation “Apparatum” by panGenerator.
A user situated within the installation space.

“Apparatum” is a sound installation whose sound is inspired by Bogusław Schaeffer and whose aesthetics are inspired by Oskar Hansen. The installation as a whole also draws inspiration from the heritage of the Polish Radio Experimental Studio. The project consists of analog sound generators controlled through a digitized sheet-music touch pad. I admire the speculative nature of the piece. Because it is not commercial music that has to appeal to a wide audience, it feels much more thoughtful and edgy, and I am drawn to the process of creating it. I think the artistic sensibilities are manifested in the visual design: the aesthetics complement the sound art without overpowering it, with minimal grayscale colors, limited use of texture, and consistent forms. The textures of the sound are also quite interesting, challenging you to think of new ways to weave sounds and tones together.

Ankitha Vasudev – Looking Outwards – 10

Orchestrion is a computerized band that was programmed and created by Eric Singer in 2010. Singer is a Brooklyn-based musician and technologist who founded SingerBots and LEMUR, a group of artists and technologists who create robotic musical instruments. An orchestrion consists of a set of automated musical instruments that are mechanically equipped to perform a composition.

Lido Orchestrion, 2010

I find this project interesting because the instruments in an orchestrion can play anything that is composed for them. A musician composes a song in standard production software, but instead of playing the notes back as audio, the program triggers the physical playing actions on the orchestrion. The video below shows the Lido Orchestrion, which was built for a nightclub in Paris and consists of 45 automated instruments.
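
Singer’s own software is not shown in this post, but the core idea, sending composed notes to hardware instead of rendering them as audio, can be sketched in the browser with the Web MIDI API. The note list and timing below are made-up placeholders, not Singer’s system.

// Conceptual sketch (not Singer’s software): rather than synthesizing sound,
// send the composed notes out over MIDI so connected hardware performs them.
// Uses the Web MIDI API, available in Chromium-based browsers.
navigator.requestMIDIAccess().then(function(midi) {
    const output = Array.from(midi.outputs.values())[0];  // first MIDI device
    if (!output) return;                                   // nothing connected

    const melody = [60, 64, 67, 72];   // hypothetical composed notes (MIDI numbers)
    melody.forEach(function(note, i) {
        const t = performance.now() + i * 500;   // one note every half second
        output.send([0x90, note, 100], t);       // note on, velocity 100
        output.send([0x80, note, 0], t + 450);   // note off before the next note
    });
});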

In 2009, Singer, who attended Carnegie Mellon as an undergraduate, founded SingerBots, a company fully dedicated to building robotic musical instruments. Singer believes that musicality and infallibility are the two priorities for an orchestrion, so that it sounds good and does not make mistakes. I agree with his belief that robotic infallibility can create a lively performance, in contrast to what others believe.

A video describing Singer’s Orchestrions

Kimberlyn Cho – Looking Outwards – 10

“Rhapsody in Grey” by Brian Foo (2015)

Brian Foo draws on his fascination with brain activity in “Rhapsody in Grey” by translating brainwave data into music. He specifically studied the brainwaves of a female pediatric patient with epilepsy so that listeners can empathize with, or briefly experience, what may be going on during a seizure. I was intrigued by the distinctiveness of this project, as well as by his creative approach to a scientific topic such as brainwaves. I admire Foo’s use of his fluency and skill in programming to portray a personal interest (brain activity) in his own creative way (music). I found this project inspiring for what it suggests about the endless possibilities of programming and computational art.

Foo uses different variables from EEG brainwave data to computationally generate the intensity and fluidity of the rhapsody. He used Python to extract an excerpt from the EEG data, from which he calculated the average amplitude, frequency, and synchrony. He then assigned instrumental and vocal samples in correlation with the calculations. The sounds were synthesized into a rhapsody using ChucK, a programming language for music. Lastly, he used Processing, a programming language geared toward visual art, to generate the visual waves that play alongside the music in the video above.

Foo’s artistic sensibilities come through in the final work in the sound samples he chose, as well as in the connections he made to the EEG data. For example, he raised the pitch of the string instruments for higher frequencies, while adding louder sounds for higher amplitudes. The connections he makes between his calculated algorithms and the sound samples are representative of his interest in the human subject, as well as of his artistic priorities and decisions.
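
Foo’s real pipeline used Python for the analysis and ChucK for the synthesis, but the mapping idea described above (higher EEG frequency raises the pitch, higher amplitude raises the volume) can be shown as a toy p5.js sketch. The EEG values below are invented placeholders, not Foo’s data.

// Toy illustration of the mapping only, not Foo’s actual Python/ChucK pipeline.
// Hypothetical EEG windows, each with an average amplitude and frequency.
let eegWindows = [
    { amplitude: 20, frequency: 4 },
    { amplitude: 55, frequency: 9 },
    { amplitude: 90, frequency: 14 },
];
let osc;
let i = 0;

function setup() {
    createCanvas(200, 200);
    frameRate(1);                       // step through one EEG window per second
    osc = new p5.Oscillator('sine');
    osc.start();
    osc.amp(0);
}

function draw() {
    background(40);
    let w = eegWindows[i % eegWindows.length];
    // higher EEG frequency -> higher pitch, higher amplitude -> louder
    osc.freq(map(w.frequency, 0, 20, 200, 800));
    osc.amp(map(w.amplitude, 0, 100, 0.05, 0.5), 0.1);
    i = i + 1;
}

function mousePressed() {
    userStartAudio();                   // audio needs a user gesture to start
}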

“Rhapsody in Grey” by Brian Foo (2015)

Taisei Manheim – Looking Outwards – 10


The project I chose is called Amper Score, by Amper, a company consisting of a team of engineers, musicians, and composers who have worked on many movies and television shows. The idea for the project arose from the frustration many people in film and other industries feel with the process of obtaining stock or precomposed music: finding the music, clearing the legal and financial hurdles, and the lack of customization. Amper’s solution is an AI composer that writes and composes unique, royalty-free music. There are two workflows: one that allows you to work quickly and another that allows deep customization of the music. The simple workflow asks the user to input the style of music, the mood, and the length of the song, and the AI generates a track in seconds, while the more advanced workflow allows changes to tempo, instruments, and more. This lets businesses, films, and others create originally composed music that is unique to them. Amper’s AI is built from the ground up and uses a massive dataset created by an internal team of composer-developers. Rather than employing an approach that limits contextual awareness and Score’s ability to collaborate with its users, Amper’s data team describes every facet of music, including genre, mood, and instrumentation. What I admire is that anybody can use it, and the luxury of original music for any business or film suddenly becomes much more accessible to those who would not have had the money or resources before this tool.
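
The post does not show Amper’s actual interface or API, but the simple workflow it describes (style, mood, and length in; a finished track out) could be imagined as a request like the one below. The endpoint and field names are entirely hypothetical, invented only to illustrate the shape of the interaction.

// Purely hypothetical illustration of the simple workflow described above;
// the endpoint and field names are invented, not Amper’s real API.
async function requestTrack() {
    const response = await fetch("https://api.example.com/score/compose", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            style: "cinematic",    // style of music
            mood: "uplifting",     // mood
            lengthSeconds: 90,     // length of the song
        }),
    });
    return response.json();       // would contain the generated, royalty-free track
}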

Ellan Suder – Looking Outwards – 10

Generative Music by batuhan

Software used (as described by the artist): Processing connected to SuperCollider via the osc_p5 library and the p5_sc library, along with the Atari2600 UGen by Fredrik Olofsson; the rest is custom software.

“Generative Music – Cellular automata and blip blops” (2008) by batuhan is a “somewhat intelligent cellular automata system” developed with Atari 2600-style sonification. The music is generated by a cellular life system: the artist does not touch the system once it has started, and it eventually dies by itself, ending the music as well. What I really liked about the project was the idea of creating a random, computer-generated piece of music tied to the lifespan of a system. The song is the representation of a world from its beginning to its end.
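
batuhan’s piece was built in Processing and SuperCollider, but the general idea, a cell population that drives tones and ends the piece when it dies out, can be sketched with p5.js and p5.sound. The grid size, update rule, and pitches below are toy assumptions, not the artist’s actual system.

// Minimal sketch of the general idea (not batuhan’s system): a one-dimensional
// cell population drives tones, and the piece ends when all cells have died.
let cells = [];
let osc;

function setup() {
    createCanvas(320, 40);
    frameRate(4);
    // random initial generation of 16 cells
    for (let i = 0; i < 16; i++) cells.push(random() < 0.5 ? 1 : 0);
    osc = new p5.Oscillator('square');   // blip-bloppy, arcade-like timbre
    osc.start();
    osc.amp(0);
}

function draw() {
    background(0);
    // draw and sonify the current generation
    let live = [];
    for (let i = 0; i < cells.length; i++) {
        fill(cells[i] ? 255 : 40);
        rect(i * 20, 0, 20, 40);
        if (cells[i]) live.push(i);
    }
    if (live.length === 0) {
        osc.amp(0, 0.2);   // the system has died: the music ends with it
        noLoop();
        return;
    }
    // pitch follows a randomly chosen live cell
    osc.freq(200 + random(live) * 50);
    osc.amp(0.2, 0.05);
    // toy update rule: cells need live neighbours to survive or be born,
    // so the population eventually dies out on its own
    cells = cells.map(function(c, i) {
        let neighbours = (cells[i - 1] || 0) + (cells[i + 1] || 0);
        if (c === 1) return neighbours >= 1 && random() < 0.9 ? 1 : 0;
        return neighbours === 2 && random() < 0.3 ? 1 : 0;
    });
}

function mousePressed() {
    userStartAudio();   // audio needs a user gesture to start
}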

The visuals and the audio (the harsh “blip blops”) seem to be inspired by classic arcade-style video games.