adev_Looking Outwards 04_Sound Art

Intersection, 1993

Don Ritter

This art installation has a lot of interesting things going on in it. It is essentially a group of eight speakers, spread out to create four “lanes” in a completely dark room. The speakers play the sound of cars rushing past, and when a visitor walks between the speakers into one of these lanes, the cars react and come to a screeching halt. If the visitor stays stationary for longer than eight seconds, more cars pile up, and they zip off once the visitor leaves.
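Ritter's actual control software isn't public, but the interaction rules described above are simple enough to sketch. Below is a minimal Processing sketch of that logic: holding the mouse down stands in for the installation's motion sensing, and on-screen text stands in for the car audio. The four lanes and the eight-second timer come from the description; everything else is assumption.

    // Minimal sketch of Intersection's interaction logic. Assumptions:
    // a pressed mouse stands in for a visitor detected in the dark, and
    // text output stands in for the screeching-car samples.
    int lane = -1;       // lane the visitor occupies, -1 = empty room
    int enterTime = 0;   // when the visitor entered the current lane
    int pileup = 0;      // number of cars halted in front of the visitor

    void setup() {
      size(400, 400);
    }

    void draw() {
      background(0);     // the room is completely dark
      int current = mousePressed ? mouseY / (height / 4) : -1;
      if (current != lane) {
        lane = current;                  // visitor entered or left a lane
        enterTime = millis();
        pileup = (lane == -1) ? 0 : 1;   // the pile-up zips off when the lane empties
      } else if (lane != -1 && millis() - enterTime > 8000) {
        pileup++;                        // stationary past eight seconds: another car halts
        enterTime = millis();
      }
      // the real installation would trigger screeching-halt samples on the
      // two speakers bounding the occupied lane here
      fill(255);
      text("lane: " + lane + "   halted cars: " + pileup, 10, 20);
    }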

I think this is extremely thought-provoking. It uses sound to create the feeling of rush and fear of a car collision, an experience we imagine to be more about our physiological, visual senses, and it gets that feeling across affectively. The dark room where the interaction takes place feeds into the fear while also making the experience much more visceral. I think the process makes you, and the people around you, so much more aware of you, in spite of the darkness. The screeching of a car is such a convergence of lives and people: in the moment a car halts, we recognise each other and a very human feature of our inner nature comes out. It's this strange act of survival in one's everyday life, an average red flag.

jamieh-Looking-Outwards-04

Volume, designed by Softlab, consists of a grid of 100 mirror panels that respond to and redirect light and sound, and that interact with human movement through depth cameras. The mirror panels rotate to face the nearest person. Based on the volume of the ambient sound in the space, LEDs on the sides of the panels light up and the panels move vertically. The interface of the project was coded and built in Processing. Cameras placed overhead track people’s positions, and the data is then managed with OSC and Arduino microcontrollers.

What I like about this project is how they took sound, which is intangible, and gave it spatial qualities through the panels’ rotation toward a person, as well as quantitative information about the sound’s volume through the LED lights. Light and sound, which don’t seem to move, now visibly circulate through space. I like that it reacts to human movement, but their description of the interface, which uses a weighted average to find the person closest to the cube of mirrors, makes me wonder what happens when several people stand close to the installation.
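Since the post names Processing, depth cameras, and a person tracker, here is a rough Processing sketch of one panel's behavior under those assumptions: a hard-coded array of tracked positions stands in for the camera data, and the Sound library's amplitude analyzer stands in for the ambient-volume measurement. It is an analogy, not Softlab's code (and where their interface reportedly uses a weighted average, this sketch simply takes the nearest person).

    // One mirror panel: rotate toward the nearest tracked person,
    // brighten its LEDs with ambient volume.
    import processing.sound.*;

    AudioIn mic;
    Amplitude amp;
    PVector panel = new PVector(200, 200);
    // assumed stand-in for positions from the overhead depth cameras
    PVector[] people = { new PVector(50, 300), new PVector(350, 80) };

    void setup() {
      size(400, 400);
      mic = new AudioIn(this, 0);
      mic.start();
      amp = new Amplitude(this);
      amp.input(mic);
    }

    void draw() {
      background(0);
      // face the nearest tracked person
      PVector nearest = people[0];
      for (PVector p : people) {
        if (PVector.dist(panel, p) < PVector.dist(panel, nearest)) nearest = p;
      }
      float angle = atan2(nearest.y - panel.y, nearest.x - panel.x);
      // LED brightness follows the ambient volume in the space
      float led = constrain(map(amp.analyze(), 0, 0.5, 0, 255), 0, 255);
      pushMatrix();
      translate(panel.x, panel.y);
      rotate(angle);
      stroke(led);
      line(-30, 0, 30, 0);   // the panel, drawn edge-on
      popMatrix();
    }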

LookingOutwards-04-sjahania

Fraction is the stage name of Eric Raynaud, a French artist and composer who focuses on combining audio and visual art to immerse audiences in a different kind of performance. One of his projects, called Entropia, is a geodesic sphere of lights that react to sound, accompanied by a pixel-mapped display of lights in an IMAX dome. Both displays change based on the sounds that Fraction plays from within the sphere; real-time software connects the sound to the displays and projects them to the audience.


This is a video of the kinds of performances Fraction gives.

I found this particularly interesting because it uses the reactive lights and the pixel-mapped display to give the audience an overwhelming sensation. I also did a project about geodesic domes in high school, which is probably why this one stood out the most. I cannot fathom how one would code displays from sounds like the ones in the video. I wonder how many different displays he can make with the sounds he coded.
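For what it's worth, one common way to "make displays from sounds" in code is a real-time FFT, where the spectrum of the incoming audio drives the geometry of the visuals. The Processing sketch below maps each frequency band to one light on a ring, a loose, assumed analogy to Entropia's sphere rather than Fraction's actual software.

    // Audio-reactive ring of lights: each FFT band pushes one
    // "light" outward in proportion to its energy.
    import processing.sound.*;

    AudioIn in;
    FFT fft;
    int bands = 64;
    float[] spectrum = new float[bands];

    void setup() {
      size(400, 400);
      in = new AudioIn(this, 0);
      in.start();
      fft = new FFT(this, bands);
      fft.input(in);
    }

    void draw() {
      background(0);
      fft.analyze(spectrum);
      translate(width / 2, height / 2);
      noStroke();
      for (int i = 0; i < bands; i++) {
        float a = TWO_PI * i / bands;
        float r = 80 + spectrum[i] * 800;        // louder band = light pushed outward
        fill(255, 255 - i * 3);                  // fade the higher bands slightly
        ellipse(cos(a) * r, sin(a) * r, 6, 6);   // one light per band
      }
    }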

website: http://www.fractionmusic.com/project/entropia-av/

mjeong1-Looking Outwards-04-SectionA

Nature and Humans Harmonize in a Symphony of Architectural Sound

Wunderbugs by OFL Architecture in Rome, Italy

Wunderbugs is an installation that explores the connection between humans and nature. The pavilion has six plastic spheres inside a circular wall, and each sphere contains a small terrarium and insects. The pavilion is equipped with a series of Arduino sensors that detect the motion of the insects as well as weather data, and it is programmed to record changes and produce a soundtrack from the data. I think the way OFL Architecture approaches nature is interesting because they focused on the acoustic quality of nature and on how the sound is interactive not only with humans but also with the insects, the nature itself. I admire how closely they looked into the motion of each insect and how they produce a tactile experience for humans not only through the visual installation but also through sound. It is a valuable lesson that nature is not just a beautiful picture frame for people to look at; it has life and constantly produces small changes, even if those changes are too subtle to notice.

I think it could be more interesting to see visual changes in the pavilion based on those data. Perhaps the facade of the pavilion could be interactive according to the insect-motion data. Alternatively, the visual quality of the pavilion could be related to the sound that is being generated, as in the sketch below.
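As a thought experiment, here is a minimal Processing sketch of that data-to-output idea: a slowly changing reading, simulated with noise() as a stand-in for the pavilion's Arduino insect-motion or weather data, modulates both an oscillator and a simple visual. The mapping is entirely assumed, not OFL Architecture's system.

    // Sonify (and visualize) a slowly drifting sensor reading.
    import processing.sound.*;

    SinOsc osc;

    void setup() {
      size(400, 200);
      osc = new SinOsc(this);
      osc.amp(0.3);
      osc.play();
    }

    void draw() {
      background(0);
      // stand-in for an insect-motion or weather reading from the Arduino
      float reading = noise(millis() * 0.0005);
      // subtle changes in the reading become audible pitch changes
      osc.freq(map(reading, 0, 1, 150, 600));
      // ...and visible ones: the facade idea, reduced to a moving line
      stroke(255);
      line(0, height - reading * height, width, height - reading * height);
    }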

Link to ArchDaily

Sheenu-Looking Outwards-04


Seaquence is a game on the App Store that lets you spawn creatures that produce music by themselves. Each creature’s shape determines how its sound is made: adding notes, making musical sequences, and even changing the waveform of the sound alters how the creature looks. A variety of different creatures can form an ensemble of music with just a tap of a button.

The game was developed by a company called Okaynokay and evolved from a Flash game made years earlier. According to the creators, it was built on a custom-programmed physics engine that lets the creatures move on their own based on their notes, giving the player a game-like experience.
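Okaynokay's engine is proprietary, but the core idea the creators describe, a looping note pattern that is also the creature's body, can be sketched in a few lines of Processing. The pattern, tempo, and shape mapping below are all invented for illustration.

    // A looping note pattern that both plays and draws the "creature":
    // each step is one note (0 = rest) and one spine segment.
    import processing.sound.*;

    SinOsc osc;
    int[] pattern = { 60, 64, 67, 0, 64, 0, 72, 67 };  // MIDI notes, invented
    int step = 0, lastTick = 0;

    void setup() {
      size(400, 400);
      osc = new SinOsc(this);
      osc.amp(0);
      osc.play();
    }

    void draw() {
      background(0);
      if (millis() - lastTick > 250) {   // advance the sequencer every 250 ms
        lastTick = millis();
        step = (step + 1) % pattern.length;
        if (pattern[step] > 0) {
          osc.freq(midiToFreq(pattern[step]));
          osc.amp(0.3);
        } else {
          osc.amp(0);                    // rest
        }
      }
      // the same pattern shapes the creature: higher note = longer limb
      translate(width / 2, height / 2);
      noFill();
      stroke(255);
      beginShape();
      for (int i = 0; i < pattern.length; i++) {
        float a = TWO_PI * i / pattern.length;
        float r = 40 + pattern[i];
        vertex(cos(a) * r, sin(a) * r);
      }
      endShape(CLOSE);
    }

    float midiToFreq(int note) {
      return 440 * pow(2, (note - 69) / 12.0);
    }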

The game has a dedicated community of both casual players and professional musicians. So far, over 300,000 songs have been made on Seaquence.

I really admire that people can now make music with an easy-to-use, fun game. Making music is a long and hard process, but with this game ordinary people can create their own compositions and get inspired by other creators out there. Making a long, complicated process easier and more accessible to the public truly makes this game special in my opinion.

agusman-LookingOutwards-04

The Classifier on Creative Applications

The Classifier

Creators: Benedict Hubener, Stephanie Lee, Kelvyn Marte, Andreas Refsgaard and Gene Kogan

The Classifier is an AI paired with a table interface that selects music based on the beverages people are consuming around it at the time. The system is composed of a microphone that takes in the ambient noise of drinking and an AI that compares the characteristic sounds to pre-trained models. The current limitation of the models is that they are constrained to three categories: hot beverages, wine, or beer. The approach of characterizing beverages by their distinctive noises is really unique, yet I have to wonder what other sensors or methods they could’ve used to collect more… telling… kinds of information, such as heat, color, or carbonation, which could possibly expand the beverage categories beyond the current three.
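The artists' trained models aren't public, but the pipeline the write-up describes (microphone, characteristic sound features, comparison against per-class references) can be caricatured with a single spectral-centroid feature and a nearest-centroid comparison. In the Processing sketch below, only the three class labels come from the post; the feature choice and its reference values are invented.

    // Toy beverage classifier: one spectral feature, nearest centroid.
    import processing.sound.*;

    AudioIn in;
    FFT fft;
    int bands = 256;
    float[] spectrum = new float[bands];
    String[] labels = { "hot beverage", "wine", "beer" };
    float[] centroids = { 40, 80, 130 };   // invented per-class reference values

    void setup() {
      size(400, 200);
      in = new AudioIn(this, 0);
      in.start();
      fft = new FFT(this, bands);
      fft.input(in);
    }

    void draw() {
      background(0);
      fft.analyze(spectrum);
      // spectral centroid: amplitude-weighted average band index
      float num = 0, den = 0;
      for (int i = 0; i < bands; i++) {
        num += i * spectrum[i];
        den += spectrum[i];
      }
      float feature = (den > 0) ? num / den : 0;
      // pick the class whose reference value is closest to the feature
      int best = 0;
      for (int c = 1; c < labels.length; c++) {
        if (abs(feature - centroids[c]) < abs(feature - centroids[best])) best = c;
      }
      fill(255);
      text("hearing: " + labels[best], 10, 20);
    }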

I’ve also always been fascinated by generating contextual music. In my time at CMU, I’ve come across several student projects that sought to use visual and other sensory input as the lens through which music is created. A friend of mine tried to recreate the soundtrack of a classic episode of “Tom & Jerry” purely from a series of frames. It seems like a fascinating and incredibly enigmatic field of study that I would love to pursue.

jiaxinw-SectionB-LookingOutwards 04

Cloud Piano

Cloud Piano was an installation created by David Bowen in 2014. In this installation, a piano is played based on the shapes and movements of the clouds. A camera records the sky in real time, and software articulates a device that presses the keys of the piano according to the moving images from the video. I think the custom software analyzed the real-time images and converted them into black-and-white pictures, and the proportion of black to white decided the strength and position of the key presses.
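That guessed-at analysis step is easy to prototype. The Processing sketch below thresholds one camera frame to black and white and measures the white proportion in each vertical strip, which could stand in for the strike strength of the corresponding piano key. The file name and the 88-key mapping are assumptions for illustration, not Bowen's implementation.

    // Threshold a cloud frame; white coverage per strip -> key strength.
    PImage frame;
    int keys = 88;

    void setup() {
      size(880, 200);
      frame = loadImage("clouds.jpg");   // assumed: one camera frame on disk
      frame.resize(width, height);
      frame.filter(THRESHOLD, 0.5);      // clouds become black and white
      frame.loadPixels();
    }

    void draw() {
      image(frame, 0, 0);
      int strip = width / keys;
      for (int k = 0; k < keys; k++) {
        float white = 0;
        for (int x = k * strip; x < (k + 1) * strip; x++) {
          for (int y = 0; y < height; y++) {
            if (brightness(frame.pixels[y * width + x]) > 128) white++;
          }
        }
        // 0..1 strike strength for key k; drawn here instead of played
        float strength = white / (strip * height);
        stroke(255, 0, 0);
        line(k * strip, height - strength * height,
             (k + 1) * strip, height - strength * height);
      }
      noLoop();   // one frame is enough for the sketch
    }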

I like how the artist combined the unpredictable movements of nature with an artificial instrument. It creates an unusual yet romantic feeling in the music. This is a new way of turning nature into artistic sound. By using computation to analyze real-time data, the artist turned his feeling for nature into an interesting art installation.

For more information: http://www.dwbowen.com/cloud-piano/

 

keuchuka-lookingoutwards-04

Nightingale and Canary from Andy Thomas on Vimeo.

Nightingale and Canary

Andy Thomas, an Australian artist, creates “audio life forms”: 3D animations that respond to audio inputs. For this project, he used archival bird recordings to create digital sound sculptures that animate in reaction to the songs of each bird. I found the crossing of the boundary between constructing visual thoughts and sound thoughts interesting and attractive in his work. There is a sensitive and lifelike touch in translating bird sounds into bird-like forms. Thomas is interested in literally casting technology’s impact on nature into visual form, and has recently been diving into audio software. Through this new path of artistic investigation, Thomas once again finds a way to create works in which nature and technology combine.

myoungsh-lookingoutwards-04

As soon as I read that this Looking Outwards was about sound, a project I was tangentially involved in immediately came to mind. It is actually quite similar to our professor’s work with music: the work of Perry Cook, a professor at Princeton University, who had a totally electronic orchestra.

https://www.cs.princeton.edu/~prc/

He worked with my dad, a stone sculptor, on an installation in Princeton. They made a lithophone, historically a large upright stone xylophone. The installation, Quark Park, consisted of many pieces done in collaboration between local artists and Princeton University scientists.

http://www.princetonoccasion.org/quarkpark/

They created a system that mixed the sounds of a stone sculpture being struck with random sounds recorded during the process of making the sculpture, to create a large electronic-and-stone instrument. When I was a little kid working with my dad on this project, I had no real idea what electronic music meant, but as soon as we were shown the trumpet accompanied by a full electronic brass section in class, I was reminded of this project.

hschung-LookingOutwards-04

I read an article about the increasing capability of machines to make music, especially in mimicking iconic music made by people. It was interesting to see this topic make people react in a number of ways. Some expressed disgust that anyone would consider computer-generated music “real” music or art, while others were more optimistic and speculated about the commercial applications of computer-generated music. David Cope, a composer and computer scientist, began his Experiments in Musical Intelligence project in 1981 (and later wrote a book by the same name), so this concept has been growing for some time.

I don’t know much about the algorithms involved in creating computer-generated melodies, but prior to reading this article I did know about a different but related concept. In 2004, the Japanese company Yamaha released a program called Vocaloid, a singing synthesizer: samples are taken from voice actors’ or singers’ voices, and program users manipulate those samples to create songs, inputting the melody and lyrics themselves. I think both of these projects have to confront the question of what is considered art, and where the source of originality and creation lies. It’s really interesting to me how they both allow people to create new melodies together with machines, melodies that couldn’t have been created by either alone.

It could indeed be argued that machine-produced music determined by algorithms “doesn’t count” as art, but I don’t think that’s necessarily true: if humans created the algorithms necessary to create the melodies, even if those melodies mimic existing styles, they still went through the creative process in a unique medium.

Link to the article I read-
https://www.theatlantic.com/entertainment/archive/2014/08/computers-that-compose/374916/