Xu Xu – Looking Outwards – 04

For this week’s Looking Outwards, I discovered an audiovisual installation called “Multiverse,” created by fuseworks. The installation explores the evolution of possible universes through the generation of haunting visual graphics and sound, and strives to give form to the theory of the multiverse: the idea that an infinite number of universes co-exist in parallel outside of space-time.

The video presents the installation almost like a digital painting, producing beautiful visuals accompanied by audio. The installation tries to imagine the birth and death of infinite parallel universes, and this “narrative” is based on the scientific theory of American theoretical physicist Lee Smolin. From the collapse of black holes come their descendants, with the parameters and physical laws constantly tweaked and modified. The installation tries to create intimacy between the art and the viewer, while setting up two hierarchies: an impermanent, vulnerable human figure versus a vast, impenetrable universe.

The artwork is generated entirely by software developed in openFrameworks, which interacts with a generative soundtrack system built in Ableton Live and Max/MSP. In the simulation, the physical laws are constantly being adjusted, which gives rise to a “new universe.” After thirty minutes, the previous sequences “evolve” and produce endlessly varied new outcomes. The creator of this installation explains: “Particularly, the particles react with each other and with the surrounding space, changing the information perceived by modifying a vector field that stores the values within a voxel space. The strategy involved the massive use of shader programs that maximize the hardware performance and optimize the graphics pipeline on the GPU.”
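The quoted description suggests a particle system that both samples and rewrites a vector field stored in a voxel grid. Here is a minimal CPU-side sketch of that idea in TypeScript; the names (VoxelField, step) are invented, and the real installation does this work in GPU shaders.

```typescript
// Sketch of particles interacting through a voxel vector field.
// Names and structure are hypothetical; the real work runs as shaders.

type Vec3 = { x: number; y: number; z: number };

class VoxelField {
  private data: Float32Array; // three components per voxel
  constructor(readonly size: number) {
    this.data = new Float32Array(size * size * size * 3);
  }
  private clamp(v: number): number {
    return Math.max(0, Math.min(this.size - 1, Math.floor(v)));
  }
  private index(x: number, y: number, z: number): number {
    return ((this.clamp(z) * this.size + this.clamp(y)) * this.size + this.clamp(x)) * 3;
  }
  read(p: Vec3): Vec3 {
    const i = this.index(p.x, p.y, p.z);
    return { x: this.data[i], y: this.data[i + 1], z: this.data[i + 2] };
  }
  // Particles deposit their velocity into the cell they occupy,
  // "changing the information perceived" by later particles.
  write(p: Vec3, v: Vec3, gain = 0.1): void {
    const i = this.index(p.x, p.y, p.z);
    this.data[i] += v.x * gain;
    this.data[i + 1] += v.y * gain;
    this.data[i + 2] += v.z * gain;
  }
}

interface Particle { pos: Vec3; vel: Vec3; }

function step(field: VoxelField, particles: Particle[], dt: number): void {
  for (const p of particles) {
    const f = field.read(p.pos); // sample the surrounding space
    p.vel.x += f.x * dt; p.vel.y += f.y * dt; p.vel.z += f.z * dt;
    p.pos.x += p.vel.x * dt; p.pos.y += p.vel.y * dt; p.pos.z += p.vel.z * dt;
    field.write(p.pos, p.vel); // feed the particle's motion back into the field
  }
}
```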

I really admire the creativity of this installation, and what amazes me most is that these beautiful visuals are generated purely by software. The creators provide the framework and allow the program to develop freely from there, presenting its own creativity. I wish I had the chance to see the installation in real life.

Timothy Liu — Looking Outwards — 04

This is Weather Thingy, a “Real-Time Climate Sound Controller.”

For this week’s Looking Outwards on Sound Art, I examined a work called “Weather Thingy” by Adrien Kaeser at ECAL’s Media and Interaction Design Unit. At first glance, the piece seems whimsical, fun, and even a bit weird. But upon further inspection, Kaeser’s work has a striking level of complexity that allows it to interact with its surrounding environment and produce sound and art.

Another photo of Weather Thingy in a different setting.

At its core, Weather Thingy is designed to convert weather signals into musical sound. Three climate sensors detect rainfall, wind direction, and wind speed and translate them into parameters through an interface equipped with a brightness sensor. Using Arduinos and other computational components, Kaeser fitted Weather Thingy with the ability to react to climate changes in real time with different types of sounds. This is one of the things that really stood out to me; most sound art reacts to sound and produces a visual effect, but Weather Thingy reacts to the climate and produces sound. In other words, the piece is a real-time, reactive work of art.
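Kaeser’s firmware isn’t published, but the core idea of turning sensor readings into sound parameters can be sketched roughly like this; the sensor ranges, parameter names, and mapRange helper are all assumptions.

```typescript
// Hypothetical sketch of the weather-to-sound mapping described above:
// each sensor reading is scaled into a MIDI-style controller value (0-127).

interface WeatherReading {
  rainMmPerHour: number;    // rain gauge
  windSpeedKmh: number;     // anemometer
  windDirectionDeg: number; // wind vane
}

// Linearly rescale a value from one range into another, clamped.
function mapRange(v: number, inMin: number, inMax: number, outMin: number, outMax: number): number {
  const t = Math.min(1, Math.max(0, (v - inMin) / (inMax - inMin)));
  return outMin + t * (outMax - outMin);
}

// Convert a reading into three controller values a synth could listen to.
function weatherToControllers(w: WeatherReading): { cutoff: number; tempo: number; pan: number } {
  return {
    cutoff: Math.round(mapRange(w.rainMmPerHour, 0, 20, 0, 127)),  // more rain, brighter filter
    tempo: Math.round(mapRange(w.windSpeedKmh, 0, 60, 60, 180)),   // wind speed drives tempo (BPM)
    pan: Math.round(mapRange(w.windDirectionDeg, 0, 360, 0, 127)), // wind direction pans the sound
  };
}

console.log(weatherToControllers({ rainMmPerHour: 4, windSpeedKmh: 25, windDirectionDeg: 90 }));
```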

Kaeser mentions that he hopes Weather Thingy can serve as an inspiring tool to help musicians come up with song ideas. It seems that the sheer randomness of wind, precipitation, and climate can lead to some pretty incredible and unique sound effects. In reading about the mechanics of Weather Thingy mentioned in the article, I noticed a few clear connections to programming languages. Converting weather cues into parameters lets the Arduinos in Weather Thingy read them, essentially serving as the inputs to a function that generates sound. It’s amazing what computational art can create with sound!

Sources:

This is the article I referenced for my Looking Outwards this week.

Mari Kubota – Looking Outwards – 04

The project Storm Room (2009) by Janet Cardiff and George Bures Miller is a mixed-media installation that mimics the sounds and visuals of a room on a stormy day. The installation runs for around ten minutes. It was created for the Echigo-Tsumari Art Triennial and is located in a deserted dentist’s office near Doichi, Japan. The intention of the installation was to recreate the feeling of danger felt when taking refuge from a storm. The flow of water, the lights, the strobes, and the fans are controlled by a computer, while the audio is projected from eight speakers throughout the room to create an immersive experience. Janet Cardiff’s style is expressed in this work through the care that went into the installation’s sound.

Storm Room (2009)

This project attracted my attention because it uses many different elements that engage different senses to recreate the experience of a storm. The computer program that controls the water, lights, strobes, and fans also interests me because of the way it randomizes the experience each time.
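The post doesn’t describe how the controlling program is written, so the following is only a rough sketch of a randomized show-control loop of the kind described; every cue name, duration, and the fireCue stand-in is invented for illustration.

```typescript
// Rough sketch of a randomized show-control sequence; cue names,
// durations, and fireCue() are invented for illustration.

type Cue = "water" | "lights" | "strobe" | "fans" | "thunderAudio";

function fireCue(cue: Cue, on: boolean): void {
  console.log(`${cue} -> ${on ? "on" : "off"}`); // stand-in for real hardware control
}

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Run one roughly ten-minute storm, choosing cue order and timing at random
// so each visit to the room feels slightly different.
async function runStorm(durationMs = 10 * 60 * 1000): Promise<void> {
  const cues: Cue[] = ["water", "lights", "strobe", "fans", "thunderAudio"];
  const start = Date.now();
  while (Date.now() - start < durationMs) {
    const cue = cues[Math.floor(Math.random() * cues.length)];
    fireCue(cue, true);
    await sleep(500 + Math.random() * 4000);  // hold the cue for a random interval
    fireCue(cue, false);
    await sleep(1000 + Math.random() * 8000); // pause before the next event
  }
}

runStorm();
```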

Fanjie Mike Jin—Looking Outwards—04

This sound installation is titled “Chijikinkutsu” and was created by the Japanese artist Nelo Akamatsu. The title refers to a sound ornament found in traditional Japanese gardens since the 16th century. In “Chijikinkutsu,” sewing needles float on water in glass tumblers so that they are affected by the magnetic field and turn like compasses. When electricity is applied to the coil, the needle hits the glass and creates a very delicate sound. This is a very minimal approach, but with these few elements the sounds it generates are rather complex and colorful. I particularly appreciate that this project uses magnetism as its input parameter; since the magnetic field differs in different parts of the world, the resulting sound installation would differ as well.

Video featuring the sound performance
view of the installation
from http://www.everydaylistening.com/articles/tag/glass

Angela Lee—Looking Outwards—04

Adrien Kaeser, the creator of “Weather Thingy,” using his own device.
The interface of the device, which allows users to amplify or reduce the composition.

“Weather Thingy,” designed by Adrien Kaeser, controls musical instruments using real-time climate-related events that are collected and measured by a rain gauge, wind vane, and anemometer. Users also have some control over the piece and can choose to amplify or reduce some of the output through the device. What I enjoy about this piece is that it allows people to take in information through the sense of hearing. I also appreciate how the visual design was made intuitive through its use of colors and form, because it shows how thoughtful he was in considering how the user would interact with and perceive the device itself. As a design student, I’ve created data visualizations and know how challenging they are, and this reminds me of a data visualization, but done in audio form. I’m not sure how Kaeser structured his algorithm, but I think it would make sense if he had defined variables that help determine how chaotic or serene the composition is based on the climate in real time.

Sean Leo – Looking Outwards – 04

HarmonicTunes – Published on Nov 13, 2010

Chiptune music, or chip music, is produced mostly using video game consoles and home computer technology. Musicians utilize the sound chips found in those devices and generate patterns to create their music. Most notable are musicians using the Nintendo Game Boy to create their sounds. What I find so interesting about chiptune is that there’s an aspect of nostalgia, as we have all become accustomed to 8-bit plings and beeps while growing up in an age of fast media advancements. Now the sounds feel old and outdated, which, honestly, is part of their appeal to me. It is a choice not to use the newest, highest-fidelity systems, and instead to use a consumer product from an entirely different industry. Part of the rise of chiptune was its accessibility: if you had a Game Boy, you were already halfway there. The music from those games is iconic (we could probably all whistle the Mario theme whenever asked), so to have those same sounds created in a live setting with the energy of a punk show is incredibly fun.

Monica Chang – Looking Outwards – 04

Meandering River by onformative and Funkhaus Berlin Sound Chamber

meandering river

Meandering River is an audiovisual art installation created by onformative and the Funkhaus Berlin Sound Chamber. The collaboration uses algorithms that work rhythmically along with music to create real-time generated visuals that imitate the natural fluctuation of river landscapes.

Meandering River in FunkHaus Berlin Sound Chamber
Visual representation/work of algorithm.
Meandering River : Full look into the installation

This audiovisual landscape spans multiple screens and uses a bird’s-eye view of the landscape, which shows the shape-shifting of the surface more clearly. Using a musical composition created with the Google Magenta Performance RNN learning model, the team came up with a collection of computational strategies that translate these musical phrases into the visual structures of the animation.
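onformative hasn’t released its code, so the sketch below only illustrates one plausible way to translate Performance RNN-style note events into simulation parameters; the parameter names (meanderStrength, erosionRate, flowSpeed) are assumptions.

```typescript
// Hypothetical mapping from Performance RNN-style note events to
// parameters of a river-landscape simulation. All names are invented.

interface NoteEvent {
  pitch: number;    // MIDI pitch, 0-127
  velocity: number; // MIDI velocity, 0-127
  timeSec: number;  // onset time
}

interface RiverParams {
  meanderStrength: number; // how sharply the channel curves
  erosionRate: number;     // how quickly banks are carved away
  flowSpeed: number;       // speed of the visual flow
}

// Low, loud notes could push the river to carve aggressively;
// high, quiet notes could slow it into gentle meanders.
function noteToRiverParams(n: NoteEvent): RiverParams {
  const pitchNorm = n.pitch / 127;
  const velNorm = n.velocity / 127;
  return {
    meanderStrength: 0.2 + 0.8 * pitchNorm,
    erosionRate: 0.05 + 0.45 * velNorm * (1 - pitchNorm),
    flowSpeed: 0.5 + 1.5 * velNorm,
  };
}

console.log(noteToRiverParams({ pitch: 48, velocity: 100, timeSec: 12.5 }));
```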

What really drew me to this project was the idea of music and sound values conducting the way a generated landscape forms and moves. Scientifically, we know that bodies of water (in this case, rivers) normally erode the land to form these beautiful landforms across the Earth. To see an alternative way (even a completely imaginary and digital one) to create land was very fascinating to me.

Some of the smaller things I also admired were the choice of colors and the use of various textures for each landscape. With these, the artists were able to create a sense of time over multiple imaginary landscapes, as if we were traveling and exploring through each region of this digitally rendered planet.

Claire Lee – Looking Outwards – 04

I have always had a deep appreciation for the products of the intersection between art and biology. However, I’d only ever seen visual examples of this genre of work, so I was really excited and fascinated by Pierry Jaquillard’s Prélude in ACGT, a piece that takes the A-C-G-T (adenine, cytosine, guanine, thymine) sequence of Jaquillard’s own DNA and uses a JavaScript-based program to convert it into a musical score. I really admired the concept of combining biology and music to create an organically generated musical piece that also holds deep meaning for an individual with regard to his own identity.

Prelude in ACGT, Chr. 1 to 22 and XY ECAL/Pierry Jaquillard

“This Prelude is important for me, as the technological advances are taking any data (including music) and turn them into DNA in order to save them for almost eternity as they promise. But for me, the most important is more the interpretation of a code rather than the materialism of the code itself. I think that maybe we are just generating data that will last centuries but the key to retrieve them won’t. They could be a kind of post-digital hieroglyphs.” 

Pierry Jaquillard

The algorithm that generated the work is written in JavaScript, using a MIDI library that generates signals to be converted into electronic sounds. I suppose that the DNA analysis is done outside of the code, and that the program takes the DNA analysis information and converts each A, C, G, and T to a corresponding sound. I believe that conceptually this work is very simple, but the concept in itself is very creative.
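Since the post only guesses at the program’s internals, here is an equally speculative sketch of the core mapping, reading a DNA string and turning each base into a MIDI note number; the specific pitches, including the stand-in for T, are arbitrary choices.

```typescript
// Speculative sketch of the DNA-to-MIDI mapping; the actual note
// assignments in Jaquillard's piece are not documented here.

// Map each base to a MIDI pitch. A, C, and G map to the notes of the
// same name; "T" has no musical namesake, so it is assigned arbitrarily.
const BASE_TO_MIDI: Record<string, number> = {
  A: 69, // A4
  C: 60, // C4
  G: 67, // G4
  T: 62, // D4, an arbitrary stand-in for thymine
};

// Convert a DNA string into a list of MIDI note numbers,
// skipping any character that is not A, C, G, or T.
function dnaToMidiNotes(sequence: string): number[] {
  return sequence
    .toUpperCase()
    .split("")
    .filter((base) => base in BASE_TO_MIDI)
    .map((base) => BASE_TO_MIDI[base]);
}

console.log(dnaToMidiNotes("ACGTTGCA")); // [69, 60, 67, 62, 62, 67, 60, 69]
```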

SooA Kim – Looking Outwards – 04

EXPERIMENT IN F# MINOR | 2013

Janet Cardiff and George Bures Miller are artists known for making sound art in sculptural form and bringing an aural experience to the audience in the space. I have been inspired by Cardiff and Bures Miller’s work in my art practice and realized that their work has progressed toward generative sound installations. Experiment in F# Minor is one such work, where sound is triggered by viewers’ shadows. Using light sensors, the shadows cause instrumental tracks coming from the speakers to fade up, overlapping and mingling into varied soundscapes. As more of the audience fills the room, it creates a cacophony of musical compositions; with fewer people in the room, the installation table falls back into silence.

Experiment in F# Minor; Janet Cardiff & George Bures Miller from Cardiff & Miller on Vimeo.
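The artists’ implementation isn’t public, so this is only a sketch of the behaviour described, with one light sensor per instrumental track and darkness fading the track up; the track names and smoothing factor are made up.

```typescript
// Sketch of the shadow-to-volume behaviour described above.
// Each light sensor is paired with one instrumental track; the darker
// the sensor (the more shadow), the louder its track fades up.

interface Track { name: string; volume: number; } // volume 0..1

// lightLevels: 0 = fully shadowed, 1 = fully lit, one value per sensor.
function updateVolumes(tracks: Track[], lightLevels: number[], smoothing = 0.1): void {
  tracks.forEach((track, i) => {
    const target = 1 - (lightLevels[i] ?? 1);            // shadow raises the target volume
    track.volume += (target - track.volume) * smoothing; // gradual fade, not a jump
  });
}

const tracks: Track[] = [
  { name: "strings", volume: 0 },
  { name: "horns", volume: 0 },
  { name: "piano", volume: 0 },
];

// One visitor's shadow falls across the first two sensors.
updateVolumes(tracks, [0.2, 0.4, 0.95]);
console.log(tracks);
```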

Ilona Altman – Looking Outwards – 04

video explaining the workings of the Weather Thingy

I was so happy to discover this project! It is a beautiful mix of computation and sound. It is called “Weather Thingy,” and it was made last year by Adrien Kaeser, a fellow bachelor’s student at a school in Switzerland! With this project, a musician can create music that responds to and changes with the local weather conditions.

I especially admire the visual design and the conceptual vision of this piece. Visually, this work is stunning, so simple, and looks easy to use. I love the images that pop up on the screen associated with each facet of the weather. Conceptually, I love the idea of the weather influencing a song. It makes me think of Olafur Eliasson’s The Weather Project (link below), meditating on how weather is one of the ways we experience nature within our city; it is an ever-present source of chaos.

https://www.tate.org.uk/whats-on/tate-modern/exhibition/unilever-series/unilever-series-olafur-eliasson-weather-project-0

Algorithmically, I would guess that the inputs (the weather and the musician’s movements on the keyboard) are combined in the ratio requested by the musician. Each weather input (rain, wind speed) affects a different aspect of the music. Thus, the input from the instrument must also be analyzed according to these distinct parts so that it can be mixed with the weather inputs accordingly, as in the sketch below.
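Following that guess, the mixing could amount to a weighted blend of weather-derived and keyboard-derived control values, with the ratio set by the musician; the function and parameter names below are hypothetical.

```typescript
// Hypothetical weighted blend of weather-driven and performer-driven
// control values, as guessed at above. weatherWeight is the "ratio
// requested by the musician" (0 = ignore weather, 1 = weather only).

function blendControls(
  weather: Record<string, number>,  // e.g. { filterCutoff: 90, tempo: 150 }
  keyboard: Record<string, number>, // same parameter names, played live
  weatherWeight: number             // 0..1
): Record<string, number> {
  const out: Record<string, number> = {};
  for (const key of Object.keys(keyboard)) {
    const w = weather[key] ?? keyboard[key]; // fall back if weather has no value
    out[key] = weatherWeight * w + (1 - weatherWeight) * keyboard[key];
  }
  return out;
}

console.log(blendControls({ filterCutoff: 90, tempo: 150 }, { filterCutoff: 40, tempo: 120 }, 0.3));
```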

The artist’s sensibility is present even in the name of this project, which is really funny and casual. A sensibility for clear design is also present in both the project’s interface and the documentation.