Paul Greenway – Looking Outwards – 04

CarbonScape by Chris Cheung

CarbonScape is an installation by Chris Cheung that raises awareness of CO2 pollution by visualizing sound data collected at sources with high carbon footprints. Cheung first collected sound samples from a variety of pollution sources, such as jet engines, industrial factories, and cargo ships, and compiled them into a single soundscape. To visualize this sound data, long tubes that imitate industrial chimneys are spread across the room, each containing a black sphere that moves up and down in time with the sound data. Together, the combined soundscape and the dynamic black spheres representing CO2 create a clear visual representation of the original audio samples.

I found the project to be a very interesting and unique way of visualizing audio, and I thought the pairing of CO2 levels with the generated sound worked very well. The forest-like appearance of the installation and the constantly moving spheres also tied in well with the theme and intention of the project while accurately depicting the data.

CarbonScape sound data

Hyejo Seo – Looking Outwards-04


Video that demonstrates how Uncertainty Principle of the Invisible works. 

“There is no such thing as emptiness; it is full of randomness”

This interactive art project, Uncertainty Principle of the Invisible, created by Hyesoo Chang, was inspired by the connection between the human voice and the random movement of dust. Every individual has a unique vocal tone and rhythm that, consequently, differentiates the movement of the dust in the air. In this case, the human voice and the dust particles act as "the Invisible," and the random interaction between the two is the "Uncertainty Principle." When a person speaks into the microphone (the physical interaction), their voice is transformed into digital data, which then affects how the dust bounces. The installation was created using Max/MSP.
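The piece itself is a Max/MSP patch, but the underlying idea of mapping voice loudness to random particle motion can be sketched in a few lines of Python. Everything below (the jitter scale, the sample amplitudes, the particle count) is an invented illustration, not the artist's actual patch.

```python
# Rough sketch (not the artist's Max/MSP patch): louder voice input -> larger
# random displacement of simulated dust particles. All values are made up.
import random

def update_dust(particles, amplitude, jitter_scale=5.0):
    """Move each (x, y) particle by a random step scaled by voice amplitude."""
    moved = []
    for x, y in particles:
        step = amplitude * jitter_scale
        moved.append((x + random.uniform(-step, step),
                      y + random.uniform(-step, step)))
    return moved

dust = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
for amplitude in [0.1, 0.8, 0.3]:   # stand-in for successive microphone readings
    dust = update_dust(dust, amplitude)
    print(dust[0])                  # watch one particle wander
```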

I think it's a very intriguing and creative piece that shows how our voices, something invisible to us, affect the physical world around us. This project sheds light on an interaction between us and our surroundings that is usually overlooked. So cool!

Steven Fei-LookingOutwards-04

With the advent of robots, sound art has increasingly been mediated by machines and artificial intelligence. Robots can be programmed to "think" and produce music that can fool humans into believing the songs were made by other people. According to the article "When Robots Write Songs," some robots were specially designed to emulate famous jazz musicians and DJs, and they surprised me by copying and developing those personal preferences and styles in their compositions. From building Fibonacci relationships among notes to appeal to our general sense of beauty, these robots can even develop algorithms that deconstruct and understand personal musical styles in a strikingly rational way.

One of the most interesting projects on how artificial sound art can change our perception of the world is Prelude in ACGT. Exploring the relationship between biology and music, the tool designed by Pierry Jequillard uses the developer's own DNA and converts it into music. By controlling parameters such as tempo, the developer can visualize and understand the algorithmic process by which his chromosomes are converted into musical arrangements. Such a technological advance amazes me in its conversion of our natural form into encoded, enduring artworks. Moreover, the experiment demonstrates the versatility of artificial intelligence in producing interdisciplinary art.
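The article does not spell out Jequillard's exact mapping from nucleotides to notes, but the general idea of reading DNA as a score can be sketched as follows. The pitch assignments and tempo handling here are arbitrary placeholders, not the project's real algorithm.

```python
# Illustrative only: the actual mapping in "Prelude in ACGT" is not documented here.
# This sketch assigns each nucleotide an arbitrary MIDI pitch and a fixed duration.
BASE_TO_MIDI = {"A": 60, "C": 64, "G": 67, "T": 71}   # chord tones, chosen arbitrarily

def dna_to_notes(sequence, tempo_bpm=120):
    """Return (midi_pitch, duration_seconds) pairs for a DNA string."""
    beat = 60.0 / tempo_bpm
    return [(BASE_TO_MIDI[base], beat) for base in sequence if base in BASE_TO_MIDI]

print(dna_to_notes("ACGTTGCA"))
```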

Click here to view the original article  


Sydney Salamy: Looking Outwards-04

The project I chose to write about is Weather Thingy by Adrien Kaeser. It was made in 2018 with help from Cyril Diagne, Gaël Hugo, Christophe Guignard, Laura Perrenoud, Tibor Udvari, Pietro Alberti, and Marc Dubois. The project is a sound controller that uses the weather to make music. The device converts things like rain, wind, and sunlight into MIDI data that instruments can then interpret. It gathers this data with its rain gauge, brightness sensor, wind anemometer, and wind vane. The other part of the piece is a controller responsible for transforming the weather data.


  • I really like the idea of incorporating natural events into man-made activities. The device allows for input from the musician but also leaves a significant amount of randomness to the weather. I like mixing natural and man-made things because they are essentially opposites, so bringing them together to create something pretty appeals to me. The random aspect of some of the music is interesting as well. Leaving some things up to chance is exciting because no one can really guess what the end product will sound like.
  • I don’t know much about the algorithms that generated the work, though I assume they are moderately complicated. Kaeser would have had to assign certain notes and other musical parameters to particular wind speeds, wind directions, and brightness levels; a rough sketch of that kind of mapping follows this list.
  • The artist seems to have a minimalist aesthetic. I assume this because the final form is very simple yet effective. The demonstration video also looks minimalist (although I don’t know how much creative control he had over that). If the music he creates counts as part of the final product, then it also shows his artistic sensibilities. He can play notes of his own on the keyboard, with the weather-driven music layered on top. Also, different weather events cause different kinds of sounds, so he clearly imagined those events sounding a certain way and incorporated that into the product.
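As mentioned in the list above, here is a speculative Python sketch of that kind of mapping. Kaeser's actual rules are not published in the article; the scale, sensor ranges, and example values below are invented for illustration.

```python
# Speculative sketch: wind speed picks a note from a scale, brightness picks a
# velocity (loudness). None of these numbers come from the real Weather Thingy.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI note numbers for one octave

def weather_to_note(wind_speed_ms, brightness_lux,
                    max_wind=20.0, max_lux=100000.0):
    index = min(int(wind_speed_ms / max_wind * len(C_MAJOR)), len(C_MAJOR) - 1)
    velocity = min(int(brightness_lux / max_lux * 127), 127)
    return C_MAJOR[index], velocity

print(weather_to_note(wind_speed_ms=7.5, brightness_lux=40000))
```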
Video Demonstrating The Music Of “Weather Thingy”

Emma N-M LO-04

TuneTable (2016) 

By: Xambo, A., Drozda, B., Weisling, A., Magerko, B., Huet, M., Gasque, T., and Freeman, J.

TuneTable demo at the Museum of Design Atlanta

TuneTable is an interactive tabletop application that teaches basic computer programming concepts. Users compose short musical snippets with accompanying visual displays by arranging physical blocks in different locations to build chains of code blocks. I admire the fun of turning a learning activity into play for kids, and also how easy it is to make music and visuals without having to think very hard. As for the algorithms, I think each block placed on the table produces a particular output based on where it is placed and how it is rotated. The creators' artistic sensibilities come into play in the musical and visual algorithms written for the blocks that get put down onto the table.
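I don't know TuneTable's real implementation, but the position-and-rotation idea could look something like this toy model in Python; the axis meanings and ranges are my own guesses, not the project's code.

```python
# Toy model: a block's x position picks a time slot, its y position picks a pitch,
# and its rotation sets the volume. Purely illustrative.
def block_output(x, y, rotation_deg, table_width=800, table_height=600):
    time_slot = int(x / table_width * 16)            # which 16th of the loop it plays on
    pitch = 48 + int(y / table_height * 24)          # MIDI pitch between C3 and C5
    volume = int((rotation_deg % 360) / 360 * 127)   # rotation sets loudness
    return {"slot": time_slot, "pitch": pitch, "volume": volume}

print(block_output(x=400, y=150, rotation_deg=90))
```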

Min Ji Kim Kim – Looking Outwards – 04

Video Demonstration of the Weather Thingy created by Adrien Kaeser at ECAL.

Weather Thingy, created by Adrien Kaeser at ECAL in September 2018, is a sound controller that uses live climate data to modify the sounds of musical instruments. The device has two parts. The tripod-mounted weather station includes a wind vane controlling pan, an anemometer controlling the LFO, and a rain gauge controlling the chorus. All of these sensors are connected to the main controller, with which the composer can manipulate the different elements. Through Weather Thingy, Kaeser wanted to share his live source of inspiration aurally with the audience and show how the climate shaped his musical compositions. The tools used to create Weather Thingy include Arduino, C++, and the MIDI protocol.
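The pairings above (wind vane to pan, anemometer to LFO, rain gauge to chorus) are named in the source, but the exact numbers are not, so the sketch below is only a guess at how such readings might become MIDI control-change values, written in Python rather than the device's Arduino/C++. The scaling constants and the use of CC 1 for LFO depth are assumptions; CC 10 (pan) and CC 93 (chorus depth) are standard MIDI controller numbers.

```python
# Guessed mapping from weather readings to MIDI control-change (controller, value) pairs.
def sensors_to_midi_cc(wind_dir_deg, wind_speed_ms, rain_mm_per_h,
                       max_wind=20.0, max_rain=50.0):
    pan    = int(wind_dir_deg % 360 / 360 * 127)              # CC 10 (pan)
    lfo    = int(min(wind_speed_ms / max_wind, 1.0) * 127)    # e.g. CC 1 driving LFO depth
    chorus = int(min(rain_mm_per_h / max_rain, 1.0) * 127)    # CC 93 (chorus depth)
    return [(10, pan), (1, lfo), (93, chorus)]

print(sensors_to_midi_cc(wind_dir_deg=270, wind_speed_ms=5.0, rain_mm_per_h=12.0))
```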

Controller interface of the anemometer input measuring wind speed.

I found this project astounding because, having composed music myself, I sometimes wish I could share my sources of inspiration and really let my audience visualize the atmosphere in which I wrote a song. Weather Thingy combines musical composition with natural elements of the environment, and seeing that Kaeser was not only able to draw inspiration from nature but also to incorporate nature directly into his compositions is something I found really refreshing.

Weather Thingy – Real time climate sound controller

Caroline Song – Looking Outwards 04

The piece I chose to focus on this week is called Sonic Playground, by Yuri Suzuki Design (in collaboration with the High Museum of Art in Atlanta). Essentially, Sonic Playground is an outdoor sound installation made up of a collection of colorful sculptures that transmit and modify sound in different and intriguing ways.

Sonic Playground by Yuri Suzuki Design

In this interactive sound installation, sound is manipulated and changed based on where one is standing. For example, talking into one end of a sculpture will twist the sound of the voice as it travels to another person standing and listening at the other end.

According to the article, Luca Dellatorre developed the pieces of this sculpture in Grasshopper, a "parametric design plug-in in Rhinoceros". Dellatorre wrote a plug-in, a 3D raytracing tool, that lets the user choose a source of sound and send it in a precise direction. He used this method to test the different sounds that could be made and, in turn, to evaluate candidate shapes for the mirrors and the bells.
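At the core of any such raytracing tool is reflecting a ray about a surface normal: r = d - 2(d·n)n. The minimal Python sketch below shows just that step; it is generic geometry, not Dellatorre's actual Grasshopper code.

```python
# Reflect a ray direction d about a unit surface normal n.
def reflect(d, n):
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# A ray travelling straight down hits a floor whose normal points up:
print(reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0)))   # -> (0.0, 1.0, 0.0)
```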

I am intrigued by this piece mainly because of the interaction between the physical space around it and the users. The way the bells are positioned around the different sculptures is interesting in that users experiencing the piece have to maneuver their heads and bodies in certain ways to properly interact with the sculpture.

Ankitha Vasudev – Looking Outwards – 04

Sonic Playground is an outdoor sound installation created by Yuri Suzuki in 2018 for the High Museum of Art in Atlanta. It features six interactive, playful, and colorful sculptures that modify and transmit sound depending on where the participant is standing, listening, or speaking. Using horns and pipes, the computed acoustics travel from one end to the other in a playful way; by kneeling down or sitting between two pipes, one can hear sounds transported from different parts of the sculptures, which creates a fun and unique listening experience.

Children engaging with Sonic Playground

I find this project interesting because it makes use of a piece of software I am learning now, Grasshopper (a parametric plug-in for Rhino). The code written for it is a 3D raytracing tool that allows the user to select a sound source and send sound in a certain direction or toward a certain geometry, in this case the acoustic mirrors or the bells at the start and end of the pipes, to see how the sound is reflected and how it interacts with the object.
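To make the "send sound in a direction and watch it reflect" idea concrete, here is a toy Python trace of a ray bouncing between two parallel walls as a stand-in for the pipe geometry; it is purely illustrative and has nothing to do with the actual plug-in's code.

```python
# March a ray forward and flip its vertical direction whenever it reaches a wall.
def trace_in_pipe(y0, dy, pipe_height=1.0, step=0.1, n_steps=40):
    y, path = y0, [y0]
    for _ in range(n_steps):
        y += dy * step
        if y <= 0.0 or y >= pipe_height:   # hit a wall: reflect
            dy = -dy
            y = max(0.0, min(pipe_height, y))
        path.append(round(y, 3))
    return path

print(trace_in_pipe(y0=0.5, dy=0.7))
```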

Grasshopper/Rhinoceros 3D
The pipes transfer sounds depending on where the person is situated

Alec Albright – Looking Outwards 04

Dave Young’s exhibition of Radius Music in 2010.

Dave Young’s “Radius Music”, released in 2010, is an audiovisual device that uses ultrasonic sensors to measure the distance between the device and the people in the room, conducting additive synthesis to create audio and casting light beams on the floor in response to that distance. The project is admirable for its ability to carry out complicated audio synthesis on the fly, based on input it may never have encountered before. I would imagine that the algorithm maps distance to pitch in some way, but I’m not sure what affects timbre, tempo, or anything else for that matter. Regardless, it seems super cool!
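One plausible reading of "maps distance to pitch" is sketched below in Python: closer visitors raise a fundamental frequency, and a handful of harmonics are summed, which is the essence of additive synthesis. The frequency range, distance range, and partial weights are invented, not taken from Young's piece.

```python
# Additive synthesis sample at time t, with pitch driven by measured distance.
import math

def additive_sample(t, distance_m, min_f=80.0, max_f=800.0, max_d=4.0, n_partials=4):
    closeness = 1.0 - min(distance_m / max_d, 1.0)            # 0 = far, 1 = near
    f0 = min_f + closeness * (max_f - min_f)                  # fundamental frequency
    return sum(math.sin(2 * math.pi * f0 * k * t) / k         # weaker higher partials
               for k in range(1, n_partials + 1))

print(round(additive_sample(t=0.001, distance_m=1.5), 4))
```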

Siwei Xie-Looking Outwards-04

Using “Apparatum” created by panGenerator in 2018.

Apparatum is a custom-made apparatus with a digital interface that emits purely analogue sound. I admire the project because it takes inspiration from an earlier studio and an earlier musician and gives them a modern twist. The physical form is inspired by the Polish Radio Experimental Studio, which used magnetic tape as its primary medium. Musically, it is inspired by Boguslaw Schaeffer, who conceived his own visual language of symbols that conveyed cues to sound engineers.

The creator, panGenerator, uses Electron (node.js) and C running on a Teensy 3.2 for the interface and microcontroller elements. The creator’s artistic sensibility shows in the borrowing from Oskar Hansen’s “Black Room.” The original designs date back decades, yet panGenerator is able to add a touch screen and modern studio elements to create a chic apparatus. The black-and-white combination would easily catch consumers’ and artists’ eyes today.
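The real stack pairs an Electron (node.js) interface with C firmware on the Teensy; the Python sketch below only illustrates the general architecture of reducing a graphical score of symbols to simple timed commands a microcontroller could execute. The symbol names and command format are invented for illustration.

```python
# Invented command format: each placed symbol becomes a "<letter>,<duration_ms>" string
# that a microcontroller could parse and play back in order.
SYMBOL_TO_COMMAND = {"tone": "T", "noise": "N", "silence": "S"}

def score_to_commands(score):
    """score: list of (symbol, duration_ms) pairs -> list of command strings."""
    return [f"{SYMBOL_TO_COMMAND[sym]},{dur}" for sym, dur in score]

print(score_to_commands([("tone", 500), ("silence", 250), ("noise", 1000)]))
```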

Link to original source is here.