Julie Choi – Looking Outwards – 11

This video shows how a person can interact with Apparatum, a musical machine inspired by the Polish Radio Experimental Studio.

Created by the collective panGenerator for a solo exhibition, Apparatum pairs a digital interface with sound that is produced through purely analog, electroacoustic means. The machine’s visual language serves as audio cues that assemble into a composition. I admire this project because it is an innovative way to create audio work through customizable visual cues. Although there are many music-producing machines, this one in particular is designed around two magnetic-tape samplers that supply the noise and base tones of the music. I also respect the physical design of the machine: the screen is mounted at standing height, which opens up an interactive space and makes the user feel they are in full control.

Rachel Lee Looking Outwards 11 Section E

Screenshot from the anti-game, in which machine intelligence poses questions to create a generative song.

Atlås from binaura on Vimeo (2017).

This week, I decided to investigate Atlås, an anti-game environment that creates generative music, by the creative computing collective binaura. Atlås was built with the p5.js library and investigates the autonomy of algorithms and machine intelligence. At its crux, the project generates sounds that correspond to certain qualities of the answers to questions solved by machine intelligence. While this concept is fascinating in itself, what I find most interesting about the project is how the collective was able to generate an appropriate soundscape for the kinds of questions produced, since the questions are often quite lofty, dealing with cognitive processes, human communication, and environmental factors (not the easiest topics to talk about). Further, I was impressed by how binaura created a compelling visual narrative to complement the experience, guided by the tempo and feeling of the generated music. Overall, I really admire how well the project was executed, especially with respect to the sensitivity of the relationship between the user, the computer, the music, and the visual elements of the anti-game environment.
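binaura has not published the exact mapping, but the core move of deriving sound parameters from qualities of generated text can be sketched in a few lines of p5.js (the library Atlås itself uses). The scoring rule below, with word count and punctuation driving pitch and duration, is purely my own illustrative assumption, not binaura’s algorithm.

```javascript
// Sketch: mapping a generated question's "qualities" to sound,
// loosely in the spirit of Atlås (assumed mapping, not binaura's).
let osc;

function setup() {
  createCanvas(400, 200);
  osc = new p5.Oscillator('sine');
  osc.start();
  osc.amp(0);
}

// Hypothetical scoring: longer, more complex questions -> lower, longer tones.
function scoreQuestion(q) {
  const words = q.split(/\s+/).length;
  const commas = (q.match(/,/g) || []).length;
  return { pitch: 440 - words * 8, durationMs: 250 + commas * 150 };
}

function mousePressed() {
  userStartAudio(); // browsers require a gesture before audio starts
  const q = "How do we communicate what we cannot compute?";
  const s = scoreQuestion(q);
  osc.freq(max(s.pitch, 110));  // clamp so the pitch stays audible
  osc.amp(0.5, 0.05);           // quick fade in
  setTimeout(() => osc.amp(0, 0.2), s.durationMs); // fade out after duration
}
```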

Kyle Leve-LO-Week-11

A piece of sound art that I discovered is called The 40 Part Motet by Janet Cardiff. This project uses 40 different speakers to play the 40 individual parts of Spem in Alium by Thomas Tallis. The original piece is quite simple; this installation, however, makes it far more complex, as the voices relate to each other in ways that are much less apparent in the original versions. What I find inspirational about this project is that Cardiff was able to take a piece of music created in the 16th century and apply today’s technological advancements to create something emotional and spiritual. This demonstrates how past and present can be combined to create something that can relate to everyone.

Video explaining the piece

The piece


Christine Chen-Looking Outwards-11

Above is a video of Adrien Kaeser’s real-time climate sound controller, which he developed in 2018. Link: https://vimeo.com/292088058

While scrolling through various projects related to computational sound art, I came across one that grabbed my attention: a real-time climate sound controller. Adrien Kaeser from ECAL, a university of art and design based in Renens, designed this controller. The device uses real-time weather data to alter and control electronic music instruments. It is composed of two major parts: a weather station mounted on a microphone tripod, and a custom-built controller connected to it. The weather station senses weather conditions and passes the readings to the controller, which transforms the data into MIDI messages the instruments can interpret. The user can also shape the output, for instance by modifying qualities such as amplitude, directly on the controller. The hardware comprises an Arduino Mega, an Arduino Leonardo, Weather Meters, a SparkFun ESP32 Thing Environment Sensor Shield, an ESP32, a SparkFun MIDI Shield, a high-speed optocoupler 6N138, rotary encoder knobs, colored buttons, and an Arduino TFT touch screen. The software stack includes Arduino, C++, and the MIDI protocol.
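Kaeser’s firmware is written in C++ on Arduino, but the central idea, rescaling a weather reading into a MIDI parameter, can be sketched briefly in JavaScript. The ranges and the wind-to-tempo, temperature-to-pitch, and rain-to-velocity pairings below are my own illustrative assumptions, not Kaeser’s actual mapping.

```javascript
// Hypothetical weather-to-MIDI mapping in the spirit of Kaeser's controller.
// All ranges and pairings are assumptions for illustration.

// Linearly rescale a sensor value from one range to another, clamped.
function rescale(value, inMin, inMax, outMin, outMax) {
  const t = Math.min(Math.max((value - inMin) / (inMax - inMin), 0), 1);
  return outMin + t * (outMax - outMin);
}

function weatherToMidi(weather) {
  return {
    // Faster wind -> faster tempo (BPM).
    tempo: Math.round(rescale(weather.windSpeed, 0, 30, 60, 180)),
    // Warmer temperature -> higher pitch (MIDI note number).
    note: Math.round(rescale(weather.temperature, -10, 40, 36, 84)),
    // Heavier rain -> louder velocity.
    velocity: Math.round(rescale(weather.rainfall, 0, 10, 20, 127)),
  };
}

console.log(weatherToMidi({ windSpeed: 12, temperature: 18, rainfall: 2 }));
// -> { tempo: 108, note: 63, velocity: 41 }
```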

What I love most about this project is how the creator translates weather, the state of the atmosphere at a place and time, into another, abstract form that gives people a different experience. It is as if the weather were compressed into audio. By allowing users to alter the sound through the controller, the creator makes sure the generated sound is not entirely random and does not collapse into a muddled composition; the controller helps users shape the output into sound that is pleasing to them. It is inspiring to see how such a vast, uncontrollable stream of data can be transferred and transformed.

The controller allows users to alter the output of sound.

KadeStewart-LookingOutwards-11

Proteus cover

Proteus is an indie video game that came out in 2013, focused on exploration with an emphasis on “nonviolence”. The soundtrack is written to reflect the natural beauty within the game and is, as written, deterministic. The music that actually reaches the player’s ears, however, is non-deterministic, because it is influenced by the player’s environment and their interactions with it. For example, when the player is in a dense environment, the sound becomes dense as well; when the player is walking, the sounds behave the way they would if you yourself were walking.
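The game’s audio engine isn’t documented in this post, but the general technique, fading sound layers in and out based on game state, is simple to sketch. The layer names and the density measure below are hypothetical stand-ins.

```javascript
// Hypothetical adaptive-audio layering in the spirit of Proteus.
// Each layer's volume follows some aspect of the player's surroundings.

const layers = {
  wind:  { gain: 0 },  // always-on ambient bed
  trees: { gain: 0 },  // swells with vegetation density
  steps: { gain: 0 },  // follows player movement
};

// Called once per frame with the current game state (assumed shape).
function updateAudio(state) {
  layers.wind.gain  = 0.4;
  // Denser environment -> denser sound.
  layers.trees.gain = Math.min(state.nearbyTrees / 20, 1);
  // Walking brings in footstep/movement sounds.
  layers.steps.gain = state.isWalking ? 0.8 : 0;
}

updateAudio({ nearbyTrees: 14, isWalking: true });
console.log(layers); // trees: 0.7, steps: 0.8
```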

The video game, as stated above, is intended to reflect a nonviolent existence. The soothing music plays into the message, and the exploratory theme is emphasized by the player’s active role in how the music sounds. I think that this is a very basic but incredibly powerful method of getting the designer’s message across.

Interactive Proteus Music

Jamie Dorst Looking Outward 11

For this week’s Looking Outwards, I am writing about a project by Pierry Jaquillard, Prélude in ACGT – Sonification of personal (DNA) data, in which he converted his own DNA into a musical piece. He wanted to see how nature’s core structure (DNA) could collide with the artificial and man-made (code). He created five interfaces for the project. Two of them drive the remote, which lets you change parameters like tempo, the musical arrangement, or even the type of conversion, as well as the chromosome library, where you can choose which chromosome to play and where within it to start. The other three are used to visualize the sound, the type of algorithm, and his raw DNA, all in the interest of understanding the process.
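Jaquillard’s exact conversion scheme isn’t spelled out here, but the basic gesture of DNA sonification, assigning each base a pitch and stepping through the sequence in time, fits in a few lines. The base-to-note table below is my own illustrative assumption, not his mapping.

```javascript
// Hypothetical DNA-to-pitch mapping in the spirit of Prélude in ACGT.
// The note chosen for each base is an assumption, not Jaquillard's mapping.
const BASE_TO_MIDI = { A: 57, C: 60, G: 67, T: 69 }; // A3, C4, G4, A4

function dnaToNotes(sequence, tempoBpm = 90) {
  const beat = 60000 / tempoBpm; // milliseconds per beat
  return [...sequence.toUpperCase()]
    .filter((base) => base in BASE_TO_MIDI) // skip anything that isn't A/C/G/T
    .map((base, i) => ({
      midi: BASE_TO_MIDI[base],
      startMs: i * beat,   // one base per beat
      durationMs: beat,
    }));
}

console.log(dnaToNotes("GATTACA"));
// seven notes, one per base, a beat apart
```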

Some images of the setup of his project

This project was really interesting to me because it’s something I probably never would have thought of, yet it is a genuinely creative way to compare the natural and the synthetic. I’m also surprised at how the music doesn’t sound too eclectic or random; it sounds convincingly like a contemporary piece that could have been composed without the DNA.

Shirley Chen-Looking Outward-11 Computer Music

This project, by visual designer Cyrill Studer, is a collection of computer-generated graphics driven by music. The graphics generate and transform under the influence of the music in a subtle but clever and engaging way, and together they form the music video for the song. I find it fascinating that computing is used here as a tool to visualize music: it lets viewers experience the music not only by ear but also through the visuals. The representation is direct and grounded in common perception, which allows a precise depiction of the music. The visual concept of the entire video is based on a single form: the ellipse. Through variations in the ellipses’ angle, distortion, arrangement, and number, the video achieves a visual effect that represents the music closely in a commonly understandable visual language.

The graphics were generated in Processing, manually controlled and performed with a MIDI controller, and recorded via Syphon with the Syphon Recorder.
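Studer worked in Processing with live MIDI control; a minimal p5.js sketch of the same one-form idea might vary an ellipse’s count, angle, and distortion from a single control value (faked here with the mouse in place of a MIDI knob).

```javascript
// Minimal p5.js take on the one-form idea: everything is an ellipse,
// varied in count, angle, and distortion by a single control value.
function setup() {
  createCanvas(400, 400);
  noFill();
  stroke(255);
}

function draw() {
  background(0);
  // Stand-in for a MIDI knob: mouseX as a 0..1 control value.
  const control = mouseX / width;
  const count = 1 + floor(control * 12);   // more control -> more ellipses
  translate(width / 2, height / 2);
  for (let i = 0; i < count; i++) {
    rotate(control * TWO_PI / count);               // angle varies with control
    ellipse(0, 0, 200, 200 * (1 - control * 0.8));  // distortion varies too
  }
}
```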

Music Video – Baby Behold by CARVEL

Generated Graphics Based On Music

SOURCE:

https://www.creativeapplications.net/processing/carvel-baby-behold-music-video-by-cyrill-studer/

Looking Outwards 11 rrandell

https://www.creativeapplications.net/js/prelude-in-acgt-sonification-of-personal-dna-data/

This is a link to the artist’s work and a clip of his piece ‘Prelude in ACGT’; below is a photo of the physical manifestation of the work.

This Looking Outwards is about the artist Pierry Jaquillard. I would consider his piece ‘Prelude in ACGT’ sound art rather than music, but there is certainly a musical aspect to the work. The piece combines sound and biology in a rather unique way: he examined his own DNA, explored it through code, and then made something musical from that exploration. To create sound out of DNA, he coded five interfaces that allow certain factors to change. One of the interfaces gives you access to his chromosome library, where you can choose a ‘piece’ of it to play. Three of the interfaces examine the DNA and visualize the sound in tandem with his raw DNA. Pierry uses a JavaScript MIDI library to generate MIDI signals; those signals are sent into Ableton Live to generate the electronic sounds, which are then exported, stored, and translated into sheet music. I am very inspired by his interest in and drive to create an intersection between these two fields.
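The post doesn’t name the specific MIDI library, but the step of pushing notes from JavaScript into a DAW such as Ableton Live (listening on a virtual MIDI port) can be sketched with the standard Web MIDI API; the pitches below are placeholders, not Jaquillard’s output.

```javascript
// Sending MIDI notes from JavaScript to a DAW via the Web MIDI API.
// Pitch values are placeholders, not Jaquillard's actual output.
navigator.requestMIDIAccess().then((midi) => {
  const output = midi.outputs.values().next().value; // first available port
  if (!output) return console.log("No MIDI output found");

  const NOTE_ON = 0x90, NOTE_OFF = 0x80;
  const notes = [60, 64, 67]; // placeholder pitches (C4, E4, G4)

  notes.forEach((note, i) => {
    setTimeout(() => {
      output.send([NOTE_ON, note, 100]);                    // velocity 100
      setTimeout(() => output.send([NOTE_OFF, note, 0]), 400); // release
    }, i * 500); // one note every half second
  });
});
```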

Tanvi Harkare – Looking Outwards 11

Prélude in ACGT is a project created by Pierry Jaquillard at ECAL’s Media and Interaction Design unit. The project takes a person’s DNA and turns it into musical notes; as of now, it uses only Pierry’s own DNA. It includes all 23 chromosomes, which are run through different interfaces to create different results, with everything processed in JavaScript. The interfaces let Pierry visualize his DNA as sound, and the data can be viewed on any digital device, such as an iPad. Pierry uses MIDI files to send the generated signals to a computer that plays them back. Certain aspects of the music track can be changed, such as the tempo, arrangement, and instruments.

I find this project interesting because each person could create a different soundtrack, since every individual has their own unique DNA. I wish there were an easier way for users to get a soundtrack unique to them, perhaps based on their facial or body structure, which is also unique to everyone.

Converting DNA into MIDI files on a digital device

Curran Zhang- Looking Outwards 11

For this week’s post, I decided to investigate the work of Andrius Sarapovas. His room-sized kinetic sculpture converts smartphone data into a numerical data set from which music is produced. The sculpture’s segments, each composed of a metal bar, a sound activator, a sound damper, a resonator, and mechatronics, are placed on surfaces or hung from the ceiling. Drawing on Tele2’s 4G network, various algorithms generate music covering 16 notes: C, D, F, and G across four octaves. As visitors walk around the room, different harmonies emerge at different times and locations. To drive the composition, the extremes within one second of 4G data are used to create one second of music, and numbers derived from those extremes determine rhythm and volume.
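The article doesn’t detail the algorithm, but the note palette is specific enough to sketch: build the 16-note scale (C, D, F, and G over four octaves) and map each second’s data extremes onto a note and a volume. The normalization ranges below are my own assumptions.

```javascript
// The sculpture's 16-note palette: C, D, F, G across four octaves,
// built as MIDI note numbers starting at C3 (48).
const SCALE = [];
for (let octave = 0; octave < 4; octave++) {
  for (const offset of [0, 2, 5, 7]) { // C, D, F, G
    SCALE.push(48 + octave * 12 + offset);
  }
}
// SCALE.length === 16

// Hypothetical mapping: one second of network data -> one note.
// The article only says per-second extremes set rhythm and volume;
// the input ranges here are assumptions.
function secondToNote(stats) {
  const spread = stats.max - stats.min;              // the data extreme
  const idx = Math.min(Math.floor(spread / 64), 15); // pick a scale step
  return {
    midi: SCALE[idx],
    volume: Math.min(stats.max / 1024, 1),           // bigger peak -> louder
  };
}

console.log(secondToNote({ min: 120, max: 760 }));
// -> { midi: 77, volume: 0.742... } with these assumed ranges
```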

Installation that is hung on the wall and from the ceiling

Sarapovas’s work bridges the two aspects of chaos and structure. Like the smart devices that feed it, the installation must itself be “smart”, ever growing and changing with the flow of the internet. This mode of expression is very interesting because it meshes the two ideas into a coherent artwork that visitors can absorb.

Visitors of all ages express interest in the artwork

https://creators.vice.com/en_us/article/7x9m3a/massive-robotic-instrument-smartphone-data-sounds