Looking Outwards-04

The inspirational project I found is called Forms – Screen Ensemble, created in 2020. I found this artwork very interesting: graphic scores generated by random probability are transformed into sound through a unique algorithm, creating the music played in the video. From the description, I found it interesting that the project lets the audience literally hear, at the same moment, what they see. Briefly, the algorithm works across three screens, each of which contributes to the sound by covering rhythm, texture, or harmony. The graphics the screens display are endless and can never repeat.

Link

LO: Sound Art

Kaeser plays with real-time weather events to compose a piece of music.

Adrien Kaeser’s “Weather Thingy” is a computational sound artwork that uses real-time climate data to create custom sound effects and control musical instruments. The custom-built weather station, mounted on a tripod, carries sensors that measure wind speed and direction and rainfall, as well as a brightness sensor that assesses ambient light levels. The device then transforms these weather events into MIDI inputs, which adjust settings like chorus, pan, decay, and LFO through a custom instrument panel. I found it really intriguing that the device interprets the differing stimuli of the weather and combines them with the user’s input, via a pair of knobs and four buttons, to produce a variety of audio effects on the keyboard. I really admire how weather can be rendered into computational data and auditory output, and how Kaeser simply collaborated with the environment to compose an interesting musical piece. Kaeser’s artistic sensibility surfaces in how he first came up with the idea of a musical weather contraption and then actually played music in such a novel way along with nature.
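The core transformation here, from a raw sensor reading to a MIDI control value, can be sketched as a simple range mapping. This is only an illustration: the function name, the sensor range, and the LFO assignment below are my assumptions, not details from Kaeser's actual firmware.

```python
def to_midi_cc(reading, lo, hi):
    """Scale a raw sensor reading from its physical range [lo, hi]
    into the 0-127 range a MIDI control change message expects."""
    if hi == lo:
        return 0
    normalized = (reading - lo) / (hi - lo)
    clamped = min(max(normalized, 0.0), 1.0)  # out-of-range gusts still yield valid MIDI
    return round(clamped * 127)

# e.g. a 12 m/s wind reading (assuming a 0-30 m/s sensor range) driving an LFO rate
lfo_cc = to_midi_cc(12.0, 0.0, 30.0)
```

Each of the device's parameters (chorus, pan, decay, LFO) would get its own mapping like this, which is what makes the weather audible as a set of continuously changing effect settings.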

Reference: https://medium.com/@JeremySCook/hear-the-environment-with-this-weather-thingy-c0d0502fca0e

LO: Sound Art

While looking through many different computational sound projects, the one that stood out to me the most was Algorithmic Drive – Spectacular car(au)tonomy. The project uses a dash cam on a car and synchronizes its footage with information such as location, altitude, orientation, speed, engine RPM, stability, and the temperature of various components. I admire this project the most because it turns the everyday noises of a car ride into a beautiful sound-based artwork. The system has a custom-built interface with illuminated rotary encoders and a monitor, installed on a road case that contains a subwoofer. The artist’s artistic sensibilities manifest in the final form through the translation of sound into digital components that people can view.

Algorithmic Drive – Spectacular car(au)tonomy, François Quévillon

Looking Outwards 04: Sound Art

Reading this week’s topic about automated emulations of musical style was very interesting. Considering how far music has evolved, and applying that evolution to this week’s topics, is thought-provoking. After reading more articles about the topic, I learned about simple harmonic motion and how the sounds we hear are actually fluctuations in air pressure: tiny variations from normal atmospheric pressure caused by vibrations. This reminded me a lot of the sound waves unit from AP Physics in high school. The continuous rise and fall in pressure is what creates sound waves. Contemporary technology has evolved dramatically, and digital music technology is now possible. Digital music is a method of representing sound in numerical values, and it is often synonymous with MP3 music, a common file format in which digital music exists.
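The pressure-fluctuation idea maps directly onto how digital audio represents sound as numbers: sample the wave at a fixed rate and quantize each sample to an integer. A minimal sketch of that idea (the 44.1 kHz rate and 16-bit depth are the common CD-audio conventions, not anything specific to MP3):

```python
import math

SAMPLE_RATE = 44100   # samples per second (the CD-audio convention)

def sine_samples(freq_hz, duration_s):
    """Represent a pure tone -- a simple harmonic pressure fluctuation --
    as a list of 16-bit integer samples, the way uncompressed PCM audio does."""
    count = int(SAMPLE_RATE * duration_s)
    return [round(32767 * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE))
            for i in range(count)]

tone = sine_samples(440.0, 0.01)   # 10 ms of the pitch A4
```

Formats like MP3 then compress these numerical samples further, but the underlying representation of sound as a sequence of sampled pressure values is the same.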

LO-04

FORMS – STRING QUARTET


I chose to discuss the project FORMS – String Quartet. Created by Playmodes, ‘FORMS – String Quartet’ is a live multimedia performance for string quartet, electronic music, and panoramic visuals, in the field of visual sonification. The project originates from a real-time visual music-score generator created by Playmodes, designed with a set of graphic-generation rules driven by randomness and probability. The resulting graphics are then transformed into sound using spectral synthesis algorithms. I find this project especially interesting because my sister played violin in many string quartets.
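Spectral synthesis of this kind can be approximated with additive synthesis: treat each pixel row of a score column as a harmonic whose loudness is that pixel's brightness. The sketch below is my own simplification of the idea, not Playmodes' actual algorithm, and the sample rate and base pitch are arbitrary choices.

```python
import math

SAMPLE_RATE = 8000   # deliberately low rate, just for illustration

def sonify_column(brightnesses, base_hz=110.0, duration_s=0.05):
    """Additive synthesis: row k of a score column drives harmonic k+1
    of base_hz, with amplitude equal to that pixel's brightness (0.0-1.0)."""
    count = int(SAMPLE_RATE * duration_s)
    samples = []
    for i in range(count):
        t = i / SAMPLE_RATE
        value = sum(b * math.sin(2 * math.pi * base_hz * (k + 1) * t)
                    for k, b in enumerate(brightnesses))
        samples.append(value / max(len(brightnesses), 1))  # keep within [-1, 1]
    return samples

# a bright fundamental plus a faint third harmonic
audio = sonify_column([1.0, 0.0, 0.3])
```

Scanning across the generated image column by column this way would turn the graphic score into a continuously evolving spectrum, which is the basic premise of visual sonification.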

An example of a string quartet

Looking Outward 4

The video “Sound Machines” on the YouTube channel MediaArtTube demonstrates a machine that uses light recognition to play sound. The machine has three sensors and takes in visual information from vinyl-record-sized discs with patterns engraved on them. The three discs carry different types of patterns, which the sensors read. The first disc reads color, with red, yellow, and blue lines on a white disc; moving the sensor up or down the radius changes the sequence of colors, therefore changing the sound. The second disc reads either black or white, with three patterns made by hollowing out a black disc; when the sensor detects black, it plays a noise. The final disc is split into thirds in which white becomes progressively greyer; the sensor reads the grayscale and increases the volume the greyer the image gets.
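The three disc-reading behaviors described above amount to three small mappings from sensed light to sound. A sketch of each, with the caveat that the specific pitches, the trigger value, and the grayscale scale are my own placeholders rather than details from the video:

```python
# Assumed pitches for the colored lines on the first disc (illustrative only).
COLOR_TONES = {"red": 440.0, "yellow": 550.0, "blue": 660.0}

def tone_for_color(color):
    """Disc 1: each colored line maps to a tone; white (no line) is silence."""
    return COLOR_TONES.get(color)   # None means no sound

def trigger_on_black(sees_black):
    """Disc 2: play a noise whenever the sensor detects black."""
    return "noise" if sees_black else None

def volume_for_gray(gray):
    """Disc 3: the greyer the sensed patch (0.0 = white, 1.0 = full grey),
    the louder the output; readings are clamped to the valid range."""
    return min(max(gray, 0.0), 1.0)
```

Spinning the discs then turns each mapping into a repeating sequence, which is what makes the machine a mechanical sequencer rather than a one-shot instrument.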

LO-4

Emilio Bustamante

I have some experience using sound and robots to accelerate the construction process for a sculpture. During a robotics class, we recorded and classified sounds in Teachable Machine so the robot could recognize them and perform specific actions based on the sound. We used p5.js to communicate to the robot its action location, speed, movement type, and trigger words. When the robot heard the correct word, it would pick up a stick and cut it to the specific length specified by that word. The sculpture was a mixture of multiple sticks forming a parametric form made with Voronoi in Grasshopper. The whole process, from design to construction, was accelerated with algorithms.
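A workflow like this can be reduced to a lookup from the recognized word to cut parameters, gated on classifier confidence. The labels, lengths, speeds, and threshold below are hypothetical placeholders, not values from the actual class project:

```python
# Hypothetical mapping from a recognized word to robot cut parameters.
ACTIONS = {
    "short":  {"stick_length_mm": 150, "speed": 0.6},
    "medium": {"stick_length_mm": 300, "speed": 0.5},
    "long":   {"stick_length_mm": 450, "speed": 0.4},
}

def action_for(label, confidence, threshold=0.8):
    """Only act when the sound classifier is confident about the word;
    unknown words or low-confidence guesses mean the robot does nothing."""
    if confidence < threshold:
        return None
    return ACTIONS.get(label)
```

Gating on confidence matters in practice, because a sound classifier in a noisy workshop will produce spurious low-confidence labels that should not trigger a cut.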

LO 4: Sound Art

Luke Du Bois’s piece Pop Icon: Britney is a generative video with sound, meant to be displayed in a physical frame. It explores the definition of “icon” in relation to pop stars and the history of Catholicism. Du Bois strips away the instrumentals from Britney Spears’s songs, leaving only her voice, which is then processed with the reverb of the famous San Vitale Basilica in Ravenna, Italy. Without this context, the audio sounds like an ephemeral yet discordant wash of tones, absent of any structure or tonality. Du Bois also emphasizes the extensive and then relatively new use of Photoshop and Autotune that ruled Britney’s experience of music and fame.

LO 04: Sound Art

Weather Thingy by Adrien Kaeser is a sound controller that uses real-time climate data to produce sound art. The data is used to modify the settings of the various instruments connected to the station. The station consists of three climate sensors: a rain gauge, a wind vane, and an anemometer. The data collected from the three sensors is fed to an interface that allows the user to use the information to generate sound. The device is built from various components, including an Arduino Mega. The software connecting them includes Arduino (C++) and the MIDI protocol. I admire the performance aspect of the piece, where the audience can experience the live process of sound generation. The machines are put together in a visually appealing way. Furthermore, the openness to human control over the data implementation seems to add more creativity to the project. The artist’s decision to compose music through programming, live data collection, and human interaction uplifts the artistic sensibility of the entire piece.

https://www.creativeapplications.net/sound/weather-thingy-real-time-climate-sound-controller/

Looking Outwards: 04

ORGANVM PERCEPTVS

By Happy Valley Band

“Organvm Perceptvs” is an album by the Happy Valley Band consisting of covers of pop classics. The songs are written by feeding the originals into custom-built machine learning software that spits them back out in an altered, re-written state. The band then plays the music that the software has regurgitated. I think this is a cool example of collaboration between AI and human musicians. The AI doesn’t directly synthesize the sounds; rather, there is a level of human interpretation in playing the music that the AI writes. I also think the output is wonderfully uncanny. The artificial intelligence remixes the music in a way that I doubt any person would think of, giving a complex and novel result.

https://happyvalleyband.bandcamp.com/