Looking Outwards-04

The inspirational project that I found is called Forms – Screen Ensemble, created in 2020. I found this artwork very interesting: graphic scores generated by random probability are transformed into sound through a unique algorithm, creating the music played in the video. From the description, what stood out to me was that the audience can quite literally hear what they see, simultaneously. Briefly, the algorithm works across three screens, each contributing to the sound by covering rhythm, texture, and harmony respectively. The graphics the screens display are endless and never repeat.

Link
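
To get a feel for how a probability-driven graphic score might work, here is a minimal p5.js sketch of my own (an illustration, not Playmodes' actual generator): each column flips a coin to decide whether a "note" block appears, and its height is random, so no two runs of the score look the same.

function setup() {
    createCanvas(400, 100);
    noLoop();
}

function draw() {
    background(255);
    fill(0);
    noStroke();
    for (var x = 0; x < width; x += 10) {
        if (random() < 0.5) {         // coin flip: does this beat sound?
            var h = random(10, height); // random block height, e.g. loudness
            rect(x, height - h, 8, h);
        }
    }
}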

LO: Sound Art

Kaeser plays with real-time weather events to compose a piece of music.

Adrien Kaeser’s “Weather Thingy” is a sound and computational art piece that uses real-time climate data to create custom-built sound effects and control musical instruments. Sensors attached to a weather station on a tripod measure wind speed, wind direction, and rain, while a brightness sensor assesses ambient light levels. The controller then transforms these weather events into MIDI inputs, which adjust settings like chorus, pan, decay, and LFO through a custom instrument interface. I found it really intriguing that the device interprets the differing stimuli of the weather and connects them with the user’s input, producing a variety of audio effects on the keyboard through a pair of knobs and four audio buttons. I really admire how weather can be rendered into computational data and auditory output, and how Kaeser simply collaborated with the environment to compose an interesting musical piece. Kaeser’s artistic sensibility surfaces in how he first came up with the idea of a musical weather contraption and then actually played music in such a novel way alongside nature.

Reference: https://medium.com/@JeremySCook/hear-the-environment-with-this-weather-thingy-c0d0502fca0e
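
As a rough sketch of the weather-to-MIDI idea, the hypothetical function below maps a wind-speed reading onto the 0–127 range that MIDI control-change messages use. The sensor range is my assumption, not Kaeser's actual firmware.

function windToMidiCC(windSpeedKmh) {
    // MIDI control-change values run 0-127; assume the sensor tops out at 100 km/h
    var clamped = Math.min(Math.max(windSpeedKmh, 0), 100);
    return Math.round((clamped / 100) * 127);
}

console.log(windToMidiCC(25)); // -> 32, e.g. a subtle chorus depth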

LO: Sound Art

While looking through many different computational sound projects, the one that stood out to me the most was Algorithmic Drive – Spectacular car(au)tonomy. The project uses a dash cam on a car and synchronizes its footage with information such as location, altitude, orientation, speed, engine RPM, stability, and the temperature of various sensors. I admire this project the most because it turns the everyday noises of a car ride into a beautiful sound-based artwork. The system has a custom-built interface with illuminated rotary encoders and a monitor installed on a road case that contains a subwoofer. The artist’s artistic sensibilities manifest in the final form’s ability to translate sound into digital components that people can view.

Algorithmic Drive – Spectacular car(au)tonomy, François Quévillon
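
As a small sketch of the synchronization idea, with invented sample data: given telemetry samples with timestamps, pick the one nearest the current video time so the data and the image stay in step.

var telemetry = [
    { t: 0.0, rpm: 900,  speedKmh: 0 },
    { t: 1.0, rpm: 1800, speedKmh: 12 },
    { t: 2.0, rpm: 2600, speedKmh: 31 }
];

function sampleAt(videoSeconds) {
    var nearest = telemetry[0];
    for (var i = 0; i < telemetry.length; i++) {
        var s = telemetry[i];
        if (Math.abs(s.t - videoSeconds) < Math.abs(nearest.t - videoSeconds)) {
            nearest = s;
        }
    }
    return nearest;
}

console.log(sampleAt(1.4)); // -> the sample at t = 1.0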

Looking Outwards 04: Sound Art

Reading this week’s topic about automated emulations of musical style was very interesting. Considering the evolution of music, how far it has come, and applying that to this week’s topics is thought-provoking. After reading more articles about this topic, I learned about simple harmonic motion and how the sounds we hear are actually fluctuations in air pressure: tiny variations from normal atmospheric pressure, caused by vibrations. This reminded me a lot of when I took AP Physics in high school and went over the sound waves unit. The continuous rise and fall in pressure is what creates waves of sound. Contemporary technology has evolved dramatically, and digital music technology is now possible. Digital music is a method of representing sound in numerical values and is often synonymous with MP3 music, since MP3 is a common file format in which digital music exists.
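
To make "sound as numerical values" concrete, the sketch below samples a 440 Hz sine wave at a low, illustrative rate of 8 kHz (CD audio uses 44.1 kHz) and prints the first few amplitude values; this is the raw representation that formats like MP3 then compress.

var sampleRate = 8000; // samples per second (CD audio uses 44100)
var freq = 440;        // A4, in Hz
var samples = [];
for (var n = 0; n < 8; n++) {
    // each sample is the instantaneous pressure deviation, scaled to [-1, 1]
    samples.push(Math.sin(2 * Math.PI * freq * n / sampleRate));
}
console.log(samples.map(function (s) { return s.toFixed(3); }));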

LO 4: Sound Art

The Cotodama lyric speaker was produced to act as a canvas that would digitally display the lyrics to any song playing through its speakers. Although this doesn’t exactly fall under the classification of computationally generated music, I feel like this is a product that works under very similar mechanics.

I like the product itself for its elegant look: though it appears to come as two separate pieces, it is actually one solid unit designed to look like vinyl record jackets leaned up against the wall. I also think it’s a great example of sound data being used in coordination with databases to create visuals. When connected to an audio source, its display panel sources lyrics from databases and displays them in sync with the song. It is also programmed to analyze a song’s mood to pick corresponding fonts and animations. In cases where no lyrics can be found for a song, the speaker instead displays AI-generated animations that move in sync with the music.

https://lyric-speaker.com/en/history/
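
As an illustration of the syncing mechanic (not Cotodama's actual code), the sketch below stores lyrics with timestamps, the way karaoke-style LRC files do, and returns whichever line the playback position has most recently passed.

var lyrics = [
    { time: 0.0, text: "First line" },
    { time: 4.2, text: "Second line" },
    { time: 9.8, text: "Third line" }
];

function currentLine(playbackSeconds) {
    var line = "";
    for (var i = 0; i < lyrics.length; i++) {
        if (lyrics[i].time <= playbackSeconds) {
            line = lyrics[i].text; // this line has already started
        } else {
            break; // later lines have not been reached yet
        }
    }
    return line;
}

console.log(currentLine(5.0)); // -> "Second line"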

LO-04

FORMS – STRING QUARTET


I chose to discuss the project FORMS – String Quartet. Created by Playmodes, ‘FORMS – String Quartet’ is a live multimedia performance for string quartet, electronic music, and panoramic visuals, in the field of visual sonification. The project originates from a real-time visual music score generator created by Playmodes, designed with a set of graphic-generation rules driven by randomness and probability. The resulting graphics are then transformed into sound using spectral synthesis algorithms. I also find this project personally interesting because my sister played in many string quartets as a violinist.

An example of a string quartet
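
As a rough sketch of the spectral-synthesis idea using the p5.sound library: treat each row of the current image column as one partial of a harmonic series, and let its pixel brightness set that partial's volume. The oscillator bank, tuning, and scan speed here are my assumptions, and browsers require a user gesture before audio will actually start.

var oscillators = [];
var NUM_PARTIALS = 16;

function setup() {
    createCanvas(200, 100);
    pixelDensity(1); // keep the pixels array at one entry set per canvas pixel
    background(0);
    stroke(255);
    // draw a simple random "score": white strokes on black
    for (var i = 0; i < 60; i++) {
        line(random(width), random(height), random(width), random(height));
    }
    // one sine oscillator per partial, tuned to a harmonic series on 110 Hz
    for (var j = 0; j < NUM_PARTIALS; j++) {
        var osc = new p5.Oscillator('sine');
        osc.freq(110 * (j + 1));
        osc.amp(0);
        osc.start();
        oscillators.push(osc);
    }
}

function draw() {
    var x = frameCount % width; // a playhead scanning left to right
    loadPixels();
    for (var i = 0; i < NUM_PARTIALS; i++) {
        var y = Math.floor(i * height / NUM_PARTIALS);
        var bright = pixels[4 * (y * width + x)] / 255; // red channel as brightness
        oscillators[i].amp(bright * 0.05, 0.05);        // quiet, smoothed changes
    }
}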

Project 4: String Art

wpf-string-art
//Patrick Fisher Section B

// per-line step increments for each string's two endpoints
var dx1;
var dy1;
var dx2;
var dy2;
var numLines = 50; // number of strings in each pattern

function setup() {
    createCanvas(400, 300);
    background(50);
    fill(10);
    rectMode(CENTER);
    rect(200,150,250,250); // dark inner square that frames the string art
    // step sizes: the distance each endpoint travels divided by the line count
    dx1 = (275-25)/numLines;
    dy1 = (275-25)/numLines;
    dx2 = (25-275)/numLines;
    dy2 = (25-275)/numLines;
}

function draw() {
    var x1 = 75;
    var y1 = 25;
    var x2 = 275;
    var y2 = 275;
    stroke(255,0,0,200);
    for (var i = 0; i <= numLines; i += 1) { // creates the X design that goes from upper left to bottom right
        line(x1, y1, x2, y2);
        x1 += dx1;
        y1 += dy1;
        x2 += dx2;
        y2 += dy2;
    }

    x1 = 375;
    y1 = 25;
    x2 = 75;
    y2 = 275;
    for (var i = 0; i <= numLines; i += 1) { // creates the X design that goes from lower left to upper right
        line(x1, y1, x2, y2);
        x1 += -dx1;
        y1 += dy1;
        x2 += -dx2;
        y2 += dy2;
    }

    // recompute the step sizes for the blue patterns
    dx1 = (325-25)/numLines;
    dy1 = (75-25)/numLines;
    dx2 = (25-275)/numLines;
    dy2 = (25-0)/numLines;

    x1 = 75;
    y1 = 25;
    x2 = 325;
    y2 = 75;
    stroke(0,0,255,200);
    for (var i = 0; i <= numLines; i += 1) { // creates the first half of the top blue pattern
        line(x1, y1, x2, y2);
        x1 += dx1;
        y1 += dy1;
        x2 += dx2;
        y2 += dy2;
    }

    x1 = 325;
    y1 = 75;
    x2 = 75;
    y2 = 25;
    for (var i = 0; i <= numLines; i += 1) { // creates the second half of the top blue pattern
        line(x1, y1, x2, y2);
        x1 += -dx1;
        y1 += -dy1;
        x2 += -dx2;
        y2 += -dy2;
    }

    x1 = 325;
    y1 = 175;
    x2 = 75;
    y2 = 250;
    stroke(0,0,255,200);
    for (var i = 0; i <= numLines; i += 1) { // creates the bottom blue pattern
        line(x1, y1, x2, y2);
        x1 += -dx1;
        y1 += dy1;
        x2 += -dx2;
        y2 += dy2;
    }

    x1 = 75;
    y1 = 250;
    x2 = 325;
    y2 = 175;
    stroke(0,0,255,200);
    for (var i = 0; i <= numLines; i += 1) { // creates the top half of the bottom blue pattern
        line(x1, y1, x2, y2);
        x1 += dx1;
        y1 += -dy1;
        x2 += dx2;
        y2 += -dy2;
    }
    noLoop(); // the image is static, so draw it only once
}

I really struggled with this project, both creatively and technically. I had to take the sample code and fiddle with it, mostly at random, to create anything I was remotely satisfied with.

Looking Outward 4

The video “Sound Machines” on the YouTube channel MediaArtTube demonstrates a machine that uses light recognition to play sound. The machine has three sensors and takes in visual information from vinyl-record-sized discs engraved with patterns. The three discs have different types of patterns which the sensors read. The first reads color: the disc has red, yellow, and blue lines on a white background, and moving the sensor up or down the radius changes the sequence of colors, therefore changing the sound. The second disc reads either black or white, with three patterns made from a hollowed-out black disc; when the sensor detects black, it plays a noise. The final disc is split into thirds in which white becomes progressively more grey. The sensor reads the gray scale and increases the volume the grayer the image gets.
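
The third disc's behavior is easy to approximate in p5.js with the p5.sound library: spin a grayscale disc image under a fixed "sensor" point and use the sampled gray level as volume. The image asset and sensor position below are my assumptions, not the machine's actual design.

var disc, osc;

function preload() {
    disc = loadImage('disc.png'); // assumed asset: a grayscale disc image
}

function setup() {
    createCanvas(200, 200);
    pixelDensity(1);
    osc = new p5.Oscillator('sine');
    osc.freq(330);
    osc.amp(0);
    osc.start();
}

function draw() {
    background(255);
    // spin the disc around the canvas center, under a fixed "sensor" point
    push();
    translate(width / 2, height / 2);
    rotate(frameCount * 0.02);
    imageMode(CENTER);
    image(disc, 0, 0);
    pop();

    var c = get(width / 2, 20);      // pixel under the sensor
    var grayness = 1 - red(c) / 255; // for a grayscale image, red == gray level
    osc.amp(grayness * 0.2, 0.05);   // grayer reads louder
}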

LO-4

Emilio Bustamante

I have some experience with using sound and robots to accelerate the construction process for a sculpture. During a robotics class, we recorded and coded sounds in Teachable Machine so the robot could recognize them and perform specific actions based on the sound. We used p5.js to communicate to the robot its action location, speed, movement type, and words. After the robot heard the correct word, it would pick up a stick and cut it to the specific length specified by that word. The sculpture was a mixture of multiple sticks that created a parametric form made with Voronoi in Grasshopper. The whole process, from design to construction, was accelerated with algorithms.
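
A minimal sketch of that pipeline using ml5.js with p5.js, assuming a Teachable Machine audio model exported to a URL (placeholder below): the classifier listens continuously through the microphone, and each recognized word looks up a hypothetical cut length for the robot. The action table and lengths are illustrative, not the actual class project code.

var classifier;
// placeholder URL for a Teachable Machine audio model export
var modelURL = 'https://teachablemachine.withgoogle.com/models/XXXX/model.json';

// hypothetical table mapping command words to cut lengths, in mm
var cutLengths = { short: 100, medium: 250, long: 400 };

function setup() {
    noCanvas();
    classifier = ml5.soundClassifier(modelURL, function () {
        classifier.classify(gotResult); // listen continuously once loaded
    });
}

function gotResult(error, results) {
    if (error) {
        console.error(error);
        return;
    }
    var word = results[0].label; // highest-confidence label
    if (cutLengths[word] !== undefined) {
        // in the class project, this is where the command went to the robot
        console.log('cut stick to ' + cutLengths[word] + ' mm');
    }
}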