looking-outwards-04-ssharada-sound-art

Italian artist Loris Cecchini uses sound waves hitting water and creating ripples as the inspiration for his sculptural work – he focuses on taking something invisible and making it naked and visible, in its true form, to an unassuming viewer.

Each of Cecchini’s installation pieces freezes momentary natural forms on the wall, encased in polyester resin and white paint. The lack of motion and the monochromatic treatment leave each beautiful form open to interpretation, with an invitation to appreciate the patterns of the natural world.

And of course, none of this would have been possible without the correct use of technology, as Cecchini himself says:

“I’m very interested in the utopian dimension which bonds technology and nature. And I try to interpret a cultural landscape made of different “realities “, working on a diffused perception made of virtuality and consistent matter; in this sense I try to bridge. Naturally my work opens that to the wandering thoughts of the spectator.”

sijings-project04-stringArt

Sijings-StringArt

//Clair(sijing) Sun
//session C
//sijings@andrew.cmu.edu
//stringArt
var angle = 0;
function setup() {
    createCanvas(400, 300);
}

function draw() {
    
    //outer petal
    strokeWeight(1.5);
    background(247,234,186);


    //left top middle petal - 1st curve
    var curveinY = height/2+40;//local variable for determining the changes in location of lines
    var curveinX2 = width/2;

    for (var i5 = 0 ; i5 < 80 ; i5 ++) {
        stroke(250,112,154); 
        line(0, curveinY, curveinX2, 0);
        curveinY -= 10;//subtracting 10 shifts the start y position up every time a new line is created
        curveinX2 += 10;//adding 10 shifts the end x position right every time a new line is created
    }
    

    //left bottom petal - 2nd curve
    var curve1Y = height/2+50;
    var curve1X2 = width/2+50;
    var color31=251;//an initial value for color changing
    var color32=171;
    var color33=126;
    var colorchange3=0;//the variable that drives the color change

    for (var i = 0 ; i < 50 ; i ++) {
        colorchange3+=3;//the amount the RGB values shift for each new line
        stroke(color31,color32+colorchange3,color33+colorchange3); 
        line(0, curve1Y, curve1X2, height);
        curve1Y -= 10;//similar concept as the previous curve
        curve1X2 -= 10;
    }
    
    //petal at the top left, far back - 3rd curve
    var curve2X = 50;
    var curve2Y2 = height/2+50;
    var color21=250;//color for 3rd curve variables
    var color22=172;
    var color23=168;
    var colorchange2=0;//variable that helps change the color

    for (var i2 = 0 ; i2 < 50 ; i2 ++) {
        colorchange2+=4;//the amount the RGB value changes for each new line
        stroke(color21,color22+colorchange2,color23+colorchange2); 
        line(curve2X, 0, 0, curve2Y2);
        curve2Y2 -= 10;//ending y position changed
        curve2X += 10;//starting x position changed
    }
    
    //petal at the top right - 4th curve
    var curve3X = 50;
    var curve3Y2 = 50;
    var color1=255;
    var color2=185;
    var color3=205;
    var colorchange1=0;

    for (var i3 = 0 ; i3 < 50 ; i3 ++) {
        colorchange1+=2;
        stroke(color1,color2+colorchange1,color3); 
        line(curve3X, 0, width, curve3Y2);
        curve3X += 10;//starting x position changed
        curve3Y2 += 10;//ending y position changed
    }
    
    //petal at the bottom right - 5th curve
    var curve4Y = 50;
    var curve4X2 = width-50;
    var color11=255;
    var color12=65;
    var color13=101;
    var colorchange=0;

    for (var i4 = 0 ; i4 < 50 ; i4 ++) {
        colorchange+=5;//color gradient between lines 
        stroke(color11,color12+colorchange,color13); 
        line(width, curve4Y, curve4X2, height);
        curve4X2 -= 10;
        curve4Y += 10;
    }
    
    
    //top left petal, on the front - 6th curve
    var curvein2X = width/2;
    var curvein2Y2 = height/2;
    var color41=255;
    var color42=125;
    var color43=125;
    var colorchange4=0;

    for (var i6 = 0 ; i6 < 80 ; i6 ++) {
        colorchange4+=3;
        stroke(color41,color42+colorchange4,color43); 
        line(curvein2X, 0, 0, curvein2Y2);
        curvein2X -= 10;
        curvein2Y2 += 10;
    }

    push();//so the fill and stroke changes don't affect other shapes
    noStroke();
    fill(255,255,255);
    ellipse(width/2,height/2-20,135,135);
    pop();

    //middle spinning flower
    if (mouseX>110&&mouseX<270&&mouseY>90&&mouseY<200){
        //when mouse approaches the middle, circle appears
        for(var ii = 1; ii <=100; ii+=0.1) {//set to 100 so it will keep spinning
            angle+=30;//changing variable for spinning
            stroke(255, 114, 115);
            line((width/2-20)+computeX(3,angle), (height/2+20)+computeY(3,angle),
                width/2+computeX(70,angle+140), (height/2-20)+computeY(70,angle+140));
        //the inspiration came from deliverables
        }

    }
    

}

function computeX(r,angle) {//my own helper function to calculate the spinning circle
    return r*cos(angle);
}

function computeY(r,angle) {
    return r*sin(angle);
}




For this project, I had a really hard time figuring out what I wanted to do, because string art had always seemed too scientific for my taste. After seeing the provided examples, however, I started to understand its beauty. I first analysed Holly Hatfield’s blog post on “7th Grade String Art” to work out the algorithm. Once I understood how the curves were created, the project became much easier. My intention was to create a unique style of eye, where a spinning eyeball appears once your mouse is on the pupil. After finishing a first version I felt my algorithm was too simple, so I worked out how to make gradient curves (shown and explained in the comments). The whole process was fun and improved my understanding of the language.
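The core trick is easy to lose inside the full sketch above, so here is a minimal distillation (my own illustration, not part of the project): as one endpoint of a straight line steps along one edge of the canvas and the other endpoint steps along another, the family of straight lines traces a curved envelope.

//a minimal string-art sketch: straight lines whose endpoints
//march along two edges, tracing a curved envelope
function setup() {
    createCanvas(200, 200);
    background(255);
    stroke(0);
    for (var i = 0; i <= 20; i++) {
        //endpoint A climbs the left edge while endpoint B walks the bottom edge
        line(0, height - i*10, i*10, height);
    }
}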

adev_Looking Outwards 04_Sound Art

Intersection, 1993

Don Ritter

This art installation has a lot going on in it. It is essentially a group of eight speakers arranged to create four “lanes” in a completely dark room. The sound is of cars rushing past; when a visitor walks between the speakers, into one of these lanes, the cars react and come to a screeching halt. If the visitor stays stationary for longer than eight seconds, more cars pile up, then zip off once the visitor leaves.

I find this extremely thought-provoking. It uses sound to create the rush and fear of a car collision, an experience we imagine to be primarily physiological and visual, and it gets that feeling across affectively. The dark room where the interaction takes place feeds the fear while also making the experience far more visceral. The piece makes you and the people around you much more aware of each other, in spite of the darkness. The screech of a car marks a convergence of lives and people: in the moment of a sudden halt we recognise each other, and a very human feature of our inner nature comes out, this strange act of survival in one’s everyday life, an average red flag.

nahyunk1-Project04-String Art

sketch

function setup() {
    createCanvas(400, 300);
    background("lavender");
}
function draw() {
    for (var i = 1; i < 300; i += 6) { // first set of chrome lines;
      strokeWeight(2);
      stroke(mouseX-1, mouseX-5, mouseY-10);
      line(i, 0, i*10, 500);
    }
    for (var y = 10; y < 300; y += 10) { //the yellow lines;
      stroke(255,250,205);
      strokeWeight(1.2);
      line(-i, y/400 + 30, 400, y*2); //i still holds its final value (301) from the loop above, so the lines start off-canvas
    }
    for (i =0; i < 40; i++) { // white beams and orange beams;
      stroke("white");
      strokeWeight(0.2);
      line(1, i*1.5, i+200, 400);
      stroke("orange");
      line(i+400, 300, i*1.5, i*3.5);
    }
    for (i = 10; i <300; i += 3) { // coral pink lines at the bottom;
      var y1 = i*(30/400);
      stroke("pink");
      strokeWeight(0.5);
      line(i, y1*9, 90, 400);
    }
    for (var h = 0; h < 100; h += 8) { // floating blue lines;
      stroke("skyblue");
      var y1 = 170;
      var y2 = 23;
      line(h+20, y1, h*5 ,y2);
      y2 += y2; //has no effect: y1 and y2 are re-initialized on every pass
    }
}

I basically played around with the code to get a better understanding of how each equation affects the shape and size of each set of line drawings. I also used color to distinguish the sets of strings from one another.

jamieh-Looking-Outwards-04

Volume, designed by Softlab, consists of a grid of 100 mirror panels that respond to and redirect light and sound, and that interact with human movement through depth cameras. The mirror panels rotate to face the nearest person. Based on the volume of the ambient sound in the space, LEDs on the sides of the panels light up and the panels move vertically. The interface of the project was coded and built in Processing. Cameras placed overhead track people’s positions, and the data is then managed with OCP and Arduino microcontrollers.

What I like about this project is how they took sound, which is intangible, and gave it spatial qualities through the rotation toward the person, as well as quantitative information about its volume through the LED lights. Light and sound, which don’t seem to move, now visibly circulate through space. I like that it reacts to human movement, but their description of the interface, which uses a weighted average to find the person closest to the cube of mirrors, makes me wonder what happens when several people stand close to the installation at once.
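For a sense of what facing the nearest person might involve, here is a hypothetical pair of helper functions for a p5.js sketch; the names and tracking data are my own stand-ins, and it uses a plain nearest-person search where the write-up mentions a weighted average (which would blend several tracked positions instead of picking one):

//stand-in for positions reported by the overhead depth cameras
var people = [{x: 120, y: 80}, {x: 300, y: 240}];

function nearestPerson(px, py) {
    var best = null;
    var bestD = Infinity;
    for (var i = 0; i < people.length; i++) {
        var d = dist(px, py, people[i].x, people[i].y);
        if (d < bestD) {
            bestD = d;
            best = people[i];
        }
    }
    return best;
}

function panelAngle(panelX, panelY) {
    //angle from a panel to the closest person; a real installation
    //would smooth this over time before driving the motors
    var p = nearestPerson(panelX, panelY);
    return atan2(p.y - panelY, p.x - panelX);
}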

yunzhous-Project-02-Variable-Face

sketch

var faceD = 150;
var cheekW = 35;
var cheekH = 20;
var LeftEarW = 110;
var LeftEarH = 150;
var RightEarW = 70;
var RightEarH = 100;
var R = 252;
var G = 200;
var B = 200;
var BubbleD = 40; // diameter of bubble
var x = 425; //point position of nose
var y = 130; //point position of nose
var eyeW = 10;
var eyeH = 10;
var Px = 240; //point position of line

function setup() {
    createCanvas(640, 480);
    angleMode(DEGREES);
}

function mousePressed() {
    faceD = random(150, 170);
    LeftEarW = random(100, 180);
    LeftEarH = random(145, 185);
    RightEarW = random(60, 140);
    RightEarH = random(95, 135);
    R = random(200, 255);
    G = random(100, 220);
    B = random(100, 220);
    BubbleD = random(20, 60);
    x = random(415, 475);
    y = random(110, 170);
    Px = random(240, 280);
    eyeW = random(10, 20);
    eyeH = random(10, 20);

}

function draw() {
    background(255, 236, 236);
    
    //body
    noStroke();
    fill(210);
    beginShape();
    curveVertex(200,  250);
    curveVertex(220,  260);
    curveVertex(175,  330);
    curveVertex(165, 400);
    curveVertex(300, 400);
    curveVertex(330, 240);
    curveVertex(350, 240);
    endShape();

    //body division
    stroke(50, 32, 32);
    strokeWeight(2);
    curve(240, 300, 240, 350, 290, 330, 300, 330);
    line(260, 345, Px, 400);

    //right ear
    push();
    rotate(10);
    noStroke();
    fill(210);
    ellipse(270, 120, RightEarW, RightEarH);
    fill(252, 225, 225);
    ellipse(270, 120, RightEarW/1.5, RightEarH/1.5); //inner ear
    pop();

    //Left ear
    noStroke();
    fill(210);
    ellipse(220, 210, LeftEarW, LeftEarH);
    fill(252, 225, 225);
    ellipse(220, 210, LeftEarW/1.5, LeftEarH/1.5); //inner ear

    //face
    noStroke();
    fill(210);
    ellipse(300, 220, faceD, faceD);

    //eye
    fill(50, 32, 32);
    ellipse(345, 220, eyeW, eyeH);

    //cheek
    fill(R - 20, G - 20, B - 50);
    ellipse(330, 250, cheekW, cheekH);
   

    //nose
    fill(210);
    beginShape();
    curveVertex(350, 200);
    curveVertex(355, 190);
    curveVertex(375, 180);
    curveVertex(x, y);
    curveVertex(x - 5, y +15);
    curveVertex(x - 5, y + 45);
    curveVertex(400, 195);
    curveVertex(390, 220);
    curveVertex(365, 255);
    curveVertex(365, 260);
    endShape();

    //bubble
    fill(R, G, B);
    ellipse(x + 25, y + 20, BubbleD, BubbleD); //large bubble
    fill(255);
    ellipse(x + 25, y + 20, BubbleD*.7, BubbleD*.7); //white part
    fill(R, G, B);
    ellipse(x + 22, y + 17, BubbleD*.7, BubbleD*.7); //to cut white part
}

I wanted to make an elephant whose ear size, nose length, and cheek color change. In this project I tried out the curveVertex() function. It was very confusing at first, but I gradually learned how it works.
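The confusing part of curveVertex() is that the first and last vertices act only as control points and are never drawn, so they are typically repeated to pin down the curve’s endpoints. A minimal sketch of that behavior (my own illustration, separate from the project above):

function setup() {
    createCanvas(200, 200);
    background(255);
    noFill();
    stroke(0);
    beginShape();
    curveVertex(40, 160); //control point only, not drawn
    curveVertex(40, 160); //the curve actually starts here
    curveVertex(100, 40);
    curveVertex(160, 160); //the curve ends here
    curveVertex(160, 160); //control point only, not drawn
    endShape();
}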

LookingOutwards-04-sjahania

Fraction is the alias of Eric Raynaud, a French artist and composer who focuses on combining audio and visual art to immerse audiences in a different kind of performance. One of his projects, called Entropia, is a geodesic sphere of lights that reacts to sound, accompanied by a pixel-mapped display of lights in an IMAX dome. Both displays change based on the sounds Fraction plays from within the sphere, using real-time programming to connect the sound to the display and project it to the audience.


This is a video of the kinds of performances Fraction gives.

I found this particularly interesting because it uses the reactive lights and the pixel-mapped display to give the audience an overwhelming sensation. I also did a project about geodesic domes in high school, which is probably why this one stood out to me the most. I still cannot fathom how one would code displays like the ones in the video from sound alone. I wonder how many different displays he can make with the sounds he has programmed.
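At its simplest, this kind of sound-reactive visual boils down to sampling an audio level every frame and mapping it onto a visual parameter. A toy p5.js sketch of that principle (my own illustration using the p5.sound addon; Entropia’s real-time pipeline is vastly more elaborate):

//reads the microphone level each frame and maps it to a circle's size
//(browsers may require a user gesture before audio input starts)
var mic;
var amp;

function setup() {
    createCanvas(400, 400);
    mic = new p5.AudioIn();
    mic.start();
    amp = new p5.Amplitude();
    amp.setInput(mic);
}

function draw() {
    background(0);
    var level = amp.getLevel(); //roughly 0.0 to 1.0
    fill(255);
    ellipse(width/2, height/2, 50 + level*300, 50 + level*300);
}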

website: http://www.fractionmusic.com/project/entropia-av/

mjeong1-Looking Outwards-04-SectionA

Nature and Humans Harmonize in a Symphony of Architectural Sound

Wunderbugs by OFL Architecture in Rome, Italy

Wunderbugs is an installation that explores the connection between humans and nature. The pavilion has six plastic spheres inside a circular wall; each sphere contains a small terrarium and insects. The pavilion is equipped with a series of Arduino sensors that detect the motion of the insects as well as weather data, and it is programmed to record changes and produce a soundtrack from that data. I find the way OFL Architecture approaches nature interesting because they focused on its acoustic quality, and because the sound is interactive not only with humans but also with the insects, the nature itself. I admire how closely they looked at the motion of each insect, and how they produce a tactile experience for humans not only through the visual installation but also through sound. It is a valuable lesson that nature is not just a beautiful picture frame for people to look at; it is alive and constantly produces small changes, even when they are too subtle to notice.
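Reduced to its simplest form, the data-to-soundtrack idea is sonification: mapping a changing reading onto a sound parameter. A toy p5.js sketch of the principle (my own construction with invented stand-in data, using the p5.sound addon; the actual pavilion reads Arduino sensors):

//maps a changing "sensor" reading to an oscillator's pitch
var osc;

function setup() {
    createCanvas(200, 200);
    osc = new p5.Oscillator('sine');
    osc.start();
    osc.amp(0.3);
}

function draw() {
    //stand-in for an insect-motion reading between 0 and 1;
    //the real installation would read a sensor instead
    var reading = noise(frameCount * 0.01);
    osc.freq(map(reading, 0, 1, 200, 800));
}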

I think it would be even more interesting to see visual changes in the pavilion based on those data. Perhaps the facade of the pavilion could respond to the insect-motion data, or the visual quality of the pavilion could be tied to the sound being generated.

Link to archdaily

Sheenu-Looking Outwards-04


Seaquence is a game on the App Store that lets you spawn creatures that produce music by themselves. Each creature’s shape determines how its sound is made; adding notes, building musical sequences, or even changing the waveform of a sound alters how the creature looks. A variety of creatures can form a musical ensemble with just the tap of a button.

The game was developed by a company called Okaynokay and evolved from a Flash game made years ago. According to the creators, it was designed around a custom-programmed physics engine that lets the creatures move on their own based on their notes, giving the player a game-like experience.

The game has a dedicated community of both casual users and professional musicians. So far, over 300,000 songs have been made in Seaquence.

I really admire that people can now make music with an easy-to-use, fun game. Making music is a long and hard process, but with this game ordinary people can create their own compositions and draw inspiration from other creators. Making a long, complicated process easier and more accessible to the public is what truly makes this game special in my opinion.

agusman-LookingOutwards-04

The Classifier on Creative Applications

The Classifier

Creators: Benedict Hubener, Stephanie Lee, Kelvyn Marte, Andreas Refsgaard and Gene Kogan

The Classifier is an AI paired with a table interface that selects music based on the beverages people are consuming around it at the time. The system is composed of a microphone that takes in the ambient noise of people drinking and an AI that compares the characteristic sounds to pre-trained models. The current limitation is that the models are constrained to three categories: hot beverages, wine, or beer. The approach of characterizing beverages by their distinctive noises is really unique, yet I have to wonder what other sensors or methods they could’ve used to collect more… telling… kinds of information, such as heat, color, or carbonation, which could lead to expanding the beverage categories beyond the current three.
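As a reduced illustration of what comparing characteristic sounds to pre-trained models can mean (entirely my own construction, not the artists’ code), here is a nearest-centroid check over made-up audio feature vectors:

//score an incoming feature vector against a stored centroid per class
//and pick the closest; all feature values are invented placeholders
var models = {
    'hot beverage': [0.2, 0.7, 0.1],
    'wine': [0.5, 0.2, 0.3],
    'beer': [0.8, 0.4, 0.6]
};

function classify(features) {
    var bestLabel = null;
    var bestDist = Infinity;
    for (var label in models) {
        var c = models[label];
        var d = 0;
        for (var i = 0; i < c.length; i++) {
            d += (features[i] - c[i]) * (features[i] - c[i]);
        }
        if (d < bestDist) {
            bestDist = d;
            bestLabel = label;
        }
    }
    return bestLabel;
}

//e.g. classify([0.75, 0.45, 0.55]) returns 'beer'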

I’ve also always been fascinated by generating contextual music. In my time at CMU, I’ve come across several student projects that sought to use visual and other sensory input as the lens through which music is created. A friend of mine tried to recreate the soundtrack of a classic episode of “Tom & Jerry” purely from a series of frames. It seems like a fascinating and incredibly enigmatic field of study that I would love to pursue.