Min Ji Kim Kim – Looking Outwards – 10

Overview of the Prélude in ACGT project by Pierry Jaquillard.

Prélude in ACGT, created by Pierry Jaquillard at the Media and Interaction Design Unit at ECAL, uses Jaquillard’s own chromosomal DNA data and transforms it to generate sound. The project, built with JavaScript, MIDI, and Ableton Live, consists of five interfaces. Two of them let the user control features such as tempo and musical arrangement, while the other three visualize the sound, the algorithm type, and the DNA itself. The user can also export the MIDI file to record and generate a musical score using music notation software.
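The post doesn’t document Jaquillard’s actual base-to-note mapping, but the core idea can be sketched in a few lines of JavaScript. The pitch table below is my own invented assumption, purely for illustration:

```javascript
// Hypothetical DNA-to-MIDI mapping, NOT Jaquillard's actual algorithm:
// assign each base an arbitrary MIDI pitch and turn a sequence into a melody.
function dnaToMidiNotes(sequence) {
    var pitchForBase = { A: 57, C: 60, G: 55, T: 62 }; // invented pitch choices
    return sequence
        .toUpperCase()
        .split("")
        .filter(function (base) { return pitchForBase[base] !== undefined; })
        .map(function (base) { return pitchForBase[base]; });
}
```

A sequencer (or Ableton Live, receiving MIDI) could then play the resulting pitch list back at the user-chosen tempo.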

User interface screens of the different features that can be controlled to manipulate the sound.

I really admire this project because it seamlessly combines the field of human biology with computer science. The idea of “coding” DNA is represented quite literally in this project. Furthermore, I believe that this project has endless possibilities. No two people’s DNA is the same, which means that, using this software, we could create truly unique pieces of music.

552 page musical score representing 0.2% of Jaquillard’s DNA.

Steven Fei – Project – 10


sketch

For this project, I added four different sounds to my Project 3 sketch. When the mouse is clicked, an explosion sound marks the start of the program. There are three variables in my sketch: the size, which directly controls the radius of the hexagons; the color, which changes when the mouse moves; and the angle, which controls the positions of the hexagons on the canvas. The idea is to give these three variables different sound effects that signal when they have reached certain bounds. For the size, a “slutty wave” sound plays to indicate that the first hexagon in the sketch has reached its upper size limit. For the color, a ghostly sound plays every time the color transitions between blue and pink. Finally, the angular positions of the hexagons are marked by a “boon” sound every time the hexagons finish 1/6 of a rotation. All of these changes are triggered by moving and clicking the mouse.

//Steven Fei
//Assignment 10
//Section - A
//zfei@andrew.cmu.edu
function setup() {
    createCanvas(600, 480);
    useSound();
    
}

var size = 8; //hexagon size that can change according to the mouse movement
let color = 0; //hexagon color that can change according to the mouse movement
var colorDir = 2; //the degree of change for the color change
let angle = 0; //the initial rotation angle for the hexagon
var dir = 1; // the growing direction of the hexagon, either positive or negative
var speed = 2; //the growing speed of the hexagon
var clickSoundSciFi;
var ghost;
var sizeShrink;
var rotatingPeriod;
function preload(){
    clickSoundSciFi = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/490266__anomaex__sci-fi-explosion-2.wav");
    ghost = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/490515__staudio__ghostguardian-attack-01.wav");
    rotatingPeriod = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/490316__nicknamelarry__cartoon-space-sfx.wav");
    sizeShrink = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/489892__tkky__slutty-808.wav");
}
function soundSetup(){
    clickSoundSciFi.setVolume(0.3);
    ghost.setVolume(0.3);
    sizeShrink.setVolume(0.4);
    rotatingPeriod.setVolume(0.5);
}
function mousePressed(){
    clickSoundSciFi.play();//an explosion sound when mouse is pressed
}
function mouseMoved(){
    color = color +colorDir;
    if (color<0){
        colorDir = 2;
    } else if (color>255){
        ghost.play(); //a ghost sound plays when the hexagon color reaches the pink end of its range
        colorDir = -2;
    }
    angle +=0.5;
    if(angle % 60 == 0){
        rotatingPeriod.play();// a "boon" sound plays when the hexagons finish 1/6 of a rotation
    }
    size += dir * speed;
    if(size<0){
        dir = 1;
        size = 0;
    }else if (size>=60){
        dir = -1;
        size = 60;
        sizeShrink.play();//a sound is made to imply the hexagons are reaching the maximum sizes
    }
}

var diffx = 0;
var diffy = 0;
var circlex = 300;
var circley = 300;

function draw() {
    background(0);
//    locate the mouse position
    diffx = mouseX - circlex;
    diffy = mouseY - circley;
    circlex = circlex + 0.1*diffx;
    circley = circley + 0.1*diffy;
    fill("white");
    circle(circlex,circley,20);
    
    fill(color,37,213);
    var x = max(min(mouseX, 300), 5); // starting offset of the hexagons: when the mouse is far to the left of the canvas the hexagons shrink together, and when it is far to the right they move away from each other
    translate(300,240); //move to the center of the canvas
//    draw the six nested hexagons; each ring is scaled up and rotated 60 degrees from the previous one
    rotate(radians(angle)); // animate the whole cluster with the accumulated angle
    var scales = [1, 1.3, 1.5, 1.7, 1.9, 2.1]; // scale factor (and x-offset) for each ring
    for (var i = 0; i < scales.length; i++) {
        var s = scales[i];
        var off = (i === 0) ? 0 : s; // the first ring has no extra offset
        if (i > 0) {
            rotate(radians(60));
        }
        beginShape();
        vertex(x / 2 + off, 0);
        vertex(x / 2 + off + s * size * cos(radians(60)), -s * size * sin(radians(60)));
        vertex(x / 2 + off + s * size + s * size * cos(radians(60)), -s * size * sin(radians(60)));
        vertex(x / 2 + off + s * size + 2 * s * size * cos(radians(60)), 0);
        vertex(x / 2 + off + s * size + s * size * cos(radians(60)), s * size * sin(radians(60)));
        vertex(x / 2 + off + s * size * cos(radians(60)), s * size * sin(radians(60)));
        endShape();
    }
    
    
}

Min Ji Kim Kim – Project 10 – Sonic-Sketch


sketch

I got this week’s project inspiration from the Facebook emojis and decided to animate them with sound. You can click on each emoji and it will play the corresponding mood’s sound. The hardest part of this project was figuring out how to run a local server and upload everything to WordPress, but overall, it was really fun!

/*
Min Ji Kim Kim
Section A
mkimkim@andrew.cmu.edu
Project-10
*/

var laugh;
var wow;
var crying;
var angry;

function preload() { //load sound files
    laugh = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/laugh.wav");
    wow = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/wow.wav");
    crying = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/crying.wav");
    angry = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/angry.wav");
}

function setup() {
    createCanvas(480, 300);
    noStroke();
    //create different background colors
    fill("#184293"); //laughing emoji
    rect(0, 0, width / 4, height);
    fill("#05B5C3"); //wow emoji
    rect(width / 4, 0, width / 4, height);
    fill('#BC2D15'); //angry emoji
    rect(width / 2, 0, width / 4, height);
    fill(0); //sad emoji
    rect(width * 3 / 4, 0, width / 4, height);
}

function draw() {
    noStroke();
    //create 4 emoji heads
    for (var i = 0; i < 4; i++) {
        fill("#FBD771");
        circle(i * width / 4 + 60, height / 2, 90);
    }
    
    //laughing emoji
    //eyes
    stroke(45);
    line(35, 130, 50, 135); //left
    line(35, 140, 50, 135);
    line(70, 135, 85, 130); //right
    line(70, 135, 85, 140);
    //mouth
    noStroke();
    fill(45);
    arc(60, 155, 55, 50, 0, PI);
    fill('#F35269');
    ellipse(60,170,38,20);
    
    //wow emoji
    //eyes & mouth
    fill(45);
    ellipse(160, 140, 13, 20);
    ellipse(200, 140, 13, 20);
    ellipse(180, 170, 25, 35);
    //eyebrows
    noFill();
    stroke(45);
    strokeWeight(3);
    curve(130, 180, 152, 125, 166, 120, 140, 120);
    curve(170, 140, 193, 120, 207, 125, 200, 150);

    //angry emoji
    //eyebrows
    stroke(45);
    strokeWeight(4);
    line(270, 150, 290, 155);
    line(310, 155, 330, 150);
    //eyes
    fill(45);
    circle(283, 157, 5);
    circle(318, 157, 5);
    //mouth
    ellipse(300,170,20,3);

    //crying emoji
    //eyes
    ellipse(400, 150, 10, 12);
    ellipse(440, 150, 10, 12);
    //eyebrows
    noFill();
    stroke(45);
    strokeWeight(3);
    curve(410, 130, 392, 140, 405, 135, 450, 160);
    curve(410, 150, 435, 135, 448, 140, 450, 165);
    //mouth
    arc(420, 175, 20, 15, PI, TWO_PI);
    noStroke();
    //tear
    fill("#678ad6");
    circle(445, 185, 15);
    triangle(438, 182, 445, 165, 452, 182);
}

function mousePressed() {
    if (mouseX < width / 4) { //play laughing sound
        laugh.play();
    }
    if (mouseX > width / 4 && mouseX < width / 2) { //play wow sound
        wow.play();
    }
    if (mouseX > width / 2 && mouseX < width * 3 / 4) { //play angry sound
        angry.play();
    }
    if (mouseX > width * 3 / 4) { //play crying sound
        crying.play();
    }
}

Hyejo Seo – Looking Outwards – 10

Ge Wang’s Ted Talk on computer music

Ge Wang is a professor in the Stanford Department of Music and a co-founder of Smule. At Stanford, he created different instruments for the Stanford Laptop Orchestra using the programming language ChucK and the Gametrak, a device that tracks your hand gestures and was first commercialized for golf video games. After the instruments are coded, the orchestra members play them using this device. It not only produces different sounds depending on how far you have pulled the tether or where your hand is positioned (left or right), but it also promotes interaction, just like traditional instruments.

As a co-founder of Smule, Ge Wang created apps that function as portable instruments. In his TED talk, he demonstrates playing an ocarina with one such app. As he blows into the microphone of his phone and pushes different buttons on the screen, the app starts to make sound. The app runs ChucK code – a music programming language – that detects the strength of your breath and synthesizes the sound.

I decided to talk about this talk for several reasons. First, the fact that Stanford University has a Laptop Orchestra was really impressive. Students are able to learn and experience the future of music, exploring the interdisciplinary field of computer science and music. Furthermore, I thought it was really interesting that Ge decided to preserve the interaction between people and their instruments. Overall, his works are innovative and pioneering.

Steven Fei – Looking Outwards 10 – Sound Art


Bridging a connection between music and digital art, computational tools have created a new genre: sound art.

Inspired by the heritage of the Polish Radio Experimental Studio, a project called Apparatum was born. Using a digital interface written in JavaScript, the design collective panGenerator produces purely analogue sound. Based on magnetic tape and optical components controlled via a graphic score composed on the digital interface, the user can flexibly produce sounds at various levels and, both graphically and musically, invent a symphony of electronic music.

Meanwhile, the physical form of the equipment is designed in a modular fashion inside two steel frames. The two tape loopers, the optical generators producing sinusoidal tones, and the noise generators are all presented visually, so the user has a direct understanding of how, and what aspects of, the sound they are manipulating. The most inspiring feature of the project is its human interaction: the printout of the user’s graphic score, with a link to an mp3 file of the recording, gives the user a much clearer and easier understanding of the sound art and of how they can control and play with sound levels, amplitudes, frequencies, noise, and pitches. The artistic sensibility is manifested in the acoustic flexibility, in the visual appearance, and in the recordings of the variations of all the variables the users play with. The project inspires me to give the user more variables to control and to design a clear, elegant-looking program that arouses the interest of the audience.
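To make the graphic-score idea concrete, here is a minimal sketch (my own invention, not panGenerator’s code) of how marks on a score could be turned into a schedule of sound events. The field names and the pixels-per-second scaling are illustrative assumptions:

```javascript
// Hypothetical graphic-score interpreter: each mark has an x position
// (read left to right as time), a generator type ("tape", "sine", or
// "noise"), and a level. The interpreter sorts the marks and converts
// them into timed sound events.
function scheduleFromScore(marks, pixelsPerSecond) {
    return marks
        .slice() // avoid mutating the caller's array
        .sort(function (a, b) { return a.x - b.x; })
        .map(function (m) {
            return {
                startTime: m.x / pixelsPerSecond, // horizontal position -> onset time
                generator: m.type,                // which module plays this mark
                level: m.level                    // mark size -> amplitude
            };
        });
}
```

A playback loop would then trigger each event’s generator at its `startTime`, much as Apparatum’s tape and optical modules follow the composed score.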

The elegant physical appearance of the sound art equipment

Click here to visit the report about the project

Click here to view the Apparatum Project

Hyejo Seo – Looking Outwards 11

Lauren Lee McCarthy’s How We Act Together installation

‘How We Act Together’ is a project by the artist Lauren Lee McCarthy. She is an LA-based artist who explores “social relationships in the midst of surveillance, automation, and algorithmic living.” After looking through her projects, I realized that most of them put the viewers out of their comfort zones. For example, her other project, “SOMEONE,” is a human version of Amazon Alexa. McCarthy recruited four households across America in which she installed cameras and microphones. When viewers come to the installation at the museum, they get to play the role of a human Alexa. When I first read about this project, quite frankly, I was a little creeped out by the idea of people in the future being paid to be strangers’ “someone” behind a screen.

Just like her “SOMEONE” installation, McCarthy’s “How We Act Together” challenges people to feel somewhat uncomfortable by “asking participants to repeat different gestures until exhausted, to a point where the gesture no longer feels natural and its meaning begins to shift.” Evidently, she is playing with the different gestures and facial expressions that are used in social situations. Using software, participants are asked to scream, which the computer detects once their gestures conform to the metrics of computer vision algorithms. As participants scream at the screen, as seen in the video above, the screen displays another person screaming back at them. Watching strangers scream back at you eventually triggers a natural response from the current participant.

I chose to talk about Lauren’s projects because she pushes participants to a point where they feel uncomfortable by manipulating awkward social situations. I thought it was really interesting that she exposes her participants to different social phenomena that trigger uncomfortable responses. Her projects make one think more deeply about the uncomfortable social situations we are constantly exposed to.

“Greet” – a part of How We Act Together project.

ilona altman – project 09

sketch

var theImage;

function preload() {
    //loading my image
    var myImageURL = "https://i.imgur.com/3SFfZCZ.jpg";
    theImage = loadImage(myImageURL);
}

function setup() {
    createCanvas(480, 480);
    background(250,250,250);
    theImage.loadPixels();

    // going through each pixel 
    for (var x = 0; x < width + 10; x = x + 3) {
        for (var y = 0; y < height + 10; y = y + 2) {
            var pixelColorXY = theImage.get(x, y);
            if (brightness(pixelColorXY) >= 0 && brightness(pixelColorXY) <= 20) {
                //light pink
                stroke(255,230,230,70);
                line(x, y, x-1, y-1);
            } else if (brightness(pixelColorXY) >= 20 && brightness(pixelColorXY) <= 50) {
                //orange
                stroke(250,170,160);
                line(x, y, x+3, y+3);
            } else if (brightness(pixelColorXY) >= 50 && brightness(pixelColorXY) <= 55) {
                //pink
                stroke(230,130,160);
                line(x, y, x+3, y+3);
            } else if (brightness(pixelColorXY) >= 55 && brightness(pixelColorXY) <= 60) {
                //light blue-gray
                stroke(180,195,200);
                line(x, y, x-1, y-1);
            } else if (brightness(pixelColorXY) >= 65 && brightness(pixelColorXY) <= 70) {
                //yellow orange
                stroke(235,180,100);
                line(x, y, x-2, y-2);
            } else if (brightness(pixelColorXY) >= 75 && brightness(pixelColorXY) <= 85) {
                //muted red
                stroke(196,130,130);
                line(x, y, x-1, y-1);
            } else if (brightness(pixelColorXY) >= 85 && brightness(pixelColorXY) <= 95) {
                //dark red
                stroke(220,80,80);
                line(x, y, x-1, y-1);
            } else if (brightness(pixelColorXY) >= 95 && brightness(pixelColorXY) <= 110) {
                //pink
                stroke(220,69,90);
                line(x, y, x+2, y+2);
            } else if (brightness(pixelColorXY) >= 110 && brightness(pixelColorXY) <= 130) {
                //green
                stroke(80,130,60);
                line(x, y, x+1, y+1);
            } else if (brightness(pixelColorXY) >= 130 && brightness(pixelColorXY) <= 160) {
                //light orange
                stroke(220,170,130);
                line(x, y, x+1, y+1);
            } else if (brightness(pixelColorXY) >= 160 && brightness(pixelColorXY) <= 190) {
                //white
                stroke(255,255,255);
                line(x, y, x+3, y+3);
            } else if (brightness(pixelColorXY) >= 190 && brightness(pixelColorXY) <= 220) {
                //olive
                stroke(150,130,90);
                line(x, y, x+3, y+3);
            } else if (brightness(pixelColorXY) >= 220 && brightness(pixelColorXY) <= 255) {
                //red
                stroke(200,60,60);
                line(x, y, x+3, y+3);
            }
         
         }   
    }

}
function draw() {

}

In this project, I was thinking about memories with my family and about my grandma. I took some photos this summer of her teaching the rest of my family how to make Lithuanian dumplings. Interestingly, in my psychology class we have been learning about how what we remember becomes distorted over time. This is why I chose to make my image a bit distorted rather than perfectly clear. I also love gradient maps and wanted to emulate them in this piece. Yellow, green, and red are the colors of the Lithuanian flag as well.

Danny Cho – LookingOutwards 9

For this week’s Looking Outwards, I was inspired by Refik Anadol’s work Melting Memories, previously reviewed by my classmate Kristine Kim. Initially, the display, which replicates the behavior of solids and liquids, caught my attention because I assumed it was an actual material, not a 2D display. However, it turned out to be a visualization on a screen.

This made me curious about how real physical materials could become ephemeral. It certainly is magical to imagine a tangible form constantly morphing or growing into something else. I was also intrigued by New Balance’s 3D-printed midsole, seeing how generative design shapes a 3D form that later becomes tangible and is used in actual products.

That project was also reviewed by another classmate, Ilona Altman. Together with Melting Memories, it left me wondering and eager to see a physical form shifting in real time in reaction to a generative design algorithm.

Danny Cho – Project 9


Sketch

I made a self-portrait generator that recognizes bright red spots in an image, emphasizes them, and connects them with lines. I also tried to imitate the character of watercolor by playing with the transparency of the ellipses being drawn.

This is the original image

This is what has been generated by the algorithm

//Danny Cho
//Project 9
//changjuc@andrew.cmu.edu
//section A

var underlyingImage;
var pixColor;
var squareColor;

//arrays for red spots' coordinates
var redSpotsX = [];
var redSpotsY = [];

//preloads image
function preload() {
    var myImageURL = "https://i.imgur.com/5PlTu4V.jpg";
    underlyingImage = loadImage(myImageURL);
}

//sets up the canvas
function setup() {
    createCanvas(750, 1334);
    background(0);
    underlyingImage.loadPixels();
    frameRate(1000); // request many draws per second (browsers cap this at the display refresh rate)
}

function draw() {
    //random pixels chosen
    var px = random(width);
    var py = random(height);
    //limiting the randomness to be integers within the canvas boundary
    var ix = constrain(floor(px), 0, width-1);
    var iy = constrain(floor(py), 0, height-1);
    //color at the location
    var theColorAtLocationXY = underlyingImage.get(ix, iy);
    pixColor = color(theColorAtLocationXY);

    //if red value is the highest number out of RGB palette,
    //and is higher than 230, the part becomes pure red
    //and saves the coordinates of them for the lines to be drawn
    if (Math.max(red(pixColor), green(pixColor), blue(pixColor)) === red(pixColor)
        && red(pixColor) > 230) {
        redSpotsX.push(ix);
        redSpotsY.push(iy);

        pixColor = color(255, 0, 0);
    }
    //connects the red spots with lines
    for (var i = 0; i < redSpotsX.length - 1; i++) {
        stroke(200, 0, 0);
        strokeWeight(0.1);
        line(redSpotsX[i], redSpotsY[i],
             redSpotsX[i + 1], redSpotsY[i + 1]);
    }

    noStroke();
    // changes the transparency of the ellipses
    pixColor.setAlpha(100);
    fill(pixColor);
    ellipse(px, py, 15, 15);

}

looking outward – Ilona Altman – 09

I agree very much with Yoshi that this project is very beautiful, and incredible in the way it is effective for a specific user. I think it is interesting that something so useful can also be formally beautiful, and that there is a commercial incentive toward computer-generated forms because of the ease with which they can be individualized. It is interesting to me that the same forms which occur in nature can be used in the design process, and that there is a warmth inherent to structures that resemble natural forms. I think it is beautiful that there is such a deep relationship between growing natural forms and a sort of geometry that unifies that which exists within nature. This makes me think about fractals and sacred geometry.

I think it could have been interesting if the entire shoe, instead of just the sole, were constructed in this way.

A video of the project: New Balance’s generated soles, based on pressure distribution in the body
Yoshi’s response I was inspired by