Sean Meng – Looking Outwards – 2

Chang Liu’s “Central Park, face to a tree.”

Artist Chang Liu’s Wild Growth is a Processing application that generates abstract artworks from data gathered from live video. She employs a live camera feed that “sees” colors when pointed at natural environments, pictures, or human subjects. In turn, her software “paints” those colors into quickly generated portraits that blossom out of paint drips, splatters, and brush strokes. What I admire about this project is that it embraces the uncertainty of machine-generated imagery and gives it control over the painting or artwork that is produced. By capturing the characteristics of these drawings, the artist gives new meaning to computationally generated images and connects them to actual subjects such as “the trees in Central Park” or “snares”.
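Liu’s actual source isn’t shown here, but the core move — sampling color from a camera frame before “painting” with it — can be sketched in plain JavaScript. The `averageColor` helper and the flat RGBA array it expects (the format p5.js exposes as `video.pixels`) are illustrative assumptions, not her implementation:

```javascript
// Hypothetical sketch: average the color of an RGBA pixel array,
// the way a p5.js/Processing sketch might sample a webcam frame
// (e.g. video.pixels) before painting with the result.
function averageColor(pixels) {
  let r = 0, g = 0, b = 0;
  const count = pixels.length / 4; // RGBA: 4 values per pixel
  for (let i = 0; i < pixels.length; i += 4) {
    r += pixels[i];
    g += pixels[i + 1];
    b += pixels[i + 2]; // alpha at (i + 3) is ignored
  }
  return [Math.round(r / count), Math.round(g / count), Math.round(b / count)];
}
```

A sketch could call this once per frame and feed the result to `fill()` before drawing its drips and strokes.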

Sean Meng – Project 02 – Variable face

hmeng-project 2

//Sean(Han) Meng
//Section C
//hmeng@andrew.cmu.edu
//Project-02-Variables-Face

var hairColor = 20;
var hairShape = 10;
var glassThickness = 3; 
var mouthSize = 2;

function setup() {
    createCanvas(480, 480);
    background(240);
 
}

function draw() {   
    background(240);

 //Left Ear
    stroke(240,224,179);
    fill(240,224,179);
    strokeWeight(1);
    strokeJoin(ROUND);
    beginShape();
    vertex(180, 240);
    vertex(167, 240);
    vertex(160, 260);
    vertex(180, 290);
    vertex(195, 300);
    vertex(195, 260);
    endShape();

//Right Ear
    stroke(240,224,179);
    fill(240,224,179);
    strokeWeight(1);
    strokeJoin(ROUND);
    beginShape();
    vertex(340, 240);
    vertex(344, 240);
    vertex(350, 260);
    vertex(330, 290);
    vertex(325, 300);
    vertex(325, 260);
    endShape();

 //Face
    stroke(255,244,189);
    fill(255,244,189);
    strokeWeight(3);
    strokeJoin(ROUND);
    beginShape();
    vertex(195, 270); 
    vertex(195, 310);  
    vertex(230, 335);  
    vertex(247, 342); 
    vertex(270, 345);
    vertex(317, 298); 
    vertex(317, 225);   
    vertex(195, 225); 
    endShape();

 //The color of the hair changes to a random shade of blue every time the mouse is clicked
    stroke(hairColor, hairColor * 2, hairColor * 3);
    fill(hairColor, hairColor * 2, hairColor * 3);
    strokeWeight(4);
    strokeJoin(ROUND);
    beginShape();
    vertex(182.5, 262.5);
    vertex(167.5, 220);

//The silhouette of the hair expands randomly every time the mouse is clicked
    vertex(180 - hairShape * 2, 150 + hairShape);
    vertex(250, 127.5 - hairShape * 2);
    vertex(317.5 + hairShape / 2, 150 - hairShape / 2);
    vertex(340 + hairShape * 2, 202.5 - hairShape * 2);
    vertex(328, 263.25);
    vertex(322, 265.25);
    vertex(317.5, 225);
    vertex(198, 231);
    vertex(280, 225);
    vertex(255, 235);
    vertex(240, 230);
    vertex(230, 235);
    vertex(197, 230);
    vertex(195, 270);
    vertex(182.5, 262.5);
    endShape();

//The thickness of the glasses frame varies every time the mouse is clicked
    stroke(0);
    fill(255);
    strokeWeight(glassThickness);
    arc(225, 265, 42, 37, 0, PI + PI);
    fill(255);
    arc(285, 265, 42, 37, 0, PI + PI);
    stroke(150);
    strokeWeight(3);
    curve(246, 263, 246, 260, 260, 260, 282, 263);
    stroke(0);
    strokeWeight(3);
    line(195, 260, 205, 260);
    line(305, 260, 319, 260);

//Nose
    strokeWeight(2);
    line(255, 270, 250, 300);
    line(250, 300, 255, 300);

//The size of the mouth changes every time the mouse is clicked
    noStroke();
    fill(240,214,159);
    triangle(240 - mouthSize, 325, 273 + mouthSize, 325, 250, 320 - mouthSize);
}


function mousePressed(){
    hairColor = random(0, 80);
    hairShape = random(5, 30);
    glassThickness = random(3, 7);
    mouthSize = random(2, 10);
}

In this project, I explored the interactive component “mousePressed” to assign random values, such as color or shape, to my drawing. I was inspired by a character in Rick and Morty, so the hair varies in shades of blue and expands outwards.

Jai Sawkar – Looking Outwards – 02

Komorebi is a platform that uses a robotic projector and generative projections to replicate the natural reflections and shadows of sunlight.

This week, I learned about Komorebi, a project created by designer Leslie Nooteboom. The projector generates sunlight and natural reflections on surfaces, mimicking daylight in a room. I spent most of my adolescent life in the Seattle area before moving to Pittsburgh for school; moreover, this year my room is located in the basement of my house. Through these experiences I can attest that daylight, or the lack of it, has a large impact on how you feel. Dwindling sunlight in the winter, or the minimal daylight of a basement, can drastically change your mood; hence, a platform that generates light from real-time data (anywhere in the world) can enhance the day-to-day lives of its users, boosting their mood and energy whenever they are in the same room as it.

From my understanding of how the platform works, Komorebi uses data on sunlight patterns in its available cities and relays that information to the platform itself. There, light and its “random shadows” are signaled to mimic the illusion of light through a window.
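As a rough sketch of that idea — not Komorebi’s actual code — the sunlight data could be reduced to a single sun-elevation reading and mapped to a projected light color, warm near the horizon and neutral at midday. The elevation input and the color ramp below are assumptions for illustration:

```javascript
// Hypothetical mapping from sun elevation (degrees above the horizon)
// to an RGB light color: warm at sunrise/sunset, neutral at midday.
function sunlightColor(elevationDeg) {
  // clamp to [0, 90]; below the horizon is treated like the horizon
  const t = Math.max(0, Math.min(elevationDeg, 90)) / 90;
  const r = 255;
  const g = Math.round(180 + 75 * t);  // 180 (warm) -> 255 (neutral)
  const b = Math.round(120 + 135 * t); // 120 (warm) -> 255 (neutral)
  return [r, g, b];
}
```

Driving such a ramp with real-time sun-position data for a chosen city would give the projector its “window light” for that moment of the day.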

What I admire about the artist is that they saw a need for the sunlight many of us are seeing less and less of; they used both design and computation to create a solution, and made it beautiful.

Article

Jai Sawkar – Project 02 – Variable Face

Portrait_Jai

//Jai Sawkar
//Section C
//jsawkar@andrew.cmu.edu
//Project-02: Variable Face 

var milkBody = '#99dfff';
var milkDark = '#008bcc';
var milkDark1 = '#80d7ff';
var mouth1 = 44;
var mouth2 = 131;
var mouth3 = 171;
var compli1 = 179;
var compli2 = 231;
var compli3 = 255;
var pupil = 9.5;
var pupilLeft2 = 274;
var pupilRight = 321;
var eyeLeft = 0;
var eyeRight = 0;
var bg1 = 230;
var bg2 = 247;
var bg3 = 255;
var m1 = 299;
var m2 = 239;
var m3 = 42;
var m4 = 17;

function setup() {
    createCanvas(640, 480);
}

function draw() {
    background(bg1, bg2, bg3);

    stroke(1);

    //milkcarton
    fill(milkBody);
    quad(217.4, 132.1, 301.1, 127.4, 307.8, 142.3, 225.4, 147.7);
    quad(225.4, 147.7, 257.9, 190.7, 339.4, 179.6, 307.8, 142.3);
    quad(257.9, 190.7, 339.4, 179.6, 339.3, 283.5, 261, 298);

    //shadow1
    fill(milkDark1);
    quad(183.5, 178.6, 185.4, 280.4, 261.1, 298.1, 257.9, 190.7);
    triangle(257.9, 190.7, 183.5, 178.6, 225.4, 147.6);

    //shadow2
    fill(0, 139, 204, 80);
    noStroke();
    triangle(183.5, 178.7, 231.1, 154.9, 225.4, 147.7);

    //shadow3
    fill(0, 139, 204, 80);
    triangle(183.5, 178.7, 242.7, 171.9, 231, 154.8);
    stroke(1);
    line(183.5, 178.7, 242.7, 171.9);

    //faces

    noStroke();
    //face1
    fill(milkDark);
    //left and right eye
    ellipse(278, 215, eyeLeft);
    ellipse(323, 210, eyeRight);

    //pupil1
    noStroke();
    fill(compli1, compli2, compli3);
    ellipse(pupilLeft2, 213, pupil);
    ellipse(pupilRight, 208, pupil);

    //mouth1
    fill(milkDark);
    ellipse(m1, m2, m3, m4);
}

function mousePressed() {
    pupil = random(9.5, 14);
    pupilLeft2 = random(274, 280);
    pupilRight = random(321, 328);
    eyeLeft = random(25, 30);
    eyeRight = random(25, 30);
    bg1 = random(102, 240);
    bg2 = random(0, 204);
    bg3 = random(204, 255);
    m1 = random(290, 310);
    m2 = random(239, 250);
    m3 = random(30, 50);
    m4 = random(10, 24);

}

When beginning the project, I wanted to start with something normal and bland, but once clicked, it would turn into an unexpected reaction. It is for this reason I chose a milk carton and made the initial face quite neutral; once activated, a sense of euphoria fills the little carton, changing his expression & the background with every click.

Carly Sacco – Looking Outwards – 02

A generated baroque architecture example of Celestino Soddu.

Celestino Soddu is an architect known for using generative art to create forms and designs for architecture. One of his notable projects, from 2011, uses algorithms to replicate and create new geometries that mimic Baroque architecture, imagining new dynamic structures. I think this is a very interesting way to use generative art: taking a precedent, recreating the same ideas, and re-imagining them with modern technology.

Celestino Soddu using generative methods to create Baroque style architecture.

Soddu used an algorithm matrix based on 21 that generates seemingly perfect but rarely repeatable architecture. The algorithm adapts to achieve unpredictable orders. This is representative of his work because Soddu tries to have his generative forms still represent the styles of the Baroque. Although he is digitally creating a new design, the product can still be associated with Baroque architecture, which is his ultimate goal.
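Soddu’s matrix itself isn’t reproduced here, but the general pattern — randomized rules applied to a base form so each run stays in the style while rarely repeating — can be sketched in JavaScript. The jitter rule and the injectable `rand` source are illustrative assumptions, not his algorithm:

```javascript
// Illustrative sketch of rule-based variation (not Soddu's algorithm):
// perturb a base profile with bounded random offsets so each generated
// form stays recognizably "in the style" yet is rarely repeated exactly.
function generateProfile(basePoints, rand = Math.random) {
  return basePoints.map(([x, y]) => [
    x + (rand() - 0.5) * 4, // bounded jitter preserves the overall order
    y + (rand() - 0.5) * 4,
  ]);
}
```

Passing a seeded `rand` makes a run reproducible; with `Math.random`, two runs from the same base form will almost never coincide, echoing Soddu’s “rarely repeatable” results.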

Generative art that represents styles of Baroque Architecture.

Aaron Lee – Looking Outwards – 02

“theatroacoustic” system. Courtesy LMN Architects

Creator’s name: LMN Architects

Title of the work: Ceiling of a new concert hall at the Voxman School of Music

Year of creation: 2016

Link: https://lmnarchitects.com/project/university-of-iowa-voxman-music-building

Construction sequence. Courtesy LMN Architects

Today, architects use scripting and programming to achieve complex yet fascinating geometry by computer, which leads directly to digital fabrication of mockups. The curves of these new shapes are not only a new aesthetic but are also practical to build in reality.

LMN Architects in Seattle designed the ceiling of a new concert hall at the Voxman School of Music using generative design and construction. Sophisticated scripted algorithms were necessary so the ceiling could perform seamlessly as an acoustic reflector and a distributor of light and air while remaining, most importantly, aesthetically pleasing. Parameters and variables were first defined, and a generative algorithm calculated the best degree of curvature and the number of openings for the 946 unique folded-aluminum composite modules, no two of which are alike.

A further advantage of designing in scripts is that the design can easily be transcribed into fabrication language. In this case, any change to the parameters could be produced as a physical mockup to simulate acoustic performance. Iterating on this process confirmed that the final product was the best option.
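The parametric logic described above can be sketched in JavaScript: derive each module’s fold and perforation values from its index, so the full set of 946 modules follows one rule while varying across the ceiling. The specific formulas below are illustrative assumptions, not LMN’s script:

```javascript
// Illustrative parametric rule (not LMN's actual script): each module's
// fold angle and opening count are derived from its position in the set.
function moduleParams(i, total = 946) {
  const t = i / (total - 1); // 0..1 across the ceiling
  return {
    foldAngleDeg: 10 + 35 * Math.sin(Math.PI * t), // shallow at edges, deep mid-span
    openings: 2 + (i % 5), // 2..6 perforations per module
  };
}

// generate the whole family of modules
const modules = [];
for (let i = 0; i < 946; i++) modules.push(moduleParams(i));
```

Because every module is a function of its index, a change to one formula regenerates all 946 — the property that let each parameter tweak flow straight into a new fabrication mockup.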

Taisei Manheim – Project 02 – Variable Face


sketch

When coding this project, at first I had difficulty controlling the different variables to make them work the way I wanted, such as the color of the face and body. However, once I got the hang of it, I was able to have multiple elements of the face changing at the same time without the face becoming too distorted.

//Taisei Manheim
//Section C
//tmanheim@andrew.cmu.edu
//Assignment-02

var eyeSize = 10;
var faceWidth = 100;
var faceHeight = 200;
var mouthWidth = 10;
var mouthHeight = 10;
var bodyWidth = 300;
var bodyHeight = 200;
var color = 100;
var armWidth = 20;
var armHeight = 100;

function setup() {
    createCanvas(600, 480);
}

function draw() {
    background('pink');

    fill(50, 155 - color, 255 - color);

    var armLx = width / 2 - bodyWidth / 4 - armWidth / 2;
    var armRx = width / 2 + bodyWidth / 4 - armWidth / 2;
    var armLy = height / 2 + faceHeight / 2 - armHeight + 40;
    var armRy = height / 2 + faceHeight / 2 - armHeight + 40;
    rect(armLx, armLy, armWidth, armHeight);
    rect(armRx, armRy, armWidth, armHeight);
    //arms

    ellipse(armLx + armWidth / 2, armLy, 40, 60);
    ellipse(armRx + armWidth / 2, armRy, 40, 60);
    //hands

    var bodyPos = height / 2 + faceHeight / 2 + bodyHeight / 2 - 10;
    ellipse(width / 2, bodyPos, bodyWidth, bodyHeight);
    //body

    ellipse(width / 2, height / 2, faceWidth, faceHeight);
    //face

    fill('white');
    var eyeLX = width / 2 - faceWidth * 0.25;
    var eyeRX = width / 2 + faceWidth * 0.25;
    arc(eyeLX, height / 2, 2.5 * eyeSize, eyeSize, 0, PI);
    arc(eyeLX, height / 2 + 2, 2.5 * eyeSize, eyeSize, PI, PI + PI, OPEN);
    arc(eyeRX, height / 2, 2.5 * eyeSize, eyeSize, 0, PI);
    arc(eyeRX, height / 2 + 2, 2.5 * eyeSize, eyeSize, PI, PI + PI, OPEN);
    //eyes

    fill('black');
    ellipse(eyeLX, height / 2 + 1, eyeSize - 2, eyeSize - 2);
    ellipse(eyeRX, height / 2 + 1, eyeSize - 2, eyeSize - 2);
    //pupils

    fill('black');
    rect(eyeLX - 15, height / 2 - faceHeight / 4, 30, 8);
    rect(eyeRX - 15, height / 2 - faceHeight / 4, 30, 8);
    //eyebrows

    fill(255, 0, 0);
    ellipse(width / 2, height / 2 + faceHeight / 4, mouthWidth, mouthHeight);
    //mouth

    fill('orange');
    rect(width / 2 - 60, height / 2 - 10 - faceHeight / 2, 120, 30);
    rect(width / 2 - 25, height / 2 - 110 - faceHeight / 2, 50, 100);
    //top hat
}

function mousePressed() {
    faceWidth = random(100, 150);
    faceHeight = random(100, 200);
    eyeSize = random(10, 20);
    mouthWidth = random(5, 30);
    mouthHeight = random(5, 30);
    bodyWidth = random(300, 500);
    bodyHeight = random(200, 400);
    color = random(0,155);  
    armWidth = random(20, 30);
    armHeight = random(80, 120);
}

Taisei Manheim – Looking Outward – 02

The generative project I chose is Taxi, Taxi, created in 2016 by Robert Hodgin in collaboration with Jonathan Kim and with the help of Ryan Bartley. Taxi, Taxi is a digital installation in Samsung 837, the Samsung Experience Center in New York City. The project uses NYC Open Data taxi records to create a real-time simulation of the routes taxi drivers take. It was created with the Cinder C++ coding framework, of which Robert Hodgin is a co-creator, and Open Street Map data was used to build the 3D model of New York City. Hodgin and his collaborators had to have real design sensibilities: even though the taxi data came from a New York City database, it was up to them to display the information in a clear and aesthetic manner.

Joanne Chui – Looking Outwards 02

Crows are Chased and the Chasing Crows are Destined to be Chased as well, Transcending Space – By teamLab

This digital piece is generated through interaction with visitors in the space. The visuals are played along with music and rendered in real time, not pre-recorded or set on a loop. Visitors can walk around the space and are completely surrounded by the projections. The projection consists of crows flying around the space and chasing each other; if they crash into each other or into a visitor, they explode into a floral pattern. Because of its generative nature, the installation is always changing and never produces the same result twice.

I was really interested in this piece because we usually view digital art as flat, but this installation immerses users in the artwork and surrounds them with it. It becomes a kind of digital architecture in which users interact with the generative artwork through movement in a space.

A suggestion I have would be to experiment more with the surfaces of the rooms and the architecture of the space; it would have been interesting to see a curved wall, for example.

Ammar Hassonjee – Looking Outwards 02

An Interactive Wall That Moves According to Hand Movement

An interesting example of generative art I found online is a project called “Breathing Wall II” built in 2014 by Behnaz Farahi, an architecture professor at USC.

According to Farahi’s website, the “Breathing Wall” installation is made of wood, PVC, and fabric, and it is directly controlled by an Arduino that works with a Leap Motion system and a depth camera to capture hand movement. The hand motions and signals are processed by DC systems and algorithms, then projected onto the wall as contour lines, alongside changes in the wall’s surface and color.

What I like most about this project is its exploration of the relationship between movement and the material, light, and color of the physical system. The installation gives users the power to interact with and impact their environment, allowing intuitive controls like swiping and tapping to directly disturb the form and color of the wall, which contributes to a rich sensory experience. In my opinion, art that is adaptive and involves activity from the user is the most compelling kind.
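The interaction loop described above can be sketched in JavaScript: map a tracked hand position (as a Leap-Motion-style depth camera might report it) to a per-actuator push on the wall, strongest nearest the hand. The falloff radius and the coordinate scheme are illustrative assumptions, not Farahi’s implementation:

```javascript
// Hypothetical sketch: how far one wall actuator should push outward,
// given the tracked hand position, with a linear distance falloff.
function actuatorDisplacement(hand, actuator, maxPush = 1.0, radius = 0.5) {
  const dx = hand.x - actuator.x;
  const dy = hand.y - actuator.y;
  const d = Math.sqrt(dx * dx + dy * dy);
  // full push directly under the hand, tapering to zero beyond the radius
  return maxPush * Math.max(0, 1 - d / radius);
}
```

Evaluating this for every actuator in a grid, each frame, would produce the wave of deformation that appears to follow the visitor’s hand.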

A user interacts with “Breathing Wall II”.
Contour lines can be seen projected on the background of the wall.

Behnaz Farahi is known for her exploration in computational design, specifically with the use of interactive and motion based installations. Her work aims to connect human experience with structure as the focus is on how intuitive human motions and movement can impact a system. In addition to wall like installations, she has also designed interactive clothing and ceilings, which can be found on her website.