Atraylor – Project 02 – Variable Faces

When making this project, I was thinking of all the proportional measurements I use to draw human faces, especially the relationship between the eyes and the rest of the face. Starting out, I noticed that my random faces were just scaling up and down because every ratio was established on a single variable, the eye size, so I gave the other proportions variables of their own. The end result is more interesting and resembles some of the steps I would take to draw a face.



//Allison Traylor, atraylor,
//section B, Project 02


var eyeSize = 30; //eye size
var eyeSpaceL = 0.25; //eye space left
var eyeSpaceR = 0.25; //eye space right
var faceWidth = (eyeSize * 5); //face oval width (relative to eye size)
var faceHeight = 210; // face oval height
var craniumWidth = 150;
var craniumHeight = 150;
var mouthWidth = 25; // corner locations for mouth

var colorL = [200, 200, 200, 200]; // color of left eye (RGBA array; (200,200,200,200) would collapse to a single 200 via the comma operator)
var colorR = [200, 200, 200, 200]; // color of right eye
var mColor = [200, 200, 200, 200]; // color of mouth


function setup() {
    createCanvas(480, 640);
    background(220);

}


function draw() {

    background(222);
    var noiseVal = random(0.001, 0.6); // flickering lines
    // chin height is proportional to faceHeight, trying to place it right below the mouth
    var chinHeight =  height/2 + faceHeight/4 + faceHeight/8 + faceHeight/16;

    // Cranium
    noFill();
    ellipse(width/2, height/2 - eyeSize, craniumWidth, craniumHeight);

    //bounding box
    push();
    noFill();
    stroke(noiseVal * 255); // lines flicker
    rect(width/2 - craniumWidth/2, height/2 - eyeSize - craniumHeight/2, craniumWidth, craniumHeight);
    pop();

    //face ellipse
    noFill();
    ellipse(width/2, height/2, faceWidth, faceHeight);


    // nose line proportional to eyes
    line(width/2 - eyeSize/2, height/2 + faceHeight/4,
         width/2 + eyeSize/2, height/2 + faceHeight/4);

    //eyes
    fill(colorL);
    ellipse((width/2) - faceWidth*eyeSpaceL, height/2, eyeSize, eyeSize); // Left eye (to viewer)
    fill(colorR);
    ellipse((width/2) + faceWidth*eyeSpaceR, height/2, eyeSize, eyeSize); // right eye

    //eye line
    line(width/2 - faceWidth/2, height/2, width/2 + faceWidth/2, height/2);


    // mouth
    fill(mColor);
    //I'm trying to isolate the mouth in a certain area between nose and chin
    triangle((width/2) - mouthWidth, height/2 + faceHeight/4 + faceHeight/8,
        width/2, height/2 + faceHeight/4 + faceHeight/12,
        (width/2) + mouthWidth,  height/2 + faceHeight/4 + faceHeight/8);

    //chin
    push();
    noFill();
    stroke(noiseVal * 255);
    ellipse(width/2, chinHeight, mouthWidth, mouthWidth);
    pop();

    // plumbline
    line(width/2, height/4, width/2, (height - height/4));


}

function mousePressed(){

    // generating random values
    eyeSize = random(20, 50);
    eyeSpaceR = random(0.20, 0.30);
    eyeSpaceL = random(0.20, 0.30);
    faceHeight = random(190, 250);
    craniumWidth = random(150, 200);
    craniumHeight = random(150, 200);
    mouthWidth = random(10, 30);

    var from = color(200,200,200,200); //interpolate from this color
    var to = color(0,0,0,200); // to this color

    // random numbers to be used in lerpColor
    var lerpNumber = random(0,1);
    var lerpNumber2 = random(0,1);
    var lerpNumber3 = random(0,1);

    mColor = lerpColor(from, to, lerpNumber); // random shade for mouth

    colorR = lerpColor(from, to, lerpNumber2); //random shade for right eye
    colorL = lerpColor(from, to, lerpNumber3); // random shade for left eye
}

yoonyouk-project02-variablefaces


//Yoon Young Kim
//Section E
//yoonyouk@andrew.cmu.edu
//Project-02

var eyeLxSize = 40;
var eyeRxSize = 20;
var faceWidth = 240;
var faceHeight = 200;

var nose1x = 300;
var nose1y = 250;
var nose2x = 350;
var nose2y = 250;
var nose3x = 325;
var nose3y = 285; 
var earpoint = 100;
var smileWidth = 25;
var smileHeight = 25;

//colors
var R1 = 232;
var R2 = 64;
var B1 = 33;
var B2 = 91;
var B3 = 237;
var Ye1 = 323; // note: p5.js clamps color values above 255 to 255
var Ye2 = 204;
var Ye3 = 74;

var RW = 4; // weight of any red line


function setup() {
    createCanvas(640, 480);
    background(220);
    angleMode(DEGREES);
}

function draw() {
    background (B1, B2, B3);
    
    //face
    noStroke();
    fill(Ye1, Ye2, Ye3);
    ellipse(width/2, height/2 - 10, faceWidth, faceHeight);
   
    //eyes
    stroke(R1, R2, R2);
    strokeWeight(RW);
    noFill();
    var eyeLX = width / 2 - faceWidth * 0.25;
    var eyeRX = width / 2 + faceWidth * 0.25;
    ellipse(eyeLX, height / 2, eyeLxSize, eyeLxSize);
    ellipse(eyeRX, height / 2, eyeRxSize, eyeRxSize);
    
    //triangle nose
    noStroke();
    fill(R1, R2, R2);
    triangle(nose1x, nose1y, nose2x, nose1y, nose3x, nose3y);

    //ears
    //left ear
    fill(Ye1, Ye2, Ye3);
    triangle(220, 190, 225, earpoint, 310, 150);


    //right ear
    triangle(420, 190, 420, earpoint, 340, 150);

    //mouth
    stroke(R1, R2, R2);
    strokeWeight(RW);
    noFill();
    arc(width / 2, height / 2 + faceWidth * 0.20, smileWidth, smileHeight, 0, 180);
}

function mousePressed() {
    faceWidth = random(200, 300);
    faceHeight = random(200, 275);
    eyeLxSize = random(20, 80);
    eyeRxSize = random(30, 60);
    R1 = random(200, 250);
    R2 = random(50, 100);
    B1 = random(20, 70);
    B2 = random(90 ,140);
    B3 = random(200, 250);
    Ye1 = random(300, 350);
    Ye2 = random(180, 230);
    Ye3 = random(50, 100);
    RW = random(7, 15);
    smileWidth = random(40, 80);
    smileHeight = random(20, 50);
    nose1x = random(270, 300);
    nose1y = random(240, 260);
    nose2x = random(320, 350);
    nose3x = random(300, 325);
    nose3y = random(270, 285); 
    earpoint = random(50, 120);
}

 

Much like the first project, this one required a lot of playing around and trial and error. I wanted to use primary colors and vary their values randomly to evoke a childlike quality. To experiment with different shapes, I decided to go with a cat face. I like how each click generates a new combination, a new cat face.

yoonyouk-LookingOutwards-02

http://www.memo.tv/pattern-recognition/

This project is Memo Akten’s “Pattern Recognition,” created in collaboration with choreographer Alexander Whitley.

This generative art project involves a series of lights learning and observing how two dancers move and interact. The lights thus move according to the dancers’ movements and placement within the space.

I appreciated how fully the light is integrated with the dance. Rather than merely being a light show that adds aesthetic value to the performance, the lights serve a cognitive purpose: they assist the dancers in recognizing and learning routines. This project made me realize that generative art is not limited to media art but extends to performance art and music. Like most generative art, the outcome depends on the interactions, so the outcomes always seem to be unique.

The algorithm of Pattern Recognition relies heavily upon the movement and the space of the subjects: the dancers. The artist made the decision to use strong, harsh strobe lights that would beam lights across a stage. I thought this artistic decision was appropriate as a generative component since the light would be empathetic to the dancer’s form and movements.

mjnewman LookingOutwards-02, Section A

Roman Verostko is an American artist who uses code and machinery to create meticulous, flowing forms on paper. Verostko has produced works like Cyberflower VII (2000) using a pen attached to a plotter, driven by algorithms coded on the computer. What drew me (pun intended) to Verostko’s work is the effort he took to convert digital products into physical ones, ultimately creating organic forms built of many delicate yet precise lines, all through machinery. He first learned how to create form through algorithms with FORTRAN in 1970. He now uses elementary BASIC with DMPL to drive a Houston Instruments plotter (with an inkwell pen attached) hooked up to a PC in order to translate the digital into the physical. Verostko initially turned to the art of the early 20th century for inspiration. He is specifically drawn to the “pure form” that artists such as the Suprematist Kazimir Malevich and the De Stijl artist Piet Mondrian were able to capture in their paintings. It makes sense to me that someone using algorithms to create artwork would be attracted to the simple, mathematical approach painters like Malevich and Mondrian took. The video below demonstrates the precision and care Verostko places on his work, similar to his inspirations.


daphnel-Project02-VariableFace

Variable Face

//Daphne Lee
//15-104::1 (9:30am)-Section D
//daphnel@andrew.cmu.edu
//Project-02

var eyeSize = 20;
var faceWidth = 200;
var faceHeight = 200;
var faceInnerW= 150;
var faceInnerH= 150;
var nosePosX=50;
var nosePosY= 20;
var blush= 30;

function setup() {
    createCanvas(480, 640);
}

function draw() {
    //head outside
    background(255, 230, 238);
    fill(155, 155, 155);
    ellipse(width / 2, height / 2, faceWidth, faceHeight);

    //innerface
    fill(255); // white
    ellipse(width / 2 - 40, height / 2, faceInnerW/1.5,  faceInnerH);
    ellipse(width / 2 + 40, height / 2, faceInnerW/1.5,  faceInnerH);
    ellipse(width / 2, height / 2 +43, faceInnerW/1.5 +25,  faceInnerH/1.7);

    //eyes
    var eyeLX = width / 2 - faceWidth * 0.25;
    var eyeRX = width / 2 + faceWidth * 0.25;
    fill(77,51,25);
    ellipse(eyeLX, height / 2, eyeSize + 20, eyeSize + 20);
    ellipse(eyeRX, height / 2, eyeSize + 20, eyeSize + 20);

    //corner white eyes
    fill(255);
    ellipse(eyeLX-9, height / 2 -17, eyeSize/1.5, eyeSize/1.5);
    ellipse(eyeRX-9, height / 2 -17, eyeSize/1.5, eyeSize/1.5);

    //blush
    fill(255, 204, 204);
    ellipse(width/2 -50, height/2 +21, blush, blush/2);
    ellipse(width/2 +50, height/2 +21, blush, blush/2);

    //nose
    strokeWeight(0);
    fill(255, 204, 0);
    ellipse(width/2, height/2 + 20,nosePosX, nosePosY);

}

function mousePressed() {
    faceWidth = random(190, 250);
    faceHeight = random(190, 250);
    faceInnerW= random(150,165);
    faceInnerH= random(150,165);
    eyeSize = random(20, 30);
    nosePosX = random(30, 60);
    nosePosY = random(10, 40);
    blush= random (20,55);

}

I started with the idea of a penguin in my mind. It was hard to get the beak right: I wanted to use a triangular shape as the base, but I ended up with an ellipse. It’s quite similar to my first project in the way I used and overlapped shapes to get the other shapes I wanted. Overall, I think the end result wasn’t that bad.

kyungak-lookingoutwards-02

Artists: French contemporary digital artists and multimedia choreographers Adrien M & Claire B (2015)

“XYZT: Abstract Landscapes” is a combination of 10 different works, each built on its own mathematical algorithm that responds to the movement of visitors. Once contact is made with the touch screen, whether by hand or foot, the sensors trigger a cloud of pixels to form certain shapes. For instance, the pixels might mimic the person standing in front of the screen.

One of the most famous works from the XYZT series is called the “Letter Tree.” The algorithm forms a tree of letters that drift across the touch panel as an algorithmically generated wind blows on them. With this piece the artists tried to turn the question of whether wind can compose poems into an artwork.
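The actual XYZT algorithms aren’t published, but the drifting-letters effect can be approximated in a few lines of p5.js. In the sketch below the “wind” is simply a Perlin-noise vector field and the text is a placeholder phrase of my own; both are assumptions for illustration, not the artists’ method.

// A small p5.js sketch in the spirit of the "Letter Tree" described above.
// The real XYZT algorithms are not published, so treating the wind as a
// Perlin-noise vector field is purely an assumption for illustration.

let letters = [];
const MESSAGE = "can the wind compose poems"; // placeholder text, not from the piece

function setup() {
    createCanvas(600, 400);
    textSize(24);
    fill(255);
    noStroke();
    for (let i = 0; i < MESSAGE.length; i++) {
        letters.push({ ch: MESSAGE.charAt(i), x: 40 + i * 20, y: height / 2 });
    }
}

function draw() {
    background(0);
    for (let l of letters) {
        // noise() gives a smooth pseudo-random "wind" direction at each position
        let angle = noise(l.x * 0.005, l.y * 0.005, frameCount * 0.01) * TWO_PI * 2;
        l.x += cos(angle) * 0.8;
        l.y += sin(angle) * 0.8;
        if (l.x > width) l.x = 0;   // wrap so the letters keep drifting
        if (l.x < 0) l.x = width;
        if (l.y > height) l.y = 0;
        if (l.y < 0) l.y = height;
        text(l.ch, l.x, l.y);
    }
}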

The most fascinating aspect of XYZT is its interactivity. The artists’ aim of expressing the distortion of shapes and meanings really comes alive through the digital medium. Because the mathematical algorithms can respond to the thousands of distortions viewers create by touch, it becomes only a matter of time for the artists and the viewers to understand the different ways the shapes get distorted and, most importantly, to have fun.

rgroves – LookingOutwards-02

Dreamlines is an interactive artwork created in 2005 by Leonardo Solaas. Unfortunately, it is no longer functional due to changing technologies. However, a user was once able to enter a series of words describing a dream they would like to dream, and the program would search for those words on Google Images and use the results to generate what the artist called an “ambiguous painting, in perpetual change, where elements fuse into one another, in a process analogous to memory and free association.”

One thing I admire about this project is how the use of found images adds to the uncanny quality of the moving paintings that makes them so dreamlike. We don’t know when the photos were taken or anything about the subjects or the photographer. We don’t know if they’re dead or alive – and if they’re dead, how they died. Like in dreams, our imagination can turn something innocent (a woman standing on a porch) into a nightmare.

The artist describes the algorithm for the videos on his website. Each video is composed of 1500 autonomous particles in perpetual movement; each particle’s velocity is determined by the color value of the pixel it is “stepping” on, with the hue, saturation, and brightness of that pixel translating into an angle and a speed for the particle.
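Since the exact color-to-motion mapping isn’t documented, here is a minimal p5.js sketch of the general idea, assuming hue drives the heading and brightness drives the speed; the image file name and the trail style are my own stand-ins, not Solaas’s code.

// A minimal sketch of the particle system described above -- not Solaas's actual
// code. Assumed mapping: hue -> direction, brightness -> speed. "source.jpg" is a
// hypothetical stand-in for a found image.

let img;
let particles = [];

function preload() {
    img = loadImage("source.jpg"); // hypothetical image file
}

function setup() {
    createCanvas(600, 400);
    background(0);
    img.loadPixels();                    // read the pixel data once
    for (let i = 0; i < 1500; i++) {     // 1500 autonomous particles, per the text
        particles.push({ x: random(width), y: random(height) });
    }
}

function draw() {
    for (let p of particles) {
        // sample the image pixel the particle is currently "stepping" on
        let ix = constrain(floor(p.x * img.width / width), 0, img.width - 1);
        let iy = constrain(floor(p.y * img.height / height), 0, img.height - 1);
        let i = 4 * (iy * img.width + ix);
        let r = img.pixels[i], g = img.pixels[i + 1], b = img.pixels[i + 2];
        let c = color(r, g, b);
        let angle = map(hue(c), 0, 360, 0, TWO_PI);      // assumed: hue -> direction
        let speed = map(brightness(c), 0, 100, 0.3, 2);  // assumed: brightness -> speed
        let nx = p.x + cos(angle) * speed;
        let ny = p.y + sin(angle) * speed;
        stroke(r, g, b, 25);
        line(p.x, p.y, nx, ny);                          // faint accumulating trail
        p.x = (nx + width) % width;                      // wrap at the canvas edges
        p.y = (ny + height) % height;
    }
}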

The code combines the unpredictable data presented by the keywords that users enter, an algorithm that randomly selects images from Google, and a logical, predictable function on the color values of the pixels to create perpetual motion and chaos. This method is meant to reflect the processes that take place in our heads – in some ways unpredictable and emotional but actually just a series of electrical impulses, each one determined by physical conditions.

Video: https://vimeo.com/136955826

Website: http://solaas.com.ar/dreamlines/

ashleyc1-Section C-Looking Outwards-02

Waves is a generative art installation created in 2016 by LIA. It was shown as part of a live concert, “Blades,” by @c. There is little information about this particular work, but it is stated that the waves are generated by sine functions that slowly build on top of each other to create more complex forms and movements. What personally attracts me to this piece is how abstract it is. Even though the color scheme and motion reference waves, it doesn’t read as a literal representation. There are moments where the lines look almost pixelated and scale-like because of how sharp their edges become as the functions distort.
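As a rough sketch of what “sine functions that slowly build on top of each other” might look like in p5.js (my guess at the technique, not LIA’s code), each line below is a sum of several sine terms whose phases drift over time, so the form slowly distorts and sharpens.

// Layered-sine sketch: every horizontal line is a sum of four sine terms whose
// phases drift with time. The parameters are arbitrary choices for illustration.

function setup() {
    createCanvas(600, 400);
    stroke(240);
    noFill();
}

function draw() {
    background(10, 20, 40);
    let t = frameCount * 0.01;
    for (let row = 0; row < 12; row++) {        // stack several wave lines
        beginShape();
        for (let x = 0; x <= width; x += 4) {
            let y = 0;
            for (let k = 1; k <= 4; k++) {      // layer four sine terms per line
                y += sin(x * 0.01 * k + t * k + row) * (20 / k);
            }
            vertex(x, height / 2 + (row - 6) * 20 + y);
        }
        endShape();
    }
}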

Waves by LIA. Photo provided by LIA-something on Flickr.

This distortion is part of the artistic style. I think it’s a conceptual nod to the nature of generative art: we are now using technology to replicate nature and abstract beauty. While it’s impressive to create completely generative installations, it’s also nice to be reminded that technology can’t perfectly replicate nature, so characteristics like pixels are highlighted. I do not, however, agree with the choice to use the piece as a literal backdrop to the band. I felt it cheapened the value of the artwork, which went from having a conceptual purpose to being a background for a main performance. But maybe I wouldn’t think this if the documentation included the band’s music to create a more atmospheric experience.

Links:

Waves

http://www.at-c.org/

http://www.at-c.org/?p=173

https://www.flickr.com/photos/lia_lia/sets/72157675567646274/with/32267611511/

 

rmanagad-Section E-lookingoutwards-02

Creator: Hiroshi Ishii, Xiao Xiao

Title of Work: Andante

Year of Creation: 2014

Link to Project Work: http://tangible.media.mit.edu/project/andante/

Link to Artist Bio: http://tangible.media.mit.edu/person/hiroshi-ishii/ , http://portfolio.xiaosquared.com/About

Andante is an augmented reality program intended for developing beginner piano skills. In developing this learning program, Hiroshi Ishii and Xiao Xiao approached human experience and history from a somatic perspective: constant consideration was given to recognizing the rhythmic flow of the human gait and applying it to learning musical notation. This emphasis on the body-mind is reflected in Ishii’s and Xiao’s other new media work, as they develop spaces, messages, and tools that rely on direct human interaction. As a designer focused on the human sciences and transitions to futures, I’m interested in Andante‘s potential as a person-based contribution towards futures-learning: does the human soma need increased priority in the design process?

According to the paper, Andante: Walking Figures on the Piano Keyboard to Visualize Musical Motion, the program runs both the MIDI playback and the walking animations in Java, responding to each key the player presses.
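The real Andante runs in Java with MIDI output, as the paper describes; purely as a toy illustration in the p5.js used elsewhere on this page, the sketch below advances a marker one “step” along a row of keys on every keypress, standing in for the walking figure. Everything in it is my own stand-in, not the project’s code.

// Toy "walker on a keyboard" sketch: each keypress moves the marker one key to
// the right. No MIDI here -- this only illustrates the key-to-step coupling.

let step = 0;                 // which key the figure is standing on
const KEYS = 14;              // number of keys drawn

function setup() {
    createCanvas(560, 160);
}

function draw() {
    background(30);
    let w = width / KEYS;
    for (let i = 0; i < KEYS; i++) {          // draw a simple keyboard
        fill(i === step ? 200 : 255);         // highlight the current key
        stroke(0);
        rect(i * w, 60, w, 100);
    }
    fill(255, 80, 80);
    noStroke();
    ellipse(step * w + w / 2, 50, 20, 20);    // the "walker" above its key
}

function keyPressed() {
    step = (step + 1) % KEYS;                 // every key hit is one step forward
}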

Looking Outwards-02 Section D

Still from Dreamlines at Minute 1:46

This is a video of a capture of one of the search strings used to create Dreamlines, an interactive net-art work created by Leonardo Solaas in 2005. The search word for this particular video was “plum.”

Leonardo is both an artist and a programmer. He states that, “The user enters one or more words that define the subject of a dream he would like to dream.” At the same time, one cannot be sure whether they are observing their dream, or the computer’s dream and thought process. There is no definitive answer to this uncertainty, just as dreams can be very general and not make logical sense.

I find it beautiful that even an electronically generated work can mimic patterns in nature. This artwork reminds me of spider webs, bacteria, mold, and other organic forms. However, I did notice that the more I picked out single curves and lines in the work, the more aware I became of its unnaturalness. I realized that I enjoyed it more when I saw it as an organism rather than as an imitation of a living thing.