Project 1 – Self Portrait

This is me. 100% real-to-life, unedited, me.

my-true-portrait
/* Lance Yarlott
   Section D */

// declare the globals up front so the sketch also runs in strict mode
var licenseAgreement, lightsOn, eyeLocXL, eyeLocXR, eyeLocY;
var mX, mY;

function setup() {
    createCanvas(600, 600);
    background(0);
    text("p5.js vers 0.9.0 test.", 10, 15);

    licenseAgreement = false;
    lightsOn = false;
    eyeLocXL = (width / 2) - 90; // left eye center
    eyeLocXR = (width / 2) + 90; // right eye center
    eyeLocY = (height / 2) - 90;
}

// big sorry for magic numbers lol

function draw() {
    if (licenseAgreement === false) {
        background(0);
        textAlign(CENTER);
        fill(255);
        textSize(20);
        text("Do you agree to the EULA? Press any key to continue.", 300, 300);
        if (keyIsPressed === true) licenseAgreement = true;
    } else {
        if (mouseIsPressed) {
            if (mouseButton === LEFT) lightsOn = true;
            if (mouseButton === CENTER) lightsOn = false;
        }

        /* I tried to bound mX and mY with the circumference of the eye and 
        failed miserably because p5 calculates the angles in a weird way? */ 
        
        mX = mouseX + 1 < 600 ? mouseX + 1 : 600;
        mY = mouseY + 1 < 600 ? mouseY + 1 : 600;
        
        mX = ((mX / 600) * 36) - 18;
        mY = ((mY / 600) * 36) - 18;

        if (lightsOn) {
            background(255, 244, 176); // light yellow

            strokeWeight(2);
            stroke(0);

            fill(0, 255, 0); // green
            triangle(width / 2, height / 3 + 140, 0, 600, 600, 600);
            fill(0);
            text("i am trgl", width / 2, 500);

            fill(245, 205, 149); // sort of a tan-ish color
            ellipse(width / 2, height / 3, 360, 360); // head shape

            fill(255); // eye whites
            arc(eyeLocXL, eyeLocY, 80, 80, 
                0, PI + QUARTER_PI, CHORD); // taken from p5 ref site
            arc(eyeLocXR, eyeLocY, 80, 80, 
                -QUARTER_PI, PI, CHORD);

            fill(77, 54, 21); // irises
            ellipse(eyeLocXL + mX, eyeLocY + mY, 40, 40);
            ellipse(eyeLocXR + mX, eyeLocY + mY, 40, 40);

            fill(0); // pupils
            ellipse(eyeLocXL + mX, eyeLocY + mY, 20, 20);
            ellipse(eyeLocXR + mX, eyeLocY + mY, 20, 20);

            strokeWeight(40);
            stroke(245, 205, 149);
            noFill();
            ellipse(eyeLocXL, eyeLocY, 120, 120);
            ellipse(eyeLocXR, eyeLocY, 120, 120);
            strokeWeight(2);

            fill(245, 205, 149); // lids
            noStroke();
            arc(eyeLocXL, eyeLocY, 82, 82, 
                PI + QUARTER_PI, 0, CHORD);
            arc(eyeLocXR, eyeLocY, 82, 82, 
                PI, -QUARTER_PI, CHORD);

            fill(77, 54, 21);
            arc(width / 2, height / 3, 360, 360, -PI + QUARTER_PI, -QUARTER_PI);

            fill(0);
            triangle(width / 2, eyeLocY + 30, width / 2 - 25, eyeLocY + 80, 
                     width / 2 + 25, eyeLocY + 80);

            stroke(0);
            strokeWeight(10);
            line(eyeLocXL, height / 3 + 120, eyeLocXR, height / 3 + 120);
            strokeWeight(2);

        } else {
            background(0);
            textAlign(CENTER);
            fill(255);
            textSize(20);
            text("Click, friend.", 300, 300);

            fill(255); // eye whites
            ellipse(eyeLocXL, eyeLocY, 80, 80); 
            ellipse(eyeLocXR, eyeLocY, 80, 80);

            fill(0); // irises
            ellipse(eyeLocXL + mX, eyeLocY + mY, 40, 40);
            ellipse(eyeLocXR + mX, eyeLocY + mY, 40, 40);

            strokeWeight(40);
            stroke(0);
            noFill();
            ellipse(eyeLocXL, eyeLocY, 120, 120);
            ellipse(eyeLocXR, eyeLocY, 120, 120);
            strokeWeight(2);
        }
    }
}

One difficulty was trying to get the eye tracking working. JS calculates angles in a way I didn't expect, so keeping the pupils moving in a proper circle inside the eye was something I couldn't get right in time.
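For anyone hitting the same wall: one common approach is to treat the mouse as a vector from the eye's center and clamp that vector's length, instead of scaling mouseX and mouseY across the whole canvas. This is only a sketch of that idea, not the method used above; the helper name irisOffset and the 18-pixel limit are my own placeholders, reusing the eye variables from the code.

// A minimal sketch: clamp the eye-center-to-mouse vector so the
// iris stays inside the eye, rather than mapping mouseX/mouseY
// independently across the canvas.
function irisOffset(eyeX, eyeY, maxOffset) {
    let dx = mouseX - eyeX;
    let dy = mouseY - eyeY;
    let d = sqrt(dx * dx + dy * dy);   // distance from eye center to mouse
    if (d > maxOffset) {               // too far out: scale the vector back down
        dx = (dx / d) * maxOffset;
        dy = (dy / d) * maxOffset;
    }
    return [dx, dy];
}

// usage inside draw(), in place of the shared mX/mY:
// let [oxL, oyL] = irisOffset(eyeLocXL, eyeLocY, 18);
// ellipse(eyeLocXL + oxL, eyeLocY + oyL, 40, 40);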

Project 1: My Self Portrait

self-portrait
//Jessie Chen
//Section D
function setup() {
    createCanvas(1000, 1000);
    background(252, 247, 135);
    text("p5.js vers 0.9.0 test.", 10, 15);
}

function draw() {
    //shirt
    noStroke();
    fill(250, 221, 187);
    rect(200, 700, 600, 600, 150);
    fill(128, 36, 25);
    rect(650, 700, 40, 300);
    rect(360, 700, 40, 300);
    arc(600, 1000, 150, 100, -PI, 0);
    arc(450, 1000, 150, 100, -PI, 0);
    //hair
    fill(49, 29, 22);
    rect(230, 190, 530, 650, 300, 280, 50, 50);
    //face
    noStroke();
    fill(250, 221, 187);
    beginShape();
        curveVertex(400, 480);
        curveVertex(400, 480);
        curveVertex(430, 600);
        curveVertex(525, 672);
        curveVertex(610, 680);
        curveVertex(685, 610);
        curveVertex(705, 470);
        curveVertex(680, 360);
        curveVertex(625, 285);
        curveVertex(550, 295);
        curveVertex(485, 400);
        curveVertex(400, 450);
        curveVertex(400, 480); 
    endShape();
    //ear
    fill(250, 221, 187);
    ellipse(375, 475, 55, 100);
    ellipse(372, 525, 30, 40);
    quad(350, 490, 380, 560, 420, 580, 700, 435);
    //neck
    fill(49, 29, 35);
    rect(400, 425, 30, 100, 50);
    fill(250, 221, 187);
    rect(440, 580, 155, 400, 0, 50, 50, 50);
    stroke(49, 29, 22);
    strokeWeight(23);
    strokeJoin(ROUND);
    line(345, 517, 405, 573);
    line(405, 573, 446, 645);
    line(446, 645, 445, 829);
    //eyebrow
    noFill();
    stroke(49, 29, 22);
    strokeWeight(7);
    strokeJoin(BEVEL);
    line(635, 400, 685, 393);
    line(685, 393, 720, 405);
    line(560, 400, 515, 390);
    line(515, 390, 480, 405);
    //eyes
    noStroke();
    fill(255);
    ellipse(672, 455, 53, 25);
    ellipse(657, 455, 53, 28);
    stroke(49, 29, 22);
    strokeWeight(5);
    line(695, 460, 680, 469);
    line(694, 462, 699, 466);
    line(685, 466, 690, 471);
    line(676, 471, 679, 476);
    strokeWeight(7);
    strokeJoin(ROUND);
    line(635, 450, 655, 442);
    line(655, 442, 660, 442);
    line(660, 442, 700, 452);
    fill(255);
    fill(49, 29, 22);
    ellipse(658, 455, 20, 20); //iris
    noFill();
    noStroke();
    fill(255);
    ellipse(530, 455, 55, 28);
    ellipse(515, 460, 60, 28);
    strokeWeight(5);
    stroke(49, 29, 22);
    line(480, 460, 500, 470);
    line(481, 462, 476, 466);
    line(490, 466, 485, 470);
    line(499, 470, 496, 474);
    strokeWeight(7);
    strokeJoin(ROUND);
    line(560, 451, 532, 442);
    line(525, 442, 528, 442);
    line(528, 442, 480, 452);
    fill(49, 29, 22);
    ellipse(528, 455, 20, 20); //iris
    //face shadow
    stroke(221, 132, 101);
    line(628, 425, 615, 450);
    line(615, 462, 615, 480);
    fill(221, 132, 101);
    noStroke();
    ellipse(645, 422, 35, 25);
    ellipse(555, 422, 35, 25);
    arc(375, 470, 30, 50, PI, PI / 6, CHORD);
    ellipse( 385, 480, 10, 55);
    fill(128, 36, 25);
    ellipse(640, 425, 15, 10);
    ellipse(560, 425, 15, 10);
    stroke(221, 132, 101);
    line(459, 648, 490, 665);
    line(587, 688, 563, 685);
    //nose
    noStroke();
    fill(221, 132, 101);
    ellipse(605, 520, 25, 25);
    fill(49, 29, 22);
    ellipse(618, 535, 15, 10);
    ellipse(585, 535, 15, 10);
    //mouth
    fill(221, 132, 101);
    ellipse(600, 585, 55, 25);
    ellipse(600, 605, 50, 35);
    stroke(49, 29, 22);
    line(570, 593, 623, 593);
    line(623, 593, 628, 595);
    noStroke();
    fill(128, 36, 25);
    ellipse(605, 588, 30, 5);
    //face highlight
    noStroke();
    fill(255);
    ellipse(595, 520, 10, 5);
    ellipse(600, 573, 25, 5);
    ellipse(586, 600, 25, 5);
    ellipse(610, 463, 5, 40);
    //glasses
    noFill();
    stroke(128, 36, 25);
    ellipse(520, 465, 135, 120);
    ellipse(685, 465, 135, 120);
    line(585, 440, 623, 440);
    line(456, 440, 403, 455);
    line(745, 440, 708, 455);
    stroke(255);
    arc(515, 465, 120, 100, 0, QUARTER_PI);
    arc(680, 465, 120, 100, 0, QUARTER_PI);
    //earrings
    line(375, 520, 375, 600);
    noStroke();
    fill(255);
    ellipse(375, 600, 30, 30);
    noLoop();
}

It was very hard to figure out how to translate and rotate shapes, and I ran out of time, so I had to scrap that idea. But overall this project was really fun :D
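Since the write-up mentions struggling with translate and rotate: the usual p5.js pattern is to move the origin to the shape's center, rotate the coordinate system, draw the shape at (0, 0), and wrap it all in push()/pop() so later drawing is unaffected. A small standalone sketch of that pattern (the angle and rectangle size here are arbitrary choices, not from the project):

// Minimal example of the translate/rotate pattern in p5.js.
// push()/pop() save and restore the transform so only this
// rectangle is rotated; rectMode(CENTER) makes it spin in place.
function setup() {
    createCanvas(400, 400);
    rectMode(CENTER);
}

function draw() {
    background(252, 247, 135);
    push();
    translate(200, 200);     // move the origin to the rectangle's center
    rotate(radians(30));     // rotate the coordinate system 30 degrees
    rect(0, 0, 120, 60);     // draw at the (translated) origin
    pop();                   // restore the original coordinate system
}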

LO-First Week Inspiration

For this topic, I would like to talk about one of my classmates' game projects from 2018.
The project was called Waddlation. The basic gameplay was to find the real player among all the AIs. It was a five-person project that took three weeks to complete; the group had two artists, two programmers, and one sound designer who was also the producer.
As far as I know, they used Unity and Maya with almost no off-the-shelf add-ons. I think the only add-on they used was one that helped with character rigging.
To my knowledge, there are similar game concepts in which one player plays as an object and all the other players try to figure out which one is the fake.
The project was selected for the 2018 ETC festival, where it was played by many guests and received very positive feedback.
I found the link in one member’s portfolio:
https://healthy.artstation.com/projects/JlvnJ0
Healthy Moeung – Character Artist, Rigger
Dong Hyun ‘Shawn’ Kang – Environment Artist
Tracy Chen – Programmer
Yu-Kai Chiu – Programmer
(I couldn't find the fifth member's name.)

LO1- My Inspiration

One of my first introductions to computational art/design was through my interest in ceramics. I started following @turn.studio (Kenny Sing) in high school because I was interested in his geometrically patterned ceramics and was starting to create my own. I would create my patterns through analog measurements and sketched designs, but Sing's use of Illustrator to create perfectly measured designs inspired me to think about how I could improve my overall craft and streamline my working process. Sing throws his pieces, then uses their measurements and images to create a stencil for the design, which is laser cut and applied to the dried or fired piece for carving or glazing. This working method is documented on his social media as a necessary progression from working analog, driven by his need to produce pieces more efficiently to keep up with demand. Not only has his work become increasingly complex with the use of computation to create a more accurate and effective process, but he has also moved from simply creating patterns on his ceramics to creating animations on his pieces.

Although this is more the use of computation in physical art than purely computational art, his work is what inspired me to look outwards from traditional mediums to more modern art forms.

One of Sing’s bowls, image posted March 6th.

LO 1 – My Inspiration

Last winter, I traveled to Japan with my family. One of the places that I had really wanted to visit was teamLab Borderless, a digital art museum experience in Tokyo. TeamLab is a large team of designers, artists, programmers, and other specialists that collaborate on large-scale, immersive projects. The museum was unlike anything I had ever seen before, from its incredible projections to its unconventional layout and interactive spaces.

One part of the exhibition that I found most enjoyable and inspiring was the tea house experience. A tea master and Japanese cultural brand collaborated on “En Tea” House, making use of the transformative qualities of tea to create reactive graphics. They use computer programs, sensors, and projection to alter the location/size of flower blossom graphics according to the tea level. The creators of this project were hoping to share the power, effects, and taste of green tea, and were likely inspired by interactive installations. This exhibit showcases the limitless possibilities of technology and is a prime example of the innovative work that can result from interdisciplinary collaboration.
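The tea house description maps onto a simple input-to-graphics pipeline: read a sensor value, then drive the size and placement of the drawn blossoms from it. Below is a tiny p5.js sketch of that general idea only, using mouseY as a stand-in for a tea-level sensor; the mapping and the flower drawing are entirely my own placeholders, not how teamLab built the installation.

// Illustrative only: a simulated "tea level" drives the size of a blossom.
// mouseY stands in for whatever sensor the real installation reads.
function setup() {
    createCanvas(400, 400);
    noStroke();
}

function draw() {
    background(20);
    // pretend the sensor reports a level between 0 (empty) and 1 (full)
    let teaLevel = map(mouseY, 0, height, 1, 0, true);
    let petals = 6;
    let size = lerp(10, 80, teaLevel);      // fuller cup, bigger blossom
    translate(width / 2, height / 2);
    fill(255, 150, 200, 200);
    for (let i = 0; i < petals; i++) {
        rotate(TWO_PI / petals);
        ellipse(size * 0.6, 0, size, size * 0.5);  // one petal
    }
    fill(255, 220, 120);
    ellipse(0, 0, size * 0.4);              // blossom center
}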

teamLab Borderless exhibit
En Tea House exhibit

teamLab; teamLab Borderless

LO 1 – My Inspiration

Kyuha Shim: Designer in Residence


Computational design created using Facebook emojis

Kyuha Shim is a professor at the Carnegie Mellon School of Design, teaching in the Communications track. He is an active member of the graphic design community, incorporating computer algorithms to explore how computation and designers can work together. He states that "code is just a medium. Whether using it in a creative manner or efficiency-focused manner is up to a designer." In this project, Kyuha uses data and code to recreate images with Facebook emojis.

I admire this project because the use of computational tools was not required. Kyuha could have just iterated on a few compositions by hand instead of writing code that generates "thousands of compositions in seconds." I admire him as a designer: he doesn't limit himself to the conventional tools of design but also codes.

Kyuha was the only designer involved in completing this project. Unfortunately, I do not know how long the project took, but I believe it required him to use "off-the-shelf" software to streamline the process. Kyuha might have been inspired by data visualization, because this project embodies that method at its core. This project (for me) was a realization that coding can really enhance the design process and work harmoniously with it as a toolset.
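I don't know how Shim's tool actually works, but the general "rebuild an image out of small glyphs" idea can be sketched in a few lines of p5.js: sample the source image on a grid and draw a character whose choice depends on the sampled brightness. Everything below (the image file name, the grid size, the glyph choices) is a made-up illustration of the concept, not his method.

// Illustrative only: grid-sample an image and redraw it with small glyphs.
// "portrait.jpg" and the glyph choices are placeholders, not Shim's tool.
let img;

function preload() {
    img = loadImage("portrait.jpg");   // hypothetical source image
}

function setup() {
    createCanvas(400, 400);
    noLoop();
}

function draw() {
    background(255);
    textAlign(CENTER, CENTER);
    let step = 12;                     // grid cell size in pixels
    textSize(step);
    for (let y = 0; y < height; y += step) {
        for (let x = 0; x < width; x += step) {
            // map the canvas cell back onto the source image
            let ix = floor(map(x, 0, width, 0, img.width - 1));
            let iy = floor(map(y, 0, height, 0, img.height - 1));
            let c = img.get(ix, iy);                  // [r, g, b, a]
            let bright = (c[0] + c[1] + c[2]) / 3;
            // darker pixels get a denser glyph
            let glyph = bright < 85 ? "😠" : bright < 170 ? "🙂" : "·";
            text(glyph, x + step / 2, y + step / 2);
        }
    }
}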

View video here

LO 1 – My Inspiration

The Dancing Salesman, Problem (2011) by Simon Colton

The project I find interesting is The Dancing Salesman, Problem (2011) by Simon Colton. Colton wrote his own software, called The Painting Fool, that turns a photograph into an artistic piece by sourcing information and materials online; the goal of the software is to be taken seriously as a creative artist in its own right. The figures in The Dancing Salesman, Problem were generated using a context-free design grammar. I found this piece extremely interesting because of the debate about whether artificial, inhuman software can be considered a "real" artist, especially since The Painting Fool requires very minimal assistance and direction to create and run on its own. It also raises the question of what is considered "art." Is it dependent on who or what the creator is? For me, art evokes some form of emotional reaction, and The Painting Fool has actually been able to do this after being paired with Maja Pantic's emotion-detection software.

LO 1 Inspiration

A little over a year ago I went to Paris with my mom and we visited the Musée du Quai Branly – Jacques Chirac. There was an installation there called "The River" that generated, in real time, the names of all the people, cultural groups, and locations that are named or appear in the museum, projected as a river that flowed down a winding ramp. All of the words would randomly generate at the beginning of the stream and flow as if pulled by gravity to the end, bumping into each other and clumping together along the way. The foreword for the piece says that it was made by Charles Sandison, and the programs used to make the projected river were written in C++. I would say the piece points to the possibility of more generative art within museums in the future, so that more life and movement can be brought to the still artifacts and words that make up history museums. I can't say who might've inspired the artist because I'm not very familiar with generative art or artists. This piece caught my attention because it was very mysterious. You had to follow the river and look closely at all the words, which were difficult to see, to understand what they meant. It was very different from everything else in the museum, but at the same time it tied everything together by documenting all of the museum's contents in this unique way.

http://www.quaibranly.fr/fileadmin/user_upload/1-Edito/6-Footer/5-Les-espaces/4-Rampe/A-foreword-by-Charles-Sandison-EN.pdf
http://www.quaibranly.fr/en/public-areas/the-river/

https://www.youtube.com/watch?v=u_LAOv4ZqSQ
https://www.youtube.com/watch?v=H1dOk2nKSJI

LO-01-My Inspiration

The piece of art I’m writing about is the work of Giada Sun (media designer), Sean B. Leo (assistant media designer), and SooA Kim (media engineer) on A/B Machines (a School of Drama production in 2018).

I did assist the dramaturg on this show, but I wasn't involved in the media design or engineering at all, so I experienced it as an audience member. It's also just the first thing that comes to mind when I think of technology and art, even though I'm sure I've seen other examples of it.

It was about a 7-month process. Essentially, there were selfie camera stations set up for audience/actor use prior to the start of the show, live cameras capturing footage onstage, and projection screens and television screens streaming all of this footage throughout the show. It was insane. Every performance, the media team had to use their technological magic (not sure if they developed software for this) to compile the audience selfies taken at the beginning of the show for the big reveal at the end.

The characters were also continually operating handheld cameras that were wired to TV screens throughout the set. The work itself was largely inspired by Andy Warhol. But the media design shared a lot of attributes with the show Network. I think it’s brilliant to be able to see so many different views of this world at the same time, especially in an art piece all about overexposure and the digitization of personality. In another world and another venue, the entire production would’ve been immersive.

This is Giada’s website on the production: https://giada1198.github.io/Giada-Portfolio/works/ABMachines/

A/B Machines. By Phillip Gates, Dir. Phillip Gates. November 2018. The Helen Wayne Rauh Studio Theater, Pittsburgh, PA.