LO-02 Generative Art

Glenn Marshall created this generative art piece in 2015 as a visual rendering for the Radiohead song Codex. Marshall made it using the Python programming language inside Cinema 4D, combining three of his previous pieces: Particle Man, Star Girl, and Universe Hand. I admire the power this piece has to captivate viewers. Once you see the first few seconds, you are drawn in by the movement of each element and compelled to keep watching. Each shot starts as a chaos of dots and lines that leads the viewer's eye along until the main image is formed.

I am also inspired by his attention to detail in telling a story. In Universe Hand, for example, Marshall sets the "particles" to expand and contract in sync with the Indian breathing technique "Pranayama," paralleling the "scientific model of an infinitely expanding and contracting cycle of the universe." This also helps achieve his goal of creating meditative and philosophical narratives. He manifests this in the algorithm by syncing audio and visuals to create a calming effect that sparks that meditative state.

Looking Outwards 02: Generative Art

Design takes many mediums, some of which can't even be seen by the human eye; by using code, Joshua Davis used music to create art. He is a designer with a MoMA class on designing with sound, called Painting with Sound, in which each piece is made differently and reacts differently. Specifically, The Lament Configuration (2015) is a sporadic example of his computational design. The video is a frantic collage of abstract shapes and their relationships with the music. Incorporating an acoustic aspect gives the transitions and dynamics of each shape a purpose. The smoothness of the animation and the matching of the beat to the piece give it a coherent flow when watching the video. Not only does the viewer see and hear the beats, but the way the shapes move touches the viewer.

If coded, the animation was likely reacting to something specific about the music. There could have been a variable that changed every time the music changed: when the beat was louder or more intense, the shape grew larger. In addition, the shape was constantly spinning. The spinning component could be a variable that changes the coordinates of the shape, and the enlargement could be a variable driven by sound. The red of the background stayed constant, but the shape affected by the music kept changing perspective, which changed how light hit it. Perhaps when the shape reaches a certain size or coordinate, the color changes, using an if/else condition.
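The guesses above can be sketched as a few small helper functions. This is only a hypothetical reconstruction, not Davis's actual code: in p5.js, the p5.sound library's Amplitude.getLevel() returns the current loudness as a number between 0 and 1, and the helpers below (all names invented here) show how that level could drive size, spin, and an if/else color switch.

```javascript
// Map a loudness level (0..1) onto a diameter between minSize and maxSize,
// so a louder beat draws a larger shape.
function sizeFromLevel(level, minSize, maxSize) {
  return minSize + (maxSize - minSize) * level;
}

// Advance the spin angle each frame; the shape spins constantly
// (baseSpeed) and spins faster when the music is louder.
function nextAngle(angle, baseSpeed, level) {
  return angle + baseSpeed * (1 + level);
}

// Switch palettes with an if/else once the shape grows past a threshold,
// returning an [r, g, b] triple.
function colorForSize(size, threshold) {
  if (size > threshold) {
    return [255, 60, 60];   // intense beat: bright accent color
  } else {
    return [200, 200, 200]; // quiet passage: muted gray
  }
}
```

Inside a p5.js draw() loop, one would read the amplitude each frame, then call something like rotate(nextAngle(...)) and ellipse(0, 0, sizeFromLevel(level, 40, 200)) to get the pulsing, spinning effect described above.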

Joshua Davis has a different piece for every song he analyzes, but not only changing colors or shapes but the format of his piece. From the way the shapes are organized to how they move, each piece he has emits a different emotion, the same way music channels different emotions in everyone.

https://joshuadavis.com/MoMA-Classes-Painting-with-Sound

LO-02: Generative Art

#Deepdream is blowing my mind.

Memo Akten 

2015.

Memo Akten’s #Deepdream is blowing my mind intrigued me by the way he used this project to reveal our confirmation biases and strong urge to reflect something familiar onto something that is simply “noise with particular distributions.” The combination of art, perception, and machine learning makes this project philosophical yet compelling and interactive. By watching the algorithm produce new combinations of patterns, it pushed me to seek familiarity in the array of colors and designs. Akten uses a video of his face, an unfamiliar image to the machine, which then uses familiar images to generate patterns that it perceives and recognizes in his face. In this way we can see the similarities and differences between machine biases and our human confirmation biases when presented with an unfamiliar image. 

Akten manifests his ideas in the algorithm by revealing the poetry of the algorithm’s processes in comparison to how we as humans perceive and reflect on our world. Akten’s exploration of machine learning and its relationship to human learning allows us to understand different modes of thinking and how the process can be quite similar with different results. 

The deepdream algorithm gives further insight to how we make meaning of the things around us by using the algorithm to show clearly how we engage and understand the images created by it, and how our biological neural network arrives at the conclusions it does. This project uses machine learning to help us better understand our own human learning process and consciousness.

Memo Akten’s #Deepdream is blowing my mind.

Project 2 – Variable Face

//Joseph Kim
//Section D
var eyeWidth = 45;
var eyeHeight = 40;
var faceWidth = 200;
var faceHeight = 260;
var mouthWidth = 60;
var mouthHeight = 30;
let eColor = 180;  //eyes
let fColor = 80;   //face
let bgColor = 0;   //background
let mColor = 120;  //mouth
let earColor = 40; //ear
let hColor = 20;   //hair
let bColor = 0;    //eyebrow
let iColor = 0;    //iris

function setup() {
    createCanvas(640, 480);
    background(220);
    text("p5.js vers 0.9.0 test.", 10, 15);
}

function draw() {
    background(bgColor);
    noStroke();
    fill(fColor);
    rect((width / 2) - (faceWidth / 2), (height / 2) - (faceHeight / 2), faceWidth, faceHeight / 2);
    ellipse(width / 2, height / 2, faceWidth, faceHeight);
    //face

    fill(eColor);
    var eyeLX = width / 2 - faceWidth * 0.25; 
    var eyeRX = width / 2 + faceWidth * 0.25;
    arc(eyeLX, (height / 2) - 10, eyeWidth, eyeHeight, 0, PI, CHORD); 
    arc(eyeRX, (height / 2) - 10, eyeWidth, eyeHeight, 0, PI, CHORD);
    //eyeball

    fill(earColor);
    var earLX = (width - faceWidth) / 2;
    var earRX = width - ((width - faceWidth) / 2);
    arc(earLX, height / 2, 30, 30, PI / 2, radians(270), CHORD);
    arc(earRX, height / 2, 30, 30, radians(270), PI / 2, CHORD);
    //ears

    stroke(bColor);
    strokeWeight(5);
    strokeCap(SQUARE);
    var eyebrowLX = width / 2 - faceWidth * 0.35;
    var eyebrowRX = width / 2 + faceWidth * 0.35;
    line(eyebrowLX, (height / 2) - faceHeight / 6, eyebrowLX + (faceWidth * 0.2), (height / 2) - faceHeight / 6);
    line(eyebrowRX, (height / 2) - faceHeight / 6, eyebrowRX - (faceWidth * 0.2), (height / 2) - faceHeight / 6);
    //eyebrow
    
    fill(iColor);
    noStroke();
    circle((width / 2) - (faceWidth * 0.25), height / 2, 10);
    circle((width / 2) + (faceWidth * 0.25), height / 2, 10);
    //iris
    
    noStroke();
    fill(hColor);
    arc((width - faceWidth) / 2, ((height - faceHeight) / 2), faceWidth, 90, 0, PI / 2, PIE);
    arc(width - ((width - faceWidth) / 2), ((height - faceHeight) / 2), faceWidth, 90, PI / 2, PI, PIE);
    //hair

    noStroke();
    fill(mColor);
    arc((width / 2), (height / 2) + (faceHeight / 4), mouthWidth, mouthHeight, 0, PI);
    //mouth

}

function mousePressed() {
    // Randomize the face proportions and colors on each click.
    faceWidth = random(120, 300);
    faceHeight = random(200, 400);
    eyeWidth = random(20, 70);
    eyeHeight = random(40, 70);
    mouthWidth = random(10, 100);
    mouthHeight = random(10, 80);
    eColor = color(random(255), random(255), random(255));
    fColor = color(random(255), random(255), random(255));
    bgColor = color(random(255), random(255), random(255));
    mColor = color(random(255), random(255), random(255));
    earColor = color(random(255), random(255), random(255));
    hColor = color(random(255), random(255), random(255));
    bColor = color(random(255), random(255), random(255));
    iColor = color(random(255), random(255), random(255));
}

I first began with the sample code provided on the website. Then I began customizing the shapes and added facial features that I thought would be interesting. I found that using variables instead of specific coordinate points is actually easier to work with because I don't have to be attentive to decimal points. A good part of my process was trial and error, learning from mistakes. To add more visual interest, I had to do some research on how to apply variable color.

Looking Outwards 2: Generative Art

A project I chose to highlight is Robert Hodgin's Traffic simulation. What I found so interesting about this particular project is how it captures the human element through a computer program. Hodgin's work typically reflects animal phenomena, but the traffic simulation really stood out to me.


Unlike his other portfolio pieces, it captures the uniquely human aspect of personality. The "drivers" each have their own aggressiveness, which is reflected in their driving decisions throughout the simulation. Unlike other visual traffic simulations, the cars aren't moving in a uniform stream: some drive faster than others, and some cut others off or run yellow lights.

In his portfolio, Robert Hodgin describes what went into the algorithm and which traffic rules were taken into account. Beyond the expected traffic lights, some drivers choose to turn left or right and shift lanes. Still, what I think is the artistic element of this generative artwork is the aggressiveness factor the drivers have.
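To make the idea concrete, here is a minimal sketch of how a per-driver aggressiveness factor could shape behavior. None of these function names or formulas come from Hodgin's actual code; they are assumptions meant only to illustrate how one number in [0, 1] can produce the varied driving described above.

```javascript
// Each driver's cruising speed scales with their aggressiveness,
// so the stream of cars is never uniform.
function cruiseSpeed(baseSpeed, aggressiveness) {
  return baseSpeed * (1 + 0.5 * aggressiveness);
}

// At a yellow light, a driver rushes through if their aggressiveness
// exceeds how far the yellow phase has already elapsed (0..1).
function runsYellow(aggressiveness, fractionOfYellowElapsed) {
  return aggressiveness > fractionOfYellowElapsed;
}

// A driver changes lanes only if the gap ahead is big enough for them;
// aggressive drivers accept smaller gaps, which reads as cutting others off.
function changesLane(gap, minGap, aggressiveness) {
  return gap > minGap * (1 - 0.5 * aggressiveness);
}
```

Running the same simulation with a random aggressiveness per car would already yield faster and slower drivers, rushed yellows, and close cut-offs, which is the "personality" the piece conveys.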

Section C, Looking Outwards-02

3D VectorField Animation – Step 3. Mathematic from JeongHo Park on Vimeo.

The work I found inspirational was done by an artist named JeongHo Park. The two works I will refer to were done very recently, in September 2020. Inspired by science and mathematics, he created a program composed of grids, lines, and dots. Most of his work reflects 3D space, and while the dots and lines seem chaotic at first, they all move together to suggest space and form. There is rhythm and cohesion in the way the dots and lines move together while still catching the viewer's eye individually. I really admire how Park was able to control all the lines and dots and give each part of the program personality and flair. Park used openFrameworks to create this project, and he used 3D models to work out the general idea of the movements. I think small things like color, stroke weight, and even the momentum of the dots rushing past the canvas are all artistic choices Park made to personalize his work.

gravity_photon from JeongHo Park on Vimeo.

Project-01-Face

This is what I look like when I wonder when covid-19 will end.

//Jiaqi  Self-Portrait
function setup() {
    createCanvas(500, 500);
    background(220);
    text("p5.js vers 0.9.0 test.", 10, 15);
}

function draw() {
	scale(0.3);
	background(224,164,151);
	fill(96,76,63);
	noStroke();
	//me
	//hair 
	ellipse(700,800,710,840);
	rect(200,750,840,850);

	//shadow
	fill(57,39,21);
	ellipse(657,820,498,588);
	rect(320,820,588,762);
	fill(244,211,158);
	ellipse(540,1538,172);
	rect(454,1292,172,352);
	fill(216,175,142);
	ellipse(556,1296,204,142);

	//face
	fill(244,211,158);
	ellipse(626,1021,650,650);

	//shirt
	fill(0,105,52);
	quad(255,1516,817,1516,951,1699,155,1699);
	fill(216,175,142);
	rect(404,993,478,223);

	//mask string
	noFill();
	stroke(46,167,224);
	strokeWeight(12);
	ellipse(362,1069,247,183);
	noLoop();
	noStroke();

	//ear
	fill(244,211,158);
	ellipse(271,1048,121,226);
	ellipse(540,1518,172);

	//mask
	fill(175,198,235);
	rect(422,960,550,236);

	//eyes
	fill(0);
	ellipse(510,897,32,75);
	ellipse(833,898,32,75);

	//eyelash
	stroke(0);
	strokeWeight(3);
	line(486,875,505,870);
	line(858,877,834,874);
	noFill();

	//eyebrows
	stroke(96,76,63);
	strokeWeight(5);
	arc(500,804,80,50,PI,TWO_PI);
	arc(830,780,80,50,0,PI);
	noStroke();
	
	//bird
	fill(248,235,224);
	ellipse(915,422,190,157);
	ellipse(965,353,88);
	triangle(758,438,839,380,849,480);
	fill(0);
	ellipse(943,352,10);
	ellipse(1002,352,10);
	fill(224,164,151);
	triangle(965,353,987,353,976,372);

	




}

LO 1 – Inspiration

A photo I took in the Infinite Crystal Universe around 2016.

About 4 years ago, my family and I visited a TeamLab exhibit in Menlo Park, California. TeamLab is a group known for interactive spaces, sculptures, and exhibits, many of which immerse gallery visitors in an ethereal dimension.

The TeamLab exhibition in Pace Gallery featured unique exhibits such as a "flower room," where flowers were projected onto the floor from ceiling projectors; every now and then some would disappear and pop up elsewhere. We ventured into a room filled with strands of crystal lights hanging straight down from the ceiling, meant to portray an infinite-looking space. Although this was one of TeamLab's better-known exhibits, it wasn't among my favorites.

A small exhibit meant for little kids actually intrigued me the most. Kids colored in templates of boats, buses, and cars and then fed them through a scanner. A giant projected screen, moments later, showed the kids’ drawings being transformed into digital models of cars bumbling about in a busy city.

There is very little information about the processes the members of TeamLab use to create these augmented-reality works, but their website mentions their interdisciplinary skills including programmers, CG animators, and mathematicians. Yayoi Kusama, a contemporary Japanese sculptor and artist, creates work similar to that of TeamLab. The work they produce has serious potential for the future, especially if virtual or augmented reality continue to grow in popularity and accessibility.

Project 01 – Self Portrait

function setup() {
    createCanvas(400, 400);
    background(220);
    text("p5.js vers 0.9.0 test.", 10, 15);
}

function draw() {
	background(158, 231, 234);
	//noStroke();
	stroke(25);
	strokeWeight(3);
	fill(221, 190, 140);
	circle(285, 210, 50); //r ear
	rect(110, 125, 180, 200, 20, 20, 80, 80); //face
	fill(230, 210, 203);
	circle(160, 195, 50);// l eye
	circle(240, 195, 50);//r eye
	fill(96, 81, 76);
	rect(110, 65, 165, 60, 60, 10, 0, 0); //main hair
	rect(280, 130, 15, 80, 0, 0, 15, 0); //lower hair
	fill(221, 155, 140);
	arc(200, 230, 30, 30, 1.57, -1.57); //nose
	line(180, 270, 220, 270); //mouth
	noStroke();
	fill(255, 255, 255);
	arc(160, 195, 40, 40, -2.356, 0.785);//left eye reflection
	arc(240, 195, 40, 40, -2.356, 0.785);//right eye reflection
	//line(200, 0, 200, 400);
	//line(0, 200, 400, 200);

}

Week01_Self-Portrait


function setup() {
    createCanvas(600,600);
    background(220);
    text("p5.js vers 0.9.0 test.", 10, 15);
}

function draw() {
	strokeWeight(0)

	fill(20,20,20)
	rect(65,300,450,280);

	fill(239,192,136)
	ellipse(300,300,450,500);

	fill(39,25,150)
	triangle(210,210,140,300,360,280)
	triangle(460,210,370,300,500,320)
	triangle(370,300,320,370,360,400)
	fill(0)
	circle(200,280,15)
	circle(450,280,15);

	fill(237,220,191)
	triangle(380,290,370,400,400,400);

	fill(175,125,84)
	ellipse(350,480,30,15);

	fill(16,11,30)
	triangle(230,50,50,310,150,220)
	triangle(230,50,60,280,250,140)
	triangle(230,50,470,60,510,230)
	triangle(260,40,60,110,50,320);

	fill(39,16,103)
	rect(140,350,150,100)
	fill(223,93,194)
	rect(180,330,70,60)
	fill(237,155,207)
	rect(230,320,45,45)
	fill(248,172,240)
	rect(240,330,50,12)
	fill(39,25,150)
	rect(400,360,60,70)
	fill(223,140,220)
	rect(420,350,80,50)
	fill(248,172,250)
	rect(420,330,45,60);

	fill(39,16,80)
	rect(50,230,30,120)
	fill(45,70,50)
	rect(60,200,50,200)
	fill(39,16,90)
	rect(470,100,50,200)
	fill(45,70,50)
	rect(490,120,60,220);

	noLoop();
}