Jonathan Liang – Looking Outwards-02

digital DADA

DATA-DADA is a series of algorithms created by Holger Lippmann that read different color map arrays and represent them as circles and lines. Lippmann has created three series of these maps. I found them interesting because in my architecture studio right now we are studying mapping, and these maps can serve as inspiration for me and my projects. One thing that could be better about this series of projects is if they specified what the data is about or what they are representing. But I do love the ambiguity that these circles and lines create. I do not specifically know what inspired Mr. Lippmann, but his DADA series reminds me a lot of Takashi Murakami’s superflat style. Both artists have a very similar aesthetic, even though Lippmann utilizes data rather than made-up characters.

http://www.lumicon.de/wp/?p=3397


Grasshopper and Generative Art

John Locke’s emoticon-constructed skull

If you look closely, this skull is made up of little faces. Some are happy, exposing the reds of their mouths. Others are unhappy, pursing their lips into mostly yellow circles. What I admire is that each of these faces has individual parameters for its mouth and eyes based on how happy it is. Take a greyscale image, remap each pixel’s value to happiness, and you get a skull made up of faces.
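
That workflow can be sketched outside of Grasshopper, too. The snippet below is my own toy illustration of the brightness-to-happiness remap in plain JavaScript; the function names and the 0.5 threshold are assumptions for illustration, not Locke’s actual definition:

```javascript
// Map a greyscale value (0 = black, 255 = white) to a happiness score in [0, 1].
// Dark pixels read as "unhappy", bright pixels as "happy".
function happinessFromBrightness(b) {
  return Math.min(Math.max(b / 255, 0), 1);
}

// Derive face parameters from happiness: a happy face gets a wide-open mouth
// that shows red; an unhappy one gets a small, pursed mouth and squinting eyes.
function faceParams(happiness) {
  return {
    mouthOpen: happiness,        // 0 = pursed, 1 = wide open
    mouthRed: happiness > 0.5,   // only open mouths expose red
    eyeSquint: 1 - happiness     // unhappy faces squint more
  };
}

// Sample a tiny 2x2 "image" into a grid of face parameters.
const pixels = [[0, 128], [255, 64]];
const faces = pixels.map(row =>
  row.map(b => faceParams(happinessFromBrightness(b)))
);
```

Each cell of `faces` now holds the parameters for one little face; drawing them in a grid reproduces the image.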

Parametric controls for each face

These were generated in Grasshopper, a plugin for the CAD and surface-modeling program Rhinoceros, and that short summary above was the workflow. Since Grasshopper is a visual programming language, like parts of the Unreal Engine, you just wire components into components.

The Grasshopper program

This specific project was made by architect John Locke in March 2013, and his blog post was titled “=)”. What I admire is that the faces are very expressive with very little, and what’s there is well-articulated. The eyes and mouth aren’t pinned in place; they shift. It also shows a less serious side of the artist. A lot of his work relates to architecture, and this is a fun side project.

KadeStewart-Project02-VariableFaces

sketch

//Kade Stewart
//Section B 9:30
//kades@andrew.cmu.edu
//Assignment-02-B

//randomized variables
var facewidth = 10;
var faceheight = 10;
var eyex = 5;
var eyey = 5;
var eyewidth = 5;
var eyeheight = 5;
var pupilsize = 5;
var r = 0;
var g = 0;
var b = 0;
var fr = 30;
var mouthsize = 0;
var mouthx = 0;
var mouthy = 0;
var hat = -1;
var mouth = -1;
var frame = 0;

//non-randomized variables
var ang = 0;
var ang2 = 0;


function setup() {
    createCanvas(400, 400);

    rectMode(CENTER);
    noStroke();
}

function mousePressed() {
	fr = 30;
	//reset the animation variables
	ang = 0;
	ang2 = 0;
}

function draw() {
	//reduces the frames to slowly stop "loading screen"
	if (fr > 0) {
		fr--;
		frameRate(fr);
		hat = -1;
		mouth = -1;
	}

	//only redraw new face if the mouse has been clicked recently
	if (fr > 0) {
		//"reset" the page
		noStroke();
		fill(170,180,210);
    	rect(width/2, height/2, width, height, 20);

		//randomize face, eyes, mouth
	    facewidth = random(150,250);
	    faceheight = random(150,250);
	    r = random(200, 225);
	    g = random(200, 225);
	    b = random(200, 225);

	    eyewidth = random(25, 35);
	    eyeheight = random(25, 35);
	    eyex = random(5, 35);
	    eyey = random(-35, 35);
	    pupilsize = random(10, 12);

	    
	    if (mouth < 1) {
	    	mouthsize = random(10, 20);
	    	mouthx = random(width/2 - facewidth/4, width/2);
	    	mouthy = random(height/2 + eyeheight/2 + eyey + 5, height/2 + faceheight/4);
	    }
	    if (mouth >= 1) {
	    	mouthsize = random(10, 20);
	    	mouthx = random(width/2, width/2 + facewidth/4);
	    	mouthy = random(height/2 + eyeheight/2 + eyey + 5, height/2 + faceheight/4);
	    }


	    //draw head
	    fill(r, g, b);
	    rect(width/2, height/2, facewidth, faceheight, faceheight/5);

	    //eyes
	    fill(255);
	    rect(width/2 - 15 - eyex, height/2 - 20 + eyey, eyewidth, eyeheight, 15);
	    rect(width/2 + 15 + eyex, height/2 - 20 + eyey, eyewidth, eyeheight, 15);
	    fill(0);
	    rect(width/2 - 15 - eyex, height/2 - 20 + eyey, pupilsize, pupilsize, 5);
	    rect(width/2 + 15 + eyex, height/2 - 20 + eyey, pupilsize, pupilsize, 5);

	}



	//ending animation
	if (fr == 0) {
		//restore the frame rate
		frameRate(30);
		frame++;

		//create the yellow lines
		noFill();
		stroke(253, 253, 150);
		strokeWeight(4);
		for (var i = 1; i <= 16; i++) {
			line(width/2 + (ang * cos(2 * PI * (i/16))), 
				 height/2 + (ang * sin(2 * PI * (i/16))),
				 width/2, height/2);
		}
		
		//speed for the lines in the animation
		if (ang > -width) {
			ang -= 12;
		}

		noStroke();
		fill(170,180,210);
		rect(width/2, height/2 + faceheight/2 + 30, 250, 40, 10);
		if (frame >= 10) {
			fill(255);
			textAlign(CENTER);
			textStyle(BOLD);
			textSize(32);
			text("It's party time!!", width/2, height/2 + faceheight/2 + 40);
		}
		if (frame == 18) {
			frame = 0;
		}

		//start to delete the lines when the lines are drawn
		if (ang <= -width) {
			//circle that covers/"deletes" the lines
			noStroke();
			fill(170,180,210);
    		ellipse(width/2, height/2, 2*ang2, 2*ang2);

    		//deletion speed
			if (ang2 > -width*(2/3)) {
				ang2 -= 8;
			}
		}

		noStroke();

		//make the background nice after the animation
		if (ang2 <= -width * (2/3)){
			background(255);
			fill(170,180,210);
    		rect(width/2, height/2, width, height, 20);
		}

		//draw head
	    fill(r, g, b);
	    rect(width/2, height/2, facewidth, faceheight, faceheight/5);

	    //eyes
	    fill(255);
	    rect(width/2 - 15 - eyex, height/2 - 20 + eyey, eyewidth, eyeheight, 15);
	    rect(width/2 + 15 + eyex, height/2 - 20 + eyey, eyewidth, eyeheight, 15);
	    fill(0);
	    rect(width/2 - 15 - eyex, height/2 - 20 + eyey, pupilsize, pupilsize, 5);
	    rect(width/2 + 15 + eyex, height/2 - 20 + eyey, pupilsize, pupilsize, 5);

	    if (ang2 <= -width/2) {
	    	//randomizes the side the hat & mouth are on
	    	if (hat == -1) {
	    		hat = random(0, 2);
	    	}
	    	if (mouth == -1) {
	    		mouth = random(0,2);
	    	}

	    	//makes sure the mouth isn't too far down
	    	if (mouthy + mouthsize/2 > height/2 + faceheight/2) {
		    		mouthy = height/2 + faceheight/2 - mouthsize/2 - 5;
		    }

		    //draw party hat
	    	if (hat >= 1) {
	    		fill("red");
	    		triangle(width/2 + facewidth/2 - faceheight/5, height/2 - faceheight/2,
		    		 width/2 + facewidth/2, height/2 - faceheight/2 + faceheight/5,
		    		 width/2 + facewidth/2 + 20, height/2 - faceheight/2 - 20);
	    		fill("yellow");
		    	ellipse(width/2 + facewidth/2 + 20, height/2 - faceheight/2 - 20, 25);

	    	} else {
	    		fill("red");
	    		triangle(width/2 - facewidth/2 + faceheight/5, height/2 - faceheight/2,
		    		 width/2 - facewidth/2, height/2 - faceheight/2 + faceheight/5,
		    		 width/2 - facewidth/2 - 20, height/2 - faceheight/2 - 20);
	    		fill("yellow");
		    	ellipse(width/2 - facewidth/2 - 20, height/2 - faceheight/2 - 20, 25);

	    	}

	    	//draw mouth on left side of face
	    	if (mouth < 1) {
	    		fill(25);
		    	rect(mouthx, mouthy, mouthsize, mouthsize, 
		    		 mouthsize/5, mouthsize/5, mouthsize/2, mouthsize/5);

	    	} else {
	    		//draw mouth on right side of face
				fill(25);
		    	rect(mouthx, mouthy, mouthsize, mouthsize, 
		    		 mouthsize/5, mouthsize/5, mouthsize/5, mouthsize/2);
	    	}

	    }
	    
	}
	

    
}

In the beginning, I knew that I wanted to add an interesting transition because randomization makes transitions look cool. After I finished the transition, I realized that my project had an overall happy feel. I added a party hat and text to make sure that the user knew that they were supposed to be having a fun time with all the faces they generated.

Justin Yook – Project 02 – Variable Faces

jyook_VariableFaces

//Justin Yook
//Section C
//jyook@andrew.cmu.edu
//Project-02

//aspects of variability
var faceWidth = 150;
var faceHeight = 150;

var faceColorR = 255;
var faceColorG = 255;
var faceColorB = 255;

var topHeadWidth = 150;
var topHeadHeight = 150; 

var eyeWidth = 15;
var eyeHeight = 15;

var eyeColorR = 0;
var eyeColorG = 0;
var eyeColorB = 0; 

var pupilWidth = 5;
var pupilHeight = 5;

var pupilColorR = 0;
var pupilColorG = 0;
var pupilColorB = 0; 

var eyebrowThick = 3;

var noseWidth = 8; 
var noseHeight = 8;

function setup() {
    createCanvas(640, 480);
}

function draw() {
    background(faceColorR * (1/6), faceColorG * (1/6), faceColorB * (1/6));

    //draw general face
    noStroke();
    fill(faceColorR, faceColorG, faceColorB);
    ellipse(width/2, height/2, faceWidth, faceHeight);

    //draw top of head
    noStroke();
    fill(faceColorR, faceColorG, faceColorB);
    ellipse(width/2, height/2.5, topHeadWidth, topHeadHeight);

    //draw eyes and eye color
    noStroke();
    var eyeLx = width / 2 - faceWidth * 0.25;
    var eyeRx = width / 2 + faceWidth * 0.25;
    fill(eyeColorR, eyeColorG, eyeColorB);
    ellipse(eyeLx, height / 2, eyeWidth, eyeHeight);
    ellipse(eyeRx, height / 2, eyeWidth, eyeHeight);

    //draw eyebrow and its thickness
    noStroke();
    var browLx = eyeLx - 13;
    var browRx = eyeRx - 13;
    fill(faceColorR * (1/2), faceColorG * (1/2), faceColorB * (1/2));
    rect(browLx, height / 2.2, 25, eyebrowThick);
    rect(browRx, height / 2.2, 25, eyebrowThick);

    //draw pupils
    noStroke();
    fill(eyeColorR * 2, eyeColorG * 2, eyeColorB * 2);
    ellipse(eyeLx, height / 2, pupilWidth, pupilHeight);
    ellipse(eyeRx, height / 2, pupilWidth, pupilHeight);

    //draw nose
    noStroke();
    fill(faceColorR * (1/2), faceColorG * (1/2), faceColorB * (1/2));
    ellipse(width / 2, height / 1.85, noseWidth, noseHeight);

    //draw mouth
    noFill();
    stroke(faceColorR * (1/2), faceColorG * (1/2), faceColorB * (1/2));
    strokeWeight(2);
    arc(width/2, height/1.85 + 13, 30, 5, 0, PI, OPEN);

}

function mousePressed() {
    //randomize dimensions of face
    faceWidth = random(70, 200);
    faceHeight = random(80, 200);

    //randomize color of face
    faceColorR = random(0, 255);
    faceColorG = random(0, 255);
    faceColorB = random(0, 255);

    //randomize dimensions of eyes
    eyeWidth = random(10, 30);
    eyeHeight = random(10, 30);

    //randomize dimensions of third eye (declared here but not yet drawn)
    var thirdEyeSize = random(0, 30);

    //randomize color of eyes
    eyeColorR = random(0, 255);
    eyeColorG = random(0, 255);
    eyeColorB = random(0, 255);

    //randomize eyebrow thickness
    eyebrowThick = random(1, 8);

    //randomize dimensions of nose
    noseWidth = random(5, 30);
    noseHeight = random(10, 30);

    //randomize dimensions of pupils
    pupilWidth = random(5, 10);
    pupilHeight = random(5, 10);

    //randomize dimensions of top head
    topHeadWidth = random(90, 180);
    topHeadHeight = random(90, 180);

}

When I was starting out with the project, I didn’t know what features to add to the faces other than the basic eyes, head, nose, and mouth. After some time playing around with primitive shapes, I placed a new ellipse that overlapped with the original face ellipse, which made it look like a weird species. So I decided that the second head’s size would be fun to manipulate. In the end, the theme of my project was showing the many ways an alien can look in our imagination.

Justin Yook – Looking Outwards 02

“Reaction-Diffusion” by Nobutaka Kitahara is a generative art piece. The CG art presented on screen is made from mathematical algorithms that simulate chemical reaction-diffusion patterns in response to sound; in this project, he used the song titled “Astravel”. According to information on some websites, this reaction-diffusion simulation is based on the Gray-Scott model. One can see Kitahara’s artistic sensibilities manifest in the algorithm through his use of 3D space instead of 2D, which makes the viewer’s perspective much more interesting because there are more angles to see the simulation from. I admire this project because it is an amazing way to bind art, science, and math together. People often assume that art and math or science belong in separate categories, but “Reaction-Diffusion” shows that phenomena of the natural world are actually made of both art and math or science; it is almost as if one cannot exist without the other.
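
The Gray-Scott model behind the piece is simple enough to sketch in a few lines. This is my own minimal plain-JavaScript illustration (an explicit Euler step with common textbook parameter values, not Kitahara’s actual code):

```javascript
// Minimal Gray-Scott reaction-diffusion simulation (illustrative sketch only).
const N = 16;                  // grid size
const Du = 0.16, Dv = 0.08;    // diffusion rates for the two chemicals
const F = 0.035, k = 0.065;    // feed and kill rates (a typical "spots" regime)

// u starts saturated everywhere; v is seeded in a small central patch.
let u = Array.from({ length: N }, () => new Array(N).fill(1));
let v = Array.from({ length: N }, () => new Array(N).fill(0));
for (let y = 6; y < 10; y++)
  for (let x = 6; x < 10; x++) v[y][x] = 0.5;

// Discrete 5-point Laplacian with wraparound (toroidal) edges.
function lap(g, x, y) {
  const xm = (x + N - 1) % N, xp = (x + 1) % N;
  const ym = (y + N - 1) % N, yp = (y + 1) % N;
  return g[y][xm] + g[y][xp] + g[ym][x] + g[yp][x] - 4 * g[y][x];
}

// One step of  du/dt = Du*lap(u) - u*v^2 + F*(1-u)
//              dv/dt = Dv*lap(v) + u*v^2 - (F+k)*v
function step() {
  const nu = u.map(r => r.slice());
  const nv = v.map(r => r.slice());
  const clamp = x => Math.min(1, Math.max(0, x)); // keep the explicit step stable
  for (let y = 0; y < N; y++) {
    for (let x = 0; x < N; x++) {
      const uvv = u[y][x] * v[y][x] * v[y][x];
      nu[y][x] = clamp(u[y][x] + Du * lap(u, x, y) - uvv + F * (1 - u[y][x]));
      nv[y][x] = clamp(v[y][x] + Dv * lap(v, x, y) + uvv - (F + k) * v[y][x]);
    }
  }
  u = nu; v = nv;
}

for (let i = 0; i < 100; i++) step();
```

Rendering `v` as brightness each frame (in 2D, or as a density field in 3D as Kitahara does) is what produces the organic spot-and-stripe patterns.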


Alexandra Kaplan-Looking Outwards-02

Image from Memo Akten’s “All watched over by machines of loving grace: Deepdream edition”

This is a work by the generative artist Memo Akten called “All watched over by machines of loving grace: Deepdream edition”. It takes a bird’s-eye image of the GCHQ (Government Communications Headquarters, a British intelligence agency) which is then transformed by a deep-learning program called Deepdream. This project caught my eye because it uses technology to make a statement about its own subject. The algorithm causes the distinct building to look like an infinite number of eyes staring back at you, which is an interesting and insightful comparison considering the purpose of the GCHQ.

I am very unfamiliar with AI and the algorithms/processes one can use to create art with it, so I did some further reading on the subject, and I came across a broken-down version of the artistic process in an article Akten wrote:

“At a high level here’s what’s happening in #deepdream:

  1. An artificial neural network (i.e. the AI’s ‘brain’) has already been trained on over a million images (to recognise objects in the images)
  2. We show the trained network a brand new image
  3. While the network is processing this new image, we take a snapshot from a particular group of neurons inside the network
  4. We feed that new snapshot image back in, i.e. show it to the network (We can optionally apply small transformations, like zooming in etc.)
  5. Repeat”

Dani Delgado – Looking Outwards 02

The head of the Phantom Terrains website

The project I chose to look into was one called “Phantom Terrains”. This 2014 project aimed to capture the wireless data that constantly streams from routers to our devices and then use an algorithm to transform it into almost music-like audio. Not only does this project allow us to more fully understand the plethora of data that is constantly streaming around us, but it also does so in a beautiful and elegant way by combining the audio and visual representations of these signals. I cannot begin to comprehend the amount of complex coding that was required to create such an algorithm, and I sincerely admire how the creators, Frank Swain and Daniel Jones, were able to apply it in such a beautiful way. Daniel Jones, specifically, has been working on combining art, sound, and technology to give us a better understanding of the world we live in.
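
As a rough illustration of the kind of mapping such a sonification algorithm might perform (this is my own guess at a scheme, not Swain and Jones’ actual code), one could remap a router’s signal strength to an audible pitch:

```javascript
// Linearly remap a value from one range to another (like p5's map()).
function remap(x, inLo, inHi, outLo, outHi) {
  return outLo + (x - inLo) * (outHi - outLo) / (inHi - inLo);
}

// Map received signal strength (dBm, roughly -90 weak to -30 strong) to a
// pitch between A3 (220 Hz) and A5 (880 Hz): closer routers sing higher.
function rssiToPitch(rssi) {
  const clamped = Math.min(Math.max(rssi, -90), -30);
  return remap(clamped, -90, -30, 220, 880);
}

// A walk past three routers becomes a short melody of frequencies.
const readings = [-85, -60, -35];
const melody = readings.map(rssiToPitch);
```

Feeding the resulting frequencies into an oscillator (e.g. the Web Audio API) would turn the invisible network landscape into sound.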

How wireless signals are visualized using Phantom Terrains

Phantom Terrains was developed using Nesta funds from the U.K. and can be interacted with by using bluetooth hearing aids (along with being sampled on their website). This implementation of the hearing aids sprouted from the idea of re-working this prosthetic technology into enhancement tech, allowing the user to be able to hear a wider range of sounds than the average human ear ever could.

This project is vastly intriguing and has piqued my curiosity about just how much invisible data is swirling around us at all times and just how much code can do to help us show it visually.

Link to the website: http://phantomterrains.com/

Jenni Lee – Looking Outwards – 02

Video demo of flight patterns

Colorized flight patterns

The project I chose is “Flight Patterns” by Aaron Koblin, created in 2006. In this project, Koblin visualizes paths of air traffic through color and form. This project is a graphic display of the patterns of generated airplane paths. “Flight Patterns” was made to be a planetarium-based artwork installation that visualizes the data and algorithms of manmade aerial technologies. I was drawn not only to the nuanced, intricate visuals of the project but also to the way in which it blends together data visualization, science, and art through the presentation of these mechanical algorithms. I find it most intriguing how the flight paths ultimately combine to create the shape of the United States, demonstrating the interconnectedness of the various flight paths. Additionally, I admire how Koblin executed this project with artistic sensitivity, as the pacing and texture of the white motion lines within the video demonstrate an expertise in motion graphics. Ultimately, I highly admire this project for its effective usage of generative art.

Link

Alessandra Fleck – Looking Outwards – 02

I find the project Unnumbered Sparks, presented at TED2014 by Aaron Koblin, particularly inspiring because it takes one of the most prominent, yet unseen, movements of the modern day and projects it onto a visual depiction that fills the sky when looked upon from below. Stretching 745 feet across, this piece blanketing the space above ground takes in cell phone signals to draw and play with the color mapping details projected onto it. Utilizing WebGL, the JavaScript rendering API, the user’s motion is able to come to life as it is rendered through shaders and JavaScript. To get the drawings and movement-associated art projected onto the “blanket,” websockets pass the data from mobile devices to the rendering browser. Just as the work flows along the lines of bringing a community together, it also does so in a similar language in the way the algorithms utilized connect to one another. What makes this work so interesting is that it is a piece of art that inspires those viewing the sculpture to also engage in art. In other words, it is an art piece that leaves room to branch out and become even more elaborate with the viewers’ art included in the piece. As a common theme of technological developments today is sharing information across a larger span, it is neat to see such a theme come out in a semi-tangible application.

Victoria Reiter-Looking Outwards-02

“Galapagos” by Karl Sims, 1997

A project I find extremely interesting is one titled “Galápagos” by Karl Sims, produced in 1997. Sims attended MIT and earned a degree in life sciences, then continued on to earn a graduate degree in visual studies from MIT’s Media Lab. He combines these areas of study very gracefully in his installation.

Twelve monitors with weight-sensitive pads of Karl Sims’ installation “Galapagos,” exhibited in the InterCommunication Center, Tokyo, 1997-2000.

Sims’ installation “Galapagos” seeks to mimic the process of evolution through natural selection via a combination of computer-generated code and human interaction. Twelve monitors are set up, each with a foot pad in front. Each screen displays a very simple “creature” generated by code. The exhibition viewers are invited to step on the foot pad of whichever creature they find the most visually appealing; the creatures not “selected” by the viewers disappear, and from the remaining creatures a new generation is produced using code to create random variations in such aspects as size, movement, color, number of arms, etc. As the process continues, the creatures “evolve” to be more and more complex.
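
The select-and-mutate loop described above can be sketched in a few lines of JavaScript. This is my own toy simplification; Sims’ actual creatures are generated from far richer genotypes, and the gene names and mutation ranges here are invented for illustration:

```javascript
// A creature is just a bag of numeric "genes".
function randomCreature() {
  return { size: 50, speed: 1.0, arms: 4, hue: 180 };
}

// Produce an offspring by copying the parent and nudging each gene randomly.
function mutate(parent, amount) {
  const child = { ...parent };
  child.size  += (Math.random() * 2 - 1) * amount * 20;
  child.speed += (Math.random() * 2 - 1) * amount * 0.5;
  child.arms   = Math.max(0, Math.round(child.arms + (Math.random() * 2 - 1) * 2));
  child.hue    = (child.hue + (Math.random() * 2 - 1) * amount * 60 + 360) % 360;
  return child;
}

// One "generation": the viewer-selected parent plus eleven mutated offspring,
// mirroring the twelve monitors of the installation.
function nextGeneration(selected) {
  const gen = [selected];
  for (let i = 0; i < 11; i++) gen.push(mutate(selected, 1.0));
  return gen;
}

const generation = nextGeneration(randomCreature());
```

Stepping on a foot pad would simply pick one element of `generation` and feed it back into `nextGeneration`, so complexity accumulates one viewer choice at a time.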

Top left creature is the “parent,” and the eleven remaining boxes demonstrate possible variations of offspring

I would assume that this code contains many variables that are randomly assigned values in response to the weight of a person standing on the foot pad (much like how code can respond to a mouse click or a key being pressed on the keyboard). This concept clearly reflects Sims’ background: he studied biological development and integrated it into his master’s in visual studies, thus turning a scientific concept into interactive art.

This work inspires me because it often seems as though there is a stark distinction between nature and computer science. Sims was able to bridge this gap by writing code that behaves the way nature behaves, in a way giving empirical form to a concept that seems so vast and out of the hands of human control.

Link to full information about “Galapagos” here.
Link to further reading about the project as reviewed by Art New England and Wired also included.