mecha-project09-portrait

sketch

//maddy cha
//section e
//mecha@andrew.cmu.edu
//project-09

var underlyingImage;

function preload() {
    //loads picture of friend in trash can
    var trashCan = "https://i.imgur.com/tAUXO6y.jpg";
    underlyingImage = loadImage(trashCan);
}

function setup() {
    createCanvas(480,480);
    background(0);
    underlyingImage.loadPixels();
    //frameRate(10);
    fill(255);
    
    //tells user what to do
    text("drag mouse to start",180,height/2);
    noLoop();
}

function mouseDragged(){
    //each drag draws a row of pixel rectangles starting
    //at x = 0, on the row under the mouse
    var imageX = 0;
    var imageY = constrain(floor(mouseY), 0, height - 1);

    //rectangles have random width and height from 1-10
    var rWidth = random(1, 10);
    var rHeight = random(1, 10);
    noStroke();

    //step across the entire width, re-sampling the image
    //color every 10 pixels so the row matches the portrait
    while (imageX < width) {
        var imageColor = underlyingImage.get(imageX, imageY);
        fill(imageColor);
        rect(imageX, imageY, rWidth, rHeight);
        imageX += 10;
    }
}

For this project, I decided to use an image of my friend sitting in a trash can.

I started with an approach similar to the sample code, drawing rectangles of random widths and heights in the colors of the underlying image. I wanted to reveal the image through the mouseDragged function, but was concerned that it would take too long for the image to appear if it loaded rectangle by rectangle. To combat this, I loaded the image in rows. At first my rectangle rows were all the same color as the single pixel sampled at the mouse position, but I resolved this by re-sampling the color at each step across the row.
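The fix described above — re-sampling the image at every step rather than reusing the one color found under the mouse — can be sketched as a plain-JavaScript helper. The `getPixel` callback and the 10-pixel step are illustrative assumptions standing in for p5.js's `image.get()`:

```javascript
// Sample a row of colors, re-reading the source image at every
// step instead of reusing the single color under the mouse.
// getPixel(x, y) stands in for p5.js's image.get(x, y).
function sampleRow(getPixel, y, rowWidth, step) {
    var colors = [];
    for (var x = 0; x < rowWidth; x += step) {
        colors.push(getPixel(x, y)); // a fresh sample per rectangle
    }
    return colors;
}
```

With a per-step sample, each rectangle in the row picks up the portrait's local color instead of a single uniform one.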

sntong-Looking Outwards 09: on Looking Outwards

I viewed the Botanicus Interacticus project by Disney Research (in collaboration with Philipp Schoessler, Jonas Loh/Studio NAND, and Munehiko Sato) as an interesting attempt to bridge the gap between our ability to understand and communicate with nature, and our understanding of how nature reacts to us. By projecting images around a plant in response to physical interaction with it, the project begins to let us read and empathize with the world as plants experience it, from their perspective. It also suggests ideas from Deep Ecology, which calls for a more responsible stance toward human activity in this era of the Anthropocene. While I did not focus much on the technical implications of the project, which Jamie did in her post, I am fascinated by the cultural implications it begins to tease out.

Interactions of the person with the plant produces amazing visual images

cduong-Project 09-Portrait

sketch

//Name: Colleen Duong
//Email: cduong@andrew.cmu.edu
//Section: D
//Project-09

var jathryn;


function preload() {
  jathryn = loadImage("https://i.imgur.com/ALuxxJ8.jpg");
}

function setup() {
  createCanvas(600, 800);
  background(0);
  jathryn.loadPixels();
  imageMode(CENTER);
  frameRate(2000); // requested rate; browsers cap drawing near the display refresh rate (~60fps)

}

function draw() {
    var px = random(width);
    var py = random(height);
    var ix = constrain(floor(px), 0, width-1);
    var iy = constrain(floor(py), 0, height-1);
    var color = jathryn.get(ix, iy);
  	var d = random(5, 20);

//Draws the rain-like lines automatically
    push();
    stroke(color);
    strokeWeight(2);
    line(px-random(10), py-random(10), px + d, py + d);
    pop();
    //ellipse(px, py, d, d);

//Draws the ellipses with your mouse
    var mouseColor = jathryn.get(mouseX, mouseY); //changes color depending on where mouse is located based on picture colors
    noStroke();
    fill(mouseColor); //fills the circle colors according to picture color
    var ellipsesize = random(5, 15);  //randomizes the size
    ellipse(mouseX, mouseY, ellipsesize, ellipsesize);  //creates circles when drawing over
}

I wanted to mimic rain falling on a window, so I tried to make the lines look random (since rain goes crazy in the wind). I also let the viewer draw randomly sized circles wherever the mouse goes on the canvas, which was mostly for me, because I was getting a little impatient.


What it looks like when I just let the rain fall


What it looks like after I started drawing circles to get the completed image

mjeong1-Looking Outwards-09-Section A

Link to Hae Wan Park’s Looking Outwards06

Link to Marius Watz

Marius Watz, “Arcs04-01” and “Arcs-04-01”
Jer Thorp, “RGB – NYT Word Frequency, 1981 – 2010” and “Hope/Crisis – NYT Word Frequency, 1981 – 2010”

Random Number Multiples by Marius Watz and Jer Thorp

What I admire about Marius and Jer’s “Random Number Multiples” series is that the artists never neglect the beauty of traditional craft while embracing new digital interfaces through computer-generated design. They used computer software to generate complicated geometry and screen-printed the final work. In Haewan’s post, she admired Marius Watz’s “Arc” series for the intentionality given to its random shapes. The artists do not just randomly shape the geometry; they precisely calculate the geometry they want.

I agree with her that the randomness is generated by certain rules, through computer software, to satisfy a statistical test. It seems random, but it is actually generated by mathematical processes. I really admire their working process. They effectively take advantage of computer-generated design to visualize complicated statistical data, and they also used a traditional method to print it. This combination expands the possible ways of executing artistic expression.
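The point about rule-generated randomness can be illustrated with a minimal sketch: a linear congruential generator produces numbers that pass casual inspection as random, yet the sequence is fully determined by its seed. The constants below are the widely used Numerical Recipes parameters; this is an illustration of the idea, not code from either artist.

```javascript
// A linear congruential generator: "random-looking" numbers
// produced by a completely deterministic mathematical rule.
function makeLCG(seed) {
    var state = seed >>> 0; // keep state as an unsigned 32-bit integer
    return function () {
        // state = (1664525 * state + 1013904223) mod 2^32
        state = (Math.imul(1664525, state) + 1013904223) >>> 0;
        return state / 4294967296; // scale to [0, 1)
    };
}
```

Two generators built from the same seed emit identical sequences — the "randomness" is entirely reproducible, which is exactly what lets artists tune and reprint a generative composition.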


Looking Outwards 09 – Yugyeong Lee

‘A Musical Wall where Little People Live’ (2017) by teamLab is an interactive projection on a wall that reacts to physical objects placed against it. I agree that this project is “effective in its simplicity,” which allows children to play with and easily grasp its game-like aspect. In addition, the project evokes a “sense of delight” not only for children but also for adults, as the projected plot and characters create a genuine atmosphere for any age to enjoy. Exploring the relationship between the virtual and physical worlds, the project blends the two in a generative, creative way that invites users to wander and hunt for patterns. As interactive installations “allow for unlimited possibilities of expression and transformation,” the project definitely opens up possibilities for the field of art. As Fon assessed, this technology “helps us rethink and expand the field of art”; its application in architecture could also be an interesting adaptation, where interactive designs let a single space be experienced in different ways.

link: https://courses.ideate.cmu.edu/15-104/f2017/2017/09/01/keuchuka-looking-outwards-01/


ljkim looking outward 09

I am reviewing the work of: ssharada from looking outward 05. Their cited project was on an Absolut Amber ad.

Absolut Amber from ZEITGUISED on Vimeo.

I appreciated their point that although this looks like natural materials, it is in fact algorithmic, computer-generated art: works like this “in reality are created using technology and synthetic non-physical matter – usually from coded algorithms.”

Although this is computer generated, I consider it to be more on the design side. It is likely that this was made for an advertising campaign: a creator directed the vision and visuals, and an engineer coded what they wanted. I appreciate that music was the inspiration for this work, because I can clearly see the correlation.

ablackbu-Looking-Outwards-09

Georgia Tech’s Shimon robot writes and plays its own music using machine learning.

___________________________________

For this looking outwards post, I am getting inspiration from svitoora’s post on sept. 18:

svitoora – 04 Looking Outwards

___________________________________

As svitoora mentions in his post, this robot was given more than 5,000 songs and riffs and is able to compose its own music from them. What this robot does is remarkably humanlike, and his reflection covers the implications of this very thoroughly. The fact that you could walk into a room, hear this playing, and think it is a human is crazy. Not only does it emit music, it creates its own non-random, incredibly calculated notes. The most mesmerizing part of this project to me is that it draws from other music. Like a human, it takes cues from different genres and mixes them to make something with its own “taste.”

___________________________________

http://www.wired.co.uk/article/ai-music-robot-shimon

Bettina-Project09-SectionC

sketch

// Bettina Chou
// yuchienc@andrew.cmu.edu
// Section C
// Project 9: Pixelated Image

var img;

function preload() {
    img = loadImage("https://i.imgur.com/H1lewvj.jpg");
}

function setup() {
    createCanvas(320,480);
    imageMode(CENTER);
    background(255);
    img.loadPixels();
}

function draw() {
      scale(.5,.5); //image is double size of canvas, just scaling down to fit
      var x = floor(random(img.width));
      var y = floor(random(img.height));
      var col = img.get(x, y); //retrieves RGBA value from x,y coordinate in image
      //fill(col);
      var weight = 640 / max(brightness(col), 1); //darker pixels get heavier strokes; max() avoids dividing by zero on black pixels
      stroke(col);
      strokeWeight(weight);
      strokeCap(SQUARE);
        //ellipse(x,y,30);
      line(x, y, x-25, y-25);
      }

I wanted to build off the brightness indicators we learned last week with the eye-tracking project. In art, darker colors have more visual weight, so I made the thickness of each stroke depend on the darkness of the sampled pixel. I chose diagonal lines to mimic the motion in the original picture, in which my friend stands within the famous LACMA spaghetti installation. Initially, I tried drawing each line in order, calling each pixel by row and column. However, that method was not only inefficient (taking at least 10 seconds to render) but the order felt rigid. Instead, I sample each pixel randomly. Not only is it more efficient, but the audience can watch the image unfold, and the randomness adds a sense of movement that honors the original photo.
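The brightness-to-weight mapping above can be sketched as a standalone function. The 640 numerator mirrors the sketch; the guard against a brightness of 0 (an assumption added here) avoids an infinite weight on pure-black pixels:

```javascript
// Map pixel brightness to stroke weight: darker colors carry
// more visual weight, so dimmer pixels get thicker strokes.
function weightFromBrightness(b) {
    return 640 / Math.max(b, 1); // clamp so black pixels stay finite
}
```

The mapping is inversely proportional, so halving the brightness doubles the stroke weight.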

Top: An early iteration. Below: Screenshot of what the final image may look like.

Project-09 Portrait

sub portrait

//Yoonseo(Dave) Choi
//Section B
//yoonseo1@andrew.cmu.edu
//Project 09

var portrait; // global variable for the image
var px = []; // array for storing previous x value
var py = []; // array for storing previous y value
var pp = []; //array for storing previous pixel 

function setup() {
    createCanvas(275,480); //canvas size
    background(0); //set background 0
   
}

function preload(){
    portrait = loadImage("https://courses.ideate.cmu.edu/15-104/f2017/wp-content/uploads/2017/10/yoonseo1_bro-172x300.jpg"); //load image from the asset folder
}
function draw() {
    var x = floor(random(portrait.width)); // random x point within the image
    var y = floor(random(portrait.height)); // random y point within the image
    var pix = portrait.get(x, y); // color of the image at x,y
    pp.push(pix); // store the color in the array
    noStroke();
    px.push(x); // append the x coordinate to the array
    py.push(y); // append the y coordinate to the array

    for (var i = 1; i < px.length; i++) { // start at 1 so px[i-1] exists
        strokeWeight(0.5);
        stroke(pp[i]); // stroke color is the image color at that point
        line(px[i], py[i], px[i-1], py[i-1]); // line from previous point to current point
    }
}

For this project, I used my brother’s photo as the portrait. I wanted to explore ways of drawing a portrait without reproducing the image directly at each location, instead connecting the previous point with the current one. The code generates random lines from randomly chosen points: x and y coordinates are picked at random, and a line is drawn between the current point and the previous one in the color of the pixel at the current x and y. This motion renders the portrait as an abstract image of color, yet the end product resembles the original image.
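The point-chaining idea can be sketched in plain JavaScript. The `Trail` helper and its names are illustrative, not part of the original sketch:

```javascript
// Accumulate randomly chosen points and report the line
// segment connecting each new point to the previous one.
function Trail() {
    this.points = [];
}
Trail.prototype.add = function (x, y) {
    this.points.push([x, y]);
    if (this.points.length < 2) return null; // first point: nothing to connect
    var prev = this.points[this.points.length - 2];
    return { from: prev, to: [x, y] }; // segment from previous to current point
};
```

Each call to `add` yields exactly the segment the sketch draws between consecutive random samples.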

left is original and right is portrait

myoungsh-project-09-Emma

sketch

var Emma;
var z = .5;
function preload() {
  Emma = loadImage("https://i.imgur.com/pWLCrQJ.jpg"); //image from imgur
}
function setup() {
  createCanvas(400, 400);
  background(0);
  for (var i = 0; i < width/2; i+=z) { 
    for (var j = 0; j < height/2; j+=z) {               //nested for loop image sized
      var pixelColor1 = Emma.get(i, j);                 //storing pixels info
      var pixelBrightness1 = brightness(pixelColor1);   //sampling brightness of pixles
      noStroke();
      fill(255, 0, 255, pixelBrightness1 + 30);              //solid color, alpha from image brightness
      ellipse(i, j, z, z);                              //draw new image using colors
    }
  }
  for (var k = 0; k < width/2; k+=z) {
    for (var l = 0; l < height/2; l+=z) {
      var pixelColor2 = Emma.get(k, l);
      var pixelBrightness2 = brightness(pixelColor2);
      noStroke();
      fill(255, 255, 255, pixelBrightness2 + 50);
      ellipse(k + width/2, l + height/2, z, z);
    }
  }
  for (var m = 0; m < width/2; m+=z) {
    for (var n = 0; n < height/2; n+=z) {
      var pixelColor3 = Emma.get(m, n);
      var pixelBrightness3 = brightness(pixelColor3);
      noStroke();
      fill(0, 255, 255, pixelBrightness3 + 30);
      ellipse(m + width/2, n, z, z);
    }
  }
  for (var x = 0; x < width/2; x+=z) {
    for (var y = 0; y < height/2; y+=z) {
      var pixelColor4 = Emma.get(x, y);
      var pixelBrightness4 = brightness(pixelColor4);
      noStroke();
      fill(255, 255, 0, pixelBrightness4 + 30);
      ellipse(x, y + width/2, z, z);
    }
  }
}
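The four nested loops above differ only in tint and quadrant offset, so the repetition could be factored out. A minimal sketch of the quadrant arithmetic (the helper name and index order are assumptions, not the author’s code):

```javascript
// Map a quadrant index 0-3 to its (x, y) offset on a square
// canvas, so one pair of loops can render all four tinted copies.
function quadrantOffset(q, size) {
    var half = size / 2;
    return {
        x: (q % 2) * half,           // right column for odd indices
        y: Math.floor(q / 2) * half  // bottom row for indices 2 and 3
    };
}
```

With this, a single loop over `q = 0..3` could look up the offset and a per-quadrant tint instead of repeating the nested loops four times.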