Ammar Hassonjee – Project 09 – Computational Portrait

Ammar’s Portrait

/* Ammar Hassonjee
    ahassonj@andrew.cmu.edu
    Section C
    Project 09
  */

var underlyingImage;
var theColorAtLocationXY;
// Variable for adjusting the frame rate
var frames = 100;

function preload() {
    var myImageURL = "https://i.imgur.com/o4lm5zF.jpg";
    underlyingImage = loadImage(myImageURL);
}

function setup() {
    createCanvas(480, 480);
    background(0);
    underlyingImage.loadPixels();
}

function draw() {
    // Changing the frame rate each time draw is called
    frameRate(frames);
    // Initial variables declared to return pixel from image at random location
    var px = random(width);
    var py = random(height);
    var ix = constrain(floor(px), 0, width-1);
    var iy = constrain(floor(py), 0, height-1);
    // Getting the color from the specific pixel
    theColorAtLocationXY = underlyingImage.get(ix, iy);

    // Varying the triangle size and rotation for each frame based on
    //         the pixel distance from the mouse
    var triangleSize = map(dist(px, py, mouseX, mouseY), 0, 480 * sqrt(2), 1, 10);
    var triangleRotate = map(dist(px, py, mouseX, mouseY), 0, 480 * sqrt(2), 0, 180);
    noStroke();
    fill(theColorAtLocationXY);
    // rotating the triangle at each pixel; triangleRotate is in degrees,
    // so convert it to radians for p5's rotate()
    push();
    translate(px, py);
    rotate(radians(triangleRotate));
    triangle(0, 0, triangleSize * 2, 0, triangleSize, triangleSize);
    pop();
}

function keyPressed() {
    // halving the frame rate each time a key is pressed
    frames = constrain(int(frames * .5), 1, 10);
}

function mousePressed() {
    // doubling the frame rate each time the mouse is pressed
    frames = constrain(int(frames * 2), 1, 1000);
}

I was inspired to make an interactive self-portrait based on my favorite shape, the triangle! By using the mouse position to drive the mapping, my program lets users make my portrait more realistic or more distorted depending on where they place the mouse. I think the combination of rotation and triangle size creates really interesting renditions of my selfie.
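The mouse-distance mapping that drives the triangle size can be checked on its own. This is a minimal sketch with `dist()` and `map()` re-implemented in plain JavaScript so it runs outside p5.js; `triangleSizeAt()` is a hypothetical helper name, not part of the original sketch:

```javascript
// Stand-ins for p5's dist() and map(), so the mapping logic
// can be exercised outside the p5 environment.
function dist(x1, y1, x2, y2) {
    return Math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2);
}
function map(value, inMin, inMax, outMin, outMax) {
    return outMin + (outMax - outMin) * ((value - inMin) / (inMax - inMin));
}

// Triangle size grows from 1 to 10 as a pixel moves away from the mouse;
// the largest possible distance on a 480x480 canvas is its diagonal.
function triangleSizeAt(px, py, mouseX, mouseY) {
    var maxDist = 480 * Math.sqrt(2);
    return map(dist(px, py, mouseX, mouseY), 0, maxDist, 1, 10);
}
```

A pixel directly under the mouse gets the minimum size of 1, while a pixel a full diagonal away gets the maximum of 10.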

Original photo used for image generation.
An almost finished version of my computational portrait using the photo above. Image produced after 2 minutes.

Jai Sawkar – Looking Outwards – 9

Video Explaining Taxi

For this week’s Looking Outwards, I am looking at Taisai Manheim’s Looking Outwards from week 2, in which he took a look at Taxi, Taxi!, a generative project by Robert Hodgin. It is a digital installation in the SEC, or Samsung Experience Center, in NYC. The project focuses on NY taxis, using Open Data to create real-time simulations of all the different routes cabbies can take. It uses a mix of coding and an OpenStreetMap model of NYC in 3D to create the installation. I think it is super cool to take something iconic in NYC, the yellow taxis, and use the information these taxis generate to see the flow of life and individuality in a completely different light. Thanks for the read, Tai!

Jai Sawkar – Project 9 – Computational Portrait

Sketch

//Jai Sawkar
//jsawkar
//Project 9: Portrait
//Section C
 
var bImage;

function preload() {
    var imgURL = "https://i.imgur.com/4e3vWfI.jpg?1"; // image of jaclyn & sophia
    bImage = loadImage(imgURL);
}

function setup() {
    createCanvas(480, 480);
    background('black');
    bImage.loadPixels(); // loads pixels from image
    frameRate(1000000); // very high frame rate so the portrait fills in quickly
}

function draw() {
    var px = random(width);
    var py = random(height);

    var sx = random(0, 240); //'sophia' on X axis
    var sy = random(height); //'sophia' on Y axis

    var jx = random(240, 480); //'jac' on X axis
    var jy = random(height); //'jac' on y axis


    // ix & iy take the closest integer less than or equal to px & py,
    // clamped so they index valid pixels of the image
   
    var ix = constrain(floor(px), 0, width - 1); 
    var iy = constrain(floor(py), 0, height - 1);


    var colXY = bImage.get(ix, iy); //determines color based on x & y pixels of image

    noStroke(); 
    fill(colXY); //fill based on x & y pixels of image
    ellipse(px, py, 5);

    textFont('FUTURA');
    textSize(2);
    text("Sophia", sx, sy); // randomly prints Sophia on Sophia's side of the canvas
    textSize(3);
    text("Jac", jx, jy); // randomly prints Jac on Jaclyn's side of the canvas

}

For this project, I used a picture I took two years ago of two of my best friends, Jaclyn & Sophia. Along with having the picture generated with dots, I wanted some aspect of it to be more personalized, so I split the canvas so that each friend’s name helps make up the picture on her side. Below are two screenshots of the process, along with the original photo!
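The side-splitting can be sketched as a small helper. This is a hypothetical `randomXForSide()` function, assuming the 480px canvas is divided at x = 240 as in the sketch:

```javascript
// Pick a random x position restricted to one half of the canvas.
// side: "left" gives [0, width/2), "right" gives [width/2, width).
function randomXForSide(side, canvasWidth) {
    var half = canvasWidth / 2;
    var r = Math.random();
    return side === "left" ? r * half : half + r * half;
}
```

In the sketch this corresponds to `random(0, 240)` for Sophia’s side and `random(240, 480)` for Jaclyn’s.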

About 3/4ths Through
Final Computational Portrait
Original Photo

Paul Greenway – Project 09 – Portrait

sketch

// Paul Greenway
// pgreenwa
// pgreenwa@andrew.cmu.edu
// Project-09-Portrait

var originalPortrait;

function preload() {
    
    //original portrait image from imgur
  
    var portraitUrl = "https://i.imgur.com/mqXKE8q.jpg";
    originalPortrait = loadImage(portraitUrl);
}


function setup() {
    createCanvas(700, 1200);
    background(255);
    originalPortrait.loadPixels();
    frameRate(1000);
}

function draw() {
  
    var px = random(width);
    var py = random(height);
    var circSize = random(5,20);
    var ix = constrain(floor(px), 0, width-1);
    var iy = constrain(floor(py), 0, height-1);
    var theColorAtLocationXY = originalPortrait.get(ix, iy);
    
    
    //fill circles with color based on base image
    noStroke();
    fill(theColorAtLocationXY);
  
    //draw circle with dimension based on mouse position
  
    ellipse(px, py, circSize*(mouseX*0.01), circSize*(mouseY*0.01));
      
}

For this project I wanted to create a portrait generator that would adjust the size and resolution of the dots based on user input. While I found it hard to implement the exact control over the image that I wanted, the final result is a portrait made up of circles whose dimensions are based on the mouse position.
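The mouse-based sizing reduces to a simple linear scale. As a rough sketch in plain JavaScript (the `scaledDiameter()` helper is hypothetical, mirroring the `circSize * (mouseX * 0.01)` expression in draw()):

```javascript
// Ellipse dimensions scale linearly with the mouse coordinate:
// at a coordinate of 100 the base size is multiplied by 1.0, so circles
// grow as the mouse moves right/down and shrink toward the origin.
function scaledDiameter(baseSize, mouseCoord) {
    return baseSize * (mouseCoord * 0.01);
}
```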

Paul Greenway – Looking Outwards – 09

Screenshot from the Architecture of Radio App

The Looking Outwards post I decided to look into further was Steven Fei’s week 7 post. The project this post covers is called Architecture of Radio and is by Richard Vijgen and his studio. The design studio, based in the Netherlands, focusses on contemporary information culture and how such data can be visualized through digital art as well as physical installations. This specific project visualizes the world of radio waves, collecting live signals from a variety of different sources and compiling them into one cohesive “infosphere” experience. The infosphere is then viewed by the user through a real time phone application.

In his post, Steven was specifically interested in the first person representation of the radio wave visualizations that the app provides and how it could allow the user to view the infrastructure of such signals in a unique and immersive way rather than the typical top down system plans. I too found this aspect of the project interesting and in addition thought it managed to very successfully visualize a system critical to our everyday lives that would normally be invisible to us.

Architecture of Radio by Richard Vijgen

lee chu – project 09

lrchu

// lee chu
// section c
// lrchu@andrew.cmu.edu
// project - 09

var underlyingImage;
var px = [];
var py = [];
var directionX = [];
var directionY = [];

function preload() {
    var myImageURL = "https://i.imgur.com/QpJ7uJf.jpg";
    underlyingImage = loadImage(myImageURL);
}

function setup() {
    createCanvas(600, 300);
    background(0);
    underlyingImage.loadPixels();
    frameRate(60);

    // starting arrays for painters
    px.push(random(width));
    py.push(random(height));
    directionX.push(1);
    directionY.push(1);
}

function draw() {
    // resize the image to match the canvas once, so pixel lookups line up
    if (frameCount === 1) {
        underlyingImage.resize(width, height);
    }

    // looping for individual painters
    for (var i = 0; i < px.length; i++) {
        var ix = constrain(floor(px[i]), 0, width - 1);
        var iy = constrain(floor(py[i]), 0, height - 1);
        var theColorAtLocationXY = underlyingImage.get(ix, iy);

        noStroke();
        fill(theColorAtLocationXY);
        rectMode(CENTER);
        rect(px[i], py[i], 10, 10);

        // random movement
        px[i] += directionX[i] * random(-2, 10);
        py[i] += directionY[i] * random(-2, 10);

        // keeping painters on the canvas
        if (px[i] > width) {
            px[i] = 0;
        }
        else if (px[i] < 0) {
            px[i] = width;
        }
        if (py[i] > height) {
            py[i] = 0;
        }  
        else if (py[i] < 0) {
            py[i] = height;
        }
    }
}

// adding painters with click
function mousePressed() {
    px.push(random(width));
    py.push(random(height));
    directionX.push(1);
    directionY.push(random(-1, 1)); // random() takes two arguments
}

I miss Heath Ledger. I had wanted to create a bunch of little rectangular painters which make their way across the canvas.
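The edge handling that keeps the painters on the canvas is a simple wrap-around. A minimal sketch in plain JavaScript (the `wrap()` helper is hypothetical, mirroring the if/else checks in draw()):

```javascript
// Wrap a painter coordinate so it re-enters from the opposite edge:
// past the limit resets to 0, below 0 resets to the limit,
// otherwise the coordinate is unchanged.
function wrap(coord, limit) {
    if (coord > limit) return 0;
    if (coord < 0) return limit;
    return coord;
}
```

Applied each frame to both px[i] and py[i], this makes every painter roam the canvas indefinitely.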

what the end result should look like – CLICK on the canvas to introduce more painters

Xu Xu – Looking Outwards – 09

For this looking outwards post, I decided to focus on Looking Outwards 06 by Nadia Susanto, which focuses on the Silk Pavilion created by the Mediated Matter Group at the MIT Media Lab. This project explores the relationship between digital and biological fabrication, and uses a collaboration of an algorithm-generated base structure and silkworms to create a layered structure, in order to determine the shape and material optimization of fiber-based surface structures.

I agree with Nadia’s point that combining real-life insects with computation performs art, rather than purely using computers to randomly generate shapes. The thought of learning from silkworms, adapting the methods to computational design, and re-using silkworms to complete the final pavilion is a really interesting approach to me. However, when I first read Nadia’s post, I wondered why the two “printing” methods were combined, because it seemed quite redundant. After reading the MIT Media Lab’s website on the Silk Pavilion and watching the video, I discovered that having silkworms generate structure was also research into how the environment affects the silkworms’ mapping, which could be manipulated to create an ideal pavilion structure:

“Affected by spatial and environmental conditions, including geometrical density as well as variation in natural light and heat, the silkworms were found to migrate to darker and denser areas. Desired light effects informed variations in material organization across the surface area of the structure. A season-specific sun path diagram mapping solar trajectories in space dictated the location, size, and density of apertures within the structure in order to lock in rays of natural light entering the pavilion from south and east elevations.”

Mediated Matter, MIT Media Lab

Mihika Bansal – Looking Outwards – 09

The Looking Outwards post I want to examine is Angela Lee’s post from week 6, which deals with the artist Matt Deslauriers. The piece she specifically analyzes is a generative art piece that creates random colors and lines on the click of the mouse. I agree with Angela’s analysis of the importance of color in the artist’s work. The artist used pre-existing, well-liked color palettes, a particularly smart move since it appeals to a broad audience, which matters for the pieces he is creating; Angela also notes this in her post. She also touches upon the difficulty of finding balance in a piece with this much randomness, and she addresses that tension well.

Image of the piece that is created when you click the canvas.

Fanjie Mike Jin- Project 09- Portraits


/*  Fanjie Mike Jin
    fjin@andrew.cmu.edu
    Section C
    Project-09*/

var baseimage;

function preload() {
//load in the picture of myself
    var myImageURL = "https://i.imgur.com/gIuXiAy.jpg";
    baseimage = loadImage(myImageURL);
}

function setup() {
// set up the canvas and load the image's pixels
    createCanvas(500, 500);
    background(0);
    baseimage.loadPixels();
// a high frame rate so the shapes accumulate quickly
    frameRate(1000);
}

function draw() {
//Enable mouse interactions to generate stroke elements
    var mousecolor = baseimage.get(mouseX, mouseY);
    var x = random(width);
    var y = random(height);
    var ix = constrain(floor(x), 0, width-1);
    var iy = constrain(floor(y), 0, height-1);
    var pixelColor = baseimage.get(ix, iy); // avoid shadowing p5's color()

    noStroke();
    fill(mousecolor);
//paint the canvas with the mouse using smaller ellipses
    ellipse(mouseX, mouseY, random(4, 20), random(4, 20));
    fill(pixelColor);
//Use polygons as pixels with a randomized number of sides and dimensions
    polygon(x, y, random(4, 20), random(4, 9));

}
//draw the polygons
function polygon(x, y, r, n) {
    var angle = TWO_PI / n;
    beginShape();
    for (var i = 0; i < TWO_PI; i += angle) {
        var a1 = x + cos(i) * r;
        var a2 = y + sin(i) * r;
        vertex(a1, a2);
    }
    endShape(CLOSE);
}

// reset the canvas to blank once the mouse is clicked 
function mousePressed() {
    clear();
}

In this project, I am trying to render the portrait in a rigid pointillism style, as I really like impressionist paintings. By varying the size of the randomized polygons, I manage to simulate the feeling that the portrait is being painted with rigid brushstrokes. At the same time, because mouse interaction is enabled, I am able to add some touches to the image by digitally drawing part of the portrait myself.
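The vertex math inside polygon() can be checked on its own. A minimal sketch in plain JavaScript (the hypothetical `polygonVertices()` helper returns the points that polygon() passes to vertex()):

```javascript
// Compute the n vertices of a regular n-gon of radius r centered at (x, y),
// stepping around the circle in increments of 2*PI/n, as in polygon().
function polygonVertices(x, y, r, n) {
    var angle = (2 * Math.PI) / n;
    var pts = [];
    for (var i = 0; i < n; i++) {
        pts.push([x + Math.cos(i * angle) * r, y + Math.sin(i * angle) * r]);
    }
    return pts;
}
```

For example, a 4-gon of radius 1 at the origin yields the four axis points, i.e. a square rotated 45 degrees from the usual orientation.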

portrait after 1 min
portrait after 20 seconds
base image

Raymond Pai – Looking Outwards – 09

Variant #3, Neri Oxman’s ‘Wanderers’

Danny Cho’s Looking Outward 03 looked at Neri Oxman’s ‘Wanderers’. The 2014 project creates very high-quality renderings of computer-generated growth of organic forms. I’m drawn to it because of the unsettling appearance of these forms, which appear infectious and bacterial. I agree with Danny’s assumption that trigonometry is used. More specifically, I understand that realistic video games use extremely high polygon counts and textures, which might explain the high definition of this project. I’m interested in Danny’s suggestion that Cinema 4D was used; if so, it might have been a separate plugin developed by the artist to manipulate objects in the Cinema 4D software. I relate to Danny’s concern that computational generation does not usually appear very organic, because the ‘uncanny valley’ of computed organic objects is very unsettling to me. I’m not sure if I’ll ever be okay with computers o