Carley Johnson Looking Outwards 10

Precarious was created for the National Portrait Gallery exhibition Black Out: Silhouettes Then and Now, which opened in May 2018.

This is a screenshot of the project, which tracks and visualizes silhouettes of people as they move through the museum. What’s different here, though, is that the camera looks down on the figures and tracks them from above. (You can see in this screenshot that someone has figured this out and spread their arms wide.) The project is meant to show boundaries and how people push against and cross them. Camille Utterback works in installation art meant to be interactive and to create dialogue about physicality, combining various sensing and display technologies with custom software she writes. I like her work, and this project specifically, because bodies and patterns are really interesting, but other movement-based projects I’ve seen are sometimes cluttered and not visually appealing, since movement is so unpredictable and abundant. This project, however, is both visually pleasing and looks super fun.
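To make the tracking idea concrete, here is a minimal p5.js sketch in the spirit of Precarious, with the mouse standing in for an overhead-tracked body; it is purely illustrative and assumes nothing about Utterback’s actual software.

// Illustrative sketch only, not Utterback's code: the mouse stands in
// for a body tracked from above, leaving a fading trail that can cross
// a drawn boundary, echoing the piece's theme of pushing past edges.
var trail = [];

function setup() {
    createCanvas(480, 320);
}

function draw() {
    background(20);
    stroke(255, 80); // the "boundary" the figure can push against
    line(width / 2, 0, width / 2, height);
    noStroke();
    trail.push({x: mouseX, y: mouseY}); // record the tracked position
    if (trail.length > 100) trail.shift(); // keep only recent history
    for (var i = 0; i < trail.length; i++) {
        var alpha = map(i, 0, trail.length, 0, 255); // older points fade
        if (trail[i].x > width / 2) {
            fill(247, 140, 100, alpha); // warmer color past the boundary
        } else {
            fill(104, 176, 247, alpha);
        }
        ellipse(trail[i].x, trail[i].y, 20, 20);
    }
}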

Alice Fang – Looking Outwards – 10

An example from the Human Race Machine
Nancy Burson is an artist and photographer who uses morphing technology and digital manipulation in her works, which are often politically and/or socially charged. One project of hers that I am particularly interested in is the Human Race Machine, which I think is extremely relevant in the current political climate as well. Originally, she created software that would ‘age’ an image in order to help law enforcement track missing children; using similar facial-recognition and facial-alteration software, this project is an interactive display that allows the user to see what they could look like if they were of a different race. As described on its website, “the concept of race is not genetic, but social”; her project is an extremely powerful tool for social reflection and for discussing diversity and issues of race and ethnicity. Although it was originally created in 2000 for the London Millennium Dome, the elegant software used to shift and change a human face, placing oneself in the literal ‘face’ of another, is still pertinent today, as human compassion and understanding are challenged by polarization and radical idealism.
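As a toy illustration of face blending, one ingredient of morphing (true morphing also warps facial geometry, which this sketch does not attempt), here is a hypothetical p5.js cross-fade between two images; the file names faceA.jpg and faceB.jpg are placeholders, and this is in no way Burson’s software.

var faceA;
var faceB;

function preload() {
    faceA = loadImage("faceA.jpg"); // placeholder portrait
    faceB = loadImage("faceB.jpg"); // placeholder portrait
}

function setup() {
    createCanvas(480, 480);
}

function draw() {
    background(0);
    // mouseX sets the blend: 0 = all faceA, 255 = all faceB
    var amt = map(mouseX, 0, width, 0, 255);
    tint(255, 255); // draw the base face fully opaque
    image(faceA, 0, 0, width, height);
    tint(255, amt); // overlay the second face with varying opacity
    image(faceB, 0, 0, width, height);
}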

Emily Zhou – Looking Outwards – 10

Sputniko! is a Japanese/British artist and MIT Media Lab assistant professor known for tech-inspired film and multimedia installation works. She studied Mathematics and Computer Science at Imperial College London, then pursued a Master’s in Design at the Royal College of Art. Her project Tranceflora – Amy’s Glowing Silk (2015) is an exhibition piece shown at the Gucci Gallery in Tokyo.

Motion photograph capturing fluorescent kimono dress.

The exhibit displayed a Nishijin kimono dress designed by Sputniko! that incorporated transgenic glowing silk. The silk was created by injecting genes from a glowing coral and a jellyfish into silkworm eggs. The dress was showcased amid a large-scale installation of 3,000 transgenic silkworm cocoons.


Video documentation of exhibition view.

I admire her work in combining art and technology in a way that can be shared with many people. The exhibition attracted 10,000 visitors in three weeks and can be appreciated across cultures.

JasonZhu-Project-10-Landscape

sketch

/* Jason Zhu
Section E
jlzhu@andrew.cmu.edu
Project-10
*/

var terrainSpeed = 0.0003;
var terrainDetail = 0.0008;
var flags = [];

function setup() {
    createCanvas(480, 300);
    frameRate(50);
    for (var i = 0; i < 10; i++){
        var rx = random(width);
        flags[i] = makeflag(rx);
    }
}

function draw() {
    background(246, 201, 116);
    // draw the noise-driven water surface as one filled shape
    push();
    beginShape();
    noStroke();
    fill(104, 176, 247);
    vertex(0, height);
    for (var x = 0; x < width; x++) {
        var t = (x * terrainDetail) + (millis() * terrainSpeed);
        var y = map(noise(t), 0, 1, 0, height);
        vertex(x, y - 50);
    }
    vertex(width, height);
    endShape();
    pop();
    displayHorizon();
    updateAndDisplayflags();
    removeflagsThatHaveSlippedOutOfView();
    addNewflagsWithSomeRandomProbability();
}
function updateAndDisplayflags(){
    // Update the flag's positions, and display them.
    for (var i = 0; i < flags.length; i++){
        flags[i].move();
        flags[i].display();
    }
}

function removeflagsThatHaveSlippedOutOfView(){
    var flagsToKeep = [];
    for (var i = 0; i < flags.length; i++){
        if (flags[i].x + flags[i].breadth > 0) {
            flagsToKeep.push(flags[i]);
        }
    }
    flags = flagsToKeep; // remember the surviving flags
}

function addNewflagsWithSomeRandomProbability() {
    // With a very tiny probability, add a new flag to the end.
    var newflagLikelihood = 0.007; 
    if (random(0,1) < newflagLikelihood) {
        flags.push(makeflag(width));
    }
}

// method to update position of flag every frame
function flagMove() {
    this.x += this.speed;
}
    

// draw the flag
function flagDisplay() {
    var floorHeight = 10;
    var bHeight = this.nFloors * floorHeight; // pole height (names reused from the building template)
    noStroke();
    // pole
    push();
    translate(this.x, height - 30);
    fill(30, 37, 35);
    rect(0, -bHeight * 1.03, this.breadth, bHeight);
    // flag 
    fill(12, 36, 112);
    triangle(5, -bHeight * 1.03, 40, 20-bHeight, 5, 30 - bHeight);
    pop();
}

function makeflag(birthLocationX) {
    var flag = {x: birthLocationX,
                breadth: 6,
                speed: -0.75,
                nFloors: round(random(1, 10)),
                move: flagMove,
                display: flagDisplay};
    return flag;
}


function displayHorizon() {
    noStroke();
    fill(55, 222, 153);
    // sea floor: a 30-pixel band along the bottom edge
    rect(0, height - 30, width, 30);
}

For this project, I wanted to recreate a scene from an old film. I created flags underwater to get the look and feel I was going for. It was hard to achieve exactly what I wanted, so I had to simplify quite a bit. This project was definitely a struggle for me compared to past projects.

MirandaLuong-Project-09-Portrait

sketch

/* Miranda Luong
Section E
mluong@andrew.cmu.edu
Project-09
*/

var underlyingImage;

function preload() {
    var myImageURL = "https://i.imgur.com/Lq0vGxO.jpg";
    underlyingImage = loadImage(myImageURL);
}

function setup() {
    createCanvas(480, 480);
    background(0);
    underlyingImage.loadPixels();
    frameRate(10000); // effectively "as fast as possible"; browsers cap the real rate
}

function draw() {
    var px = random(width);
    var py = random(height);
    var ix = constrain(floor(px), 0, width-1);
    var iy = constrain(floor(py), 0, height-1);
    var theColorAtLocationXY = underlyingImage.get(ix, iy);

    noStroke();
    fill(theColorAtLocationXY);
    rect(px, py, 10, 10);

}

This is Mimi. She is eating pizza. I thought it’d be fun creating this build-up to see a girl munching on pizza.

Original Photo

Mid-way through Rendering

Nearly finished Rendering

Project 09: Portrait

sketch

var underlyingImage;

function preload() {
    var myImageURL = "https://i.imgur.com/24gq2P9.jpg";
    underlyingImage = loadImage(myImageURL);
}

function setup() {
    createCanvas(480, 480);
    background(0);
    underlyingImage.loadPixels();
    frameRate(10);
}

function draw() {
    var px = random(width);
    var py = random(height);
    var ix = constrain(floor(px), 0, width-1);
    var iy = constrain(floor(py), 0, height-1);
    var theColorAtLocationXY = underlyingImage.get(ix, iy);

    noFill();
    stroke(theColorAtLocationXY);
    line(px, py, px, py+24);
    
    var theColorAtTheMouse = underlyingImage.get(mouseX, mouseY);
    fill(theColorAtTheMouse);
    ellipse(pmouseX, pmouseY, 6, 6);

}

So, like, the code below wasn’t showing up and I literally did not understand what was wrong. I straight up tried multiple methods of coding this, even duplicating the sample code (which should’ve worked), and it was still not returning the correct image. After a long period of troubleshooting I had to relent. I tried essentially the same code on a Mac using the online p5.js editor and it finally worked. I still don’t really know what was wrong.

sketch try 2

// Samantha Ho
// sch1
// Project-09
// Section E

var underlyingImage;

function preload() {
    var myImageURL = "http://saho.studio/img/2018-06-28%2017_41_16.194.jpg";
    underlyingImage = loadImage(myImageURL);
}

function setup() {
    createCanvas(480, 480);
    background(0);
    underlyingImage.loadPixels();
    frameRate(10);
}

function draw() {
    var px = random(width);
    var py = random(height);
    var ix = constrain(floor(px), 0, width-1);
    var iy = constrain(floor(py), 0, height-1);
    var theColorAtLocationXY = underlyingImage.get(ix, iy);

    noStroke();
    fill(theColorAtLocationXY);
    ellipse(px, py, 6, 6);
}

MirandaLuong-LookingOutwards-09

I will be reviewing Veronica Wang’s post on Intr(e)Scapes, a landscape installation that senses and reacts to visitors’ movement with reactive LED-based animations. Built in 2015 by SHO Architecture, Intr(e)Scapes is an installation that takes advantage of natural elements. I personally enjoy the project greatly. As someone who grew up in a crowded city without a lot of nature, I find this project the perfect combination of the natural and the human-made, and I think it would fit perfectly where I grew up. As for Veronica’s analysis, I think she did a stellar job. I could not agree more with her characterization of the piece as playful, tangible, and sensorial in effect. In fact, there is little to nothing I disagree with, as she keeps her analysis neutral and close to the facts. My one comment on the project overall is that it would have been nice to incorporate further natural elements. The installation uses artificial stalks that mimic natural grass, and I think it would be more interesting if more natural elements were incorporated, perhaps real grass.
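To sketch how such a reactive field might behave, here is a small hypothetical p5.js example where a row of “stalks” glows more brightly as the mouse (standing in for a sensed visitor) approaches; it is only a gesture at the concept, not SHO Architecture’s code.

var stalks = [];

function setup() {
    createCanvas(480, 320);
    for (var i = 0; i < 24; i++) {
        stalks.push({x: 10 + i * 20, h: random(80, 160)});
    }
}

function draw() {
    background(15, 25, 20);
    for (var i = 0; i < stalks.length; i++) {
        var s = stalks[i];
        var d = abs(mouseX - s.x); // distance to the "visitor"
        var glow = map(constrain(d, 0, 150), 0, 150, 255, 40);
        var sway = sin(frameCount * 0.05 + i) * 4; // gentle ambient sway
        stroke(90, glow, 120);
        strokeWeight(3);
        line(s.x, height, s.x + sway, height - s.h);
        noStroke();
        fill(150, glow, 180); // the LED tip
        ellipse(s.x + sway, height - s.h, 8, 8);
    }
}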

A video of the project installed in Georgetown (2015).

 

Installation view in Georgetown.

An image of the project on site at Georgetown’s Business Improvement GLOW conference.

JasonZhu-LookingOutwards-09

This week, I reviewed Rachel Lee’s Week 3 assignment, Mushtari. A collaborative work by the MIT Media Lab and Stratasys, Mushtari is a wearable that mimics biological growth patterns. I thought the project was both insightful and intriguing, and I agree with much of Rachel’s commentary. I think she hit the nail on the head when she said that Living Mushtari allows “existing biological structures to take place naturally and respects its processes.” Adding to that thought, I think the piece brings attention not only to the natural elements that permeate our daily lives, but also to the elements that are unseen. In taking such a unique approach at a micro scale, the piece really distinguishes itself from similar projects. The one part of Rachel’s commentary I took issue with is her discussion of how the project harnesses biological processes and adapts them for greater human use in a non-destructive way. While the same thoughts are reiterated in the official MIT posting, I do not see eye to eye on the non-destructive aspect of the project.
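For a sense of what “mimicking biological growth patterns” can mean computationally, here is a minimal p5.js sketch of recursive branching, a much simpler relative of the generative processes behind Mushtari; the actual project’s algorithms are far more sophisticated, and nothing here is drawn from them.

function setup() {
    createCanvas(480, 320);
    background(250);
    stroke(60, 40, 80);
    branch(width / 2, height, -HALF_PI, 70, 7);
    noLoop(); // draw the structure once
}

// draw a limb, then recursively grow two smaller limbs from its tip
function branch(x, y, angle, len, depth) {
    if (depth === 0) return;
    var x2 = x + cos(angle) * len;
    var y2 = y + sin(angle) * len;
    strokeWeight(depth);
    line(x, y, x2, y2);
    // each child is shorter and veers off at a random angle
    branch(x2, y2, angle - random(0.2, 0.5), len * 0.72, depth - 1);
    branch(x2, y2, angle + random(0.2, 0.5), len * 0.72, depth - 1);
}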


Video of the project by the MIT Media Lab


Closeup of some of the elements in the wearable from Living Mushtari

Yingying Yan-Project-09-Portrait

sketch

/*
Yingying Yan
Section E
yingyiny@andrew.cmu.edu
Project - 09
*/

var zoey; //a picture

function preload() {
    var zoeyPic = "https://i.imgur.com/QcSx6L4.jpg";
    zoey = loadImage(zoeyPic);
}

function setup() {
    createCanvas(480, 320);
    background(0);
    zoey.loadPixels(); //get pixels from the picture
    frameRate(30);
}

function draw() {
    var x = random(width); //x position of the stroke
    var y = random(height); //y position of the stroke
    var px = constrain(floor(x), 0, width-1); //x position of each pixel
    var py = constrain(floor(y), 0, height-1); //y position of each pixel
    var theColorAtLocationXY = zoey.get(px, py); //get the color of each pixel
    stroke(theColorAtLocationXY); //draw out zoey with small stokes
    strokeWeight(random(1,5));
    line(x, y, x - random(0, 9), y - random(0, 9));
}

I regret using my little sister’s picture. She is so cute, but I am not good enough to render her picture into something better. That is okay, though, because she will never see this. Overall, I think this project is interesting; it is like making a filter for Photoshop. I started the project by trying to do something fancy, but I failed. I will try to fix that after I turn this in.

Final result

Mimi Jiao – Looking Outwards 9 – Section E

Ryoji Ikeda, data.path in Madrid

I really enjoyed Sophie Chen’s Looking Outwards 4, where she talked about computational music artist Ryoji Ikeda. He specializes in creating sounds based on data, and his works are a form of data visualization. In Ikeda’s work data.path, I agree with Sophie’s point that the audio and the visuals are intertwined and integral to each other. I, too, am interested in how the two tie together to create an environment. I am intrigued by data.path specifically because the darkness of the installation space, paired with the visuals and audio, creates a sense of spatial dimension even when the viewer is standing still. The lack of light dulls the viewer’s sense of sight, making them more receptive to the audio and visual installation, and I would love to experience it first-hand. Ikeda’s work demonstrates the strength of the effect audiovisual elements have on the experience and perception of a space, and it is a perfect example of how computation can be applied to artistic ends.
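As a rough visual pastiche of that barcode-like aesthetic (with no claim about Ikeda’s actual software, and with sound omitted entirely), here is a tiny p5.js sketch that flickers columns of random “data” ticks.

function setup() {
    createCanvas(480, 320);
    frameRate(15); // slow the flicker to a readable pace
}

function draw() {
    background(0);
    stroke(255);
    // each cell has a small chance of holding a data "tick"
    for (var x = 0; x < width; x += 8) {
        for (var y = 0; y < height; y += 6) {
            if (random() < 0.15) {
                line(x, y, x + 6, y);
            }
        }
    }
}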