CJ Walsh – Looking Outwards 09

Mass Windwalk in Sydney 2013

After looking through the reflections of several other students, I found a post that Julia Nishizaki wrote for the Week 6 Looking Outwards on randomness. It was the series of images she included that really drew me in and made me want to learn more about this project.

The project, called Windwalks, was created by British artist Tim Knowles. Much of his work focuses on movement and on creating images through the movement of forms and objects that he cannot control. This project uses the movement of the wind to create line drawings. Participants wear helmets with large arrows affixed to them. As a participant moves, the arrow is pushed by the wind, and the direction of the arrow determines where they will move next. Knowles collects data from the helmet about the path the wearer took and uses it to create drawings.

I appreciate that Julia mentions that the project is not directly related to computation, though it definitely has to do with random image generation. The path that the wind creates cannot be predicted. But it is not just the wind that is random: the wind informs the random movement of the wearers, which in turn informs a randomly generated line drawing. I find the different layers of this project super intriguing. The other element that I really enjoy is that we often don't think about the wind's effect on our daily movement (except when it's very strong, when people do notice). Generally, the movement of the wind doesn't determine our pathways through an environment, so it is interesting to see the wind become the controlling force in this instance. I also really appreciate the line drawings: you can see the map-like qualities in them, yet they are also simple line forms that draw attention.

One point of Julia’s that I thought was very interesting was pointing out the ways that the line drawings suggest the environment that the walks take place in. The buildings and other structures inform the movement of the wind and therefore also determine the map that is translated for the drawings.

Overall, I found this project super intriguing, and I'm glad I was able to find something really interesting by looking through someone else's reflections. I'm really interested in learning more about this artist and his other work.

looking outward – Ilona Altman – 09

I agree very much with Yoshi that this project is very beautiful and incredible in the way it is effective for a specific user. I think it is interesting how something so useful could also be formally beautiful, and that there would be a commercial incentive toward computer-generated forms because of the ease with which they can be individualized. It is interesting to me that the same forms which occur in nature can be used in the design process, and that there is a warmth inherent to structures that resemble natural forms. I think it is beautiful that there is such a deep relationship between growing natural forms and a sort of geometry that unifies that which exists within nature. This makes me think about fractals and sacred geometry.

I think it could have been interesting if the entire shoe, instead of just the sole, was constructed in this way… 

A video of the project: New Balance generated soles based on pressure distribution in the body
Yoshi’s response I was inspired by

Nadia Susanto – Project 09 – Computational Portrait

sketch

// Nadia Susanto
// nsusanto@andrew.cmu.edu
// Section B
// Project-09-Computational Portrait

var underImage;

function preload() {
    //preloading image from imgur
    //var myImage = "https://i.imgur.com/R80wzCp.jpg";
    var myImage = "https://i.imgur.com/uezsOBb.jpg";
    underImage = loadImage(myImage);
}

function setup() {
    createCanvas(480, 480);
    //resize the image so it fits the canvas
    underImage.resize(480, 480);
    background(0);
    underImage.loadPixels();
    frameRate(100);

}

function draw() {
    var px = random(width);
    var py = random(height);
    var ix = constrain(floor(px), 0, width-1);
    var iy = constrain(floor(py), 0, height-1);
    //gets colors from specific image location
    var colorXY = underImage.get(ix, iy);

    //random rectangles
    stroke(colorXY);
    strokeWeight(random(1, 10));
    noFill();
    rect(ix, iy, 20, 10);
}

//random ellipses when mouse is dragged
function mouseDragged() {
    ellipse(mouseX, mouseY, random(5, 50), random(5, 30));
}

For my portrait I used a picture of myself in a bamboo forest in Japan. I chose this picture because it has many different shades of green. Since there was only one main color, I wanted to incorporate different shapes: the main shape is a rectangle, and when the mouse is dragged it switches to ellipses.

A few seconds
30secs – 1 minute
Original image

Hyejo Seo-Project 09 – Portrait


sketch

/*
Hyejo Seo
Section A
hyejos@andrew.cmu.edu
Project - 09 - Portrait
*/
var myPicture;

function preload() {
    var myImageURL = "https://i.imgur.com/1g3A3AE.jpg";
    myPicture = loadImage(myImageURL);
}
function setup() {
    createCanvas(myPicture.width, myPicture.height);
    background(0);
    // loading pixels
    myPicture.loadPixels();
    // setting the frame rate high so the letters fill in quickly
    frameRate(200);

}

function draw() {
    var px = random(width);
    var py = random(height);
    var xx = constrain(floor(px), 0, width - 1);
    var xy = constrain(floor(py), 0, height - 1);
    // getting color of each pixel 
    var theColorAtXY = myPicture.get(xx, xy);
    noStroke();
    fill(theColorAtXY);
    // writing the letter R in each pixel randomly
    textSize(20);
    textFont('Avenir');
    text("R", px, py);  

}

For this project, I decided to use a picture of my best friend, Raphael. I set the frame rate fairly high so the letter “R” fills up the canvas fast enough. This was an interesting project overall, and I got more comfortable playing around with pixels.

Mari Kubota- Project-09- Portraits

sketch

/*
Mari Kubota
Section D
mkubota@andrew.cmu.edu
Assignment-09
*/

var lines = []; // Empty array to store the particles that draw the lines
var underlyingImage; // portrait image

// Load image
function preload() {
    underlyingImage = loadImage("https://i.imgur.com/23aauRel.jpg");
}

function setup() {
    createCanvas(550, 420);
    underlyingImage.loadPixels(); 
}

// Draws one particle as an ellipse colored from the image
function makelines() {
    var ix = constrain(floor(this.x), 0, width - 1);
    var iy = constrain(floor(this.y), 0, height - 1);
    var theColorAtLocationXY = underlyingImage.get(ix, iy); // gets the color value of the image at (x, y)
    var linethickness = 5; // diameter of the ellipses that form the lines

    fill(theColorAtLocationXY); // fills the ellipse with the color of the image at (x, y)
    noStroke();
    ellipse(this.x, this.y, linethickness); // draws the ellipse at (x, y)
}

// Makes the lines move: shifts a particle horizontally by its speed
function linespeed() {
    this.x += this.dx;
}

// Creates a particle object with a position, a speed, and its methods
function drawPortrait(originalx, originaly, originaldx) {
    var p = {x: originalx,
             y: originaly,
             dx: originaldx,
             speed: linespeed,
             lines: makelines,
            };
    return p;
}

//draws the lines
function draw() {
    newLines = []; // Creates an empty array for new lines
    for (var i = 0; i < lines.length; i++) { 
        var p = lines[i];
        p.speed(); // moves the particle (linespeed from drawPortrait)
        p.lines(); // draws the particle (makelines from drawPortrait)
        newLines.push(p); 
    }
    lines = newLines;
}


// When the mouse moves, creates a new particle at (mouseX, mouseY) with a
// random horizontal speed and stores it in the lines array
function mouseMoved() {
    var newLine = drawPortrait(mouseX, mouseY, random(-20, 20));
    lines.push(newLine);
}

For this project I used a picture of myself from over the summer, when I was learning how to drive. The image appears when you mouse over the canvas, revealing the picture in horizontal stripes made using ellipse().

Sarah Choi – Looking Outwards – 09

A series of walks directed by the wind using an array of mechanisms, filmed and plotted by GPS.

50 wind walks simultaneously depart Taylor Square, Sydney and walk guided solely by the wind for 1 hr, dispersing through the city their routes [recorded live via a tailor-made smartphone app] create a live drawing.

Browsing through the Looking Outwards assignments, I chose Julia Nishizaki’s Looking Outwards-06 about Tim Knowles’ Windwalks project. This London-based artist uses an array of mechanisms to create walks directed by the wind. The walks were filmed and plotted with GPS, and the line drawing below was the outcome of the design. This immediately stood out to me given my interest in art portraying natural forms of our world. I completely concur with my peer’s assessment of Knowles’ art as she explains the incorporation of the randomness of wind patterns and how this creates a more meaningful experience for the audience.

However, I feel this creates more than just “deeper relationships between individuals, their cities, and the wind itself.” I believe Knowles was trying to recreate and appreciate impalpable aspects of nature, such as the wind, by involving people in his project. He created a series of walks starting in Taylor Square, Sydney, allowing people to disperse and walk through city routes for a full hour. By creating this interactive but calculated schema, he was able to bring people closer to their environment and nature, and also gave them a greater appreciation of art in general. Most of the time, people are unaware of the empirical research involved in bringing art forms to a wider audience. Knowles was able to use his greatest passion to depict aspects of life that often get forgotten and underappreciated.

7 channel video projection, mixed media object and route plot as wall drawing, 2009

http://www.timknowles.co.uk/Work/Windwalks/tabid/496/Default.aspx

Fallon Creech-LookingOutwards-09

Video demonstrating the wind map’s abilities, including examples of updates every hour

I am reflecting on Crystal Xue’s study, from Looking Outwards-07, of a wind map created by collaborators Martin Wattenberg and Fernanda Viégas. The two co-lead Google’s PAIR initiative and specialize in information visualization, which is highly evident in their work. Their wind map generates delicate strokes of wind across a flattened map of the U.S., conveying the direction and intensity of the wind; the map updates every hour and even allows users to zoom in, letting the wind describe smaller locations more easily.

As Crystal mentioned, the map visualizes complex information in a simplistic and legible manner, giving a wide spectrum of users with different interests useful information. Wattenberg explained, “Bird watchers have tracked migration patterns, bicyclists have planned their trips and conspiracy theorists use it to track mysterious chemicals in the air.”

While the map is highly informative and demonstrates several layers of information through lines, I appreciate its ability to also act as a purely artistic piece. The changes in wind patterns and stroke thickness have the ability to evoke different emotions, which in many ways directly correlate with the weather experienced during that hour and in certain locations.

Snapshot of wind map in zoomed-in location, demonstrating wind patterns at a point in time

Alec Albright – Project 09 – Portrait

sketch

// Alec Albright
// aalbrigh@andrew.cmu.edu
// Section B
// Project 09

var original; // original image
var words; // bank of words to be displayed

function preload(){
    // original image URL
    var imageURL = "https://i.imgur.com/pfJvLUW.jpg";
    // load original image
    original = loadImage(imageURL);

    // populating the words array
    words = ["Years", "from", "now", "our", "past", "will", "be", "a", "story"];
}

function setup(){
    createCanvas(384, 480);

    // resize image to fit in constraints
    original.resize(384, 480);
    // load pixels
    original.loadPixels();

    background("black");
    frameRate(20);
}

function draw(){
    // initializing random places to place words
    var px = random(width);
    var py = random(height);

    // to be used for finding the color from the original image
    var ix = constrain(floor(px), 0, width-1);
    var iy = constrain(floor(py), 0, height-1);

    // getting color from original image
    var theColorAtLocationXY = original.get(ix, iy);

    // text size dependent on mouseX
    textSize(map(mouseX, 0, 384, 8, 14));


    // displaying words dependent on where they're placed
    // top row
    if(py < 160) {
        // "years"
        if(px < 128) {
            fill(theColorAtLocationXY);
            text(words[0], px, py);
        // "from"
        } else if(px >= 128 && px < 256) {
            fill(theColorAtLocationXY);
            text(words[1], px, py);
        // "now"
        } else {
            fill(theColorAtLocationXY);
            text(words[2], px, py);
        }
    // middle row
    } else if(py >= 160 && py < 320) {
        // "our"
        if(px < 128) {
            fill(theColorAtLocationXY);
            text(words[3], px, py);
        // "past"
        } else if(px >= 128 && px < 256) {
            fill(theColorAtLocationXY);
            text(words[4], px, py);
        // "will"
        } else {
            fill(theColorAtLocationXY);
            text(words[5], px, py);
        }
    // bottom row
    } else {
        // "be"
        if(px < 128) {
            fill(theColorAtLocationXY);
            text(words[6], px, py);
        // "a"
        } else if(px >= 128 && px < 256) {
            fill(theColorAtLocationXY);
            text(words[7], px, py);
        // "story"
        } else {
            fill(theColorAtLocationXY);
            text(words[8], px, py);
        }
    }

    // fill in dots depending on where the mouse is
    var mouseColor = original.get(mouseX, mouseY);
    noStroke();
    fill(mouseColor);
    ellipse(mouseX, mouseY, 5);
}

In approaching this project, I immediately knew I wanted to create a portrait of my long-time girlfriend, displaying our favorite quote: “Years from now, our past will be a story”. I wasn’t quite sure how to go about displaying the quote, so I decided to section off areas of the canvas for each specific word so it reads like a book, from left to right and top to bottom. Once this was ironed out, the difficult part was choosing a text size so that the quote was readable but still allowed a reasonably quick creation of the image. I used this as an opportunity to implement a fun feature: I mapped the text size to the x coordinate of the mouse.
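The sectioning described above (three 128px columns by three 160px rows, read like a book) can also be expressed as a single index computation rather than nested if/else branches. A minimal plain-JavaScript sketch of that idea, with a hypothetical helper name `wordAt` (not part of the original code):

```javascript
// Map an (x, y) position on a 384x480 canvas to one of nine words,
// reading left to right, top to bottom, like the nested if/else version.
var words = ["Years", "from", "now", "our", "past", "will", "be", "a", "story"];

function wordAt(px, py) {
    var col = Math.min(Math.floor(px / 128), 2); // 3 columns, 128px wide each
    var row = Math.min(Math.floor(py / 160), 2); // 3 rows, 160px tall each
    return words[row * 3 + col];
}
```

With a helper like this, the body of draw() would only need `fill(theColorAtLocationXY); text(wordAt(px, py), px, py);`, and adding or rearranging words would mean editing only the array.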

30 seconds
1 minute
A couple minutes
Approximately 7 minutes
Original Image!

Nadia Susanto – Looking Outwards – 09

I looked at Ammar Hassonjee’s LO 2. He studied a generative art installation called “Breathing Wall II” (2014) by Behnaz Farahi, a USC architecture professor. The installation is made of wood, PVC, and fabric. To capture hand motions and signals, it uses systems such as Leap Motion and DC systems, and then projects contour lines onto the wall.

Video demonstrating the Breathing Wall installation

Ammar liked the relationship between movement, light, and color, and how it gives users the power to interact with that environment. He also says his favorite types of art are ones that are adaptive and involve user activity. I agree that this is my favorite type of art as well, because involving human behavior brings extra beauty to the artwork.

What I love about this project is that it seems so simple yet has a deeper meaning. Farahi mentioned how mobile devices use touch- and gesture-based languages like swiping, clicking, or dragging for natural control. With the rise of technology and social media, we are entranced by an environment of likes and of scrolling through people’s photos with meaningless connections. We forget the real environment surrounding us, so this project serves as a great reminder that those basic controls can be used to control our physical surroundings.

Farahi interacting with installation
A closer look at a user interacting with the wall. Notice also the contour lines on the wall.

To learn more about this installation, click the link below:

http://behnazfarahi.com/breathing-wall-ii/

Nawon Choi – Looking Outwards 09


Mark Kirkpatrick’s commission piece for Apple, Inc. 2017

For this week’s Looking Outward I will be responding to CJ Walsh’s post from week 5.

He analyzes works by digital artist and designer Mark Kirkpatrick. These works also caught my eye because of how serenely and beautifully they represent scenes in nature through a highly digital medium. I agree with CJ’s note about the cohesive color palettes, which really do a great job of setting the overall mood of each image. To add to this point, the soft gradients in the background (sky and sun), in contrast with the geometric shapes, also create interesting textures.

I also really appreciate that CJ researched and mentioned the artist’s background (or lack thereof) in the arts. Kirkpatrick has received no formal training in artistic practices, yet he was commissioned by Apple in 2017 to create the above image. I think this goes to show how far discipline and self-teaching can take a person.