dayoungl Looking Outwards – 10

Caroline Sinders’ Nudge Nudge

Idea sketch
Diagram of her methods
Prototype in the form of a lamp.

For this week’s Looking Outwards, we are focusing on women practitioners in the field of computational design. From a list given by the professor, I randomly selected an artist named Caroline Sinders. She introduces herself as many things, but I found it especially interesting that she describes herself as a machine learning designer and a digital anthropologist. Looking through her website, I realized that much of what she does and experiments with, such as human-computer interaction (HCI) and the merging of technology and design, is done frequently at CMU. I felt a particularly strong connection to one of her works, “Nudge Nudge”, a wearable watch that doesn’t tell time but tells you something in relation to time: Google Calendar events. The idea itself is clever. The wearable relates the current time to a meeting, class, or anything else marked in the calendar, and conveys how much time you have left through variations in colour.

Another thing that drew me in was her ideation process, which is very similar to what we do in our research methods class as Design majors. She thought not only about the aesthetics and delivery of the colour-changing idea, but also about how people respond to it: she put stickers on participants’ shirts and measured how distracted they felt when reminded of the stickers, as a proxy for how distracting the Nudge would be to people near someone wearing it. Little considerations like this reflect what I have learned in class, and through her post I was able to see those learnings put into practice.
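Her core mechanic, mapping the time remaining before a calendar event onto a colour, is simple enough to prototype. Below is a minimal p5.js sketch of that idea; the 45-minute event and the green-to-red ramp are my own placeholder assumptions, not details of Sinders’ actual design.

var eventMinutes = 45; // placeholder: pretend the next calendar event is 45 minutes away

function setup() {
    createCanvas(200, 200);
}

function draw() {
    // minutes remaining until the (hypothetical) event
    var minutesLeft = max(0, eventMinutes - millis() / 60000);
    // an hour or more away reads as calm green; zero minutes reads as urgent red
    var urgency = constrain(map(minutesLeft, 60, 0, 0, 1), 0, 1);
    background(lerpColor(color(80, 180, 120), color(220, 60, 60), urgency));
}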

yoonyouk-project10-landscape

sketch

//Yoon Young Kim
//Section E
//yoonyouk@andrew.cmu.edu
//Project10


// Simple demonstration of the noise() function. 
// Change these for different effects:
var terrainSpeed = 0.0005;
var terrainDetail = 0.003;

var cacti = [];


function setup() {
    createCanvas(480, 300);
    background(255);

    // create an initial set of cacti at random x positions
    for (var i = 0; i < 5; i++) {
        var rx = random(width);
        cacti[i] = makeCacti(rx);
    }
    frameRate(10);
}
 


function draw() {
    background(255, 192, 141);

    // sun with a soft halo
    stroke(255, 231, 101, 80);
    strokeWeight(20);
    fill(250, 212, 87);
    ellipse(width/2, height/2 - 30, 200, 200);

    // background ridge: lightest color, slowest drift
    fill(196, 100, 76);
    noStroke();
    beginShape(); 
    for (var x1 = 0; x1 < width; x1++) {
        var t1 = (x1 * terrainDetail/2) + (millis() * terrainSpeed/3);
        var y1 = map(noise(t1), 0,1, 75, height/2+100);
        vertex(x1, y1); 
    }

    vertex(x1, height);
    vertex(0, height);
    vertex(0, y1);
    endShape();



    // midground ridge: darker, finer noise, faster drift
    fill(102, 36, 39);
    noStroke();
    beginShape(); 
    for (var x = 0; x < width; x++) {
        var t = (x * terrainDetail*1.5) + (millis() * terrainSpeed*1.5);
        var y = map(noise(t), 0,1, 170, 3*height/4);
        vertex(x, y); 
    }
    vertex(x, height);
    vertex(0, height);
    vertex(0, y);
    endShape();



    // foreground ridge: darkest, finest noise, fastest drift
    fill(25, 7, 5);
    noStroke();
    beginShape(); 
    for (var x2 = 0; x2 < width; x2++) {
        var t2 = (x2 * terrainDetail*2) + (millis() * terrainSpeed*3);
        var y2 = map(noise(t2), 0,1, height/2 + 50, height);
        vertex(x2, y2); 
    }
    vertex(x2, height);
    vertex(0, height);
    vertex(0, y2);
    endShape();


    updateAndDisplayCacti();
    addCacti();

}




function updateAndDisplayCacti(){
    // Update the cacti's positions, and display them.
    for (var i = 0; i < cacti.length; i++){
        cacti[i].move();
        cacti[i].display();
    }
}




function addCacti() {
    // With a very tiny probability, add a new cactus to the end.
    var newCactiLikelihood = 0.007; 
    if (random(0,1) < newCactiLikelihood) {
        cacti.push(makeCacti(width));
    }
}


// method to update a cactus's position every frame
function cactiMove() {
    this.x += this.speed;
}
    

// draw a cactus body with two arms
function cactiDisplay() {
    fill(25, 7, 5);
    push();
    translate(0, 120);
    rect(width/2+this.x, height/2 -60, 20, 70, 200, 200, 0 ,0);
    rect(width/2+15+ this.x, height/2 - 20, 20, 10);
    rect(width/2+25+ this.x, height/2-30, 10, 20, 200, 200, 0, 0);
    rect(width/2 - 15+ this.x, height/2 - 35, 15, 10);
    rect(width/2-15+ this.x, height/2-50, 10, 20, 200, 200, 0, 0);
    pop();
}


function makeCacti(birthLocationX) {
    var plant = {x: birthLocationX,
                speed: -1.0,
                r: random(0, 50), // currently unused
                move: cactiMove,
                display: cactiDisplay,
                }
    return plant;
}


For this week’s landscape project, I decided to do a desert scene. I used the noise() function to create different layers of the landscape, essentially building a foreground, midground, and background. The frontmost layer, the foreground, displays the most detailed terrain along with cacti silhouettes. I found it a bit challenging to create the randomly generated cacti that move across the screen, and I have yet to figure out how to actually place the cacti on top of the terrain.
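One possible way to anchor the cacti to the terrain, offered here only as a sketch and not as part of the original submission, is to re-evaluate the foreground layer’s noise expression at a cactus’s screen position and use the result as its baseline:

// hypothetical helper: returns the foreground ridge height at a given x,
// using the same formula as the frontmost beginShape() loop in draw()
function terrainYAt(screenX) {
    var t2 = (screenX * terrainDetail*2) + (millis() * terrainSpeed*3);
    return map(noise(t2), 0,1, height/2 + 50, height);
}

Inside cactiDisplay(), the cactus could then be translated so that its base sits at terrainYAt(width/2 + this.x) rather than at a fixed vertical offset.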

katieche-looking outwards 10

NEUROTiQ

The Sensoree NEUROTiQ spa was headed by Kristin Neidlinger, a biomedia concept designer. It consists of a headpiece that “animates” your brain by illuminating and mapping brain waves with color. As an experimental project, it was staged in the setting of a yoga class. The headpiece uses the Muse brain-sensing headband to identify the user’s activity and brain wave patterns, then displays colors according to the user’s cognitive state: attention, resting rate, meditation, and deep sleep.

This technology could be used to notify people of their more relaxed states of mind, which would promote mental well-being.
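As a thought experiment, that colour mapping can be sketched in a few lines of p5.js. The sketch below is purely illustrative: NEUROTiQ reads real EEG data from the Muse headband, whereas here a noise() signal stands in for the sensor stream, and the thresholds and colours are my own placeholders.

function setup() {
    createCanvas(200, 200);
}

function draw() {
    // fake brain-activity level in 0..1; a real version would read the Muse sensor
    var level = noise(millis() * 0.0005);
    if (level < 0.4) {
        background(40, 90, 220); // placeholder: deep blue for restful states
    } else if (level < 0.6) {
        background(130, 80, 200); // placeholder: violet for meditation
    } else {
        background(240, 140, 40); // placeholder: orange for high attention
    }
}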

katieche-project-10

katieche-10

/*
katie chen
katieche@andrew.cmu.edu
project 10
section e
*/

var terrainSpeed = 0.0002;
var terrainDetail = 0.002;
var backdetail = 0.003;
var clouds = [];
var cacti = [];
var camelfr = []; // loaded camel animation frames
var cam = []; // camel objects

function preload() {
	var camelfile = [];
		camelfile[0] = "https://i.imgur.com/bDUcYTm.png";
		camelfile[1] = "https://i.imgur.com/6dVVrob.png";
		camelfile[2] = "https://i.imgur.com/hbSKaEk.png";
		camelfile[3] = "https://i.imgur.com/7mLCzwN.png";
		camelfile[4] = "https://i.imgur.com/ajswkv9.png";
		camelfile[5] = "https://i.imgur.com/5PYiIL8.png";
		camelfile[6] = "https://i.imgur.com/izwJZyn.png";
		camelfile[7] = "https://i.imgur.com/bHlNbyH.png";

		for (var i = 0; i < 8; i++) {
			camelfr[i] = loadImage(camelfile[i]);
		}
}


function setup() {
    createCanvas(480, 480);

    // making initial collection of objects
    for (var i = 0; i < 10; i++){
    	var rx = random(width);
        clouds[i] = makeCloud(rx);
        cacti[i] = makeCactus(rx);
        cam[i] = makeCam(rx);
    }

  	frameRate(10);
}
 
function draw() {
    background(185, 174, 176);
    push();
    noStroke();
    // stacked rectangles create a subtle sky gradient toward the horizon
    fill(188, 177, 178);
    rect(0, 140, width, height - 140);
    fill(195, 180, 176);
    rect(0, 170, width, height - 170);
    fill(200, 185, 176);
    rect(0, 230, width, height - 230);
    fill(207, 187, 172);
    rect(0, 260, width, height - 260);
    pop();
    
    ground();

	updateAndDisplayCacti();
	cactusAdd();

    updateAndDisplayCam();
    camAdd();

    updateAndDisplayClouds();
	cloudAdd();
    
}

function ground() {
    // background ridge
    push();
    noStroke();
    fill(200, 164, 140); 
    beginShape(); 
    vertex(0, height); // bottom-left corner anchors the filled shape
    for (var x = 0; x < width; x++) {
        var t = (x * backdetail) + (millis() * terrainSpeed);
        var y = map(noise(t), 0,1, 200, height-50);
        vertex(x, y); 
    }
    vertex(width, height); // close the shape at the bottom-right corner
    endShape();
    pop();

    // foreground ridge
    push();
    noStroke();
    fill(181, 121, 78); 
    beginShape(); 
    vertex(0, height);
    for (var x = 0; x < width; x++) {
        var t = (x * terrainDetail) + (millis() * terrainSpeed);
        var y = map(noise(t), 0,1, 270, height);
        vertex(x, y); 
    }
    vertex(width, height);
    endShape();
    pop();
}

function updateAndDisplayClouds(){
    // Update the clouds' positions, and display them.
    for (var i = 0; i < clouds.length; i++){
        clouds[i].move();
        clouds[i].display();
    }
}


function cloudAdd() {
    // With a very tiny probability, add a new cloud to the end.
    var newcloudLikelihood = 0.02; 
    if (random(0,1) < newcloudLikelihood) {
        clouds.push(makeCloud(width));
    }
}

// moving the clouds
function cloudMove() {
    this.x += this.speed;

}
    
// drawing the clouds
function cloudDisplay() {
	push();
	translate(0, 50); // this.x is already applied in the rect calls below
	noStroke();
	if (this.z < 30) {
		fill(225, 210, 192);
	}

	if (this.z > 30 && this.z < 50) {
		fill(222, 202, 182);
	}
	if (this.z > 50) {
		fill(218, 194, 174);
	}
	
	rect(23+this.x, 50+this.z, 100+this.l, 25+this.h, 200, 200, 200, 200);
	rect(60+this.x, 25+this.z, 50, 50, 200, 200, 200, 200);
	rect(50+this.x, 35+this.z, 30, 30, 200, 200, 200, 200);
	pop();

}

// making the clouds
function makeCloud(cx) {
    var cloud = {x: cx,
    			z: random(0, 150),
    			l: random(0,20),
    			h: random(0,20),
 				speed: -1.0,
                move: cloudMove,
                display: cloudDisplay
            }
    return cloud;
}

// CACTUS
function updateAndDisplayCacti(){
    // Update the cacti positions, and display them.
    for (var i = 0; i < cacti.length; i++){
        cacti[i].tmove();
        cacti[i].tdisplay();
    }
}

function cactusAdd() {
    // With a very tiny probability, add a new cactus to the end.
    var newcactusLikelihood = 0.02; 
    if (random(0,0.5) < newcactusLikelihood) {
        cacti.push(makeCactus(width));
    }
}

// moving the cactus
function cactusMove() {
    this.mx += this.mspeed;

}

// draw the cactus
function cactusDisplay() {
    push();
    noStroke();
    translate(0, 200); // this.mx is already applied in the rect calls below
    fill(131-this.cr, 170, 124-this.cr);
    rect(50+this.mx,50+this.cacter,25+this.wid, 90+this.hei, 200, 200, 0,0);
    rect(50+this.mx+this.wid,80+this.cacter,40,10,200,200,200,200);
    rect(80+this.mx+this.wid,60+this.cacter,10,30,200,200,200,200);
    rect(30+this.mx,90+this.cacter,40,10,200,200,200,200);
    rect(30+this.mx,70+this.cacter,10,30,200,200,200,200);
    pop();
}

// making the cacti
function makeCactus(tx) {
    var cactus = {mx: tx,
 				mspeed: -2.5,
 				hei: random(-10,20), // tallness of main cactus body
 				wid: random(0,5), // fatness of main cactus body
 				cr: random(0,50), // color 
 				cacter: random(70, 180), // y value
                tmove: cactusMove,
                tdisplay: cactusDisplay
            }
    return cactus;
}

// CAMEL
function updateAndDisplayCam(){
    // Update the camel positions, and display them.
    for (var i = 0; i < cam.length; i++){
        cam[i].cmove();
        cam[i].cdisplay();
    }
}

function camAdd() {
    // With a very tiny probability, add a new camel to the end.
    var newcamLikelihood = 0.02; 
    if (random(0,1) < newcamLikelihood) {
        cam.push(makeCam(width));
    }
}

// moving the camel
function camMove() {
    this.camx += this.cspeed;

}

function camDisplay() {
    push();
    translate(this.camx, 270); // each camel follows its own x position
    scale(0.3);
    var num = frameCount % 8; // cycle through the 8 walk frames
    image(camelfr[num], 0, 0);
    pop();
}

// making the camel
function makeCam(ax) {
    var camel = {camx: ax,
 				cspeed: -1.0,
                cmove: camMove, // was cactusMove, which referenced undefined fields
                cdisplay: camDisplay // was cactusDisplay
            }
    return camel;
}

I had a lot of fun working on this project. I wanted to create a desert landscape, so I selected colors from an image of the Grand Canyon. I gave the background a subtle gradient to create more of a “never-ending stretch of land” horizon-line feeling. The cacti were a fun addition that I created in p5.js before realizing that I could’ve made them in Illustrator and uploaded them as image files. Upon that realization, I decided to add the moving camel. The original gif is from here, but I took each frame and edited it in Illustrator to make it look more uniform in my code.

agusman-LookingOutwards-10

Reverb
Reverb Video Documentation

REVERB is a project by new media artist and robot tamer Madeline Gannon that bridges the virtual and physical contexts of fabrication and wearable design.

This 3D modeling environment allows the artist or designer to guide the movement of a 3D asset across the curvature of the body, leaving a digital trace that is then 3D printed and transformed into wearable jewelry. The gestures of the artist’s hand serve as the guiding path along which the 3D asset leaves its trace. Madeline refers to this method as a sort of digital chronomorphology, an offshoot of chronophotography, which is a composite recording of an object’s movement.

The pipeline of Madeline’s chronomorphological process begins with a depth sensor (e.g. a Kinect) that records depth data of the person being fitted for the wearable. A point cloud is then generated and imported into a 3D workspace. Once the point cloud is positioned, the same depth sensor detects the motion of the artist’s hands as they move around it, forming the path along which a small 3D asset travels. Finally, the generated chronomorphological 3D model is printed and worn.
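A rough 2D analogue of the process can be sketched in p5.js. This is my own toy illustration, not Gannon’s implementation, which operates on a 3D point cloud captured by a depth sensor; the mouse stands in for the tracked hand, and a translucent circle stands in for the swept 3D asset.

var trail = []; // recorded gesture positions (the guiding path)

function setup() {
    createCanvas(480, 300);
    background(255);
    noFill();
    stroke(0, 40); // translucent strokes, so overlapping stamps build up density
}

function draw() {
    trail.push(createVector(mouseX, mouseY)); // record the gesture over time
    // stamp the "asset" at the newest point; because the canvas is never
    // cleared, the accumulated stamps read as a single composite trace
    var p = trail[trail.length - 1];
    ellipse(p.x, p.y, 20, 20);
}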

Reverb is one of Madeline’s best-known pieces, widely covered by technology magazines and conferences in America. Other successful works include Mimus, a whimsical robot with childlike wonder and curiosity about the humans around it, as well as Tactum, another exploration in 3D printed wearables that can be custom-fit and distributed online. While her educational background is predominantly in architecture, she has studied computational design, art, and robotics in more recent years. She is currently attending Carnegie Mellon and will be graduating in May with a PhD in computational design.

gyueunp – Looking Outwards 10

Diesel: New Natures SS16 project (2016) by FIELD

Vera-Maria Glahn is a co-founder of FIELD, a specialised creative studio in London that combines art and technology to create immersive audio-visual experiences. She was the executive producer for FIELD’s Diesel: New Natures SS16 project, one of my favorite works created by the studio. Commissioned by Diesel, it is a series of films created for the company’s NYC flagship store retail installation. Through minimal graphical interventions in tropical scenery, the short films discuss the obsessions of our digital culture and our failure to recognize the natural beauty of our surroundings. The work has not only made me reflect on that issue, but also successfully engaged me with its visual elements. I especially loved how point-of-view shots allow the viewer to enter the space created by the project. However, I am confused as to how the project relates to Diesel and its products; a brief explanation would have been beneficial.

Interview with Vera-Maria Glahn

rmanagad-lookingoutward10-sectione

Creator: Anton Repponen

Title of Work: UP Memory Tower

Year of Creation: 2015

Link to Project Work: http://work.repponen.com/UP-Memory-Tower

Link to Artist Bio: http://work.repponen.com/About


Anton Repponen is an interaction designer focused on developing visual identities, forms of interaction, and livable spaces. This is evident in his redesign of the hardware memory system, the UP Memory Tower.

Interactive memory hub.

Repponen found a design opportunity within the product ecosystem of external hard drives: he saw the process of buying new hard drives or upgrading to better ones as “not-connected.” He likewise identified the limitations of the web-based cloud, such as needing internet access and having to log in to an account.

The UP Memory Tower personalizes the external hard drive: users add to a base-memory dock when more storage is needed, so all memory is still stored in one central location. At the same time, a portion of the drive can be taken with you; it still acts as portable storage, like a USB drive or SD card. I find this product fascinating for its versatility and its accordance with social conventions: it sustains the expected behavior of existing products while introducing new behavioral concepts for an established object.

juyeonk-LookingOutwards-10

^ Shantell using the motion-sensor device to leave a trail.

^ Rhino screen showing the trail of the device.

Title: Drawing Trails

Artist:  Shantell Martin

Year of Creation: 2016

Link to the Article of the Project:  https://shantellmartin.art/work/drawing-trails/

Link to the Bio of the Artist: https://shantellmartin.art/info/

 

Drawing Trails was a collaboration between visual/digital artist Shantell Martin and architect Maria Yablonina to visualize the trails we leave behind our motions. The concept explores the idea that whatever we do and wherever we go, we leave inevitable traces behind us, and asks what those traces would look like if they were digitally captured and 3D printed.

First, a motion-sensor device detects the movement of the object and transfers the information to a computer, where a customized program built on Rhino and Grasshopper visually maps the data on the screen. The visual information in the Rhino workspace is then 3D printed, giving us a chance to observe, visually and tangibly, the invisible trails we have left behind for the first time.
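A rough 2D approximation of that trail capture, with the mouse standing in for the motion-sensor device and the canvas standing in for the Rhino workspace (my own sketch, not the project’s actual pipeline), might buffer recent positions and render them as a fading line:

var trail = [];
var maxPoints = 100; // how much of the motion history to keep

function setup() {
    createCanvas(480, 300);
}

function draw() {
    background(255);
    trail.push(createVector(mouseX, mouseY)); // sample the current position
    if (trail.length > maxPoints) {
        trail.shift(); // discard the oldest sample so the trail fades out
    }
    // draw segment by segment so that older segments appear fainter
    for (var i = 1; i < trail.length; i++) {
        stroke(0, map(i, 1, trail.length, 30, 255));
        line(trail[i-1].x, trail[i-1].y, trail[i].x, trail[i].y);
    }
}

Exporting the buffered points instead of discarding them would yield the kind of data that Rhino could loft into a printable solid.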

The idea that we could observe our own trails is very interesting, but I am still a little dubious about whether it would be practical for capturing a larger degree of motion. Simply as a new means of creating sculptural art, however, this was a very unique attempt.

^ Sculptures created as part of Drawing Trails

dayoungl Looking Outwards – 09

yoonyoung’s looking outwards 1

For this week’s Looking Outwards, I looked into my friend Yoonyoung’s first Looking Outwards post, which highlights the BMO200 Fountain. (The topic of the first Looking Outwards post was a piece of technological art or design that inspires you.) BMO is a bank in Montreal, and this fountain was made to celebrate the bank’s 200th anniversary. I found her post on this piece interesting because the project was made not only by the artists but also with the members and employees of BMO. Although there isn’t much written about the employees’ specific contributions to the final piece, the interactive imagery of water falling into a pond is stunning; with this piece, users can “throw” a coin and make a wish from their mobile devices. I think the interactive portion of the project is a great demonstration of how machines and technology are able to copy or iterate human motion. The idea behind it is that interaction doesn’t have to happen in a single shared visual space; it can extend to individual users’ devices too. The YouTube video above is a good compilation of the project’s process from beginning to end.

alchan-Looking Outwards 10

(promotional image for Ooblets, by Rebecca Cordingley)

Rebecca Cordingley (or nonplayercat) is an indie game designer/developer/artist. She’s currently making Ooblets (alongside Ben Wasser), a game described as “Harvest Moon meets Pokémon meets Animal Crossing,” due to be released sometime in 2018. Cordingley primarily uses Unity, Maya, and Mixamo to develop and 3D-model the game, along with programs like Photoshop and Illustrator.

(gifs from the in-progress game, by Rebecca Cordingley)

I really like the game’s simplified visual style, which I think works particularly well with its focus. The aspect of the project I appreciate most, though, is how open Cordingley has been about the game’s development: both she and Wasser regularly share screenshots, gifs, and progress posts on the work they’re doing on the game.