Yoshi Torralva-LookingOutwards-02

Title: Nervous System for New Balance Data-Driven Midsoles
Date of Creation: 2015

No two people's running forms are quite alike. For this reason, there are hundreds of shoe brands and models designed for specific running styles. New Balance asked Nervous System to develop a shoe with personalized sole designs auto-generated by an algorithm from pressure-point data. For data collection, points of pressure and impact strikes are taken into account. The resulting sole is an organic design that echoes natural structures: Nervous System favors forms resembling cell structures, plant leaves, or spider webs to evenly distribute the runner's impact. What I enjoy about this project is how Nervous System develops structures that are both visually pleasing and highly effective for the specific user. It's interesting to see how New Balance saw an opportunity to build custom shoes just as other sports have custom gear. By partnering with a company like Nervous System, they were able to develop a shoe that's inclusive of all running forms based on a generative algorithm.
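The core idea of pressure-driven generative structure can be sketched in a few lines. This is only an illustrative guess at the mapping, not Nervous System's actual algorithm: the function name and the linear density formula are my own assumptions.

```javascript
// Hypothetical sketch: map foot-pressure samples to lattice-cell density,
// the basic idea behind a pressure-driven generative midsole.
// The names and the linear mapping are illustrative assumptions.
function pressureToDensity(pressures, minDensity, maxDensity) {
    const max = Math.max(...pressures);
    const min = Math.min(...pressures);
    // Higher pressure -> denser (more supportive) lattice at that point.
    return pressures.map(function (p) {
        const t = max === min ? 0 : (p - min) / (max - min);
        return minDensity + t * (maxDensity - minDensity);
    });
}

// Example: a heel strike reads higher pressure than the toe.
const densities = pressureToDensity([120, 40, 80], 1, 5);
```

A real system would work over a 2D pressure map and feed the densities into a structure generator, but the principle is the same: more impact, more material.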

Yoshi Torralva-Project-02-Variable-Face

sketch

//Yoshi Torralva
//Section E
//yrt@andrew.cmu.edu
//Project 02 Variable Face
var eyeSize = 100;
var eyemvmnt = 50;
var faceWidth = 400;
var faceHeight = 600;
var hatbottom = 320;
var hatbottomvmt = 320;
var Qhat = 160;
var Qhatx1y1x2y2 = 160;
var windw = 400;
var windh = 80;

function setup() {
    createCanvas(640, 480);
}

function draw() {
    background(232, 253, 255);
    //wind movements
    fill(181, 246, 255);
    noStroke();
    rect(windh, windw / 4, windw, windh);
    rect(windh * 2, windw / 2, windw * 2, windh / 2);
    rect(windh - windh, windw, windw, windh);
    rect(windh * 5, windw / 3.5, windw * 3, windh / 2.5);
    //face
    fill(255, 199, 15);
    ellipse(faceWidth * 0.8, faceHeight, faceWidth, faceHeight);
    //two parts of the hat: the base and the top
    fill(5, 59, 66);
    rect(hatbottom / 2, hatbottom / 1.1, hatbottomvmt, hatbottom / 4);
    quad(Qhatx1y1x2y2, Qhatx1y1x2y2 / 4, Qhatx1y1x2y2 * 3, Qhatx1y1x2y2 / 4, Qhat * 2.6, Qhat * 2, Qhat * 1.3, Qhat * 2);
    //variables for eyes
    var eyeL = width / 2 - faceWidth * 0.20;
    var eyeR = width / 2 + faceWidth * 0.20;
    //eyes
    fill(255);
    ellipse(eyeL, eyeSize * 4.5, eyeSize, eyeSize);
    ellipse(eyeR, eyeSize * 4.5, eyeSize, eyeSize);
    //pupils
    fill(0);
    ellipse(eyeL, eyeSize * 4.6, eyemvmnt, eyemvmnt);
    ellipse(eyeR, eyeSize * 4.6, eyemvmnt, eyemvmnt);
}

function mousePressed() {
    faceWidth = random(370, 450);
    //only animates the width of the hat for perspective
    //only moves the first two vertices of the hat's quad to keep the parts connected
    Qhatx1y1x2y2 = random(200, 330);
    eyemvmnt = random(40, 60);
    hatbottomvmt = random(350, 400);
    windh = random(20, 100);
    windw = random(200, 1000);
}

With this project, I focused on using random() to create movement and slight perspective shifts within the piece. Only certain variables of the face are animated, so that it shifts in perspective the same way the hat does. Rectangles in the background add depth and an extra visual cue of wind as the hat jolts to the right side of the screen.

Yoshi Torralva-Project-01-Face

When approaching this project, I wanted to create a self-portrait that was angular and built from similarly toned hues. Because I used mostly quadrilaterals, I needed to make sure their points lined up with each other to create a unified facial structure. Additionally, I added mouse tracking to a square in the background for an extra element of depth.

sketch

//Yoshi Torralva
//Section E
//yrt@andrew.cmu.edu
//Project-01
function setup() {
    createCanvas(600,600);
}
function draw() {
    background(249,33,33);
//background with mouse tracking for added depth
    fill(234,24,24);
    rect(mouseX,mouseY,500,500);
//shirt
    fill(51,3,14);
    noStroke();
    quad(122,517,374,408,374,600,91,600);
    quad(374,408,507,432,579,600,292,600);
//right ear
    fill(160,2,18);
    noStroke();
    quad(384,267,392,277,377,318,340,326);
//hair behind ellipse
    fill(114,7,18);
    noStroke();
    ellipse(269,172,112,112);
//side hair
    fill(114,7,18);
    noStroke();
    quad(200,194,225,177,218,244,197,274);
//hair
    fill(114,7,18);
    noStroke();
    ellipse(290,163,112,112);
    ellipse(330,224,125,174);
//neck shadow
    fill(89,4,20);
    noStroke();
    quad(224,363,360,314,350,385,237,433);
//head
    fill(191,16,25);
    noStroke();
    quad(256,322,350,331,326,362,274,368);
    quad(207,346,257,322,274,368,224,363);
    quad(207,241,257,323,207,346);
    quad(197,273,207,241,207,346,199,323);
    quad(225,273,297,283,256,322);
    quad(250,323,353,227,357,270,324,329);
    quad(305,359,357,270,370,292,350,331);
//ellipse for head
    fill(191,16,25);
    noStroke();
    ellipse(281,229,146,135);
//neck
    fill(191,16,25);
    noStroke();
    quad(254,390,334,371,374,408,220,477);
    quad(334,371,357,328,351,387);
    quad(219,477,374,407,345,459,269,483);
//hair in front of head
    fill(114,7,18);
    noStroke();
    ellipse(324,172,106,76);
//nose shadow
    fill(160,2,8);
    noStroke();
    quad(276,237,283,245,286,279,278,269);
    quad(261,241,263,238,254,266,246,276);
//sunglasses
    fill(114,7,18);
    noStroke();
    quad(206,225,264,233,250,268,203,260);
    quad(279,235,338,240,334,275,284,273);
    quad(240,230,361,242,360,249,239,237);
//eyebrows
    fill(114,7,18);
    noStroke();
    quad(221,213,236,210,231,219,214,223);
    quad(236,210,261,217,260,226,231,219);
    quad(286,220,326,218,327,227,289,229);
    quad(326,218,342,233,327,227);
//nostrils
    fill(160,2,8);
    noStroke();
    ellipse(254,280,8,4);
    ellipse(274,282,8,4);
//upper lip
    fill(160,2,8);
    noStroke();
    quad(243,309,257,306,257,313,233,316);
    quad(257,306,260,307,260,314,257,313);
    quad(260,307,262,306,263,313,260,314);
    quad(262,306,279,310,288,319,263,313);
//lower lip
    fill(135,3,22);
    noStroke();
    quad(233,316,257,313,256,321,245,320);
    quad(257,313,260,314,259,321,256,321);
    quad(260,314,263,313,262,321,259,321);
    quad(263,313,288,319,262,321);
}

Yoshi Torralva-Looking Outwards-01

ID Tags

Pentagram’s newest design partner, Giorgia Lupi, is at the forefront of establishing data-driven design as an essential pillar of design practice. Throughout her career, Lupi has used data as a foundation for nurturing empathy around important issues in politics, technology, and health, and above all for forging impactful human interactions.

TED conferences are renowned for showcasing inspiring speakers from all over the world. For those privileged enough to attend, an event offers numerous networking opportunities, yet becoming comfortable with networking is often a hurdle. What Giorgia Lupi accomplishes with her generative conference ID tags is lowering the barrier between surface-level introductions and meaningful conversation. The project was a collaboration between Giorgia Lupi and David Stark Design for Target’s interaction space at the 2017 TED Conference in Vancouver. A timespan for the development process is not provided, but I would guess it took a couple of months.

An app was developed to generate the ID tags instead of hand-drawing them.

Giorgia Lupi created a system of hand-drawn symbols, colors, and accents that identify someone’s personality and interests. Although these marks lend personality to a tag, drawing each one by hand would be far too slow to replicate at scale. Lupi therefore translated her hand-drawn style into an app that auto-generates the ID after the attendee fills out a survey. The app itself uses stock Google Material Design components to collect the person’s data, while the visual elements live on separate layers that merge at the end of the survey. It would have been nice for these visual elements to be fully computer-generated, as the system could then be replicated at other conferences. The data-driven tags are inspired by physical ID labels, with added fabric ribbons that supply information like name and location. What the tag does is employ visualized data to invite more interaction among participants. Giorgia Lupi sets a framework for IDs to carry richer, more diverse information through an auto-generative system. An intervention like this sets a standard for other conferences and companies to create ID systems that nurture insightful relationships right after the first hello.
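The survey-to-tag pipeline can be sketched as a simple lookup from answers into a fixed visual vocabulary. The categories, symbol names, and mappings below are invented for illustration; Lupi's actual system is far richer.

```javascript
// Hypothetical sketch of the survey-to-tag idea: answers select a symbol,
// a color, and a number of accent marks from a fixed hand-drawn vocabulary.
// All categories and mappings here are illustrative assumptions.
const SYMBOLS = { art: "spiral", science: "grid", business: "triangle" };
const COLORS = { introvert: "blue", extrovert: "orange" };

function generateTag(answers) {
    return {
        symbol: SYMBOLS[answers.interest] || "circle",   // fallback symbol
        color: COLORS[answers.personality] || "gray",    // fallback color
        accents: answers.topics.length                   // one accent per listed topic
    };
}

// Example survey response and the tag descriptor it produces.
const tag = generateTag({
    interest: "art",
    personality: "extrovert",
    topics: ["data", "design"]
});
```

A drawing layer (for example, a p5.js sketch) would then render the descriptor as overlapping hand-drawn-style marks on the badge.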