Sarita Chen – Looking Outwards – 1

Lost Ember is an open-world indie video game currently in development by Mooneye Studios, set for release in 2018. In this game, the player controls a wolf with the ability to control other animals and can freely walk around to explore the world. What I found admirable about this game is that the studio consists of only five men from Hamburg, Germany. The game has some impressive graphics for an indie title, and I also really like the concept. This project is inspiring to me, as I've always been a fan of small-studio projects like this. I used to be interested in game design, and this is exactly the kind of game I would've liked to work on if I had continued down that path. What's also admirable is that the developers have a clear-cut vision for the gameplay and plot, rather than getting lost in the aesthetics. They took inspiration from the popular series The Legend of Zelda and another indie game, Shelter. The upcoming Legend of Zelda game is also open world, and Lost Ember has a similar vibe and theme.

Mooneye Studios' blog provides video guides showing how they created the graphics for their game: http://www.mooneyestudios.com/blog/26-inside-lost-ember-1


Sarah Ransom – Looking Outward – 01

The Automatic Orchestra project is an audio installation, created in a collaboration between students of the University of the Arts Bremen and the Copenhagen Institute of Interaction Design, that explores a unique marriage of algorithmic composition and networked music. Before the installation was built, the project existed as several software prototypes, developed in Processing and Java, designed to simulate the network logic and behavior of what would become the final framework. The final product consists of 12 pods (MIDI devices) wired together in a circular network, each transmitting and responding to musical data in its own way. More specifically, all of the pods are synchronized and exchange encoded musical messages over a shared framework, but each pod interprets and alters the data it receives based on its individual algorithmic rule set. The result is a unique musical performance whose composition has a lot of room for variability.
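The ring-of-pods idea described above can be sketched in a few lines of plain JavaScript. This is only an illustration of the concept, not the project's actual code; the names `makePod` and `circulate` and the three example rules are hypothetical.

```javascript
// Hypothetical sketch of the Automatic Orchestra idea: pods in a ring
// each receive a MIDI-style note number (0-127), alter it according to
// their own rule, and pass the result to the next pod.
function makePod(rule) {
  return { rule: rule };
}

// Each rule maps an incoming note to an outgoing note.
const pods = [
  makePod(note => (note + 7) % 128),                  // transpose up a fifth
  makePod(note => 127 - note),                        // invert within the range
  makePod(note => note % 2 === 0 ? note : note + 1),  // nudge odd notes up
];

// Send a note once around the ring, collecting what each pod plays.
function circulate(startNote, pods) {
  const played = [];
  let note = startNote;
  for (const pod of pods) {
    note = pod.rule(note);
    played.push(note);
  }
  return played;
}

console.log(circulate(60, pods)); // logs [ 67, 60, 60 ]
```

Because every pod keeps its own rule while all share the same message format, the same seed note yields a different but related phrase at each station, which is roughly where the variability in the composition comes from.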

This project was fascinating to me as a musician because of its seemingly random nature. It seems rooted in the indeterminacy movement in music of the early 1950s, while also recreating the feel of a live ensemble performance in which musicians collaborate with each other in real time. The responses each MIDI device constructs, however, seem more solid and well crafted than you might find in most indeterminate music performed by live musicians. While some professional musicians might take issue with this kind of simulation and how it could affect the music industry in the future, I think it presents an exciting new possibility for music performance, where the decisions of the “performers” are more unique, unconventional, and less prone to human error.

More Examples of music produced by The Automatic Orchestra:



Grace-01-Face

*click to see it move 🙂


//Grace Cha
//Section C (Tuesdays 1:30)
//heajinc@andrew.cmu.edu
//Self Portrait 

function setup() {
    createCanvas(600,600);
    background(170,216,176);
}

function draw() {
    // Hair Top Part
    noStroke();
    fill(255,111,105);
    ellipse(315,245,284,264);
    // rightsidehair
    ellipse(374,275,192,264);
    // leftside
    ellipse(238,368,292,282);
    push();
    translate(162,300);
    rotate(-42);
    ellipse(0,0,186,82);
    pop();

    push();
    fill(170,216,176);
    translate(244,446);
    rotate(13);
    ellipse(0,0,224,125);
    pop();
    
    // head shape
    noStroke();
    fill(255,238,173);
    ellipse(315,274,234,234);
    
    // right ear
    ellipse(436,273,33,55);
    // left ear
    push();
    translate(193,295);
    rotate(40);
    ellipse(0,0,42,49);
    pop();

    stroke(255,111,105);
    strokeWeight(8);
    line(322,219,343,199);
    line(343,199,370,204);
    // right eyebrow
    line(214,237,230,226);
    line(230,226,244,230);
    // bangs
    noStroke();
    fill(255,111,105);
    push();
    translate(276,171);
    rotate(110);
    ellipse(0,0,202,118);
    pop();
    // eyes shape
    stroke(124,171,129);
    strokeWeight(3);
    fill(255,238,173);
    ellipse(262,273,54,68); // left eye
    ellipse(364,256,54,68); // right eye
    
    // pupil
    noStroke();
    fill(124,171,129);
    ellipse(244,271,17,40);
    ellipse(346,255,17,40);
    noStroke();
    fill(124,171,129,90);
    // Rotate the nose ellipse.
    push();
    translate(317,308);
    rotate(12);
    ellipse(0,0,22,16);
    pop();
    
    // mouth
    stroke(255,111,105);
    strokeWeight(5);
    line(274,346,301,350);
    line(344,342,374,327);
    fill(255,111,105);
    // Lips
    push();
    translate(322,347);
    rotate(62.7);
    ellipse(0,0,59,17);
    pop();
    
    // MOVE THE EYES
    if (mouseIsPressed) { // watch the eyes move
        // mask the original pupil
        noStroke(); // deletes the original pupil
        fill(255,238,173);
        ellipse(244,271,17,40);
        ellipse(346,255,17,40);
        
        // eye stroke
        stroke(124,171,129);
        strokeWeight(3);
        fill(255,238,173);
        ellipse(262,273,54,68); // left eye
        ellipse(364,256,54,68); // right eye
        noStroke();
        fill(124,171,129);
        ellipse(279,270,17,40);
        ellipse(382,252,17,40);
    } 
}


Process Work


I originally started out with some preliminary sketches and translated them into Illustrator, where I simplified the drawing into rotated ellipses and read off the coordinate points. I wanted to create a mischievous side-face smile using a circular theme with muted colors. I used functions such as translate(), push(), pop(), and rotate() (thanks to Professor Dannenberg for the correction).
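Since the portrait leans on translate()/push()/pop()/rotate(), it may help to see the math those calls perform. Below is a minimal plain-JavaScript sketch of that transform (not p5.js itself; the function name `transformedPoint` is hypothetical). One detail worth noting: p5.js's rotate() takes radians by default.

```javascript
// A point drawn at local (x, y) after translate(tx, ty) and rotate(a)
// lands at the rotated offset plus the translation. The angle a is in
// RADIANS, which is p5.js's default angle mode.
function transformedPoint(x, y, tx, ty, a) {
  return {
    x: tx + x * Math.cos(a) - y * Math.sin(a),
    y: ty + x * Math.sin(a) + y * Math.cos(a)
  };
}

// The local origin (0, 0) is unaffected by rotation, so after
// translate(244, 446) an ellipse(0, 0, w, h) stays centered there
// no matter the angle -- only its orientation changes.
const p = transformedPoint(0, 0, 244, 446, 13);
console.log(p); // logs { x: 244, y: 446 }
```

This is why the sketch translates to the ellipse's intended center first and then rotates: the ellipse tilts in place instead of swinging around the canvas origin.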

rnayyar self-portrait

click on me and I blink!

rhea portrait


//Rhea Nayyar
//Section C (Tuesdays 1:30)
//rnayyar
//Self-Portrait

function setup() {
    createCanvas(600, 600);

}

function draw(){
  background(193, 85, 174);
  fill(0);
  quad(160,155,350,170,395,470,145,470); //hair foundation
  fill(178, 138, 123); //neck color
  strokeWeight(0);
  rect(190,350,120,150); //neck
  ellipseMode(CORNERS);
  fill(202, 144, 104); //skin color
  strokeWeight(0);
  ellipse(150,150,350,450); //face
  fill(202, 144, 104);
  quad(180,400,324,400,265,465,235,465); //chin and jaw line
  ellipseMode(CORNERS);
  fill(217, 161, 113); //nose color
  ellipse(233,295,270,380); //nose bridge
  ellipse(230,350,273,374); //nostrils
  fill(172, 119, 115); //bottom lip color
  quad(223,415,285,415,265,430,245,430); //bottom lip
  fill(144, 83, 79); //top lip color
  quad(223,415,256,415,250,402,247,402); //top lips and cupid's bow
  quad(250,415,285,415,262,402,258,402);
  fill('black'); //regular eyes eyelashes
  ellipse(179,295,227,310); //eyelash
  ellipse(273,295,322,310); //eyelash
  fill(176, 136, 100); //eyelid color
  ellipse(235,258,270,275); //lid 1
  ellipse(245,263,267,271); //lid 2
  if (mouseIsPressed) { //blinking
    fill(0);
    ellipse(233,270,272,257); //third eye upper lashes
    fill(225);
    ellipse(235,258,270,275); //third eye
    fill(144, 106, 45);
    ellipse(245,260,260,272); //third eye iris

  } else {
    fill(0); //lower lash line 3rd eye
    ellipse(233,263,272,277);
    fill(176, 136, 100);
    ellipse(235,258,270,275); //third eye lid

  }

  fill(0); //hair color
  ellipse(179,295,227,310); //eyelash
  ellipse(273,295,322,310); //eyelash
  quad(190,190,344,235,328,175,195,145); //quads are hair and eyebrows
  quad(175,165,360,310,335,160,225,140);
  quad(175,156,165,350,130,250,160,160);
  quad(350,165,365,178,400,190,350,430);
  quad(350,450,400,415,420,540,365,520);
  quad(140,400,175,430,200,520,130,530);
  quad(115,350,150,250,170,450,105,420);
  quad(100,250,200,160,150,400,145,460);
  quad(180,287,222,283,218,272,185,281);
  quad(287,272,322,287,318,281,285,285);
  quad(285,250,360,275,420,350,400,400);
  fill(176, 136, 100); //eyelid color
  ellipse(180,290,225,308); //eyelids
  ellipse(275,290,320,308);


}

mdambruc-Project-01-Face

mdambruc-Portrait

//Mairead Dambruch
//Section C
//mdambruc@andrew.cmu.edu
//Project-01

function setup() {
    createCanvas(600, 600);
    background(108, 182, 185);

}

function draw() {
  fill(185, 142, 149);
  noStroke();
  rect(162, 400, 250, 250);//shirt

  fill(243, 232, 185);
  noStroke();
  ellipse(165, 500, 55, 200); //left arm

  fill(243, 232, 185);
  noStroke();
  ellipse(415, 500, 55, 200); //right arm

  fill(129, 68, 41);
  noStroke();
  rect(180, 205, 240, 200);//hair

  fill(243, 232, 185);
  noStroke();
  rect(263, 355, 60, 60);//neck

  fill(243, 232, 185);
  noStroke();
  ellipse(293, 270, 200, 200);//face

  fill(129, 68, 41);
  noStroke();
  arc(300, 250, 255, 255, PI, TWO_PI, OPEN);//bangs

  strokeWeight(2);
  stroke(0);
  fill(255);
  ellipse(342, 270, 25, 20); //right eye

  strokeWeight(1);
  fill(162, 142, 23);
  ellipse(342, 270, 15, 17);
  strokeWeight(10);
  point(342, 270);// right eye iris pupil

  strokeWeight(2);
  stroke(0);
  fill(255);
  ellipse(250, 270, 25, 20); //left eye
  strokeWeight(1);
  fill(162, 142, 23);
  ellipse(250, 270, 15, 17);
  strokeWeight(10);
  point(250, 270);// left eye iris pupil
  if (mouseX < (width / 2)) {
    strokeWeight(2);
    stroke(0);
    arc(250, 270, 25, 20, TWO_PI, PI);//eye twitch
  }


  fill(164, 32, 82);
  noStroke();
  arc(295, 305, 70, 70, TWO_PI, PI, OPEN);//mouth

  fill(185, 142, 149);
  triangle(293, 306, 300, 306, 297, 320);//uvula

  fill(185, 142, 149);
  ellipse(297, 320, 7, 7);
}

This project also involved a lot of trial and error for me but was overall a good way to learn about the text editor. The project resulted in a happy accident that led to an eye twitch instead of a wink, which is probably more accurate for my self portrait.

Project-01-Face

janet-portrait

//Janet Lee
//Section C
//janetl2@andrew.cmu.edu
//Self Portrait




function setup() {
  createCanvas (600,600);
  background ("beige");


}

function draw() {
  noStroke();
  //hair
  fill ("gold");
  ellipse (300,300,490,550);
  //head
  fill (255,214,160);
  ellipse (300,250,400);
  //left eye
  fill(0);
  ellipse (200,300,50);
  //right eye
  fill(0);
  ellipse (400,300,50);
  //bangs
 push();
  fill ("gold");
  rotate (75);
  ellipse (150,250,400,210);
  pop();

  //right eyebrow
  fill(0);
  stroke (0);
  strokeWeight (3);
  ellipse(410,250,60,10);

  //mouth
  noFill();
  stroke (0);
  strokeWeight(3);
  arc(300,380,30,10,0,PI);
}

SEckert-Project-01-face

My process involved a great deal of trial and error, as I'm sure is true for most of the class. I found arcs very tricky to use, and my attempts at curves were complete failures. Even in this post, elements that showed up in my browser (such as the eyes) have disappeared, and a handful of features (the mouth and eyebrows) have become yellow instead of black.
Still, it’s uncanny.

SEckert-face

//Stephen M. Eckert
//Section C
//SEckert@andrew.cmu.edu
//Project 01 Face


function setup() {
    createCanvas(400, 400);
}

function draw() {
	background('grey');
  noStroke();
    fill ('black');
      rect (20, (height-100), 360, 150, 50); //body
    fill (255, 234, 196);
      arc (((width/2)-90), ((height/2)-50), 70, 70, 0, PI+PI/2, OPEN); //ears
      arc (((width/2)+90), ((height/2)-50), 70, 70, 80, PI, OPEN);
    fill (255, 211, 164);
      arc (((width/2)-85), ((height/2)-50), 65, 65, 0, PI+PI/2, OPEN); //ears shade
      arc (((width/2)+85), ((height/2)-50), 65, 65, 80, PI, OPEN);
    fill (255, 234, 196);
      ellipse ((width/2), ((height/2)-30), 220, 270); //chin
    fill ('black');
      ellipse (((width/2)-50), ((height/2)-50), 20, 20); //eyes
      ellipse (((width/2)+50), ((height/2)-50), 20, 20);
    fill (255, 211, 164);
      triangle (((width/2)-15), ((height/2)+10), (width/2), ((height/2)-30), ((width/2)+15), ((height/2)+10)); //nose
      ellipse ((width/2), ((height/2)+80), 40, 20);
    fill (255, 214, 68);
      rect (((width/2)-30), ((height/2)+15), 60, 20, 20, 20, 0, 0); //mustache
    stroke(4);
      ellipseMode (CENTER);  // poofy hair
      ellipse ((width/2+25), ((height/2)-145), 60, 60);
      ellipse ((width/2+30), ((height/2)-160), 80, 80);
      ellipse (((width/2)-20), ((height/2)-150), 70, 70);
      ellipse ((width/2), ((height/2)-170), 60, 60);
      ellipse ((width/2+20), ((height/2)-130), 40, 40);
      ellipse ((width/2-30), ((height/2)-165), 40, 40);
    fill ('black');
      rect (((width/2)-10), ((height/2)+40), 20, 5, 20); //mouth
      rect (((width/2)-65), ((height/2)-70), 30, 5); //eyebrows
      rect (((width/2)+35), ((height/2)-85), 30, 5);



      noLoop();
}

LookingOutwards – 01

Love Has No Labels

In homage to the 1965 Selma march, the “Love Has No Labels” campaign by the Ad Council was based on a public service advertisement filmed at a live event in Santa Monica on Valentine's Day 2015. During the filming, a large X-ray-style display created by Mindride LLC, a studio based in LA and NYC, showed skeletons kissing, hugging, or dancing behind the screen, after which the people stepped out and revealed their faces. The visual effect was not generated by an actual X-ray machine, however; it was created with lifelike models driven by Xsens sensors and Maya. The Xsens wireless sensors, hidden under clothing during the event, captured the naturalistic motion seen on screen. As the central technology in the production, Xsens sensors offer promising features such as live delivery, easy setup, broad compatibility with commonly used software, and affordability.

We are driving through this live motion capture, but we can really control any type of avatar; I feel like the digitization of the human is a very exciting space to be in.

– Yehuda Duenyas, Mindride LLC

In the meantime, the Ad Council is expanding the social impact of the “Love Has No Labels” campaign. This year's campaign, “#WeAreAmerica”, continues to circulate through social media feeds, battling implicit bias by promoting videos and images of inclusion and acceptance.

Learn more about the creating process

Learn more about Mindride LLC 

Learn more about #WeAreAmerica

Learn more about other campaigns promoted by the Ad Council here

LydiaJin-LookingOutwards-1

I really liked this idea of saving retail space by placing a huge screen inside stores to give shoppers a high-tech, visual experience. The idea isn't new and has been around for a few years. However, CenterStage is unique in that it blurs the line between broadcasting and interaction without undermining the potential of either, thanks to a collaboration with Creative Director Lutz Vogel and visual effects powerhouse Method.
Despite CenterStage's wonderful concept, one concern I have is the instability of the technology. If the machine breaks during store hours, it could seriously damage the company's reputation and cut into revenue for as long as the system is down.

Hannah K-Looking Outwards-01

The Fallen Star installation is the final prototype built by the Architectural Association (AA) DLab Visiting School. The installation responds to user feedback, and it sits at the intersection of biomimetics, perception, and interaction. Student teams built a 3D model and used video mapping, as well as algorithms and digital fabrication.

Time lapse of the Fallen Star (Photo: Architectural Association)

This installation caught my eye because I feel that while technology enhances it, it is already quite impressive on its own. With the use of technology, especially because it is interactive, the piece becomes so much more than simply the sum of its parts.

This project effectively challenges one's sense of perception of space, and by incorporating music, the overall effect is much more powerful because it is both visual and auditory. It shows the potential impact of pieces that create an experience for their viewers.

In 2012, the year this installation was created, the DLab chose green as the color around which to carry out its experiments. The green color served as the ultimate inspiration for this project, for which the creators had to observe the natural world and translate their own abstractions and interpretations into designs.