Taisei Manheim – Looking Outwards – 11

Sculpture created from Digital Simulation.

The project that I chose was Liquid Sound Collision by Eva Schindling.  Eva creates hardware and software in the interdisciplinary zone between art, science, technology, and design.  She received an MSc in Art and Technology from Chalmers University in Sweden and a degree in Interaction and Media Design from FH Joanneum in Austria.  Her work ranges from embodied evolutionary robotics and algorithmic pattern formation to the visualization of otherwise invisible sound waves, and it has been shown at the Japan Media Arts Festival, Hong Kong’s Museum of Art, Moscow’s Biennale of Contemporary Art, Burning Man, the Pixxelpoint festival in Slovenia, and Nuit Blanche Toronto.

The project is a study of the interactions that occur when recorded voices engage with computer-simulated fluids.  Each individual study takes two words that are considered opposites, such as chaos and order or body and mind, and sends them as vibration sources through opposite ends of a fluid simulation.  The sounds create waves that collide with each other, and the moment of collision is translated into a digital 3D model that is then 3D printed as a sculpture.  I find it very interesting how she uses such a scientific technology to create these fluid sculptures.  The chosen words are opposites, and there is something poetic about capturing the intersection of those opposites.

Jamie Park – LO – 11

Kate Hartman is a professor and lab director at OCAD (Ontario College of Art and Design) and a designer of computational wearables. In 2014, she published a book called Make: Wearable Electronics — Design, prototype, and wear your own interactive garments. 

Cover of Wearable Electronics

This book provides a thorough introduction to creating garments that incorporate electronics. Hartman kindly explains the science and technology behind each tool and concept, such as circuits or conductive thread, at the start of the book for beginners. By putting a circuit inside clothing, one can make the clothing produce sound, record information, or emit light. The possibilities of electronic clothing are endless. Although I am unsure if I will ever make my own electronic clothing, as I struggle even with making conventional clothing, I was entertained by this new concept. I would like to see how people can use this concept to help people in need.

Link to the book on Amazon

Timothy Liu — Looking Outwards — 11

SUGAR, a game experience designed by Heather Kelley.

For this week’s Looking Outwards, I examined the work of Heather Kelley, a game designer, digital artist, and media curator focused on sensory interactions and aesthetics in video games. Heather holds a Master of Arts from the University of Texas at Austin and has worked in a variety of entertainment-technology realms throughout her career. She’s made stops at Subotron, Quake, Unreal, and even the Entertainment Technology Center at Carnegie Mellon, bringing her expertise and work to the classroom as well as to gamer screens worldwide. As a lifelong video game fan, I was immediately drawn to her name when browsing the list of accomplished women in the field because of her ties to the video game industry and her incredible success within it. After reading over her bio, I was even more impressed by all she’s accomplished, especially her work on the UNFPA Electronic Game to End Gender Violence. It’s clear that Heather is an inspiration to game designers everywhere, as she’s managed to combine her creative talents with her desire to create societal change.

One of Heather’s works that I found most impressive was SUGAR, a “cross media collaborative event featuring an original game, scent-generating networked electronics, and couture fashion” (perfectplum.com). SUGAR is, simply put, an immersive video game experience that light-heartedly satirizes the imperial court and Hapsburg history from a romantic perspective. In the game, two players work together to coordinate the dancing of two horses in order to appease the medieval crowd. The art style and imagery are whimsical and geometric, adding a light-hearted flair to the game’s design and visuals. But what is most distinctive about SUGAR is the inclusion of an “Action Olofactorizer,” a device that combines hardware, software, and chemicals to produce scents and smells based on the player’s actions. For example, grass, leather, and even horse poop scents are produced to accompany gameplay! Heather’s ability to fully immerse the player and turn SUGAR, as well as the rest of her games, into interactive experiences is remarkable, and I really enjoyed learning about her work.

Some of the scents, sights, and visuals from SUGAR, Heather Kelley’s cross-media collaborative gaming experience.

Sources:

perfectplum.com

Hyejo Seo-Looking Outwards 11

Lauren Lee McCarthy’s How We Act Together installation

‘How We Act Together’ is a project by the artist Lauren Lee McCarthy. She is an LA-based artist who explores “social relationships in the midst of surveillance, automation, and algorithmic living.” After looking through her projects, I realized that most of them push viewers out of their comfort zones. For example, her project “SOMEONE” is a human version of Amazon Alexa: McCarthy recruited four households across America and installed cameras and microphones in them, and when viewers come to the installation at the museum, they get to play the role of the human Alexa. When I first read about this project, I was frankly a little creeped out by the idea that, in the future, people might get paid to be strangers’ “someone” behind the screen.

Just like her “SOMEONE” installation, “How We Act Together” challenges people to feel somewhat uncomfortable by “asking participants to repeat different gestures until exhausted, to a point where the gesture no longer feels natural and its meaning begins to shift.” Evidently, she is playing with the gestures and facial expressions we use in social situations. Using software, participants are asked to scream, and the computer registers the scream once their gestures conform to the metrics of its computer vision algorithms. As participants scream at the screen, as seen in the video above, the screen displays another person screaming back at them. Watching strangers scream back at you eventually triggers a natural response from the participant.

I chose to talk about Lauren’s projects because she pushes participants to a point where they feel uncomfortable by staging awkward and uncomfortable social situations. I thought it was really interesting that she exposes her participants to different social phenomena that trigger uncomfortable responses. Her projects make one think more deeply about the uncomfortable social situations we are constantly exposed to.

“Greet” – part of the How We Act Together project.

Sammie Kim— Looking Outwards— 10

David Cope is a computational artist whose avant-garde works, beginning in the 1970s, blurred the line between music and computer science. Originally a traditional musician and composer, Cope developed a great interest in the algorithmic capabilities of computers and began finding ways to apply programming techniques to music. The program he developed is now known as Experiments in Musical Intelligence, or EMI, an analysis program designed to “compose new examples of music in the style of the music in its database without replicating any of those pieces exactly” (Cope). The program analyzes the music input into the EMI database and creates new works in a similar style. With this data-driven approach, Cope created innovative new pieces that referenced the styles of legendary composers of the past, and he began observing patterns that had gone unnoticed before. These generative pieces drew plenty of public criticism, particularly around how creative a computer can really be. Yet he overcame these challenges and found ways to interweave his own creative style into the many albums he produced in later years.

Tech Closeup: Music Professor David Cope
Cope used actual musicians to play the classical music piece composed by EMI.

CJ Walsh – Project 09

sketch

// CJ Walsh 
// Section D
// cjwalsh@andrew.cmu.edu
// Project 09

var baseI;
// loading the image into my code
function preload() {
    var myImageURL = "https://i.imgur.com/ItvWF6f.jpg";
    baseI = loadImage(myImageURL);
}
function setup() {
    createCanvas(360, 480);
    background(0);
    baseI.loadPixels();
    frameRate(15); // speed at which the pixels appear
}

function draw() {
    // setting up conditions for pixels to appear
    var px = random(width);
    var py = random(height);
    var ix = constrain(floor(px), 0, width - 1);
    var iy = constrain(floor(py), 0, height - 1);
    var baseColor = baseI.get(ix, iy);
    // drawing ellipses and rectangles to represent the pixels of the image
    noStroke();
    fill(baseColor);
    ellipse(px, py, random(0, 15), random(0, 15));
    rect(px, py, random(0, 15), random(0, 15));
}

While this was a pretty simple and straightforward project, it was fun to experiment with the different ways to present the colors of the image through other shapes and forms. Ultimately I decided to go with both rectangles and ellipses because I liked the combination of curved and angular forms in the images the code created. It was interesting to watch the image build itself up and gradually come to resemble the base image. Overall, a fun project!
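One variation I could imagine, sketched below, is letting the brightness of each sampled pixel drive the size of the mark, so lighter areas of the source image read as larger dots. This is only an illustration, not part of my submitted code above; it reuses the same image URL, and the 3-to-18 pixel size range is an arbitrary choice.

// Sketch of a possible variation (not the project code above):
// the brightness of each sampled pixel sets the size of the mark.
var baseImg;

function preload() {
    baseImg = loadImage("https://i.imgur.com/ItvWF6f.jpg"); // same source image as above
}

function setup() {
    createCanvas(360, 480);
    background(0);
    baseImg.loadPixels();
    frameRate(15);
}

function draw() {
    // pick a random point and look up its color in the image
    var px = random(width);
    var py = random(height);
    var ix = constrain(floor(px), 0, width - 1);
    var iy = constrain(floor(py), 0, height - 1);
    var c = baseImg.get(ix, iy);
    // rough brightness as the average of the RGB channels (0-255)
    var b = (red(c) + green(c) + blue(c)) / 3;
    // the 3-18 pixel size range is an arbitrary choice for this sketch
    var d = map(b, 0, 255, 3, 18);
    noStroke();
    fill(c);
    ellipse(px, py, d, d);
}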

ilona altman – project 09

sketch

var theImage;

function preload() {
    //loading my image
    var myImageURL = "https://i.imgur.com/3SFfZCZ.jpg";
    theImage = loadImage(myImageURL);
}

function setup() {
    createCanvas(480, 480);
    background(250,250,250);
    theImage.loadPixels();

    // going through each pixel 
    for (var x = 0; x < width+10; x = x+3) {
        for (var y = 0; y < height+10; y = y+2) {
            var pixelColorXY = theImage.get(x, y);
            if (brightness(pixelColorXY) >= 0 & brightness(pixelColorXY) <= 20) {
                //light pink
                stroke(255,230,230,70);
                line(x, y, x-1,y-1);
            } else if (brightness(pixelColorXY) >= 20 & brightness(pixelColorXY) <= 50) {
                //orange
                stroke(250,170,160);
                line(x, y, x+3, y+3);
            } else if (brightness(pixelColorXY) >= 50 & brightness(pixelColorXY) <= 55) {
                //pink
                stroke(230,130,160);
                line(x, y, x+3, y+3);
            } else if (brightness(pixelColorXY) >= 55 & brightness(pixelColorXY) <= 60) {
                // pale blue-gray
                stroke(180,195,200);
                line(x, y, x-1,y-1);
            } else if (brightness(pixelColorXY) >= 65 & brightness(pixelColorXY) <= 70) {
                //yellow orange
                stroke(235,180, 100);
                line(x, y, x-2, y-2);
            } else if (brightness(pixelColorXY) >= 75 & brightness(pixelColorXY) <= 85) {
                //dusty rose
                stroke(196,130,130);
                line(x, y, x-1, y-1);
            } else if (brightness(pixelColorXY) >= 85 & brightness(pixelColorXY) <= 95) {
                //dark red
                stroke(220,80,80);
                line(x, y, x-1, y-1);
            } else if (brightness(pixelColorXY) >= 95 & brightness(pixelColorXY) <= 110){
                //pink
                stroke(220,69,90);
                line(x, y, x+2, y+2); 
            } else if(brightness(pixelColorXY) >= 110 & brightness(pixelColorXY) <= 130){
                //green
                stroke(80,130,60);
                line(x, y, x+1, y+1); 
            } else if (brightness(pixelColorXY) >= 130 & brightness(pixelColorXY) <= 160){
                //light orange
                stroke(220,170,130);
                line(x, y, x+1, y+1);
            } else if (brightness(pixelColorXY) >= 160 & brightness(pixelColorXY) <= 190){
                //white
                stroke(255,255, 255);
                line(x, y, x+3, y+3);
            } else if (brightness(pixelColorXY) >= 190 & brightness(pixelColorXY) <= 220){
                //olive brown
                stroke(150,130, 90);
                line(x, y, x+3, y+3);
            } else if (brightness(pixelColorXY) >= 220 & brightness(pixelColorXY) <= 255){
                //red
                stroke(200,60,60);
                line(x, y, x+3, y+3);

            }
         
         }   
    }

}
function draw() {

}

In this project, I was thinking about memories with my family and about my grandma. I took some photos this summer of her teaching the rest of my family how to make Lithuanian dumplings. In my psychology class we have been learning about how our memories become distorted over time, which is why I chose to make my image a bit distorted and not so clear. I also love gradient maps and wanted to emulate that look with this piece. Yellow, green, and red are also the colors of the Lithuanian flag.
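A note on the code structure: the long chain of brightness tests above could also be written as a small lookup table, which would make the palette easier to tweak. The sketch below only illustrates that idea; the thresholds, colors, and line offsets are placeholders rather than my actual palette, and it averages the RGB channels so the 0-255 thresholds are unambiguous.

// Illustrative table-driven alternative to the if/else chain above.
// The thresholds, colors, and offsets are placeholders, not the palette used in the project.
var theImage;

var palette = [
    { max: 50,  col: [250, 170, 160], off: 3 },  // darkest pixels
    { max: 110, col: [220,  69,  90], off: 2 },
    { max: 190, col: [255, 255, 255], off: 3 },
    { max: 255, col: [200,  60,  60], off: 1 }   // brightest pixels
];

function preload() {
    theImage = loadImage("https://i.imgur.com/3SFfZCZ.jpg"); // same image as above
}

function setup() {
    createCanvas(480, 480);
    background(250);
    theImage.loadPixels();
    for (var x = 0; x < width; x += 3) {
        for (var y = 0; y < height; y += 2) {
            var c = theImage.get(x, y);
            // rough brightness as the average of the RGB channels (0-255)
            var b = (red(c) + green(c) + blue(c)) / 3;
            // use the first palette entry whose threshold covers this brightness
            for (var i = 0; i < palette.length; i++) {
                if (b <= palette[i].max) {
                    stroke(palette[i].col);
                    line(x, y, x + palette[i].off, y + palette[i].off);
                    break;
                }
            }
        }
    }
}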

Jacky Tian’s Project 09

sketch

//Project-09
//Yinjie Tian
//yinjiet@andrew.cmu.edu
//Section D
var protrait;

function preload() {
    protrait = loadImage("https://i.imgur.com/tOfQhhw.jpg");
}

function setup() {
    createCanvas(400, 450);
    background(0);
    protrait.loadPixels();
    frameRate(100);
}

function draw() {
    // sample a random point in the source portrait
    var x = random(width);
    var y = random(height);
    var pix = protrait.get(x, y);

    stroke(pix);
    // stroke length is 300 / y constrained to [5, 50], so strokes are longest near the top
    var lengthmap = constrain(map(15, 0, y, 0, 20), 5, 50);
    // stroke weight is 0.3 * x capped at 3, so only strokes near the left edge are thinner
    var weightmap = constrain(map(3, 0, 10, 0, x), 0.1, 3);

    push();
    strokeWeight(weightmap);
    // draw a short jittered stroke in the sampled color
    line(x + random(-10, 10), y + random(-10, 10), x + lengthmap, y + lengthmap);
    pop();
}

I played with random stroke lengths and weights to recreate the portrait photo I chose.

Timothy Liu — Project 09 — Portrait

I am using 1 grace day on this project.

tcliu-openended-09

// Timothy Liu
// 15-104, Section C
// tcliu@andrew.cmu.edu
// OpenEnded-09

var Eileen; // variable name of the person in my project!

function preload() {
    Eileen = loadImage("https://i.imgur.com/V7NYz2M.jpg"); // preloading the image
}

function setup() {
    createCanvas(300, 450);
    background(255);
    Eileen.loadPixels();
    frameRate(250);
    Eileen.resize(300, 450);
}

function draw() {

    // variables to determine the location of each hamburger
    var burgerX = random(width);
    var burgerY = random(height);

    // variables that ensure the hamburger drawn remains on the canvas
    var burgerOnCanvasX = constrain(floor(burgerX), 0, width - 1);
    var burgerOnCanvasY = constrain(floor(burgerY), 0, height - 1);

    // variables to determine the proportions and shape of each burger
    var burgerW = random(8, 14);
    var burgerH = random(6, 10);
    var meatWStart = burgerW / 2 - burgerW * 0.125;
    var meatW = burgerW * 0.75;
    var meatH = random(2, 4);

    // variable that identifies the pixel color of the underlying images
    var pixelColor = Eileen.get(burgerOnCanvasX, burgerOnCanvasY);

    // applies the pixel color to each hamburger in the foreground
    noStroke();
    fill(pixelColor); 

    // drawing each individual hamburger
    arc(burgerX, burgerY, burgerW, burgerH, PI, TWO_PI);
    rect(burgerX - meatWStart, burgerY, meatW, meatH);
    arc(burgerX, burgerY + meatH, burgerW, burgerH, 0, PI);

}

For this project, I used a photo I took of my girlfriend Eileen at In-N-Out Burger in California. I really liked the photo because it has a lot of vibrant colors (red, green, yellow, etc.), which I thought would be fun to portray through abstract shapes and pixels. I decided to make each building block of the portrait shaped like a hamburger as a reference to the burgers in the foreground of the picture and the restaurant where the photo was taken. Each burger is proportionally built from a random bun height and width, which means every burger is a different, randomly generated size. Together, the burgers combine to form a portrait!

The early stages of hamburgers…
As more hamburgers are drawn, the color and shape begins to show…
A more complete version of my piece when more of the hamburgers have been drawn. The hamburgers aren’t super small, so it’s hard to depict the fine details, but the shape and figure are definitely there!
The original photograph!

YouieCho-LookingOutwards-11

“The Storm Laboratory” in London by Loop.pH, 2016

This project is a transparent, pneumatic toroidal form that reveals turbulent geophysical air dynamics with thousands of animated carbon particles. It presents global airflow at a small, human scale, so people can take in the whole scheme at once. I thought this project was interesting because the idea of physically displaying numerous particles seemed very novel, and I liked that it was a realistic portrayal of aerodynamics rather than a representation that has been overly manipulated to be aesthetic.

Rachel Wingfield is the female designer, researcher, and educator who founded Loop.pH in 2003. She specialized in responsive environments inspired by the study of living systems while she was at the Royal College of Art in London. She looks at both near- and far-future scenarios for biology and technology. As in this project, she creates remarkable public engagement initiatives, as well as other multidisciplinary, visionary environments and experiences.

loop.ph
