62-362 Fall 2019 | Activating the Body: Physical Computing and Technology in Performance
https://courses.ideate.cmu.edu/62-362/f2019

The Human Codifier
https://courses.ideate.cmu.edu/62-362/f2019/the-human-codifier/ | Tue, 17 Dec 2019

Processing, Kinect for Windows v2
Wood Frame, Mylar
Digital Capture & Projection, Sculpture
Shown at Every Possible Utterance, MuseumLab, December 2019

2019
Individual Project
Ron Chew

A large silver prism stands in the corner, almost as tall as a human. Light seems to emanate from it as it reflects your presence, projecting something onto the facing wall. The projection, along with a set of digital footprints, prompts you to stand in front of it. The display reacts to you and begins to scan you, attempting to digitize and codify your body. It seems to be applying something to you: a story! The words appear within your silhouette; it is hard to make out the entire story, so you move around trying to read it. You leave wondering whether the story was chosen for you to embody.

Project Statement

Humans are the most complex machines, and every effort has been made to understand, uncomplicate and replicate our brains. Turning our behavior and personality into something that other machines can begin to mimic has been the ultimate goal.

What if we were truly successful? What if we had a Human Codifier that understood every facet of us: our true cornerstones, motivations and beliefs? Could we use that to configure a Human Operating System that would perfectly recreate us within its properties and subroutines?

FutureTech Corporation, in collaboration with the Library of Babel, invites you to experience the personalities of Every Possible Utterance, as captured by the Human Codifier. In this experimental procedure, the codified experiences, thoughts and occasional ramblings of other humans will be grafted onto your digital shadow. Through this fully non-invasive installation of other humans onto yourself, you may experience euphoria, epiphanies and emotional changes.

This installation seeks to understand the abstraction of people and their personalities through stories and textual representations, by projecting these stories onto a captured silhouette of each guest. Is this how we might embody other people's stories? How much of their original meaning do we apply to ourselves?

Concept

The prompt for this piece was ARROWS, which captured the complexity of computers: the direction and indirection of meaning, from simple-looking code to complex code, and many disparate pieces coming together to create a whole system. One thing unique about this prompt was that our pieces would be shown live at a public show on the night of December 13, 2019.

I initially had two ideas, one along each theme. For the first, the "emergent complexity" of a system of pieces, I planned to work alongside another artist in the Exploded Ensemble (experimental music) class who wanted to translate the positions of yoga balls in a large plaza into music. I would capture the positions of the balls and generate units in a system that would change over the course of the night.

The other idea, about abstraction and complexity, revolved around abstracting humans into the stories and words we use to describe ourselves. I would capture the silhouette of a guest along with a story they told; the story would then fill their silhouette, making them "eat their words". There would also be some comparison between their responses and those of others given the same prompt, e.g. showing who else responded "animals" when asked "What is your biggest fear?"

Due to the tight timeline it was not feasible to work on a collaborative piece for the show, so I decided to pursue the latter idea of presenting an abstraction of a human being as words, simplifying it so that there was no need to capture words live at the show, which would likely have had reliability issues.

Building: Software

I decided to learn a new platform while building this experience: Processing. While it is simpler than many other software platforms, it was still new to me, and it has a good repository of Kinect examples. I worked my way through different representations of the depth information, starting with thresholding the depth to a specific range to isolate the human, then converting the image to an ASCII representation, then filling the space with specific words.

One of the issues was the lack of skeletal tracking, which would have been useful for introducing more interesting interactions and for better isolating the body shape. Unfortunately, the library that provides it, SimpleOpenNI, required a fully privileged Mac to build, which I did not have at the time. I decided to use the openkinect-processing library instead, which only provides depth information.
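At its core, the body isolation is just a depth threshold: any pixel whose raw depth falls inside a chosen window is treated as part of the guest, and everything else becomes background. A stripped-down sketch of only that step, using the same openkinect-processing calls and threshold values as the full code further below, looks roughly like this:

import org.openkinect.processing.*;

Kinect2 kinect2;
PImage silhouette;

int minDepth = 100;   // millimeters
int maxDepth = 1800;  // 1.8 m; anything farther is treated as background

void setup() {
  size(512, 424);     // Kinect v2 depth resolution
  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initDevice();
  silhouette = new PImage(kinect2.depthWidth, kinect2.depthHeight);
}

void draw() {
  background(0);
  int[] rawDepth = kinect2.getRawDepth();
  for (int i = 0; i < rawDepth.length; i++) {
    // keep only pixels inside the depth window: the guest's silhouette
    if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
      silhouette.pixels[i] = color(255);
    } else {
      silhouette.pixels[i] = color(0);
    }
  }
  silhouette.updatePixels();
  image(silhouette, 0, 0);
}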

My first iteration of the display was simplistic, displaying only single words according to a certain theme. It worked well enough, but the meaning of the piece was still not coming through in the relatively random words. For the second iteration, I formed actual stories within the body, but the small, constantly moving text was difficult to read.

For the final version of the software, a collection of texts from Cloud Atlas and Zima Blue, along with quotes and statements from students in the IDeATe classes on show, was displayed in a static format in front of the user. The display automatically rotated between different modes and texts, slowly abstracting the user from an image of themselves into the actual story. Additional text prompted users to stand in front of the display.

A number of online resources and Kinect examples were helpful in my development.

Building: Sculpture

In order to hide the raised projector creating the display, I came up with a simple sculpture that would conceal it thematically. My instructors and classmates made a great suggestion to use Mylar as a reflective surface, creating a sci-fi-like obelisk with a slightly imposing presence.

With the guidance of my instructor Zach, I built a wooden frame for the prism onto which the Mylar was stapled and mounted. A wooden mount was also created for the projector so that it could be held vertically without obstructing its heat exhaust vents, which had caused an overheating issue during one of the critiques.

Overall, this construction went well, effectively hiding the short-throw projector and increasing the focus and immersiveness of the display. With the installation now having a physical presence, I decided to call it "The Human Codifier": a machine that codified humans and applied stories to them.

Performance Day

In order to provide some guidance to guests during the display, I decided to add a performative aspect to the installation where I played a "Test Associate" from FutureTech Corporation, who was conducting testing in collaboration with the Library of Babel (the thematic location of the show). The character was complete with a lab coat, security badge and clipboard. As the Test Associate, I provided instructions to guests who were lost and conducted one-question interviews at the end of their interaction.

I felt that the character was useful in completing the piece with a connection to the real world, rather than leaving it a fully self-guided experience. It was also great to be able to talk to guests and find out how they felt about the installation.

The position I took up in the MuseumLab for the show was a small nook in the gallery space on the second floor, which I found after scouting the space. It was cozy and private enough for a single person to experience the piece. The only small downside was that my spot ended up a little poorly lit and perhaps a little hard to reach within the space.

Reflection

Although the actual interaction of this piece was simple, I felt that the overall scale of the show and achieving a meaningful experience were the most challenging aspects of this project. The additional time and iterations did help me hone in on the exact experience I wanted; a good handful of guests tried to read the stories and dissect their meaning. However, the content of the stories could have been refined further. As for the overall show, I felt that it was a great success, and it was the first time I had presented to a public audience of 100+ people.

Last of all, I was most thankful to be able to learn along this journey with my classmates Alex, Scarlett, Ilona, Padra, Hugh and Eben, and I appreciate all the guidance provided by my instructors Heidi and Zach. Having a close group of friends to work with, bounce ideas off and receive critique from was immensely helpful; I could not have done it without them.

Other mentions: moving day, after a day of setting up, tearing down 🙁

 

Additional thanks to Christina Brown for photography, Jacquelyn Johnson for videography, and my friends Brandon and Zoe for some photographs.

Code

Here is the Processing code that I used, alongside the openkinect-processing library mentioned earlier.

import org.openkinect.processing.*;

Kinect2 kinect2;

// Depth image
PImage targetImg, inputImg, resizeImg;
int[] rawDepth;

// Which pixels do we care about?
int minDepth =  100;
int maxDepth =  1800; //1.8m

// What is the kinect's angle
float angle;

// Density characters for ascii art
String letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%^&*()-=_+{}[];':,./<>?`~";
String asciiWeights=".`-_':,;^=+/\"|)\\<>)iv%xclrs{*}I?!][1taeo7zjLunT#JCwfy325Fp6mqSghVd4EgXPGZbYkOA&8U$@KHDBWNMR0Q";

String[] stories = {
  "To be is to be perceived, and so to know thyself is only possible through the eyes of the other. The nature of our immortal lives is in the consequences of our words and deeds, that go on and are pushing themselves throughout all time. Our lives are not our own. From womb to tomb, we are bound to others, past and present, and by each crime and every kindness, we birth our future."
  , "I understand now that boundaries between noise and sound are conventions. All boundaries are conventions, waiting to be transcended. One may transcend any convention if only one can first conceive of doing so. My life extends far beyond the limitations of me."
  , "I'll die? I'm going to die anyway, so what difference does it make? Sometimes, it is difficult event for me to understand what I've become. And harder still to remember what I once was. Life is precious. Infinitely so. Perhaps it takes a machine intelligence to appreciate that."
  , "I think I know what I'm passionate about. But is that really true? I don't want to be too much of one thing, in computer science or art, but yet I still want to be an expert at what I can be. They say that I should not be closing too many doors, but it takes so much energy to keep them open too."
  , "I would want the machine to capture the connection I share with others: my family, my friends, my mentors. Ideally, it would describe my flaws as well: my insecurities, my difficulty expressing emotions, my past mistakes. Perhaps if these aspects of me were permanently inscribed in the Library of Babel, a reader would come to understand what I value above all else and what imperfections I work to accept."
  , "Loneliness, the hidden greed I possess, and maybe what happens behind the many masks I have. to the point it might seem like I don’t know who I am."
  , "To know 'mono no aware' is to discern the power and essence, not just of the moon and the cherry blossoms, but of every single thing existing in this world, and to be stirred by each of them."
};

String[] titles = {
  "The Orison"
  , "Boundaries are Conventions"
  , "The Final Work"
  , "Balancing my Dreams"
  , "Inscribing Myself to the Machine"
  , "When Knowing = Not Knowing"
  , "The Book of Life"
};

String[] authors = {
  "Somni-451"
  , "Robert Frobisher"
  , "Zima Blue"
  , "Ron Chew"
  , "Alex Lin"
  , "Lightslayer"
  , "Motoori Norinaga"
};

String[] modeText = {
   "Scanning for:"
  ,"Digitizing:"
  ,"Codifying:"
  ,"Applying Story:"
  ,"Application Complete."
};

String[] targetText = {
   "HUMANS..."
  ,"HUMAN_BODY..."
  ,"DIGITAL_BODY..."
  ,"STORY_HERE"
  ,"STORY_HERE"
};

int drawMode = 0;

// 0 - full color with depth filter
// 1 - pixelize to squares
// 2 - decolorize to single color
// 3 - small text with depth
// 4 - story text without depth

color black = color(0, 0, 0);
color white = color(255, 255, 255);
color red = color(255, 0, 0);
color magenta = color(255, 0, 255);
color cyan = color(0, 255, 255);

color BACKGROUND_COLOR = black;
color FOREGROUND_COLOR = white;

int modeTimer = 0;
int indexTimer = 0;
int index = 0;
String currentText;

Boolean movingText = false;

int largerWidth = 1300;
int yPos = -200;

PFont font;

void setup() {
  size(3000, 1000);

  kinect2 = new Kinect2(this);
  kinect2.initVideo();
  kinect2.initDepth();
  kinect2.initIR();
  kinect2.initRegistered();
  kinect2.initDevice();

  // Blank image
  targetImg = new PImage(kinect2.depthWidth, kinect2.depthHeight);
  inputImg = new PImage(kinect2.depthWidth, kinect2.depthHeight);

  currentText = stories[0];

  drawMode = 0;
}

void draw() {
  background(BACKGROUND_COLOR);

  // text switcher
  if (millis() - indexTimer >= 60000) {
    index++;
    if (index > 6) {
      index = 0;
    }

    currentText = stories[index]; 
    
    indexTimer = millis();
  }

  // mode switcher
  if (drawMode <= 3 && millis() - modeTimer >= 5000) {
    drawMode++;

    modeTimer = millis();
  } 

  if (drawMode <= 4 && millis() - modeTimer >= 40000) {
    drawMode = 0;

    modeTimer = millis();
  }
  
  if (drawMode >= 3) {
     targetText[3] = titles[index] + " : " + authors[index] + "...";
     targetText[4] = titles[index] + " : " + authors[index];
  }
  

  // START DRAWING!
  if (drawMode < 3) {
    targetImg = kinect2.getRegisteredImage();
    rawDepth = kinect2.getRawDepth();
    for (int i=0; i < rawDepth.length; i++) {
      if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
        inputImg.pixels[i] = targetImg.pixels[i];
      } else {
        inputImg.pixels[i] = BACKGROUND_COLOR;
      }
    }
  } else {
    targetImg = kinect2.getDepthImage();
    rawDepth = kinect2.getRawDepth();
    for (int i=0; i < rawDepth.length; i++) {
      if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
        inputImg.pixels[i] = FOREGROUND_COLOR;
      } else {
        inputImg.pixels[i] = BACKGROUND_COLOR;
      }
    }
  }

  inputImg.updatePixels();
  resizeImg = inputImg.get();
  resizeImg.resize(0, largerWidth);
  //image(resizeImg, kinect2.depthWidth, yPos);


  switch(drawMode) {
  case 0:
    image(resizeImg, kinect2.depthWidth, yPos);
    break;

  case 1:
    pixelateImage(resizeImg, 10, kinect2.depthWidth, yPos);
    break;

  case 2:
    ASCII_art(resizeImg, 20, 15, kinect2.depthWidth, yPos);
    break;

  case 3:
    rando_art(resizeImg, 25, 20, kinect2.depthWidth, yPos);
    break;

  case 4:
    Story_art(resizeImg, currentText, 30, 40, 17, FOREGROUND_COLOR, BACKGROUND_COLOR, kinect2.depthWidth, yPos);
    break;
  }

  // DEBUG TEXT
  fill(FOREGROUND_COLOR);
  //text("THRESHOLD: [" + minDepth + ", " + maxDepth + "]", 10, 36);

  // PRINT TEXT
  fill(FOREGROUND_COLOR);
  pushMatrix();
  translate(450, 700);
  rotate(-HALF_PI);
  textAlign(LEFT);
  
  font = loadFont("HelveticaNeue-Bold-48.vlw");
  textFont (font);
  text(modeText[drawMode], 0, 0);
  
  font = loadFont("HelveticaNeue-Bold-24.vlw");
  textFont (font);
  text(targetText[drawMode], 0, 50);
  
  popMatrix();
}

void Story_art(PImage input, String output, int TextSize, int xSpace, int ySpace, color target, color bg, int startX, int startY) {
  textAlign(CENTER, CENTER);
  strokeWeight(50);
  textSize(TextSize);

  int textIndex = 0;

  //transformation
  for (int x=0; x<input.width; x+=(xSpace)) {
    for (int y=input.height-1; y>0; y-=(ySpace)) {
      // get a grayscale color to determine color intensity
      color C = input.get(x, y);
      color greyscaleColor = int(0.299 * red(C) + 0.587*green(C) + 0.114*blue(C));

      // map grayscale color to intensity of target color
      color quant = int(map(greyscaleColor, 0, 255, bg, target));
      //fill(quant);
      fill(input.get(x, y));

      // draw, but rotated
      pushMatrix();
      translate(x + startX, y + startY);
      rotate(-HALF_PI);
      text(output.charAt(textIndex), 0, 0);
      popMatrix();

      //text(output.charAt(textIndex),x + startX, y + startY);

      textIndex++;
      if (textIndex == output.length()) {
        textIndex = 0;
      }
    }
  }
}

void ASCII_art(PImage input, int textSize, int spacing, int startX, int startY) {
  textAlign(CENTER, CENTER);
  strokeWeight(50);
  textSize(textSize);

  //transformation from grayscale to ASCII art
  for (int y=0; y<input.height; y+=(spacing)) {
    for (int x=0; x<input.width; x+=(spacing)) {
      //remap the grayscale color to printable character
      color C = input.get(x, y);
      color greyscaleColor = int(0.299 * red(C) + 0.587*green(C) + 0.114*blue(C));
      int quant=int(map(greyscaleColor, 0, 255, 0, asciiWeights.length()-1));
      fill(input.get(x, y));


      // draw, but rotated
      pushMatrix();
      translate(x + startX, y + startY);
      rotate(-HALF_PI);
      text(asciiWeights.charAt(quant), 0, 0);
      popMatrix();
    }
  }
}

void rando_art(PImage input, int textSize, int spacing, int startX, int startY) {
  textAlign(CENTER, CENTER);
  strokeWeight(50);
  textSize(textSize);

  //transformation from grayscale to ASCII art
  for (int y=0; y<input.height; y+=(spacing)) {
    for (int x=0; x<input.width; x+=(spacing)) {
      //just get a random character
      fill(input.get(x, y));

      // draw, but rotated
      pushMatrix();
      translate(x + startX, y + startY);
      rotate(-HALF_PI);
      text(letters.charAt(int(random(0, 55))), 0, 0);
      popMatrix();
    }
  }
}

// Adjust the angle and the depth threshold min and max
void keyPressed() {
  if (key == 'a') {
    minDepth = constrain(minDepth+100, 0, maxDepth);
  } else if (key == 's') {
    minDepth = constrain(minDepth-100, 0, maxDepth);
  } else if (key == 'z') {
    maxDepth = constrain(maxDepth+100, minDepth, 1165952918);
  } else if (key == 'x') {
    maxDepth = constrain(maxDepth-100, minDepth, 1165952918);
  } else if (key == 'm') {
    movingText = !movingText;
  } else if (key >= '0' && key <= '9') {
    drawMode = Character.getNumericValue(key);
  }
}

 

recursion
https://courses.ideate.cmu.edu/62-362/f2019/recursion/ | Mon, 16 Dec 2019

ritual items (photo by Sally Maxson)

as seen from above, performing on table (photo by Christina Brown)

at the beginning of the performance (photo by Sally Maxson)

tracing the outside of a trace (the body map growing) (photo by Christina Brown)

drawing and being tapped on the shoulder by stick, by breath (photo by Christina Brown)

reflexive drawing continues (photo by Christina Brown)

video documentation above (thank you, Jacquelyn Johnson for the footage)


In this piece, I wear a device on my body that measures the amount my chest expands as I breathe. This device then triggers the motion of a motor on my back, which moves a twig forward and back. This motion gently taps me, reminding me of my body continuously.

With this awareness of the body, I map myself by tracing my outlines on a transparent paper that crinkles as I walk on it. I make a unified portrait of myself, partly by drawing the outline of my left hand to my elbow to create a circle, rotating my body as I do so.

After I finish this first iteration of my personal map, I then create another, taking the roll of transparent paper and folding it over on itself for a new canvas.

On this new map I can see the ghost of my last drawing. In my next mapping, I trace the outlines of the first map. The memory of its previous state is embedded within its very structure. The drawing process moves forward by referencing its previous state.


When thinking about the theme of arrows, I was struck by the poetry of how abstraction enables computers to perform their every task. As Zach explained even more beautifully in the class lecture he gave, there are many, many levels of abstraction that convert the instruction "move this motor to the right 90 degrees" into the binary language computers can understand, much like a Russian doll set is a series of containers containing containers.

I began to compile relevant research on the topic. I was especially interested in how abstraction is a concept relevant to many disciplines, and in how pervasive this conversion from one category to another for the movement of information is. I was interested in the way we abstract a self through the way we narrate our life stories, and in how a subjective self might begin the moment we recognize our bodies.

Somewhere in my research I came upon the idea of recursion in computer programming. I found it interesting that there was a kind of emergent complexity that existed within a function calling itself; that complex growth structures like fractals could be visualized, or complex problems solved, by the simple process of self-referencing "a smaller instance of that problem", as the Wikipedia page puts it.
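As a small illustration of the programming idea (a sketch written for this post only, not code used in the performance), a Processing function can draw a fractal-like pattern simply by calling itself on smaller instances of the same problem:

// Recursion sketch: each circle draws two smaller copies of itself to its left
// and right until the circles become too small, the same way a recursive
// function solves a problem by calling itself on a smaller instance of it.
void setup() {
  size(600, 600);
  noFill();
  stroke(0);
  background(255);
  drawCircle(width / 2, height / 2, 200);
}

void drawCircle(float x, float y, float radius) {
  ellipse(x, y, radius * 2, radius * 2);
  if (radius > 4) {   // base case: stop recursing when the circles get tiny
    drawCircle(x - radius, y, radius / 2);
    drawCircle(x + radius, y, radius / 2);
  }
}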

Recursion felt very connected to my interest in the abstraction of identity formation. I knew I wanted to connect them somehow within a performance.

It was difficult at first to figure out what a recursive drawing might look like (my initial idea was to make rubbings of fragments of my body, but I realized this action, while it may relate to the construction of identity, does not relate to recursion; I am thankful to my dear friend Jacob Tate for helping me realize this). I struggled, too, to know how I wanted to trace myself: what is the conceptual difference between trying to draw my entire outline and trying to draw several parts of myself repeatedly in order to create a unified image?

It was also interesting to think about how each performative action would unfold, how my costume could influence the performance, and what significance color could hold in the piece. I am so thankful for Heidi's help in this; she gave me so many references for inspiration and talked me through different ways of performing. I am so grateful for such incredible and kind teachers.


project statement


The physical making of the mechanism to measure my breath was a bit of a challenge and required several different approaches! I am so thankful for Zach in this process. He had so much patience and helped me work through the physical mechanism for tracking the expansion of my chest, as well as helping me convert my project from a normal-sized Arduino to a mini Arduino, which would enable me to perform, move and draw without being too fearful of breaking any wire connections.

In the process of making a mechanism to track my breath, we experimented with materials that had too much friction (a belt, a scarf) before settling on the red nylon rope.

Drawing out the circuit diagrams was so helpful in the soldering process, and made things move much more smoothly.

experimenting with using a sensor that measures the bending of a disk

finally getting the mechanism to work on a normal Arduino

figuring out the wiring so I could transfer from the normal big Arduino to the tiny Arduino which would enable me to perform

one of the original mechanisms made to measure chest expansion; this scarf, however, had too much friction

consolidating the wires to make chest piece better suited for performance

schematic for Arduino 

// ilona altman
// recursion project
// this code makes the servo motor move with the movement of the potentiometer,
// which moves with the expansion of my chest as i breathe
// this code was made with lots of help from Robert Zacharias (thank you so much)
// also based off of - sweep code by BARRAGAN <http://barraganstudio.com>, example code is in the public domain.

#include <Servo.h>

Servo myservo;  // create servo object to control a servo
// twelve servo objects can be created on most boards

int posServo = 0;  // variable to store the servo position
int max = 0;
int min = 1023;


void setup() {
  myservo.attach(9); // attaches the servo on pin 9 to the servo object
  pinMode(A1, INPUT); // declare the analog input (the sensor is read on A1 below)
  
}


// move the motor
void loop() {
  int readVal = analogRead(A1);
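  // track the smallest and largest readings seen so far, so the map() below auto-calibrates to the breathing range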
  if (readVal > max){
    max = readVal;
  } 
  if (readVal < min) {
    min = readVal;
  }
  
  int posWithBreath = map(analogRead(A1), min, max, 0, 180); // map the sensor reading onto the servo's 0-180 degree range
  myservo.write(posWithBreath); // tell servo to go to position in variable 'pos'
  
}

All in all, this project was so rewarding to work on, especially with the culmination of the performance. I love and admire everyone in our class so much.  I feel proud of the final performance, and I believe that this project was a synthesis of much of what I learned from this class.

I have learned from Heidi and Zach, and from working through previous projects, how valuable it is to create prototypes and to really understand what interaction you are creating with your project. It is more important that this interaction be thoughtful and fully working than that your project be perfectly beautiful. This was difficult for me to learn because I have a tendency to want to think about everything before I start making it. This stems from a fear of failure and a desire for a "perfect", conceptually interesting piece; I've realized, however, that by postponing the physical making, I do not give myself the opportunity to improve the piece as I am making it. I do not give myself the time for the physical piece, and the interaction it creates, to marinate so that I can make adjustments.

Even though there is always room to improve on this, and this project was not perfect in this process, I know I have grown so much since project 1. I feel really good about the steps taken to realize this project, and I feel good about how it felt to perform. I am so excited for the future and so thankful for everyone in our beautiful class for making this growth possible.

INSTALLATION FOR MIRRORED BALLOON AND SEVERAL RECENT MIRRORED BALLOONS
https://courses.ideate.cmu.edu/62-362/f2019/installation-for-mirrored-balloon-and-several-recent-mirrored-balloons/ | Mon, 16 Dec 2019

An audience member interacts with a mirrored balloon.

Judith Butler’s concept of performativity (standing most obviously on the shoulders of Maurice Merleau-Ponty’s phenomenology) proposes that we define the self, and the various types of identity that make up the self, as a process rather than something complete or stable.

Butler’s most famous example is that of gender. Gender (e.g. "womanhood") means very different things in different cultures, and indeed, while babies are born with somewhat differing endocrine systems, all arrive on earth unaware that tall shoes and pink things correlate to an authentic expression of certain genitalia. And yet, by early adulthood, for some the use of tall pink shoes feels like an authentic expression of the self; and because it feels that way, it (by definition) is. How does this happen?

According to Butler, the answer is that gender is essentially a process in which you intentionally practice (or perform) certain ways of being that you perceive as associated with you. And over time, both personally and as a culture, this performance becomes natural and authentic. As such, what gender means is different for every person, and the agreed-upon end result (tall shoes = Woman) is not inevitable. This argument is supported by recent neuroscience: whatever you practice, you learn, and learning is a literal restructuring of connections in the brain to prioritize certain outcomes. So whatever you practice, you become.

Which brings us to this piece, in which a balloon is demonstrating the process via which one can become a balloon.

Project Description

This is an ‘analog’ video delay piece: there is a projector pointed at a wall, and there is a camera pointed at the same wall. The slight difference in focus and geometry between the camera and the projector means that the projected image doesn’t quite match the camera’s capture of that image. This offset, when repeated 24 frames a second, results in interesting trailing patterns through space. The closer the focus and image are to each other, generally the more active the effect and the cooler the image.

This edition of the piece featured a mirrored balloon tied to a fan pointing straight up. The mirrored balloon naturally dances in the air of the fan, and so there is always an active input to the feedback system, to make the delay stream move around and look nice. The space is set up in such a way that it is also possible for audience members to get in front of the assemblage, and make their own patterns on the wall. There is a couch on one wall for people to hang out and watch like you would watch a fire on a stormy night. To tie things together thematically, the projector and fan are both stacked on top of piles of architectural books.

Process

This was a tricky one. The matching of the lensing between camera and projector is very finicky, and slightly different alignments and lightings create very different effects. I initially intended to create a system via which a participant could have their own body projected on them, while leaving feedback trails behind in their wake. This would have been a very different physical installation, but more to the point, it would have required the camera lens and projector lens to be as physically close to each other as possible if the effect were to be replicable across a range of distances from the camera. It eventually became clear that I didn’t have the equipment or installation space to do that correctly… and so I decided to move to a short-throw projector, and a camera with a medium-wide field of view. I’m still interested in the other piece, and I think it could be good someday… Maybe they could go in an installation together.

Extraordinarily useful notes on throw ratios

things you can write on paper before just eyeballing it

homebrewing

Finer Points

Things I enjoyed about this installation:

  • Siting: the location of the work, at the top of the stairs, as a backdrop to two more static interactive works, felt successful to me. I was also lucky that the audio piece sited around the corner provided a great score for the dancing balloon. I thought this was a positive in its environment.
  • Lighting: hard sidelighting the balloon from across the room felt successful as well. I was pleased to be able to put a shadow of the balloon on a bit of wall between the projection and an adjacent window. In addition to lighting the object well for the camera, I think this resulted in a more immersive and site-specific implementation. I like it when things can be “thick” with space.
  • Other objects: the use of books, carpet, and couch was not initially part of the plan, but came from stuff that showed up (usefully or can’t-get-rid-of-it) in the space. I ended up enjoying all of the props, and wish that I had been able to make the gigantic Manfrotto tripod I was using somehow fit the mise-en-scène a little better as well.
  • The spammy descriptive text on performativity felt very good. I like jokes like that, that are very stupid but very serious at the same time.

Things that could have been improved:

  • Again, giant tripod. I used this large expensive tripod in order to have more fine control over the camera position relative to the projection. While this was very important and useful, I wish the object of it had been more dressed or otherwise intentional.
  • Footpaths through active space: only the leading edge of the installation was really accessible to audience, and the couch was less than inviting. I could imagine a better cable management system, perhaps running under the rug and emerging through a hole, that could have made for a more welcoming space.
  • Camera: I transitioned late from a Sony camcorder to a more fancy Sony DSLR. Sony DSLRs are not really made for video, and so while in video mode it had to be poked every half hour to keep from going to sleep… The DSLR was most useful in giving me finer manual control over focus and zoom, but didn’t have an amazing lens on it. I think better aperture control could have given me better results. Ultimately, I ended up paging through a bunch of its auto settings until I found an exposure algorithm I liked, and went with that. It’s possible the auto-exposure flutter helped the piece? Certainly adjusting the ISO had no effect on the feedback once that got going… I just wish I had spent more time with the equipment to fine tune my choices.
Community Event 2019
https://courses.ideate.cmu.edu/62-362/f2019/community-event-2019/ | Mon, 16 Dec 2019

Everybody who’s here get in front of the green screen! Do something funny! Let’s have something to remember this night by! OMG you were so cute! You looked so young! Let’s not let the night end! Let’s not let the night have ever ended! This is never, ever, ever, ever going to go away! Wahoo!

Descriptor

{CE19} is a party-trick video installation. There is a large green-screen with nice lighting and a camera pointed at it. If you walk in front of the green-screen, whatever you do is sampled and looped endlessly on a nearby TV. Because of the green-screen technology, your body is separated from the green background and layered on top of some other backdrop… except instead of a sandy beach scene or something like that, the backdrop here is a ten-second loop of everything that’s already happened in front of the screen, layered endlessly on top of itself. And so, your captured image won’t go away until somebody else’s image is layered on top of yours and blocks you out. Sounds like the internet, amirite?

Process

The design of the project is extremely simple: a chroma-key effect takes in two video streams, one of the foreground (a human performer in front of a monochromatic backdrop that can be ‘keyed out’), and one of the background (some picture to replace the monochromatic background). The ‘hack’ is simply to plug a copy of the final output back into the background input, with a delay. I used a ten-second delay: so the last ten seconds of output video become the background for the current ten seconds of input. The result is twenty seconds of composited video, which then becomes the background for the next ten seconds of video, et cetera. You can see the effect clearly in this guy, who has been looking at his phone for almost a minute in front of the camera:

I made the video processing patch in Isadora, live video-processing software pioneered for use in dance. It uses a patcher interface similar to Max:

As you can see, the patch really is “that” simple.
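For anyone without Isadora, the same feedback idea can be sketched in Processing. The following is my own rough approximation for illustration, not the patch used in the piece: it assumes a webcam and an evenly lit green backdrop, and uses a much shorter delay than the installation's ten seconds to keep memory use modest.

import processing.video.*;

// Feedback chroma-key sketch: key out a green backdrop from the live camera
// and composite the performer over a delayed copy of this sketch's own output.
Capture cam;
PImage[] delayBuffer;      // ring buffer of past output frames
int delayFrames = 60;      // ~2 s at 30 fps (the installation used 10 s)
int writeIndex = 0;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  delayBuffer = new PImage[delayFrames];
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();

  // Background = the frame we output delayFrames ago (black until the buffer fills).
  PImage bg = delayBuffer[writeIndex];
  PImage composite = createImage(width, height, RGB);
  composite.loadPixels();

  for (int i = 0; i < composite.pixels.length && i < cam.pixels.length; i++) {
    color c = cam.pixels[i];
    boolean isGreen = green(c) > 100 && green(c) > red(c) * 1.4 && green(c) > blue(c) * 1.4;
    if (isGreen && bg != null) {
      composite.pixels[i] = bg.pixels[i];  // keyed out: show the delayed past
    } else if (isGreen) {
      composite.pixels[i] = color(0);      // nothing recorded yet
    } else {
      composite.pixels[i] = c;             // foreground: the live performer
    }
  }
  composite.updatePixels();
  image(composite, 0, 0);

  // Store this output frame so it becomes the background delayFrames from now.
  delayBuffer[writeIndex] = composite;
  writeIndex = (writeIndex + 1) % delayFrames;
}

The keying here is just a crude green-ratio test; a real keyer such as Isadora's gives much finer control, which is part of why lighting the screen brightly and evenly mattered so much.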

Technology

Because this is such a simple idea, the proof of concept was fairly quick, which led to the question of quality: how to make it look good (but not too good), how to make it consistent across a long-ish installation, and how to lay out the space so that it would (a) not be an eyesore, (b) naturally capture enough people for the effect to be visible, and (c) be accessible enough to invite the public to play with it.

The first question was capture: analog or digital. It would be possible to run this system with composite video and an analog video delay line. The advantages would be lower latency and a potentially appealing analog grain to the image quality. I decided, however, that a “cleaner” image quality made more sense for a piece that is essentially about social media in 2019. To downplay the latency and frame rate issues (<50ms, as low as 20fps at times), I placed the screen out of direct view from the capture area: thus a participant can’t see their lips out of sync as they speak, and it’s close enough for an outside viewer to feel it to be approximately time-aligned.

Having decided on a digital system, I captured from a Panasonic GH4 through a BlackMagic Intensity Shuttle into Isadora, and outputted the media directly from my Mac to a flatscreen. The GH4’s superior image quality, bright exposure, and aperture control made dialing in the chroma-key infinitely easier than my attempts with a webcam and a handheld camcorder. Despite the larger footprint, using a Real Camera ended up being very worth it.

(analog video feedback noise)

Finally, lighting: to key properly, a green-screen has to be very bright and evenly lit, or else its apparent color will not be consistent across the full field. Luckily, the site had a good amount of track lighting available in the stairwell in which I installed. I used all of the track lighting to illuminate the screen, and added a soft key light by the camera to pick out the public.

Yes, that’s a fluorescent winter-time happy lamp. Small size, bright light, soft beam. It worked great! And everyone felt better, I’m sure.

Layout

The last question was experience design: how to make this thing viewable, usable, unavoidable, fun, et cetera. By placing it in a cyclorama-like space at the top of a stairwell, I felt that it was positioned in an area that was both high-traffic and enough out of the way not to feel like a major imposition on other works. The “in a corner” quality allowed the lighting to be fairly involved without messing up the dim and moody vibe of the rest of the evening’s lighting. And it meant that there were always subjects passing by and becoming part of the work, so that more playful types could easily see what was afoot and learn how to use the piece as a toy.

I also made the piece be part of my guided tour of the space as the “Total Librarian”, which narratively cohered it to the larger evening in a constructive way (and again, mandated participation). I am happy with the playful way the work was proposed in physical space, paired with a slightly more sinister description in the show documentation. Perhaps some participants became less comfortable with their fun upon reading about it later.

Hi, I’m the total librarian. We’ll now head to the stacks. There’s a remote possibility you’ll completely change lexigraphical meaning by entering the space—so for the sake of safety, please pause in front of this green void so you can be copied for backup. Remember to be yourself. Remember to be funny. Thank you.

I do think that the piece would have been better served by a larger output. The flatscreen I used was a little too small to catch the eye of a playful user in a crowded room, and slightly obscured by columns from a passer-by’s view. I also think that projection could have brought the human forms up to a more 1:1 scale, which would have given a better sense of infinitely capturing the bodies of these people.

All that said, I feel pretty good about it.

 

lonely (for public consumption)
https://courses.ideate.cmu.edu/62-362/f2019/lonely-for-public-consumption/ | Mon, 16 Dec 2019

padra crisafulli

[Photo: Christina Brown]

A view from the entrance to the room. [Photo: Padra Crisafulli]

But what is it?

In a room, there are three elements. 1; there is an enclosed space with the performer inside, as well as cameras focused on different distinct parts of the body and a variety of props to be used. 2; these cameras are displaying what they see onto screens throughout the rest of the space, set up so that a person can only see one screen at a time. 3; there are letters at a station with different prompts written on them, as well as the option to write your own prompt, which is then delivered to the performer in the enclosed space to act out. The intersection of these three elements creates a performance in which different parts of the body are isolated when performing an action, forced to be considered by themselves as opposed to as parts of the whole body.

1; there is an enclosed space with the performer inside, as well as cameras focused on different distinct parts of the body and a variety of props to be used. [Photo: Christina Brown]

2; these cameras are displaying what they see onto screens throughout the rest of the space, set up so that a person can only see one screen at a time. [Video: Jacquelyn Johnson]

3; there are letters at a station with different prompts written on them, as well as the option to write your own prompt, which is then delivered to the performer in the enclosed space to act out. [Photo: Christina Brown]

In this installation, the letters and the enclosed pod were directly across from each other, so that the time between submitting a prompt and seeing it acted out was smaller. [Photo: Padra Crisafulli]

Audience interacting with the letter station, managed by Noah Hull. [Photo: Christina Brown]

Sometimes, the prompts were very fun to receive. [Photo: Christina Brown]

Process Reflection

This piece was born out of a deep struggle, both with the initial prompt, which was fascinating yet incredibly difficult to translate into something tangible, and with the time crunch I was under in the midst of all of my other work. The concept of ‘arrows’ has so much resonance in other fields, like semiotics in linguistics and symbolism in any artistic form, that it became daunting to consider how one could vary on it and respond to it in an artistic way that hadn’t already been done a million times before. And to do it with as little time as I could devote to it, yes, it was in fact very hard. Knowing this, it should come as no surprise to anyone (least of all myself) that I am very unhappy with how this piece ended up. Maybe that is because there is potential in it to be interesting, and conversely, the potential for it to become another self-indulgent ‘look at me being strange’ kind of work.

The initial idea as written out, with help from Eben Hoffer.

Self-flagellation aside, in terms of growth, maybe this wasn’t all for naught. I have always had a difficult time asking for help, especially in artistic endeavors. But especially in a class in which I am learning new skills and immediately applying them, asking for help has become more and more pertinent, not necessarily to producing the best product, but to anything being produced at all. I think this reluctance to ask for help when producing work like this comes from wanting to hold onto the idea that, funding aside, the artist is the sole reason things are made. This thought created a conflict within myself as I tried to put this together, until I ultimately created a piece which cannot exist without other people, right down to the title. I had doomed myself to this kind of irony from the very beginning.

A day of testing with the original cameras.

It was around this time that the idea of using fingers as a subject arose.

The day of the MuseumLab showing, I was taking the bus to the North Side when I realized that I had accidentally turned myself into a computer: sticking myself in an enclosed space, receiving commands, returning the constrained results of those commands. And yet, I know one of the critiques of my written description was that I had written it to be incredibly impersonal. Maybe I knew deep down that this analogy was coming, even though it never once crossed my mind. During the performance itself, I overheard one of the audience members noting that it was like I was an animal in a cage. And yet, later on in the evening, I was having a lot of fun with them as they began to write their own prompts for me, which were silly and playful. If I were to do this again, I would explore this straddling of the deeply personal and the impersonality of the performer as they are separated into parts, and put more thought into how the order in which an audience member engages with the elements impacts the interaction. Maybe there is more of a narrative to this.

The angles used in the testing when I did not have a tripod were a bit more interesting to me.

Fingers are a very strange thing to film, especially to keep them in interesting shapes. I probably would not make them the only other subject besides my face if given another crack at this.

Project 3: Walking Backwards
https://courses.ideate.cmu.edu/62-362/f2019/project-3-walking-backwards/ | Mon, 16 Dec 2019

Photos of the Device
Brief Video of How it Works

IMG_1207

Final Presentation at Museum Lab

Project Statement

Inspired by Guy Debord’s writing, Theory of the Dérive (1956), and his idea of the “dérive”, or rapid passage through varied ambiances, this project proposes walking backwards as a new means of experiencing a confined space. Debord emphasizes the importance of purposeless strolling for experiencing the discarded or marginalized aspects of our environment. He believes that purposeful walking has an agenda, making it difficult to absorb the world around us. While walking without a set destination or intention, we are open to random encounters that enrich our experience.

The Museum Lab is a space with limited walking and movement options. The sequence of moving from one space to another constructs a linear experience, which reduces the chances of random, serendipitous moments. By walking backwards, one is denied one’s frontal view and thus changes one’s perceived path.

In Walking Backwards, a sensor on the hand measures the distance between the user and their surroundings. The distance is converted into vibrations, which allows the user to estimate how close they are to nearby objects. From the varying vibrations, the user is able to delineate their path.

Process Images 

Sketch of an idea. Device as a glove

The Glove

Process Reflection

The project was a continuation of the previous project, refitted into the context of the Museum Lab. There were some improvements over the previous version, namely in how the device is worn and in relocating equipment to different parts of the body. These changes allowed for a greater variety of interaction with the device. Moving the ultrasonic ranger to the hand and keeping the Arduino and protoboard to a minimal size improved the wearability of the device. The ultrasonic ranger on the palm allowed me to use my hands freely to sense any nearby object.

However, I regret that I wasn’t able to make more adjustments to the project. It would have been better if the device had been able to detect objects even further ahead and had more variation in the vibrations it gave off.

//Project no.3: Walking Backwards 
//Hugh Lee 
//The ultrasonic sensor measures the distance between the sensor and any nearby object and converts it into vibration. There are three different types of vibration, indicating how close the object is to the sensor. The different types of vibration are pulled from the "Adafruit_DRV2605.h" library. 
#include <NewPing.h> 
#include <Wire.h> 
#include "Adafruit_DRV2605.h" 

#define TRIGGER_PIN 12 // Arduino pin tied to trigger pin on the ultrasonic sensor. 
#define ECHO_PIN 11 // Arduino pin tied to echo pin on the ultrasonic sensor. 
#define MAX_DISTANCE 400 // Maximum distance we want to ping for (in centimeters). Maximum sensor distance is rated at 400-500cm.

Adafruit_DRV2605 drv;

NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DISTANCE); // NewPing setup of pins and maximum distance. 
void setup() { 
  Serial.begin(9600); 
  Serial.println("DRV test"); 
  drv.begin(); 

  drv.selectLibrary(1); 
  // I2C trigger by sending 'go' command 
  // default: internal trigger when sending the GO command 
  drv.setMode(DRV2605_MODE_INTTRIG);
} 

uint8_t effect = 1; 

void loop() {
  // Serial.print("Effect #"); Serial.println(effect); 
  int ping = sonar.ping_cm(); 
  delay(50); // Wait 50ms between pings (about 20 pings/sec). 29ms should be the shortest delay between pings. 
  Serial.print("Ping: "); 
  Serial.print(ping); // Send ping, get distance in cm and print result (0 = outside set distance range) 
  Serial.println("cm"); 

  if (ping == 0 || ping >= 50) {
    drv.stop(); // nothing within 50 cm (0 = outside set distance range): stop vibrating
  } else {
    if (ping < 5) {
      effect = 118; // if ping is smaller than 5, use vibration 118 from the "Adafruit_DRV2605.h" library
    } else if (ping < 25) {
      effect = 48;  // if ping is smaller than 25, use vibration 48 from the "Adafruit_DRV2605.h" library
    } else {
      effect = 44;  // if ping is smaller than 50, use vibration 44 from the "Adafruit_DRV2605.h" library
    }
    drv.setWaveform(0, effect); // queue the chosen effect
    drv.setWaveform(1, 0);      // end the waveform sequence
    drv.go();                   // play the effect
  }
}

 

Babbling Creatures
https://courses.ideate.cmu.edu/62-362/f2019/babbling-creatures/ | Mon, 16 Dec 2019

[62-362] Activating the Body | Project 3: Arrows | Fall 2019 | Alex Lin & Scarlet Tong X Connie Chau & Nicholas DesRoches

Overall Project

Final State of Rose

Photo Credit: Christina Brown

Video Footage Credit:  Jacquelyn Johnson

Description
Narrative Description

The Anthropocene marks the beginning of the age in which human activity affects the planet as a whole. No longer do our individual decisions and their consequences impact only our perceived sphere of influence; rather, they affect the well-being of our ecosystem on a macro level. Babbling Creatures portrays the inherent complexities of a small-scale system featuring several simple paper sculptures. The misdirection and flow of information constantly alter the state and behavior of those creatures to reveal the larger consequences of our seemingly insignificant daily actions.

Our dissociation from nature, through the construction of artificial landscapes, leads us to forget the fundamental bond we share with other living beings in our ecosystem. Alongside the permanence of the new apparatus we erect to serve our own societal concerns, the rate at which nature is contaminated and destroyed as a consequence of our selfish pursuits far exceeds the rate at which mother nature can heal herself. To illustrate the fragile and weakened state of our planet’s ecosystem, the project is constructed out of translucent paper, embodying the fleeting beauty of nature amidst a sea of external threats.

Babbling Creatures is a family of interactive paper objects whose reactions are informed by audience and musician movements. Each creature harvests motion and human activity as input and translates it into different physical behaviors. The creatures represent the natural world, which is complex, interwoven and exists in a delicate equilibrium vulnerable to change.

Process Images

Initial Concept Sketches:

Network Diagram

Initially, we planned to create three different creatures that each had individual reactions, but whose consequences would also affect the behavior of the others. This diagram portrays the world-building planned at the outset of the project. We were aiming to create a complex network of interrelated creatures that spoke to one another in ways that weren’t perceptible at a quick glance. The larger blue text denotes the different creatures, while the wavy pink lines denote the different reactions and signals that the pieces share with each other. The black represents the audience and their role within the space.

Rose Movement Exploration & Experimentation

Flexinol Stitching Pattern Exploration

Flexinol Movement

Monofilament Actuated Movement

When we first began the physical actuation explorations, we were hoping to get the flexinol working to crumple the petals of the origami rose. Through a couple of studies, we were able to find a means of stitching the flexinol so that it was able to move the petals given its limited mechanical force output. We then implemented the flexinol, resulting in a small, but noticeable movement in the petals. Through further testing, however, we had difficulty implementing the flexinol and having confidence in it. One day it would work, and the next it would seem that it had burnt out and would need to be replaced. Given the amount of time required to set up the flexinol as well as our expended supply, we chose to use a monofilament system instead. We connected the monofilament to a servo and used it to crumple the rose, allowing for more movement, which in hindsight was perhaps for the better, given the noise level and many distractions that filled the MuseumLab during our final installation.

Water Basin for the Rose

To create the basin, we started by CNC routing insulation foam to make the base form to vacuum form with.

CNC Milling the Vacuum Form Base

Since it was our first time vacuum forming, we had to do a lot of trial and error, as illustrated in the image below. We did not have time to gesso the form to make it release more easily from the styrene, and we used a 1/16″-thick material, which reduced the level of detail that we had hoped for in the final product.

Vacuum Forming Struggles

Although vacuum forming created more interesting forms, it also took us out of our comfort zone, being a digital fabrication tool that we weren’t as familiar with.

Vacuum Formed Plastic Water Capacity Test

Willow Tree fabrication and movement

The progression of the willow leaves began with a single leaf, then moved up to several, then to an entire array. We chose to hook up the servos from the side, despite the greater mounting difficulty, because it meant less physical resistance as the leaves shuffled against one another.

Collaborating with Exploded Ensemble

Photo Credit: Christina Brown

Flex Sensor Testing

Our collaboration with the Exploded Ensemble was mainly through a simple wearable with an embedded flex sensor, attached to Connie as she played the harp throughout the night. Mirroring the conceptual underpinnings of her musical piece, which distorted audio files of many languages from across the world to accompany her improvisation with Nicholas, Connie’s movements in turn distorted and crumpled the rose that sat on a white basin.

We had also considered using an accelerometer, but that complexity of data and information wasn’t necessary for the interaction, and it would have introduced more physical challenges in terms of disguising the sensor and wiring its five pins.

Board Layout:

Board Layout

Our strategy for handling all of the external power and wires was to mount the breadboards and Arduino to a sheet of plywood. As we got closer to the final installation, we realized that we were spending exorbitant amounts of time setting up the power box and cleaning it up after our work. So, we claimed a power box that would be a permanent part of our board, allowing the project to be carried, in almost its entire electronic entirety, on one sheet of plywood. Considering the transportation and risk involved with long wires, we also decided to use zip-ties to secure the many wire bundles.

Wires, Wires, and More Wires:

Planning out all the different lengths of wire we needed for the project

Electronic Waste Bin Post-Deconstruction

Wires have consistently been a critical consideration in both of our past projects. Unfortunately, given the environmental character of our project, we again resorted to using wires to bridge the connections for all of our moving pieces. Although it wouldn’t have been feasible given the time frame of this project, looking forward we will definitely consider more wireless options, which would also allow our work to be more portable and mobile.

Process Reflection

Fabrication & Setup:

Throughout the project, we spent the majority of the time realizing the physical aspect of the work. From the array of willow leaves to the rose and all of the technology implemented there, we quickly realized that we had to scale back in scope to ensure that each creature would have a presence and a role commensurate with how much time was invested in bringing it to fruition. Much of installation day was spent on the willow leaves. Although simple in aesthetic, mounting paper, wires, and servos to a rugged, dusty surface, lighting the leaves, and positioning the sonar were time sinks that required multiple attempts. The wiring, although bundled neatly, was also a challenge given its sheer abundance and length. Unraveling each bundle, merging bundles headed in a similar direction, and taping to unforgiving surfaces definitely slowed our progress, but also made the preparation worth the effort! Because the project focused mainly on creating a space, much of the coding naturally became simpler, which allowed us to fully immerse ourselves in the costuming, lighting, and staging of the environmental installation.

In retrospect, scaling down the size of the installation and increasing the number of smaller creatures may have contributed to a more networked feeling, though it would have been much more difficult to implement technically and would have required designing and fabricating more pieces and parts. Looking forward, we both want to continue exploring the performative aspects we delved into for this project, as well as further expand our technical know-how and capacity.

Arduino Code
//Babbling Creatures by Alex Lin & Scarlet Tong
//This code is used to physically actuate several creatures and physical
//interactions as part of a larger installation. It uses various inputs
//(photoresistor, sonar, flex sensor) to move physical elements
//(using a pump and a couple of servos).
#include <NewPing.h>
// defines pins numbers
#define TRIGGER_PIN  9  // Arduino pin tied to trigger pin on the ultrasonic sensor.
#define ECHO_PIN     10  // Arduino pin tied to echo pin on the ultrasonic sensor.
#define MAX_DISTANCE 400 // Maximum distance we want to ping for (in centimeters). Maximum sensor distance is rated at 400-500cm.

NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DISTANCE); // NewPing setup of pins and maximum distance.

unsigned long pingTimer;

#include <PololuLedStrip.h>

PololuLedStrip<4> ledStrip1;
PololuLedStrip<5> ledStrip2;

#define LED_COUNT 60
rgb_color colors[LED_COUNT];

// defines variables
long duration;
int distance;

int i;
int j;

//Servo Instantiations
#include <Servo.h> 

// Declare the Servo pin 
int servoPin1 = 7;
int servoPin2 = 8; 
int servoPin3 = 6;

int mod = 0;

// Create a servo object 
Servo Servo1; 
Servo Servo2;
Servo Servo3;

int pos;
////////////////////////////////////////////////////////////
int photoreader;

#define laserTest A3
//int laserTest = 12;
int pumpPin = 11;

////////////////////////////////////////////////////////////
int flexreader;

#define flexTest A4
//int flexinolPin = 12;
unsigned long now;
unsigned long nowAnd;

////////////////////////////////////////////////////////////
int delayTime = 250;

void setup() {  
  Serial.begin(9600); // Starts the serial communication

  Servo1.attach(servoPin1); 
  Servo2.attach(servoPin2);
  Servo3.attach(servoPin3); 

  pinMode(servoPin1, OUTPUT);
  pinMode(servoPin2, OUTPUT);
  pinMode(servoPin3, OUTPUT);

  pinMode (laserTest,INPUT);
  pinMode (pumpPin,OUTPUT);

  rgb_color color;
  color.red = 255;
  color.green = 231;
  color.blue = 76;

  for (uint16_t i = 0; i < LED_COUNT; i++) {
    colors[i] = color;
  }

  ledStrip1.write(colors, LED_COUNT);
  ledStrip2.write(colors, LED_COUNT);
}
void loop() {
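  // only ping the sonar every ~50 ms, timed with millis(), so the rest of the loop keeps running between readings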
  if(millis() - pingTimer > 50) {
    distance = sonar.ping_cm();
    pingTimer = millis();
    Serial.print("Sonar: ");
    Serial.println(distance);
  }
  
  if (distance > 0 && distance < 150){   // NewPing returns 0 when nothing is in range
    if (mod % 2 == 0) {
      Servo1.write(40);
      Servo2.write(0);
      delay(400);
      mod++;
    } else {   // alternate which leaf servo lifts on each pass
      Servo2.write(40);
      Servo1.write(0);
      delay(400);
      mod--;
    }
  }
  
//////////////////////////////////////////////////////////////////////////////

  photoreader = analogRead(laserTest);
  Serial.print("Photoresistor: ");
  Serial.println(photoreader);
  if (photoreader < 100){   // run the pump while the photoresistor reading is below the threshold
    digitalWrite(pumpPin,HIGH);
  } else {
    digitalWrite(pumpPin,LOW);
  }

////////////////////////////////////////////////////////////////////////////// 
  flexreader = analogRead(flexTest);
  Serial.print("Flex: ");
  Serial.println(flexreader);
  if (flexreader > 250){
    for (pos = 0; pos <= 45; pos += 1) { // goes from 0 degrees to 45 degrees
    // in steps of 1 degree
      Servo3.write(pos);              // tell servo to go to position in variable 'pos'
      delay(30);                       // waits 30ms for the servo to reach the position
    }
    for (pos = 45; pos >= 0; pos -= 1) { // goes from 45 degrees back to 0 degrees
      Servo3.write(pos);              // tell servo to go to position in variable 'pos'
      delay(30);                       // waits 30ms for the servo to reach the position
    }
  }
  
//////////////////////////////////////////////////////////////////////////////  
  delay(delayTime);
}
Staircase Encounters https://courses.ideate.cmu.edu/62-362/f2019/staircase-encounters/ Tue, 12 Nov 2019 16:09:58 +0000 https://courses.ideate.cmu.edu/62-362/f2019/?p=9038 Unity 3D, OpenPose, Kinect for Windows v2, Speakers, Acrylic Plastic
Ambient Auditory Experience Emanating Sculpture
Installed at the Basement Stairwell of Hunt Library, Carnegie Mellon University

2019
Individual Project
Rong Kang Chew

As Hunt Library guests enter the main stairwell, they are greeted with a quiet hum. Something’s changed but they can’t really see what. The hum changes as they walk along the staircase – they are amused but still curious. The sound becomes lower in pitch as they walk down to the basement. Someone else enters the stairwell and notices the noise too – there is brief eye contact with someone else on the staircase – did they hear that too?

As they reach the bottom and approach the door, they hear another sound – a chime as if they have reached their destination. Some of those not in a hurry notice a sleek machine, draped in smooth black plastic, next to the doorway. It is watching them, and seems to be the source of the sounds. Some try to experiment with the machine. Either way, the guests still leave the staircase, only to return sometime soon.


Process

Staircases are usually shared cramped spaces that are sometimes uncomfortable – we have to squeeze past people, make uneasy eye contact, or ask sheepishly if we could get help with the door. How can we become aware of our state of mind and that of other people as we move through staircases, and could this make being on a staircase a better experience?

After learning about the underlying concept for the FLOW theme, which was transduction and the changes between various forms and states, I knew I wanted to apply that concept to an installation that occupied the space of a room. In this case, the room was a stairwell leading to the basement level of Hunt Library at CMU. This space was close enough to the rest of the installations in our show, WEB OF WUBS.

I had to seek permission from CMU Libraries in order to have my installation sited in the stairwell, and therefore had to come up with a proposal detailing my plans and installation date. Due to the nature and siting of the installation, safety and privacy were important emphasis points in the proposal. I would like to thank my instructor Heidi for helping me to get the proposal across to the right people.

Placement in the stairwell was tricky, as I had to ensure that the cabling and positions of objects were safe and would not cause any tripping. I iterated through various placements of the webcam, computer, and speakers to find out what would work well for the experience. Eventually, I settled on consolidating the entire installation into a single unit instead of trying to conceal its elements. Some of my earlier onsite testing showed that people didn’t really react to the sound if there was no visual element. This, along with Heidi’s advice, encouraged me to put the installation “out there” so that people could see, interact, and perhaps play with it.

The final enclosure for the sculpture was laser cut out of 1/8″ black acrylic plastic and glued together. Speaker holes were also included for the computer speakers used. Unfortunately, I ran out of material and decided to go with exposing the wiring on the sides. The nature of the glue used does allow disassembly and an opportunity to improve this in the future.

As for the software side of the implementation, I used the OpenPose library from the CMU Perceptual Computing Lab, which allowed me to figure out where humans are in a particular scene. However, it only detects people in 2D, so I had to limit myself to working with the height and width of where people were in the frame. I used the Unity 3D game engine to process this information, taking the average horizontal and vertical positions of people’s heads to adjust the pitch of sounds in two “zones” of the staircase (see the end of the post for some code).

X,Y position in zone <==> pitch of sounds for that zone 
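
Concretely, the pitch is a linear map of the averaged head position within a zone. Below is a small worked example using the map() helper from the Zone component at the end of this post, with its default values (min = -60, max = 30, minPitch = 1.5, maxPitch = -0.5):

static float map(float x, float in_min, float in_max, float out_min, float out_max)
{
    return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}

// e.g. a head tracked at x = 0 in a zone spanning -60..30:
// map(0, -60, 30, 1.5, -0.5) = 60 * (-2) / 90 + 1.5 ≈ 0.17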

The sounds used by the experience included those from the Listen to Wikipedia experience by Hatnote and some verbal phrases spoken by Google Cloud Text-to-Speech.

Reflection & Improvements

A lot of the learning from this project came from testing on site, and even so, I think I did not arrive at where I actually wanted to be for the installation of Staircase Encounters.

Hidden, Surreal |————————————X——| Explicit, Playful

The key issue was something I mentioned earlier: how noticeable and interactive did I want my installation to be? In my first tests, it seemed like no one was paying attention to the sounds. But in the end, I think I perhaps made the installation too interactive: I received a lot of feedback from guests who expected the sounds to react more to their movements, especially since they could see all of their limbs being tracked.

Given more time, I could have added more parameters to how the music reacts to guests, e.g. speed of movement, “excitedness” of limbs, and encounters with other guests. As it was, the visual element led to engagement that was not followed up on, which was a little disappointing in itself, like a broken toy.

My key learning from Staircase Encounters is to test and think clearly about the experience. It is easy to fixate on the building, but not easy to be objective, dispassionate, and measured about the experience and the end product, especially when the building is rushed.

Code

Here is some code for the pink and blue “zones”, which track people as they enter and move through them and update the sounds accordingly.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.Linq;

public class Zone : MonoBehaviour
{
    public enum Axis
    {
        X, Y
    }

    public List<GameObject> soundsToSpawn;
    public float min = -60;
    public float max = 30;
    public float minPitch = 1.5f;
    public float maxPitch = -0.5f;
    public Axis axisToUse = Axis.X;

    private Queue<float> positions;
    private int queueSize = 20;
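    // positions is a rolling window of the last queueSize tracked samples, averaged to smooth the point used for pitch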

    private AudioSource sound;
    private float timeStarted = 0;
    private bool played;

    private int soundMode = 0;

    // Start is called before the first frame update
    void Start()
    {
        positions = new Queue<float>();
    }

    // Update is called once per frame
    void Update()
    {
        if (sound != null && played && !sound.isPlaying && Time.time - timeStarted > 1)
        {
            Destroy(sound.gameObject);
            played = false;
            sound = null;
        }

        if (Input.GetKeyDown("0"))
        {
            soundMode = 0;
        }

        if (Input.GetKeyDown("1"))
        {
            soundMode = 1;
        }
    }

    void OnTriggerEnter2D(Collider2D col)
    {
        //Debug.Log(gameObject.name + " entered: " + col.gameObject.name + " : " + Time.time);
        
        if (sound == null)
        {
            timeStarted = Time.time;
            sound = Instantiate(this.soundsToSpawn[soundMode]).GetComponent<AudioSource>();
            sound.Play();
            played = true;
        }
    }

    void OnTriggerStay2D(Collider2D col)
    {
        if (sound != null)
        {
            RectTransform rTransform = col.gameObject.GetComponent<RectTransform>();
            float point = 0;

            switch (this.axisToUse)
            {
                case Zone.Axis.X:
                    point = rTransform.position.x;
                    break;
                case Zone.Axis.Y:
                    point = rTransform.position.y;
                    break;
                default:
                    break;
            }

            while (positions.Count >= queueSize)
            {
                positions.Dequeue();
            }

            positions.Enqueue(point);

            float avgPoint = positions.Average();

            //Debug.Log("Avg value of " + this.gameObject.name + " to " + avgPoint + " : " + Time.time);
            float targetPitch = map(avgPoint, this.min, this.max, this.minPitch, this.maxPitch);   // use the smoothed (averaged) position computed above
            sound.pitch = targetPitch;
        }
    }

    static float map(float x, float in_min, float in_max, float out_min, float out_max)
    {
        return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
    }
}

 

It Never Made Any Sense https://courses.ideate.cmu.edu/62-362/f2019/it-never-made-any-sense/ Mon, 11 Nov 2019 17:12:16 +0000 https://courses.ideate.cmu.edu/62-362/f2019/?p=9047 A multi-visualizer for a place that was once home.

I was born in Portland, OR in 1988, at Bess Kaiser Hospital, during a nurse’s strike. My older brother had been a traumatic and difficult birth, and so it was decided that a planned Caesarian section would be the best way to ensure a smooth delivery for me. As an added benefit, this meant that the birth could be scheduled around the strike. And so, that July, on a hot day weeks before my projected due date, my folks got a spot in the parking lot on Interstate Avenue, and went home in the mid-afternoon with a baby, feet and hands rubbed and warmed to make the lingering blue coloration fade. The hospital closed within the next couple of years, and was razed to the ground, replaced by a scrub embankment between the actual interstate highway and the street named for it.

This site is now the national headquarters for Adidas, who design shoes for human feet and shirts for human chests and backs, to be fabricated elsewhere by human hands and shoulders. It’s about two hours’ drive from there to Mount Hood, and about two hours from there to the low-class beach towns of Tillamook County I’ve been going to since shortly after finally opening my eyes, a few weeks after returning from the hospital. From there to the Media Lab at CMU, travel time depends on your means of travel.

Waves

Here at CMU, there is a live projection of a wave-like shape on the wall. The waves are as big as the waves are in Manzanita, OR, just outside the mouth of Tillamook Bay. The height of the waves on the wall is the height of the tide in Garibaldi, OR, just inside that bay. The scale of the waterline is 1:1; if you imagine the floor of this room to be “sea level”, these waves are as high above that as the water is in Oregon, right now. “Sea Level” is of course a relative term– the level of the actual sea is anywhere from +8.0′ to -2.0′ above Sea Level. There’s a certain similarity between this and the notion of a birth time, or a birth location: if not here, where? If not now, whenever.

There is a tiny screen in front of the big wave that shows you what is happening at Manzanita right now. It’s what’s above, here. The waves you’re seeing in the video are the same waves you’re seeing in the other video, translated through a National Weather Service API and fifteen years of living on the East Coast. The relative scale between these things is about right, don’t you think? Some things are hard to let go, even when they’ve been reduced by time to a tiny picture-in-picture in your memory. I’m reconstructing this using random sinusoidal functions. I’m reconstructing this using a prejudicial feeling that ascetic minimalism, only tighter, only more monochrome, is the only ethically correct, and accurate, way to model a place like this. I’m reconstructing this memory using another memory. Of something I was never here for. Poorly.

There is also a basin of water, with a bright light pointing into it, reflecting the surface of the water onto the wall. There is a contact speaker attached to the bottom of the basin, playing a filter-swept noise generator that sounds like waves. When the sound gets loud enough, it makes a little pump drip water into the basin from above, and ripples show up on the wall. You’re going to have to trust me here. The pictures didn’t come out. The pictures didn’t come out. But the idea was for it to look something like this: It, of course, didn’t look like this. This is a theater piece I saw maybe ten years ago, with an old friend I don’t get to see much any more– a person whose name starts with D, and for that reason or maybe some other, I would occasionally accidentally call him “dad”. It was very snowy– we made it to the theater at the last second, far up on the Upper East Side, far from everything. I’ve been thinking about it ever since. My friend said it was one of those singular experiences you’ll never forget, that you only get a handful of, and that made me feel proud to have been able to get him the ticket. The key thing about this show is that it’s about memory, but more crucially, that there are no people in it.

But, unfortunately, there are people in every room. The concept of “room” implies people, and so you find them there. Speaking of which: the drips from the pump. During critique for this piece, a professor asked me if the timing of the waves and drips was related in any way to the timing and the structure of the video. It’s not. But he asked it in such an aggressive, dismissive way that I lied about it– now, I lied in a way that didn’t make the piece any more conceptually coherent, of course, making some additional claim about the API that really didn’t make any difference. But it felt good to lie to him.

Here’s the code for the max patch:

----------begin_max5_patcher----------
1342.3ocyZ0zbhaCF9L7qviO1kFrrrMvdnW5N6zCcu0sW5rSFgQPzFgkqkbR
n6D9sWYIYBjXqnrHfbHFirhezy6GOuuxleLbP3b1CXdXvGC9mfAC9wvACTC0
Lv.y2GDtF8PNEwUSKrjh1PIbw1vQ5qVTulTPwB0kAOMHqVzNZpYzRjH+FRwp
qqv4BMnooWEMJHY1jlO.oQMe.kGC9l4+grPAKa92+U.D1Bp9lK1Th02lPNYU
AhFNp6yZ9aAIWPXEnpMg6t4bwFp5ND1diWfDHC6MzePXNkTtyFs2EjWZIghK
Pq02iufxIEBF+lf+3Seb7W43J9X7bbwMrkKwUi+DKudMtPvG+6e4qAelQWTo
OCQoAfIi+a4BjUsY7mYL4bP0KHrw+I69f+hUfu5dzclkXKr2RJzlF0LaFYuI
PYrx87GJVvJDRzulKPBbKE2mLRyAstRPVaro64ETWcNhKWg0ERGndBIIGNgk
LJkc+JJaNhJvqKYl6yylT0ZTgHmU0DDHcHcMo+sFQIhMFeqB3vClPIQFIwug
rzDFAd1ZkUQVQjNeo2Yk3l07tYzZ1Bb+XvKw3Ece6aRB5Zc2X73hJrbw00kO
bUsaMICNEj7a4cPQ6lIisztIXOOAHVYAZmyiCG1dhYP8HpiONbjiJBRJyQqZ
i9BE3GTqHYZCii6TkHtSUBP+pDvoPkLQlRl.Nq4Xbb2pDS6WjnyDeOxXVItv
SDNBnHb5TEgmXivYSO4DVvVshheCJ9Vn1Lk2KQSv3j1icwrz9YFoIr+a6PoR
pAKvUWKEimq4YjWHdA9d4B4EN5eSl1NyOdZs4.JirSkeL0liNMwMywwQ4kTl
bw+F70wuJ4fph75h68RNKosiZzmap57Fc3lQTxjpUTl+BAJwnaQqK2FDmF40
HgHkT2L6FqndMVR2GxawB8vcYmMDDMXdvrrnH+Do.lk0P3rDUFvDqJ7I1kEF
4OcuyG+0N+3rTGn+jKL8+PSSK.+DzCln67OKVUUHwJwgmon950ywUdJtVWmy
vO6Rfvo9WBzmdcoRmmj5ZsJ55cIVq5AyNGU85gzKppKtUtbihjG5j6veVtGo
RxmZs0NX5Ej6iU79J+lma731Y8Yp5lO6zokeSm8544wydm1piOay2XPLdZqs
4GO4cZa9RvpXMYAdMGPWcWWou2Pj9S7OL73zp5Ej5IMOS1gVyah0p7fKol2G
j8y6K2cJPKzOK40KxARdsmr4IVqWxa+xZnpgGHvJqgWVVK81.+x5HUVMzZtM
H9bwZO1GahtvFLF9502l8dtM1eYqbK19NE2zcer0GS2kMVOeSNEusYmaodk6
Zw7IVod5kk576Vt8MTGKwxikMQ8TJh0jVlEjZY2p+juqJ+ZRv+2BT912fGOp
e1GCMpaY5mMc6wN6gO7T5QKXDNdqeZTs0mBLuhAqR2GYfr5xgTRwyemqp0Uy
3GZK3r5p7VHZe4FO8RfBWf4BRA5o2OztmOVf4877B6sqHkM8rgjC.0zezQCT
5YiRoINfTpWPxk.hTuX8hbAInOPxk.hXO.ThKFulcqe7H4hs6PCLqZgr4sFc
5SNxvtAFbV.F7RfiNJfgtjlmoQ93.xEkKnOxxgtjRztbNNjbIKOwGY4s4U1y
I7APSbIK2KH4jaJ8Tjs4Dzfdf93x2.Nw5HOXeAtnoj3gLa0uzLWkM8.R.2zI
ORjbTV7noTrKghdw34X3vg3naBGUVdGthalrBB4VO9NSkPLYj5qjB8WU6SLr
BeGoc9peQignJ4lIDxcRTWo60+go5c4q9kkUUTSLI1Rjeb3+CYljYjC
-----------end_max5_patcher-----------

And while we’re at it, here’s the code for the P5.js wave modeler:

var weather;
const tideURL = 'https://www.tidesandcurrents.noaa.gov/api/datagetter?date=latest&station=9437540&product=one_minute_water_level&datum=MLLW&time_zone=lst&units=english&format=json';
const waveURL = 'https://api.weather.gov/gridpoints/PQR/46,124';
const canvasHeight = 800;
const canvasWidth = 1280;
const wallHeight = 10.0;
const wallBottom = 3;
const numWaves = 4;
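// zeroPoint is the y-pixel of 0 ft ("sea level"); the canvas spans -wallBottom..wallHeight feet of wall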
const zeroPoint = canvasHeight - (canvasHeight/(wallHeight+wallBottom)*wallBottom);
var ampCtr = 10;
var newAmpCtr = 10;
var waterHeight;
var newWaterHeight;
var xSpacing;

var serial;
var portName = '/dev/tty.usbmodem14201'; 
var inData;                            
var outByte = 0;  

let waves = [];
let period = 500.0;
let fontsize = 20;
var time = "";
var name = "";

function preload(){
    askNOAA();
}

function setup() {
  waterHeight = newWaterHeight;
  ampCtr = newAmpCtr;
  createCanvas(canvasWidth,canvasHeight);
  serial = new p5.SerialPort();
  //serial.on('data', serialEvent);
  //serial.on('error', serialError);
  serial.open(portName); 
  setInterval(askNOAA, 60000);
  setInterval(serialDrip, 1000);
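  // spawn several overlapping sinusoids, each with randomized size, speed, phase offset, and shade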
  for (let i=0; i<numWaves; i++) {waves[i] = new wave();}
  textSize(fontsize);
}

function wave(){
  this.size = Math.floor(Math.random()*10+7);
  this.offset = +(Math.random()*6.3).toFixed(2);
  this.speed = +(Math.random()/100).toFixed(5)+0.001;
  this.shade = Math.floor(Math.random()*75)+180;
  this.yVals = [];
  this.theta= 0;
  this.ampVar = Math.random()*ampCtr-ampCtr/2;
  this.amp = ampCtr + this.ampVar;
}

function askNOAA(){
  loadJSON(tideURL, gotTide);
  loadJSON(waveURL, gotWave);
}


function gotTide(data){
  weather = data;
  if (weather){
    newWaterHeight = (weather.data[0].v);
    newWaterHeight = map(newWaterHeight, 0.0, wallHeight, 0, canvasHeight);
    newWaterHeight = newWaterHeight.toFixed(2);
    name = weather.metadata.name;
    time = weather.data[0].t;
  }
}

function gotWave(data){
  weather = data;
  if (weather){
    waveHeight = (weather.properties.waveHeight.values[0].value);
    waveHeight = waveHeight * 3.28;
    if  (waveHeight !=0){
      newAmpCtr = Math.floor(waveHeight*(wallHeight+wallBottom));
    }
  }
}

function serialDrip(){
  serial.write("hello");
 // s = ['A','B','C','D'];
  //i = Math.floor(Math.random()*4)
  //Serial.write(s[i]);
}

function draw() {
  increment();

  background(0);
  for (let wave of waves){
    calcWave(wave);
    drawWave(wave);
  }
  drawText();
}

function increment(){
  if (waterHeight < newWaterHeight) waterHeight += .01;
  else if (waterHeight > newWaterHeight) waterHeight -= .01;
  if (ampCtr < newAmpCtr) ampCtr++;
  else if (ampCtr > newAmpCtr) ampCtr--;   // decrement when the amplitude is above its target
  for (let wave of waves){
    wave.amp = ampCtr + wave.ampVar;
  }
}


function calcWave(wave){
  var w = canvasWidth+wave.size;
  dx = (3.14159*2 / period)*wave.size;
  wave.theta += wave.speed;
  let x = wave.theta;
   for (let i=0; i <= w/wave.size; i++){
     wave.yVals[i] = sin(x+wave.offset)*wave.amp + (zeroPoint-waterHeight);
     x+=dx;
   }
}

function drawWave(wave){
  noStroke();
  fill(wave.shade);
  for (x=0; x<wave.yVals.length; x++){
    ellipse(x*wave.size, wave.yVals[x], wave.size, wave.size);
  }
  //yValues = [];
}

function drawText(){
  textAlign(RIGHT);
  text(name + " | " + time, canvasWidth, canvasHeight-10);
  textAlign(LEFT);
  let divisions = canvasHeight/(wallHeight+wallBottom);
  let n = -wallBottom;
  for (let i=canvasHeight;i>0;i-=divisions){
    text('> '+n, 0, i);
    n++;
  }
}

And as for the Arduino that controlled the pump, it was just a single transistor switching a 12v power supply, triggered over the serial port from Max, with some basic filtering to make it seem less like the sky was peeing into the basin. Code:

const int buttonPin = 2;
const int pumpPin = 9;
const int ledPin = 13;
int newByte;
int lastByte;

void setup() {
  Serial.begin(9600);
  pinMode(buttonPin, INPUT);
  pinMode(pumpPin, OUTPUT);
  pinMode(ledPin, OUTPUT);

}

void loop() {
  if (Serial.available()){
    newByte = Serial.read();

    // basic filtering: a byte identical to the previous one is treated as no new trigger,
    // so repeated triggers from Max come out as separate drips rather than a constant pour
    if (newByte == lastByte) newByte = 0;
  
    if (digitalRead(buttonPin) == HIGH || newByte == 1){
      digitalWrite(ledPin, HIGH);
      digitalWrite(pumpPin, HIGH);
    } else{
      digitalWrite(ledPin, LOW);
      digitalWrite(pumpPin, LOW);
    }
    lastByte = newByte;
  }
}

They say you can never go home, because it’s always different. Which is of course true: people change and times change and cities undergo massive gentrification driven by an insane belief that unyielding and constant growth is an unimpeachable positive that will continue to be sustainable for the same amount of time that human civilization has existed thus far. And besides, your parents get old and die, and your friends move, or lose their minds, or slowly disappear into themselves; and maybe you realize that nothing ever made any sense, and indeed that’s the one constant through all of this.

On the other hand, for all its environmentalist overtones, isn’t that a remarkably anthropocentric way to look at the world? Decay is a constant, and you’re part of it. Standing at the upper lip of a waterfall watching the freezing white water whip past, one notices that the water is always different, and the rock always smoothing and decaying, the gorge filled with fallen trees that decay and rejoin, your feet the same stupid feet standing and supporting your towering eyes, the feet that won’t stop walking despite mounting evidence that the contrary would be a better move– what makes these things is constantly in flux, never the same, always immaterial, and yet we try and call it by the same name as long as we possibly can.

Life is, properly, a curse: inescapable, defining, visited upon you when you weren’t paying attention, destined to define your ultimate end. Comforting in its familiarity, becoming ‘you’ through lack of any other serious offers.
The water comes and goes, and comes, and goes.

Hyphen  https://courses.ideate.cmu.edu/62-362/f2019/8991-2/ Thu, 07 Nov 2019 19:25:23 +0000 https://courses.ideate.cmu.edu/62-362/f2019/?p=8991 Description – There is a tent structure covered in charcoal drawings of old organisms, cellular structures, and spirals. This tented structure is made from large branches found in the park. On the inside, I am there, with a prompt to state a personal truth. A computer program transfers this audio information into a sentence, and the volume of ones voice is reflected by a circle that changes sizes (the louder, the larger). I observe the circle and document the sentence the program returns. I write down a number of repetitions of ones statement, based on how loud one stated it (the louder, the more repetitions). I share the truth statements and how many repetitions they received at the end of the performance. 

Process Reflection – In the future I definitely want to work with Arduino, because it was hard to work with a new library when I didn’t know many people who could support me on the technology side. Because p5.js and Arduino are both very new to me, I really could have benefited from more guidance from someone who had worked with speech recognition before, but I was unsure who to reach out to who would make time to help me. I think in the future it would be better to work on something that I knew I could get support on within the class, as this makes it easier to finish the technology well and on time.

In the critique, one thing I learned is that you cannot just consider the thing you make in terms of one person interacting with it; you must also consider the context of someone watching another person interact with it, and what that experience as an onlooker is like. I think what could help with this is staging practice performances ahead of time that imitate the interaction I would like to happen in my piece, and recording them so I can gain more perspective on this.

One thing that I am very happy with is how the space turned out! I definitely planned my time better than on the last piece, spacing out the work for myself. Even though I am in studio art, and so feel comfortable making things with my hands, this is the first time I have made an installation someone could walk inside. I was really proud of how this space turned out and of the intimacy of the natural elements and the traces of drawing.

I am a bit critical of my conceptual considerations for this piece, even though I think some parts were strong. The more I think about the prompt I had, to assert a personal truth, the less powerful or interesting I think it is. I like the idea of being biased toward what is loud, in a space that celebrates silence, but I think this prompt was distracting from that exploration. I dislike the question too, because it implies that there is a kind of stable personal truth, or feels like it is pressuring someone to establish a personal truth, when in reality, I believe we are all in flux. I dislike confessional work because it feels as though the artist is attempting to create a space that is transformative, instead of actually creating something interesting that derives from exploration and the loss of ego.

 

marking the sticks with string to record their position 

drawing very old plants onto the fabric of the installation tent

reference image

brainstorming self-supporting structure.

drawing on the backside of the fabric, to create some color

creating some rubbings of plants on the fabric

inspiration for the creation of my tent

image of interior of the tent, showing the wooden beams, drawing, and plants

view looking up inside the structure

image of projected voice recording text

me, inside the space

the installation entrance

detail of marked fabric

outer view

outer view of installation

p5.js Code –

/// project title – hyphen
//  this code recognizes one's speech and returns one's sentence.
//  it also draws a circle whose size varies according to how loud one speaks.
/// code credit goes to the p5.SpeechRec examples as well as the p5.sound examples

let speechRec;
let mic;
let micLevel = 0;

function setup() {
  createCanvas(400, 400);

  let lang = navigator.language || 'en-US';
  speechRec = new p5.SpeechRec(lang, gotSpeech);   // pass the language variable, not the string 'lang'
  speechRec.continuous = true;
  speechRec.start();

  // mic information
  mic = new p5.AudioIn();
  mic.start();
}

function gotSpeech() {
  console.log("I got a result, the result is:");
  console.log(speechRec.resultString);
  console.log("----------------------------");
}

function draw() {
  background(250, 230, 230);

  if (speechRec.hasOwnProperty("resultString")) {
    micLevel = mic.getLevel();
    text(speechRec.resultString, 100, 100);
    ellipse(width / 2, 3 * height / 4, micLevel * 500, micLevel * 500);
  }
}

// extra logging callback (not attached to anything above)
function onresult() {
  console.log("this is working!!!!!");
  console.log("________________");
}
