Processing, Kinect for Windows v2
Wood Frame, Mylar
Digital Capture & Projection, Sculpture
Shown at Every Possible Utterance, MuseumLab, December 2019

2019
Individual Project
Ron Chew

A large silver prism stands in the corner, almost as tall as a human. Light seems to emanate from it as it reflects your presence, projecting something onto the facing wall. The projection, along with a set of digital footprints, prompts you to stand in front of it. The display reacts to you and begins to scan, attempting to digitize and codify your body. Then it seems to apply something to you: a story! The words appear within your silhouette; it is hard to make out the entire story, so you move around trying to read it. You leave wondering whether the story was chosen for you to embody.

Project Statement

Humans are the most complex of machines, and every effort has been made to understand, uncomplicate, and replicate our brains. Turning our behavior and personality into something that other machines can begin to mimic has been the ultimate goal.

What if we were truly successful? What if we had a Human Codifier that understood every facet of us: our true cornerstones, motivations, and beliefs? Could we use that to configure a Human Operating System that would perfectly recreate us within its properties and subroutines?

FutureTech Corporation, in collaboration with the Library of Babel, invites you to experience the personalities of Every Possible Utterance, as captured by the Human Codifier. In this experimental procedure, the codified experiences, thoughts, and occasional ramblings of other humans will be grafted onto your digital shadow. Through this fully non-invasive installation of other humans onto yourself, you may experience euphoria, epiphanies, and emotional changes.

This installation seeks to understand the abstraction of people and their personalities through stories and textual representations, by projecting these stories onto a captured silhouette of each guest. Is this how we might embody other people’s stories? How much of their original meaning do we apply to ourselves?

Concept

The prompt for this piece was ARROWS, which captured the complexity of computers: the direction and indirection of meaning, simple-looking code giving way to complex code, and many disparate pieces coming together to create a whole system. One unique aspect of this prompt was that our pieces would be shown live during a public show on the night of December 13, 2019.

I initially had two ideas, one along each theme. For the first, the "emergent complexity" of a system of pieces, I planned to work alongside another artist in the Exploded Ensemble (experimental music) class who wanted to translate the positions of yoga balls in a large plaza into music; I would capture the positions of the balls and generate units in a system that would change over the course of the night.

The other idea, about abstraction and complexity, revolved around abstracting humans into the stories and words we use to describe ourselves. I would capture the guest's silhouette along with a story they told; the story would then fill their silhouette, making them "eat their words". There would also be some comparison between their answer and those of other guests given the same prompt, e.g. showing everyone else who responded "animals" when prompted "What is your biggest fear?"

Due to tight timelines, a collaborative piece was not feasible for the show, so I decided to pursue the latter idea of presenting an abstraction of a human being as words, simplifying it so that words did not need to be captured live at the show, which would likely have had reliability issues.

Building: Software

I decided to learn a new platform for this experience: Processing. While it is simpler than many other software platforms, it was still new to me, and it has a good repository of Kinect examples. I worked my way through different representations of the depth information: first thresholding the depth to a specific range to isolate the human, then converting the image to an ASCII representation, and finally filling the silhouette with specific words.
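As a minimal standalone sketch of that first thresholding step (using the openkinect-processing library described below; the 100–1800 mm window matches the values in the full sketch at the end of this post), the isolation looks roughly like this:

import org.openkinect.processing.*;

Kinect2 kinect2;
int minDepth = 100;   // ignore anything closer than 10 cm...
int maxDepth = 1800;  // ...or farther than 1.8 m

void setup() {
  size(512, 424);  // Kinect v2 depth resolution
  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initDevice();
}

void draw() {
  int[] rawDepth = kinect2.getRawDepth();  // one millimeter reading per pixel
  PImage silhouette = createImage(kinect2.depthWidth, kinect2.depthHeight, RGB);
  silhouette.loadPixels();
  for (int i = 0; i < rawDepth.length; i++) {
    // white inside the depth window (the guest), black everywhere else
    silhouette.pixels[i] = (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth)
      ? color(255) : color(0);
  }
  silhouette.updatePixels();
  image(silhouette, 0, 0);
}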

One issue was the lack of skeletal tracking, which would have been useful for introducing more interesting interactions and isolating the body shape more cleanly. Unfortunately, the library that provides it, SimpleOpenNI, required a fully privileged Mac to build, which I did not have at the time. I settled on the openkinect-processing library, which provides just the depth information.

My first iteration of the display was simplistic, showing only single words drawn from a theme. It worked well enough, but the meaning of the piece was lost in the relatively random words. For the second iteration, I formed actual stories within the body, but the small, constantly moving text was difficult to read.

For the final version of the software, a collection of texts from Cloud Atlas, Zima Blue, and additional quotes and statements from students in the IDeATe classes on show was displayed in a static format in front of the user. The display automatically rotated between modes and texts, slowly abstracting the user from an image of themselves into the story itself. Additional text prompted users to stand in front of the display.
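The rotation itself is driven by a plain millis() timer. Condensed from the full sketch in the Code section (and wrapped here in a hypothetical updateMode() helper), the mode switcher is essentially:

int drawMode = 0;  // modes 0-3 step through the abstraction stages; 4 shows the story
int modeTimer = 0;

void updateMode() {
  // advance one abstraction stage every 5 seconds...
  if (drawMode <= 3 && millis() - modeTimer >= 5000) {
    drawMode++;
    modeTimer = millis();
  }
  // ...then hold the full story for 40 seconds before starting over
  if (drawMode == 4 && millis() - modeTimer >= 40000) {
    drawMode = 0;
    modeTimer = millis();
  }
}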

The following resources were helpful in my development:

Building: Sculpture

To hide the raised projector creating the display, I came up with a simple sculpture that would thematically conceal it. My instructors and classmates made the great suggestion of using Mylar as the reflective surface, creating a sci-fi obelisk with a slightly imposing presence.

With the guidance of my instructor Zach, I built a wooden frame for the prism onto which the Mylar was stapled. A wooden mount was also created for the projector so that it could be held vertically without obstructing its heat exhaust vents, which had caused an overheating issue during one of the critiques.

Overall, the construction went well, effectively hiding the short-throw projector and increasing the focus and immersiveness of the display. With the installation now having a physical presence, I decided to call it "The Human Codifier": a machine that codifies humans and applies stories to them.

Performance Day

To provide some guidance to guests during the display, I added a performative aspect to the installation: I played a "Test Associate" from FutureTech Corporation, conducting testing in collaboration with the Library of Babel (the thematic location of the show). The character came complete with a lab coat, security badge, and clipboard. As the Test Associate, I would provide instructions to guests who were lost and conduct one-question interviews at the end of their interaction.

I felt that the character completed the piece with a connection to the real world, rather than leaving it a fully self-guided experience. It was also great to talk with guests and learn how they felt about the installation.

The position I took up in MuseumLab for the show was a small nook in the second-floor gallery, which I found after scouting the space: cozy, and private enough for a single person to experience the piece. My only quibble was that the spot ended up a little poorly lit and perhaps a little hard to find.

Reflection

Although the actual interaction of this piece was simple, the scale of the show and achieving a meaningful experience were the most challenging aspects of this project. The additional time and iterations helped me home in on the exact experience I wanted; a good handful of guests tried to read the stories and dissect their meaning. Still, the content of the stories could have used some refinement. As for the overall show, I felt it was a great success, and it was my first time presenting to a public audience of 100+ people.

Last of all, I was most thankful to be able to learn along this journey with my classmates Alex, Scarlett, Ilona, Padra, Hugh and Eben, and I appreciate all the guidance provided by my instructors Heidi and Zach. Having a close group of friends to work with, bounce ideas off, and receive critique from was immensely helpful; I could not have done it without them.

Other mentions: moving day, when after a full day of setting up, it was time to tear it all down 🙁


Additional thanks to Christina Brown for photography, Jacquelyn Johnson for videography, and my friends Brandon and Zoe for some photographs.

Code

Here is the Processing code that I used, alongside the openkinect-processing library as mentioned earlier.

import org.openkinect.processing.*;

Kinect2 kinect2;

// Depth image
PImage targetImg, inputImg, resizeImg;
int[] rawDepth;

// Which pixels do we care about? (depth window, in mm)
int minDepth = 100;
int maxDepth = 1800; // 1.8 m

// Kinect tilt angle (unused in this sketch)
float angle;

// Random characters used by the "scanning" mode
String letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%^&*()-=_+{}[];':,./<>?`~";
// Characters ordered roughly from sparse to dense, for mapping brightness to ASCII
String asciiWeights=".`-_':,;^=+/\"|)\\<>)iv%xclrs{*}I?!][1taeo7zjLunT#JCwfy325Fp6mqSghVd4EgXPGZbYkOA&8U$@KHDBWNMR0Q";

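// Story texts: excerpts from Cloud Atlas and Zima Blue, plus statements from classmates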
String[] stories = {
  "To be is to be perceived, and so to know thyself is only possible through the eyes of the other. The nature of our immortal lives is in the consequences of our words and deeds, that go on and are pushing themselves throughout all time. Our lives are not our own. From womb to tomb, we are bound to others, past and present, and by each crime and every kindness, we birth our future."
  , "I understand now that boundaries between noise and sound are conventions. All boundaries are conventions, waiting to be transcended. One may transcend any convention if only one can first conceive of doing so. My life extends far beyond the limitations of me."
  , "I'll die? I'm going to die anyway, so what difference does it make? Sometimes, it is difficult event for me to understand what I've become. And harder still to remember what I once was. Life is precious. Infinitely so. Perhaps it takes a machine intelligence to appreciate that."
  , "I think I know what I'm passionate about. But is that really true? I don't want to be too much of one thing, in computer science or art, but yet I still want to be an expert at what I can be. They say that I should not be closing too many doors, but it takes so much energy to keep them open too."
  , "I would want the machine to capture the connection I share with others: my family, my friends, my mentors. Ideally, it would describe my flaws as well: my insecurities, my difficulty expressing emotions, my past mistakes. Perhaps if these aspects of me were permanently inscribed in the Library of Babel, a reader would come to understand what I value above all else and what imperfections I work to accept."
  , "Loneliness, the hidden greed I possess, and maybe what happens behind the many masks I have. to the point it might seem like I don’t know who I am."
  , "To know 'mono no aware' is to discern the power and essence, not just of the moon and the cherry blossoms, but of every single thing existing in this world, and to be stirred by each of them."
};

String[] titles = {
  "The Orison"
  , "Boundaries are Conventions"
  , "The Final Work"
  , "Balancing my Dreams"
  , "Inscribing Myself to the Machine"
  , "When Knowing = Not Knowing"
  , "The Book of Life"
};

String[] authors = {
  "Somni-451"
  , "Robert Frobisher"
  , "Zima Blue"
  , "Ron Chew"
  , "Alex Lin"
  , "Lightslayer"
  , "Motoori Norinaga"
};

String[] modeText = {
   "Scanning for:"
  ,"Digitizing:"
  ,"Codifying:"
  ,"Applying Story:"
  ,"Application Complete."
};

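// Status lines for the readout; the two STORY_HERE slots are overwritten at runtime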
String[] targetText = {
   "HUMANS..."
  ,"HUMAN_BODY..."
  ,"DIGITAL_BODY..."
  ,"STORY_HERE"
  ,"STORY_HERE"
};

int drawMode = 0;

// 0 - full color with depth filter
// 1 - pixelize to squares
// 2 - decolorize to single color
// 3 - small text with depth
// 4 - story text without depth

color black = color(0, 0, 0);
color white = color(255, 255, 255);
color red = color(255, 0, 0);
color magenta = color(255, 0, 255);
color cyan = color(0, 255, 255);

color BACKGROUND_COLOR = black;
color FOREGROUND_COLOR = white;

int modeTimer = 0;
int indexTimer = 0;
int index = 0;
String currentText;

boolean movingText = false; // toggled with 'm' (currently unused)

int largerWidth = 1300; // target height for the resized image; resize(0, h) keeps the aspect ratio
int yPos = -200;        // vertical offset to position the silhouette on the projection

PFont fontLarge, fontSmall; // loaded once in setup()

void setup() {
  size(3000, 1000);

  kinect2 = new Kinect2(this);
  kinect2.initVideo();
  kinect2.initDepth();
  kinect2.initIR();
  kinect2.initRegistered();
  kinect2.initDevice();

  // Blank image
  targetImg = new PImage(kinect2.depthWidth, kinect2.depthHeight);
  inputImg = new PImage(kinect2.depthWidth, kinect2.depthHeight);

  currentText = stories[0];

  // load fonts once here; calling loadFont() every frame in draw() is wasteful
  fontLarge = loadFont("HelveticaNeue-Bold-48.vlw");
  fontSmall = loadFont("HelveticaNeue-Bold-24.vlw");

  drawMode = 0;
}

void draw() {
  background(BACKGROUND_COLOR);

  // text switcher: rotate to the next story every 60 s
  if (millis() - indexTimer >= 60000) {
    index++;
    if (index >= stories.length) {
      index = 0;
    }

    currentText = stories[index];
    
    indexTimer = millis();
  }

  // mode switcher: advance one stage every 5 s, hold the story for 40 s, then reset
  if (drawMode <= 3 && millis() - modeTimer >= 5000) {
    drawMode++;

    modeTimer = millis();
  } 

  if (drawMode <= 4 && millis() - modeTimer >= 40000) {
    drawMode = 0;

    modeTimer = millis();
  }
  
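  // inject the current story's title and author into the status readout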
  if (drawMode >= 3) {
     targetText[3] = titles[index] + " : " + authors[index] + "...";
     targetText[4] = titles[index] + " : " + authors[index];
  }
  

  // START DRAWING!
  if (drawMode < 3) {
    // early modes: keep the real camera colors inside the depth window
    targetImg = kinect2.getRegisteredImage();
    rawDepth = kinect2.getRawDepth();
    targetImg.loadPixels();
    inputImg.loadPixels();
    for (int i=0; i < rawDepth.length; i++) {
      if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
        inputImg.pixels[i] = targetImg.pixels[i];
      } else {
        inputImg.pixels[i] = BACKGROUND_COLOR;
      }
    }
  } else {
    // later modes: reduce the body to a flat silhouette
    targetImg = kinect2.getDepthImage();
    rawDepth = kinect2.getRawDepth();
    inputImg.loadPixels();
    for (int i=0; i < rawDepth.length; i++) {
      if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
        inputImg.pixels[i] = FOREGROUND_COLOR;
      } else {
        inputImg.pixels[i] = BACKGROUND_COLOR;
      }
    }
  }

  inputImg.updatePixels();
  resizeImg = inputImg.get();
  resizeImg.resize(0, largerWidth);
  //image(resizeImg, kinect2.depthWidth, yPos);


  switch(drawMode) {
  case 0:
    image(resizeImg, kinect2.depthWidth, yPos);
    break;

  case 1:
    pixelateImage(resizeImg, 10, kinect2.depthWidth, yPos);
    break;

  case 2:
    ASCII_art(resizeImg, 20, 15, kinect2.depthWidth, yPos);
    break;

  case 3:
    rando_art(resizeImg, 25, 20, kinect2.depthWidth, yPos);
    break;

  case 4:
    Story_art(resizeImg, currentText, 30, 40, 17, FOREGROUND_COLOR, BACKGROUND_COLOR, kinect2.depthWidth, yPos);
    break;
  }

  // DEBUG TEXT
  fill(FOREGROUND_COLOR);
  //text("THRESHOLD: [" + minDepth + ", " + maxDepth + "]", 10, 36);

  // PRINT TEXT
  fill(FOREGROUND_COLOR);
  pushMatrix();
  translate(450, 700);
  rotate(-HALF_PI);
  textAlign(LEFT);
  
  textFont(fontLarge);
  text(modeText[drawMode], 0, 0);
  
  textFont(fontSmall);
  text(targetText[drawMode], 0, 50);
  
  popMatrix();
}

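// Story_art: tile the story's characters across the silhouette grid, coloring each
// glyph from the underlying pixel and rotating it to match the vertically mounted projector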
void Story_art(PImage input, String output, int TextSize, int xSpace, int ySpace, color target, color bg, int startX, int startY) {
  textAlign(CENTER, CENTER);
  strokeWeight(50);
  textSize(TextSize);

  int textIndex = 0;

  //transformation
  for (int x=0; x<input.width; x+=(xSpace)) {
    for (int y=input.height-1; y>0; y-=(ySpace)) {
      // get a grayscale color to determine color intensity
      color C = input.get(x, y);
      color greyscaleColor = int(0.299 * red(C) + 0.587*green(C) + 0.114*blue(C));

      // map grayscale color to intensity of target color
      color quant = int(map(greyscaleColor, 0, 255, bg, target));
      //fill(quant);
      fill(input.get(x, y));

      // draw, but rotated
      pushMatrix();
      translate(x + startX, y + startY);
      rotate(-HALF_PI);
      text(output.charAt(textIndex), 0, 0);
      popMatrix();

      //text(output.charAt(textIndex),x + startX, y + startY);

      textIndex++;
      if (textIndex == output.length()) {
        textIndex = 0;
      }
    }
  }
}

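// ASCII_art: map each cell's brightness to a character of matching visual density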
void ASCII_art(PImage input, int textSize, int spacing, int startX, int startY) {
  textAlign(CENTER, CENTER);
  strokeWeight(50);
  textSize(textSize);

  //transformation from grayscale to ASCII art
  for (int y=0; y<input.height; y+=(spacing)) {
    for (int x=0; x<input.width; x+=(spacing)) {
      //remap the grayscale color to printable character
      color C = input.get(x, y);
      color greyscaleColor = int(0.299 * red(C) + 0.587*green(C) + 0.114*blue(C));
      int quant=int(map(greyscaleColor, 0, 255, 0, asciiWeights.length()-1));
      fill(input.get(x, y));


      // draw, but rotated
      pushMatrix();
      translate(x + startX, y + startY);
      rotate(-HALF_PI);
      text(asciiWeights.charAt(quant), 0, 0);
      popMatrix();
    }
  }
}

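// rando_art: fill each cell with a random character for the 'scanning' effect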
void rando_art(PImage input, int textSize, int spacing, int startX, int startY) {
  textAlign(CENTER, CENTER);
  strokeWeight(50);
  textSize(textSize);

  //transformation from grayscale to ASCII art
  for (int y=0; y<input.height; y+=(spacing)) {
    for (int x=0; x<input.width; x+=(spacing)) {
      //just get a random character
      fill(input.get(x, y));

      // draw, but rotated
      pushMatrix();
      translate(x + startX, y + startY);
      rotate(-HALF_PI);
      text(letters.charAt(int(random(letters.length()))), 0, 0);
      popMatrix();
    }
  }
}

// Adjust the angle and the depth threshold min and max
void keyPressed() {
  if (key == 'a') {
    minDepth = constrain(minDepth+100, 0, maxDepth);
  } else if (key == 's') {
    minDepth = constrain(minDepth-100, 0, maxDepth);
  } else if (key == 'z') {
    // Kinect v2 depth tops out around 4.5 m, so cap the threshold there
    maxDepth = constrain(maxDepth+100, minDepth, 4500);
  } else if (key == 'x') {
    maxDepth = constrain(maxDepth-100, minDepth, 4500);
  } else if (key == 'm') {
    movingText = !movingText;
  } else if (key >= '0' && key <= '9') {
    // clamp to valid modes: modeText/targetText only have entries 0..4
    drawMode = constrain(Character.getNumericValue(key), 0, 4);
  }
}