62-362 Fall 2019 – Activating the Body: Physical Computing and Technology in Performance
https://courses.ideate.cmu.edu/62-362/f2019

The Human Codifier
https://courses.ideate.cmu.edu/62-362/f2019/the-human-codifier/

Processing, Kinect for Windows v2
Wood Frame, Mylar
Digital Capture & Projection, Sculpture
Shown at Every Possible Utterance, MuseumLab, December 2019

2019
Individual Project
Ron Chew

A large silver prism stands in the corner, almost as tall as a human. Light seems to emanate from it as it reflects your presence, projecting something onto the facing wall. The projection, along with a set of digital footprints, prompts you to stand in front of it. The display reacts to you and begins to scan, attempting to digitize and codify your body. It seems to be applying something to you – a story! The words appear within your silhouette; it is hard to make out the entire story, so you move around trying to read it. You leave wondering whether the story was chosen for you to embody.

Project Statement

Humans are the most complex machines, and every effort has been made to understand, uncomplicate and replicate our brains. Turning our behavior and personality into something that other machines can begin to mimic has been the ultimate goal.

What if we were truly successful? What if we had a Human Codifier that understood every facet of us: our true cornerstones, motivations and beliefs? Could we use that to configure a Human Operating System that would perfectly recreate us within its properties and subroutines?

FutureTech Corporation, in collaboration with the Library of Babel, invites you to experience the personalities of Every Possible Utterance, as captured by the Human Codifier. In this experimental procedure, the codified experiences, thoughts and occasional ramblings of other humans will be grafted onto your digital shadow. Through this fully non-invasive installation of other humans onto yourself, you may experience euphoria, epiphanies and emotional changes.

This installation seeks to understand the abstraction of people and their personalities through stories and textual representations, by projecting these stories onto a captured silhouette of each guest. Is this how we might embody other people’s stories? How much of their original meaning do we apply to ourselves?

Concept

The prompt for this piece was ARROWS, which captured the complexity of computers: the direction and indirection of meaning, simple-looking code concealing complex behavior, and many disparate pieces coming together to create a whole system. One thing unique about this prompt was that our piece would be shown live during a public show on the night of Dec 13, 2019.

I initially had two ideas, one along each theme. For the first, the “emergent complexity” of a system of pieces, I planned to work alongside another artist in the Exploded Ensemble (experimental music) class, who wanted to translate the positions of yoga balls in a large plaza into music. I would capture the positions of the balls and generate units in a system that would change over the course of the night.

The other idea, about abstraction and complexity, revolved around abstracting humans into the stories and words we use to describe ourselves. I would capture the silhouette of a guest along with a story they would tell – the story would then fill their silhouette, making them “eat their words”. There would also be some comparison between what they said and the responses of others given the same prompt, e.g. showing everyone else who answered “animals” when asked “What is your biggest fear?”

Due to tight timelines, it was not feasible to work on a collaborative piece for the show, so I decided to pursue the latter idea of presenting an abstraction of a human being as words – simplified so that words did not need to be captured live at the show, which would likely have caused reliability issues.

Building: Software

I decided to learn a new platform in building this experience: Processing. While it is simpler than many other software platforms, it was still new to me, and it has a good repository of Kinect examples. I worked my way through different representations of the depth information: first thresholding the depth to a specific range to isolate the human, then converting the image to an ASCII representation, and finally filling the silhouette with specific words.
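The depth-thresholding step is the core of the pipeline. Condensed from the full listing at the end of this post, it looks like this:

// Keep only pixels whose raw depth (in mm) falls inside a human-sized
// band; everything else becomes background, isolating the silhouette.
rawDepth = kinect2.getRawDepth();
for (int i = 0; i < rawDepth.length; i++) {
  if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
    inputImg.pixels[i] = targetImg.pixels[i]; // inside the band: keep the person
  } else {
    inputImg.pixels[i] = BACKGROUND_COLOR;    // outside: blank it out
  }
}
inputImg.updatePixels();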

One issue was the lack of skeletal tracking, which would have been useful for more interesting interactions and better isolation of the body shape. Unfortunately, the library that provides it, SimpleOpenNI, required a fully privileged Mac to build, which I did not have at the time. I instead used the openkinect-processing library, which provides just the depth information.

My first iteration of the display was simplistic, showing only single words according to a certain theme. It worked well enough, but the piece’s meaning was still not found in the relatively random words. For the second iteration, I formed actual stories within the body, but the small, constantly-moving text was difficult to read.

For the final version of the software, a collection of texts from Cloud Atlas, Zima Blue, and additional quotes and statements from students in the IDeATe classes on show was displayed in a static format in front of the user. The display automatically rotated between modes and texts (see the condensed timing logic below), slowly abstracting the user from an image of themselves into the actual story. Additional text prompted users to stand in front of the display.
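The rotation is driven by two timers, condensed here from the listing below: the display mode advances every five seconds, the final story mode holds for forty seconds before the cycle restarts, and a separate one-minute timer advances to the next story.

// Advance one display mode every 5 s (modes 0-3); after 40 s in the
// final story mode (mode 4), start the cycle over.
if (drawMode <= 3 && millis() - modeTimer >= 5000) {
  drawMode++;
  modeTimer = millis();
}
if (drawMode <= 4 && millis() - modeTimer >= 40000) {
  drawMode = 0;
  modeTimer = millis();
}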

The following resources were helpful in my development:

Building: Sculpture

In order to hide the raised projector creating the display, I came up with a simple sculpture that would thematically conceal it. My instructors and classmates made a great suggestion to use Mylar as a reflective surface, creating a sci-fi-like obelisk with a slightly imposing presence.

With the guidance of my instructor Zach, I built a wooden frame for the prism, onto which the Mylar was stapled and mounted. A wooden mount was also created for the projector so that it could be held vertically without obstructing its heat exhaust vents, which had caused an overheating issue during one of the critiques.

Overall, this construction went well: it effectively hid the short-throw projector and increased the focus and immersiveness of the display. With a physical presence to the installation, I decided to call it “The Human Codifier”, a machine that codified humans and applied stories to them.

Performance Day

To provide some guidance to guests, I added a performative aspect to the installation: I played a “Test Associate” from FutureTech Corporation, conducting testing in collaboration with the Library of Babel (the thematic location of the show). The character was complete with a lab coat, security badge and clipboard. As the Test Associate, I would provide instructions if guests were lost, and conducted one-question interviews at the end of their interaction.

I felt that the character provided an opportunity to complete the piece with a connection to the real world, instead of leaving it a fully self-guided experience. It was also great to talk to guests and find out how they felt about the installation.

The position I took up in the MuseumLab was a small nook in the gallery space on the 2nd floor, which I found after scouting the space – cozy and private enough for a single person to experience. The one small thing I would change is the spot itself: it ended up a little poorly lit and perhaps a little hard to get to in the space.

Reflection

Although the actual interaction of this piece was simple, the overall scale of the show and achieving a meaningful experience were the most challenging aspects of this project. The additional time and iterations did help me hone in on the exact experience I wanted – a good handful of guests tried to read the stories and dissect their meaning – though the content of the stories could have been further refined. As for the overall show, I felt it was a great success, and it was a first for me to present to a public audience of 100+ people.

Last of all, I am most thankful to have learned along this journey with my classmates Alex, Scarlett, Ilona, Padra, Hugh and Eben, and I appreciate all the guidance provided by my instructors Heidi and Zach. Having a close group of friends to work with, bounce ideas off and get critique from was immensely helpful – I could not have done it without them.

Other mentions: moving day, after a day of setting up, tearing down 🙁

 

Additional thanks to Christina Brown for photography, Jacquelyn Johnson for videography, and my friends Brandon and Zoe for some photographs.

Code

Here is the Processing code that I used, alongside the openkinect-processing library mentioned earlier.

import org.openkinect.processing.*;

Kinect2 kinect2;

// Depth image
PImage targetImg, inputImg, resizeImg;
int[] rawDepth;

// Which pixels do we care about?
int minDepth =  100;
int maxDepth =  1800; //1.8m

// What is the kinect's angle
float angle;

// Density characters for ascii art
String letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%^&*()-=_+{}[];':,./<>?`~";
String asciiWeights=".`-_':,;^=+/\"|)\\<>)iv%xclrs{*}I?!][1taeo7zjLunT#JCwfy325Fp6mqSghVd4EgXPGZbYkOA&8U$@KHDBWNMR0Q";

String[] stories = {
  "To be is to be perceived, and so to know thyself is only possible through the eyes of the other. The nature of our immortal lives is in the consequences of our words and deeds, that go on and are pushing themselves throughout all time. Our lives are not our own. From womb to tomb, we are bound to others, past and present, and by each crime and every kindness, we birth our future."
  , "I understand now that boundaries between noise and sound are conventions. All boundaries are conventions, waiting to be transcended. One may transcend any convention if only one can first conceive of doing so. My life extends far beyond the limitations of me."
  , "I'll die? I'm going to die anyway, so what difference does it make? Sometimes, it is difficult even for me to understand what I've become. And harder still to remember what I once was. Life is precious. Infinitely so. Perhaps it takes a machine intelligence to appreciate that."
  , "I think I know what I'm passionate about. But is that really true? I don't want to be too much of one thing, in computer science or art, but yet I still want to be an expert at what I can be. They say that I should not be closing too many doors, but it takes so much energy to keep them open too."
  , "I would want the machine to capture the connection I share with others: my family, my friends, my mentors. Ideally, it would describe my flaws as well: my insecurities, my difficulty expressing emotions, my past mistakes. Perhaps if these aspects of me were permanently inscribed in the Library of Babel, a reader would come to understand what I value above all else and what imperfections I work to accept."
  , "Loneliness, the hidden greed I possess, and maybe what happens behind the many masks I have, to the point it might seem like I don’t know who I am."
  , "To know 'mono no aware' is to discern the power and essence, not just of the moon and the cherry blossoms, but of every single thing existing in this world, and to be stirred by each of them."
};

String[] titles = {
  "The Orison"
  , "Boundaries are Conventions"
  , "The Final Work"
  , "Balancing my Dreams"
  , "Inscribing Myself to the Machine"
  , "When Knowing = Not Knowing"
  , "The Book of Life"
};

String[] authors = {
  "Somni-451"
  , "Robert Frobisher"
  , "Zima Blue"
  , "Ron Chew"
  , "Alex Lin"
  , "Lightslayer"
  , "Motoori Norinaga"
};

String[] modeText = {
   "Scanning for:"
  ,"Digitizing:"
  ,"Codifying:"
  ,"Applying Story:"
  ,"Application Complete."
};

String[] targetText = {
   "HUMANS..."
  ,"HUMAN_BODY..."
  ,"DIGITAL_BODY..."
  ,"STORY_HERE"
  ,"STORY_HERE"
};

int drawMode = 0;

// 0 - full color with depth filter
// 1 - pixelize to squares
// 2 - decolorize to single color
// 3 - small text with depth
// 4 - story text without depth

color black = color(0, 0, 0);
color white = color(255, 255, 255);
color red = color(255, 0, 0);
color magenta = color(255, 0, 255);
color cyan = color(0, 255, 255);

color BACKGROUND_COLOR = black;
color FOREGROUND_COLOR = white;

int modeTimer = 0;
int indexTimer = 0;
int index = 0;
String currentText;

boolean movingText = false;

int largerWidth = 1300;
int yPos = -200;

PFont fontLarge, fontSmall;

void setup() {
  size(3000, 1000);

  kinect2 = new Kinect2(this);
  kinect2.initVideo();
  kinect2.initDepth();
  kinect2.initIR();
  kinect2.initRegistered();
  kinect2.initDevice();

  // Blank image
  targetImg = new PImage(kinect2.depthWidth, kinect2.depthHeight);
  inputImg = new PImage(kinect2.depthWidth, kinect2.depthHeight);

  currentText = stories[0];

  // Load the fonts once here instead of reloading them on every frame in draw()
  fontLarge = loadFont("HelveticaNeue-Bold-48.vlw");
  fontSmall = loadFont("HelveticaNeue-Bold-24.vlw");

  drawMode = 0;
}

void draw() {
  background(BACKGROUND_COLOR);

  // text switcher
  if (millis() - indexTimer >= 60000) {
    index++;
    if (index >= stories.length) {
      index = 0;
    }

    currentText = stories[index]; 
    
    indexTimer = millis();
  }

  // mode switcher
  if (drawMode <= 3 && millis() - modeTimer >= 5000) {
    drawMode++;

    modeTimer = millis();
  } 

  if (drawMode <= 4 && millis() - modeTimer >= 40000) {
    drawMode = 0;

    modeTimer = millis();
  }
  
  if (drawMode >= 3) {
     targetText[3] = titles[index] + " : " + authors[index] + "...";
     targetText[4] = titles[index] + " : " + authors[index];
  }
  

  // START DRAWING!
  if (drawMode < 3) {
    targetImg = kinect2.getRegisteredImage();
    rawDepth = kinect2.getRawDepth();
    for (int i=0; i < rawDepth.length; i++) {
      if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
        inputImg.pixels[i] = targetImg.pixels[i];
      } else {
        inputImg.pixels[i] = BACKGROUND_COLOR;
      }
    }
  } else {
    targetImg = kinect2.getDepthImage();
    rawDepth = kinect2.getRawDepth();
    for (int i=0; i < rawDepth.length; i++) {
      if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
        inputImg.pixels[i] = FOREGROUND_COLOR;
      } else {
        inputImg.pixels[i] = BACKGROUND_COLOR;
      }
    }
  }

  inputImg.updatePixels();
  resizeImg = inputImg.get();
  resizeImg.resize(0, largerWidth);
  //image(resizeImg, kinect2.depthWidth, yPos);


  switch(drawMode) {
  case 0:
    image(resizeImg, kinect2.depthWidth, yPos);
    break;

  case 1:
    pixelateImage(resizeImg, 10, kinect2.depthWidth, yPos);
    break;

  case 2:
    ASCII_art(resizeImg, 20, 15, kinect2.depthWidth, yPos);
    break;

  case 3:
    rando_art(resizeImg, 25, 20, kinect2.depthWidth, yPos);
    break;

  case 4:
    Story_art(resizeImg, currentText, 30, 40, 17, FOREGROUND_COLOR, BACKGROUND_COLOR, kinect2.depthWidth, yPos);
    break;
  }

  // DEBUG TEXT
  fill(FOREGROUND_COLOR);
  //text("THRESHOLD: [" + minDepth + ", " + maxDepth + "]", 10, 36);

  // PRINT TEXT
  fill(FOREGROUND_COLOR);
  pushMatrix();
  translate(450, 700);
  rotate(-HALF_PI);
  textAlign(LEFT);
  
  textFont(fontLarge);
  text(modeText[drawMode], 0, 0);

  textFont(fontSmall);
  text(targetText[drawMode], 0, 50);
  
  popMatrix();
}

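// Fill the silhouette with the story text, one character per grid cell.
// Each character is tinted from the thresholded image, so the story only
// shows where the guest's body is; everything is drawn rotated a quarter
// turn because the projector was mounted vertically.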
void Story_art(PImage input, String output, int TextSize, int xSpace, int ySpace, color target, color bg, int startX, int startY) {
  textAlign(CENTER, CENTER);
  strokeWeight(50);
  textSize(TextSize);

  int textIndex = 0;

  //transformation
  for (int x=0; x<input.width; x+=(xSpace)) {
    for (int y=input.height-1; y>0; y-=(ySpace)) {
      // get a grayscale color to determine color intensity
      color C = input.get(x, y);
      color greyscaleColor = int(0.299 * red(C) + 0.587*green(C) + 0.114*blue(C));

      // map grayscale color to intensity of target color
      color quant = int(map(greyscaleColor, 0, 255, bg, target));
      //fill(quant);
      fill(input.get(x, y));

      // draw, but rotated
      pushMatrix();
      translate(x + startX, y + startY);
      rotate(-HALF_PI);
      text(output.charAt(textIndex), 0, 0);
      popMatrix();

      //text(output.charAt(textIndex),x + startX, y + startY);

      textIndex++;
      if (textIndex == output.length()) {
        textIndex = 0;
      }
    }
  }
}

void ASCII_art(PImage input, int textSize, int spacing, int startX, int startY) {
  textAlign(CENTER, CENTER);
  strokeWeight(50);
  textSize(textSize);

  //transformation from grayscale to ASCII art
  for (int y=0; y<input.height; y+=(spacing)) {
    for (int x=0; x<input.width; x+=(spacing)) {
      //remap the grayscale color to printable character
      color C = input.get(x, y);
      color greyscaleColor = int(0.299 * red(C) + 0.587*green(C) + 0.114*blue(C));
      int quant=int(map(greyscaleColor, 0, 255, 0, asciiWeights.length()-1));
      fill(input.get(x, y));


      // draw, but rotated
      pushMatrix();
      translate(x + startX, y + startY);
      rotate(-HALF_PI);
      text(asciiWeights.charAt(quant), 0, 0);
      popMatrix();
    }
  }
}

void rando_art(PImage input, int textSize, int spacing, int startX, int startY) {
  textAlign(CENTER, CENTER);
  strokeWeight(50);
  textSize(textSize);

  //transformation from grayscale to ASCII art
  for (int y=0; y<input.height; y+=(spacing)) {
    for (int x=0; x<input.width; x+=(spacing)) {
      //just get a random character
      fill(input.get(x, y));

      // draw, but rotated
      pushMatrix();
      translate(x + startX, y + startY);
      rotate(-HALF_PI);
      text(letters.charAt(int(random(letters.length()))), 0, 0);
      popMatrix();
    }
  }
}

// Adjust the angle and the depth threshold min and max
void keyPressed() {
  if (key == 'a') {
    minDepth = constrain(minDepth+100, 0, maxDepth);
  } else if (key == 's') {
    minDepth = constrain(minDepth-100, 0, maxDepth);
  } else if (key == 'z') {
    maxDepth = constrain(maxDepth+100, minDepth, 4500); // Kinect v2 depth tops out around 4.5 m
  } else if (key == 'x') {
    maxDepth = constrain(maxDepth-100, minDepth, 4500);
  } else if (key == 'm') {
    movingText = !movingText;
  } else if (key >= '0' && key <= '9') {
    drawMode = constrain(Character.getNumericValue(key), 0, 4); // only modes 0-4 exist
  }
}

 

Staircase Encounters
https://courses.ideate.cmu.edu/62-362/f2019/staircase-encounters/

Unity 3D, OpenPose, Kinect for Windows v2, Speakers, Acrylic Plastic
Ambient Auditory Experience Emanating Sculpture
Installed at the Basement Stairwell of Hunt Library, Carnegie Mellon University

2019
Individual Project
Rong Kang Chew

As Hunt Library guests enter the main stairwell, they are greeted by a quiet hum. Something’s changed, but they can’t quite see what. The hum changes as they walk along the staircase – they are amused but still curious. The sound becomes lower in pitch as they descend toward the basement. Someone else enters the stairwell and notices the noise too – there is brief eye contact on the staircase – did they hear that too?

As they reach the bottom and approach the door, they hear another sound – a chime as if they have reached their destination. Some of those not in a hurry notice a sleek machine, draped in smooth black plastic, next to the doorway. It is watching them, and seems to be the source of the sounds. Some try to experiment with the machine. Either way, the guests still leave the staircase, only to return sometime soon.


Process

Staircases are usually shared, cramped spaces that are sometimes uncomfortable – we have to squeeze past people, make uneasy eye contact, or sheepishly ask for help with the door. How can we become aware of our own state of mind, and that of other people, as we move through staircases – and could that awareness make being on a staircase a better experience?

After learning about the underlying concept for the FLOW theme – transduction, and the changes between various forms and states – I knew I wanted to apply that concept to an installation that occupied the space of a room. In this case, the room was a stairwell leading to the basement level of Hunt Library at CMU. The space was close enough to the rest of the installations of our show, WEB OF WUBS.

I had to seek permission from CMU Libraries to site my installation in the stairwell, and therefore came up with a proposal detailing my plans and installation date. Due to the nature and siting of the installation, safety and privacy were important emphasis points in the proposal. I would like to thank my instructor Heidi for helping me get the proposal to the right people.

Placement in the stairwell was tricky, as I had to ensure that the cabling and positions of objects were safe and would not cause any tripping. I iterated through various placements of webcams, computers and speakers to find out what would work well for the experience. Eventually, I settled on consolidating the entire installation into a single unit instead of trying to conceal its elements. Some of my earlier onsite testing showed that people didn’t really react to the sound when there was no visual element. This, along with Heidi’s advice, encouraged me to put the installation “out there” so that people could see it, interact with it and perhaps play with it.

The final enclosure for the sculpture was laser-cut out of 1/8″ black acrylic and glued together, with holes included for the computer speakers. Unfortunately, I ran out of material and decided to leave the wiring exposed on the sides. The glue used does allow disassembly, and an opportunity to improve this in the future.

As for the software side of the implementation, I used the OpenPose library from the CMU Perceptual Computing Lab, which let me figure out where humans were in a scene. However, it only detects people in 2D, so I had to limit myself to working with the height and width of where people were in the frame. I used the Unity 3D game engine to process this information, taking the average horizontal and vertical positions of people’s heads to adjust the pitch of sounds in two “zones” of the staircase (see the end of this post for the Zone code, and a sketch of the bridge just below).

X,Y position in zone <==> pitch of sounds for that zone 
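The bridge between OpenPose and Unity is not part of the code at the end of this post, which begins at the Zone level. Purely as a hypothetical sketch – the class name, UDP transport and “x,y” message format are all invented here – feeding head positions into the scene could look something like this:

using System.Net;
using System.Net.Sockets;
using System.Text;
using UnityEngine;

// Hypothetical receiver: an external OpenPose process sends each detected
// head as an "x,y" pixel coordinate over UDP, and this script moves the
// tracked-head object that the Zone colliders react to.
public class PoseReceiver : MonoBehaviour
{
    public RectTransform head;   // the object the Zones track
    public int port = 9000;      // assumed port for the OpenPose sender

    private UdpClient client;

    void Start()
    {
        client = new UdpClient(port);
    }

    void Update()
    {
        // Drain all packets that arrived since the last frame
        while (client.Available > 0)
        {
            IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
            string msg = Encoding.ASCII.GetString(client.Receive(ref remote));
            string[] parts = msg.Split(',');
            head.position = new Vector3(float.Parse(parts[0]), float.Parse(parts[1]), 0);
        }
    }

    void OnDestroy()
    {
        client.Close();
    }
}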

The sounds used by the experience included those from the Listen to Wikipedia experience by Hatnote and some verbal phrases spoken by Google Cloud Text-to-Speech.

Reflection & Improvements

A lot of the learning from this project came from testing on site, and even so, I think I did not arrive at where I actually wanted to be for the installation of Staircase Encounters.

Hidden, Surreal |————————————X——| Explicit, Playful

The key issue was something I mentioned earlier: how noticeable and how interactive did I want my installation to be? In my first tests, it seemed like no one was paying attention to the sounds. But by the end, I think I had perhaps made the installation too interactive. I received a lot of feedback from guests who were expecting the sounds to react more to their movements, especially since they could see all their limbs being tracked.

Given more time, I could have added more parameters to how the music reacts to users, e.g. speed of movement, “excitedness” of limbs, and encounters with other guests (one such parameter is sketched below). As it stood, the visual element led to engagement that was not followed up on, which in itself was a little disappointing – like a broken toy.
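As a sketch of what such a parameter might look like – not something the installation implemented – the Zone class below already keeps a queue of recent positions, and their spread could drive the sound:

// Hypothetical "excitedness" measure inside Zone.OnTriggerStay2D: the
// standard deviation of the recently queued positions, mapped to volume.
// The 10f divisor is an invented tuning constant.
float mean = positions.Average();
float variance = positions.Select(p => (p - mean) * (p - mean)).Average();
float excitedness = Mathf.Sqrt(variance);
sound.volume = Mathf.Clamp01(excitedness / 10f);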

My key learning from Staircase Encounters is to test and think clearly about the experience – it is easy to fixate on the building, but not easy to be objective, unemotional and measured about the experience and the end product, especially when the building is rushed.

Code

Here is some code representing the pink and blue “zones”, which track people as they enter and move through them and update the sounds accordingly.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.Linq;

public class Zone : MonoBehaviour
{
    public enum Axis
    {
        X, Y
    }

    public List<GameObject> soundsToSpawn;
    public float min = -60;
    public float max = 30;
    public float minPitch = 1.5f;
    public float maxPitch = -0.5f;
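    // Note: Unity allows negative pitch values, which play the clip in reverse.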
    public Axis axisToUse = Axis.X;

    private Queue<float> positions;
    private int queueSize = 20;

    private AudioSource sound;
    private float timeStarted = 0;
    private bool played;

    private int soundMode = 0;

    // Start is called before the first frame update
    void Start()
    {
        positions = new Queue<float>();
    }

    // Update is called once per frame
    void Update()
    {
        if (sound != null && played && !sound.isPlaying && Time.time - timeStarted > 1)
        {
            Destroy(sound.gameObject);
            played = false;
            sound = null;
        }

        if (Input.GetKeyDown("0"))
        {
            soundMode = 0;
        }

        if (Input.GetKeyDown("1"))
        {
            soundMode = 1;
        }
    }

    void OnTriggerEnter2D(Collider2D col)
    {
        //Debug.Log(gameObject.name + " entered: " + col.gameObject.name + " : " + Time.time);
        
        if (sound == null)
        {
            timeStarted = Time.time;
            sound = Instantiate(this.soundsToSpawn[soundMode]).GetComponent<AudioSource>();
            sound.Play();
            played = true;
        }
    }

    void OnTriggerStay2D(Collider2D col)
    {
        if (sound != null)
        {
            RectTransform rTransform = col.gameObject.GetComponent<RectTransform>();
            float point = 0;

            switch (this.axisToUse)
            {
                case Zone.Axis.X:
                    point = rTransform.position.x;
                    break;
                case Zone.Axis.Y:
                    point = rTransform.position.y;
                    break;
                default:
                    break;
            }

            while (positions.Count >= queueSize)
            {
                positions.Dequeue();
            }

            positions.Enqueue(point);

            float avgPoint = positions.Average();

            //Debug.Log("Avg value of " + this.gameObject.name + " to " + avgPoint + " : " + Time.time);
            // Map the smoothed average (not the raw point) so the pitch doesn't jitter
            float targetPitch = map(avgPoint, this.min, this.max, this.minPitch, this.maxPitch);
            sound.pitch = targetPitch;
        }
    }

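    // Linearly remap x from [in_min, in_max] to [out_min, out_max],
    // mirroring the map() helper from Processing/Arduino.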
    static float map(float x, float in_min, float in_max, float out_min, float out_max)
    {
        return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
    }
}

 

Life Requires your Input
https://courses.ideate.cmu.edu/62-362/f2019/life-requires-your-input/

Pine, Tile Board, misc. Electronics
6′ Physical Installation and Sculpture
Introduced with a 2-minute Spoken Word Performance

2019
Individual Project
Rong Kang Chew

 

A human-sized lightswitch stands in front of you. This is life, and Life Requires your Input. What happens when you lift up the lightswitch? It’s heavy and needs both hands – but nothing changes. You notice two footswitches. It’s awkward to try to step on both of them and lift the lightswitch at the same time. Finally – something happens a few feet away, behind the switch! But wait – you can’t see it; you’re frozen in position. A small light turns on, and you hear a faint buzz, but it’s barely there and you can’t witness it for yourself. Is all this effort worth it? Do you need to do everything at once? Does life require all your input at the same time?

You’re tired – maybe you want to take a break.

The Concept

This interactive sculpture explores a physical representation of the commitments, responsibilities and tasks in our lives. There is a semblance of a logic gate here, which was a requirement for this piece (its logic is sketched below). The logic lives in our desire to sometimes do everything – essentially to do this AND that AND this AND that. This desire can sometimes be overwhelming or paralyzing. What’s worse is when you can’t see the fruits of your labor – it makes you consider and question all of your efforts.
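Electrically, the gate is a three-input AND: the giant lightswitch and both footswitches must all be engaged at once before the output box fires. The build itself used miscellaneous electronics, but purely as an illustrative sketch – with every pin number invented – the logic reads like this in Arduino-style C++:

// Hypothetical sketch of the installation's AND logic; the real build
// may have simply wired the three switches in series, which implements
// the same gate with no code at all.
const int LIGHTSWITCH_PIN = 2;   // the giant wall-switch lever
const int FOOTSWITCH_L_PIN = 3;  // left footswitch
const int FOOTSWITCH_R_PIN = 4;  // right footswitch
const int LIGHT_PIN = 8;         // small output light
const int BUZZER_PIN = 9;        // faint buzz

void setup() {
  pinMode(LIGHTSWITCH_PIN, INPUT_PULLUP);
  pinMode(FOOTSWITCH_L_PIN, INPUT_PULLUP);
  pinMode(FOOTSWITCH_R_PIN, INPUT_PULLUP);
  pinMode(LIGHT_PIN, OUTPUT);
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  // With pullups, a switch reads LOW when engaged
  bool allEngaged = digitalRead(LIGHTSWITCH_PIN) == LOW
                 && digitalRead(FOOTSWITCH_L_PIN) == LOW
                 && digitalRead(FOOTSWITCH_R_PIN) == LOW;
  digitalWrite(LIGHT_PIN, allEngaged ? HIGH : LOW);
  digitalWrite(BUZZER_PIN, allEngaged ? HIGH : LOW);
}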

The Process

While brainstorming for this piece, I began to think about the relationships I have in life and how I have to manage all of them. I thought about how I sometimes spend a lot of time on certain projects, only for them to disappear after some time. How could I represent this in a physical sculpture?

(There were certain ideas I had a stronger desire to explore, but they were more difficult to implement, requiring more complex electronics. Learning to let go of certain ideas in the moment was difficult, but a necessary process to have an installation ready in time.)

I decided to go with something more literal: switches that represent life. The original concept was to juxtapose the sizes of inputs and outputs – an extremely large input would produce an extremely small output, and vice versa. These two transformations would be placed next to each other to show how we usually don’t get what we expect. I made a maquette to visualize what this juxtaposition might look like.

As the deadline approached, I decided it would be necessary to cut the sculpture down to something achievable. On the advice of my instructor Heidi, I focused on the large input with the small output. This required me to build a large physical switch from scratch out of wood and smaller electronics. I added the footswitches to freeze the guest and prevent them from looking at the output.

Under the guidance of my other instructor Zach, I was able to draw up plans for the support and front panel of this physical switch. We bought a white front panel to emulate a wall switch, which was quite an adventure to haul over from Home Depot.

The woodworking process was the most challenging part, as I had never worked with wood and power tools beyond drilling the occasional hole. I had to ask for a lot of help to get started, but fortunately was able to learn quickly. Within the timeframe, I was unable to produce a precise CAD model of the work, so I had to improvise and make changes as I discovered issues with available materials and with the piece itself, like the height of the hole for the switch. Learning to be flexible and make quick, informed decisions proved helpful.

The final piece came together after several long hours of hard work, and I was happy that it was functionally complete. Some sacrifices were made: the light switch knob remained black instead of getting a white housing, the front panel had to be sawed in half, and footswitches were used instead of arcade buttons. But these sacrifices ensured the work was finished on time and looked complete for the showing. The final switch was installed together with its output box about 6 feet away, the wires hidden with gaffer tape.

As part of the showing, I decided the night before that I had to introduce the piece with some words. The piece was quite literal – visually, it looked just like a switch and would likely be read as-is, without additional meaning. To convey the emotional aspect of the piece, I gave a 2-minute spoken word performance, talking through my emotions and thoughts as I approached the switch, “wanting to do everything”. I was glad to have given this performance, as I believe it helped everyone understand the piece and connect with the underlying concepts better.

If I had more time, I would have further finished the appearance of the switch. Some pieces are missing, like the housing of the switch, which instead used a piece of fabric as a screen – that made the switch a little less desirable looking. The output box was also missing a physical motion output I had intended to produce to complement the light and sound outputs. Then again, the missing output perhaps adds to the “disappointment” message about the outputs of life.

Another aspect of the piece I thought a lot about was that during the performance, there were moments of unexpected laughter as certain words or statements were said. I felt this broke the atmosphere I was trying to build, but perhaps this is one way an audience processes those thoughts. If I were to perform this piece again, I might employ music to create a more controlled atmosphere.

Overall, as a first step into the territory between performance art, sculpture and physical computing, I consider this piece a successful exploration that conveyed a specific message about the inputs and expectations we have in life. The piece perhaps portrayed some of my current frustrations with life, but creating it helped me think through them, and made it OK to get some smaller outputs out of life.

 
