Project no. 3 – 62-362 Fall 2019
Activating the Body: Physical Computing and Technology in Performance
https://courses.ideate.cmu.edu/62-362/f2019

The Human Codifier
https://courses.ideate.cmu.edu/62-362/f2019/the-human-codifier/ (17 Dec 2019)

Processing, Kinect for Windows v2
Wood Frame, Mylar
Digital Capture & Projection, Sculpture
Shown at Every Possible Utterance, MuseumLab, December 2019

2019
Individual Project
Ron Chew

A large silver prism stands in the corner, almost as tall as a human. Light seems to emanate from it as it reflects your presence, projecting something onto the facing wall. The projection, along with a set of digital footprints, prompts you to stand in front of it. The display reacts to you and starts to scan you, attempting to digitize and codify your body. It seems to be applying something to your body: a story! The words appear within your silhouette; though it is hard to make out the entire story, you move around trying to read it. You leave wondering if the story was chosen for you to embody.

Project Statement

Humans are the most complex machines, and every effort has been made to understand, uncomplicate and replicate our brains. Turning our behavior and personality into something that other machines can begin to mimic has been the ultimate goal.

What if we were truly successful? What if we had a Human Codifier that understood every facet of us: our true cornerstones, motivations, and beliefs? Could we use that to configure a Human Operating System that would perfectly recreate us within its properties and subroutines?

FutureTech Corporation, in collaboration with the Library of Babel, invites you to experience the personalities of Every Possible Utterance, as captured by the Human Codifier. In this experimental procedure, the codified experiences, thoughts, and occasional ramblings of other humans will be grafted onto your digital shadow. Through this fully non-invasive installation of other humans onto yourself, you may experience euphoria, epiphanies, and emotional changes.

This installation seeks to understand the abstraction of people and their personalities through stories and textual representations, projecting these stories onto a captured silhouette of each guest. Is this how we might embody other people’s stories? How much of their original meaning do we apply to ourselves?

Concept

The prompt for this piece was ARROWS, which captured the complexity of computers: the direction and indirection of meaning, from simple-looking code to complex code, and many disparate pieces coming together to create a whole system. One thing unique about this prompt was that our piece would be shown live during a public show on the night of December 13, 2019.

I initially had two ideas, one along each theme. For the first, the “emergent complexity” of a system of pieces, I would work alongside another artist in the Exploded Ensemble (experimental music) class who wanted to translate the positions of yoga balls in a large plaza into music. I would try to capture the positions of the balls and generate units in a system that would change over the course of the night.

The other idea, about abstraction and complexity, revolved around abstracting humans into the stories and words we use to describe ourselves. I would capture the silhouette of each guest along with a story they told; the story would then fill their silhouette, making them “eat their words”. There would also be some comparison between their response and those of others given the same prompt, e.g. showing who else responded “animals” when asked “What is your biggest fear?”

Due to tight timelines, it was not feasible to work on a collaborative piece for the show, so I decided to pursue the latter idea of presenting an abstraction of a human being as words, simplified so that there was no need to capture words live at the show, which would likely have had reliability issues.

Building: Software

I decided to learn a new platform in building this experience: Processing. While it is simpler than many other software platforms, it was still new to me, and it has a good repository of Kinect examples. I worked my way through different representations of the depth information: first thresholding the depth to a specific range to isolate the human, then converting the image to an ASCII representation, then filling the space with specific words.
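
To make that first thresholding step concrete, here is a minimal sketch using the same openkinect-processing calls as the full program at the end of this post (the depth calls and thresholds mirror the final code):

import org.openkinect.processing.*;

Kinect2 kinect2;

void setup() {
  size(512, 424); // matches the Kinect v2 depth resolution
  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initDevice();
}

void draw() {
  background(0);
  int[] depth = kinect2.getRawDepth();
  loadPixels();
  int n = min(pixels.length, depth.length);
  for (int i = 0; i < n; i++) {
    // keep only pixels between 0.1m and 1.8m to isolate the visitor
    pixels[i] = (depth[i] >= 100 && depth[i] <= 1800) ? color(255) : color(0);
  }
  updatePixels();
}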

One issue was the lack of skeletal tracking, which would have been useful for introducing more interesting interactions and better isolating the body shape. Unfortunately, the library that provides it, SimpleOpenNI, required a fully privileged Mac to build, which I did not have at the time. I decided to use the openkinect-processing library, which provides just the depth information.

My first iteration of the display was simplistic, displaying only single words according to a certain theme. It worked well enough, but the meaning of the piece was still not conveyed by the relatively random words. For the second iteration, I formed actual stories within the body, but the small, constantly moving text was difficult to read.

For the final version of the software, a collection of texts from Cloud Atlas and Zima Blue, along with quotes and statements from students in the IDeATe classes on show, was displayed in a static format in front of the user. The display automatically rotated between different modes and texts, slowly abstracting the user from an image of themselves to the actual story. Additional text prompted users to stand in front of the display.


Building: Sculpture

In order to hide the raised projector creating the display, I came up with a simple sculpture that would conceal it thematically. My instructors and classmates made the great suggestion of using Mylar as a reflective surface, creating a sci-fi-like obelisk with a slightly imposing presence.

With the guidance of my instructor Zach, I built a wooden frame for the prism, onto which the Mylar was stapled and mounted. A wooden mount was also created for the projector so that it could be held vertically without obstructing its heat exhaust vents, which had caused an overheating issue during one of the critiques.

Overall, this construction went well, effectively hiding the short-throw projector and increasing the focus and immersiveness of the display. Now that the installation had a physical presence, I decided to call it “The Human Codifier”: a machine that codified humans and applied stories to them.

Performance Day

In order to provide some guidance to guests during the display, I decided to add a performative aspect to the installation: I played a “Test Associate” from FutureTech Corporation, who was conducting testing in collaboration with the Library of Babel (the thematic location of the show). The character was completed with a lab coat, security badge, and clipboard. As the test associate, I provided instructions to guests who seemed lost and conducted one-question interviews at the end of their interaction.

I felt that the character was useful in completing the piece with a connection to the real world instead of leaving it a fully self-guided experience. It was also great to be able to talk to guests and find out how they felt about the installation.

The position I took up in the MuseumLab for the show was a small nook in the gallery space on the 2nd floor, which I found after scouting the space. It was cozy and perhaps private enough for a single person to experience. The one small drawback of the showing was that my spot ended up a little poorly lit and perhaps a little hard to find in the space.

Reflection

Although the actual interaction of this piece was simple, I felt that the overall scale of the show and achieving a meaningful experience were the most challenging aspects of this project. The additional time and iterations did help me home in on the exact experience I wanted: a good handful of guests tried to read the stories and dissect their meaning. However, the content of the stories could have been further refined. As for the overall show, I felt that it was a great success, and it was the first time I had presented to a public audience of 100+ people.

Last of all, I was most thankful to be able to learn along this journey with my classmates Alex, Scarlett, Ilona, Padra, Hugh and Eben, and I appreciate all the guidance provided by my instructors Heidi and Zach. Having a close group of friends to work with, bounce ideas off, and receive critique from was immensely helpful; I could not have done it without them.

Other mentions: moving day, after a day of setting up, tearing down 🙁

 

Additional thanks to Christina Brown for photography, Jacquelyn Johnson for videography, and my friends Brandon and Zoe for some photographs.

Code

Here is the Processing code that I used, alongside the openkinect library mentioned earlier.

import org.openkinect.processing.*;

Kinect2 kinect2;

// Depth image
PImage targetImg, inputImg, resizeImg;
int[] rawDepth;

// Which pixels do we care about?
int minDepth =  100;
int maxDepth =  1800; //1.8m

// What is the kinect's angle
float angle;

// Density characters for ascii art
String letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%^&*()-=_+{}[];':,./<>?`~";
String asciiWeights=".`-_':,;^=+/\"|)\\<>)iv%xclrs{*}I?!][1taeo7zjLunT#JCwfy325Fp6mqSghVd4EgXPGZbYkOA&8U$@KHDBWNMR0Q";

String[] stories = {
  "To be is to be perceived, and so to know thyself is only possible through the eyes of the other. The nature of our immortal lives is in the consequences of our words and deeds, that go on and are pushing themselves throughout all time. Our lives are not our own. From womb to tomb, we are bound to others, past and present, and by each crime and every kindness, we birth our future."
  , "I understand now that boundaries between noise and sound are conventions. All boundaries are conventions, waiting to be transcended. One may transcend any convention if only one can first conceive of doing so. My life extends far beyond the limitations of me."
  , "I'll die? I'm going to die anyway, so what difference does it make? Sometimes, it is difficult event for me to understand what I've become. And harder still to remember what I once was. Life is precious. Infinitely so. Perhaps it takes a machine intelligence to appreciate that."
  , "I think I know what I'm passionate about. But is that really true? I don't want to be too much of one thing, in computer science or art, but yet I still want to be an expert at what I can be. They say that I should not be closing too many doors, but it takes so much energy to keep them open too."
  , "I would want the machine to capture the connection I share with others: my family, my friends, my mentors. Ideally, it would describe my flaws as well: my insecurities, my difficulty expressing emotions, my past mistakes. Perhaps if these aspects of me were permanently inscribed in the Library of Babel, a reader would come to understand what I value above all else and what imperfections I work to accept."
  , "Loneliness, the hidden greed I possess, and maybe what happens behind the many masks I have. to the point it might seem like I don’t know who I am."
  , "To know 'mono no aware' is to discern the power and essence, not just of the moon and the cherry blossoms, but of every single thing existing in this world, and to be stirred by each of them."
};

String[] titles = {
  "The Orison"
  , "Boundaries are Conventions"
  , "The Final Work"
  , "Balancing my Dreams"
  , "Inscribing Myself to the Machine"
  , "When Knowing = Not Knowing"
  , "The Book of Life"
};

String[] authors = {
  "Somni-451"
  , "Robert Frobisher"
  , "Zima Blue"
  , "Ron Chew"
  , "Alex Lin"
  , "Lightslayer"
  , "Motoori Norinaga"
};

String[] modeText = {
   "Scanning for:"
  ,"Digitizing:"
  ,"Codifying:"
  ,"Applying Story:"
  ,"Application Complete."
};

String[] targetText = {
   "HUMANS..."
  ,"HUMAN_BODY..."
  ,"DIGITAL_BODY..."
  ,"STORY_HERE"
  ,"STORY_HERE"
};

int drawMode = 0;

// 0 - full color with depth filter
// 1 - pixelize to squares
// 2 - decolorize to single color
// 3 - small text with depth
// 4 - story text without depth

color black = color(0, 0, 0);
color white = color(255, 255, 255);
color red = color(255, 0, 0);
color magenta = color(255, 0, 255);
color cyan = color(0, 255, 255);

color BACKGROUND_COLOR = black;
color FOREGROUND_COLOR = white;

int modeTimer = 0;
int indexTimer = 0;
int index = 0;
String currentText;

Boolean movingText = false;

int largerWidth = 1300;
int yPos = -200;

PFont headerFont, labelFont;

void setup() {
  size(3000, 1000);

  kinect2 = new Kinect2(this);
  kinect2.initVideo();
  kinect2.initDepth();
  kinect2.initIR();
  kinect2.initRegistered();
  kinect2.initDevice();

  // Blank image
  targetImg = new PImage(kinect2.depthWidth, kinect2.depthHeight);
  inputImg = new PImage(kinect2.depthWidth, kinect2.depthHeight);

  currentText = stories[0];

  // load the fonts once here rather than every frame in draw()
  headerFont = loadFont("HelveticaNeue-Bold-48.vlw");
  labelFont = loadFont("HelveticaNeue-Bold-24.vlw");

  drawMode = 0;
}

void draw() {
  background(BACKGROUND_COLOR);

  // text switcher
  if (millis() - indexTimer >= 60000) {
    index++;
    if (index > 6) {
      index = 0;
    }

    currentText = stories[index]; 
    
    indexTimer = millis();
  }

  // mode switcher
  if (drawMode <= 3 && millis() - modeTimer >= 5000) {
    drawMode++;

    modeTimer = millis();
  } 

  if (drawMode <= 4 && millis() - modeTimer >= 40000) {
    drawMode = 0;

    modeTimer = millis();
  }
  
  if (drawMode >= 3) {
     targetText[3] = titles[index] + " : " + authors[index] + "...";
     targetText[4] = titles[index] + " : " + authors[index];
  }
  

  // START DRAWING!
  if (drawMode < 3) {
    targetImg = kinect2.getRegisteredImage();
    rawDepth = kinect2.getRawDepth();
    for (int i=0; i < rawDepth.length; i++) {
      if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
        inputImg.pixels[i] = targetImg.pixels[i];
      } else {
        inputImg.pixels[i] = BACKGROUND_COLOR;
      }
    }
  } else {
    targetImg = kinect2.getDepthImage();
    rawDepth = kinect2.getRawDepth();
    for (int i=0; i < rawDepth.length; i++) {
      if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
        inputImg.pixels[i] = FOREGROUND_COLOR;
      } else {
        inputImg.pixels[i] = BACKGROUND_COLOR;
      }
    }
  }

  inputImg.updatePixels();
  resizeImg = inputImg.get();
  resizeImg.resize(0, largerWidth);
  //image(resizeImg, kinect2.depthWidth, yPos);


  switch(drawMode) {
  case 0:
    image(resizeImg, kinect2.depthWidth, yPos);
    break;

  case 1:
    pixelateImage(resizeImg, 10, kinect2.depthWidth, yPos);
    break;

  case 2:
    ASCII_art(resizeImg, 20, 15, kinect2.depthWidth, yPos);
    break;

  case 3:
    rando_art(resizeImg, 25, 20, kinect2.depthWidth, yPos);
    break;

  case 4:
    Story_art(resizeImg, currentText, 30, 40, 17, FOREGROUND_COLOR, BACKGROUND_COLOR, kinect2.depthWidth, yPos);
    break;
  }

  // DEBUG TEXT
  fill(FOREGROUND_COLOR);
  //text("THRESHOLD: [" + minDepth + ", " + maxDepth + "]", 10, 36);

  // PRINT TEXT
  fill(FOREGROUND_COLOR);
  pushMatrix();
  translate(450, 700);
  rotate(-HALF_PI);
  textAlign(LEFT);
  
  textFont(headerFont);
  text(modeText[drawMode], 0, 0);

  textFont(labelFont);
  text(targetText[drawMode], 0, 50);
  
  popMatrix();
}

void Story_art(PImage input, String output, int TextSize, int xSpace, int ySpace, color target, color bg, int startX, int startY) {
  textAlign(CENTER, CENTER);
  strokeWeight(50);
  textSize(TextSize);

  int textIndex = 0;

  //transformation
  for (int x=0; x<input.width; x+=(xSpace)) {
    for (int y=input.height-1; y>0; y-=(ySpace)) {
      // get a grayscale color to determine color intensity
      color C = input.get(x, y);
      color greyscaleColor = int(0.299 * red(C) + 0.587*green(C) + 0.114*blue(C));

      // map grayscale color to intensity of target color
      color quant = int(map(greyscaleColor, 0, 255, bg, target));
      //fill(quant);
      fill(input.get(x, y));

      // draw, but rotated
      pushMatrix();
      translate(x + startX, y + startY);
      rotate(-HALF_PI);
      text(output.charAt(textIndex), 0, 0);
      popMatrix();

      //text(output.charAt(textIndex),x + startX, y + startY);

      textIndex++;
      if (textIndex == output.length()) {
        textIndex = 0;
      }
    }
  }
}

void ASCII_art(PImage input, int textSize, int spacing, int startX, int startY) {
  textAlign(CENTER, CENTER);
  strokeWeight(50);
  textSize(textSize);

  //transformation from grayscale to ASCII art
  for (int y=0; y<input.height; y+=(spacing)) {
    for (int x=0; x<input.width; x+=(spacing)) {
      //remap the grayscale color to printable character
      color C = input.get(x, y);
      color greyscaleColor = int(0.299 * red(C) + 0.587*green(C) + 0.114*blue(C));
      int quant=int(map(greyscaleColor, 0, 255, 0, asciiWeights.length()-1));
      fill(input.get(x, y));


      // draw, but rotated
      pushMatrix();
      translate(x + startX, y + startY);
      rotate(-HALF_PI);
      text(asciiWeights.charAt(quant), 0, 0);
      popMatrix();
    }
  }
}

void rando_art(PImage input, int textSize, int spacing, int startX, int startY) {
  textAlign(CENTER, CENTER);
  strokeWeight(50);
  textSize(textSize);

  //transformation from grayscale to ASCII art
  for (int y=0; y<input.height; y+=(spacing)) {
    for (int x=0; x<input.width; x+=(spacing)) {
      //just get a random character
      fill(input.get(x, y));

      // draw, but rotated
      pushMatrix();
      translate(x + startX, y + startY);
      rotate(-HALF_PI);
      text(letters.charAt(int(random(0, 55))), 0, 0);
      popMatrix();
    }
  }
}

// Adjust the angle and the depth threshold min and max
void keyPressed() {
  if (key == 'a') {
    minDepth = constrain(minDepth+100, 0, maxDepth);
  } else if (key == 's') {
    minDepth = constrain(minDepth-100, 0, maxDepth);
  } else if (key == 'z') {
    maxDepth = constrain(maxDepth+100, minDepth, 1165952918);
  } else if (key == 'x') {
    maxDepth = constrain(maxDepth-100, minDepth, 1165952918);
  } else if (key == 'm') {
    movingText = !movingText;
  } else if (key >= '0' && key <= '9') {
    // clamp to the valid modes so modeText[] and targetText[] stay in bounds
    drawMode = constrain(Character.getNumericValue(key), 0, 4);
  }
}

 

recursion
https://courses.ideate.cmu.edu/62-362/f2019/recursion/ (16 Dec 2019)

ritual items (photo by Sally Maxson)

as seen from above, performing on table (photo by Christina Brown)

at the beginning of the performance (photo by Sally Maxson)

tracing the outside of a trace (the body map growing) (photo by Christina Brown)

drawing and being tapped on the shoulder by stick, by breath (photo by Christina Brown)

reflexive drawing continues (photo by Christina Brown)

video documentation above (thank you, Jacquelyn Johnson for the footage)


In this piece, I have a device on my body that measures the amount my chest expands as I breathe. This device then triggers the motion of a motor on my back, which moves a twig forward and back. This motion gently taps me, continuously reminding me of my body.

With this awareness of the body, I map myself by tracing my outlines on a transparent paper that crinkles as I walk on it. I make a unified portrait of myself, partially by drawing the outline of my left hand to my elbow to create a circle, rotating my body as I do so.

After I finish this first iteration of my personal map, I then create another, taking the roll of transparent paper and folding it over on itself for a new canvas.

On this new map I can see the ghost of my last drawing. In my next mapping, I trace the outlines of the first map. The memory of its previous state is embedded within its very structure. The drawing process moves forward by referencing its previous state.


When thinking about the theme arrows, I was struck by the poetry of how abstraction enables computers to perform their every task. As Zach explained even more beautifully in the class lecture he gave, there are many, many levels of abstraction that convert the instruction “move this motor to the right 90 degrees” into the binary language computers can understand, much like a Russian doll set is a series of containers containing containers.

I began to compile relevant research on the topic here. I was especially interested in how abstraction is a concept relevant in many disciplines, and in how this conversion from one category to another for the movement of information is so pervasive. I was interested in the way we abstract a self through the way we narrate our life stories, and in how the beginning of a subjective self might start the moment we recognize our bodies.

Somewhere in my research I came upon the idea of recursion in computer programming. I found it interesting that there was a kind of emergent complexity that existed within a function calling itself; that complex growth structures like fractals could be visualized, or complex problems solved, by the simple process of self-referencing “a smaller instance of that problem”, as the Wikipedia page puts it.
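
(As an aside, a tiny Processing sketch, written for this writeup rather than used in the piece, makes the idea concrete: branch() draws a line and then calls itself twice on a smaller instance of the same problem, producing a fractal tree.)

void setup() {
  size(600, 600);
  noLoop(); // draw the tree once
}

void draw() {
  background(255);
  stroke(0);
  translate(width / 2, height); // start at the bottom center
  branch(120);
}

void branch(float len) {
  if (len < 4) return;  // base case: stop recursing
  line(0, 0, 0, -len);  // draw this segment
  translate(0, -len);   // move to its tip
  pushMatrix();
  rotate(radians(25));
  branch(len * 0.67);   // a smaller instance of the same problem
  popMatrix();
  pushMatrix();
  rotate(radians(-25));
  branch(len * 0.67);
  popMatrix();
}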

Recursion felt very connected to my interest in the abstraction of identity formation. I knew I wanted to connect them somehow within a performance.

It was difficult at first to figure out what a recursive drawing might look like (my initial idea was to make rubbings of fragments of my body, but I realized this action, while it may relate to the construction of identity, does not relate to recursion; I am thankful to my dear friend Jacob Tate for helping me realize this). I struggled to know, too, how I wanted to trace myself: what is the conceptual difference between trying to draw my entire outline and trying to draw several parts of myself repeatedly in order to create a unified image?

It was also interesting to think about how each performative action would unfold, how my costume could influence the performance, and what significance color could hold in the piece. I am so thankful for Heidi’s help in this; she gave me so many references for inspiration and talked me through different ways of performing. I am so grateful for such incredible and kind teachers.


project statement


The physical making of the mechanism to measure my breath was a bit of a challenge and required some differing approaches! I am so thankful for Zach in this process. He had so much patience and helped me work through the physical mechanism for tracking the expansion of my chest, as well as helping me convert my project from a normal-sized Arduino to a mini Arduino, which would enable me to perform, move, and draw without being too fearful of breaking any wire connections.

In the process of making a mechanism to track my breath, we experimented with materials that had too much friction (a belt, a scarf) before settling on the red nylon rope.

Drawing out the circuit diagrams was so helpful in the soldering process, and made things move so much more smoothly.

experimenting with using a sensor that measures the bending of a disk

finally getting the mechanism to work on a normal Arduino

figuring out the wiring so I could transfer from the normal big Arduino to the tiny Arduino which would enable me to perform

one of the original mechanisms made to measure chest expansion, this scarf however had too much friction

consolidating the wires to make chest piece better suited for performance

schematic for Arduino 

// ilona altman
// recursion project
// this code makes the servo motor move with the movement of the potentiometer,
// which moves with the expansion of my chest as i breathe
// this code was made with lots of help from Robert Zacharias (thank you so much)
// also based on the Sweep example by BARRAGAN <http://barraganstudio.com>; that example code is in the public domain.

#include <Servo.h>

Servo myservo;  // create servo object to control a servo
// twelve servo objects can be created on most boards

int maxReading = 0;     // largest potentiometer reading seen so far
int minReading = 1023;  // smallest potentiometer reading seen so far


void setup() {
  myservo.attach(9);  // attaches the servo on pin 9 to the servo object
  pinMode(A1, INPUT); // declares the potentiometer input (read with analogRead(A1) below)
}


// move the motor with the breath
void loop() {
  int readVal = analogRead(A1);

  // auto-calibrate the observed range as the chest expands and contracts
  if (readVal > maxReading) {
    maxReading = readVal;
  }
  if (readVal < minReading) {
    minReading = readVal;
  }

  // once a usable range exists, map the reading onto the servo's sweep
  if (maxReading > minReading) {
    int posWithBreath = map(readVal, minReading, maxReading, 0, 180);
    myservo.write(posWithBreath); // tell the servo to go to that position
  }
}

All in all, this project was so rewarding to work on, especially with the culmination of the performance. I love and admire everyone in our class so much.  I feel proud of the final performance, and I believe that this project was a synthesis of much of what I learned from this class.

I have learned from Heidi and Zach, and from working through previous projects, how valuable it is to create prototypes, and how valuable it is to really understand what interaction you are creating with your project. It is more important that this interaction be thoughtful and fully working than that your project be perfectly beautiful. This was difficult for me to learn because I have a tendency to want to think about everything before I start making. This stems from a fear of failure and a desire for a “perfect”, conceptually interesting piece; I’ve realized, however, that by postponing the physical making, I do not give myself the opportunity to improve the piece as I am making it. I do not give the physical piece, and the interaction it creates, time to marinate so I can make adjustments.

Even though there is always room to improve on this, and this project was not perfect in this process, I know I have grown so much since project 1. I feel really good about the steps taken to realize this project, and I feel good about how it felt to perform. I am so excited for the future and so thankful for everyone in our beautiful class for making this growth possible.

INSTALLATION FOR MIRRORED BALLOON AND SEVERAL RECENT MIRRORED BALLOONS
https://courses.ideate.cmu.edu/62-362/f2019/installation-for-mirrored-balloon-and-several-recent-mirrored-balloons/ (16 Dec 2019)

An audience member interacts with a mirrored balloon.

Judith Butler’s concept of performativity (standing most obviously on the shoulders of Maurice Merleau-Ponty’s phenomenology) proposes that we define the self, and the various types of identity that make up the self, as a process rather than something complete or stable.

Butler’s most famous example is that of gender. Gender (e.g. ‘womanhood’) means very different things in different cultures, and indeed, while babies are born with somewhat differing endocrine systems, all arrive on earth unaware that tall shoes and pink things correlate to an authentic expression of certain genitalia. And yet, by early adulthood, for some the use of tall pink shoes feels like an authentic expression of the self; and because it feels that way, it (by definition) is. How does this happen?

According to Butler, the answer is that gender is essentially a process, in which you intentionally practice (or perform) certain ways of being that you perceive as associated with you. And over time, both personally and as a culture, this performance becomes natural and authentic. As such, what gender means is different for every person, and the agreed-upon end result (tall shoes = Woman) is not inevitable. This argument is supported by recent neuroscience: whatever you practice, you learn, and learning is a literal restructuring of connections in the brain to prioritize certain outcomes. So whatever you practice, you become.

Which brings us to this piece, in which a balloon is demonstrating the process via which one can become a balloon.

Project Description

This is an ‘analog’ video delay piece: there is a projector pointed at a wall, and there is a camera pointed at the same wall. The slight difference in focus and geometry between the camera and the projector means that the projected image doesn’t quite match the camera’s capture of that image. This offset, when repeated 24 frames a second, results in interesting trailing patterns through space. The closer the focus and image are to each other, generally the more active the effect and the cooler the image.

This edition of the piece featured a mirrored balloon tied to a fan pointing straight up. The mirrored balloon naturally dances in the air of the fan, and so there is always an active input to the feedback system, to make the delay stream move around and look nice. The space is set up in such a way that it is also possible for audience members to get in front of the assemblage, and make their own patterns on the wall. There is a couch on one wall for people to hang out and watch like you would watch a fire on a stormy night. To tie things together thematically, the projector and fan are both stacked on top of piles of architectural books.

Process

This was a tricky one. The matching of the lensing between camera and projector is very finicky, and slightly different alignments and lightings create very different effects. I initially intended to create a system via which a participant could have their own body projected onto them, while leaving feedback trails behind in their wake. This would have been a very different physical installation – but more to the point, it would have required the camera lens and projector lens to be as physically close to each other as possible for the effect to be replicable across a range of distances from the camera. It eventually became clear that I didn’t have the equipment or installation space to do that correctly… and so I decided to move to a short-throw projector, and a camera with a medium-wide field of view. I’m still interested in the other piece, and I think it could be good someday… Maybe they could go in an installation together.

Extraordinarily useful notes on throw ratios

things you can write on paper before just eyeballing it

homebrewing
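
(For anyone redoing that paper math: a projector’s throw ratio is throw distance divided by projected image width, so required distance = throw ratio × image width. With a hypothetical 0.5 short-throw ratio, a 2 m wide image needs only about 1 m of throw, where a conventional 1.5-ratio lens would need 3 m. These numbers are illustrative, not the ones used in the piece.)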

Finer Points

Things I enjoyed about this installation:

  • Siting: the location of the work, at the top of the stairs, as a backdrop to two more static interactive works, felt successful to me. I was also lucky that the audio piece sited around the corner provided a great score for the dancing balloon. I thought the piece read as a positive within its environment.
  • Lighting: hard sidelighting the balloon from across the room felt successful as well. I was pleased to be able to put a shadow of the balloon on a bit of wall between the projection and an adjacent window. In addition to lighting the object well for the camera, I think this resulted in a more immersive and site-specific implementation. I like it when things can be “thick” with space.
  • Other objects: the books, carpet, and couch were not initially part of the plan, but stuff that showed up (usefully, or couldn’t be gotten rid of) in the space. I ended up enjoying all of the props, and wish that I had been able to make the gigantic Manfrotto tripod I was using somehow fit the mise-en-scène a little better as well.
  • The spammy descriptive text on performativity felt very good. I like jokes like that, that are very stupid but very serious at the same time.

Things that could have been improved:

  • Again, giant tripod. I used this large, expensive tripod in order to have finer control over the camera position relative to the projection. While this was very important and useful, I wish the object itself had been more dressed or otherwise made intentional.
  • Footpaths through active space: only the leading edge of the installation was really accessible to the audience, and the couch was less than inviting. I could imagine a better cable management system, perhaps running under the rug and emerging through a hole, that could have made for a more welcoming space.
  • Camera: I transitioned late from a Sony camcorder to a fancier Sony DSLR. Sony DSLRs are not really made for video, and so while in video mode, the camera had to be poked every half hour to keep it from going to sleep… The DSLR was most useful in giving me finer manual control over focus and zoom, but didn’t have an amazing lens on it. I think better aperture control could have given me better results. Ultimately, I ended up paging through a bunch of its auto settings until I found an exposure algorithm I liked, and went with that. It’s possible the auto-exposure flutter helped the piece? Certainly adjusting the ISO had no effect on the feedback once that got going… I just wish I had spent more time with the equipment to fine-tune my choices.

Community Event 2019
https://courses.ideate.cmu.edu/62-362/f2019/community-event-2019/ (16 Dec 2019)

Everybody who’s here, get in front of the green screen! Do something funny! Let’s have something to remember this night by! OMG you were so cute! You looked so young! Let’s not let the night end! Let’s not let the night have ever ended! This is never, ever, ever, ever going to go away! Wahoo!

Descriptor

{CE19} is a party-trick video installation. There is a large green-screen with nice lighting and a camera pointed at it. If you walk in front of the green-screen, whatever you do is sampled and looped endlessly on a nearby TV. Because of the green-screen technology, your body is separated from the green background and layered on top of some other backdrop… except instead of a sandy beach scene or something like that, the backdrop here is a ten-second loop of everything that’s already happened in front of the screen, layered endlessly on top of itself. And so, your captured image won’t go away until somebody else’s image is layered on top of yours and blocks you out. Sounds like the internet, amirite?

Process

The design of the project is extremely simple: a chroma-key effect takes in two video streams, one of the foreground (a human performer in front of a monochromatic backdrop that can be ‘keyed out’), and one of the background (some picture to replace the monochromatic background). The ‘hack’ is simply to plug a copy of the final output back into the background input, with a delay. I used a ten-second delay: the last ten seconds of output video become the background for the current ten seconds of input. The result is twenty seconds of composited video, which then becomes the background for the next ten seconds of video… et cetera. You can see the effect clearly in this guy, who has been looking at his phone for almost a minute in front of the camera:

I made the video processing patch in Isadora, a live video processing environment pioneered for use in dance. It uses a patcher interface similar to Max:

As you can see, the patch really is “that” simple.
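
For readers without Isadora, here is a rough Processing sketch of the same feedback loop (not the patch itself; it assumes the Processing Video library and a webcam, and uses a short two-second delay and a crude green key):

import processing.video.*;

Capture cam;
PImage[] ring;                // ring buffer of past output frames
int head = 0;
final int DELAY_FRAMES = 60;  // ~2s at 30fps; the installation used ~10s

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  ring = new PImage[DELAY_FRAMES];
  for (int i = 0; i < DELAY_FRAMES; i++) {
    ring[i] = createImage(width, height, RGB);
  }
}

void draw() {
  if (cam.available()) cam.read();
  PImage bg = ring[head];     // oldest stored output = the delayed background
  cam.loadPixels();
  bg.loadPixels();
  loadPixels();
  int n = min(pixels.length, cam.pixels.length);
  for (int i = 0; i < n; i++) {
    color c = cam.pixels[i];
    // crude chroma key: strongly green pixels become "transparent"
    boolean isGreen = green(c) > 120 && green(c) > red(c) * 1.4 && green(c) > blue(c) * 1.4;
    pixels[i] = isGreen ? bg.pixels[i] : c;
  }
  updatePixels();
  ring[head] = get();         // feed the current output back into the buffer
  head = (head + 1) % DELAY_FRAMES;
}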

Technology

Because this is such a simple idea, proof of concept was fairly quick, which led to the question of quality: how to make it look good (but not too good), how to make it consistent across a long-ish installation, and how to lay out the space so that it would (a) not be an eyesore, (b) naturally capture enough people for the effect to be visible, and (c) be accessible enough to invite the public to play with it.

The first question was capture: analog or digital. It would be possible to run this system with composite video and an analog video delay line. The advantages would be lower latency and a potentially appealing analog grain to the image quality. I decided, however, that a “cleaner” image quality made more sense for a piece that is essentially about social media in 2019. To downplay the latency and frame rate issues (<50ms, as low as 20fps at times), I kept the screen out of direct view from the capture area: thus a participant can’t see their own lips out of sync as they speak, and it’s close enough for an outside viewer to feel it to be approximately time-aligned.

Having decided on a digital system, I captured from a Panasonic GH4 through a BlackMagic Intensity Shuttle into Isadora, and outputted the media directly from my Mac to a flatscreen. The GH4’s superior image quality, bright exposure, and aperture control made dialing in the chroma-key infinitely easier than my attempts with a webcam and a handheld camcorder. Despite the larger footprint, using a Real Camera ended up being very worth it.

(analog video feedback noise)

Finally, lighting: to key properly, a green-screen has to be very bright and evenly lit, or else its apparent color will not be consistent across the full field. Luckily, the site had a good amount of track lighting available in the stairwell in which I installed. I used all of the track lighting to illuminate the screen, and added a soft key light by the camera to pick out the public.

Yes, that’s a fluorescent winter-time happy lamp. Small size, bright light, soft beam. It worked great! And everyone felt better, I’m sure.

Layout

The last question was experience design: how to make this thing viewable, usable, unavoidable, fun, et cetera. By placing it in a cyclorama-like space at the top of a stairwell, I felt that it was positioned in an area that was both high-traffic and enough out of the way to not feel like a major imposition on other works. The “in a corner” quality allowed the lighting to be fairly involved without messing up the dim and moody vibe of the rest of the evening’s lighting. And it meant that there were always subjects passing by and becoming part of the work, so that more playful types could easily see what was afoot and learn how to use the piece as a toy.

I also made the piece be part of my guided tour of the space as the “Total Librarian”, which narratively cohered it to the larger evening in a constructive way (and again, mandated participation). I am happy with the playful way the work was proposed in physical space, paired with a slightly more sinister description in the show documentation. Perhaps some participants became less comfortable with their fun upon reading about it later.

Hi, I’m the total librarian. We’ll now head to the stacks. There’s a remote possibility you’ll completely change lexigraphical meaning by entering the space—so for the sake of safety, please pause in front of this green void so you can be copied for backup. Remember to be yourself. Remember to be funny. Thank you.

I do think that the piece would have been better served by a larger output. The flatscreen I used was a little too small to catch the eye of a playful user in a crowded room, and slightly obscured by columns from a passer-by. I also think that projection could have brought the human forms up to a more 1:1 scale, which would have given a better sense of infinitely capturing the bodies of these people.

All that said, I feel pretty good about it.

 

lonely (for public consumption)
https://courses.ideate.cmu.edu/62-362/f2019/lonely-for-public-consumption/ (16 Dec 2019)

padra crisafulli

[Photo: Christina Brown]

A view from the entrance to the room. [Photo: Padra Crisafulli]

But what is it?

In a room, there are three elements. 1; there is an enclosed space with the performer inside, as well as cameras focused on different distinct parts of the body and a variety of props to be used. 2; these cameras are displaying what they see onto screens throughout the rest of the space, set up so that a person can only see one screen at a time. 3; there are letters at a station with different prompts written on them, as well as the option to write your own prompt, which is then delivered to the performer in the enclosed space to act out. The intersection of these three elements creates a performance in which different parts of the body are isolated when performing an action, forced to be considered by themselves rather than as parts of the whole body.

1; there is an enclosed space with the performer inside, as well as cameras focused on different distinct parts of the body and a variety of props to be used. [Photo: Christina Brown]

2; these cameras are displaying what they see onto screens throughout the rest of the space, set up so that a person can only see one screen at a time. [Video: Jacquelyn Johnson]

3; there are letters at a station with different prompts written on them, as well as the option to write your own prompt, which is then delivered to the performer in the enclosed space to act out. [Photo: Christina Brown]

In this installation, the letters and the enclosed pod were directly across from each other, so that the time between submitting a prompt and seeing it acted out was shorter. [Photo: Padra Crisafulli]

Audience interacting with the letter station, managed by Noah Hull. [Photo: Christina Brown]

Sometimes, the prompts were very fun to receive. [Photo: Christina Brown]

Process Reflection

This piece was borne out of a deep struggle, both with the initial prompt, which was fascinating but incredibly difficult to translate into something tangible, and with the time crunch I was under in the midst of all of my other work. The concept of ‘arrows’ has so much resonance in other fields, like semiotics in linguistics and symbolism in any artistic form, that it became daunting to consider how one could vary on it, respond to it in an artistic way that hadn’t already been done a million times before. And to do it with as little time as I could devote to it, yes, it was in fact very hard. Knowing this, it should come as a surprise to no one (at least, not to me) that I am very unhappy with how this piece ended up. Maybe that is because there is potential in it to be interesting, and conversely, the potential that it becomes another self-indulgent ‘look at me being strange’ kind of work.

The initial idea as written out, with help from Eben Hoffer.

Self-flagellation aside, in terms of growth, maybe this wasn’t all for naught. I have always had a difficult time asking for help, especially in artistic endeavors. But especially in a class in which I am learning new skills and immediately applying them, asking for help has become more and more pertinent, not necessarily to making the best product, but to anything being produced at all. I think this reluctance to ask for help when producing work like this comes from wanting to hold onto the idea that, funding aside, the artist is the sole reason things are made. This thought created a conflict within me as I tried to put this together, until I ultimately created a piece which cannot exist without other people, right down to the title. I had doomed myself to this kind of irony from the very beginning.

A day of testing with the original cameras.

It was around this time that the idea of using fingers as a subject arose.

The day of the MuseumLab showing, I was taking the bus to the North Side when I realized that I had accidentally turned myself into a computer: sticking myself in an enclosed space, receiving commands, returning the constrained results of those commands. And yet, I know one of the critiques of my written description was that I had written it to be incredibly impersonal. Maybe I knew deep down that this analogy was coming, even though it never once crossed my mind. During the performance itself, I overheard one of the audience members noting that it was like I was an animal in a cage. And yet, later on in the evening, I was having a lot of fun with them as they began to write their own silly, playful prompts for me. If I were to do this again, I would explore this straddling of the deeply personal and the impersonality of the separated performer, and put more thought into how the order in which an audience member engages with the elements impacts the interaction. Maybe there is more of a narrative to this.

The angles used during testing, when I did not have a tripod, were a bit more interesting to me.

Fingers are a very strange thing to film, especially to keep them in interesting shapes. I probably would not make them the only other subject besides my face if given another crack at this.


Project 3: Walking Backwards
https://courses.ideate.cmu.edu/62-362/f2019/project-3-walking-backwards/ (16 Dec 2019)

Photos of the Device

Brief Video of How it Works

IMG_1207

Final Presentation at Museum Lab

Project Statement

Inspired by Guy Debord’s writing, Theory of the Dérive (1956), and his idea of the “dérive”, or rapid passage through varied ambiances, this project proposes walking backwards as a new means of experiencing a confined space. Debord emphasizes the importance of purposeless strolling for experiencing the discarded or marginalized aspects of our environment. He believes that purposeful walking has an agenda, making it difficult to absorb the world around us. While walking without a set destination or intention, we are opened to random encounters that enrich our experience.

The Museum Lab is a space with limited walking and movement options. The sequence of moving from one space to another constructs a linear experience, which reduces the chances of random, serendipitous moments. By walking backwards, one is denied one’s frontal view, and one’s perceived path thus changes.

In Walking Backwards, a sensor on the hand measures the distance between the user and their surroundings. The distance is converted into vibrations, which allows the user to estimate how close they are to nearby objects. From the varying vibrations, the user is able to delineate their path.

Process Images 

Sketch of an idea. Device as a glove

The Glove

Process Reflection

The project was a continuation of the previous project, refitted into the context of the Museum Lab. There were some improvements over the previous version: how the device is worn, and the relocation of equipment to different parts of the body. These changes allowed for more variety in interaction with the device. Moving the ultrasonic ranger to the hand and keeping the Arduino and protoboard to a minimal size improved the wearability of the device. The ultrasonic ranger on the palm of the hand allowed me to use my hands freely to sense any nearby object.

However, I regret that I wasn’t able to make more adjustments to the project. It would have been better if the device had been able to detect objects even further ahead, with more variation in the vibrations it gives off (see the sketch below for one direction this could take).
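
One possible direction (an untested sketch, with placeholder effect IDs; the DRV2605 datasheet’s waveform table lists the real options) would be a wider table of distance bands, each mapped to its own effect:

// Placeholder effect IDs and thresholds, strongest to weakest; swap in
// values from the DRV2605 waveform list for real behavior.
const uint8_t effects[] = {118, 56, 48, 47, 44};
const int bands[] = {5, 15, 25, 40, 60}; // cm thresholds

uint8_t effectFor(int pingCm) {
  if (pingCm <= 0) return 0; // 0 = no echo: stay silent
  for (int i = 0; i < 5; i++) {
    if (pingCm < bands[i]) return effects[i];
  }
  return 0; // beyond 60cm: stay silent
}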

//Project no.3: Walking Backwards 
//Hugh Lee 
//The ultrasonic sensor measures the distance between the sensor and any nearby object and converts it into vibration.
//There are three different types of vibration, indicating how close the object is to the sensor.
//The vibration effects are pulled from the "Adafruit_DRV2605.h" library.
#include <NewPing.h> 
#include <Wire.h> 
#include "Adafruit_DRV2605.h" 

#define TRIGGER_PIN 12 // Arduino pin tied to trigger pin on the ultrasonic sensor. 
#define ECHO_PIN 11 // Arduino pin tied to echo pin on the ultrasonic sensor. 
#define MAX_DISTANCE 400 // Maximum distance we want to ping for (in centimeters). Maximum sensor distance is rated at 400-500cm.

Adafruit_DRV2605 drv;

NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DISTANCE); // NewPing setup of pins and maximum distance. 
void setup() {
  Serial.begin(9600);
  Serial.println("DRV test");
  drv.begin();

  drv.selectLibrary(1);
  // I2C trigger by sending 'go' command
  // default: internal trigger when sending GO command
  drv.setMode(DRV2605_MODE_INTTRIG);
}

uint8_t effect = 1; 

void loop() {
  // Serial.print("Effect #"); Serial.println(effect); 
  int ping = sonar.ping_cm(); 
  delay(50); // Wait 50ms between pings (about 20 pings/sec). 29ms should be the shortest delay between pings. 
  Serial.print("Ping: "); 
  Serial.print(ping); // Send ping, get distance in cm and print result (0 = outside set distance range) 
  Serial.println("cm"); 

  if (ping <= 0 || ping >= 50) {
    drv.stop(); // 0 = no echo (outside set range); beyond 50cm, stay silent
    return;
  }

  if (ping < 5) {
    effect = 118; // strongest vibration from the "Adafruit_DRV2605.h" library
  } else if (ping < 25) {
    effect = 48;  // medium vibration
  } else {
    effect = 44;  // lightest vibration
  }

  drv.setWaveform(0, effect); // queue the chosen effect
  drv.setWaveform(1, 0);      // end the waveform
  drv.go();                   // play the effect!
}

 

Babbling Creatures
https://courses.ideate.cmu.edu/62-362/f2019/babbling-creatures/ (16 Dec 2019)

[62-362] Activating the Body | Project 3: Arrows | Fall 2019 | Alex Lin & Scarlet Tong X Connie Chau & Nicholas DesRoches

Overall Project

Final State of Rose

Photo Credit: Christina Brown

Video Footage Credit:  Jacquelyn Johnson

Description
Narrative Description

The Anthropocene marks the beginning of the age in which human activity affects the planet as a whole. No longer do our individual decisions and consequences impact only our perceived sphere of influence; rather, they affect the well-being of our ecosystem on a macro level. Babbling Creatures portrays the inherent complexities of a small-scale system featuring several simple paper sculptures. The misdirection and flow of information constantly alter the state and behavior of these creatures, revealing the larger consequences of our seemingly insignificant daily actions.

Our dissociation from nature, through the construction of artificial landscapes, leads us to forget the fundamental bond we share with other living beings in our ecosystem. Compared to the permanence of the new apparatus we erect to serve our own societal concerns, the rate at which nature is contaminated and destroyed as a consequence of our selfish pursuits far exceeds the rate at which mother nature can heal herself. To illustrate the fragile and weakened state of our planet’s ecosystem, the project is constructed out of translucent paper, embodying the fleeting beauty of nature amidst a sea of external threats.

Babbling Creatures is a family of interactive paper objects whose reactions are informed by audience and musician movements. Each creature harvests motion and human activity as input and translates it into different physical behaviors. The creatures represent the natural world, which is complex, interwoven, and exists in a delicate equilibrium vulnerable to change.

Process Images

Initial Concept Sketches:

Network Diagram

Initially, we planned to create three different creatures that each have individual reactions of their own, whose consequences would also affect the behavior of the others. This diagram portrays the world-building planned at the outset of the project. We were aiming to create a complex network of interrelated creatures that spoke to one another in ways that weren’t perceptible at a quick glance. The larger blue text denotes the different creatures, while the wavy pink lines denote the different reactions and signals that the pieces share with each other. The black represents the audience and their role within the space.

Rose Movement Exploration & Experimentation

Flexinol Stitching Pattern Exploration

Flexinol Movement

Monofilament Actuated Movement

When we first began the physical actuation explorations, we were hoping to get the flexinol working to crumple the petals of the origami rose. Through a couple of studies, we were able to find a way of stitching the flexinol so that it could move the petals given its limited mechanical force output. We then implemented the flexinol, resulting in a small but noticeable movement in the petals. Through further testing, however, we had difficulty implementing the flexinol and having confidence in it: one day it would work, and the next it would seem to have burnt out and need to be replaced. Given the amount of time required to set up the flexinol, as well as our expended supply, we chose to use a monofilament system instead. We connected the monofilament to a servo and used it to crumple the rose, allowing for more movement, which in hindsight was perhaps for the better, given the noise level and many distractions that filled the MuseumLab during our final installation.

Water Basin for the Rose

To create the basin, we started by CNC routing insulation foam to make the base form for vacuum forming.

CNC Milling the Vacuum Form Base

Since it was our first time vacuum forming, we had to do a lot of trial and error, as illustrated in the image below. We did not have time to gesso the form to make it release more easily from the styrene, and we used 1/16″ thick material, which reduced the level of detail we had hoped for in the final product.

Vacuum Forming Struggles

Although vacuum forming created more interesting forms, it also took us out of our comfort zone, being a digital fabrication tool we weren’t as familiar with.

Vacuum Formed Plastic Water Capacity Test

Willow Tree fabrication and movement

The progression of the willow leaves began with a single leaf, then moved up to several, then to an entire array. We chose to hook up the servos from the side, despite the greater mounting difficulty, because it meant less physical resistance as the leaves were shuffled against one another.

Collaborating with Exploded Ensemble

Photo Credit: Christina Brown

Flex Sensor Testing

Our collaboration with the Exploded Ensemble was mainly through a simple wearable with an embedded flex sensor, attached to Connie as she played the harp throughout the night. Similar to the conceptual underpinnings of her musical piece, which distorted audio files of many languages from across the world to accompany her improvisation with Nicholas, Connie’s movements in turn distorted and crumpled the rose that sat on a white basin.

We had also considered using an accelerometer, but the complexity of its data wasn’t necessary for the interaction, and it would have introduced more physical challenges in terms of disguising the sensor and wiring its five pins.

Board Layout:

Board Layout

Our strategy for handling all of the external power and wires was to mount the breadboards and Arduino to a sheet of plywood. As we got closer and closer to the final installation, we realized that we were spending exorbitant amounts of time setting up the power box before our work and cleaning it up afterward. And so, we claimed a power box that would be a part of our board, allowing the project to be carried, in almost its entire electronic entirety, on one sheet of plywood. Considering the transportation and risk involved with long wires, we also decided to use zip-ties to secure the plethora of wire bundles.

Wires, Wires, and More Wires:

Planning out all the different lengths of wire we needed for the project

Electronic Waste Bin Post-Deconstruction

Wires have consistently been a critical consideration in both of our past projects. Unfortunately, given the environmental character of this project, we again resorted to using wires to bridge the connections for all of our moving pieces. Although it wouldn’t have been feasible within the time frame of this project, looking forward we will definitely give heavy consideration to wireless options, which would also allow our work to be more portable and mobile.

Process Reflection

Fabrication & Setup:

Throughout the project, we spent the majority of the time realizing the physical aspect of the piece. From the array of willow leaves to the rose and all of the technology implemented there, we quickly realized that we had to scale back our scope to ensure that each creature would have a presence and role respective of how much time was invested in bringing it to fruition. Much of installation day was spent installing the willow leaves: although simple in aesthetic, mounting paper, wires, and servos to a rugged, dusty surface, lighting the leaves, and positioning the sonar were time sinks that required multiple attempts. The wiring, although bundled neatly, was also a challenge given its sheer abundance and length. Unraveling each bundle, merging bundles headed in a similar direction, and taping to unforgiving surfaces definitely slowed our progress, but also made the preparation worth the effort! By having a project focused mainly on creating a space, much of the coding naturally became simpler, which allowed us to fully immerse ourselves in the costuming, lighting, and staging of the environmental installation.

In retrospect, scaling down the size of the installation and broadening the number of smaller creatures may have contributed to a more networked feeling, though it would have been much more difficult to implement technically, and in regards to designing and fabricating more pieces and parts. Looking forward, we both want to continue exploring the performative aspects we delved into for this project, as well as further expanding our technical know-how and capacity.

Arduino Code
//Babbling Creatures by Alex Lin & Scarlet Tong
//This code is used to physically actuate several creatures and physical
//interactions as part of a larger installation. It uses various inputs
//(photoresistor, sonar, flex sensor) to move physical elements (using a
//pump and a couple of servos).
#include <NewPing.h>
// defines pins numbers
#define TRIGGER_PIN  9  // Arduino pin tied to trigger pin on the ultrasonic sensor.
#define ECHO_PIN     10  // Arduino pin tied to echo pin on the ultrasonic sensor.
#define MAX_DISTANCE 400 // Maximum distance we want to ping for (in centimeters). Maximum sensor distance is rated at 400-500cm.

NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DISTANCE); // NewPing setup of pins and maximum distance.

unsigned long pingTimer;

#include <PololuLedStrip.h>

PololuLedStrip<4> ledStrip1;
PololuLedStrip<5> ledStrip2;

#define LED_COUNT 60
rgb_color colors[LED_COUNT];

// defines variables
long duration;
int distance;

int i;
int j;

//Servo Instantiations
#include <Servo.h> 

// Declare the Servo pin 
int servoPin1 = 7;
int servoPin2 = 8; 
int servoPin3 = 6;

int mod = 0;

// Create a servo object 
Servo Servo1; 
Servo Servo2;
Servo Servo3;

int pos;
////////////////////////////////////////////////////////////
int photoreader;

#define laserTest A3
//int laserTest = 12;
int pumpPin = 11;

////////////////////////////////////////////////////////////
int flexreader;

#define flexTest A4
//int flexinolPin = 12;
unsigned long now;
unsigned long nowAnd;

////////////////////////////////////////////////////////////
int delayTime = 250;

void setup() {  
  Serial.begin(9600); // Starts the serial communication

  Servo1.attach(servoPin1); 
  Servo2.attach(servoPin2);
  Servo3.attach(servoPin3); 

  pinMode(servoPin1, OUTPUT);
  pinMode(servoPin2, OUTPUT);
  pinMode(servoPin3, OUTPUT);

  pinMode (laserTest,INPUT);
  pinMode (pumpPin,OUTPUT);

  rgb_color color;
  color.red = 255;
  color.green = 231;
  color.blue = 76;

  for (uint16_t i = 0; i < LED_COUNT; i++) {
    colors[i] = color;
  }

  ledStrip1.write(colors, LED_COUNT);
  ledStrip2.write(colors, LED_COUNT);
}
void loop() {
  if(millis() - pingTimer > 50) {
    distance = sonar.ping_cm();
    pingTimer = millis();
    Serial.print("Sonar: ");
    Serial.println(distance);
  }
  
  if (distance < 150 && distance >= 0){
    // alternate which servo swings on each trigger so the leaves flap
    if (mod % 2 == 0) {
      Servo1.write(40);
      Servo2.write(0);
      delay(400);
      mod++;
    } else {
      Servo2.write(40);
      Servo1.write(0);
      delay(400);
      mod--;
    }
  }
  
//////////////////////////////////////////////////////////////////////////////

  photoreader = analogRead(laserTest);
  Serial.print("Photoresistor: ");
  Serial.println(photoreader);
  if (photoreader < 100){
    digitalWrite(pumpPin,HIGH); // low photoresistor reading (beam blocked): run the pump
  } else {
    digitalWrite(pumpPin,LOW);
  }

////////////////////////////////////////////////////////////////////////////// 
  flexreader = analogRead(flexTest);
  Serial.print("Flex: ");
  Serial.println(flexreader);
  if (flexreader > 250){
    for (pos = 0; pos <= 45; pos += 1) { // sweep from 0 to 45 degrees, in steps of 1 degree
      Servo3.write(pos);              // tell servo to go to position in variable 'pos'
      delay(30);                      // wait 30ms for the servo to reach the position
    }
    for (pos = 45; pos >= 0; pos -= 1) { // sweep back from 45 to 0 degrees
      Servo3.write(pos);              // tell servo to go to position in variable 'pos'
      delay(30);                      // wait 30ms for the servo to reach the position
    }
  }
  
//////////////////////////////////////////////////////////////////////////////  
  delay(delayTime);
}