Visibility Through Voice

Visibility Through Voice is a Kinect-based capture setup that draws the silhouette of a participant when they speak. I used TouchDesigner as the interface between the Kinect, camera, and microphone, and explored it as a tool for manipulating the relationship between vocalization and visibility. To make these paintings, I read aloud from a random text generator while walking in and out of the Kinect’s field of view.

Videos with camera view!

 

Videos without camera view

 

The effect relies mainly on TouchDesigner’s CHOP reference feature. It let me take volume and pitch data from the microphone and use it to update the brightness of my Kinect data in real time. Once the audio-reactive part was set up, I added a feedback loop that retains the brightness from the prior frame.
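To make the idea concrete, here is a minimal sketch of the behavior the network produces. This is ordinary C++ over a pixel buffer, not TouchDesigner code (in the real patch the CHOP reference and the feedback loop do this work), and all of the names are mine.

#include <algorithm>
#include <vector>

// One frame of the effect: `silhouette` is the Kinect player mask (0 or 1 per
// pixel), `audioLevel` is the current microphone volume in 0..1, and `canvas`
// accumulates brightness from earlier frames (the feedback).
void accumulateFrame(const std::vector<float>& silhouette,
                     float audioLevel,
                     std::vector<float>& canvas) {
  for (size_t i = 0; i < canvas.size(); ++i) {
    // The voice "paints" the silhouette at a brightness set by the mic level...
    float painted = silhouette[i] * audioLevel;
    // ...and the feedback keeps whatever was brightest on any prior frame.
    canvas[i] = std::max(canvas[i], painted);
  }
}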

While I was very interested in the silhouette aspect of my project, I was mainly inspired by the visuals of long-exposure video and the work of David Rokeby, who uses a similar technique in “Plot Against Time.”

I started this project knowing I wanted to work and grow in TouchDesigner. The conceptual framework grew around tests in audio reactivity, and the project was mainly the result of a handful of experiments. I tried many different types of projection, recording, and methods for producing things in TouchDesigner, and got lots of help from the people around me. The one thing I wish I had time to fix is tying the audio reactivity to opacity rather than brightness, so that I wouldn’t need to interact with new pixels to create color.

 

 

Person in Time, Informal Update

Waorb update: the waorb is becoming warmer.

Seriously though… Here’s the waorb prototype circuit.

As you can see, it’s basically the circuit from the waorb proposal (minus the on/off switch, and with the heater powered by an external power supply for now), with the addition of a momentary switch and a Cockcroft-Walton ladder to drive the MOSFET that switches the heater.

Let’s break it down a bit more!

HEATER ELEMENT TEST

The heater is made from 40-gauge steel wire (i.e., about the width of a human hair), sized to achieve the right resistance (~10 ohms). The batteries I intend to use in the final waorb should be able to drive about a 6.6 W heater, so I tested the heating elements at the current and voltage that would produce 6.6 W to see how well they could heat the brass.

Above: my horrible math notes.
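In case the notes above are hard to read, here is the same arithmetic redone from the stated targets (about 6.6 W into roughly 10 Ω). The current and voltage printed below are derived from those two numbers, not measured values.

#include <cmath>
#include <cstdio>

int main() {
  const double P = 6.6;               // watts the final batteries should supply
  const double R = 10.0;              // ohms, target heater resistance
  const double I = std::sqrt(P / R);  // current needed: about 0.81 A
  const double V = I * R;             // voltage needed: about 8.1 V
  std::printf("I = %.2f A, V = %.2f V\n", I, V);
  return 0;
}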

I found that on the test brass sheet (about 6 times as much brass as will be in the waorb) I was able to heat the brass to about 89.0 °F.

This was important because I now know:

  1. That the waorb should perform as hoped (in terms of heating power and thermal-tape attachment), and that the heating elements I created don’t catch fire.
  2. The number and length of wires I will need in the final waorb (total resistance is directly proportional to the length of the wire and inversely proportional to the number of parallel wires of that length). I will need 3 wires at ~5.5″ for each heating element; a quick sanity check is below.
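Here is that quick sanity check, using only the numbers above (3 parallel strands, ~5.5″ each, ~10 Ω per element). The ohms-per-inch figure it prints is what those numbers imply for this wire, not a looked-up spec.

#include <cstdio>

int main() {
  const double targetOhms = 10.0; // desired resistance per heating element
  const int    strands    = 3;    // identical wires in parallel
  const double lengthIn   = 5.5;  // inches per strand
  // N identical strands in parallel give R_total = R_strand / N,
  // so each strand needs to be N times the target resistance.
  const double perStrand = targetOhms * strands;  // 30 ohms per strand
  const double ohmsPerIn = perStrand / lengthIn;  // implied ~5.5 ohms/inch
  std::printf("per strand: %.0f ohm, implied wire: %.1f ohm/in\n",
              perStrand, ohmsPerIn);
  return 0;
}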

CIRCUIT BUILDING

I then built the circuit needed to power an RGB LED on a breadboard with an Arduino and installed the Arduino IDE (as shown in the design).

After that I built the temperature-sensing circuit (as shown in the design) and used the Arduino’s serial monitor to confirm it was working. Once it was, I took readings to calibrate it.

Reading | Temp (°F) | Temp (°C) | Temp (tenths of °C)
880     | 71.5      | 22.2      | 222
680     | 85.5      | 30.1      | 301
580     | 96.9      | 35.5      | 355
640     | 94.2      | 35.0      | 350
I used these values to find a line of best fit to convert the sensor reading into 10ths of degrees C (so they can be stored accurately without floating point numbers).
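For reference, an ordinary least-squares fit over the four readings in the table reproduces the slope and intercept used in the program below (m ≈ -0.4639, b ≈ 629). This is just an off-board check in plain C++.

#include <cstdio>

int main() {
  const double x[] = {880, 680, 580, 640}; // raw analogRead values
  const double y[] = {222, 301, 355, 350}; // temperature in tenths of a degree C
  const int n = 4;

  double sx = 0, sy = 0, sxx = 0, sxy = 0;
  for (int i = 0; i < n; ++i) {
    sx  += x[i];
    sy  += y[i];
    sxx += x[i] * x[i];
    sxy += x[i] * y[i];
  }
  const double m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  const double b = (sy - m * sx) / n;
  std::printf("m = %.4f, b = %.0f\n", m, b); // roughly -0.4639 and 629
  return 0;
}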

After this I added the heater element back into the circuit (almost as shown in the design; I added a Cockcroft-Walton ladder to make the MOSFET switch correctly).

After this I added a momentary switch to change modes in the final program.

PROGRAM  

Currently the program can: switch between modes when the momentary switch is double-clicked, showing a different LED color for each mode; take a temperature reading when the switch is single-clicked in listening mode, showing a white flash from the LED; and record that temperature, in tenths of a degree C, to an array.

TODO: add the heater control, make the LEDs change dynamically with temperature in listen and repeat modes, and probably some other things I’m forgetting.

//vars for button stuff
volatile bool pressed;            //set in the ISR when the switch changes state
volatile unsigned long pressMil;  //time of the last switch event
int pressnum = 0;                 //presses counted in the current 500 ms window
int mode = 0;
long test;                        //signed so the window test stays valid across millis() rollover

//vars for recalibrating the temp sensor (form: y = m*x + b)
//y is degrees in 10ths of C, x is the raw sensor reading
float m = -0.4639;
int b = 629;

//array for temps recorded
int currentSlot = 0;
int tempsRec[100];

//mode 0 (ie. startup/idle mode): solid blue LED
void idle() {
  analogWrite(9, 0);
  analogWrite(10, 0);
  analogWrite(11, 150);
}

//mode 1 (ie. listening mode): solid green LED
void listens() {
  //temp sensing (not used for the LEDs yet; see TODO above)
  int tempSensor = analogRead(A5);
  int temp = m * tempSensor + b; //temp in 10ths of degrees C
  //LEDs
  analogWrite(9, 0);
  analogWrite(10, 150);
  analogWrite(11, 0);
}

//the picture function (ie. take a temp reading and store it), with a white flash
void picture() {
  analogWrite(9, 255);
  analogWrite(10, 255);
  analogWrite(11, 255);
  int tempSensor = analogRead(A5);
  int temp = m * tempSensor + b; //temp in 10ths of degrees C
  if (currentSlot < 100) {       //don't run off the end of the array
    tempsRec[currentSlot] = temp;
    currentSlot++;
  }
  delay(500);
  analogWrite(9, 0);
  analogWrite(10, 0);
  analogWrite(11, 0);
}

//mode 2 (ie. run the heater correctly -- heater control still TODO): orange LED
void repeat() {
  analogWrite(9, 150);
  analogWrite(10, 10);
  analogWrite(11, 0);
}

//ISR for the momentary switch
void switchPressed() {
  pressed = true;
  pressMil = millis();
}

void setup() {
  pinMode(9, OUTPUT);
  pinMode(10, OUTPUT);
  pinMode(11, OUTPUT);
  pinMode(3, INPUT_PULLUP); //momentary switch to ground, internal pull-up
  attachInterrupt(digitalPinToInterrupt(3), switchPressed, CHANGE);
  pinMode(A5, INPUT);
  analogReference(INTERNAL);
  Serial.begin(9600);
}

void loop() {
  //this whole part controls the mode selection and picture action:
  //test > 0 means we are still within 500 ms of the last switch event
  test = (long)(pressMil + 500 - millis());
  if (pressed == true) {
    pressnum++;
    delay(250); //crude debounce
    pressed = false;
  }
  if (test < 0) { pressnum = 0; }                 //window expired: forget the presses
  else if (pressnum >= 2) { mode++; delay(500); } //double press: next mode
  if (test > 0 && mode == 1 && pressnum == 1) { picture(); } //single press in listening mode
  if (mode == 3) { mode = 0; } //wrap back around to idle

  //these run the mode code depending on the mode
  if (mode == 0) { idle(); }
  else if (mode == 1) { listens(); }
  else if (mode == 2) { repeat(); }
}

 

MarthasCatMug – Person in Time Proposal

I want to use motion capture of dance movements to create 3d models. The classical Chinese ballet and many folk/minority group dances I learned when I was young share many basic movements. I’m currently rediscovering those components and am interested in capturing myself attempting these isolated movements in mo-cap. I want to then turn the recording of those movements into still 3d sculptures. I even have a proper display method this time: Sketchfab!

I’m basing my ideas for movements on an instructional video I found online, but I think I will also consult my old dance teachers. As for the mo-cap, I’ve been told Justin Macey runs a motion capture studio in Wean and that I should contact him. There also seems to be a bit of documentation on how to turn motion capture data not into animation, but into 3D sculpture. Mostly I’ve found student project documentation pages, but I am confident that I will be able to create models from motion capture points. The main hurdles I have are thankfully separate: to capture the motions and to figure out a workflow to create models.
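To make the second hurdle a little more concrete, here is the kind of minimal workflow I have in mind, sketched in C++ with an assumed input format (one “x y z” marker sample per line, any number of markers and frames). It simply flattens every sampled marker position from a whole take into a single OBJ point cloud, which could then be meshed or uploaded as-is.

#include <cstdio>

int main() {
  // Hypothetical input: one line per marker per frame, "x y z".
  FILE* in  = std::fopen("markers.txt", "r");
  FILE* out = std::fopen("sculpture.obj", "w");
  if (!in || !out) return 1;

  double x, y, z;
  while (std::fscanf(in, "%lf %lf %lf", &x, &y, &z) == 3) {
    // Every sampled marker position becomes one vertex in the "sculpture".
    std::fprintf(out, "v %f %f %f\n", x, y, z);
  }
  std::fclose(in);
  std::fclose(out);
  return 0;
}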

Some MoCap Visualization Projects I’ve been looking at:

https://sdm.scad.edu/faculty/mkesson/vsfx705/wip/best/winter18/karlee_hanson/mocap-data.html

https://sdm.scad.edu/faculty/mkesson/vsfx705/wip/best/fall12/ziming_liu/parsing-mocap-data/index.html

bumble_b-PersonInTimeProposal

Like all the rest of my ideas in this class, this idea comes from one of our instructors (Golan this time!).

I’m an absolute crybaby. I cry about everything. I cry when I watch game shows (it makes me happy when they win), I cry when I’m angry or frustrated. Hell, I’m crying right now while typing this (okay, jk).

Crying is a really important part of my life, and sometimes when I’m feeling a lot of emotion, I just make myself cry because I know it’ll instantly make me feel better. Basically, I love crying and am so totally fascinated by it.

Considering I also happen to be in the School of Drama, I know many-a-person who claims to be able to cry on cue. Golan thought it would be interesting to put those claims to the test… by recording someone crying on cue, from start to finish (start being when they begin trying and finish being some sort of condition, perhaps that a tear must reach their chin or when they just can’t cry anymore… this finish condition is something I’m still trying to think through).

In my head, I see this looking a lot like Andy Warhol’s Screen Tests… in terms of framing, not quality, considering that project is from the 60s.

I’d also be interested in seeing what different cameras we have and whether I can capture people’s tears in some interesting way with any of them, like zooming in super close or filming in slow motion.

Person In Time Proposal

My dad once mentioned to me that he used to tell people their fortunes by reading their palms. Ever since, I have been intrigued by the shapes and patterns I could find on and under people’s hands.

For my Person in Time project, I want to capture similarities between the forms created by lines in the hands and the organic compositions of trees. I am thinking of taking pictures of people’s hands using both a regular DSLR and near-infrared imaging techniques. The DSLR will capture the contours and lines in someone’s hands, and I will use those to form the trunks of “trees.” The near-infrared images will allow me to capture people’s veins, and I will create branches for these trees by separating and collaging those formations. My hope is to gather enough sets of images to create an entire “forest” of these trees.

I’d either present them all as one image, or as a dizzying video which starts slow and later moves faster and faster through the forest (like at 0:25 of https://www.youtube.com/watch?v=4eBl7U64_XE).

 

Proposal - Person in Time

I want to make a live-capture installation that takes footage of people walking in front of a large screen and then:

  1. segments their bodies into individual parts
  2. records the motion of that part over a second at a time
  3. plays that motion back on the screen, offset in time for each part, to give a sort of synchronized-swimming effect (see the sketch below)
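A minimal sketch of how step 3 could work, with hypothetical Frame and SegmentMask types standing in for whatever the capture pipeline actually produces: each body part is drawn from a different point in a short frame history, so the parts replay the same motion staggered in time.

#include <vector>

struct SegmentMask { /* per-pixel mask for one body part on one frame */ };
struct Frame { std::vector<SegmentMask> segments; };  // one mask per body part

// history holds the most recent frames, newest last. For segment i we read the
// frame that is (i * offsetFrames) behind the present, clamped to what we
// actually have, and composite the masks into the output frame.
Frame composeOffsetFrame(const std::vector<Frame>& history,
                         int segmentCount, int offsetFrames) {
  Frame out;
  int newest = static_cast<int>(history.size()) - 1;
  for (int i = 0; i < segmentCount; ++i) {
    int idx = newest - i * offsetFrames;
    if (idx < 0) idx = 0;  // not enough history yet: fall back to the oldest frame
    out.segments.push_back(history[idx].segments[i]);
  }
  return out;
}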

marimonda – PersonInTimeProposal

For this project, I am interested in exploring human movement and interactions by programming a UR5 to mimic/mirror a person.

This is a UR5 Robot:

(Image: UR5 robot arm manipulator, Universal Robots)

(Very cute right?)

I am interested in exploring this for two reasons:

  1. I think the performance is inherently a timed experience; exploring the interaction between a human and a robot through mimicry is an interesting approach to the idea of “A Person In Time”.
  2. I want to learn how to work well with the UR5 and explore the ways in which a human can perform/otherwise collaborate with a robot.

Functionally, the UR5 will detect a person’s face using a wide-angle camera and center its field of view on the person’s head, attempting to move as close as possible to them, so that if the person moves down or to the side, the robot keeps looking at them as they move, with its joints paralleling the human’s motion.
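Here is a rough sketch of the sensing side only, assuming OpenCV’s stock Haar face detector for the wide-angle camera. moveToward() is a hypothetical placeholder for however the UR5 ends up being commanded (URScript, RTDE, etc.) and just logs the offset for now.

#include <cstdio>
#include <vector>
#include <opencv2/opencv.hpp>

// Hypothetical placeholder: convert the face's normalized offset from image
// center into a small motion command for the arm. Here it only logs.
void moveToward(double dx, double dy) {
  std::printf("offset: %+.2f, %+.2f\n", dx, dy);
}

int main() {
  cv::VideoCapture cam(0);
  cv::CascadeClassifier faceDetector;
  faceDetector.load("haarcascade_frontalface_default.xml");

  cv::Mat frame, gray;
  while (cam.read(frame)) {
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    std::vector<cv::Rect> faces;
    faceDetector.detectMultiScale(gray, faces);
    if (!faces.empty()) {
      // Center of the first detected face, as a fraction of the frame size.
      const double cx = (faces[0].x + faces[0].width  / 2.0) / frame.cols;
      const double cy = (faces[0].y + faces[0].height / 2.0) / frame.rows;
      // How far the face is from the middle of the image, in [-0.5, 0.5].
      moveToward(cx - 0.5, cy - 0.5);
    }
  }
  return 0;
}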