Final Project — Revision of Project Two (Smalling)

Hello darlings. My final project is essentially a revised and fixed version of my previous project, smalling.

To recap, particularly for any new guests, I’m interested in the form of being small, the action of having to contort oneself, and how that works when it has to happen consciously, with no immediate threat or reward. This idea came out of considering more formal elements of small bodies along with “smallness” as a symbolic item (as it’s used in movies and other media), and smallness as a relatable concept. Some of the images I was looking at were these:

I had ChatGPT write most of my code, and then Golan, Leo, Benny, and Yon helped fix anything that I couldn’t figure out past that, so I can’t explain most of the technical details here. What I can say is that it’s a pretty basic green screen setup: the program turns anything green to white based on hue values, runs for ten to thirty seconds (this number has changed over the course of the project), and retroactively takes a picture of the participant at their smallest size during that time. The tests from the previous version looked like this:

The problem with this, which I didn’t address in the last critique, was that it measured the size of the person compared to the canvas, not compared to their previous size. This privileged people with smaller frames, which was never my intention; it was a detail genuinely forgotten in the mess of me ChatGPT-ing my code. This is fixed now (wow!)
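For anyone curious, the core of the fixed logic looks roughly like this. This is a minimal sketch, not my exact (ChatGPT-assembled) code: the camera index, the hue range, and the Esc-to-quit control are all stand-in assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)            # assumed camera index
baseline = None                      # person's pixel count at full height
smallest_ratio = 1.0
smallest_frame = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Pixels in this (assumed) hue range count as the green screen.
    green = cv2.inRange(hsv, (40, 60, 60), (85, 255, 255))
    person_pixels = np.count_nonzero(green == 0)   # everything that isn't green

    if baseline is None:
        baseline = max(person_pixels, 1)   # first frame = full height = 100%

    ratio = person_pixels / baseline
    if ratio < smallest_ratio:       # smallest vs. your own start, not vs. the canvas
        smallest_ratio = ratio
        smallest_frame = frame.copy()

    display = frame.copy()
    display[green > 0] = 255         # turn the green backdrop white
    cv2.imshow("smalling", display)
    if cv2.waitKey(1) == 27:         # Esc ends the round
        break

if smallest_frame is not None:
    cv2.imwrite("smallest.png", smallest_frame)  # the retroactive photo
cap.release()
```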

Here’s a mini reel because WordPress doesn’t like anything that isn’t a teeny tiny gif compressed to the size of a molecule ♥️  sorry

This is essentially where it’s at right now. I’m mostly looking for feedback on the potential output and duration of the piece. As far as duration goes, I noticed that even with more time, most people tended to immediately get as small as possible and were then left to backtrack or just sit there. On the other end, a few participants were kind of shocked by the short amount of time, even when they knew what it was before starting.

As for output, I had somewhat decided on a grid of photos of people at their smallest that’s automatically added to as more people participate, but I started feeling not so good about that. It would look something like this (and be displayed on a second monitor next to the interactive piece):

Some of the other ideas mentioned in my proposal include:

  • physical output (Golan mentioned a receipt printer, but open to other ideas here) example here: https://www.flickr.com/photos/thart2009/53094225345
  • receipts could exist as a labeling system or just an image
  • scoreboard on a separate monitor
  • website with compiled images of people at their smallest
  • email sent to participants of their photo(s)
  • value-to-descriptor system
  • some other form of data visualization or image reproducing (???)

I’m not married to any of these. If you have any ideas, please let me know. I just don’t want anything to become too murky because it’s being dragged down by a massive unrelated piece of physical/digital output/data visualization/whatever. There’s already the element of interaction that makes this project inherently game-like, but I’m trying to keep it from becoming too gimmicky.

 

Final Project: Thanksgiving Pupils

This project is an extension of my previous project, People In Time – Pupil Stillness. I had essentially developed a technique for pupil detection at longer distances. With this project I wanted to use it to capture something less performed and more organic.

With the opportunity of Thanksgiving last week, I decided to film my mother and grandmother during Thanksgiving dinner. My grandmother has some hearing issues, so she is usually less engaged in conversation; I often view her as a fly on the wall. My mom is also often on the reserved side during meals, so I thought it could be interesting to put the spotlight on them.

I set up two cameras: one on a shelf zoomed in on my mom:

And another (which I forgot to photograph) hanging from the pots-and-pans holder using a magic arm.

No one noticed the cameras, which was great because I didn’t want them to change their behavior from knowing they were being filmed.

Here is a side-by-side, two-minute segment, centered and rotated in alignment with their eyes:

Extra if time:

drive link:
https://drive.google.com/file/d/1oq-m-V6lfHwxYX60Y2QresFYoX-9KRCF/view?usp=drive_link
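For reference, the centering/rotation step can be done with one affine warp per frame. This is a rough sketch that assumes the per-frame pupil coordinates (left_eye, right_eye) already come out of my detector; the output size is an arbitrary choice.

```python
import cv2
import numpy as np

def align_frame(frame, left_eye, right_eye, out_size=(512, 512)):
    """Rotate so the eyes are level, and center their midpoint in the output."""
    mid = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))       # tilt of the eye line
    M = cv2.getRotationMatrix2D(mid, angle, 1.0)
    # Shift the eye midpoint to the center of the output frame.
    M[0, 2] += out_size[0] / 2 - mid[0]
    M[1, 2] += out_size[1] / 2 - mid[1]
    return cv2.warpAffine(frame, M, out_size)
```

The side-by-side is then just np.hstack of the two aligned frames, one per camera.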

Natural found camera shutters

My proposal is to get liquid emulsion, cover large surfaces with it, and then test natural found shutters.

Ex: Opening a door, walking through it, and shutting the door again is a natural shutter. The capture happens if the room I exit is lit, the room I enter is dark, and the wall opposite the doorway in the dark room has an emulsion surface.

A motion sensor light that times out after a while is a natural shutter.

The sun rising and setting is a natural shutter.

I think something standardized might be nice if I wanted to compare things as a typology. Something I mixed and applied myself would not be standardized enough, so I might have to buy something premade, but that might not work at scale.

I am looking at other light-sensitive options like natural dyes and cyanotypes; I will be doing media studies and trying to get costs down.

Final Project Proposal

Idea 1: Thermal sunset/sunrise

Thermal Imaging Training with WildlifeTek - Training for Ecologists - Bat Conservation Trust

Can Heat Damage a Camera Lens? How Hot Is Too Hot? – joshweissphotography.com

My first idea is to capture the warming of a sunrise or the cooling of a sunset. I would essentially set up a regular camera and a thermal camera and capture the entire sunrise/sunset. I’d process the videos as time lapses and put them side by side. To expand on this and make it more ‘experimental’, I could also place an interesting object in the camera’s view to see how its relative temperature changes over the course of the sunrise/sunset. One idea is a cup of ice in front of a sunset. You would expect the ice to melt, increasing in temperature, while the environment decreases in temperature, which could create a cool effect.
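If I go this route, the side-by-side compositing could look roughly like this. A sketch only: the filenames, speed-up factor, and output frame rate are placeholder assumptions, and it presumes both clips start at the same moment.

```python
import cv2

SPEEDUP = 60                                   # keep 1 frame out of every 60

vis = cv2.VideoCapture("visible.mp4")          # placeholder filenames
thm = cv2.VideoCapture("thermal.mp4")
out = None
i = 0
while True:
    ok1, f1 = vis.read()
    ok2, f2 = thm.read()
    if not (ok1 and ok2):
        break
    if i % SPEEDUP == 0:
        f2 = cv2.resize(f2, (f1.shape[1], f1.shape[0]))  # match frame sizes
        frame = cv2.hconcat([f1, f2])          # visible | thermal
        if out is None:
            h, w = frame.shape[:2]
            out = cv2.VideoWriter("timelapse.mp4",
                                  cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
        out.write(frame)
    i += 1
if out is not None:
    out.release()
```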

Idea 2: Fractals in Nature

Fractals In Nature: Develop Your Pattern Recognition Skills

4,200+ Fractal Vegetable Stock Photos, Pictures & Royalty-Free Images - iStock

My second idea is to capture fractals in nature at both a macro and a microscopic level. I think it could be interesting to juxtapose a plant’s fractal shape with its microscopic structure. It could also be interesting to try to mathematically formalize this structure and create digital versions of it.
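As a toy example of the “mathematically formalize” part: the Barnsley fern is a classic iterated function system whose four affine maps mimic a fern frond. This sketch uses Barnsley’s published coefficients, not measurements of a real plant.

```python
import random
import matplotlib.pyplot as plt

x, y = 0.0, 0.0
xs, ys = [], []
for _ in range(50_000):
    r = random.random()
    if r < 0.01:                       # stem
        x, y = 0.0, 0.16 * y
    elif r < 0.86:                     # successively smaller leaflets
        x, y = 0.85 * x + 0.04 * y, -0.04 * x + 0.85 * y + 1.6
    elif r < 0.93:                     # largest left leaflet
        x, y = 0.2 * x - 0.26 * y, 0.23 * x + 0.22 * y + 1.6
    else:                              # largest right leaflet
        x, y = -0.15 * x + 0.28 * y, 0.26 * x + 0.24 * y + 0.44
    xs.append(x)
    ys.append(y)

plt.scatter(xs, ys, s=0.1, color="green")
plt.axis("equal")
plt.axis("off")
plt.show()
```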

Final Project Proposal – Duq

For my final project I would like to create a “texture measurer” – a high-sensitivity topography sensor that displays its measurements directly onto the object being measured. The object to be measured is placed on the Sensel, and my AxiDraw then draws lines over it, with the lines perturbed by the amount of pressure the AxiDraw exerts on the Sensel through the measured material. This draws a representation of the texture of a material directly onto the material (assuming it is something that can be drawn on).

 

This is a quick test that I ran; you can see that the results are quite deterministic, which bodes well.
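A rough sketch of the software side, assuming the Sensel pressure readings have already been exported as a 2D array (the actual Sensel API isn’t shown) and that the output is an SVG of polylines for the AxiDraw; the gain, line spacing, and sensel pitch are made-up numbers.

```python
import numpy as np

pressure = np.load("sensel_frame.npy")       # assumed: rows x cols of forces
rows, cols = pressure.shape
GAIN = 5.0                                   # mm of deflection per unit force (arbitrary)
SPACING = 4.0                                # mm between parallel baseline rows
PITCH = 2.0                                  # assumed mm per sensel column

paths = []
for r in range(rows):
    pts = []
    for c in range(cols):
        x = c * PITCH
        y = r * SPACING - GAIN * pressure[r, c]   # pressure bends the line
        pts.append(f"{x:.2f},{y:.2f}")
    paths.append('<polyline fill="none" stroke="black" points="%s"/>'
                 % " ".join(pts))

svg = '<svg xmlns="http://www.w3.org/2000/svg">%s</svg>' % "\n".join(paths)
with open("texture.svg", "w") as f:
    f.write(svg)
```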

Final Project Proposal

I’m currently working on a project outside of class that combines a brain-sensing headband and a VR headset to create a mixed reality experience. The brain-sensing headband I borrowed from the studio doesn’t capture accurate EEG information, but I view this project as more of a proof of concept. I might continue working on it and make it my final project.

I’m also considering using some kind of special capture equipment to create an experimental project (e.g., motion capture, a drone), but I haven’t decided on that yet.

Final Proposal

1. Just using Eulerian magnification to explore different small movements, coupled with tracking/stabilization. People also said that sound would be interesting (movements of electronics + sound captured by – I forgot what that device is called). A minimal sketch of the magnification step is below, after these ideas.

2. A body watch? Somehow a real-time image, but still small enough to wear.

3. I wonder if I can set up a larger scene, i.e., a silent gathering around a table with participants unmoving, and see how their blood pulses align by processing the video so that only a small patch of skin is magnified on each person. A still-life-ish setting and lighting.
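Here’s the promised sketch of the magnification step, very stripped down: blur each frame spatially, temporally band-pass it with two exponential moving averages, amplify, and add back. The gain, smoothing factors, filename, and frame rate are assumptions, and the real method uses a full pyramid rather than a single blur.

```python
import cv2
import numpy as np

ALPHA = 30.0                 # amplification factor (assumed)
LO, HI = 0.05, 0.15          # IIR smoothing factors ~ pulse band (assumed)

cap = cv2.VideoCapture("input.mp4")            # placeholder filename
lowpass_fast = lowpass_slow = None
out = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    f = cv2.GaussianBlur(frame.astype(np.float32), (21, 21), 0)
    if lowpass_fast is None:
        lowpass_fast = lowpass_slow = f
    # Band-pass = difference of two exponential moving averages.
    lowpass_fast = (1 - HI) * lowpass_fast + HI * f
    lowpass_slow = (1 - LO) * lowpass_slow + LO * f
    band = lowpass_fast - lowpass_slow
    magnified = np.clip(frame + ALPHA * band, 0, 255).astype(np.uint8)
    if out is None:
        h, w = magnified.shape[:2]
        out = cv2.VideoWriter("magnified.mp4",
                              cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
    out.write(magnified)
if out is not None:
    out.release()
cap.release()
```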

Pittsburgh Bridges (Continued)

For my final project, I plan on using photogrammetry to expand on my Typology project of Pittsburgh’s bridges.

My goal is to create a collection of bridge captures that highlights our aging infrastructure, and the beauty present within it. My intention is to show the beauty of the bridges when taken out of context, and to show the different types of erosion that can affect them.

Background

I came up with the idea for this project in September after reading this article about the state of steel bridges in the US. The authors found that over 25 percent of all bridges in the US will collapse by 2050, due to the extreme temperature fluctuations throughout the planet as a result of climate change. I immediately connected this to Pittsburgh and the Fern Hollow bridge collapse. There are almost 450 bridges within Pittsburgh city limits (446 to be exact), and I found this report from June 2024 that concluded that 15 percent of these bridges are in poor condition and at risk of failure right now.

As my project developed, I began to realize how beautiful the undersides of bridges actually are. My interest became less about their safety and more about their aesthetics, especially because the undersides of bridges are usually only understood for their utility. We pay attention to bridges while driving across them or seeing them from a distance, but we rarely notice them while driving under them, even though the undersides are often just as beautiful as the tops. These parts of the bridges are almost always designed to prioritize safety and functionality over beauty, but they are often incredibly beautiful anyway.

Method

I plan to use photogrammetry and Agisoft to create my 3D models. In the past I used Polycam, so I hope to expand my project and capture more of the bridges’ detail by using a more professional camera and better software. My original scans were pretty crunchy, and at times the detail didn’t translate properly, so I’m hoping that a better camera and professional software will lead to better results. That being said, I’m more interested in volume than perfection, so I will most likely prioritize having more scans over having a couple of really great ones and perfecting my process. I think the bridges are most powerful when placed in relation to each other rather than isolated, so I want as many as I can get.

My biggest concern right now is that I’m not going to do it correctly! It is actually incredibly challenging to do photogrammetry on something this large. It’s hard to standardize camera angles and lighting, it takes an extraordinary amount of time to do correctly, and not many people have made tutorials for capturing something at this scale. In my first iteration, I only had about a 65% success rate, which is difficult when each bridge takes so much of my time; if I do one incorrectly, that’s an hour or more of work left unusable. This was the biggest benefit of Polycam, as I could find out in real time if a capture had failed and potentially fix it before moving on. Now that I’m more familiar with photogrammetry, I’m hoping to get better results, but I’m worried about the inconsistent lighting conditions that come with being outside and the reflective nature of metal in daylight. I’m trying to remain flexible, and I think it’s possible Polycam will actually be my best option, but it’s worth trying more professional software. I am also thinking about using Polycam to take the captures, but then processing them through Agisoft.

Additional Resources:

More about bridge diagnosis methods

Non-paywalled article on bridge prognosis

Cause of the Fern Hollow Bridge collapse

Final Project Proposal – Identity Collections

TL;DR: I’m interested in illustrating surveillance and facial collection in real time as a visual experience, allowing viewers to interact with the captured collection of faces from the viewers who came before them.

Inspiration

Tony Oursler, Face to Face, 2012

 

Michael Naimark, Talking Head, 1979

 

Tony Oursler, Obscura, 2014

Nancy Burson, Human Race Machine, 2000

 

Concept

I’m interested in the idea of taking something fun that makes people want to “stick” their heads into some contraption, and then doing a complete 180 that illustrates the subtle ways of surveillance.

 

 

Possible Process?

Golan recommended TouchDesigner for this particular project, but I’m debating the possibility of OpenProcessing and/or Python in VSCode.

 

Reverse Projection Sculpture could be the Initial Capture!

Nica & I talked about projecting the individual’s face onto a sculpture in front of them in real time. If I were able to create a cardboard box with black fabric on the inside/outside, then when the individual stuck their face inside, they would see the reverse projection of their face onto a frosted mask. The individual sticking their head inside is the “opt in,” and the video would most likely capture their shock and confusion as they try to make out who it is. The mask is of a specific size, so no matter who sticks their face in, they will look different – especially if I apply a face filter. I know that if I stuck my face in and saw a reverse projection, my mouth would drop open, I’d say “woah, that’s crazy,” and I likely wouldn’t recognize myself at first; I’d also then try to see it replicate my mouth movements. That video of me trying to put together my reflection and process what I see in the box would be projected onto the grid, and would be the continuous video seen of me.

 

In terms of ethics, I’ll ask people I know to stick their heads in. If I show this at open studio, I can write that the faces will be deleted after the performance, or I could ask for signatures from individuals willing to let me “keep their face” for art purposes. Lots to think about here…

 

 

Recording faces in real time will be annoying, so I need to build a rig that does the following (a rough sketch of the capture loop follows the list):

    1.   How to capture the video (Reverse Projected Sculpture)
      1. I think Golan’s OpenProcessing sketch holds a lot of promise, esp. OpenCV!!
      2. FaceWork – https://openprocessing.org/sketch/2219047
      3. Alternate possibilities include more hardcore facial imaging:
        1. Google Image Segmenter – Extract specific regions of image you want
          1. https://ai.google.dev/edge/mediapipe/solutions/vision/image_segmenter
        2. Google Face Landmark Detection Guide
          1. https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker
    2. Crop video?
      1. If everything’s set at a specific camera and the rig consistently captures each individual using Tony Oursler’s method, it could be okay!
    3. Develop rules – does the video capture for 5 seconds? When does it start? After 1 s of detection? When does it end? Can I make it automatic, or do I need to be there like a fairgrounds person?
    4. Save the video to a folder on the desktop
    5. Develop another set of code that reads the most recent video featured on the desktop
    6. Adds the most recent video to a projection grid of faces
      1. I would love to randomize faces, but I’m not sure that’s realistic in this timeframe.
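Here’s the rough sketch of the capture loop (steps 1, 3, and 4): record a 5-second clip once a face has been continuously detected for about a second, then save it with a timestamp. The Haar cascade file ships with OpenCV; the durations, camera index, and save folder are placeholder assumptions.

```python
import cv2
import os
import time

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)                         # assumed camera index
OUT_DIR = os.path.expanduser("~/Desktop/faces")   # assumed save folder
os.makedirs(OUT_DIR, exist_ok=True)

first_seen = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)

    if len(faces) == 0:
        first_seen = None                         # lost the face; reset
        continue
    if first_seen is None:
        first_seen = time.time()
    if time.time() - first_seen >= 1.0:           # 1 s of sustained detection
        h, w = frame.shape[:2]
        path = os.path.join(OUT_DIR, f"{int(time.time())}.mp4")
        out = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
        t0 = time.time()
        while time.time() - t0 < 5.0:             # record a 5 s clip
            ok, frame = cap.read()
            if not ok:
                break
            out.write(frame)
        out.release()
        first_seen = None                         # wait for the next person
```

The grid side (step 5) could then just poll for the newest file, e.g. max(glob.glob(OUT_DIR + "/*.mp4"), key=os.path.getmtime).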

 

Golan recommended TouchDesigner, and after much research, I found I can add videos in real time using two options (thank you to this anonymous person):

“You could use the Folder DAT to monitor the files in the network and load them into individual Movie File In TOPs as you want to show them. In this case you wouldn’t be using the ‘Sequence of Images’ mode in the Movie File In TOP whereby it’s pointing at a directory.

Another option is to use the ‘Specify Index’ mode of the Movie File In TOP instead of ‘Sequential’, which allows you to set the index manually. This way when you reload the node it won’t start from the beginning again.” https://forum.derivative.ca/t/realtime-update-folders-contents/7744

 

 

Ideas for capturing the video consistently:

Tony Oursler’s method of facial capturing

Isla gave an interesting idea for making it fun for people to stick their heads in, almost like at a fair – a drastic contrast with the idea of capturing people’s faces.

 

Quick Mockup of Possible Rig Process

Final project proposal

My final project is essentially going to refine my most recent project: people making themselves small with little to no threat or reward. Reminder image here:

One of the issues with this project when I presented it recently was that the program was outputting the pixel value of a person within the canvas. My intention was not this, but for it to give the “percentage of person” compared to their previous percentage.

Say you’re standing at full height, facing the camera. Your size, or pixel value, should be 100%; if you were to double over, it should be around 80%; and if you lay down and got as small as physically possible, maybe it would be somewhere near 60%. This was the intention, but not the outcome, and it is one of the things I would like to fix in the adjusted version. This way, priority won’t be given to people with naturally smaller figures.

Because this makes the person’s initial size much more important, I feel like I’ll also need to figure out a way to make the “game” elements much cleaner and easier to navigate. Ex: “stand in the frame,” the game auto-starts, “make yourself as physically small as possible,” and the timer and pixel value are clearly visible to the viewer at all times. This way it can be operated by anyone, and they will have clear preparatory instructions without me just talking at them. I don’t want to have to sit behind a screen during the showcase. A rough sketch of that flow is below.
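This is a minimal sketch of the auto-start flow as a tiny state machine. It assumes a person_pixels value per frame (the non-green pixel count from the green-screen code); the pixel threshold, settle time, and 20-second round length are placeholders.

```python
import time

STATE = "WAITING"          # WAITING -> COUNTDOWN -> PLAYING -> RESULT
baseline = None
smallest = 1.0
start = None

def update(person_pixels):
    """Call once per frame; returns the text for the on-screen prompt."""
    global STATE, baseline, smallest, start
    if STATE == "WAITING":
        if person_pixels > 5000:                # someone stepped into frame
            STATE, start = "COUNTDOWN", time.time()
        return "stand in the frame"
    if STATE == "COUNTDOWN":
        if time.time() - start > 3:             # let them settle, lock baseline
            baseline, smallest = person_pixels, 1.0
            STATE, start = "PLAYING", time.time()
        return "hold still..."
    if STATE == "PLAYING":
        smallest = min(smallest, person_pixels / baseline)
        remaining = 20 - (time.time() - start)  # assumed 20 s round
        if remaining <= 0:
            STATE = "RESULT"
        return f"make yourself small! {remaining:.0f}s  {100 * smallest:.0f}%"
    return f"you got down to {100 * smallest:.0f}%"
```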

I still don’t want it to be too game-y in a kitschy sense, so it’ll probably be closer to an interactive art piece. The thing that encourages people to interact is the main point of contention in my thought process, and it’s the portion I think I could use the most input on. I have a lot of ideas, and I feel kind of okay-ish about them all.

Some of those ideas:

  • physical output (Golan mentioned a receipt printer, but open to other ideas here) example here: https://www.flickr.com/photos/thart2009/53094225345
  • scoreboard on a separate monitor
  • website with compiled images of people at their smallest
  • email sent to participants of their photo(s)
  • some other form of data visualization or image reproducing (???)

Like I mentioned above, these all feel kind of “meh.” Either I’m interested in an idea but it feels disjointed from the general vibe of the project, or it fits well with the project but I don’t really care about it as a concept. I liked Heidi’s word choice of “turtling,” and I think there are ways the game/interaction could become more specialized like that. This isn’t just “getting small,” it’s repeating a familiar/symbolic gesture. (Using my bus example:) “rush hour [bus line],” with the instruction to get small, and a receipt printout with some “thanks for riding”-esque text.

I’m most concerned about unnecessarily weighing it down, which I think the above concept may do, but I think it might be (?) in my favor to think about things that way. IDK!