An Attempt at Exhausting A Park In Pittsburgh

When Ginger came to teach us spectroscopy and we were asked to collect samples outside to look at under a microscope, I unexpectedly discovered a microscopic bug on a leaf (I couldn't see it with my plain, human eyes). This made me curious about what else I was missing. Then, for my first project, I spent a great deal of time in Schenley Park, trying to exhaust it à la Perec. In how many ways could I get to know Schenley Park? What was interesting in it?

In October, I collected a bunch of samples (algae from Panther Hollow Lake, flowers, twigs, rocks, leaves, etc.) and brought them to the studio to see what a deeper look would reveal. Below is a short video highlighting only a small portion of my micro explorations. A lot of oohs, ahhs, and silliness.

Last week, Richard Pell told us about focus stacking! I wanted to try using this technique to make higher-resolution images from some of my videos. While the images are not super high resolution because they are made from extracted video stills, you can still see more detail in a single stacked image. I ran a bunch of my captures through Helicon Focus to achieve this. Below are my focus stack experiments, along with GIFs of the original footage to show the focus depth changing.

Focus Stacking Captures:

Focus Stacking – Pollinated Flower Petals

Focus Stacking – Blue Flower

Focus Stacking – Dry Moss

Focus Stacking – Bugs on Flowers


 

You can see the movement of the bug over time like this!!

I also did this with a tiny bug I saw on this leaf; you can see its path.

Next, you can see some bugs moving around on this flower.

Now Mold!

A single strand of moss

 

A flower bud


Having done these microscopic captures for fun in October, really enjoying this close-up look at things, and being so mesmerized by the bugs' lives, I wanted to spend an extraordinary amount of time for my final project just looking at the same types of bugs found on Schenley organic matter. On November 25th, I set out again to collect a plethora of objects from Schenley Park, including flowers, strands of grass, twigs, acorns, stones, algae and water from the pond, charred wood, and more… I eagerly brought them to the studio, wondering what bugs I'd see. But of course, it had gotten cold. After an hours-long search under the microscope, I did find one bug. But by the time I found it, I was exhausted (and interested in other things).

In An Attempt at Exhausting a Park in Pittsburgh (Schenley Park), I ended up exhausting myself.

 

Natural found camera shutters

My proposal is to get liquid emulsion, cover large surfaces with it, and then test naturally found shutters.

Ex: Opening a door, walking through it, and shutting it again is a natural shutter. The capture happens if the room I exit is lit, the room I enter is dark, and the wall opposite the doorway in the dark room has an emulsion-coated surface.

A motion sensor light that times out after a while is a natural shutter.

The sun rising and setting is a natural shutter.

Something standardized might be nice if I want to compare things as a typology. An emulsion I mixed and applied myself would not be standardized enough, so I might have to buy something premade, but that might not work at scale.

I am also looking at other light-sensitive processes like natural dyes and cyanotypes. I will be doing media studies and trying to bring down the cost.

Final Project Proposal

Idea 1: Thermal sunset/sunrise

Thermal Imaging Training with WildlifeTek - Training for Ecologists - Bat Conservation Trust

Can Heat Damage a Camera Lens? How Hot Is Too Hot? – joshweissphotography.com

My first idea is to capture the warming of a sunrise or the cooling of a sunset. I would set up a regular camera and a thermal camera and capture the entire sunrise or sunset, then process the footage as a time lapse and put the two views side by side. To expand on this and make it more 'experimental,' I could also place an interesting object in the camera's view to see how its relative temperature changes over the course of the sunrise or sunset. One idea is a cup of ice in front of a sunset: you would expect the ice to melt, increasing in temperature, while the environment decreases in temperature, which could create a cool effect.

Idea 2: Fractals in Nature

Fractals In Nature: Develop Your Pattern Recognition Skills

4,200+ Fractal Vegetable Stock Photos, Pictures & Royalty-Free Images - iStock

My second idea is to capture fractals in nature, both at a macro and microscopic level. I think it could be interesting to see/juxtapose a plant’s fractal shape with its microscopic structure. It could also be interesting to try to mathematically formalize this structure and create digital versions of these structures.

Final Project Proposal – Duq

For my final project I would like to create a "texture measurer" – a high-sensitivity topography sensor that displays its measurements directly onto the object being measured. The object to be measured is placed on the Sensel, and my AxiDraw then draws lines over it, with the lines perturbed by the amount of pressure the AxiDraw exerts on the Sensel through the measured material. This draws a representation of a material's texture directly onto the material (assuming it is something that can be drawn on).
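The line-perturbation idea can be sketched in a few lines of Python. This is only an illustration: the pressure values below are made up, and a real rig would stream readings from the Sensel and send the resulting path to the AxiDraw.

```python
# Sketch of the texture-measurer idea: a straight pen path is offset
# perpendicular to its travel direction by the local pressure reading.
# The pressure profile here is fake; a real version would come from
# the Sensel as the AxiDraw drags across the material.

def perturb_line(pressures, length=100.0, gain=0.5):
    """Map pressure samples along a horizontal line to (x, y) pen points."""
    step = length / (len(pressures) - 1)
    # Offset each point vertically in proportion to local pressure.
    return [(i * step, p * gain) for i, p in enumerate(pressures)]

# Fake pressure profile: a bump in the middle of the material.
path = perturb_line([0, 1, 4, 9, 4, 1, 0], length=60.0, gain=0.5)
print(path)  # the middle of the line bulges where pressure peaks
```

The `gain` parameter would control how exaggerated the drawn texture appears relative to the actual relief.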

 

This is a quick test that I ran; you can see that the results are quite deterministic, which bodes well.

Final Project Proposal

I’m currently working on a project outside of class that combines a brain sensing headband and a VR headset to create a mixed reality experience. The brain sensing headset I borrowed from the studio doesn’t capture accurate EEG information, but I view this project as more of a proof of concept. I might continue working on that and make it my final project.

I'm also considering using some kind of special capture equipment to create an experimental project (e.g. motion capture, a drone), but I haven't decided on that yet.

Final Proposal

1. Just using Eulerian magnification to explore different small movements, coupled with tracking/stabilization. People also said that sound would be interesting (movements of electronics + sound captured by I forgot what that device is called).

2. Body watch? Somehow a real-time image, but still small enough to wear.

3. I wonder if I can set up a larger scene, i.e., a silent gathering around a table with participants unmoving, and see how their blood pulses align by processing the video so that only a small patch of skin is magnified on each person. Still-life-ish setting and lighting.
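The core operation behind ideas 1 and 3 – amplifying a faint periodic change such as a pulse – can be sketched on a single pixel's intensity over time. This is a toy version with a synthetic signal; a real Eulerian pipeline applies the same band-pass-and-amplify step per pixel on spatially decomposed video.

```python
import numpy as np

# Minimal sketch of Eulerian magnification on one pixel's time series:
# band-pass filter the temporal signal around the frequency of interest
# (e.g. a ~1 Hz pulse), scale it, and add it back to the original.

def magnify(signal, fps, lo, hi, alpha):
    """Amplify temporal variations between lo and hi Hz by factor alpha."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.fft.rfft(signal)
    band = (freqs >= lo) & (freqs <= hi)          # keep only the pulse band
    filtered = np.fft.irfft(spectrum * band, n=len(signal))
    return signal + alpha * filtered              # add the boosted band back

fps = 30
t = np.arange(fps * 4) / fps                      # 4 seconds of frames
# A faint 1 Hz "pulse" riding on a constant brightness of 100.
signal = 100 + 0.2 * np.sin(2 * np.pi * 1.0 * t)
out = magnify(signal, fps, lo=0.5, hi=2.0, alpha=20)
print(np.ptp(signal), np.ptp(out))                # oscillation is ~21x larger
```

The band limits (0.5–2.0 Hz here) and the amplification factor `alpha` are the knobs to tune per subject; they are illustrative values, not ones from any particular paper.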

Pittsburgh Bridges (Continued)

For my final project, I plan on using photogrammetry to expand on my Typology project of Pittsburgh's bridges.

My goal is to create a collection of bridge captures that highlights our aging infrastructure, and the beauty present within it. My intention is to show the beauty of the bridges when taken out of context, and to show the different types of erosion that can affect them.

Background

I came up with the idea for this project in September after reading this article about the state of steel bridges in the US. It found that over 25 percent of all bridges in the US will collapse by 2050, due to the extreme temperature fluctuations across the planet as a result of climate change. I immediately connected this to Pittsburgh and the Fern Hollow bridge collapse. There are almost 450 bridges within Pittsburgh city limits (446, to be exact), and I found this report from June 2024 concluding that 15 percent of these bridges are in poor condition and at risk of failure right now.

As my project developed, I began to realize how beautiful the undersides of bridges actually are. My interest became less about their safety, and more about their beauty, especially because the undersides of the bridges are usually only understood for their utility and not their aesthetics. We often pay attention to bridges while driving across them, or seeing them from a distance, but we don’t notice them as much while driving under them, and the undersides of bridges are often just as beautiful as the top. These parts of the bridges are almost always designed to prioritize safety and functionality over beauty, but they are often incredibly beautiful anyways.

Method

I plan to use photogrammetry and Agisoft to create my 3D models. In the past I used Polycam, so I hope to expand my project and capture more detail in the bridges by using a more professional camera and better software. My original scans were pretty crunchy, and at times the detail didn't translate properly, so I'm hoping that a better camera and professional software will lead to better results. That being said, I'm more interested in volume than perfection, so I will most likely prioritize having more scans over having a couple of really great ones or perfecting my process. I think the bridges are most powerful when placed in relation to each other, rather than isolated, so I want as many as I can get.

My biggest concern right now is that I'm not going to do it correctly! It is actually incredibly challenging to do photogrammetry on something so large. It's hard to standardize camera angles and lighting, it takes an extraordinary amount of time to do correctly, and not many people have made tutorials for capturing something at this size. In my first iteration, I only had about a 65% success rate, which is difficult when each bridge takes so much of my time; if I do one incorrectly, that's an hour or more of work left unusable. This was the biggest benefit of Polycam, as I could find out in real time if a capture had failed and potentially fix it before moving on. Now that I'm more familiar with photogrammetry, I should be able to get better results, but I'm worried about the inconsistent lighting conditions that come with being outside, and the reflective nature of metal in daylight. I'm trying to remain flexible, and I think it's possible Polycam will actually be my best option, but it's worth trying more professional software. I am also thinking about using Polycam to take the captures, but then processing them through Agisoft.

Additional Resources:

More about bridge diagnosis methods

Non paywalled article on bridge prognosis

Cause of the Fern Hollow Bridge collapse

Final Project Proposal – Identity Collections

TLDR: I'm interested in illustrating surveillance and facial collection in real time as a visual experience, allowing viewers to interact with the captured collection of faces from viewers who came before them.

Inspiration

Tony Oursler, Face to Face, 2012

 

Michael Naimark, Talking Head, 1979

 

Tony Oursler, Obscura, 2014

Nancy Burson, Human Race Machine, 2000

 

Concept

The idea of taking something fun that makes people want to "stick" their head into some contraption, and then doing a complete 180 that illustrates the subtle ways of surveillance, is interesting.

 

 

Possible Process?

Golan recommended TouchDesigner for this particular project, but I'm debating the possibility of OpenProcessing and/or Python in VS Code.

 

Reverse Projection Sculpture could be the Initial Capture!

Nica & I talked about projecting the individual's face onto a sculpture in front of them in real time. If I create a cardboard box with black fabric on the inside/outside, then when an individual sticks their face inside, they see the reverse projection of their face on the frosted mask. Sticking their head inside is the "opt in," and the video would most likely capture their shock and confusion as they try to make out who it is. The mask is a specific size, so no matter who sticks their face in, they will look different – especially if I apply a face filter. I know that if I stuck my face in and saw a reverse projection, my mouth would drop open and I'd say "woah, that's crazy," likely not recognizing myself at first; I'd then try to see it replicate my mouth movements. That video of me trying to put together my reflection and process what I see in the box would be projected onto the grid, and would be the continuous video seen of me.

 

In terms of ethics, I'll ask people I know to stick their heads in. If I show this at open studio, I can write that the faces will be deleted after the performance, or I could ask for signatures from individuals willing to let me "keep their face" for art purposes. Lots to think about here…

 

 

Recording faces in real time will be annoying, so I need to build a rig that does the following:

    1.   How to capture the video (Reverse Projected Sculpture)
      1. I think Golan’s open processing holds a lot of promise, esp OpenCV!!
      2. FaceWork – https://openprocessing.org/sketch/2219047
      3. Alternate possibilities include more hardcore facial imaging:
        1. Google Image Segmenter – Extract specific regions of image you want
          1. https://ai.google.dev/edge/mediapipe/solutions/vision/image_segmenter
        2. Google Face Landmark Detection Guide
          1. https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker
    2. Crop video?
      1. If everything's set up with a specific camera and the rig consistently captures each individual using Tony Oursler's method, it could be okay!
    3. Develop rules – does the video capture for 5 seconds? When does it start – after 1s of detection? When does it end? Can I make it automatic, or do I need to be there like a fairgrounds attendant?
    4. Save the video to a folder on the desktop
    5. Develop another set of code that reads the most recent video featured on the desktop
    6. Adds the most recent video to a projection grid of faces
      1. I would love to randomize faces, but I’m not sure that’s realistic in this timeframe.
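Steps 4–5 above – saving clips into a folder and reading back the most recent one – can be sketched with the standard library alone. The folder name and filenames below are placeholders; the actual capture side (OpenCV, or TouchDesigner's Folder DAT) would be writing the real files.

```python
# Stdlib-only sketch of the "watch folder" half of the rig: the capture
# code drops face clips into a folder, and the projection code asks for
# whichever clip is newest. Paths and filenames are hypothetical.
import os
from pathlib import Path

def newest_video(folder, ext=".mp4"):
    """Return the most recently modified video in folder, or None if empty."""
    clips = sorted(Path(folder).glob(f"*{ext}"),
                   key=lambda p: p.stat().st_mtime)
    return clips[-1] if clips else None

# Simulate the capture side writing two clips, with explicit mtimes so
# the ordering is unambiguous for this demo.
watch = Path("faces_demo")
watch.mkdir(exist_ok=True)
(watch / "face_001.mp4").write_bytes(b"fake clip 1")
(watch / "face_002.mp4").write_bytes(b"fake clip 2")
os.utime(watch / "face_001.mp4", (1000, 1000))
os.utime(watch / "face_002.mp4", (2000, 2000))

print(newest_video(watch).name)  # -> face_002.mp4
```

The projection loop would poll `newest_video()` every few seconds and swap the new clip into the grid when the result changes.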

 

Golan recommended TouchDesigner, and after much research, I found I can add videos in real time using two options (thank you to this anonymous person):

"You could use the Folder DAT to monitor the files in the network and load them into individual Movie File In TOPs as you want to show them. In this case you wouldn't be using the 'Sequence of Images' mode in the Movie File In TOP whereby it's pointing at a directory.

Another option is to use the ‘Specify Index’ mode of the Movie File In TOP instead of ‘Sequential’, which allows you to set the index manually. This way when you reload the node it won’t start from the beginning again.” https://forum.derivative.ca/t/realtime-update-folders-contents/7744

 

 

Ideas for capturing the video consistently:

Tony Oursler’s method of facial capturing

Isla gave an interesting idea for making it fun for people to stick their heads in, almost like at a fair – a drastic contrast to the idea of capturing people's faces.

 

Quick Mockup of Possible Rig Process

Final project proposal

My final project is essentially going to refine my most recent project – people making themselves small with little to no threat or reward. Reminder image here:

One of the issues with this project when I presented it recently is that the program was outputting the raw pixel count of a person within the canvas. My intention was for it to give the "percentage of person" compared to their previous percentage.

Say you're standing at full height, facing the camera. Your size or pixel value should be 100%. Then, if you were to double over, your size or pixel value should be 80%, and if you laid down and got as small as physically possible, maybe your size or pixel value would be somewhere near 60%. This was the intention, but not the outcome, and it's one of the things I would like to fix in the adjusted version. This way there won't be priority given to people with naturally smaller figures.
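The fix described above amounts to calibrating on a full-height frame and reporting every later frame relative to that baseline. A minimal sketch, using toy boolean masks in place of the real person-segmentation output:

```python
import numpy as np

# Sketch of "percentage of person": count person pixels once at full
# height as a baseline, then express each later frame relative to it.
# The masks here are hand-built stand-ins for real segmentation masks.

def person_percent(mask, baseline_count):
    """Person pixels in this frame as a % of the calibration frame."""
    return 100.0 * np.count_nonzero(mask) / baseline_count

h, w = 10, 10
full_height = np.zeros((h, w), dtype=bool)
full_height[2:10, 3:7] = True              # 32 "person" pixels at full height
baseline = np.count_nonzero(full_height)   # calibrate once per player

crouched = np.zeros((h, w), dtype=bool)
crouched[6:10, 3:7] = True                 # 16 pixels when doubled over

print(person_percent(full_height, baseline))  # -> 100.0
print(person_percent(crouched, baseline))     # -> 50.0
```

Because every player is scored against their own baseline, a naturally small person who doesn't shrink scores near 100%, while a tall person who folds up well scores lower – which is the behavior the game wants.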

Because this makes the person's initial size much more important, I'll also need to figure out a way to make the "game" elements much cleaner and easier to navigate. Ex: "stand in the frame," the game auto-starts, "make yourself as physically small as possible," and the timer and pixel value are visible to the player at all times. This way it can be operated by anyone, and they will have clear preparatory instructions without me just talking at them. I don't want to have to sit behind a screen during the showcase.

I still don't want it to be too game-y in a kitschy sense, so it'll probably be closer to an interactive art piece. The thing that encourages people to interact is the main point of contention in my thought process, and it's the portion I think I could use the most input on. I have a lot of ideas and I feel kind of okay-ish about them all.

Some of those ideas:

  • physical output (golan mentioned receipt printer, but open to other ideas here) example here: https://www.flickr.com/photos/thart2009/53094225345
  • scoreboard on a separate monitor
  • website with compiled images of people at their smallest
  • email sent to participants of their photo(s)
  • some other form of data visualization or image reproducing (???)

Like I mentioned above, these all feel kind of "meh." Either I'm interested in an idea but it feels disjointed from the general vibe of the project, or it fits well with the project but I don't really care about it as a concept. I liked Heidi's word choice of "turtling," and I think there are ways the game/interaction could become more specialized like that. This isn't just "getting small" – it's repeating a familiar/symbolic gesture. Using my bus example: "rush hour [bus line]," with the instruction to get small, and a receipt printout with some "thanks for riding"-esque text.

I'm most concerned about unnecessarily weighing it down, which I think the above concept may do, but it might be (?) in my favor to think about things that way. IDK!