The Belly of the Beast: Excap Final

For my final I decided to build upon my typology from the beginning of the semester, where I captured the bottoms of bridges using photogrammetry. 

I wanted to focus on the presentation vehicle for my final, inspired by Claire Hentschker’s capture-and-release philosophy. I really liked the surreal, imposing quality that augmented reality can achieve, letting you see these huge structures in your bedroom or backyard. 

So, I decided to create an app in Unity that could do this and give you a library of browsable bridges to look at, and I mostly succeeded!
There are a lot of features I would still like to implement in the future, but for a first outing with Unity and app development in general, I’m really satisfied with the results. 

Here’s some videos of the app in action!

And finally, every bridge layered on top of each other outside, in true ExCap fashion.

Did I make this harder on myself by building a new app when there’s plenty that already exist? Oh for sure 10000 percent, but at least I can say I made an app now!

Also this class was great! One of the most fun classes I’ve taken at CMU, even if it was frustrating sometimes learning how to do things I’d never done before. I just discovered (when sitting down to do them) that FCEs closed yesterday, so this is my message to admin: More classes like this!!! I learned so much by just being given free rein to do basically whatever I wanted.

Final Project Draft: Bridge Typology Revision

For my final I decided to build upon my typology from the beginning of the semester. If you’re curious, here’s the link to my original project.

In short, I captured the undersides of Pittsburgh’s worst bridges through photogrammetry. Here’s a few of the results.



For the final project, I wanted to revise and refine the presentation of my scans and also expand the collection with more bridges. Unfortunately, I spent so much time worrying about taking new and better captures that I didn’t focus as much as I should have on the final output, which I ideally wanted to be either an augmented reality experience or a 3D walkthrough in Unity. That being said, I do have a very rudimentary (emphasis on rudimentary) draft of the augmented reality experience.

https://www.youtube.com/watch?v=FGTCfdp8x4Q

(YouTube is insisting that this video be uploaded as a Short, so it won’t embed properly.)

As I work towards next Friday, there are a few things I’d still like to implement. For starters, I want to add some sort of text that pops up on each bridge with key facts about what you’re looking at. I also currently don’t have an easy “delete” button to remove a bridge in the UI, and I haven’t figured out how to build one yet. Lower on the priority list would be getting a few more bridge captures, but that’s less important than the app itself at this point. Finally, I cannot figure out why all the bridges float somewhat off the ground, so if anyone has any recommendations I’d love to hear them.
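One possible culprit for the floating bridges: scanning apps often export a model whose origin sits at the scan’s centroid rather than at its lowest point, so AR placement anchors the middle of the mesh at the floor. This is just my own illustrative sketch (not part of the app, which is built in Unity) showing how you could re-zero a mesh’s vertices before import, assuming a y-up coordinate convention:

```python
import numpy as np

def ground_vertices(vertices: np.ndarray) -> np.ndarray:
    """Translate a mesh so its lowest point sits at y = 0.

    vertices: (N, 3) array of x/y/z positions, assuming y-up
    (the convention Unity and glTF both use).
    """
    grounded = vertices.copy()
    grounded[:, 1] -= grounded[:, 1].min()  # drop the model onto the floor
    return grounded

# Example: a unit cube floating 5 units above the origin
cube = np.array([[x, y + 5.0, z]
                 for x in (0.0, 1.0)
                 for y in (0.0, 1.0)
                 for z in (0.0, 1.0)])
print(ground_vertices(cube)[:, 1].min())  # → 0.0
```

The same idea translates to Unity by parenting the imported model under an empty GameObject and offsetting it by the mesh bounds’ minimum.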

I’m also curious for feedback on whether this is the most interesting way to present the bridges. I really like how weird it is to see an entire bridge just, like, floating in your world where it doesn’t belong, but I’m open to trying something else if there’s a better way to do it. The other option I’m tossing around would be some sort of first-person walkthrough in Unity instead of augmented reality.

I just downloaded Unity on Monday and I think I’ve put in close to 20 hours trying to get this to work over 2.5 days… But after restarting 17 times, I think I’ve started to get the hang of it. This is totally out of my scope of knowledge, so what would have been a fairly simple task became one of the most frustrating experiences of my semester. So further help with Unity would be very much appreciated, if anyone has the time!! If I see one more “build failed” error message, I might just throw my computer into the Monongahela. Either way, I’m proud that I have a semi-functioning app at all, because that’s not something I ever thought I’d be able to do. 

Thanks for listening! Happy end of the semester!!!!

Pittsburgh Bridges (Continued)

For my final project, I plan on using photogrammetry to expand on my typology project of Pittsburgh’s bridges.

My goal is to create a collection of bridge captures that highlights our aging infrastructure and the beauty present within it. My intention is to show these bridges taken out of context, and the different types of erosion that can affect them.

Background

I came up with the idea for this project in September after reading this article about the state of steel bridges in the US. It found that over 25 percent of all bridges in the US will collapse by 2050, due to extreme temperature fluctuations across the planet as a result of climate change. I immediately connected this to Pittsburgh and the Fern Hollow bridge collapse. There are almost 450 bridges within Pittsburgh city limits (446 to be exact), and I found this report from June 2024 that concluded that 15 percent of these bridges are in poor condition and at risk of failure right now.

As my project developed, I began to realize how beautiful the undersides of bridges actually are. My interest became less about their safety and more about their beauty, especially because the undersides are usually understood only for their utility, not their aesthetics. We pay attention to bridges while driving across them or seeing them from a distance, but we rarely notice them while passing underneath, even though the undersides are often just as beautiful as the tops. These parts of a bridge are almost always designed to prioritize safety and functionality over beauty, but they are often incredibly beautiful anyway.

Method

I plan to use photogrammetry and Agisoft to create my 3D models. In the past I used Polycam, so I hope to expand my project and the detail within the bridges by using a more professional camera and better software. My original scans were pretty crunchy, and at times the detail didn’t translate properly, so I’m hoping that a better camera and professional software will lead to better results. That being said, I’m more interested in volume than perfection, so I will most likely prioritize having more scans over having a couple really great ones or perfecting my process. I think the bridges are most powerful when placed in relation to each other, rather than isolated, so I want as many as I can get. 

My biggest concern right now is that I’m not going to do it correctly! It is actually incredibly challenging to do photogrammetry on something so large. It’s hard to standardize camera angles and lighting, it takes an extraordinary amount of time to do correctly, and not many people have made tutorials for capturing something at this size. In my first iteration, I only had about a 65% success rate, which is difficult when each bridge takes so much of my time; if I do one incorrectly, that’s an hour or more of work left unusable. This was the biggest benefit of Polycam, as I could find out in real time if a capture had failed and potentially fix it before moving on. Now that I’m more familiar with photogrammetry, I should be able to get better results, but I’m worried about the inconsistent lighting conditions that come with being outside, and the reflective nature of metal in daylight. I’m trying to remain flexible, and I think it’s possible Polycam will actually be my best option, but it’s worth trying more professional software. I am also thinking about using Polycam to take the captures, but then processing them through Agisoft.

Additional Resources:

More about bridge diagnosis methods

Non paywalled article on bridge prognosis

Cause of the Fern Hollow Bridge collapse

This is Your Brain on Politics

For this project, I ended up deciding to use an EEG device, the Muse2 headband, to create real time spectrograms of the electrical activity in my brain before, during, and after the 2024 election. I actually started with a completely different topic using the same capture technique, but I realized last Monday what a great opportunity it was to do my project around the election, so I decided to switch gears.

I was really excited by this project because biology/life science is really interesting to me. An EEG is just a reading of electrical signals, completely meaningless without context, but these signals have been largely accepted by the scientific community to correlate with specific emotional states or types of feelings, and it’s fascinating how something as abstract as an emotion can be quantified and measured by electrical pulses. Elections (particularly this one) often lead to very complicated emotions, and it can be difficult to understand what you’re feeling or why. I liked the idea of using EEG to make those complicated feelings visible and measurable.

I was inspired by a few artists who use their own medical data to create art, specifically EEGs. Here’s an EEG self portrait by Martin Jombik, as well as a few links to other EEG artwork.

Martin Jombik, EEG Self Portrait

Personality Slice I
Elizabeth Jameson’s Personality Slice

I was interested in using a capture technique that could record an internal experience that is invisible from the outside.

 

Workflow

To take my captures, I tried to standardize the conditions under which they were taken as much as possible. For each one, I tried to remain completely still, with closed eyes. I wanted to reduce the amount of “noise” in my recordings, i.e., any brain activity that wasn’t directly related to my emotional state or thoughts and feelings. I also used an app called MindMonitor, which has a convenient interface for reading the data coming out of the Muse2 headset.

For my first capture, the “control,” I tried to record in a state of complete relaxation, while my mind was at rest. I took it on November 3rd, when election stress was at a neutral level. After that, I experimented with what I did before each capture, like watching political ads, reading the news, and of course finding out the results. I then lay down with eyes closed while wearing the headset. Finally, I took one last capture 3 days later, on November 10th, once my stress/emotional state had returned to (semi) normal.

I decided to present my captures in the form of spectrograms, which are visual representations of signal strength over time. I found this easier to understand than the raw data, as it showed change over time and provided a color-coded representation of signal strength. These spectrograms can be distilled into single images (which capture a finite moment in time), or a moving image that shows the results over several minutes. I’ve decided to present each spectrogram as a side-by-side GIF with timestamps to reveal the differences between each one.
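For anyone curious what a spectrogram actually computes, it’s a short-time Fourier transform: slide a window along the signal, take each window’s FFT, and stack the magnitudes. Here’s a minimal numpy sketch of the idea (my own illustration, not MindMonitor’s code; the 256 Hz rate matches the Muse headset’s EEG sampling rate):

```python
import numpy as np

def spectrogram(signal, fs=256, window=256, hop=64):
    """Short-time Fourier transform: slide a window along the
    signal and take the squared magnitude of each windowed FFT.

    Returns (freqs, times, power) where power[f, t] is the
    signal strength at frequency f around time t.
    """
    starts = range(0, len(signal) - window + 1, hop)
    hann = np.hanning(window)  # taper to reduce spectral leakage
    power = np.array([
        np.abs(np.fft.rfft(signal[s:s + window] * hann)) ** 2
        for s in starts
    ]).T
    freqs = np.fft.rfftfreq(window, d=1 / fs)
    times = np.array([s / fs for s in starts])
    return freqs, times, power

# Example: a 10 Hz sine wave (alpha range) should peak near 10 Hz
t = np.arange(0, 4, 1 / 256)
freqs, times, power = spectrogram(np.sin(2 * np.pi * 10 * t))
print(freqs[power[:, 0].argmax()])  # → 10.0
```

Plotting `power` with time on the x-axis and frequency on the y-axis, colored blue-to-red, gives exactly the kind of image shown in my results.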

Results

There are 5 different types of brain waves plotted on the spectrogram, ranging in frequency from 1 to 60 Hz (the upper limit of my plots; gamma waves extend higher). I’ve included a glossary of what each frequency/wave type might mean.

  • Delta, 1-4 Hz: Associated with deep sleep/relaxation; found most in children/infants.
  • Theta, 4-8 Hz: Associated with the subconscious mind; found most often while dreaming or sleeping.
  • Alpha, 8-14 Hz: Associated with active relaxation; found most often in awake individuals in a state of relaxation or focus, like meditation.
  • Beta, 13-25 Hz: Associated with alertness/consciousness; the most common frequency range for a conscious person.
  • Gamma, 25-100 Hz: The highest-frequency brain waves, associated with cognitive engagement; found most often while problem solving and learning.

Generally, the 0-10 Hz range is always somewhat active, but in a state of relaxation you should see the most activity around 5-14 Hz, and barely any at the higher frequencies. Blue indicates areas of very low activity, while red indicates areas of high activity.
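The glossary maps directly onto code: given a stretch of samples, you can estimate how much power falls into each named band. A small numpy sketch of that idea (my own illustration, using the band edges listed above, including the given alpha/beta overlap, and the Muse’s 256 Hz sampling rate):

```python
import numpy as np

BANDS = {  # band edges in Hz, matching the glossary above
    "delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
    "beta": (13, 25), "gamma": (25, 100),
}

def band_powers(signal, fs=256):
    """Total spectral power falling inside each named band."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return {name: power[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Example: a pure 10 Hz oscillation is dominated by the alpha band
t = np.arange(0, 2, 1 / 256)
powers = band_powers(np.sin(2 * np.pi * 10 * t))
print(max(powers, key=powers.get))  # → alpha
```

A relaxed capture should show delta/theta/alpha dominating, while the election-night captures would shift weight toward beta and gamma.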

 

Gif of every spectrogram (very compressed)

I think the video is really effective at showing the differences between the captures over the same length of time. Prior to the election, the results show relatively low activity overall. While there is consistently a band of red on the leftmost side, the spectrogram from November 3rd is consistent with a person who is relaxed. Going into November 5th and 6th, things look very different. There’s greater activity overall, especially with regard to beta and gamma waves. In fact, there is so much more activity that there is barely any blue. Even without knowing how I felt or what was going on the days these were taken, it’s clear that my brain was substantially more active during those two days than it was before or after. I found the results to be incredibly revealing with regard to the acute impact this year’s election cycle had on me, especially when placed into context with my “normal” spectrograms before and after.

Person in Time WIP- A Portrait of Rumination

For this project, I’m really interested in creating a self portrait that captures my experience living with mental illness.

I ended up being really inspired by the Ekman Lie Detector that Golan showed us in class. While I’m not convinced by his application, I do think there’s a lot to be learned from people’s microexpressions.

I was also deeply inspired by a close friend of mine, who was hospitalized during an episode of OCD induced psychosis earlier this year. She shared her experience publicly on instagram, and watching her own and be proud of her own story felt like she lifted a weight off of my own chest. I hope that by sharing and talking about my experience, I might lift that weight off of someone else, and perhaps help myself in the process.

Throughout my life I’ve struggled with severe mental illness. Only recently have I found a treatment that is effective, but it’s not infallible. While lately I’ve been functional and generally happy, I would say I still spend on average 2-3 hours each day ruminating in anxiety and negative self talk, despite my best efforts. These thought patterns are fed by secrecy, embarrassment, and shame, so I would really like to start taking back my own power, and being open about what I’m going through, even if it’s really hard for me to tell a bunch of relative strangers something that is still so taboo in our culture.

So, personal reasons aside, I think it would be really interesting to use the high speed camera to take portrait(s) of me in the middle of ruminating, and potentially contrast that with a portrait of me when I’m feeling positive. Throughout my life, folks have commented on how easy it is to know my exact feelings about a subject without even asking me, just based off my facial expressions, because I wear my feelings on my sleeve (despite my best efforts!). I’ve tried neutralizing my expressions in the past, but I’ve never really been successful, so I’m hoping that’s a quality that will come in handy while making this project. If being overly emotive is a flaw, I plan to use this project to turn it into a superpower.

I’ve also contemplated using biometric data to supplement my findings, like a heart rate or breathing monitor, but I’m not totally married to that idea yet. I think the high speed camera might be enough on its own, but the physiological data could be a useful addition.

 

 

Looking Outwards 4

I’ve been thinking about doing some sort of self portrait related to mental illness (don’t hold me to it though!!), so I decided to research other artists who have focused on similar topics.

Daniel Regan’s Fragmentary

Artist Daniel Regan used his medical records to understand himself through other people’s eyes. The piece places pages of his medical records side by side with self portraits taken around the same time.

Shigeko Kubota’s Self Portrait

I’m a huge fan of analog video/capture so this piece really interested me. I think her portraits have “quiddity” but they’re also a form of performance art which is really interesting.

Personality Slice I

Elizabeth Jameson’s Personality Slice

Elizabeth Jameson uses her own MRIs as self-portraiture. I’m really intrigued by the idea of using “functional” imaging, like MRIs and X-rays, as a form of fine art.

 

 

Typology: Pittsburgh Bridges

When is the last time you looked up while driving or walking underneath a bridge?

Through this project, I set out to document bridge damage in a way that is difficult to experience with the naked eye, or that we are likely to overlook or take for granted in our day to day life.

Inspiration and Background

At the beginning of this process, I was really inspired by the TRIP Report on Pennsylvania bridges, a comprehensive report that listed PA’s most at-risk bridges and ranked them by order of priority. I was interested in drawing attention to the “underbelly,” an area that’s often difficult to access, as a way to reveal an aspect of our daily life that tends to go unnoticed.

Initially, I was planning to use a thermal camera to identify damage that’s not visible to the naked eye. After meeting with two engineers from PennDOT, I learned that this method would not be effective in the way I intended, as it only works when taken from the surface of the bridge. It would have been unsafe for me to capture the surface, so I recalibrated my project from capturing invisible damage to emphasizing visible damage that would otherwise go unnoticed or be hard to detect. Here are some examples of my attempts at thermal imaging on the underside of a bridge. I was looking for heat anomalies, but as you can see, the bridges I scanned were completely uniform, regardless of the bridge material or time of day.

Process and Methodology

After thermal imaging, I moved to LiDAR (depth) based 3D modeling. My research revealed this is the favored method for bridge inspectors, but the iPhone camera I was using had a pretty substantial depth limitation that prevented me from getting useful scans. Here are a few examples of my first round.

A LiDAR scan of Murray Avenue Bridge
Murray Avenue Bridge

Polish hill, Railroad Bridge over Liberty Avenue

376 over Swinburne Bridge

These scans were not great at documenting cracks and rust, which are the focus of my project. At Golan’s recommendation I made the switch to photogrammetry. For my workflow, I took 200-300 images of each bridge from all angles, which were then processed through Polycam. After that, I uploaded each one to Sketchfab, due to the limitations of Polycam’s UI.

I chose photogrammetry because it allows the viewer to experience our bridges in a way that is not possible with the naked eye or even static photography. Through these 3D captures, it’s possible to see cracks and rust that are 20+ feet away in great detail.

Here’s some pictures of me out capturing. A construction worker saw me near the road and came and gave me a high vis vest for my safety, which was so nice I want to give him a shoutout here!

Results

This project is designed to be interactive. If you’d like to explore on your own device, please go here if you’re viewing on a computer, and here for augmented reality (phone only). I’ve provided a static image of each scan in this post, as well as 5 augmented reality fly throughs, out of the 9 models I captured.

Railroad Bridge over Liberty Ave

Race St Bridge

Penn Ave Bridge

Panther Hollow Bridge

Greenfield Railroad Bridge

Swinburne Bridge

Frazier St Viaduct (376)

Allegheny River Bridge

I am quite pleased with the end result compared to where I started, but there’s still a lot of room for improvement in the quality of the scans. After so many iterations I was restricted by time, but in the future I would prefer to use more advanced modeling software. I would also like to explore hosting them on my own website.

Special thanks to PennDOT bridge engineers Shane Szalankiewicz and Keith Cornelius for their extraordinary assistance in the development of this project. 

Typology of Pittsburgh Bridges – Proposal/WIP

For my typology, I plan on using the portable (iPhone) thermal camera, as well as the built-in iPhone camera, to capture the undersides of dilapidated bridges.

My goal is to create a typology of Pittsburgh’s worst bridges that highlights our aging infrastructure and safety concerns.

Background

I came up with the idea for this project after reading this article about the state of steel bridges in the US. It found that over 25 percent of all bridges in the US will collapse by 2050, due to extreme temperature fluctuations across the planet as a result of climate change. I immediately connected this to Pittsburgh and the Fern Hollow bridge collapse. There are almost 450 bridges within Pittsburgh city limits (446 to be exact), and I found this report from June 2024 that concluded that 15 percent of these bridges are in poor condition and at risk of failure right now.

The next thing I was curious about was whether it was possible, with the tools available to the studio, to capture and record damage invisible or difficult to see with the naked eye. I learned that (among many other methods) one of the ways to identify weaknesses is using an IR thermal camera. [Jump to Section 3.3]

Method

I plan to standardize the bridge images (as best I can) by stationing myself directly under each bridge and taking the photo with my phone pointed straight up, to unify the images and standardize the angle. I plan to take 2 photos of each bridge: one with the thermal camera, and one with my iPhone camera.

My biggest concern right now is that it’s not going to work! Obviously I am not a structural engineer, and I have no idea if I’ll be able to see the damage even with the thermal camera. My contingency plan, if I’m not able to get the thermal camera to work, is to create a 3D scan of the undersides of 5-10 of the worst bridges instead, and somehow highlight the structural problems that are visible to the naked eye. That being said, I am very interested in hearing if someone thinks there’s a more effective capture method for this.

Additional Resources:

More about bridge diagnosis methods

Non paywalled article on bridge prognosis

Cause of the Fern Hollow Bridge collapse

 

Capturing a Dog Toy

I decided to experiment with the slit scan, long exposure, and panorama apps. I used the same subject for all the experiments, the face on this dog toy. I thought it would be fun to use a dog toy since I was trying to get trippy pictures.

For the slit scan camera, I learned that the subject needed to be moving to capture it properly. I experimented with moving it in different directions and learned that if I moved it in the opposite direction of the scan, I could get a clearer picture of my subject.

I also tried the panorama camera, but I didn’t really like the results when I went left to right, so I ended up turning my phone during the pano instead.

Finally, I tried a long exposure cam, and I moved the mushroom toward the camera to get the final image (which is honestly my favorite of the bunch)

top to bottom: Slit Scan, Long exposure, Panorama

Photography and Science Chapter 1

I’m not very well versed in darkroom photography, so the section on emulsions was the most exciting to me because I had no idea there were so many different types, or that you could actually get a different product depending on what you use.

I’d never put two and two together and realized that X-rays are actually just a special type of camera/developer! I would love to test some of these solutions out and see what new things you can learn, or feelings you can elicit, from the same subject just by using a different emulsion, like the tests we read about where scientists fine-tuned the best methodology for their experiments and observations.