Final Project Draft: Bridge Typology Revision

For my final I decided to build upon my typology from the beginning of the semester. If you’re curious, here’s the link to my original project.

In short, I captured the undersides of Pittsburgh’s worst bridges through photogrammetry. Here are a few of the results.



For the final project, I wanted to revise and refine the presentation of my scans and also expand the collection with more bridges. Unfortunately, I spent so much time worrying about taking new and better captures that I didn’t focus as much as I should have on the final output, which I ideally wanted to be either an augmented reality experience or a 3D walkthrough in Unity. That being said, I do have a very rudimentary (emphasis on rudimentary) draft of the augmented reality experience.

https://www.youtube.com/watch?v=FGTCfdp8x4Q

(YouTube is insisting that this video be uploaded as a Short, so it won’t embed properly.)

As I work towards next Friday, there are a few things I’d still like to implement. For starters, I want to add some sort of text that pops up on each bridge with some key facts about what you’re looking at. I also don’t currently have an easy “delete” button to remove a bridge in the UI, and I haven’t figured out how to do that yet. Lower on the priority list would be to get a few more bridge captures, but that’s less important than the app itself at this point. Finally, I cannot figure out why all the bridges are floating somewhat off the ground, so if anyone has any recommendations I’d love to hear them.

I’m also curious for feedback on whether this is the most interesting way to present the bridges. I really like how weird it is to see an entire bridge just, like, floating in your world where the bridge doesn’t belong, but I’m open to trying something else if there’s a better way to do it. The other option I’m tossing around would be to try some sort of first-person walkthrough in Unity instead of augmented reality.

I just downloaded Unity on Monday, and I think I’ve put in close to 20 hours trying to get this to work over 2.5 days… But after restarting 17 times, I think I’ve started to get the hang of it. This is totally outside my scope of knowledge, so what would have been a fairly simple task became one of the most frustrating experiences of my semester. So further help with Unity would be very much appreciated, if anyone has the time!! If I see one more “build failed” error message, I might just throw my computer into the Monongahela. Either way, I’m proud of myself for having a semi-functioning app at all, because that’s not something I ever thought I’d be able to do.

Thanks for listening! Happy end of the semester!!!!

Final-ish Project: Portable Camera Obscura!!!!

In this project, I built a portable camera obscura.

Quick Inspo:

Abelardo Morell, Camera Obscura: View of Lower Manhattan, Sunrise, 2022

Camera Obscura: View of the Florence Duomo in the Tuscany President’s Office in Palazzo Strozzi Sacrati, Italy, 2017

He makes these awesome full-room camera obscuras in his home and even hotel rooms.

I was particularly inspired by his terrain works, where he uses natural terrains and camera obscuras to mimic impressionist paintings.

Tent-Camera Image: Lavender Field, Lioux, France, 2022


This was motivated by my intention to use it as part of a longer-term project. Eventually, I want to get it to the level where I can record clear live video through it.

This served as my first attempt at making a prototype.

I was given a Fresnel lens with a focal length of 200mm, but based on my research, this would also work with a common magnifying glass. The only downside of that substitution seemed to be a less sharp image.

The setup is pretty basic: it’s essentially a box within a box, with a piece of parchment paper attached to the end of the smaller box. The Fresnel lens was attached with electrical tape and painter’s tape to a frame I had already made for another project.
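As a rough sanity check on how deep the sliding box needs to be, here’s a quick thin-lens calculation (assuming the 200mm focal length is accurate) showing where the parchment screen should sit for subjects at different distances:

```python
# Thin-lens sanity check for the 200 mm Fresnel: where the parchment screen
# needs to sit behind the lens for an object at a given distance (values in mm).
f = 200.0                                        # focal length of the Fresnel lens, mm
for label, d_o in [("1 m", 1_000), ("2 m", 2_000), ("5 m", 5_000), ("far away", 1e9)]:
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)            # from 1/f = 1/d_o + 1/d_i
    print(f"object {label:>8} -> screen ~{d_i:.0f} mm behind the lens")
```

So the inner box only needs roughly 50 mm of travel to focus anything from about a meter away out to the horizon, which is why the box-within-a-box design works.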

Process:

While I was building the box it was nighttime, so I played around with lights in the dark.

It was looking surprisingly ok.

I went ahead and finished a rough model.

Video Testing:

Further thoughts:

To expand on this, I want to get the projection to better clarity and experiment with different materials for the projection screen.

I saw this great video where they used frosted plastic and a portable scanner to extract the projected image, which I am super interested in pursuing.

The end goal is to have something I can run around town with and get reliable results, make content, and experiment with the form.

dead things in jars (pt. 2)

more documentation

 

Traditional 3D reconstruction methods, like photogrammetry, often struggle to render reflective and translucent surfaces such as glass, water, and liquids. These limitations are particularly evident in objects like jars, which uniquely combine reflective and translucent qualities while containing diverse contents, from pickles to wet specimens. Photogrammetry’s reliance on point clouds with polygon and texture meshes falls short in capturing these materials, leaving reflective and view-dependent surfaces poorly represented. Advancements like radiance fields and 3D Gaussian Splatting have revolutionized this space. Radiance fields, such as Neural Radiance Fields (NeRFs), use neural networks to generate realistic 3D representations of objects by synthesizing views from any arbitrary angle. NeRFs model view-dependent lighting effects, enabling them to capture intricate details like reflections that shift with the viewing angle. Their approach involves querying 5D coordinates—spatial location and viewing direction—to compute volume density and radiance, allowing for photorealistic novel views of complex scenes through differentiable volume rendering. Complementing NeRFs, 3D Gaussian Splatting uses gaussian “blobs” in a point cloud, enabling smooth transitions and accurate depictions of challenging materials. Together, these innovations provide an unprecedented ability to create detailed 3D models of objects like jars, faithfully capturing their reflective, translucent, and complex properties.
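To make the volume-rendering idea above a bit more concrete, here’s a toy sketch (not a trained NeRF, just a hard-coded stand-in field) of how a pixel color comes from querying a radiance field along a camera ray and accumulating density-weighted color:

```python
# Toy sketch of the volume-rendering step described above: a radiance field is just
# a function f(position, view_direction) -> (rgb, density), and a pixel color is the
# density-weighted accumulation of samples along a camera ray. The field here is a
# hard-coded fuzzy sphere, not a trained NeRF.
import numpy as np

def toy_field(points, view_dir):
    """Stand-in radiance field: a soft red sphere of radius 1 at the origin."""
    dist = np.linalg.norm(points, axis=-1)
    density = np.clip(1.0 - dist, 0.0, None) * 5.0             # sigma >= 0
    rgb = np.tile([1.0, 0.2, 0.2], (len(points), 1))            # view-independent here
    return rgb, density

def render_ray(origin, direction, n_samples=64, near=0.0, far=4.0):
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction                    # 3D sample points along the ray
    rgb, sigma = toy_field(points, direction)
    delta = np.diff(t, append=far)                              # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                        # opacity of each segment
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]   # accumulated transmittance
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)                 # final pixel color

print(render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))
```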

 

Further development:

After a brief conversation with a PhD student working with splats, and drawing from my own experiences, I’ve concluded that this project will require further development. I plan to develop it by writing scripts to properly initialize my point clouds (using GLOMAP/COLMAP) and by using nerfstudio (an open-source radiance field training framework). This move from commercial software in beta to open-source tools is meant to give me more control over how the point clouds are initialized. I’m also changing how I capture the training images, since previous methods confused the algorithm.
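For reference, here is a loose sketch of what those scripts might look like, assuming COLMAP and nerfstudio are installed; the paths are placeholders and the nerfstudio flags should be checked against the current CLI docs:

```python
# Loose sketch (not my actual scripts) of the planned pipeline: run COLMAP for the
# initial sparse point cloud, then hand things off to nerfstudio for training.
# Paths are placeholders, and the nerfstudio options for reusing an existing COLMAP
# model should be checked against the current ns-process-data docs.
import subprocess
from pathlib import Path

IMAGES = "captures/jar_01"                      # hypothetical image folder
WORK = Path("workspace/jar_01")
(WORK / "sparse").mkdir(parents=True, exist_ok=True)

def run(cmd):
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. COLMAP structure-from-motion: features -> matches -> sparse reconstruction.
run(["colmap", "feature_extractor", "--database_path", str(WORK / "db.db"),
     "--image_path", IMAGES])
run(["colmap", "exhaustive_matcher", "--database_path", str(WORK / "db.db")])
run(["colmap", "mapper", "--database_path", str(WORK / "db.db"),
     "--image_path", IMAGES, "--output_path", str(WORK / "sparse")])

# 2. nerfstudio: convert the captures to its data format, then train a splat /
#    radiance field model. (ns-process-data re-runs COLMAP by default; it has
#    options to skip that step and reuse the sparse model from step 1 instead.)
run(["ns-process-data", "images", "--data", IMAGES,
     "--output-dir", str(WORK / "ns")])
run(["ns-train", "splatfacto", "--data", str(WORK / "ns")])
```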

Final Proposal – Synth Play (Edited)

Equipment

Lots of cables, ARP 2600, amplifier, Gigaport, interface(?), Max/MSP

First step: get the system working to use CV, sending out different qualities of the signal as a control voltage to control the ARP.

There may eventually be interactivity in how the sound is transformed before it reaches the ARP, but right now the focus is to figure out how to send output from my computer, transform it, and use the Gigaport to send it to an amplifier and out to the ARP. The ARP can then be affected more remotely and take on qualities of a specific sound; it seems the case study will be my voice.

In a sentence: learn CV well. I want to become proficient in transforming my own voice and other signals into a control signal for the ARP, and maybe build interactivity out from there. I’m thinking a granular decay might be cool.
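As a starting point, here’s a minimal sketch, assuming a mono WAV recording of my voice (the filename is a placeholder), of one “quality of the signal” being extracted: an amplitude envelope scaled to a 0..1 control signal, which the interface/Gigaport chain would then turn into CV for the ARP.

```python
# Minimal sketch: extract an RMS amplitude envelope from a mono voice recording
# and scale it into a 0..1 control signal. The WAV filename is a placeholder.
import numpy as np
from scipy.io import wavfile

rate, voice = wavfile.read("voice.wav")
voice = voice.astype(np.float64)
voice /= np.abs(voice).max()                     # normalize to -1..1 (assumes non-silent audio)

hop = rate // 100                                # one envelope value every 10 ms
frames = voice[: len(voice) // hop * hop].reshape(-1, hop)
envelope = np.sqrt((frames ** 2).mean(axis=1))   # RMS amplitude per frame
control = envelope / envelope.max()              # 0..1 control signal for the ARP

print(control[:20])
```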

 

 

Got the setup to work, but there’s an edit to the idea.

Loss of people – degeneration of data

 

Main edit to proposal:

My concept focuses on the idea of losing someone close to you, for whatever reason, and I want to make a performance out of this using the ARP 2600. The digital (capture) component focuses on removing various phonemes from my voice in real time, to either single out or completely remove the specified sounds.

 

People In Space

Touch Designer & Polycam, Point cloud interact

The plan is to use Polycam to scan the environment into a point cloud model and import it into TouchDesigner. Then, using a computer webcam, capture the movements of a person in front of the camera. Combining Feedback, Time Machine, and Noise in TouchDesigner will create particle-based trailing effects tied to human motion, such as flowing lines. The captured motion data will dynamically influence the imported point cloud model, creating an artistic interaction (a rough stand-in sketch of this mapping follows the steps below).

Steps

  1. Environment Point Cloud Import:
    • Use Polycam to scan the environment and export the point cloud model.
    • Load the scanned point cloud model into TouchDesigner.
  2. Human Motion Capture:
    • Use a webcam to capture the motion of a person in front of the camera.
    • Extract keyframe data of the movement for further processing.
  3. Feedback, Time Machine, and Noise Integration:
    • Use these tools to create particle trailing effects, such as flowing lines that respond to the motion.
  4. Point Cloud Model and Motion Interaction:
    • Map the captured human motion data to the point cloud model in TouchDesigner.
    • Dynamically influence the point cloud to create artistic effects based on human motion.
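Here is the rough stand-in sketch mentioned above, written in Python with OpenCV rather than inside TouchDesigner, just to show the motion-to-displacement math; in TouchDesigner the same logic would be built from TOPs/CHOPs and fed by the Polycam scan.

```python
# Stand-in for steps 2 and 4: estimate per-frame motion from the webcam and use it
# to displace a point cloud. The random points are a placeholder for the Polycam scan.
import cv2
import numpy as np

points = np.random.rand(10_000, 3).astype(np.float32)      # placeholder point cloud

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev_gray).mean() / 255.0    # crude 0..1 "how much movement"
    prev_gray = gray

    # Displace the cloud with noise scaled by the motion amount; this is what a
    # renderer (or the TouchDesigner network) would draw each frame.
    displaced = points + np.random.normal(scale=0.02 * motion, size=points.shape)

    cv2.imshow("webcam (q to quit)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```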

Final Project

To be honest, I’m feeling a little indecisive/lost about what to do… A little sad that this class is ending, that’s for sure…

Some goals/considerations:

  1. I’m not feeling beholden to continue either Project 1 or Project 2, but I do feel both projects could be extended. I actually saw a lot more roadkill this past weekend, and even though I continue to document via my webapp, I think it’d be more powerful to have accompanying images. However, I’m not sure if I have any upcoming trips before this project is due. For Project 2, I’d be interested in exploring some expanded capture techniques that further highlight non-human labor in ways that we don’t recognize with our normal vision.
  2. I would really love to use more of the capture devices in the studio…
  3. I want to consider how we have access to an audience that is eager to INTERACT with our devices. I want to design something that makes more sense for an interactive context and take advantage of that.

Generally, I’d be interested in exploring ways to encourage people to think in other spatial and temporal scales.

I have lots of silly ideas that I’ll throw out (though I am not really thinking about pursuing any of them) + the theme related to the project I’m interested in:

  • Stereo macro or micro (more of a technical exercise using studio equipment)
  • Changing the feed of a microscope to something that looks at the viewer instead (subverting conventional dynamics of viewer/subject)
  • Collection of microscopic views of different common objects
  • Something that juxtaposes objects on different timescales, e.g. in the time it took X to happen N times, Y happened M times

Project 2: Person in Time “Quiddity”

I’ve always found a certain fascination in reflections and comfort in hiding behind a reflection—observing from a space where I’m not directly seen by the subject. There’s something quietly powerful in capturing moments from this hidden vantage point, where the world unfolds without the pressure of interaction. The reflection creates a barrier, a layer between myself and the moment, allowing me to observe without being observed. It offers a sense of safety, distance, and control while still engaging with the world around me.

This project emerged from that same instinct—an exploration of the subtle dynamic between visibility and invisibility.

Self-Glance Runway

For this project, the process was centered on capturing spontaneous human behavior related to this phenomenon—how we, as individuals, instinctively pause to check our reflections when presented with the opportunity.

 

Setting up at Merson Courtyard, just outside the UC building, provided an ideal backdrop. The three windows leading up to the revolving doors created natural “frames” through which passersby could glimpse themselves.

To discreetly capture these candid moments, I placed a DJI Osmo Action 4 camera on a tripod inside, a compact setup that minimized visibility, while a Bushnell Core DS-4K wildlife camera outside caught behind-the-scenes clips. I took measures to make the glass more reflective by dimming indoor lights and using movable whiteboards to create subtle, controlled lighting.

     

The footage was taken on 11/07 from approximately 12:30 PM to 5:30 PM and edited by hand in Premiere Pro. I wish I had made a system to organize these clips instead of doing it through manual labor, but I also don’t know how accurate it would have been, especially with group movement.

Reflecting on this project, I found myself confronting something unexpected. While editing the hours of footage, I felt a strange authority over the people outside the window, watching their lives unfold in these small, unguarded moments. There was something powerful in observing the casual details—how they dressed, who they were with, the lunches they carried—moments they never intended for anyone to see. But toward the end of the footage, I noticed something unsettling as the sun started to set. With the changing light, more of the indoor space I was working in started to reflect through the windows. I hadn’t timed it right; I didn’t anticipate how early the inside would become visible. Suddenly, as I was watching them, I was on display too, exposed to the very people I thought I was quietly observing.

It felt strange, almost as if I had crafted a scene meant to be invisible, only to find myself unexpectedly pulled into it. The dynamic of gaze and surveillance shifted as the light changed, turning the lens back onto me.

Clip of Theseus

Basic idea:

Mapping and rearranging pixels of one image to form another.

Or, in the more ornate words of a friend based on a basic description of the project: “The machine projects the memory of the source image onto its perception of its incoming vision from the new image. Is it hallucinating? Or just trying to interpret the new stimuli in terms of past experiences?”

Questions:

  • How much can an image’s constituent pixels be rearranged before it loses its original identity? Where/when does one thing end and another begin?
  • Can the ways the pixels are changing reflect the different timelines on which different organisms and systems exist?
  • Can Alice finish Project #2 for ExCap on time? 😬
    • (the answer is “yes?* but not to the desired effect”)
    • *for some definition of yes

Inspiration:

Generally, I’m really inspired by media theoretical work from the likes of Hito Steyerl (e.g. poor image), Trevor Paglen (invisible image), Legacy Russell (Glitch Feminism), and Rosa Menkman (Glitch Manifesto). There are also definitely general inspirations from pixel sorting tradition.

Here’s a page out of Menkman’s Glitch Manifesto:

Three topics:

There were 2.5 topics through which I wanted to experiment (none of which I’ve been able to really flesh out yet):

1) Non-human labor of lichen and moss

1a) s1n0p1a x lichen/moss collaboration

2) the story of my friend/housemate Jane and me.

🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱

Non-human labor of lichen and moss:

My original idea was about highlighting non-human labor, such as the enduring role of lichen and moss in restoring post-industrial landscapes, and the ways in which species persist alongside and in spite of human development. Lichen and moss are keystone species/genera that survived Earth’s last major extinctions. They grow slowly over decades, centuries, millennia–existing on a timeline that exceeds our understanding. For this stage/iteration of the project, I wanted to film 1) the bridges/trains/cars of Pittsburgh and 2) lichen and moss; our changes are intertwined. For the site, I chose to film at the Pump House, the site of the 1892 Battle of Homestead–one of the most pivotal and deadly labor struggles in American history.

Pump House

Pixels of Bridge Mapped to Pixels of Lichen:

Collected video x Collected video:

Lichen on a rock next to the Pump House

Moss/lichen on the wall of the Pump House

Collected video (I took this video) x Sample Image (I did not take these images):

stereoscopic!

I’m not too happy with the results on this one because the images just don’t… look too good? In future iterations I think I need to be more strategic with filming. I think part of it too was not being able to do stereo in macro out in the wild (but I’m trying to work on this! this seems like a good lead for future exploration).

  • s1n0p1a x lichen/moss webshop: This idea ties back to the original non-human labor of lichen and moss idea. My original plan involved a webshop selling art (or experiences?) that highlighted non-human labor and attempted to collect fees to donate to environmental organizations. One pressing question was WHAT ART? Stereoscopy relates back to spectacle, voyeurism, and the desire to own (e.g. the curiosity cabinet), and clips/art for sale bring to mind pornographic/erotic content. An idea that came to mind was quite simply combining the two via pixel remapping. Does the Free Model Release Form apply here? It might be fine on my end because I’m using nude photos of myself, but even though those pixels were shattered, why do I admittedly feel some reticence about using, e.g., a photo of myself unclothed? Should the lichen consent to having their likeness represented in this way? I was intrigued by this kind of intimacy. The least progress was made on this idea, but here are some sample images (the 1st and 3rd were images from online and the 2nd was an image I took myself, since I didn’t particularly like any of the macro/stereo lichen shots I took):

The story of my friend/housemate Jane and me: Often when I refer to Jane in conversation with others I call her my “housemate,” but I immediately feel the need to clarify that we’ve also been “friends since 6th grade.” As someone who identifies as aroace and constantly tries to rally against amatonormativity, I feel inept at capturing, in conversations with others, the meaningfulness of our longstanding friendship and the ways in which we live together–sharing meals and mutual support over the years. I wanted to honor a bit of that through this project. Over the years, we’ve changed in some ways, but in many ways we’ve stayed the same.

6th grade (Left: HSV, Right: Lab)

Lab w/ max_size 1024

8th grade Yosemite trip

Recent image together (L: lab on cropped image, R: HSV)

Algorithm:
Reconstruct one image (Image B) using the colors of another (Image A) by mapping pixels to their closest color match according to Lab, HSV, and RGB. (After some tests across images, I found I generally liked the results from Lab the best.) Both images are downsized (largest dimension limited to the max_size parameter) to reduce computational load. A KD-Tree is built for Image A’s pixels. Each pixel in Image B is then matched to the closest unused pixel in Image A based on color similarity; ideally, every pixel in Image A is used exactly once. It gets pretty computationally heavy at the moment, and it definitely forced me to think more about the construction, destruction, and general features of media.
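For anyone curious, here’s a minimal sketch of the core matching step, assuming scipy and scikit-image; it follows the Lab variant described above, and the widening-k search for an unused pixel is just one way to handle the “used exactly once” constraint, not necessarily how my actual script does it.

```python
# Minimal sketch of the closest-unused-pixel remap in Lab space.
import numpy as np
from PIL import Image
from scipy.spatial import cKDTree
from skimage.color import rgb2lab

def load_small(path, max_size=100):
    img = Image.open(path).convert("RGB")
    img.thumbnail((max_size, max_size))                 # largest dimension <= max_size
    return np.asarray(img, dtype=np.float64) / 255.0

def to_lab(px):
    return rgb2lab(px.reshape(-1, 1, 3)).reshape(-1, 3)

def remap(path_a, path_b, max_size=100):
    a = load_small(path_a, max_size)                    # Image A: where the colors come from
    b = load_small(path_b, max_size)                    # Image B: the structure to reconstruct
    a_px, b_px = a.reshape(-1, 3), b.reshape(-1, 3)
    tree = cKDTree(to_lab(a_px))
    used = np.zeros(len(a_px), dtype=bool)
    out = np.zeros_like(b_px)

    for i, color in enumerate(to_lab(b_px)):
        k = 1
        while True:
            _, idx = tree.query(color, k=k)             # k nearest palette pixels
            idx = np.atleast_1d(idx)
            free = idx[~used[idx]]
            if len(free):
                choice = free[0]
                used[choice] = True
                break
            if k >= len(a_px):                          # palette exhausted: reuse nearest
                choice = idx[0]
                break
            k = min(k * 4, len(a_px))                   # widen the search and try again
        out[i] = a_px[choice]

    return (out.reshape(b.shape) * 255).astype(np.uint8)
```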

Some analysis:

The runtime should be O(n log n) for building the KD-Tree and O(m·n) for querying m pixels and iterating over n unused pixels for each query, leading to O(n log n) + O(m·n) overall.

I documented the results of a couple of initial tests and found that, for images from my phone, setting max_size to:

  • 50 yields results in ~0.5 s
  • 100 yields results in ~6 s
  • 150 yields results in ~35 s
  • 200 yields results in ~1.25 minutes

A max_size of 50 was computationally feasible enough for generating a live output feed based on camera input. I found results start to look “good enough” at 150–though high resolution isn’t necessarily desirable, because I wanted to preserve a “pixelated” look as a reminder of the constituent pixels. In the future, I’ll spend some time thinking about ways to speed this up to make a higher-resolution live video feed possible, though low res has its own charm 🙂

Combinations I tried:

Image + Image

Image + Live Video

Image → Video (apply Image colors to Video)

Video frames → Image (map video frame colors to image in succession)

Video + Video

Future Work

Other than what has been detailed in each of the experiment subsections above, I want to think more generally about boundaries/edge cases, use cases, inputs/outputs, and the design of the algorithm to push the meaning generated by this process. For instance, I might consider ways to selectively prioritize certain parts of Image B during reconstruction.

 

This is Your Brain on Politics

For this project, I ended up deciding to use an EEG device, the Muse 2 headband, to create real-time spectrograms of the electrical activity in my brain before, during, and after the 2024 election. I actually started with a completely different topic using the same capture technique, but I realized last Monday what a great opportunity it was to center my project on the election, so I decided to switch gears.

I was really excited by this project because biology/life science is really interesting to me. An EEG is just a reading of electrical signals, completely meaningless without context, but these signals have been largely accepted by the scientific community to correlate with specific emotional states or types of feelings, and it’s fascinating how something as abstract as an emotion could be quantified and measured by electrical pulses. Elections (particularly this one) often lead to very complicated emotions, and it can be difficult to understand what you’re feeling or why. I liked the idea of using my own brain activity to try to make sense of that.

I was inspired by a few artists who use their own medical data to create art, specifically EEGs. Here’s an EEG self-portrait by Martin Jombik, as well as a few links to other EEG artwork.

Martin Jombik, EEG Self-Portrait

Personality Slice I
Elizabeth Jameson’s Personality Slice

I was interested in using a capture technique that could turn something as internal and invisible as an emotional state into something measurable.

 

Workflow

To take my captures, I tried to standardize the conditions under which they were taken as much as possible. For each one, I tried to remain completely still, with closed eyes. I wanted to reduce the amount of “noise” in my recordings, i.e. any brain activity that wasn’t directly related to my emotional state or thoughts and feelings. I also used an app called MindMonitor, which has a convenient interface for reading the data coming out of the Muse 2 headset.

For my first capture, the “control,” I tried to record in a state of complete relaxation, while my mind was at rest. I took it on November 3rd, when election stress was at a neutral level. After that, I experimented with what I did before each capture, like watching political ads, reading the news, and of course finding out the results. I then lay down with eyes closed while wearing the headset. Finally, I took one last capture three days later, on November 10th, once my stress/emotional state had returned to (semi) normal.

I decided to present my captures in the form of spectrograms, which are visual representations of signal strength over time. I found this easier to understand than the raw data, as it shows change over time and provides a color-coded representation of signal strength. These spectrograms can then be distilled into single images (which capture a finite moment in time), or a moving image that shows the results over several minutes. I’ve decided to present each spectrogram as a side-by-side GIF with timestamps to reveal the differences between each one.
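For context, generating one of these spectrograms from the exported data only takes a few lines of Python; the RAW_TP9 column name and the 256 Hz sample rate are assumptions about MindMonitor’s CSV export, and the filename is a placeholder:

```python
# Rough sketch of turning one EEG channel from a MindMonitor CSV export into a
# spectrogram. Column name and sample rate are assumptions about that export.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

FS = 256                                          # assumed Muse 2 EEG sample rate (Hz)

df = pd.read_csv("mindmonitor_export.csv")
eeg = df["RAW_TP9"].dropna().to_numpy()           # one electrode channel (behind the left ear)

f, t, Sxx = spectrogram(eeg, fs=FS, nperseg=FS * 2, noverlap=FS)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud", cmap="jet")
plt.ylim(0, 60)                                   # the 1-60 Hz band discussed below
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.colorbar(label="Power (dB)")
plt.show()
```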

Results

There are 5 different types of brain waves plotted on the spectrogram, ranging in frequency from 1 to 60 Hz. I’ve included a glossary of what each frequency/wave type might mean.

  • Delta, 1-4 Hz: Associated with deep sleep/relaxation, found most in children/infants.
  • Theta, 4-8 Hz: Associated with the subconscious mind, found most often while dreaming or sleeping.
  • Alpha, 8-14 Hz: Associated with active relaxation, found most often in awake individuals who are in a state of relaxation or focus, like meditation.
  • Beta, 13-25 Hz: Associated with alertness/consciousness, this is the most common frequency for a conscious person.
  • Gamma, 25-100 Hz: The highest-frequency brain waves, associated with cognitive engagement, found most often while problem solving and learning.

Generally, the areas from 0-10 Hz are always somewhat active, but in a state of relaxation you should see the most activity around 5-14 Hz, and barely any activity at the higher frequencies. Blue indicates areas of very low activity, while red indicates an area of high activity.

 

Gif of every spectrogram (very compressed)

I think the video is really effective at showing the differences between each capture over the same length of time. Prior to the election, the results show relatively low activity overall. While there is consistently a band of red on the leftmost side, the spectrogram from November 3rd is consistent with a person who is relaxed. Going into November 5th and 6th, things look very different. There’s greater activity overall, especially with regard to beta and gamma waves. In fact, there is so much more activity that there is barely any blue. Even without knowing how I felt or what was going on the days these were taken, it’s clear that my brain was substantially more active during those two days than it was before or after. I found the results to be incredibly revealing with regard to the acute impact this year’s election cycle had on me, especially when placed into context with my “normal” spectrograms before and after.

Person in Time Final — Smalling

Inspirations/etc…

This one actually came out of the process for the last project in a roundabout way– specifically this image:

I really wasn’t a fan of this process image as is, but my obsession with holes and the desire to relate them to my/our body(ies) still stood.

So I started thinking about smallness, which is actually represented everywhere all the time once you start to acknowledge it. A more commercial example is that it’s an ever-present movie trope, such as shrinking to reduce one’s carbon footprint (Downsizing) or shrinking as an act of scientific genius (Fantastic Voyage).

But these are all movie tropes. What becomes somewhat more relatable is these really stupid stock photos, otherwise known as taking the 61, 71, etc… out of CMU campus anywhere between 5 and 5:30. And this is more what I was thinking: the displacement of a contorted body in a crowd, but I guess without the crowd.

And I mean, these are also just really good images. Like this situation:

There’s something kind of perverse about this situation. 

I’m not interested in statements about shrinking my carbon footprint or really any form of political statement for that matter. I’m interested in the form of being small, the action of having to contort oneself, and how that works when it has to happen consciously, with no immediate threat or reward. 

Here are some earlier tests:

But I knew I didn’t want the entire project to be self-portraits, and I didn’t want to publicly intervene more than I needed to. So it became more of a simulatory situation. I made a “machine” that makes people get small, with really not a whole lot of threat or reward involved. The machine was a combination of a live camera feed captured in p5.js that uses chroma-keying techniques to record a person’s “size” (pixel value), and a makeshift green-screen setup.
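For a sense of the mechanism, here’s a rough stand-in for the measurement, written in Python/OpenCV rather than the original p5.js sketch: key out the green backdrop and treat the count of remaining pixels as the person’s “size,” tracking the smallest value seen.

```python
# Stand-in for the "size" measurement: key out the green backdrop and count the
# remaining pixels, so a person who contorts themselves smaller covers fewer
# non-green pixels. The HSV threshold is a guess and would be tuned per setup.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
smallest = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, (35, 60, 60), (85, 255, 255))    # green-screen mask
    person = cv2.countNonZero(cv2.bitwise_not(green))          # pixels NOT keyed out = the person
    smallest = person if smallest is None else min(smallest, person)
    cv2.putText(frame, f"size: {person}  smallest: {smallest}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    cv2.imshow("smalling (q to quit)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```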

I’m going to say that this captures people at their smallest during a period of ten seconds, but outside of that I’m going to conveniently skip the explanation of what exactly is going on here because I think it will only attract critique that I’m not interested in.

In the mess of figuring out the equipment and the hard task of getting ChatGPT not to fuck up my code, this didn’t turn out the way I had talked about it turning out, and I think transparency here would only cloud what the ultimate goal of the project was. I think this may be the item I take into my ExCap final, as I’m interested in refining it and “making it do the thing I actually want it to do.” There were issues with the green threshold being keyed out, the camera setup was decided somewhat last minute and I think could have benefitted from a stricter ruleset, and I’m interested in further gamifying the piece.

Here are some small people:

Here is the camera setup. (I don’t have green-screen photos and am not sure how necessary it is to add them after my presentation, although the feedback was heard.)