Final Project

Updates for final exhibition:

For the final exhibition, I was mainly thinking about how to bring everything together into an interactive experience for visitors. My station had a few components:

Large monitor: showed a variety of media to set the tone–a manifesto of ideas driving the project, an animation/introduction to the project, and a visual of pixels being remapped from one person to another.

Center screen: videos/audio of non-human kin at magnified scales.

Left screen: stereo pairs of focus-stacked moss, accompanied by a stereo viewer to facilitate the 3D effect.

Right screen: red-blue anaglyphs of focus-stacked moss; red-blue glasses were provided nearby so visitors could enjoy the effect.

Handheld microscope: provided so visitors could see even more of the installation at scales they wouldn’t otherwise experience.

The screens were surrounded by grass and leaves from my yard (I tried to choose material that would create minimal disruption to the natural surroundings) and some dried plants a friend gifted me in the past, so that the media on the screens seemed to merge into the grass/leaves. As visitors leaned in to engage with the material (e.g. the stereo viewer), they would also find themselves getting closer to, and smelling, the grassy/vegetal scents.

Since I used my phone as one of the screens, I wasn’t able to photograph the full setup, and I forgot to take a picture earlier in the exhibition (when everything was assembled more nicely), but here’s an image of the overall experience.

Project summary from before:

(See the “Final Project Draft – Explorations into 360 Stereo Macro” post below; the full summary, pipeline documentation, and feedback request are reproduced there rather than duplicated here.)

Final Project Draft – Explorations into 360 Stereo Macro

Project summary:

(pass out glasses)

On the last episode of alice’s ExCap projects… I was playing around with stereo images while reflecting on the history of photography, the spectacle, stereoscopy and voyeurism, and “invisible” non-human labor and non-human physical/temporal scales and worlds. I was getting lost in ideas, so for this current iteration, I wanted to just focus on building the stereo macro capture pipeline I’d been envisioning (initially because I wanted to explore ways to bring us closer to non-human worlds* and then think about ways to subvert the way that we are gazing and capturing/extracting with our eyes… but I need more time to think about how to actually get that concept across :’)).

*e.g. these stereo videos, made at a recent residency I was at (they’re “fake stereo”), really spurred the exploration into stereo

Anyway… so in short, my goal for this draft was to achieve a setup for 360, stereoscopic, focus-stacked macro images using my test object, moss. At this point, I may have lost track a bit of the exact reasons for “why,” but I’ve been thinking so much about the ideas in previous projects that I wanted to just see if I could, for once, pull off this tech setup I’ve been thinking about, and see where it takes me/what ideas it generates… At the very least, now I know I more or less have this tool at my disposal. I do have ideas about turning these images into collage landscapes (e.g. “trees” made of moss, “soil” made of skin) accompanied by soundscapes, playing around with glitches in focus-stacking, and “drawing” through Helicon’s focus stacking algorithm visualization (highlighting the hidden labor of algorithms in a way)… but anyway… here’s documentation of a working-ish pipeline for now.

Feedback request:

I would love to hear any thoughts on what images/what aspects of images in this set appeal to you!

STEP 1: Macro focus-stacking

Panning through focal lengths via the pro video mode of the Samsung Galaxy S22 Ultra’s default camera app, using a macro lens attachment

Stacked from 176 images. Method=C (S=4)

Focus stacked via HeliconFocus

Actually, I love seeing Helicon’s visualizations:

And when things “glitch” a little:

Stacked from 37 images. Method=A (R=8,S=4)

I took freehand photos of my dog’s eye at different focal lengths (despite being a very active dog, she mainly only moved her eyebrow and pupil here).

STEP 2: Stereo macro focus-stacking

Stereoscopic pair of focus-stacked images. I recommend viewing this by crossing your eyes. Change the zoom level so the images are smaller if you’re having trouble.

Red-Cyan Anaglyph of the focus-stacked image. This should be viewed via red-blue glasses.
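For reference, a red-cyan anaglyph is simple to assemble from a stereo pair. Here’s a minimal sketch, assuming the left/right focus-stacked images are the same size (file names are placeholders, not my actual ones):

```python
# Red-cyan anaglyph: red channel from the left eye's image,
# green+blue (cyan) from the right eye's image.
import numpy as np
from PIL import Image

left = np.asarray(Image.open("left_stacked.png").convert("RGB"))
right = np.asarray(Image.open("right_stacked.png").convert("RGB"))

anaglyph = right.copy()          # start from the right eye (G+B survive)
anaglyph[..., 0] = left[..., 0]  # overwrite red with the left eye's red
Image.fromarray(anaglyph).save("anaglyph.png")
```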

STEP 3: 360 stereo macro focus-stacking

I would say this is an example of focus stacking working really well: I positioned the object and camera relative to each other in a way that let me capture useful information throughout the entire span of focal lengths my phone allows. This is more difficult when capturing 360 from a fixed viewpoint.

Setup:

  1. Take stereo pairs of videos panning through different focal lengths, generating each stereo pair by scooting the camera left/right on the rail
  2. Rotate the turntable holding the object and repeat. To reduce vibrations from manipulating the camera app on my phone, I controlled the phone via Vysor on my laptop.
  3. Convert the videos (74 total, 37 pairs) into folders of images, since HeliconFocus cannot batch process videos (see the sketch below)
  4. Batch process the focus stacking in HeliconFocus
  5. Take all “focused” images and programmatically arrange them into left/right focused stereo pairs (also sketched below)
  6. Likewise, programmatically or manually arrange them into left/right focused anaglyphs
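Steps 3 and 5 were scripted; here’s a rough sketch of how they might look, assuming placeholder folder names and a 000_L/000_R naming convention (my actual script may differ):

```python
import subprocess
from pathlib import Path
from PIL import Image

# Step 3: explode each video into a folder of frames via ffmpeg,
# since HeliconFocus batch-processes folders of images, not videos.
for video in sorted(Path("videos").glob("*.mp4")):
    frames_dir = Path("frames") / video.stem
    frames_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(["ffmpeg", "-i", str(video), str(frames_dir / "%04d.png")],
                   check=True)

# Step 5: paste the focus-stacked outputs into side-by-side stereo pairs
# (assumes left/right images of each pair share the same dimensions).
Path("pairs").mkdir(exist_ok=True)
for left_path in sorted(Path("stacked").glob("*_L.png")):
    right_path = left_path.with_name(left_path.name.replace("_L", "_R"))
    left, right = Image.open(left_path), Image.open(right_path)
    pair = Image.new("RGB", (left.width + right.width, left.height))
    pair.paste(right, (0, 0))           # cross-eyed viewing: the right
    pair.paste(left, (right.width, 0))  # eye's image goes on the left
    pair.save(Path("pairs") / f"{left_path.stem[:-2]}_pair.png")
```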

Overall, given that the camera rail isn’t essential (I mostly just used it to help with the stereo pairs; the light isn’t really necessary either, given the right time of day) and functional phone macro lenses are fairly cheap, this was a pretty low-cost setup. I also want to eventually develop a more portable setup (which is why I wanted to work with my phone) to avoid having to extract things from nature. However, I might eventually need to transition away from a phone in order to capture a simultaneous stereo pair at macro scales (the lenses need to be closer together than phones allow).

The problem of simultaneous stereo capture also remains.

Focus-stacked stereo pairs stitched together. I recommend viewing this by crossing your eyes.

Focus-stacked red-blue anaglyphs stitched together. This needs to be viewed via the red-blue glasses.

The Next Steps:

I’m still interested in my original ideas around highlighting invisible non-human labor, so I’ll think about the possibilities of intersecting that with the work here. I think I’ll try to package this/add some conceptual layers on top of what’s here in order to create a roughly (hopefully?) 2-minute interactive experience for exhibition attendees.

Some sample explorations of close-up body capture

Final Project

To be honest, I’m feeling a little indecisive/lost about what to do… A little sad that this class is ending, that’s for sure…

Some goals/considerations:

  1. I’m not feeling beholden to continue either Project 1 or Project 2, but I do feel both projects could be extended. I actually saw a lot more roadkill this past weekend, and even though I continue to document via my webapp, I think it’d be more powerful to have accompanying images. However, I’m not sure if I have any upcoming trips before this project is due. For Project 2, I’d be interested in exploring some expanded capture techniques that further highlight non-human labor in ways that we don’t recognize with our normal vision.
  2. I would really love to use more of the capture devices in the studio…
  3. I want to consider that we have access to an audience eager to INTERACT with our devices, and design something that makes sense for, and takes advantage of, that interactive context.

Generally, I’d be interested in exploring ways to encourage people to think in other spatial and temporal scales.

I have lots of silly ideas that I’ll throw out (though I’m not really thinking about pursuing any of them), along with the theme each relates to:

  • Stereo macro or micro (more of a technical exercise using studio equipment)
  • Changing the feed of a microscope to something that looks at the viewer instead (subverting conventional dynamics of viewer/subject)
  • Collection of microscopic views of different common objects
  • Something that juxtaposes objects on different timescales, e.g. in the time it took X to happen N times, Y happened M times

Clip of Theseus

Basic idea:

Mapping and rearranging pixels of one image to form another.

Or, in the more ornate words of a friend based on a basic description of the project: “The machine projects the memory of the source image onto its perception of its incoming vision from the new image. Is it hallucinating? Or just trying to interpret the new stimuli in terms of past experiences?”

Questions:

  • How much can an image’s constituent pixels be rearranged before it loses its original identity? Where/when does one thing end and another begin?
  • Can the ways the pixels are changing reflect the different timelines on which different organisms and systems exist?
  • Can Alice finish Project #2 for ExCap on time? 😬
    • (the answer is “yes?* but not to the desired effect”)
    • *for some definition of yes

Inspiration:

Generally, I’m really inspired by media theoretical work from the likes of Hito Steyerl (e.g. the poor image), Trevor Paglen (the invisible image), Legacy Russell (Glitch Feminism), and Rosa Menkman (Glitch Studies Manifesto). There are also definitely general inspirations from the pixel sorting tradition.

Here’s a page out of Menkman’s Glitch Studies Manifesto:

Two and a half topics:

There were 2.5 topics through which I wanted to experiment (none of which I’ve been able to really flesh out yet):

1) Non-human labor of lichen and moss

1a) s1n0p1a x lichen/moss collaboration

2) The story of my friend/housemate Jane and me

🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱🌱

Non-human labor of lichen and moss:

My original idea centered on highlighting non-human labor: for example, the enduring role of lichen and moss in restoring post-industrial landscapes, and the ways in which species persist alongside and in spite of human development. Lichen and moss are keystone organisms that survived Earth’s last major extinctions. They grow slowly over decades, centuries, millennia–existing on a timeline that exceeds our understanding. For this stage/iteration of the project, I wanted to film 1) the bridges/trains/cars of Pittsburgh and 2) lichen and moss; our changes are intertwined. For the site, I chose to film at the Pump House, the site of the 1892 Battle of Homestead–one of the most pivotal and deadly labor struggles in American history.

Pump House

Pixels of Bridge Mapped to Pixels of Lichen:

Collected video x Collected video:

Lichen on a rock next to the Pump House

Moss/lichen on the wall of the Pump House

Collected video (I took this video) x Sample Image (I did not take these images):

stereoscopic!

I’m not too happy with the results on this one because the images just don’t… look too good? In future iterations I think I need to be more strategic with filming. I think part of it, too, was not being able to do stereo in macro out in the wild (but I’m trying to work on this! This seems like a good lead for future exploration).

  • s1n0p1a x lichen/moss webshop: This idea ties back to the original non-human labor of lichen and moss idea. My original plan involved a webshop selling art (or experiences?) that highlighted non-human labor and attempted to collect fees to donate to environmental organizations. One pressing question was WHAT ART? Stereoscopy relates back to the spectacle, voyeurism, and the desire to own (e.g. the curiosity cabinet), and clips/art for sale bring to mind pornographic/erotic content. An idea that came to mind was quite simply combining the two via pixel remapping. Does the Free Model Release Form apply here? It might be fine on my end because I’m using nude photos of myself, but even though those pixels were shattered, why do I admittedly feel some reticence about using e.g. a photo of myself unclothed? Should the lichen consent to having their likeness represented in this way? I was intrigued by this kind of intimacy. The least progress was made on this idea, but here are some sample images (the first and third were images from online; the second was an image I took myself, since I didn’t particularly like any of the macro/stereo lichen shots I took):

The story of my friend/housemate Jane and me: Often when I refer to Jane in conversation with others I call her my “housemate,” but I immediately feel the need to clarify that we’ve also been “friends since 6th grade.” As someone who identifies as aroace and constantly tries to rally against amatonormativity, I feel inept at capturing, in conversations with others, the meaningfulness of our longstanding friendship and the ways in which we live together–sharing meals and mutual support over the years. I wanted to honor a bit of that through this project. Over the years, we’ve changed in some ways, but in many ways we’ve stayed the same.

6th grade (Left: HSV, Right: Lab)

Lab w/ max_size 1024

8th grade Yosemite trip

Recent image together (L: lab on cropped image, R: HSV)

Algorithm:
Reconstruct one image (Image B) using the colors of another (Image A) by mapping pixels to their closest color match according to Lab, HSV, or RGB. (After some tests across images, I found I generally liked the results from Lab the best.) Both images are downsized (largest dimension limited to a max_size parameter) to reduce computational load. A KD-Tree is built for Image A’s pixels. Each pixel in Image B is then matched to the closest unused pixel in Image A based on color similarity; ideally, every pixel in Image A is used exactly once. It gets pretty computationally heavy at the moment and definitely forced me to think more about the construction, destruction, and general features of media.
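A minimal sketch of this matching loop, assuming scipy and scikit-image; the function name and the widening-search trick for enforcing “unused” pixels are my own placeholders rather than the project’s actual code:

```python
import numpy as np
from PIL import Image
from scipy.spatial import cKDTree
from skimage.color import rgb2lab

def remap_pixels(path_a, path_b, max_size=100):
    """Rebuild Image B out of Image A's pixels by closest Lab color.
    Assumes A has at least as many pixels as B after downsizing."""
    def load(path):
        im = Image.open(path).convert("RGB")
        im.thumbnail((max_size, max_size))  # cap the largest dimension
        return np.asarray(im, dtype=np.float64) / 255.0

    a, b = load(path_a), load(path_b)
    a_flat = a.reshape(-1, 3)
    b_lab = rgb2lab(b).reshape(-1, 3)
    tree = cKDTree(rgb2lab(a).reshape(-1, 3))  # KD-Tree over A's colors

    used = np.zeros(len(a_flat), dtype=bool)
    out = np.zeros((len(b_lab), 3))
    for i, px in enumerate(b_lab):
        # Widen the neighbor search until we hit a pixel of A not yet
        # consumed, so each of A's pixels is used at most once.
        k = 1
        while True:
            _, idx = tree.query(px, k=k)
            idx = np.atleast_1d(idx)
            free = idx[~used[idx]]
            if len(free) > 0:
                break
            k = min(k * 2, len(a_flat))
        used[free[0]] = True
        out[i] = a_flat[free[0]]

    return Image.fromarray((out.reshape(b.shape) * 255).astype(np.uint8))
```

Calling something like `remap_pixels("a.jpg", "b.jpg", max_size=50)` corresponds roughly to the live-feed setting discussed below.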

Some analysis:

The runtime should be O(n log n) for building the KD-Tree and, in the worst case, O(m·n) for querying m pixels while skipping up to n already-used pixels per query, giving O(n log n + m·n) overall.

I documented the results of a couple of initial tests and found that, for images from my phone, setting max_size to:

  • 50 yields results in ~0.5 s
  • 100 yields results in ~6 s
  • 150 yields results in ~35 s
  • 200 yields results in ~1.25 minutes

50 was computationally somewhat feasible for generating a live output feed based on camera input. I found results start to look “good enough” at 150–though high resolution isn’t necessarily desirable because I wanted to preserve a “pixelated” look as a reminder of the constituent pixels. In the future, I’ll spend some time thinking about ways to speed this up to make higher resolution live video feed possible, though low res has its own charm 🙂

Combinations I tried:

Image + Image

Image + Live Video

Image → Video (apply Image colors to Video)

Video frames → Image (map video frame colors to image in succession)

Video + Video

Future Work

Other than what has been detailed in each of the experiment subsections above, I want to generally think about boundaries/edge cases, use cases, inputs/outputs, and the design of the algorithm more to push the meaning generated by this process. For instance, I might consider ways to selectively prioritize certain parts of Image B during reconstruction.


Quiddity Proposal

For this assignment, I’d like to create a website selling some kind of stereoscopic footage or other “micro-media” I’ve captured about micro worlds/worlds happening at timescales other than ours–with a bit of a twist.

When users buy, they are taken through several steps that highlight non-human (including what we consider living and non-living under non-animist frameworks) and human labor processes. As a simple example:

This is a clip of a video of a small worm wriggling in some kind of soil substrate. The final price might include:

  • 1 min x $20/60 min worm acting fee = $0.33 (the fee is arbitrary at the moment, but I will justify it somehow for the completed project, though there are obvious complications/tensions around the determination of this “price”)
  • 1 min x $20/60 min soil modeling fee = $0.33
  • 5 min x $20/60 min computer runtime fee = $1.67
  • 5 min x $20/60 min artist labor fee = $1.67
  • Etc.
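A toy sketch of this line-item pricing (the $20/hour rate and fee names are placeholders pending the real justification):

```python
# Each participant is billed at the same hourly rate for now.
HOURLY_RATE = 20.0  # dollars per 60 minutes

line_items = [  # (fee name, minutes billed)
    ("worm acting fee", 1),
    ("soil modeling fee", 1),
    ("computer runtime fee", 5),
    ("artist labor fee", 5),
]

total = 0.0
for name, minutes in line_items:
    fee = minutes * HOURLY_RATE / 60
    total += fee
    print(f"{minutes} min x $20/60 min {name} = ${fee:.2f}")
print(f"Total: ${total:.2f}")  # $4.00 for the items above
```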

The calculation of costs will help bring attention to aspects of the capture process that people might not normally think about (e.g. are we mindlessly stepping into another’s world?), but I think care needs to be taken with the tone.

Since some of the “participants” can’t be “paid” directly, we mention to buyers that this portion of the cost will be allocated to e.g. some organization that does work related to the area. For instance, the worm/soil might link to a conservation organization in the area. The computer runtime step might link to an organization that relates to the kinds of materials extracted to make computers and the human laborers involved in that process (e.g. Anatomy of an AI). There will also be information about the different paid participants (e.g. information about the ecosystem/life in the ecosystem the video was filmed in, something of an artist bio for myself in relation to the artist labor).

I will aim for final prices that make sense for actual purchase–as a way of potentially raising real money for these organizations. If the totaled labor costs come out too high, I will probably provide a coupon/sale.

To avoid spamming these organizations with small payments, the payments will be allocated into a fund that gets donated periodically.

Besides the nature of the footage/materials sold, a large part of this project would be thinking about and researching how I actually derive these prices, which organizations to donate to, and the different types of labor that have gone into the production of the media I’m offering for sale.

Background from “existing work” and design inspirations:

Certain websites mention “transparent pricing”

Source: Everlane 

Other websites mention that X amount is donated to some cause:

I’m also thinking of spoofing some kind of well-known commerce platform (e.g. Amazon). One goal is to challenge the way these platforms promote a race to the bottom in pricing, in ways completely detached from the original materials and labor. If spoofing Amazon, for instance, instead of “Buy Now” there would be a button that says “Buy Slowly.”

Nica had mentioned the possible adjacencies to sites selling pornography (where else do you buy and collect little clips like that?) and NFTs. In considering this project, I’m also reminded of the cabinet of curiosities. Ultimately, all of these (including stereoscopy) touch on a voyeuristic desire to look at and own.

What I initially had in mind for this project was a kiosk in an exhibition space where visitors can buy art/merchandise. I’m still thinking about how to make the content of the media more relevant to the way in which I want to present it, so I’m open to suggestions/critical feedback!! I think there are a couple of core threads I want to offer for reflection:

1. The desire to look and own, especially in a more “art world” context (the goal would be to actually offer the website/art objects in an exhibition context to generate sales for donations). What would generate value/sales? Would something physical be better (e.g. Golan had mentioned View-Master disks)?

2. Unacknowledged labor, including of the non-human

3. Building in the possibility for fundraising and general education. Thanks!

Trash, Decay, and Large-Scale Smooshing

First, Littered MVMNTS [https://www.instagram.com/litteredmvmnts/]: kind of an inverse portraiture–a human mimicking the motion of trash through choreographed movements. I like how the artist uses his art to bring attention to an environmental issue (and utilizes Instagram/TikTok to do so, e.g. he selects 15-second snippets), the choices of costume, and that–in line with his message–he picks up the trash when he’s done.

Next are these silver gelatin photographic prints of religious statues defaced by Khmer Rouge looters between 1975 and 1979, developed by Zhi Wei Hiu from negatives captured by Zhi’s uncle. The negatives were stored for 30-40 years in non-archival conditions, resulting in signs of fungal growth. Zhi further coats the surface of the paper with zinc oxide, an abrasive that captures the motion of a silver stylus across the paper. I thought this was an interesting example of “temporal capture” because the photographic duplicates of the real objects, left to the devices of nature and time, also capture the decay of the captured object.

In this photo, Zhi demonstrates how they want the photo to be displayed for an upcoming exhibition.

Finally, Pipilotti Rist’s Open My Glade (Flatten), which shows a video projection of her seemingly pressed against a glass surface, moving side to side, on the large windows at the New Museum’s entrance. This work appeals to me as an example of portraiture because it’s kind of grotesque (and at some scale) while being in a genre and exhibition setting that usually aims for flawlessness. I also like the paradoxes in this image–how we are both given and denied a sense of corporeality, and how the display medium is suggested by the media (a window) but is not the one that caused the effect (pressing against glass).

Roadkill Ritual

CW: Animal Death


I document roadkill–lives sacrificed to the current automobile-focused vision of human transit–encountered along the routes I drive, recording photos and geolocation and burning digital incense for each animal.

Inspiration:

From last summer into fall, I spent a lot of time driving long distances, and I was struck by the amount of roadkill I’d see. It strikes me every time I’m on the road because of several projects I’ve been working on these past two years around the consequences of the Anthropocene and the reconfiguration of our transit infrastructures for automobility.

Workflow

Equipment:

  • Two GoPro cameras (64 GB+ storage recommended), two GoPro mounts, two USB-C cables for charging
  • Tracking Website: a webpage to track the locations where you’ve burned digital incense for dead animals. I made this webpage because I realized I needed a faster way to mark geolocation (GPS logging apps didn’t allow left/right differentiation; voice commands were too slow) and to indicate whether to look left or right in the GoPro footage. A webpage also meant I could easily pull it up on my phone without additional setup.
  • Blood, sweat, and tears?

Setup of the two roadside GoPro cameras mounted to the left and right sides of the car to film the road on both sides. The cameras are set to minimum 2.7K, 24fps; maximum 3K, 60fps.


My workflow is basically:

1) appropriately angling/starting both cameras before getting into the car

CONTENT WARNING FOR PHOTOS OF ANIMAL DEATH BELOW

Sample Images:

Test 1 (shot from inside the car, periodic photos–not frequent enough)
Memo: Especially after discussing with Nica, I decided to move away from front-view shots, since they’re so common and you don’t get as good a capture.

Test 2 (shot from sides of car)
Memo: Honestly not bad, and it makes the cameras easier to access than putting them outside, but I wanted to capture the road more directly and avoid reflections/stickers in the window, so I decided to move the cameras outside.

Test 3: Outside
The final configuration I settled on. I did aim my camera down too low at one point and believe I missed some captures as a result. It was generally difficult to find a combination of good angle and good breadth of capture. Some of these photos are quite gory, so click into the folder of representative sample images below at your own discretion.

Typology Machine Sample Images – Google Drive

2) using the incense burning webpage to document when/where I spotted/”burned incense” for roadkill

3) stopping the recording as soon as I end the route (and removing the cameras to place safely at home)

4) finding appropriate images by locating the corresponding time segment of the video (see the sketch below). For now, I’ve also used Google My Maps to map out the routes and the places along them where I’ve “burned digital incense” in this Roadkill Map. CW: Animal Death
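A minimal sketch of that lookup, assuming logged timestamps and placeholder file names (times, paths, and the 2-second lead are illustrative):

```python
# Turn a logged "incense" timestamp into a frame pull from the
# corresponding GoPro file via ffmpeg.
import subprocess
from datetime import datetime

RECORDING_START = datetime.fromisoformat("2024-04-20T14:03:00")
burn = {"time": datetime.fromisoformat("2024-04-20T14:27:42"), "side": "L"}

offset = (burn["time"] - RECORDING_START).total_seconds()
video = f"gopro_{'left' if burn['side'] == 'L' else 'right'}.mp4"
# Grab a single frame a couple of seconds before the logged moment,
# since the animal was spotted slightly before the button press.
subprocess.run(["ffmpeg", "-ss", str(max(offset - 2, 0)), "-i", video,
                "-frames:v", "1", f"roadkill_{burn['side']}_{int(offset)}.png"],
               check=True)
```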


Key principles:

A key principle driving this work is that I will only collect data for this project on trips that I already plan to make for other reasons, because by driving, I’m also contributing to roadkill (inevitably including, at minimum, insects–who were not included in this project due to time/scope). This means both that each route traveled is more personal and that I am led to drive in particular lanes and at particular speeds/in particular ways.

Design decisions:
My decision to add the element of “burning incense” for the roadkill I encountered arose out of three considerations:

1) acknowledging time limitations–algorithmic object detection simply takes too long on the quantity of data I have, and I found the accuracy to be quite lacking

2) wanting a more personal relationship with the data; I always notice roadkill (also random dogs 🫰) when I drive, and I wanted to preserve something of that encountering and the sense of mourning that drove this project through a manual/personal approach to sifting through the data. Further, when I was collecting data with the intention of sifting through it algorithmically for roadkill, I found myself anxious to find roadkill–which completely started to distort how I wanted to view this project.

3) due to both the limitations of detection models and my own visual detection (also, in some ways it’s not the safest for me to be vigilantly scanning for roadkill all the time), I inevitably cannot see all roadkill (especially those no longer legible as animals due to flattening/decay). This means I cannot really hope to accurately portray the real “roadkill cost”–at best, I can give a floor amount. However, through the digital incense burning, I can accurately portray how many roadkill animals I burned “digital incense” for.

Future Work

In many ways, what I built for the Typology Machine project is only the beginning of what I see as a much more extensive future of data collection. Some directions:

  • Over time, I’d be curious about patterns emerging, and as I collect more data, I’d be interested in constructing a specific training data set for roadkill.
  • I’d like to think about how to capture insects as part of future iterations (they would probably require a different method of capture than a GoPro).
  • For the sake of my specific project, it would be nice to have a scrolling-window filming method–footage is filmed in e.g. 5-second chunks, and when the Left/Right button is pressed, the last 5 seconds prior to the press are captured/saved, allowing for higher resolution and FPS without taking up an exorbitant amount of space (see the sketch below).
  • It would also be interesting to look through this data for other kinds of typologies–some ideas: crosses/shrines at the sides of the road, interesting road cracks, and images of my own car caught in reflections.
  • I would also like to build out the design of the incense-burning website and create my own map interface for displaying the route/geolocation/photo data–which I didn’t have much time to do this time around.
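A rough sketch of that scrolling-window idea, assuming an OpenCV-readable feed and placeholder key bindings (a real GoPro would need its own interface):

```python
# Keep only the last ~5 seconds of frames in a ring buffer; dump them
# to disk when the Left/Right "incense" key is pressed.
from collections import deque
import cv2

FPS, WINDOW_SEC = 30, 5
buffer = deque(maxlen=FPS * WINDOW_SEC)  # oldest frames fall off automatically

cap = cv2.VideoCapture(0)  # stand-in for the camera feed
clip_id = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    buffer.append(frame)
    cv2.imshow("preview", frame)  # a window is needed for waitKey to get keys
    key = cv2.waitKey(1) & 0xFF
    if key in (ord("l"), ord("r")):  # the Left/Right button press
        h, w = frame.shape[:2]
        out = cv2.VideoWriter(f"clip_{clip_id}_{chr(key)}.mp4",
                              cv2.VideoWriter_fourcc(*"mp4v"), FPS, (w, h))
        for f in buffer:  # write out the buffered ~5 seconds
            out.write(f)
        out.release()
        clip_id += 1
    elif key == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```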

Roadkill Toll

Idea 1: 

I have this idea about capturing the “roadkill” cost of a given trip (vs e.g. toll or gas). Context: last summer to fall, I spent a lot of time driving between Pittsburgh and NYC, and I was struck by the amount of roadkill I’d see. I was also especially affected because I had spent the year prior to that thinking about the consequences of the Anthropocene and the reconfigurations of our transit infrastructures for automobility. 

I started looking for examples of previous work done on this subject and found these quotes:

  • The great irony of roadkill is this: Its most conspicuous victims tend to be those least in need of saving. Simple probability dictates that you’re more likely to collide with a common animal—​a squirrel, a raccoon, a white-​tailed deer—​than a scarce one. The roadside dead tend to be culled from the ranks of the urban, the resilient, the ubiquitous.
  • But roadkill is also a culprit in our planet’s current mass die-​off. Every year American cars hit more than 1 million large animals, such as deer, elk, and moose, and as many as 340 million birds; across the continent, roadkill may claim the lives of billions of pollinating insects. The ranks of the victims include many endangered species: One 2008 congressional report found that traffic existentially threatens at least 21 critters in the U.S., including the Houston toad and the Hawaiian goose. If the last-ever California tiger salamander shuffles off this mortal coil, the odds are decent that it will happen on rain-​slick blacktop one damp spring night.
  • Astronomical as those numbers for larger animals may be, they pale in comparison with the amounts of insects and other smaller creatures that perish on the road. To get a handle on that, Arnold van Vliet of Wageningen University & Research in the Netherlands and his colleagues devised a citizen science project specifically focused on insect mortality. Drivers were asked to take a daily photograph of all the insects squished on their license plates, record their car’s mileage and then scrub the license plate to start with a clean slate the next day. By extrapolating from the nearly 18,000 dead insects thus tallied, the group came up with estimates that, if extended globally, would mean that 228 trillion insects are killed each year on the world’s 36 million kilometers of roads.
  • Here are the relevant links for the quotes above:

I talked with Nica and we concluded that focusing on insects would perhaps capture the “least visible/highest impact” with a relatively simpler setup. I thought about perhaps capturing the exact moments that different insects collide with my windshield. I still have to think through the exact capture system, but the car environment does help with some of the space/power logistics. One constraint I would like to impose for this project is that I will not intentionally drive around to collect data; rather, I’d collect data through regular use of my car. I would like to think of the capture system as something that could allow people to collect open-source, crowd-sourced data about this topic. Visually, I want to represent the data collected in a way similar to traffic conditions shown on Google Maps. Overall, I hope this project will serve as a first iteration on building a setup for collecting roadkill data on future trips (personally, but perhaps in a more public way as well) and uploading visual representations of the data.

Beyond the Human Eye

I think what I see as the “artistic opportunity” is well-summarized by this quote:

“[Photographic observation] had always revealed objects too small, too fast, too complex, too slow, and too far away to be seen with the eye”—specifically, the human eye.

I’ve been working a bit recently (perhaps following in a wave of work challenging the trends of the Anthropocene) on projects involving the other/more than human (here’s a small example!). I think part of what new “methodological/scientific/scientistic approaches to imaging” allows is a humbling of the human self as we realize the multiplicity of worlds beyond our seeing and imagining. The way we perceive, take in, internalize the world is like one moment of a slit scan. To me, the artistic opportunity here comes from understanding beyond ourselves, and in the process, learning a bit more about our capabilities and boundaries along the way.

Also, unrelated, but does the history of photography introduced at the beginning of the text relate at all to modern gear-head pixel-peeping culture?

Oh the places you’ll go!

You won’t lag behind, because you’ll have the speed.

-Oh, the Places You’ll Go!

Some thoughts while working on this assignment

1. How can we use these tools in ways slightly different from intended? When do these tools not behave as expected/glitch? How can we do this intentionally? What does that reveal about the method of capture?

2. How can I make the method of capture recede in comparison to other considerations? Very media theory…

3. Do I spontaneously capture and look for interesting moments later or do I plan more in advance? Other life events might have helped answer this more than artistic direction…

Methods used: Panorama, Timelapse, Slo-mo

Slight explanation on this one: this was captured while I scrolled through my photos from a summer dance workshop while camping in the mountains.