After our final critique of the Typology Machine project, I was contemplating how I could have delivered the concept in a more efficient and direct way. Some of the feedback mentioned that the 3D captures were not very clear and that the details I wanted to show were not present. Therefore, I wanted to attempt the project again in a different light (no pun intended…).
I have a lot of shoes at home, around 10. Some are more worn out than others. Some I wear more often than the special-occasion ones. I simply wanted to photograph them under different lighting conditions: the original image, an image with only diffused light, an image with only specular light, and an image with UV light.
I was more considerate of the presentation of the typology this time. I initially tried to mask out the shoe but I decided to keep it with the background. I wanted a sense of environment that would be absent if I were to mask it out of context.
For the layout of the images, I took inspiration from paper dice templates–how cardboard boxes look when taken apart. I thought it would be interesting because the concept is that the shoes were photographed inside the shoe box.
When you first click into the pages there is some loading time, but without any further action the images will slowly change from one lighting state to another.
What I find interesting about this set of typologies is that the leather shoes (shoe2) appear almost pitch black, with no details, in the diffused-light and UV-light images, but in the specular image they nearly replicate the figure of the original. Through these changes we can see how the material bounces light almost entirely off its surface. Compare this to the slippers (shoe1), where you can hardly make out the lines of the shoe in the specular-light image.
TLDR: I take an embodied approach by intentionally physically colliding with students on campus who are enraptured by their devices to capture their reactions in real time once they’ve escaped the digital landscape, as a way of revealing how deeply technology influences our awareness and everyday interactions, and how it infiltrates our lives through subtle surveillance.
Research Question
How does cellphone use impact college students’ awareness of their surroundings and interactions with others while walking to class, especially their perception of subtle digital surveillance?
Inspirations
My project was initially inspired by several works brought up at the project’s introduction as examples of what a typology can be. I was drawn to these typologies primarily because they focused on capturing individuals’ emotional reactions in real time within consistent photographic environments; the only thing that changes is the individual depicted. Capturing spontaneous emotions like shock or surprise was something I was dying to emulate in my own work.
Later that week, after Nica and Golan introduced the project, I found campus students on their phones repeatedly running into me, AND an overwhelming majority never said sorry for the collisions. I kept noticing a ridiculous number of people completely absorbed in their devices, visibly bumping into random individuals as they walked. This made me want to illustrate just how embedded phones have become in our lives, especially for young college students, who can’t even put their phones down while walking. Our generation has literally grown up alongside technology’s rapid evolution into incredibly smart handheld devices, and I wanted to explore this phenomenon of our attachment to technology and its silent surveillance over us.
Initially, I wanted to “go big or go home” and capture the paths of individuals on their phones using a drone to create a timelapse of their movements. However, critiques grounded me and encouraged a more interactive approach to capturing the collisions. One critique group reminded me of the effectiveness of using my own body as the medium that collides with these individuals, allowing my body to literally become part of the experiment—a point that Nica heavily encouraged 🙂
After the critiques, my idea shifted drastically: I would use my body to become the obstacle.
I also wanted to gather some data to analyze the project. Tega and the critique groups introduced the element of the qualitative interview, allowing me to ask a single question, “Could I ask what you were doing on your phone just now?”, before walking away to see what happens. This added element of qualitative research was something I was truly SO eager to incorporate, aiming to understand what was actually capturing individuals’ attention and whether there was any overlap.
Then came the question of how to capture this. I grappled with using a camera with a 360-degree lens or strapping one on, wondering how I could subtly film the entire encounter without asking for permission. Ethics was heavily discussed, and since the footage is captured in a public place, it turns out it’s all good: no permission required. After discussing my concerns with Golan, he suggested the PERFECT solution: a Creep Cam.
What is a Creep Cam, you may ask? It’s a small 45-degree prism used by predators on the subway to subtly look up people’s skirts. So, why not repurpose this unfortunately very clever idea in a completely new way?
Now that I had my concept and method of capture, it was time to bring it to life.
How Did I Develop My Machine?
I was going to 3D print mirrored acrylic, but encountered challenges with permission and printing on a weekend. Instead, I used a small mirror from my NARS blush compact, cardboard, black electrical tape, tape to cover 2/3 of my iPhone camera, and my personal phone to achieve the machine. I combined the cardboard with the compact mirror at a 45-degree angle and used black electrical tape to make the machine blend in with my phone. I also made sure that on the day I went to capture, I was wearing a black top, so it was even more difficult to tell that I was subtly filming people. Additionally, I wanted to look as put together as possible, aiming to draw people’s attention upwards rather than downwards. A little manipulative, but a success all the same!
How Did I Develop My Workflow?
Experimentation & a Small Set of Rules
I limited myself to the area between the Cut and the Mall, specifically from near Doherty Hall to the UC. I needed to make sure I wasn’t actively making or forcing a collision; as long as I set myself on a shared path, started walking while looking down at my phone, and captured video of anyone bumping into me or avoiding my path, that was fair game. I wasn’t sure if I needed to stay in place, be more active during rush hours, or walk around the Cut, so I needed to experiment and see.
I wore an empty backpack to fit the part, then started to walk around to see what happened and what individuals said.
Parts of the process that were way more complex than I thought
1. More Time = More Awareness
It turns out that individuals are a bit more aware than I expected because I gave them more time to notice. What does this mean? Walking around the CMU grounds takes some time, and moving from one end of Doherty to the UC gives individuals more time to spot me than I would have liked. Interestingly, even though I provided them with this extra time, most people didn’t care or would purposely walk toward me to assert their presence and make a point of getting off their phones.
2. Green Tape on CMU Sidewalks Controls Traffic Flow
During rush hour, people behaved much like cars on either side of the road; the green tape on the CMU sidewalks made it easier for traffic to flow, creating an unspoken rule of staying in your lane that wasn’t there before. I had much more success colliding with people when I wasn’t fighting with the green tape separating the lanes. I often caught people on the diagonal paths from CFA to CS or from CFA to Doherty, or just in the Doherty area where more people were coming in or out right at rush hour.
3. Big Hallways don’t work!
I also attempted to capture interactions in hallways, but the larger hallways like Doherty and Baker contributed to the “stay in your lane” mentality.
4. Where did most of the bumps occur?
I realized that most of the bumps occurred when I was bombarded during rush hour by a variety of people, or during ridiculously slow times of the day when almost everyone was in class, a time when individuals knew it was okay to let their guard down.
Did I Succeed or Fail?
Overall, I would consider the project successful.
Several people ran into me, many others reacted and walked around to avoid my path, and numerous individuals told me what they were looking at on their phones without questioning what I was doing (except for one person out of 30). This could be attributed to my nonchalant approach when asking the question, but I believe most people were still in a state of shock or confusion, which made it difficult for them to process my question. Even more, most people would turn their phone to show me, a true stranger, what they were looking at!!!!! I may just have a truly trusting demeanor haha! A majority of the individuals who answered my question said “Gradescope” (CMU’s grading portal for exams/quizzes), “checking my grades,” “finding my computer,” “scrolling Instagram,” or “texting my friend.” I would love to continue the project to gather more quantitative and qualitative insights: does time of day affect the answers gathered? Are there more phones out during exam week? More phones in the morning than later in the day? How many individuals will say they were on the same application?
The most fascinating aspect was that almost NO ONE recognized the invisible camera. No one asked me what I was up to or if I was recording them. No one told me I couldn’t do it, simply because they didn’t notice the camera! When I reflect on how I conducted the experiment, I’m completely sure that my behavior was a bit odd: just walking up to someone, asking a question, saying thank you, and then walking away. I was genuinely surprised that no one questioned me. The fact that the camera went unnoticed was a bit scary as it relates to the subtle-surveillance perspective. However, when I shared my project with friends, they said, “That’s my worst nightmare, never do that to me!” The subtlety of the camera and the natural angle of looking down at my phone, which no one noticed, created a strong impact for the piece, emphasizing the subtlety of surveillance and the impact of technology on our daily lives.
Where do I go from here?
I really want to continue this project!!! There are more quantitative/qualitative aspects to play with and analyze, and more reactions of shock/confusion to capture. I think this project, after all this time to test and experiment, would benefit from simply overloading on typology captures, similar to the initial inspirations I mentioned at the beginning. The only hindrance is becoming known as the person waiting to bump into people on the way to class…
In this project, titled Sunday Ingredients, I photographed the objects I touched with my hands throughout the day and arranged them sequentially based on the time of interaction, creating a list of objects. These items constitute my day on that Sunday: The Ingredients of 9.29.2024.
I believe that daily life is made up of countless ordinary, trivial matters, and each of these small events is triggered by an object. For instance, drinking water as an activity might be initiated by a bottle of mineral water, or perhaps by a combination of a cup and a faucet. Therefore, the mineral water bottle itself, or the combination of a cup and a faucet, can imply an action—in this case, drinking water. When we document a series of objects, we can imagine and deduce a sequence of behavioral trajectories. For example, if the table shows remnants such as a candy wrapper, a book, a pair of glasses, and a mouse, we might visualize that the owner of these objects was perhaps sketching while eating candy at some point. However, we cannot definitively state that the owner removed their glasses, read a book, ate a candy, and then played with the mouse this morning. This is because these objects form an overlapping trace of different moments, each leaving behind its own mark.
Desk Composition
The hand is the agent of intentional choice. It serves as the bridge connecting my mind to the outside world. If I were an ant, the hand would be my antenna; if I were an elephant, it would be my trunk. The act of touching an object with the hand represents the embodiment of will. When an object is actively touched by the hand, it reflects the need of that particular moment. Thus, when the touched objects are linked together in chronological order, they record my life. If I touch the faucet, refrigerator door handle, milk, cup, and bread in succession, these objects collectively suggest the event of breakfast.
Breakfast Sequential Items
In this project, I documented the main objects my hands actively touched on Sunday, 9.29, starting from when I woke up until I lay in bed preparing to sleep. I sorted them chronologically and divided them into eight time segments.
The Ingredients OF 9.29.2024.
Practice part 2: Collage
After collecting these objects that reflect my activities throughout the day, I wanted to use these “ingredients” to create a collage that represents my day. I took a photo of my hand from that day and used all the objects I interacted with as materials to collage an image of my hand. This collage, titled Hand of 9/29/2024, symbolizes my day.
HANDS OF 9.29.2024
I took 65 photos of the objects I touched that day, cropped each one into a 5×5 grid of small squares, and obtained a total of 1,625 material images.
65 Ingredient Item Photos
Library Resource
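A minimal sketch of how the tile library could be cut up with Pillow, assuming the 65 photos sit in a folder of JPEGs; the folder names and file-naming scheme are placeholders, not the project’s actual script:

```python
from pathlib import Path
from PIL import Image

TILE_GRID = 5                                   # each photo is cut into a 5x5 grid
SRC = Path("ingredients")                       # folder of the 65 object photos (name assumed)
OUT = Path("library")
OUT.mkdir(exist_ok=True)

for photo in sorted(SRC.glob("*.jpg")):
    img = Image.open(photo)
    w, h = img.size
    tw, th = w // TILE_GRID, h // TILE_GRID
    for row in range(TILE_GRID):
        for col in range(TILE_GRID):
            box = (col * tw, row * th, (col + 1) * tw, (row + 1) * th)
            img.crop(box).save(OUT / f"{photo.stem}_{row}{col}.png")
# 65 photos x 25 tiles each = 1,625 library images
```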
For the target image, I converted it into a grayscale image and divided it into 15×20 small squares. Then, I compared each target square with the material squares in the library one by one and selected the closest match to replace the original square.
In my comparison function, I used OpenCV’s Canny Edge Detection, ORB (Oriented FAST and Rotated BRIEF), and SSIM (Structural Similarity Index Measure) to perform a comprehensive similarity analysis between the target image and images from the library.
Convert the target image and library images into edge maps using Canny edge detection.
Use ORB to compare the similarity between the edge maps of the target image and the library images.
If the ORB similarity score is greater than the set threshold value (e.g., 0.5), then proceed to evaluate further.
Scoring mechanism: Calculate the SSIM value between the grayscale version of the original target image and the grayscale library image. Then, combine the SSIM score with the ORB score using a weighted formula. The weights are yet to be determined and represented as variables for now.
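A minimal Python sketch of this scoring pipeline, using OpenCV and scikit-image. The Canny/ORB parameters, the way the ORB match count is normalized into a score, and the 0.5/0.5 weights are placeholders (the weights were still undetermined), so treat this as an illustration of the logic rather than the project’s exact code:

```python
import cv2
from skimage.metrics import structural_similarity as ssim

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def orb_edge_similarity(a_gray, b_gray):
    # Compare the Canny edge maps of two same-sized grayscale tiles with ORB features.
    edges_a, edges_b = cv2.Canny(a_gray, 100, 200), cv2.Canny(b_gray, 100, 200)
    kp_a, des_a = orb.detectAndCompute(edges_a, None)
    kp_b, des_b = orb.detectAndCompute(edges_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matches = matcher.match(des_a, des_b)
    good = [m for m in matches if m.distance < 40]      # "good match" cutoff is arbitrary here
    return len(good) / max(len(kp_a), len(kp_b))

def match_score(target_gray, lib_gray, w_orb=0.5, w_ssim=0.5):
    # Gate on the ORB edge similarity first, then blend in SSIM with placeholder weights.
    orb_s = orb_edge_similarity(target_gray, lib_gray)
    if orb_s <= 0.5:                                    # threshold from the pipeline above
        return None
    ssim_s = ssim(target_gray, lib_gray, data_range=255)
    return w_orb * orb_s + w_ssim * ssim_s

def best_tile(target_gray, library):
    # library: list of (name, grayscale tile resized to the target square's size)
    scored = [(match_score(target_gray, tile), name) for name, tile in library]
    scored = [(s, n) for s, n in scored if s is not None]
    return max(scored)[1] if scored else None
```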
Machine: Bubble Fisheye Camera & Typology: Selfie with Rich Context
The machine creates portraits extracted from de-warped photographs of soap bubbles, in three steps: (1) Create Bubbles, (2) Capture Bubbles, (3) De-fish Bubbles.
Bubbles are natural fisheye cameras that capture the colorful world around them. If we take a photo of a bubble with a camera, we will accidentally capture ourselves in the bubble. This gave rise to the idea of making a bubble capture machine, and the later interactive design of the project was influenced by portal picture 360 painting. I think fisheye cameras compress the rich content of the real world, and the process of viewing their images opens up the possibility of exploring the scene from every part.
I researched the formula for making giant bubble solution:
If you want to replicate giant bubbles, there are two tips: 1. The prepared solution must be left to stand for 24 hours; the longer it rests, the better the bubbles. 2. Compared with iron wire, cotton and linen rope can hold more solution.
In order to verify the technical feasibility of de-fishing, we used a real mirror sphere for an experiment and compared a Photoshop workflow with a Processing workflow. Photoshop’s Adaptive Wide Angle: (1) edge pixels are lost; (2) customizable parameter adjustment is poor. Processing programming: (1) can work with specific data from the fisheye capture; (2) parameters can be adjusted in a customized way; (3) coding it yourself allows for more interaction. Therefore, I chose Processing. The general idea is to take each output pixel and map it back through 3D space to a 2D fisheye pixel using trigonometric calculations. Thanks to Golan for figuring out the underlying principle with me. We tried different mathematical formulas and finally found the right one!!
Let me talk about the technical points of this pipeline. Bubble photograph settings: (1) autofocus, (2) continuous shooting mode, (3) high-speed shutter. Programmable, adjustable parameters: (1) thetaMax, (2) circle center + radius, which I draw on the canvas dynamically. De-fish criteria: refer to the original curves and straight lines in the real world. Explore interaction: (1) de-fisheye around different focus points based on the location of the user’s mouse, (2) print the parameters.
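The project implemented the mapping in Processing; purely as an illustration, here is a NumPy sketch of the same idea under an equidistant-fisheye assumption: for every output pixel, build the 3D viewing ray, take its angle theta from the optical axis and azimuth phi, and sample the fisheye circle at radius r = R·theta/thetaMax. The parameter values in the usage line are made up.

```python
import numpy as np
from PIL import Image

def defish(fisheye_img, cx, cy, radius, theta_max, out_size=800, fov=np.radians(90)):
    # Remap an equidistant fisheye circle (center cx, cy; radius in pixels;
    # half-angle theta_max) to a rectilinear view with the given field of view.
    # Keep fov comfortably below 2 * theta_max so rays stay inside the circle.
    src = np.asarray(fisheye_img)
    h = w = out_size
    f = (w / 2) / np.tan(fov / 2)                # focal length of the output pinhole view
    xs, ys = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    theta = np.arctan2(np.hypot(xs, ys), f)      # angle of each output ray from the optical axis
    phi = np.arctan2(ys, xs)                     # azimuth of each ray around the axis
    r = radius * theta / theta_max               # equidistant model: radius grows linearly with theta
    map_x = np.clip(np.round(cx + r * np.cos(phi)).astype(int), 0, src.shape[1] - 1)
    map_y = np.clip(np.round(cy + r * np.sin(phi)).astype(int), 0, src.shape[0] - 1)
    return Image.fromarray(src[map_y, map_x])    # nearest-neighbour sampling keeps the sketch simple

# usage (all values are placeholders):
# defish(Image.open("bubble.jpg"), cx=1520, cy=980, radius=900, theta_max=np.radians(80)).save("defished.jpg")
```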
Some objective challenges and solutions. A. Weather: I prepared the bubble water a week in advance and let it rest for 24 hours; however, it rains every day in Pittsburgh. If the wind is strong the bubbles burst, not to mention on rainy days, and capturing the reflection on a bubble demands strong sunlight. So I could only wait for sunny weather. B. Photographing moving bubbles: the first time I took a group of bubble photos, the resolution was sufficient, but the bubbles came out blurred because they were always flying while I focused. Later I used continuous shooting mode, autofocus, and a high-speed shutter, and finally took clear photos of the moving bubbles. C. Bubble shape: fisheye restoration wants the bubbles to be round, but bubbles are affected by gravity, the surface solution is uneven, and with wind disturbances on top of that, a round bubble is really precious. D. Unwarping the fisheye image: fisheye mathematics and Processing. Thanks to Golan & Leo!!! 🙂
Surprise! The fisheye image captured by the bubble machine actually contains two reflections of me. This is because the lower, inverted image comes from the bubble’s rear surface acting as a large concave mirror, while the upper, upright image comes from the bubble’s front surface acting as a convex mirror!!
Some findings: I. Round bubbles are better capture machines! II. Dark environments are better for capturing bubble images! III. Don’t blow bubbles too big, or they will deform! From my perspective,
“Bubbles =Fisheye Camera + Prism Rainbow Filter for Selfie With Rich Context”
Firstly, the surface of the bubble can contain up to 150 uneven film layers, reflecting different colors of sunlight. Secondly, I like this capture machine because of its randomness: shooting angle, weather, and bubble size and thickness all bring unexpected surprises to the final selfie!
Pennies are meant to be non-unique. They are worth exactly one cent because it was decided that they be worth exactly one cent. They are the smallest form of currency in America, and so must be thought of as entirely homogeneous and uniform. In reality, though, pennies are anything but. We’ve all experienced getting a particularly grimy penny as change, maybe corroded, maybe green, maybe scratched up and tarnished, then, after taking a quick glance at its year, dropping it in a tip jar or in a pant pocket, never to see the light of day again. We don’t generally get the chance to see the entire visual space pennies can occupy, and appreciate their weirdnesses and differences across their entire aesthetic spectrum. Hence, Penny Space.
I scanned 1,149 pennies with a high-resolution scanner, then wrote Python scripts to crop them, embed them into vectors, and dimension-reduce those vectors into the 2D grid you see above.
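A minimal sketch of one way such a pipeline can look, assuming downsampled pixels as the embedding vectors, UMAP for the 2D reduction, and a RasterFairy-style linear-assignment step to snap the scatter onto a regular grid; the folder name and the choice of embedding are assumptions, not necessarily the scripts actually used:

```python
from pathlib import Path

import numpy as np
import umap                                          # pip install umap-learn
from PIL import Image
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

paths = sorted(Path("pennies_cropped").glob("*.png"))    # cropped penny scans (folder name assumed)
vecs = np.stack([
    np.asarray(Image.open(p).convert("L").resize((32, 32)), dtype=np.float32).ravel() / 255.0
    for p in paths
])

# 2D embedding: visually similar pennies should land near each other
xy = umap.UMAP(n_components=2, random_state=0).fit_transform(vecs)
xy = (xy - xy.min(0)) / (xy.max(0) - xy.min(0))          # normalize to the unit square

# Snap the scatter onto a regular grid (RasterFairy-style) via the Hungarian algorithm
side = int(np.ceil(np.sqrt(len(paths))))
gx, gy = np.meshgrid(np.linspace(0, 1, side), np.linspace(0, 1, side))
cells = np.stack([gx.ravel(), gy.ravel()], axis=1)[: len(paths)]
rows, cols = linear_sum_assignment(cdist(xy, cells))     # one penny per grid cell
placement = {paths[r].name: tuple(cells[c]) for r, c in zip(rows, cols)}
```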
So, what’s my project? Alright, well, the first part of it was spending a lot of time nitpicking over ideas.
I knew I wanted to do something with sound, and then I knew I wanted to do something on technophony. If a soundscape is composed of biophony, geophony, and anthrophony, then technophony is electro-mechanical noise, a subcategory of human noise.
To me, technophony seems to fall into two categories: background-noise noise pollution, and sounds meant to communicate with humans. For example, a ventilation drone versus a synthetic voice or a beep.
The first is interesting to me because of how subtle and unnoticed yet constant and invasive it is. The second is interesting because by giving machines sensors and reactive cues, they gain a sense of agency where they otherwise shouldn’t have any (language is typically considered a trait only of things with sentience). If these two ideas are combined, you’re presented with a world where you’re constantly engulfed in sentient actors that are completely invisible.
There’s a fair point that centralized A/C isn’t particularly human-reactive or communicative: it only senses room temperature, and it doesn’t have a voice. However, a lot of A/C units cycle on and off in a pattern, which creates an image where you are inside of a giant thing that is breathing very, very slowly. There are other things like this: streetlights that turn on and off at dusk and dawn have nocturnal sleep-wake cycles.
What I’ve ended up doing — I’ve been trying to get speech-to-text transcription to work on technophonic noises.
An extremely subtle sound is indistinguishable from room tone, and it feels like I am not recording any one specific technophonic source.
Okay, so I can abandon interviewing A/C units and try particularly clangy radiators or faulty elevators. Great, yup.
Whisper (Python library) transcriptions give wildly different results on the same file: the input and the process are both random. Bunk.
Vosk (Python library) transcriptions vary little enough to count as an actual methodology, but that means it’s too good at filtering out anything that isn’t human speech.
Where I’m actually at:
I can generate spectrograms (librosa).
I can transcribe a file via Whisper or Vosk to a .txt file.
I can input an audio file, output a .mp4 with captions time-stamped to word-level (via vosk in videogrep + moviepy).
Getting word-level timestamps out of a Whisper .json fucking sucks, dude.
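One possible escape hatch, noted here as a sketch rather than a tested fix: newer releases of the openai-whisper package can return word-level timestamps directly from transcribe(), which may sidestep the .json wrangling entirely. The model size and file name below are placeholders.

```python
import whisper   # pip install openai-whisper; word_timestamps needs a 2023+ release

model = whisper.load_model("base")                                   # model size is a placeholder
result = model.transcribe("radiator.wav", word_timestamps=True)      # file name is a placeholder

for segment in result["segments"]:
    for word in segment.get("words", []):
        print(f'{word["start"]:7.2f}  {word["end"]:7.2f}  {word["word"]}')
```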
Initially, I took interest in polarizer sheets, which let me separate diffused light from specular light. I was particularly interested in the look of an image with only the diffused light.
(Test run image: Left (original), Right (diffused only))
My original pipeline was to extract materials from real-life objects and bring them into a 3D space with the correct diffuse and specular maps.
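As a rough sketch of how the two polarized captures can be combined numerically (not the pipeline actually used here): under the usual assumption that a cross-polarized photo contains only diffuse light while a parallel-polarized photo contains diffuse plus specular, and that both frames were shot from a tripod so they align, the specular map is approximately their difference. File names are placeholders.

```python
import numpy as np
from PIL import Image

# Two aligned captures: polarizer crossed (diffuse only) and parallel (diffuse + specular).
cross = np.asarray(Image.open("cross_polarized.jpg"), dtype=np.float32)
parallel = np.asarray(Image.open("parallel_polarized.jpg"), dtype=np.float32)

specular = np.clip(parallel - cross, 0, 255).astype(np.uint8)   # what the crossed polarizer removed
diffuse = cross.astype(np.uint8)

Image.fromarray(diffuse).save("diffuse_map.png")
Image.fromarray(specular).save("specular_map.png")
```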
After our first crit class, I received a lot of feedback on my subject choice. I could not think of one that I was particularly interested in, but after seeking advice, I grew interested in capturing flowers and how some of them display different patterns when absorbing UV light. Therefore, I wanted to capture and display the UV coloration of different flowers that we normally do not see.
I struggled mostly with finding the “correct” flower. Other problems with my subject choice were that flowers wither quickly, are very fragile, and are quite small.
(A flower with the bullseye pattern that I found while scavenging the neighborhood, but it withered soon after.)
After trying different programs to conduct photogrammetry, RealityScan worked the best for me. I also attached a small piece of polarizer sheet in front of my camera because I wanted the diffused image for the model; there was not a significant difference since I couldn’t use a point light for the photogrammetry.
Here is the collection:
Daisy mum
(Normal)
(Diffused UV)
Hemerocallis
(Normal)
(Diffused UV)
(The bullseye pattern is more visible with the UV camera)
Dried Nerifolia
(Normal)
(Diffused UV)
My next challenge was to merge two objects with different topology and UV maps so that one model carries two materials. Long story short, I learned that it is not possible… :’)
Some methods I tried in Blender were:
Join the two objects as one, bring the two UV maps together, then swap them
Transfer Mesh Data + Copy UV Map
Link Materials
They all resulted in a broken material like so…
The closest I got to the intended result was this, which is not terrible.
Left is the original model with the original material; middle is the original model with the UV material “successfully” applied; and right is the UV model with the UV material. However, the material still looked broken, so I thought it was best to keep the models separate.
This final version is made using a small recorder from Amazon, ffmpeg, and Sennheiser’s AMBEO Orbit.
Around 240 clips were extracted by setting a minimum length and dB level, so that ffmpeg pulls every clip louder than -24 dB and longer than 4 seconds (see the sketch below). From there I manually sorted through them to find 160 that are actually garbage bags falling. Some clips are loud but are not garbage bags falling, such as the rustling of the Ziploc bag as I taped the recorder to the chute.
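A minimal sketch of how that extraction can be scripted around ffmpeg’s silencedetect filter; the file names, the 0.5-second silence gap, and the parsing details are assumptions rather than the project’s actual script:

```python
import re
import subprocess

SRC = "chute_recording.wav"                     # file name is a placeholder

# Ask ffmpeg where the silence is (anything quieter than -24 dB for at least 0.5 s).
probe = subprocess.run(
    ["ffmpeg", "-i", SRC, "-af", "silencedetect=noise=-24dB:d=0.5", "-f", "null", "-"],
    stderr=subprocess.PIPE, text=True,
)
loud_begins = [0.0] + [float(m) for m in re.findall(r"silence_end: ([\d.]+)", probe.stderr)]
loud_ends = [float(m) for m in re.findall(r"silence_start: ([\d.]+)", probe.stderr)] + [None]

# Loud stretches run from each silence_end to the next silence_start; keep the ones >= 4 s.
clips = [(s, e) for s, e in zip(loud_begins, loud_ends) if e is None or e - s >= 4.0]

for i, (start, end) in enumerate(clips):
    cmd = ["ffmpeg", "-y", "-i", SRC, "-ss", f"{start:.2f}"]
    if end is not None:
        cmd += ["-to", f"{end:.2f}"]
    subprocess.run(cmd + [f"clip_{i:03d}.wav"])
```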
Then the clips are concatenated from lowest dB level to highest. I used a Google text-to-speech AI plugin with ffmpeg to insert a robot voice saying the clip number before each clip.
AMBEO Orbit is a VST plugin that can be used inside Audition to imitate the effect of a binaural recorder. I manually edited the concatenated clip to achieve a similar effect.
All scripts were written with the help of ChatGPT and YouTube tutorials.
I also tried versions with surround sound, a binaural effect using pydub, concatenation from shortest to longest, and concatenation without the robot voice counter. You can find them and the Python scripts I used here:
I followed tutorials on YouTube to train a TensorFlow model with the 160 recordings of bags falling and around 140 clips of silence and other loud-but-not-bags-falling sounds. It is meant to learn from spectrograms like this:
👆example of my result from the preprocessing function of the training model
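A minimal sketch of that preprocessing step, in the style of TensorFlow’s simple-audio example (assuming 16 kHz mono WAV clips); the exact function used for training here may differ:

```python
import tensorflow as tf

def load_wav_16k_mono(path):
    # Decode a WAV file into a float32 waveform (assumes 16 kHz mono clips).
    file_contents = tf.io.read_file(path)
    wav, _ = tf.audio.decode_wav(file_contents, desired_channels=1)
    return tf.squeeze(wav, axis=-1)

def get_spectrogram(waveform):
    # Short-time Fourier transform -> magnitude spectrogram with a channel axis,
    # the input shape that tutorial-style CNN audio classifiers expect.
    stft = tf.signal.stft(waveform, frame_length=255, frame_step=128)
    spectrogram = tf.abs(stft)
    return spectrogram[..., tf.newaxis]
```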
I got a model, but it failed to recognize the sound of garbage bags falling in a 2-hour-long recording (which I had left out of the training data).
Due to the time limit, I don’t yet understand the vocabulary TensorFlow uses (like these), which could be the issue:
It may also be the training clips I used that are not well chosen. This will take more time to learn and change.
3. Graphing
I was able to use Matplotlib with ffmpeg to dynamically graph the timestamps of the video at which the garbage bags were falling. This would work better if the learning model could successfully distinguish garbage bags from other sounds.
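A minimal sketch of that kind of timeline plot with Matplotlib; the timestamps here are made-up placeholders, not real detections:

```python
import matplotlib.pyplot as plt

bag_times = [312, 1180, 2044, 3721, 5010]     # seconds into the 2 h recording (made-up values)

fig, ax = plt.subplots(figsize=(10, 1.5))
ax.eventplot(bag_times, orientation="horizontal", colors="k")   # one tick per detected drop
ax.set_xlabel("seconds into recording")
ax.set_yticks([])
ax.set_title("garbage bag drops")
plt.tight_layout()
plt.savefig("bag_timeline.png")
```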
4. Anaglyph binaural plugin
This is also a VST3 plugin that has a few more features than AMBEO Orbit, but it did not show up in Audition or GarageBand after installing.
I document, through photos and geolocation, and burn digital incense for the roadkill (lives sacrificed to the current automotive-focused vision of human transit) encountered during routes driven in my car.
Inspiration:
From last summer into fall, I spent a lot of time driving long distances, and I was struck by the amount of roadkill I’d see. It strikes me every time I’m on the road because of several projects I’ve been working on these past two years around the consequences of the Anthropocene and the reconfiguration of our transit infrastructures for automobility.
Workflow
Equipment:
Two GoPro cameras (64 GB+ cards recommended), two GoPro mounts, two USB-C cables for charging
Tracking website: a webpage to track the locations where you’ve burned digital incense for dead animals. I made this webpage because I realized I needed a faster way to mark geolocation (GPS logging apps didn’t allow left/right differentiation, and voice commands were too slow) and to indicate whether to look left or right in the GoPro footage. A webpage also meant I could easily pull it up on my phone without additional setup.
Blood, sweat, and tears?
I set up the two roadside GoPro cameras mounted to the left and right sides of the car to film the road on both sides. The cameras are set to a minimum of 2.7K at 24 fps and a maximum of 3K at 60 fps.
My workflow is basically:
1) appropriately angling/starting both cameras before getting into the car
CONTENT WARNING FOR PHOTOS OF ANIMAL DEATH BELOW
Sample Images:
Test 1 (shot from inside of car, periodic photos–not frequent enough)
Memo: Especially after discussing with Nica, I decided to move away from front-view shots since they’re so common and you don’t get as good a capture.
Test 2 (shot from sides of car)
Memo: Honestly not bad, and it makes the cameras easier to access than putting them outside, but I wanted to capture the road more directly and avoid reflections and stickers in the window, so I decided to put the cameras outside.
Test 3: Outside
The final configuration I settled on. I did aim my camera down too low at one point and believe I missed some captures as a result. It was generally difficult to find a good combination of good angle + good breadth of capture. Some of these photos are quite gory, so click into the folder of some representative sample images below at your own caution.
2) using the incense burning webpage to document when/where I spotted/”burned incense” for roadkill
3) stopping the recording as soon as I end the route (and removing the cameras to place safely at home)
4) finding the appropriate images by locating the corresponding time segment of the video (a sketch of this step is below). For now, I’ve also used Google My Maps to map out the routes and places along the routes where I’ve “burned digital incense” in this Roadkill Map. CW: Animal Death
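A minimal sketch of step 4, assuming the webpage log stores an ISO timestamp for each “burn incense” press and the GoPro start time is known; the file names, times, and left/right naming convention are placeholders:

```python
import subprocess
from datetime import datetime

recording_start = datetime.fromisoformat("2024-10-05T14:02:11")   # when the GoPro started recording
press_time = datetime.fromisoformat("2024-10-05T14:37:48")        # "burn incense" press from the webpage log
side = "right"                                                    # which button was pressed

offset = (press_time - recording_start).total_seconds()
subprocess.run([
    "ffmpeg", "-ss", f"{offset:.1f}", "-i", f"gopro_{side}.mp4",
    "-frames:v", "1", f"roadkill_{side}_{int(offset)}s.jpg",
])
```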
Key principles:
A key principle driving this work is that I only collect data for this project on trips that I already plan to make for other reasons, because by driving, I’m also contributing to roadkill (inevitably, at minimum, insects, which were not included in this project due to time/scope). This means both that each route traveled is more personal and that I am led to drive in particular lanes and at particular speeds.
Design decisions:
My decision to add the element of “burning incense” for the roadkill I encountered arose out of three considerations:
1) acknowledging time limitations–algorithmic object detection simply takes too long on the quantity of data I have and I found the accuracy to be quite lacking
2) wanting a more personal relationship with the data: I always notice roadkill (also random dogs 🫰) when I drive, and I wanted to preserve something of that encountering, and the sense of mourning that drove this project, through a manual/personal approach to sifting through the data. Further, when I was collecting data with the intention of sifting through it algorithmically for roadkill, I found myself anxious to find roadkill, which completely started to distort how I wanted to view this project.
3) due to both the limitations of detection models and my own visual detection (also in some ways it’s not the safest for me to be vigilantly scanning for roadkill all the time), I inevitably cannot see all roadkill (especially with some being no longer legible as animals due to being flattened/decayed). This means I cannot really hope to accurately portray the real “roadkill cost”—at best, I can give a floor amount. However, through the digital incense burning, I can accurately portray how many roadkill animals I burned “digital incense” for.
Future Work
In many ways, what I built for the Typology Machine project is only the beginning of what I see as a much more extensive future of data collection. Over time, I’d be curious about patterns emerging, and as I collect more data, I would be interested in constructing a specific training data set for roadkill. Further, I’d like to think about how to capture insects as well as part of future iterations (they would probably require a different method of capture than GoPro). I also think that for the sake of my specific project, it would be nice to have a scrolling window filming method–such that things are filmed in e.g. 5 second chunks and when the Left/Right button is pressed, the last 5 seconds prior to the button press are captured/saved, allowing for higher resolution and FPS without taking up an exorbitant amount of space. It would also be interesting to look through this data for other kinds of typologies–some ideas I had were crosses/shrines at the sides of the road, interesting road cracks, and images of my own car caught in reflections. I would also like to build out the design of the incense burning website more and create my own map interface for displaying the route/geolocation/photo data–which I didn’t have much time to do this time around.