In my work, I want to depict the degradation of memories over time despite the seeming permanence of technology. Out of Memory is intended to turn a series of technical failures into something meaningful and aesthetically pleasing.
Part of the human condition is to have emotional attachments to mortal beings and ephemeral situations, which compels us to avoid loss by all means possible. Our collective fear of loss begets unique features of the human psyche, like the fear of dementia and obsession with documentation. Now more than ever, photography is seen as a tool for ‘capturing’ memories in ‘high fidelity,’ ‘lossless’ formats. But unless we were to create a simulation of equal size to our universe, a virtual space will always be a compressed version of its real-life counterpart. Like the human brain, which transforms memories with every re-encoding and imbues them with the context of their recollection, there is no perfect way to digitally recover a memory. Having experimented with photogrammetry over the course of my time in Experimental Capture, I realized that this method is a good visual and conceptual representation of my ideas about memory.
The three subjects I illustrated in this project are intended to represent three facets of emotional connection: the construct of the self (Self Portrait), relationships and environments (The Late Spring), and personal/family histories (Light room). The first is an attempt from back in February to make a 3D model of myself using photogrammetry. Due to my imperfect technique, the model was full of aberrations and my head split into four. The second image documents my attempt to model my housemate striking a Tae Kwon Do form in the backyard of our house in Vermont, a place I never would have stayed for so long if it weren’t for the pandemic. The third shows the favorite stuffed animals of my little brother, who died when I was 9, posed in what used to be his bedroom. The room has since been renewed by years of my other little brother’s life there and neutralized by generic guest room decor once he moved out, but the emotional presence remains. The room is full of bright, reflective surfaces that caused the model to glitch and shafts of light to take physical form.
Below, you can see the original, uncolored pen drawings I made from the photogrammetry. Below that are screenshots of the photogrammetry models themselves.
I’m not sure if this counts as ‘app misuse’, but we did use a photography technique in an unexpected way. I had my housemate Katy, who is a Tae Kwon Do first dan (the first level of black belt), and friend Rachel*, who is a fourth dan (almost instructor level), do various forms with glowing nunchucks that we ordered online. Rachel was able to critique Katy’s form from the photos alone, so we essentially invented a weird and laborious critique system.
This was my first time trying long exposure photography, so the photos are not in focus — it was difficult enough to get the shutter speed and aperture right. Katy has ordered more glowing nunchucks in different colors, so the next time I do this I’ll try to get the focus right. We also didn’t have a perfectly dark background for the first few photos, so we went to the abandoned quarry (hence the stripy pattern in some of them.)
If I had more time this week, I would’ve wanted to trace the results in Adobe Illustrator to make graphic ‘logos’. Though the photo is noisy, I especially love the script-like form of the image above.
I came across a 3D photo inpainting demo from researchers Shih et al. via a Twitter post from ML artist Gene Kogan. The neural networks involved create a depth map of the input, then synthesize new color and depth content to (somewhat) realistically fill the spaces between. As Kogan’s experiment shows, it even works on artificially-created images like Deep Dream output.
I wanted to use their Colab to restore the dimensionality of old family photographs. This photo is particularly interesting to me — I think it shows something about love, home, and maybe privilege. Beyond the initial shock factor of bringing these two-decade-old moments ‘to life,’ I thought it was interesting how the algorithm failed to do so perfectly. For example, it couldn’t understand the simple checkered pattern on my mom’s shirt and distorted it. The profile of her face is also imperfect, making the scene feel more like a paper cutout. The kitchen is also a little wobbly, as if I’m viewing the scene through thick glass. As usual, ML has created a visual equivalent of the distortion and degradation of memories in the human brain (… insert my BHA Capstone here.)
Due to my frustration with my output in this class, art in general, and life, I am definitely behind schedule with my final project. Ultimately, I’ve decided to switch directions and work on something that may be less interesting as a capture method, but is more meaningful to me personally (and will hopefully be more meaningful to the viewer as well.)
Over the past couple weeks, I had my roommate Katy pose several times for Tae Kwon Do photogrammetry. We tried it with 5 people (the entire isolation pod) taking photos from different angles, but this led to background noise that reduced Katy herself to a sad blob. This is where I was at last Thursday, from which point I’ve worked on three further plans.
Plan A: A few days later we tried again with two photographers, which worked a lot better but still wasn’t perfect. The model of Katy’s body lacked detail and the nunchucks she was holding never rendered correctly. Even running Metashape on her gaming GPU with 16GB of RAM, there was not enough memory to attempt a higher-quality mesh. I’m considering attempting to model in the nunchucks myself using Maya.
Plan B: I tried extracting frames from a 20-second video of Katy holding a kick (I had to learn FFMPEG to do this, and command line tools are scary), which output over a thousand high-quality images. Metashape thought long and hard about these, but it cut off Katy’s head. Overall, this process has been frustrating because I’ve made decent photogrammetry of my own face before, so I don’t know what I’m doing wrong this time. Maybe the lighting was optimal in the STUDIO but not here, or maybe Katy was shifting her balance slightly, which can’t really be helped.
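For reference, the frame-extraction step is a one-line FFMPEG command along these lines (the file names, output pattern, and quality setting here are placeholder examples, not the exact ones I used):

```shell
# Dump every frame of the clip as a high-quality JPEG.
# -qscale:v 2 keeps JPEG compression near-lossless; %04d numbers the
# output files frame_0001.jpg, frame_0002.jpg, ...
mkdir -p frames
ffmpeg -i kick_video.mp4 -qscale:v 2 frames/frame_%04d.jpg
```

A 20-second clip at ~60 fps yields over a thousand frames; if that overwhelms Metashape, a filter like `-vf fps=10` thins them to a fixed rate.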
Plan C: The content I am interested in working with is my old family video and photos. I want to create a virtual space representing what my little brother’s room would have looked like in early 2008. I’ve been experimenting with the Colab notebook for 3D photo inpainting, which produces awesome images, but the actual 3D objects it produces are fairly useless height maps. Therefore, the space will incorporate two techniques. First, I will ask my parents to take photogrammetry-style pictures of my brother’s old stuffed animal dog. No matter how fucked up the model is, I will use it. In fact, it’ll be better if it’s a complete mess. I’ll also incorporate illustration by attempting to redraw the rug he had, which had train track graphics on it, and the rest of his stuffed animals. Overall, I hope this combination of 3D and 2D elements would make for an interesting virtual space.
I will be making several fairly lightweight offerings for the remainder of the semester, unless one of the experiments works well enough to turn into a legitimate final project.
These will include:
Film my girlfriend cooking with the mini heat camera.
Experiments with my housemate and another member of our isolation pod who are Tae Kwon Do black belts.
We tried doing photogrammetry with the other five of us taking pictures/videos while she held a kick, but the results were very messy even after Metashape spent two days processing. I am going to try again ASAP with just me taking photos.
She ordered glow-in-the-dark nunchucks so this weekend we’ll try some long-exposure nunchuck painting.
Try to produce better/more interesting results with the slit-scan program I wrote in p5.js. This may have more to do with my choice of video input than improving the program itself.
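The core of a slit-scan can be sketched independently of p5.js: copy one fixed column (the “slit”) from each incoming video frame into successive columns of the output, so the horizontal axis becomes time. Below is a minimal illustration of that logic in plain JavaScript; the frame representation and the `slitScan` helper are simplified assumptions, not my actual p5.js code:

```javascript
// Slit-scan core: frames[t][x] is pixel column x of frame t.
// The output takes the column at the fixed slit position from each
// frame, so output column t shows what the slit saw at time t.
function slitScan(frames, slitX) {
  // One output column per input frame, each copied from the slit.
  return frames.map(frame => frame[slitX]);
}

// Tiny example: three 3-column "frames" of single-number pixels.
const frames = [
  [[1], [2], [3]],
  [[4], [5], [6]],
  [[7], [8], [9]],
];
console.log(slitScan(frames, 1)); // the middle column of each frame
```

In p5.js the same idea can be done with `copy()`, blitting a one-pixel-wide strip from the video capture into the next canvas column on every `draw()` call.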
What I really want to do is make something with the digitized VHS tapes from my childhood, but I am stuck on what form it will take.
Photogrammetry didn’t work very well because I couldn’t find a clip where everyone was staying still as the camera moved.
I also tried the Topaz video upscaling app because I thought it would be interesting to have the algorithm recreate the memories more vividly. Unfortunately, it only made the artifacts worse (which is kind of poetic but didn’t look so great.)
Any suggestions are welcome. I feel like this is a valuable source with high emotional valence (at least to me) but I don’t know what to do with it.
Mostly, I just want to make something nice before I graduate 🙁
I made an infographic of my girlfriend’s scars from being a sous chef. She also has a gnarly scar further up her arm from trying to stand on a plastic reindeer as a kid. She fell off the reindeer and impaled herself on the antlers 🙁
Documentation of a walk around downtown Burlington, Vermont on a rare 60° day (it’s snowing now.) It’s even emptier than usual as most of the businesses are closed. I wanted to capture the sense of abandonment and muted color palette. I also like seeing how ‘beautiful’ images are created by humans attempting to reconstruct nature.
I tried to take a self portrait in my girlfriend’s glasses as we played Animal Crossing. However, there wasn’t enough contrast between the TV and the ambient lighting (it was dark outside) to see anything other than the silhouette of my hand.
Experiment 2: Skate Park Teapot
We thought it looked like a fisheye skate park video. Unfortunately, the teapot is a little too dirty.
Experiment 3: Yet Another “Funny” Zoom Virtual Background
I wish I had a better webcam so this looked less chaotic, but it’s still funny.