I was captivated by the music video from Oren Lavie, who used pixilation to create a beautiful animation. I think it would be interesting to combine Piero Glina & Martin Borst's CMYK layered photography with pixilation to create an animation that overdramatizes movement but still has a stop-motion feel. The process would be to create something like a time-lapse of the subject, but to merge every four frames into a single frame in the CMYK layered style. It would be interesting to apply this to dogs put in the situation of having to do tricks before getting treats. Their movement would be over-exaggerated, which might enhance some of their mood and emotions.
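The four-frames-into-one idea above could be prototyped in a few lines. This is a minimal sketch, assuming a naive CMYK-to-RGB conversion and my own function names (`cmyk_composite`, `layered_timelapse`), not a description of the artists' actual process:

```python
import numpy as np

def cmyk_composite(frames):
    """Merge four consecutive grayscale frames (H x W, values 0-255)
    into one RGB frame by treating each frame as the C, M, Y, and K
    plane of a layered CMYK image."""
    assert len(frames) == 4
    c, m, y, k = [f.astype(np.float32) / 255.0 for f in frames]
    # Naive CMYK -> RGB conversion.
    r = 255 * (1 - c) * (1 - k)
    g = 255 * (1 - m) * (1 - k)
    b = 255 * (1 - y) * (1 - k)
    return np.stack([r, g, b], axis=-1).astype(np.uint8)

def layered_timelapse(frames):
    """Group a time-lapse's frames in fours and composite each group
    into a single stylized output frame."""
    return [cmyk_composite(frames[i:i + 4])
            for i in range(0, len(frames) - 3, 4)]
```

Because each output frame blends four moments in time, any movement between captures shows up as colored fringing, which is roughly the overdramatized-motion look described above.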
Category: Projects
Pollinators: temporal capture idea
Inspired by David Rokeby’s Plot Against Time series, I thought it would be interesting to apply this “long exposure” (or what looks like video processing) idea towards the subject of pollinators in my community farm’s herb garden (Garfield Community Farm).
In particular, I’m interested in a (slightly) more micro scale than Rokeby’s typical camera position. One photographer’s work, David Liittschwager’s “biocubes,” provides a compelling form factor for studying the motion of pollinators in one small space. The diversity of pollinators in an herb garden is critical to that garden’s health, and an abundance of pollinators (bees, flies, other insects) is best appreciated up close; a ton of movement and activity happens in the herb garden in a very small space.
Liittschwager’s study of life in one cubic foot, coupled with Rokeby’s video processing for movement, could provide an interesting look into the pollinators of Garfield Community Farm’s herb garden.
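The “long exposure” processing could start very simply: integrate a biocube-framed video over time. This is only a sketch of two basic compositing modes (the function names are mine), not Rokeby's actual technique:

```python
import numpy as np

def long_exposure(frames):
    """Average a stack of video frames into a single 'long exposure'
    image. Fast-moving pollinators blur into faint trails while the
    static herb garden stays sharp."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0).astype(np.uint8)

def motion_trails(frames):
    """Keep the brightest value each pixel ever saw, so bright moving
    insects leave solid trails instead of averaging away."""
    return np.stack(frames).max(axis=0)
```

Averaging suits long, slow accumulation over hours; the max-composite suits short bursts where individual flight paths should stay visible.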
Olivia’s Temporal Capture Idea
As I watched the lecture and scanned through the lecture pages, I kept thinking about how these methods could be used right now. My brain, like probably most people’s, is preoccupied by the emotions going on right now. With the capture methods of time-lapse, slow motion, and photogrammetry, I kept thinking about how they could be used to capture the emotions, thoughts, and actions of people going through this point in history. I do not know what it would reveal, but the layers of emotion many people seem to be wearing right now are interesting and could reveal how individuals are processing and coping.
My main idea is to make a music video which follows either one or multiple people as they do their normal day to day activities but then something is done to reveal other layers of emotion that exist. Either slowing down on elements of their face that hint at other emotions or using photogrammetry to copy their head then explore what is happening inside using Unity.
Temporal Capture Idea – Flickering Human
I’ve seen videos like this Tetris video before, but never knew that this form of video was known as pixilation. Another cool video I stumbled upon recently was this Gatorade ad, where they were able to trigger water droplets falling to create the shape of a human in a running pose, and stitch these poses together to make a human walk cycle.
I think it would be cool to have a capture system that stitches together frames of people moving left to right in a walk cycle, but with each frame switching to a taller person. The person could be walking at a three-quarter angle away from the camera, so that they appear to get smaller; yet since the capture system would switch to frames of taller people, their height would stay relatively constant throughout.
Above is a sketch of the mechanism: each person would have a walk-cycle video of them stepping on the same footholds (for alignment purposes), and in post-processing a frame from each person can be taken to create the walk cycle.
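The post-processing step described above amounts to interleaving aligned clips. A minimal sketch, assuming each person's clip is already foothold-aligned and the clips are ordered shortest to tallest (the function name is hypothetical):

```python
def interleaved_walk(cycles):
    """Given a list of aligned walk-cycle clips (one per person, each a
    list of frames shot on the same footholds), build an output clip
    that advances one frame per step while cycling to the next
    person's footage on every frame."""
    n_frames = min(len(c) for c in cycles)   # stop at the shortest clip
    out = []
    for i in range(n_frames):
        person = i % len(cycles)             # switch person each frame
        out.append(cycles[person][i])        # frame i of that person's clip
    return out
```

With a finite cast the heights have to cycle rather than grow forever; a longer roster of people sorted by height would stretch out the illusion before it repeats.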
Temporal Capture Idea
With a lot of these examples, we’re able to understand, at least somewhat, what we’re looking at. I found it interesting that with the drill cam, one won’t really have any idea what they’re looking at unless they see the video footage right before the camera started spinning. Since viewers are limited to using color alone to identify the scene, I was curious about what would happen if I asked people to identify a scene based on what they thought the color “looked” like.
My next thought was to wonder: if I took a series of videos of different living communities, would anyone be able to pinpoint which was which? For example, a gated community in an affluent neighborhood versus a run-down apartment block in the ghetto. My guess is the former would be lusher and greener while the latter would have more muted tones. I realize this isn’t a very realistic project, but I think it could yield some surprising results about what we think communities look like.
Person in Time – Julie’s Closet
I have an incredible friend in the costume design program, Julie Scharf, and she is an artwork in herself. She is incredibly dedicated to her vintage clothing collection, the history and practice of performance costume, and queer imagery in the entertainment industry–and since seventh grade, she has not worn the same outfit twice.
She as a stylist, I as a photographer, and both of us as queer artists have partnered on an indefinitely long project of creating a critical photographic anthology of queer costume. This is not nearly a detailed enough description of it, but the idea is still in development and we don’t want to reveal too much about it yet.
However, for about a month now, I have been photographing Julie’s outfits and her accompanying performances on many days of the week. Some of my favorites so far:
I would like to use the Person In Time project to create a work that would contribute to this larger project. The relevant “time” component here is that we are documenting Julie over time, which is in itself based on the historical timelines of costume, queerness, and performance. Julie and I are interested in expressing our ideas non-traditionally (media more queer than photography), so ExCap provides a perfect opportunity to start.
First, I would be most excited to computationally create my own slit-scan camera and take strange images of Julie and her outfits with it. This was inspired by Golan’s description of my last project as slit-scanned spatiotemporal sculptures. I wasn’t exactly sure what he meant by “slit-scanned,” so I looked it up, and I am absolutely obsessed with it. Slit-scanning is essentially a long-exposure photography technique, except instead of layering entire frames taken over time on top of each other, mere slits of each frame are captured and stitched chronologically left to right. This is photography of time, not space. The images below are just a few of the incredibly beautiful applications of this technique.
Since I’d be making the camera myself, I see the potential for a lot of experiments as well: I could order the slits left to right, as is normally done, but I could also go right to left, up and down, and randomly, to name a few.
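The core of a computational slit-scan camera is small. A minimal sketch, assuming frames arrive as numpy arrays (e.g. from an OpenCV capture loop); the function name and the `order` flag for the reversed-time experiment are my own:

```python
import numpy as np

def slit_scan(frames, slit=None, order="ltr"):
    """Sample one fixed pixel column (the 'slit') from every frame and
    stitch the samples side by side, so the horizontal axis of the
    output image is time rather than space.
    frames: list of H x W (x 3) arrays from a video."""
    h, w = frames[0].shape[:2]
    slit = w // 2 if slit is None else slit   # default: center column
    cols = [f[:, slit] for f in frames]       # one column per moment
    if order == "rtl":                        # experiment: reverse time
        cols = cols[::-1]
    return np.stack(cols, axis=1)
```

Vertical or randomized orderings fall out the same way: sample rows instead of columns, or shuffle `cols` before stacking.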
My second idea is to create a video like Kylie Minogue’s Come Into My World. I don’t know how I’d be able to do this easily without the precision of the robot arm, so I guess I’d program the robot arm to film videos of Julie doing different performances in an exact circle and layer them on top of each other.
Finally, I also think it would be interesting to document Julie’s outfits with photogrammetry instead of regular photography, perhaps suspending them from the ceiling with string (which I could remove in post-production) to get 3D versions of this:
Person in Time Early Ideas
[Draft – two main ideas]
- Isolating interactions with specific objects, such as the manipulation of keys or the peeling of an orange, and digitally removing the objects from the final capture. I’m specifically interested in eating (moving objects from the world to a specific point on your body) and in focusing intensely on the minute motions of how our hands and bodies manipulate our environment.
- I was also thinking about expressing the effects of relativity and the delay of perception due to the finite speed of light, and scaling it down to alter how we perceive bodies in motion. This idea doesn’t apply as directly to a specific human motion as of now, but I may work on incorporating a specific connection.
Proposal: Person in time
I’d like to make an app or piece of software that detects when you touch your face. The idea comes from a desire to train myself not to touch my face in light of the (speculatively) coming coronavirus plague. I would wear a hands-free chest mount for my phone, and potentially take a photo and play a sound every time my hands touch my face. These would ideally be compiled into a video of all such instances in real time.
I am thinking of using Unity to make a face-tracking app with ARKit among other tools – technical suggestions appreciated!
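Whatever tracker ends up supplying the face and hand positions (ARKit, or a vision library on the phone), the detection logic itself can be prototyped as plain geometry. A minimal sketch with hypothetical names (`Box`, `face_touch_events`), deliberately decoupled from any tracking API:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box in normalized image coordinates."""
    x0: float
    y0: float
    x1: float
    y1: float

def boxes_overlap(a, b):
    """Two boxes intersect iff they overlap on both axes."""
    return a.x0 < b.x1 and b.x0 < a.x1 and a.y0 < b.y1 and b.y0 < a.y1

def face_touch_events(face_boxes, hand_boxes):
    """Given per-frame face and hand boxes from any tracker (None when
    not detected), return the frame indices where a hand intersects
    the face; the app would snap a photo and play a sound at each."""
    return [i for i, (f, h) in enumerate(zip(face_boxes, hand_boxes))
            if f is not None and h is not None and boxes_overlap(f, h)]
```

Consecutive indices could then be grouped into clips for the compiled real-time video.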
ordering one of these:
Person In Time: Draft Ideas
- Use photogrammetry on home videos from the early 2000s to reconstruct the actions of a deceased family member, whose memory is fading from my mind.
- Do some sort of data visualization (tSNE/UMAP, a GAN, a searchable archive?) on 20K+ images downloaded from the Tumblr I used consistently from ages 13-19.
- Do something with my girlfriend’s cooking skills. Maybe I could film her through the heat camera while she sautés some crazy thing, or film the fermentation of kimchi for a timelapse…
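For the Tumblr-archive idea above, the embedding pipeline is the same regardless of the projection method. This sketch uses a two-component PCA (via numpy's SVD) purely as a stand-in for t-SNE or UMAP, which would slot into the same flatten-then-project shape; the function name is mine:

```python
import numpy as np

def embed_2d(images):
    """Project a stack of equal-sized images (each H x W) to 2-D
    points. A real version would swap the projection for t-SNE or
    UMAP (e.g. sklearn.manifold.TSNE), but the pipeline is the same:
    flatten, center, project, then scatter-plot or build a clickable
    archive from the resulting coordinates."""
    X = np.stack([im.ravel() for im in images]).astype(np.float64)
    X -= X.mean(axis=0)                        # center the data
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T                        # (N, 2) coordinates
```

With 20K+ images, running the projection on features from a pretrained network rather than raw pixels would likely give far more meaningful neighborhoods.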
Person in Time Project Ideas
1.) Body as Loop
I’ve been working on a zoetrope with 3D-printed lithophanes (3D prints thin enough to shine a light through so that you can see an image) for my Sculpture After the Internet class, and I would like to expand it for this second project. The way the motor is set up, I believe I can attach a centre point so that I could have multiple of these zoetropes running off the same motor, i.e. have lots of loops in sync. With this in mind, I’m pretty interested in the idea of the body as loop (the current iteration has me jumping as its framework), so I would like to have multiple zoetropes with multiple body loops (walk cycle, swinging arms, jumping, rolling on the floor, etc.) running at the same time.
2.) Love in the Motion Capture Century
Largely inspired by Ayako Kanda and Mayuka Hayashi’s x-ray portraits, I would like to create a series of either animations or still portraits using couples in the motion capture studio. I am particularly interested in how the system will figure out the space/blocked sensors between people, and whether or not they’ll turn into symbolic digital blobs under these conditions. [I also want to expand these into large-scale/human-scaled silk screen prints, given the time] // [Could also try to incorporate parallel DepthKit recordings and map the motion tracking to them? Not sure how to technically do this]
3.) Disney’s brain is in a jar
Using photogrammetry (and indeed its capability with, and need for, surfaces), I want to create a series of prints that will essentially make a pseudo-3D model of a human head. This would be silkscreens on a series of plastic sheets, building up a 3D form when you look through them. I am particularly interested in how it will negotiate the space inside the form, and in the dialogue over capturing the inside of bodies through technologies meant for the surface of things.
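Turning the scanned head into per-sheet artwork amounts to slicing it into depth bands. A minimal sketch, assuming the photogrammetry output can be treated as a point cloud (an N x 3 array); the function name is hypothetical:

```python
import numpy as np

def depth_layers(points, n_layers):
    """Slice a point cloud (N x 3 array, columns x/y/z) into n_layers
    equal bands along z. Each band would become one silkscreen on a
    plastic sheet; stacking the sheets rebuilds the pseudo-3D head."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_layers + 1)
    # Assign each point to the band its z value falls into.
    idx = np.clip(np.searchsorted(edges, z, side="right") - 1,
                  0, n_layers - 1)
    return [points[idx == i] for i in range(n_layers)]
```

Each returned band's x/y coordinates can then be rasterized into the stencil for that sheet; since photogrammetry only captures the surface, the interior bands would come out hollow, which is exactly the negotiation of inside space described above.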