Policarpo-final

360 Ings (April 2020)

A 360° video performance of hectic and frisky actions and gestures overlapped in the domestic space.

The stay-at-home order has enabled the transfer of our outdoor activities and operations into the domestic space. What used to happen outside our walls now repeats endlessly in a continuous performance that contains each of our movements.

Returning home indefinitely produces an unsettling feeling of strangeness. For some people, this return will feel like it does for the man in Chantal Akerman’s ‘Le déménagement,’ who stands bewildered by the asymmetry of his apartment. For me, it represents an infinite loop of actions that kept me in motion from one place to another: a perpetual suite of small happenings, similar to the piano pieces that the composer Henry Cowell titled ‘Nine Ings’ in the early 20th century, all gerunds:

Floating
Frisking
Fleeting
Scooting
Wafting
Seething
Whisking
Sneaking
Swaying

360 Still image

In ‘360 Ings,’ those small gestures and their variations are recorded across the 360 degrees that a spherical camera can capture, isolated and cropped automatically with a machine learning algorithm, and edited together to multiply the self of enclosed living.

A 360° interactive version is available on YouTube.

Recording

I used the Xiaomi Mi Sphere 360 Camera, which records 3.5K video, so the video files were expected to be quite large. For that reason, I had to keep the performance under 5 minutes (approx. 4,500 frames at 15 fps). Planning time and space was fundamental.

The footage is then remapped and stitched into a panoramic image using the software provided by the manufacturer.
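
The stitching happens inside the vendor software, but for intuition, here is a minimal sketch of the underlying remap. It assumes an idealized side-by-side dual-fisheye frame with equidistant lenses; the real camera needs calibrated lens parameters, and the 190° FOV and layout below are my guesses, not the Mi Sphere's actual values.

```python
import cv2
import numpy as np

def dual_fisheye_to_equirect(frame, out_w=3456, out_h=1728, fov_deg=190.0):
    """Remap a side-by-side dual-fisheye frame to an equirectangular panorama."""
    in_h, in_w = frame.shape[:2]
    eye_w = in_w // 2                                  # each lens fills half the frame
    # Longitude/latitude for every output pixel.
    lon = (np.arange(out_w) / out_w - 0.5) * 2 * np.pi
    lat = (0.5 - np.arange(out_h) / out_h) * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Unit viewing direction per pixel.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    front = z >= 0                                     # front lens looks along +z
    theta = np.arccos(np.clip(np.where(front, z, -z), -1, 1))
    # Equidistant lens model: image radius grows linearly with theta.
    r = theta / (np.radians(fov_deg) / 2) * (eye_w / 2)
    phi = np.arctan2(y, np.where(front, x, -x))        # back lens is mirrored in x
    cx = np.where(front, eye_w / 2, eye_w * 1.5)       # fisheye circle centers
    map_x = (cx + r * np.cos(phi)).astype(np.float32)
    map_y = (in_h / 2 + r * np.sin(phi)).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
```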

Background subtraction

After the recording process, I used a machine learning method to subtract the background of an image and produce an alpha matte with crisp but smooth edges, in order to rotoscope myself in every frame. Trying many of these algorithms was essential; this website provides a good collection of papers and contributions to the technique. Initially, I tried Mask R-CNN and Detectron2 to segment the image wherever a person was detected, but they did not return adequate results.

Mask R-CNN segmentation results
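
For reference, a hedged sketch of this kind of person segmentation using torchvision's pretrained Mask R-CNN; it is an approximation, not the exact Detectron2 configuration I ran, and the frame filenames are placeholders.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()

img = Image.open("frame_0001.png").convert("RGB")     # placeholder frame name
with torch.no_grad():
    pred = model([to_tensor(img)])[0]

# Keep confident detections classified as 'person' (COCO category 1).
keep = (pred["labels"] == 1) & (pred["scores"] > 0.8)
masks = pred["masks"][keep]                           # (N, 1, H, W) soft masks
if len(masks):
    alpha = masks.max(dim=0).values.squeeze(0)        # merge instances into one matte
    Image.fromarray((alpha.numpy() * 255).astype("uint8")).save("matte_0001.png")
```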

Deep Labelling for Semantic Image Segmentation (DeepLabV3) produces good results, but the edges of the object were not clean enough to rotoscope me in every frame. A second algorithm was needed to succeed at this task, and this is exactly what a group of researchers at the University of Washington are currently working on. ‘Background Matting’ uses a deep network with an adversarial loss to predict a high-quality matte by judging the quality of the composite against a new background (in my case, that background was my empty living room). The method runs on PyTorch and TensorFlow with CUDA 10.0. For instructions on how to use the code, see the GitHub repository (the authors are very helpful and quick to respond).

Rotoscoping process
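
The DeepLabV3 stage can be approximated with torchvision's pretrained model; here is a minimal sketch (the Background Matting repo wraps its own segmentation, so treat this only as an illustration of the coarse person mask it starts from).

```python
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.segmentation.deeplabv3_resnet101(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("frame_0001.png").convert("RGB")      # placeholder frame name
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"][0]
seg = out.argmax(0)                                    # per-pixel class ids
person_mask = (seg == 15)                              # 15 = 'person' in the VOC label set
```
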
Editing

With the new composite, I edited every single action in time and space using Premiere. The paths of my movements often cross, so I had to cut and reorganize single actions into multiple layers.

Premiere sequence
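
Premiere handles the layering visually, but the math behind stacking the matted clips is just the standard alpha ‘over’ operator; here is a toy sketch with placeholder file names.

```python
import cv2
import numpy as np

room = cv2.imread("living_room.png").astype(np.float32)    # clean background plate
comp = room.copy()
for name in ["floating", "frisking", "sneaking"]:          # one matted clip per action
    fg = cv2.imread(f"{name}_rgb.png").astype(np.float32)
    a = cv2.imread(f"{name}_matte.png", cv2.IMREAD_GRAYSCALE)
    a = (a.astype(np.float32) / 255.0)[..., None]          # alpha in [0, 1]
    comp = fg * a + comp * (1.0 - a)                       # the 'over' operator
cv2.imwrite("composite.png", comp.astype(np.uint8))
```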

Finally, I produced a flat version of the video, controlling the camera properties and the transition effects between different points of view with Insta360 Studio (Vimeo version).
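
Insta360 Studio does this reframing interactively; conceptually, it samples a pinhole-camera view out of the equirectangular frame. A hedged sketch of that projection (the yaw, pitch, and FOV values are arbitrary examples, not the settings I used):

```python
import cv2
import numpy as np

def reframe(equi, yaw=0.0, pitch=0.0, fov=90.0, out_w=1920, out_h=1080):
    """Sample a flat pinhole view (angles in degrees) from an equirectangular frame."""
    eh, ew = equi.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov) / 2)        # pinhole focal length
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2,
                         np.arange(out_h) - out_h / 2)
    rays = np.stack([xs, -ys, np.full_like(xs, f, dtype=np.float64)], -1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    p, q = np.radians(pitch), np.radians(yaw)            # rotate the view direction
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(q), 0, np.sin(q)], [0, 1, 0], [-np.sin(q), 0, np.cos(q)]])
    rays = rays @ (Ry @ Rx).T
    lon = np.arctan2(rays[..., 0], rays[..., 2])
    lat = np.arcsin(rays[..., 1])
    map_x = ((lon / np.pi + 1) / 2 * ew).astype(np.float32)
    map_y = ((0.5 - lat / np.pi) * eh).astype(np.float32)
    return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR, borderMode=cv2.BORDER_WRAP)
```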

Spaces Of Quarantine – Project Info

The Project:

The goal of this project is to create a tiled map of our spaces of quarantine using photogrammetry and high-resolution 3D renderings. A ‘space of quarantine’ is a four-walled area where you’ve spent a ton of time during this crisis. This instructional page will walk you through the steps to generate and share a model of your space. If you have any questions at all, feel free to reach out to me via email or any other means.

Examples:

Photogrammetric scan of my space of quarantine:

High resolution rendering of my space from above:


What to do:

  1. Download and get to know the display.land app
    1. Download app
    2. Watch Intro Video
  2. Do some test scans. Below are some key pointers for photogrammetry:
    1. Try to prevent shifts in light, like shadows and fast-moving light sources
    2. Do many sweeps of an object
    3. Move the camera around an object or over a surface — we want to get as many angles as possible on stuff
    4. Try to capture texture — the algorithm needs to be able to differentiate between points in the image; texture and variation in surface help
    5. Avoid reflective surfaces — these really confuse things
    6. Be patient! It can take between 45 minutes and 5 hours to process a model
  3. Scan your space of quarantine – this is the specific four-walled space where you feel you’ve been spending a ton of time during this crisis.
  4. Share the model (a sketch of how a shared model might be previewed follows this list). Below are instructions on how to do this in the app:
    1. Hit share
    2. Press share 3d mesh
    3. Choose OBJ format
    4. Share via email with me (dbperry@andrew.cmu.edu)
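
Not part of the submission steps above, but for the curious: a hedged sketch of how a shared OBJ might be previewed from above on the receiving end, using trimesh. The filename is a placeholder, and rendering requires a GL-capable backend such as pyglet.

```python
import numpy as np
import trimesh

scene = trimesh.load("my_space.obj")                  # placeholder exported scan
if isinstance(scene, trimesh.Trimesh):
    scene = trimesh.Scene(scene)
# Park the camera above the model's center, looking straight down.
eye = scene.centroid + np.array([0, 0, scene.scale * 1.5])
scene.camera_transform = trimesh.transformations.translation_matrix(eye)
png = scene.save_image(resolution=(2048, 2048))       # needs a GL context
with open("top_down.png", "wb") as f:
    f.write(png)
```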


Cameras on Tools

I was inspired by some of the moving camera projects, especially the one with the camera on the shovel. It’s such a simple concept that still provided an interesting (and enjoyable) perspective on a common tool. This got me thinking of other tools/devices I’d like to mount a camera to:

  • A hammer – I think it would be really delightful to repeatedly slam the viewer down towards a surface. As long as one could effectively stabilize the camera.
  • A pottery wheel – I thought it would be fun to watch a lump of clay sitting still in front of you be transformed by a rapidly spinning hand. And apparently, I’m not the only one who thinks so, because this one has been done by Eric Landon. It’s a pretty neat video!
  • A paintbrush – This idea kind of reminds me of the interactive art project from last spring where you could draw with a stylus and “ride” it in real time with a VR headset. That really let you imagine yourself as one with your own writing instrument. But I think it could be differently informative to look down the handle of a paintbrush with a regular camera. You could watch the bristles as they are dunked into paint and rubbed across the paper in different directions. I think that moving camera would be interesting to apply to this subject because we are so used to thinking of painting/drawing as a person moving their tool across a surface. If the surface is the thing moving, would people watching be able to identify what is being painted? It could also help some people appreciate the differences in brushes and painting techniques more, by highlighting how some bristles are more flexible or more absorbent. Maybe I just still have typologies on the brain, but a set of recordings with different brushes and different paints or papers could be very interesting.

Bedroom Timelapse

Looking at the timelapse examples got me thinking about the messes in my bedroom. Certain messes, like dirty laundry, accumulate for a couple of days and then disappear when I do laundry. Other messes, like the stuff on my desk, just accumulate. Still other messes don’t change at all. Taking a panorama (or another form of 360-degree or wide-angle photo) of my room every day might reveal how these piles of things change over time.
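
A minimal sketch of that capture loop, assuming a fixed webcam pointed at the room (the camera index and file naming are placeholders):

```python
import time
from datetime import date
import cv2

while True:
    cap = cv2.VideoCapture(0)                 # reopen daily so the frame is fresh
    ok, frame = cap.read()
    cap.release()
    if ok:
        cv2.imwrite(f"room_{date.today().isoformat()}.jpg", frame)
    time.sleep(24 * 60 * 60)                  # wake up once a day
```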

‘Fixed’ Lift

I am very interested in how Dirk Doy’s ‘Fixed’ series is able to relocate the subject of a video narrative by centering the location of a moving object in the scene. Following his method, I thought it would be interesting to track the position of moving elements along the facade of a building, such as construction cranes, window-washing climbers, or furniture lifts. This new perspective could create the impression of the whole city being pushed upwards. Here, I made a short clip based on a YouTube video:
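
For reference, a hedged sketch of the underlying trick: track the moving element with an off-the-shelf OpenCV tracker and translate every frame so it stays centered, so the city appears to move instead. This needs opencv-contrib, and the video path and hand-drawn ROI are placeholders.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("furniture_lift.mp4")                # placeholder clip
ok, frame = cap.read()
h, w = frame.shape[:2]
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, cv2.selectROI("pick the lift", frame))  # draw a box once
cv2.destroyAllWindows()

out = cv2.VideoWriter("fixed_lift.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
while ok:
    found, (x, y, bw, bh) = tracker.update(frame)
    if found:
        # Shift the frame so the tracked box center lands at the image center.
        dx, dy = w / 2 - (x + bw / 2), h / 2 - (y + bh / 2)
        frame = cv2.warpAffine(frame, np.float32([[1, 0, dx], [0, 1, dy]]), (w, h))
    out.write(frame)
    ok, frame = cap.read()
out.release()
```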

pixilation(ish) of a dog

I was captivated by the music video from Oren Lavie, who used pixilation to create a beautiful animation. I think it would be interesting to apply Piero Glina and Martin Borst’s CMYK layered photography to pixilation, creating an animation that overdramatizes movement but still has a stop-motion feel. The process would be to create something almost like a time-lapse of the subject, but merge every four frames into a single frame in the CMYK layered style. I think it would be interesting to apply this to dogs having to do tricks before getting treats. Their movement would be exaggerated, and it might enhance some of their mood and emotions.
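
A toy sketch of the four-frames-into-one idea, my own guess at the layered look rather than Glina and Borst’s actual process: each of four consecutive grayscale frames is printed as one ink layer (cyan, magenta, yellow, black) on a white ‘page’. The clip name is a placeholder.

```python
import cv2
import numpy as np

def cmyk_merge(frames):
    """Stack four grayscale frames as cyan, magenta, yellow, black ink layers."""
    inks = np.float32([[255, 255, 0], [255, 0, 255], [0, 255, 255], [0, 0, 0]])  # BGR
    comp = np.full((*frames[0].shape, 3), 255, np.float32)       # white 'paper'
    for gray, ink in zip(frames, inks):
        coverage = 1 - gray.astype(np.float32)[..., None] / 255  # dark pixels get ink
        layer = (1 - coverage) * 255 + coverage * ink
        comp *= layer / 255                                      # subtractive stacking
    return comp.astype(np.uint8)

cap = cv2.VideoCapture("dog_tricks.mp4")       # placeholder clip
group, output = [], []
ok, frame = cap.read()
while ok:
    group.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if len(group) == 4:                        # four source frames -> one output frame
        output.append(cmyk_merge(group))
        group = []
    ok, frame = cap.read()
```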

Pollinators: temporal capture idea

Inspired by David Rokeby’s Plot Against Time series, I thought it would be interesting to apply this “long exposure” (or what looks like video processing) idea towards the subject of pollinators in my community farm’s herb garden (Garfield Community Farm).

In particular, I’m interested in a (slightly) more micro scale than Rokeby’s typical camera position. One photographer’s work, David Liittschwager’s “biocubes,” provides a compelling new form factor for studying the motion of pollinators in one space. This matters because the diversity of pollinators in an herb garden is critical to that garden’s health. An abundance of pollinators (bees, flies, other insects) can be best appreciated up close; a ton of movement and activity happens in the herb garden in a very small space.

Slightly closer than this framing would be an optimal scale for observing pollinators in the garden. The pollinator activity in this beebalm patch would be off the charts!

Liittschwager’s study of life in one cubic foot, coupled with Rokeby’s video processing for movement, could provide an interesting look into the pollinators of Garfield Community Farm’s herb garden.
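
One simple way to fake that long-exposure look in software is to average every frame of a clip, so anything that moves (bees, flies) smears into trails while the plants stay sharp; a minimal sketch with a placeholder filename:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("herb_garden.mp4")      # placeholder pollinator footage
acc, n = None, 0
ok, frame = cap.read()
while ok:
    f = frame.astype(np.float32)
    acc = f if acc is None else acc + f        # running sum of all frames
    n += 1
    ok, frame = cap.read()
cv2.imwrite("long_exposure.png", (acc / n).astype(np.uint8))
```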


Olivia’s Temporal Capture Idea

As I watched the lecture and scanned through the lecture pages, I kept thinking about how these methods could be used right now. My brain, like probably most people’s, is preoccupied by the emotions of this moment. With the capture methods of timelapse, slow motion, and photogrammetry, I kept thinking about how they could be used to capture the emotions, thoughts, and actions of people going through this point in history. I do not know what it would reveal, but the layers of emotion many people seem to be wearing right now are interesting, and capturing them could possibly reveal how individuals are processing and coping.

My main idea is to make a music video that follows one or multiple people as they go about their normal day-to-day activities, but then something is done to reveal the other layers of emotion that exist: either slowing down on elements of their face that hint at other emotions, or using photogrammetry to copy their head and then exploring what is happening inside using Unity.

Temporal Capture Idea – Flickering Human

I’ve known about videos like this Tetris video, but never knew that these kinds of videos were called pixilations. Another cool video I stumbled upon recently was this Gatorade ad, where they were able to trigger falling water droplets to create the shape of a human in a running pose, and stitch these poses together to make a human walk cycle.

I think it would be cool to have a capture system that stitches together frames of people moving left to right in a walk cycle, but where each frame switches to a taller person. The person could be walking in a three-quarter (3Q) direction away from the camera, so that they appear to get smaller; yet since the capture system would switch to frames of taller people, their height would stay relatively constant throughout.

Above is a sketch of the mechanism: each person would have a walk-cycle video of them stepping on the same footholds (for alignment purposes), and in post-processing, a frame from each person can be taken to create the walk cycle.
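
A minimal sketch of that frame-switching step, assuming the clips are already aligned on the footholds and ordered shortest to tallest (the filenames and frame rate are placeholders):

```python
import cv2

# Aligned walk-cycle clips, ordered shortest to tallest.
paths = ["walk_160cm.mp4", "walk_175cm.mp4", "walk_190cm.mp4"]
clips = [cv2.VideoCapture(p) for p in paths]
writer, i = None, 0
while True:
    cap = clips[i % len(clips)]                  # switch person every frame
    cap.set(cv2.CAP_PROP_POS_FRAMES, i)          # same instant in their walk cycle
    ok, frame = cap.read()
    if not ok:
        break
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("flicker_walk.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), 15, (w, h))
    writer.write(frame)
    i += 1
writer.release()
```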

Temporal Capture Idea

With a lot of these examples, we’re able to understand, somewhat, what we’re looking at. I found it interesting that with the drill cam, one won’t really have any idea what they’re looking at unless they see the footage from right before the camera started spinning. Since viewers are limited to using color alone to identify the scene, I was curious what would happen if I asked people to identify a scene based on what they thought its color “looked” like.

My next thought was wondering: if I took a series of videos of different living communities, would anyone be able to pinpoint which was which? For example, a gated community in an affluent neighborhood versus a run-down apartment block in the ghetto. My guess is the former would be lusher and greener while the latter would have more muted tones. I realize this isn’t a very realistic project, but I think it could yield some surprising results about what we think communities look like.
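
A quick sketch of how that color comparison could be quantified: reduce each neighborhood clip to its mean color so the palettes can be compared directly (the file paths are placeholders):

```python
import cv2
import numpy as np

for path in ["gated_community.mp4", "apartment_block.mp4"]:
    cap = cv2.VideoCapture(path)
    means = []
    ok, frame = cap.read()
    while ok:
        means.append(frame.reshape(-1, 3).mean(axis=0))   # mean B, G, R per frame
        ok, frame = cap.read()
    print(path, np.mean(means, axis=0))                   # clip-level average color
```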