MarthasCatMug – Final: Volumetric Video of Hair

For my final, I revisited my goal of capturing hair. I aimed to use photogrammetry in video form (also called “volumetric” or “4D” video) to try to capture moving hair. There were a lot of unknown factors going into the project, and that’s what attracted me. I didn’t know how I was going to obtain many cameras, how I could set up a rig, how I could run the capture process, or how I could process the many images taken by the many cameras into 3D models that could become frames. I wasn’t even sure whether I’d get a good, a bad, or an unintelligible result. Still, I wanted the chance to do a project that was genuinely experimental and about hair.

In preparation for proposing this project, I looked into the idea of hair movement, and what I found on that subject were mostly technical art papers on hair simulation (for example, this paper talks about obtaining hair motion data through clip-in hair extensions). Artistically, though, I found the pursuit of perfectly matching “real” hair through simulation a bit boring. I want the whimsy of photography and the “accuracy” of 3D models at the same time.

21 photos, 18 aligned, very big model, about 425k vertices

My process started with an exploration of the photogrammetry software Agisoft Metashape, which comes with a very useful 30-day free trial of the Standard edition. I experimented with taking pictures and videos to get the hang of the software. My goal here was to find the fewest photos (and therefore cameras) needed to create a cohesive model. It turns out that number is somewhere just below 20 for a little less than 360-degree coverage.

I was able to borrow 18 Google Pixel phones (all of which had 1/8-speed, 240 fps slow motion), 18 camera mounts, a very large LED light, several phone holders, a few clamps, and a bit of hardware from the Studio. I was then able to construct a hack-y photogrammetry setup.

Since the photogrammetry rig seemed pretty sound, the next step was to try using video. After filming a sample of hand movements, manually aligning the footage, and exporting each video as a folder of JPEGs, I followed the “4D processing” Agisoft write-up. This (no joke) took over 15 hours, and I didn’t even get to making the textures.
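One way to do the video-to-JPEG export is with ffmpeg. Here’s a minimal sketch of that step (folder names, file names, and the clip layout are placeholders, and it assumes ffmpeg is installed):

```python
# Dump each phone's clip into its own folder of numbered JPEGs.
# (Sketch only: paths and naming are placeholders; ffmpeg must be on PATH.)
import subprocess
from pathlib import Path

def video_to_frames(video_path, out_dir, quality=2):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", str(video_path),
         "-qscale:v", str(quality),        # JPEG quality, 2 = near-lossless
         str(out / "frame_%05d.jpg")],
        check=True,
    )

for i, clip in enumerate(sorted(Path("clips").glob("*.mp4"))):
    video_to_frames(clip, f"frames/camera_{i:02d}")
```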

manually synchronizing video
hand test, 720 frames (this was overkill)

Aligning the photos took a few minutes (I was very lucky with this); generating a sparse point cloud took a bit over an hour; generating the dense point cloud took four; and generating the mesh took over ten. I didn’t dare try to generate the texture at that point because I was running out of time. (A rough scripted sketch of these processing steps follows the list below.) I discovered here that I’d made a few mistakes:

  1. I forgot that the setup I made is geared towards an upright, centered object and not hands, so this test was not the best one to start with
  2. Auto focus :c
  3. Auto exposure adjustment :c
  4. Overlap should really be about 70% or more
  5. and “exclude stationary tie points” is an option that should only be checked when using a turntable
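For reference, here is roughly what the full align / dense cloud / mesh / texture pass looks like when scripted. This is only an illustration: it assumes the Metashape Professional Python API (the Standard edition I was using has no scripting), and exact method names vary between Metashape versions.

```python
# Illustration only: the same processing steps as above, scripted for the
# images belonging to one time step. Assumes Metashape Professional's Python
# module; method names differ a bit between versions.
from pathlib import Path
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
# one image per camera folder, all at the same frame index (placeholder layout)
chunk.addPhotos([str(p) for p in sorted(Path("frames").glob("camera_*/frame_00001.jpg"))])

chunk.matchPhotos()        # feature detection + matching
chunk.alignCameras()       # camera poses + sparse point cloud
chunk.buildDepthMaps()
chunk.buildDenseCloud()    # the ~4 hour step (buildPointCloud() in Metashape 2.x)
chunk.buildModel()         # the mesh step that took me 10+ hours
chunk.buildUV()
chunk.buildTexture()       # the step I never got to

doc.save("hand_test.psx")
```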

So, what next? Cry? Yes :C. But I also tried to wrangle the hair footage I had into at least a sliver of volumetric capture within the time remaining.

I think that in a more complete, working, long-form version, I’d like my project to live in Virtual Reality. Viewing 3D models on a screen is nice, but I think there is a fun quality of experience in navigating around virtual 3D objects. Also, I guess my project is all about digitization: taking information from the physical world and not really returning it.

MarthasCatMug – Final Proposal

For my final project I want to motion capture hair. My self-defined goal is to capture hair (for now, just my own) and movement in a genuinely experimental way. This is a combination of my other two projects and an attempt at pushing my experience with experimental capture. I don’t want to capture things with the utmost degree of realism, as I tried to with my hair clump scans, but I do want a sort of immediate visual recognition.

At the moment I’m hoping to do a sort of photogrammetry video / volumetric capture of my hair as I move or dance through a space and then key out anything that is not (for example) red in all of the textures. I’ve been looking into possibly using Agisoft Metashape for the photogrammetry and ffmpeg for processing the image textures. To be honest, I’m not sure exactly how the technical pipeline will work.
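As a very rough illustration of the keying idea (not a pipeline I’ve actually built), making every non-red pixel of a texture transparent could look something like this with Pillow and numpy; the thresholds, file names, and the definition of “red” are all placeholders:

```python
# Make every pixel that isn't "red enough" transparent in a texture image.
# (Thresholds, paths, and the red test are placeholders.)
import numpy as np
from PIL import Image

def keep_red(path_in, path_out, min_red=120, max_other=90):
    rgba = np.array(Image.open(path_in).convert("RGBA"))
    r, g, b = rgba[..., 0], rgba[..., 1], rgba[..., 2]
    is_red = (r > min_red) & (g < max_other) & (b < max_other)
    rgba[..., 3] = np.where(is_red, 255, 0)   # alpha 0 for everything that isn't red
    Image.fromarray(rgba).save(path_out)

keep_red("texture_frame_0001.png", "texture_frame_0001_keyed.png")
```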

– I’m not concerned with realism, hair simulation, etc., but I am a little concerned with, and interested in, how the subject of hair or movement may affect my exploration. I also don’t foresee being able to be super expansive.
– Alternatively, I want to stick pieces of reflective motion-capture tape onto my hair and use the IDeATe motion capture studio to get data.

Circles in Chinese Dance Moves

I’ve been wanting to integrate my background in dance into my art practice for a while now, and Person in Time was the perfect project to start with. Originally I was interested in exploring basic/fundamental movement because, to me, the “quiddity” of those movements felt super apparent in my own body while dancing. However, after actually doing the motion capture, I could see that the most visually interesting aspect of my recorded motion was not the minute movement details (they were certainly there, but not really readable). What was cool to see were the circles I was creating. Circular movements are a key component of many Chinese dance moves, including the ones I performed. Funnily, I’d actually forgotten that this was part of the movement theory I’d been taught many years ago.

Balls: “点翻身 (Diǎn fānshēn)” turns in place

Balls: “串翻身 (Chuàn fānshēn)” traveling turns

Balls: “涮腰 (Shuàn yāo)” bending at waist

Activity/Situation Captured: circular movements in Chinese Dance moves

Inspiration: this motion capture visualization video, which includes some Chinese traditional dance samples; a bunch of SCAD RenderMan animations that use mocap data (notably, these do not create meshes or objects); and Lucio Arese’s Motion Studies:

Process:

This project had a two-fold process: performing and interpreting. For the performing part I needed to research, decide, and rehearse my movements before arranging a motion capture session. I started with some residual knowledge from when I used to do Chinese classical dance but watching recordings helped me re-familiarise myself with the movements. I decided on four small sections of choreography: “turning (with arms) in place,” a traveling version of turning (with arms), a large leaning movement, and some subtler arm movements.

I was able to arrange a time with Justin Macey to get into the Wean 1 motion capture studio and record all of my dance movements. Justin sent me all of the motion capture data we’d captured in .fbx format, and I was able to import all of it directly into Blender. Inside Blender, I wrote and then ran a Python script to extract the location data of each of the bones inside the animated armatures into text files (many thanks to Ashley Kim for helping me here c: ).
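The script was along these lines; a minimal sketch run inside Blender, where the armature object name and output path are placeholders:

```python
# Run inside Blender: write the world-space head position of every bone,
# on every frame, to a plain text file. Names and paths are placeholders.
import bpy

arm = bpy.data.objects["Armature"]          # the imported mocap armature
scene = bpy.context.scene

with open("/tmp/bone_locations.txt", "w") as f:
    for frame in range(scene.frame_start, scene.frame_end + 1):
        scene.frame_set(frame)
        for bone in arm.pose.bones:
            loc = arm.matrix_world @ bone.head   # pose-space head -> world space
            f.write(f"{frame} {bone.name} {loc.x:.5f} {loc.y:.5f} {loc.z:.5f}\n")
```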

Mocap Data in .fbx in blender: 点翻身 (Diǎn fānshēn) “turn over”; 串翻身 (Chuàn fānshēn) “string turn over”; 涮腰 (Shuàn yāo) “waist”; 绕手腕 (Rào shǒuwàn) “around the wrist”

The next step I took was to write and run a script that turns every point into a sphere; this is initially where I stopped.
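The sphere script was in the same spirit as the sketch below (simplified; the point-file format, sampling step, and sphere radius are placeholders). Reusing one shared sphere mesh for every point helps keep Blender responsive with a few thousand objects.

```python
# Run inside Blender: place a small sphere at every Nth sampled point,
# reusing a single template mesh so the scene stays manageable.
import bpy

bpy.ops.mesh.primitive_ico_sphere_add(radius=0.01, subdivisions=1)
template = bpy.context.object               # template sphere; its mesh data is shared

points = []
with open("/tmp/bone_locations.txt") as f:  # placeholder path from the extraction script
    for line in f:
        frame, name, x, y, z = line.split()
        points.append((float(x), float(y), float(z)))

for i, co in enumerate(points[::10]):       # every 10th sample (placeholder step size)
    ball = bpy.data.objects.new(f"ball_{i}", template.data)
    ball.location = co
    bpy.context.collection.objects.link(ball)
```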

The interpreting process as a whole has been open-ended and inconclusive. I originally wanted to copy the SCAD RenderMan method (link: http://www.fundza.com/rfm/ri_mel/mocap/index.html) before realizing that, since it was RenderMan, the effects existed only in the renders. I’d also hoped to use Blender’s metaballs because they have a cool sculptural quality, but it was a challenge to give them separate materials (they also crashed my entire file once, so I gave up on them).

Even after figuring out a pipeline, it took a long time (for me) to do relatively little developmentally, and (though I also didn’t really optimize the mesh-creation process) each cloud of 500–3000 balls took around 30 minutes to generate and then finagle into an exportable .obj format. As for the motions’ readability, it was OK: I found the results to be somewhat cool as abstract forms (though I was already familiar with the movements and shapes at this point). I played around a lot with assigning different colors across frames or body parts, to varying visual success, I think (I was a bit burnt out by all the coding, so I did this by hand in Excel and with a gradient website).

On Nov. 1, during work time in class, while I was trying to figure out how to get Sketchfab to display correctly on WordPress, Golan took a look at the models I had so far and said I should visualize motion trails instead of just disparate points. He was able to very quickly create a Processing sketch that took every frame of data (I’d been using every 5th or 10th), made lines, and then turned those lines into an exportable mesh.
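Golan’s sketch was in Processing, but the core idea translates to a few lines of Python: connect each joint’s positions across consecutive frames and write the polylines out as an .obj made of line (“l”) elements. The sketch below is my paraphrase of that idea, not his actual code; file paths are placeholders.

```python
# Turn per-frame joint positions into motion trails and save them as an .obj
# made of "l" (line) elements, one polyline per joint.
from collections import defaultdict

trails = defaultdict(list)                  # joint name -> ordered list of (x, y, z)
with open("/tmp/bone_locations.txt") as f:  # placeholder path from the extraction script
    for line in f:
        frame, name, x, y, z = line.split()
        trails[name].append((float(x), float(y), float(z)))

with open("/tmp/trails.obj", "w") as out:
    v = 1                                   # .obj vertex indices start at 1
    for name, pts in trails.items():
        out.write(f"o trail_{name}\n")
        start = v
        for x, y, z in pts:
            out.write(f"v {x} {y} {z}\n")
            v += 1
        for i in range(start, v - 1):
            out.write(f"l {i} {i + 1}\n")   # segment between consecutive frames
```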

Example of an alternate way to see motion, with both points and lines (it did not screen-record well):

After exporting the .obj files, I took the models back into Blender and turned the lines into tubes before exporting the final models you can see below.
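The lines-to-tubes step in Blender boils down to a convert-to-curve plus a bevel, roughly like this (run with the imported trails object active; the radius values are placeholders):

```python
# Run inside Blender with the imported trails object active:
# convert its edges to curve splines, then sweep a circular profile along them.
import bpy

bpy.ops.object.convert(target='CURVE')      # mesh edges -> curve splines
curve = bpy.context.object
curve.data.bevel_depth = 0.005              # tube radius (placeholder value)
curve.data.bevel_resolution = 3             # roundness of the tube cross-section
```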

Lines: “点翻身 (Diǎn fānshēn)” turns in place

Lines: “串翻身 (Chuàn fānshēn)” traveling turns

Lines: “涮腰 (Shuàn yāo)” bending at waist

MarthasCatMug – Person in Time Proposal

I want to use motion capture of dance movements to create 3D models. The classical Chinese ballet and many folk/minority-group dances I learned when I was young share many basic movements. I’m currently rediscovering those components and am interested in capturing myself attempting these isolated movements in mo-cap. I then want to turn the recordings of those movements into still 3D sculptures. I even have a proper display method this time: Sketchfab!

I’m basing my ideas for movements on an instructional video I found online, but I think I will also consult my old dance teachers about the movements. As for the mo-cap, I’ve been told Justin Macey runs a motion capture studio in Wean and that I should contact him. There also seems to be a bit of documentation on how to turn motion capture data not into animation but into 3D sculpture. Mostly I’ve found student project documentation pages, but I am confident that I will be able to create models from motion capture points. The main hurdles I have are thankfully separate: capturing the motions and figuring out a workflow to create models.

Some MoCap Visualization Projects I’ve been looking at:

https://sdm.scad.edu/faculty/mkesson/vsfx705/wip/best/winter18/karlee_hanson/mocap-data.html

https://sdm.scad.edu/faculty/mkesson/vsfx705/wip/best/fall12/ziming_liu/parsing-mocap-data/index.html

MarthasCatMug – Two Cuts

I understand the reading to be saying that the cuts are separations between the result and the process of experimentation: the result referring to the sight of something, the quiddity, the theory; the process referring to learning and connection, what parameters of motion I create. Without either, I would just be capturing a person moving, without meaning.

For my Person in Time project, I want to capture expressive/theatrical movement through motion capture and, in formal forms of dance, a sort of full range of motion in myself. I’ve been thinking about Chinese classical dance and its fundamentals lately: how they were introduced to me really messily, but also how they are as rigid as Western ballet fundamentals (just less… square-seeming). The cuts, as they apply to my proposed project, would have the display of an animated skeleton as the result, and learning cultural dance movements as the process.

11 clumps of hair

For my typology machine, I sourced 11 clumps of hair from 11 people and scanned them at high resolution to create indirect portraits. I was really interested in how I could capture hair and retain the gross detail while keeping an element of readability. Hair differs in its inherent structural attributes: the color, thickness, bends, and lengths are all varied. But I think what’s interesting about these hair clumps also comes from their overall forms. We can see the result of personal human decisions in the forms each clump takes.

How people treat the hair they collect is something to look for within the forms of the clumps: whether it’s matted because they keep hair in their brush, whether there isn’t much because they throw most of it away, whether it came from a shower and is super clean and shiny because it carries fewer oils, or whether they collected it from their surroundings.

In general, I was interested in capturing a piece of the human body that sloughs off, but not in a super gruesome way. It was also important to me that the samples were not just pieces of hair that had been cut off; hair strands hold a lot of historical information about us, and having whole strands really mattered to me. I wanted to present many people’s hair clumps (and not just a couple of strands) together, because we become accustomed to dealing with our own no-longer-attached-to-the-scalp hair, but even then, it’s not usually examined so closely.

I was really inspired by Byron Kim’s Synecdoche, and also indirect portraiture as a general concept. I think limiting the field of view with which one can look at a subject is an interesting way to make observations.

I struggled quite a bit with the scanning process; trying to set up parameters that would work for all samples took a very long time. I was also unable to get the web-book format to work, though perhaps in the future, if I collect more people’s hair and have more time, it might be cool to pursue.

My process was to ask people for any hair they could spare and then scan it. Generally, it was taken from brushes, clothing, or their showers. I received the hair in bags or papers and scanned it at the highest resolution the flatbed scanners could manage (12800 dpi over 2″ squares), and again at an arbitrary but standard 5″ square (once more at the largest resolution possible, 4800 dpi) to capture the entire form of each clump. Each file was a TIF of over 1.5 GB. I cropped the highest-resolution scans down to 0.25″ squares and then upscaled the images by a factor of four for the final typology.
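In concrete terms, a 0.25″ square at 12800 dpi is 3200 × 3200 pixels, upscaled by four to 12800 pixels on a side. A scripted version of that crop-and-enlarge step might look like this with Pillow (a sketch; the file names and crop offsets are placeholders):

```python
# Crop a 0.25-inch square out of a 12800 dpi scan and upscale it 4x.
# (Paths and the crop position are placeholders.)
from PIL import Image

Image.MAX_IMAGE_PIXELS = None               # the scans exceed Pillow's default safety limit
scan = Image.open("hair_scan_12800dpi.tif")

side = int(0.25 * 12800)                    # 3200 px
crop = scan.crop((0, 0, side, side))        # top-left square; offsets are placeholders
big = crop.resize((side * 4, side * 4), Image.Resampling.LANCZOS)
big.save("hair_crop_4x.tif")
```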

For anyone interested, here is the link to the unprocessed scans

Thank you to Ashley Kim, Ashley Kim, Ella, Em, Jean, Leah, Pheht, Selina, Soomin, and Stacy for the hair. And thank you to Golan and Nica for assistance and guidance throughout this project.

MarthasCatMug – Typology Machine Proposal

Idea 1: textures of photogrammetry scans
Originally I wanted to see if I could reconstruct, by hand, the subject of a 3D scan from its UV map and texture, by printing on fabric or something similar. But there are other methods of messing with UVs and texture maps that aren’t so daunting, so I’ve done a bit of a test of drawing over textures, and I wonder if asking subjects to decorate their own textures might be a viable typology.

Idea 2: Hair
I’ve had a passive fascination with hair for a while, which has resulted in a bit of a collection. There’s a lot of Victorian hair art and writing, and its existence is a pretty interesting contrast to modern conceptions of not-attached-to-a-person hair as gross.

I’m interested in exploring the nature of hair as a creative medium with both my own and other peoples’ hair (or even just the images of a person’s hair) as what will be captured/typologised. I’ve made a sample of embroidered hair and so far I’ve mostly just found that it’s very difficult to work with.

MarthasCatMug – SEM of a drain fly

The object I brought was a bug that I found out is actually a drain fly. They are super fuzzy, and it was really cool to see that there were almost no surfaces, at any magnification, that did not have furry-looking structures.

One place that was not fuzzy was the eyes. Donna explained to me that each individual ball is an eye and that the surface irregularities were actually an artifact of the fly being dried out.

Here is an image of the wing. Because the fly was not flat, a lot of the images ended up having a very narrow depth of field.

You can see a bacterium on one of the fly’s feet! c:

MarthasCatMug – Reading 1

I really enjoyed Reading 1. I was not super familiar with the particulars of historic photographic processes, so learning about that and about the motivation of scientific documentation was pretty fascinating. Although it makes perfectly good sense, I was really struck by the reading’s mention that “standardisation was not one of photography’s strong points,” because I think I have always perceived the digital cameras that surround me as mass-produced, standard tools. I also find the dual “notion that photography was a malleable medium” while also being “indiscriminate” a really interesting pairing. This, I think, is a sentiment that still exists.

When it comes to imaging made possible through scientific means, the first thing I could think of was IBM Research’s atom-scale animation, “A Boy and his Atom” (2013), the smallest stop-motion film made to date. Not only were people able to capture images small enough to see individual atoms, but they were also able to exercise control over individual atoms and manipulate them into sequential images that became a full animation. I think the pairing of control with imaging is a super fascinating concept when it comes to capture.

Looking Outwards… at some dead bodies! (MarthasCatMug)

Frances Glessner Lee’s Nutshell Studies of Unexplained Death were dollhouse dioramas made in the first half of the 20th century, based on documentation from real crime scenes. Originally meant to train investigators in proper methods of forensic analysis, they were made as “pure objective recreations,” according to the Smithsonian.

I find the contrast between the meticulous craft of miniature-making and the dioramas’ role as educational tools interesting. The Nutshells’ origin in Lee’s subversive use of domestic skills (hosting dinners for investigators instead of high-society people), and how interested she was in, and ultimately accepted by, the male-dominated homicide and forensics fields, is also super fascinating.

She was inspired initially by George Magrath, a friend of her brother’s, and she’s inspired many subsequent artists such as Ilona Gaynor, Abigail Goldman and Randy Hage. In fact, just the other day I came across a murder miniature Instagram account called @theminiaturemurderhouse.

The Smithsonian’s article talks about how part of the draw of these dioramas, the narrative, also ushers in perhaps problematic aspects of subjectivity: “she makes certain assumptions about taste and lifestyle of low-income families.” I also find the goal of some singular or narrative “ultimate truth” when it comes to emulating reality a bit awkward. I can understand, though, that at that point in history, Lee’s dioramas of murders contributed to real and important developments in forensic analysis.

Links:

https://www.smithsonianmag.com/arts-culture/murder-miniature-nutshell-studies-unexplained-death-180949943/

https://americanart.si.edu/exhibitions/nutshells (contains VR scenes)