Work in Progress

Using corporate knowledge transfer methods to make a portrait of my grandfather

Background: (in all seriousness) I have one grandparent left, and we’ve been getting closer over the last few years. He expresses a sort of regret over not having been an active part of my childhood (thanks to living on opposite ends of the country). I want to help close this gap, find a way to capture his wisdom, and consider the knowledge-based “heirlooms” he might be passing down.

What do we do in the face of this problem? We turn to the experts. When you google “intergenerational knowledge transfer” or even “family knowledge transfer,” all you get is corporate knowledge transfer methods. What happens if I apply these methods to my family as if it were an organization? Going down this rabbit hole makes for a super weird and fun project.

 

Intro: 

Method derived from: Ermine, Jean-Louis (2010). “Knowledge Crash and Knowledge Management.” International Journal of Knowledge and Systems Science (IJKSS), 1. doi:10.4018/jkss.2010100105.

  • “Inter-generational knowledge transfer is a recent problem which is closely linked to the massive number of retirements expected in the next few years.”
  • “This phenomenon has never occurred before: this is the first time in the history of mankind that ageing is growing like this, and, according to the UN, the process seems to be irreversible.”
  • “According to the OECD’s studies, this will pose a great threat to the prosperity and the competitiveness of countries.”
  • “[We can tackle this] inter-generational knowledge transfer problem with Knowledge Management (KM), a global approach for managing a knowledge capital, which will allow a risk management in a reasonable, coherent, and efficient way.”
  • “We propose a global methodology, starting from the highest level in the organization” → for me, this means the patriarch of the family.

 

The Method: 3 phases

  1. Strategic analysis of the Knowledge Capital.
    1. First, identify the Knowledge Capital. Last week, I called my grandpa and gathered a list of every single thing he had done that day.

      I also supplemented this list with some more characteristics, gathered through a call with my sister.
    2. Next, perform an “audit” to identify the most critical knowledge domains, using the following table:
      every domain gets a score, and those with the highest scores are the most important.

       

  2. Capitalization of the Knowledge Capital. Now that I know the most important task(s), convert the tacit knowledge of how to do it (them) into explicit knowledge. Or, in other words,

    “collect this important knowledge in an explicit form to obtain a ‘knowledge corpus’ that is structured and tangible, which shall be the essential resource of any knowledge transfer device. This is called ‘capitalisation’, as it puts a part of the Knowledge Capital, which was up to now invisible, into a tangible form.”

    As a fun aside, the last line there is essentially the corporate-speak definition of “experimental capture.” 🙂 To do this, I’ll be drawing on interview techniques from Rachel Strickland’s Portable Portraits, as well as some “Knowledge Modeling” techniques from the corporate article, such as the “phenomena model,” “concept model,” and “history model.”

    • Phenomena model: describe the events that need to be controlled/known/triggered/moderated to complete the task
    • Concept model: mental maps – may ask him to draw a diagram of how he does a task
    • History model: learn more about the “evolution of knowledge.” Ask why – hear the story behind a particular artifact, etc.
    • TL;DR – the capture technique is a mixture of video, drawing, and interview, depending on the task (and whether or not video creates too many technical difficulties for a 90-year-old :))

     

  3. Transfer of the Knowledge Capital. This is all about how the Knowledge Corpus is disseminated. This usually takes the form of a “Knowledge Book” (or a “Knowledge Portal” if it’s online). Furthermore, to ensure successful transfer, it’s often good practice for the recipient to do something actionable with the Knowledge Corpus.
    • Existential Question: since I am literally a body of genetic transfer… am I the recipient, or am I the actual Knowledge Corpus itself?
    • Either way, I plan to take action by recreating (and recording) the task(s) myself, relying on my own memory as a source of tacit knowledge. If I am the “recipient” of the book/portal, this will be my way of proving successful transfer via action. If I am the corpus itself, this will be a final capture method to convert tacit knowledge into explicit knowledge and create the book/portal.

 

Ta-da! I have created a kind of fucked up heirloom.

Final Project Update

Results as of last Thursday.

Due to my frustration with my output in this class, art in general, and life, I am definitely behind schedule with my final project. Ultimately, I’ve decided to switch directions and work on something that may be less interesting as a capture method, but is more meaningful to me personally (and will hopefully be more meaningful to the viewer as well).

Over the past couple weeks, I had my roommate Katy pose several times for Tae Kwon Do photogrammetry. We tried it with 5 people (the entire isolation pod) taking photos from different angles, but this led to background noise that reduced Katy herself to a sad blob. This is where I was at last Thursday, from which point I’ve worked on three further plans.

Plan A: A few days later we tried again with two photographers, which worked a lot better but was still not perfect. The model of Katy’s body lacked detail, and the nunchucks she was holding never rendered correctly. Even using Metashape on her gaming GPU with 16GB of RAM, there was not enough memory to attempt a higher-quality mesh. I’m considering attempting to model in the nunchucks myself using Maya.

The results of Plan A. Katy has all her body parts but her nunchucks have mysteriously disappeared.

Plan B: I tried extracting frames from a 20-second video of Katy holding a kick (I had to learn FFMPEG to do this, and command-line tools are scary), which output over a thousand high-quality images. Metashape thought long and hard about these, but it cut off Katy’s head. Overall, this process has been frustrating because I’ve made decent photogrammetry of my own face before, so I don’t know what I’m doing wrong this time. Maybe the lighting was optimal in the STUDIO but not here, or maybe Katy was shifting her balance slightly, which can’t really be helped.
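For the record, the frame extraction looked roughly like this (a minimal sketch; I ran ffmpeg from the command line, and the file names here are placeholders):

    import subprocess
    from pathlib import Path

    # Placeholder names; the real clip and output folder were different.
    video = "katy_kick.mp4"
    out_dir = Path("frames")
    out_dir.mkdir(exist_ok=True)

    # Dump every frame of the 20-second clip as a high-quality JPEG.
    # -qscale:v 2 keeps JPEG compression low; frame_%05d.jpg numbers the frames.
    subprocess.run([
        "ffmpeg", "-i", video,
        "-qscale:v", "2",
        str(out_dir / "frame_%05d.jpg"),
    ], check=True)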

The results of Plan B. Where did her head go?

Plan C: The content I am interested in working with is my old family videos and photos. I want to create a virtual space representing what my little brother’s room would have looked like in early 2008. I’ve been experimenting with the Colab notebook for 3D photo inpainting, which produces awesome images, but the actual 3D objects it produces are fairly useless height maps. Therefore, the space will incorporate two techniques. First, I will ask my parents to take photogrammetry-style pictures of my brother’s old stuffed animal dog. No matter how fucked up the model is, I will use it. In fact, it’ll be better if it’s a complete mess. I’ll also incorporate illustration by attempting to redraw the rug he had, which had train track graphics on it, and the rest of his stuffed animals. Overall, I hope this combination of 3D and 2D elements will make for an interesting virtual space.

Illustrated rendition of the results of Plan B.

 

Project Update

Done

  • Added OpenCV to find / smooth contours (see the sketch after this list)
  • Updated ml5.js to use the most recent BodyPix model from TensorFlow (the new model has pose estimation built in and more customization options)
  • Played around with video size / queuing results to speed performance
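The contour step works roughly like the following. This is a minimal sketch of the same idea in Python/OpenCV (the project itself runs in JavaScript alongside ml5.js, so the file name and parameters here are only illustrative):

    import cv2

    # Assume a BodyPix-style person mask: a grayscale image where
    # white = person and black = background (placeholder file name).
    mask = cv2.imread("bodypix_mask.png", cv2.IMREAD_GRAYSCALE)

    # A light blur plus re-threshold smooths the jagged edges of the raw mask.
    blurred = cv2.GaussianBlur(mask, (9, 9), 0)
    _, clean = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)

    # Find the outer contours, then simplify them so the outline is less noisy.
    contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    smoothed = [cv2.approxPolyDP(c, 2.0, True) for c in contours]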

Left to do

  • create interface that prompts people to respond
  • save / play responses

 

Final Project Update

Making slow but steady progress with my project. I’ve roughed in the body tracking system so that feedback created by movement reveals another image on top of the background.

TouchDesigner Network

Next steps:

  • Refining edges of the feedback system with the body tracking
  • Building a playback system to pipe in IP cams
  • Calibrating projector output with Kinect input.

FP Update

map test 4.20

I have continued to refine my mapmaking process and am pleased with the 2D results: everything now runs very quickly with a larger dataset, more like what I will have in performance.

However, I have been trying to think of a more easily interactive way to view the content that is created during the show. Currently, I am making a WordPress page for the performance that populates the images into a 3D spinning globe plugin. While this plugin is a good starting point, I am trying to get inside its three.js components in order to customize it further to my visual needs: repeated images, more images in the grid pattern of the globe, and having it rotate on its own (at least as initial ideas). Below is a work-in-progress video of the 3D gallery element.

Final Project Progress Update

My final project, as it stands now, will be composed of multiple animated models of people’s spaces of quarantine. At this point I’ve experimented with a photogrammetry model of my own space, and am hoping to start collecting models from willing participants within the next few days. This project is giving me a chance to experiment with animating and manipulating photogrammetric models in Blender.

 

 

Here is a test of a high-resolution image viewer that I found:

 

Some other options for this kind of hosting are Sirv, OpenSeadragon, and Zoomify, among others.

End of Semester Progress

I am mostly on track with my project. I managed to make a Colab notebook based on Detectron2’s tutorial notebook which has the following (a rough sketch follows the list):

  • A widget text box that takes in a URL
  • Uses youtube-dl to download the video from the URL
  • Uses Detectron2 on (for now) one of the frames
  • Outputs the mask based on the images (as a NumPy array which can be used with OpenCV)
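Condensed, the notebook boils down to something like this sketch. It follows the Detectron2 tutorial; the model config is the standard Mask R-CNN one from the tutorial, and the file names are placeholders (in the real notebook the download runs in a later cell, after a URL has been typed into the widget):

    import subprocess
    import cv2
    import ipywidgets as widgets
    from detectron2 import model_zoo
    from detectron2.engine import DefaultPredictor
    from detectron2.config import get_cfg

    # Widget text box that takes in a URL.
    url_box = widgets.Text(description="Video URL")

    # Download the video with youtube-dl (placeholder output name).
    subprocess.run(["youtube-dl", "-o", "video.mp4", url_box.value], check=True)

    # Pretrained instance-segmentation model, set up as in the tutorial notebook.
    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file(
        "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
        "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
    predictor = DefaultPredictor(cfg)

    # Run Detectron2 on (for now) a single extracted frame.
    frame = cv2.imread("frame_0001.jpg")
    outputs = predictor(frame)

    # Instance masks as a (num_instances, H, W) boolean NumPy array for OpenCV.
    masks = outputs["instances"].pred_masks.to("cpu").numpy()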

What I need to do next is:

  • Differentiate what segmentation info I care about rather than all info at once
  • Use the mask to segment my photo (roughly sketched after this list)
  • Collage the output photos
  • Create logic to protect against bad input
  • Create a front end interface
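The mask-to-segmentation step will probably look something like this minimal sketch, assuming `masks` and `frame` come from the notebook above (the collage logic itself is still undecided, and the file names are placeholders):

    import cv2
    import numpy as np

    frame = cv2.imread("frame_0001.jpg")
    masks = np.load("pred_masks.npy")  # (num_instances, H, W) booleans, saved earlier

    # Cut out one instance: keep pixels inside the mask, black everywhere else.
    person = masks[0].astype(np.uint8) * 255
    cutout = cv2.bitwise_and(frame, frame, mask=person)
    cv2.imwrite("cutout_0001.png", cutout)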

I am still deciding on what I want the style of the output collages to be.

 

Final Project Update

For my final project, I’m going to be making a short 360 video set in my grandparents’ home of 50 years, featuring mostly still photos (no animation) and my grandparents’ voices. Right off the bat, I’ll say: I am behind where I would like to be, because it took me so long to decide on a narrow topic. I still think I can finish it, but without as much time to polish as I’d like.

I decided to focus on stories from Christmastime, specifically of adapting family traditions for unusual circumstances. There are a few reasons for this. First, a hugely disproportionate number of our family photos and photos in that house are of Christmas celebrations. So I can really fill the space with these images, from all angles. Second, it’s a theme we have multiple interesting stories for, and one that feels very relevant to our current situation. It may not be Christmas, but families are adapting their rituals and traditions for the quarantine, often in ways that resemble the stories I’ve collected.

Once I landed on this theme, I set several things in motion that are currently still underway: I told my grandparents about this topic and arranged a phone call with them in which I will record their voices for the project, and I also reached out to my mom and aunt (who are in possession of all our old physical and digital photo albums) for ALL the Christmas photos they have. I should be getting those late this week, and doing the editing early next week.

In the meantime, enjoy the following images of my family at Christmas, and recordings of my grandparents telling some holiday stories (recorded last semester for a different project, but I may just end up using them because the quality is pretty good).

I won’t likely be able to use all of the photos I’m going to be sent, but I bet I can fit a lot in. The film will be “set” in the living room, where all of these photos were taken.

Final Project WIP Update

I got face tracking working with Zoom!

It requires at least two computers: one with a camera that streams video through Zoom to the second, which processes the video and runs the face tracking software, then sends the video back to the first.

Here’s what happens on the computer running the capture:

  1. The section of the screen with the speaker’s video from Zoom is clipped through OBS.
  2. OBS creates a virtual cam with this section of the screen.
  3. openFrameworks uses the virtual camera stream from OBS as its input and runs a face tracking library to detect faces and draw over them (a rough Python equivalent of this step is sketched after the list).
  4. A second OBS instance captures the openFrameworks window and creates a second virtual camera, which is used as the input for Zoom.
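The project itself is written in openFrameworks, but step 3 is roughly equivalent to this Python/OpenCV sketch that reads the OBS virtual camera as an ordinary webcam (the device index and the Haar cascade detector are assumptions, not what the openFrameworks library actually uses):

    import cv2

    # The OBS virtual camera shows up as a normal capture device (index assumed).
    cam = cv2.VideoCapture(1)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            # Draw over each detected face, as the openFrameworks app does.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("tracked", frame)  # this window is what the second OBS captures
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cam.release()
    cv2.destroyAllWindows()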

This is what results for the person on the laptop:

There’s definitely some latency in the system, but it appears as if most of it is through Zoom and unavoidable. The virtual cameras and openFrameworks program have nearly no lag.

For multiple inputs, it becomes a bit more tricky. I tried running the face detection program on Zoom’s grid view of participants but found that it struggled to find more than one face at a time. The issue didn’t appear to be related to the size of the videos, as enlarging the capture area didn’t have an effect. I think it has something to do with the multiple “windows” with black bars between them; the classifier likely wasn’t trained on video input with this format.

The workaround I found was to create multiple OBS instances and virtual cameras, so each records just the section of screen with one participant’s video. Then in openFrameworks I run the face tracker on each of these video streams individually, creating multiple outputs. The limitation of this method is the number of virtual cameras that can be created; the OBS plugin currently only supports four, which means the game will be four players max.

End-Of-Semester Plan

I’ve been struggling to come up with interesting projects or endeavors these past few weeks, although I’ve been hoping that I will arrive at some interesting discovery as I experiment with a few things.

One of these things is an Intel RealSense depth camera (borrowed from the studio); I’ve been working on ways of moving its camera data, live or recorded, into Unity. I have a few ideas about algorithmically distorting this depth data and warping the interaction between the camera’s sense of space and time.
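As a placeholder for where this is heading, here is a minimal pyrealsense2 sketch of pulling depth frames into NumPy so they can be distorted before being handed off to Unity. The stream settings and the row-by-row “time warp” are assumptions of mine, not a finished pipeline:

    import numpy as np
    import pyrealsense2 as rs

    # Start a depth stream from the RealSense (resolution/fps assumed).
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    pipeline.start(config)

    history = []  # ring buffer of recent frames for a simple time distortion

    try:
        while True:
            frames = pipeline.wait_for_frames()
            depth = np.asanyarray(frames.get_depth_frame().get_data()).copy()

            history.append(depth)
            history = history[-90:]  # keep roughly the last three seconds

            # Toy distortion: each row of the output samples a different moment
            # in time, so the body smears across past and present.
            warped = np.stack([
                history[int(i / depth.shape[0] * (len(history) - 1))][i]
                for i in range(depth.shape[0])
            ])
            # `warped` would then be shipped to Unity (e.g. over a socket).
    finally:
        pipeline.stop()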

Less related to “capture” is a speed tufting tool that I’ve been learning to use, which I will be rigging to do tufting automatically within 2D coordinates. I’ve had an idea for a while of creating a live interaction between performance or world movement and embroidery or other forms of textile fabrication. I was thinking this would be possible using the depth camera, which does a good job of interpreting 3D activity, and whatever configuration of motors I decide on for the project. This is a concept that I have been considering working into a final project for another course I’m currently taking, algorithmic textiles, which would involve writing path algorithms specifically catering to the automated tufting system.

Aside from these, I have found the offerings so far to be refreshing and stimulating exercises, so I wouldn’t mind continuing with a set of those as I work on some of this stuff.