Oscar – final

Title: History City

Description: Your browsing history says a lot about you. What if there was a way to visualize it? Take a look at all the pure (and not so pure) things you’ve been up to in a city-like visualization of your history.

History City layout

What is Your Project: a history visualizer that builds a city based on the type and frequency of the sites you visit.

What Makes Your Project Interesting: we rarely look back at our browsing history because of how it portrays a darker, more secret side of us that we wouldn’t show others. Normally we want to forget or clear our history, but instead of running away from it, we can now visualize it.

Contextualize Your Work: visually, I was inspired by the Three.JS demo Infinitown (https://demos.littleworkshop.fr/infinitown), which renders a bright, low-poly city that stretches on forever. Rather than placing buildings at random, I wanted to use data to help generate the city.

Evaluate Your Work: The resulting work looks visually impressive, although with more time I would like to focus on city diversity (getting more distinct cities when you input more distinct browsing histories).

Documentation:

Loading a city with a high purity value.

Loading a city with a low purity value.

Clicking on assets to view the sites they represent.

The Process:

Given a JSON export of a person’s browsing history, the parsing script first cleans up entries by grouping together links with the same domain name and accumulating site frequency. On a second pass, the script maps each URL to an asset type (“building”, “house”, or “tree”) depending on the contents of the webpage: it scrapes the page’s inner HTML and extracts the top K words that occur frequently on that site but rarely on the others. Using these words and a synonym dictionary, the script assigns the URL an asset type based on their connotation (a sketch of the pipeline follows the list):

“building” – corporate, shopping, social media

“house” – personal, lifestyle

“tree” – learning, donating
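
The post doesn’t include the parsing code itself; the following is a minimal Python sketch of the two passes, where the JSON field names (url, visitCount), the connotation lexicon, and the word-scoring heuristic are all illustrative assumptions rather than the actual script.

import json
import re
from collections import Counter
from urllib.parse import urlparse

# Hypothetical connotation lexicon standing in for the synonym dictionary.
CONNOTATIONS = {
    "building": {"corporate", "shopping", "store", "social", "media"},
    "house": {"personal", "lifestyle", "blog", "recipe", "hobby"},
    "tree": {"learning", "course", "tutorial", "donating", "charity"},
}

def group_by_domain(history):
    """First pass: merge entries sharing a domain, accumulating visit counts."""
    counts = Counter()
    for entry in history:  # assumes entries like {"url": ..., "visitCount": ...}
        counts[urlparse(entry["url"]).netloc] += entry.get("visitCount", 1)
    return counts

def top_k_distinctive_words(page_text, background, k=10):
    """Words frequent on this page but rare across all the other pages."""
    words = Counter(re.findall(r"[a-z]+", page_text.lower()))
    scored = {w: c / (1 + background[w]) for w, c in words.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

def asset_for(words):
    """Map a page's distinctive words to the best-matching asset type."""
    hits = {a: len(CONNOTATIONS[a] & set(words)) for a in CONNOTATIONS}
    return max(hits, key=hits.get)

history = json.load(open("history.json"))  # hypothetical export filename
domain_counts = group_by_domain(history)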

After the mapping is computed, the Three.JS interface loads an instance of each asset type. For each URL, the corresponding asset is placed, with its size determined by the site’s frequency. Using Three.JS’s raycaster, we can compute the cursor’s intersection with an asset. When an asset instance is hovered over, a portion of its material turns to an orange highlight, and clicking the asset opens a new tab with the URL that asset represents.

Building, Cloud, House, and Tree assets.

Roads are generated around 5×5 cell regions. Starting in the center and moving in a spiral motion, each new asset is placed in the initial center 5×5 region without overlapping any other asset. If no room is found after a set number of tries, placement moves on to the next 5×5 region. This lets the city keep expanding the more URLs there are to load.
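
The placement routine isn’t shown in the post; here is a minimal sketch of the spiral-and-retry strategy it describes, assuming axis-aligned bounding boxes, assets no larger than a region, and an illustrative retry limit.

import random

CELL = 5  # each road-ringed region spans a 5x5 block of cells

def spiral_regions():
    """Yield region coordinates spiraling outward from the center."""
    x = y = 0
    yield x, y
    step = 1
    while True:
        for dx, dy, n in ((1, 0, step), (0, 1, step),
                          (-1, 0, step + 1), (0, -1, step + 1)):
            for _ in range(n):
                x, y = x + dx, y + dy
                yield x, y
        step += 2

def overlaps(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def place_asset(size, placed, max_tries=50):
    """Try random spots in each region along the spiral until one is free."""
    for rx, ry in spiral_regions():  # assumes size <= CELL, so a fit exists
        for _ in range(max_tries):
            px = rx * CELL + random.uniform(0, CELL - size)
            py = ry * CELL + random.uniform(0, CELL - size)
            box = (px, py, px + size, py + size)
            if not any(overlaps(box, b) for b in placed):
                placed.append(box)
                return box
        # no room after max_tries: spill over to the next region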

A purity variable captures the ratio between the buildings and houses and the trees. A purer city has no clouds or pollution and no cars polluting the road. A less pure city is dark, with many polluting cars on the road and dark clouds covering the sky.

Lighting is computed with a directional light, its range visible via the wireframe helper. Every mesh has cast and receive shadows enabled.


Final Project – Slime Language

Slime Language (2020)

Slime Language is an ongoing multi-part gestural performance and video series creating slang sign language from hip hop lyrics. These alternative signs are meant to visualize and decipher spoken phrases, transforming them into signed idioms.

This idea began in 2019 while I was trying to create a photo series on the black fugitive image and coded languages by using poses influenced by sign language.

(3 out of 26 images from the original “Slime Language” series)

However, I did not consider this the final rendition of the idea, since I felt I had captured the fugitive image aspect but not the coded language concept. For the Person in Time assignment for this class, I chose to revisit this series to work on balancing out the coded language aspect. This resulted in a QWERTY-like system for your hands, using latex gloves, based on finger phalanx sections and letter frequency in English.

The project became a speaking system using a cross of the English alphabet, sign language, a Ouija board, tattoos, and gang signs.

The phalanx sections of the fingers (proximal, middle, and distal) were used to plot the English alphabet on the hand. By pivoting and pointing with the thumb, words and sentences could be formed to create silent, performative communication between individuals. The placement of each letter was determined by its frequency in the language and its accessibility on the hand.

Yet again, I was not content with this rendition of the concept. The movements were awkward and slow. I also tried machine learning with Google Teachable Machine, but the movements were too small for it to detect changes. Using Teachable Machine also made me realize that holding your hands straight out in front of you so your messages can be read is uncomfortable positioning in general. Most importantly, the execution felt as if the concept strayed from my body of work.

I then began looking into Christine Sun Kim and her work with sign language and music. Using the movements of signing, she creates drawings/diagrams, and I knew this was more the direction I wanted to go. My work as a whole deals with black culture and hip hop, and the most recent rendition of “Slime Language” just did not have those parts of my work. I then began looking into sign language slang and sign language interpreters for concerts. After this round of research, I decided to turn Slime Language into a series of movements creating slang/idiom signs based on lines in songs. As of now the series has 30 installments.

(Full Video: 3 minutes 45 seconds, 30 segments)

(10/30 Segments)

Developing this project made me subconsciously develop a structure for the language based on more than an exact gestural representation of each word. For instance, a clawed “heart pumping” gesture was used to indicate life, and a rolling hand gesture indicated continuing time. Most gestures contained three parts: the main action, a disruption to the action, and an ending to the action. This three-part “syntax” aided in creating fluid movement as well as shortened signed idioms.

Joyce – final

a momentary stay

Snapshots of a person’s transient, solitary body in her quarantine home.

This work lives on this webpage.

In these snapshots, my body is keyed out to create windows where two videos of the same view at different time scales are overlaid.

Story

During the first month and a half of quarantine, I stayed in a friend’s apartment. I was originally living in a dorm, stuck in a small half-basement room with a terrible kitchen. It was miserable, so I asked to stay at my friend’s empty apartment while she was with her family in another city.

My friend’s home was full of her personality. When I first arrived, it was as if she had just left — there were dishes drying on the rack, clothes in the hamper, and snacks in the fridge. I lived carefully, knowing that in a few weeks I would leave, and hoping I would not leave many traces behind. There was a sense of friction between the space and my body, because I was a guest here without the host. I wanted my presence to be brief and light, but as time went by, I blended with the space and felt more and more at home. It really was a cozy apartment! The friction became more of a warm touch.

In this project, I lean into the distance between my body and my temporary home. In the snapshots, my keyed-out body creates windows that juxtapose the same space at two different time scales.

Process
  • Taking videos of myself in a green suit doing mundane things
  • Overlaying the videos in After Effects
  • Converting the videos to GIFs using ezgif.com/video-to-gif
  • Creating a basic webpage to present the GIFs (plain HTML + CSS + JavaScript; Link to Github repo)

Inspirations

In Search of Lost Time, novel by Marcel Proust
“The Figure A Poem Makes” by Robert Frost
4/51 Dolna St and Headache by Aneta Grzeszykowska

Three Transitions (1973) by Peter Campus

Next steps:
  • GIF compression using ffmpeg (a sketch below)
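
A common recipe for this is ffmpeg’s two-pass palette trick; the sketch below (my assumption about the approach, with illustrative filenames, fps, and scale) wraps it in Python.

import subprocess

SRC, OUT = "clip.mp4", "clip.gif"              # hypothetical filenames
FILTERS = "fps=12,scale=480:-1:flags=lanczos"  # lower fps/width = smaller GIF

# Pass 1: compute an optimized 256-color palette for this specific clip.
subprocess.run(["ffmpeg", "-y", "-i", SRC,
                "-vf", FILTERS + ",palettegen", "palette.png"], check=True)

# Pass 2: encode the GIF with that palette to cut size without heavy banding.
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-i", "palette.png",
                "-filter_complex", FILTERS + "[x];[x][1:v]paletteuse", OUT],
               check=True)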


Final Project Update 2

I’ve managed to segment with color and can also export the images with alpha data. The script exports images matching a given tag (using the COCO dataset). The plan now is to take the images into something like PIL and collage them there. I can also move toward being more selective about which frames are chosen.
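
The post doesn’t name the model, so as a sketch: one way this pipeline could look, using torchvision’s COCO-trained Mask R-CNN and PIL, with the tag, score threshold, and filenames as assumptions.

import numpy as np
import torch
import torchvision
from PIL import Image

TAG = "person"                     # hypothetical tag to extract
LABELS = {1: "person", 17: "cat"}  # abbreviated COCO label map
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()

img = Image.open("frame.png").convert("RGB")
with torch.no_grad():
    pred = model([torchvision.transforms.functional.to_tensor(img)])[0]

# Union of the soft masks of every confident detection matching the tag.
alpha = np.zeros(img.size[::-1], dtype=np.float32)
for label, score, mask in zip(pred["labels"], pred["scores"], pred["masks"]):
    if score > 0.7 and LABELS.get(int(label)) == TAG:
        alpha = np.maximum(alpha, mask[0].numpy())

# Export the matching pixels with alpha so PIL can collage them later.
rgba = img.convert("RGBA")
rgba.putalpha(Image.fromarray((alpha * 255).astype(np.uint8)))
rgba.save("frame_person.png")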

Final Project Update

This project started out with the idea of using a fax machine to capture people’s experiences in quarantine. It’s shifted a little bit, so the direction I’m taking it now is to fax people a daily task that is generated by an AI. To generate these tasks, I’m using GPT-2.

So far, I’ve fine-tuned GPT-2 on a dataset of 101 sample questions I wrote and experimented with a number of variables to tweak the text generator, but it’s only spitting out questions directly from the source material.
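
The post doesn’t say which wrapper was used; below is a minimal sketch with the gpt-2-simple package, where the filename and hyperparameters are assumptions. With only 101 examples, the main levers against verbatim memorization are fewer fine-tuning steps and looser sampling.

import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")  # smallest GPT-2 checkpoint
sess = gpt2.start_tf_sess()

# With ~101 examples, long fine-tuning just memorizes the file; keep steps low.
gpt2.finetune(sess, "questions.txt", model_name="124M", steps=200)

# Higher temperature and nucleus sampling trade fidelity for novelty.
gpt2.generate(sess, length=40, temperature=1.1, top_p=0.9, nsamples=5)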

A subsection of the sample questions I wrote.


An example of output from GPT-2. All of these questions are lifted in full from my 101 sample questions.


Where I’m going from here:

I need to continue tweaking GPT-2 until it outputs new questions. After that, I need to work on automating faxes so that I can send these questions out.

WIP

A house ghost trying to pass time.

Questions

  • What should be the format of presentation?
    • A video essay, or video poem? Spoken words and ambient sound accompanying the videos?
    • An interactive web game?

Work In Progress April 21st

I have been experimenting with different ways of implementing the 3D rolling shutter effect using the Intel RealSense depth camera. So far I’ve been able to modify an existing example file, which allows you to view the live depth feed using the Python library (pyrealsense2).

The following is what the raw output from the camera looks like. As you can see, it displays a point cloud of the recorded vertices.

From here I wrote a function to modify the vertices displayed in the live view. Vertices from each new frame are placed in a queue based on their distance from the lens, and the live view only shows points at the front of the queue. After each frame the queue shifts, so vertices farther from the camera reach the live view a little while after the vertices right in front of it.
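
The function itself isn’t in the post; here is a minimal numpy sketch of that depth-binned delay queue, assuming an (N, 3) vertex array per frame (the shape pyrealsense2’s point cloud provides) and an illustrative depth range and bin count.

from collections import deque

import numpy as np

N_BINS = 30           # number of depth slices == maximum delay in frames
NEAR, FAR = 0.3, 3.0  # working depth range in meters (illustrative)
history = deque(maxlen=N_BINS)  # newest frame's vertices at index 0

def rolling_shutter(verts):
    """Composite a frame where depth bin i is drawn from the frame captured
    i frames ago, so farther vertices reach the view later than nearer ones."""
    history.appendleft(verts)
    slices = []
    for age, frame in enumerate(history):
        z = frame[:, 2]  # distance from the lens
        lo = NEAR + (FAR - NEAR) * age / N_BINS
        hi = NEAR + (FAR - NEAR) * (age + 1) / N_BINS
        slices.append(frame[(z >= lo) & (z < hi)])
    return np.vstack(slices)

Reversing the bin order (drawing the near bins from the oldest frames instead) gives the inverted sampling shown in the clip below.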

This video shows the adjustments made to the 3D vector array, which aren’t reflected in the color map, resulting in this interference color pattern. In this example the sampling is reversed, so farther vertices appear sooner than closer ones.

The main issue I’ve come across is the memory usage of doing these computations live. This clip was sped up several times to recover the fluidity of motion lost to the dropped frame rate. The next thing I plan on doing is getting the raw data into Unity to make further changes using the code I’ve already written.

Work in Progress

Using corporate knowledge transfer methods to make a portrait of my grandfather

Background: (in all seriousness) I have one grandparent left, and we’ve been getting closer over the last few years. He expresses a sort of regret over not having been an active part of my childhood (thanks to living on opposite ends of the country). I want to help close this gap, find a way to capture his wisdom, and consider the knowledge-based “heirlooms” he might be passing down.

What do we do in the face of this problem? We turn to the experts. When you google “intergenerational knowledge transfer” or even “family knowledge transfer,” all you get is corporate knowledge transfer methods. What happens if I apply these methods to my family as if it were an organization? Going down this rabbit hole makes for a super weird and fun project.


Intro: 

Method derived from: Ermine, Jean-Louis (2010). “Knowledge Crash and Knowledge Management.” International Journal of Knowledge and Systems Science (IJKSS) 1. doi:10.4018/jkss.2010100105.

  • “Inter-generational knowledge transfer is a recent problem which is closely linked to the massive number of retirements expected in the next few years.”
  • “This phenomenon has never occurred before: this is the first time in the history of mankind that ageing is growing like this, and, according to the UN, the process seems to be irreversible.”
  • “According to the OECD’s studies, this will pose a great threat to the prosperity and the competitiveness of countries.”
  • “[We can tackle this] inter-generational knowledge transfer problem with Knowledge Management (KM), a global approach for managing a knowledge capital, which will allow a risk management in a reasonable, coherent, and efficient way.”
  • “We propose a global methodology, starting from the highest level in the organization” → for me, this means the patriarch of the family.


The Method: 3 phases

  1. Strategic analysis of the Knowledge Capital.
    1. First, identify the Knowledge Capital. Last week, I called my grandpa and gathered a list of every single thing he had done that day.

      I also supplemented this list with some more characteristics, gathered through a call with my sister.
    2. Next, perform an “audit” to identify the knowledge domains that are most critical, by using the following table:
      Every domain here gets a score; those with the highest scores are the most important.


  2. Capitalization of the Knowledge Capital. Now that I know the most important task(s), convert the tacit knowledge of how to do it (them) into explicit knowledge. Or, in other words,

    “collect this important knowledge in an explicit form to obtain a ‘knowledge corpus’ that is structured and tangible, which shall be the essential resource of any knowledge transfer device. This is called ‘capitalisation’, as it puts a part of the Knowledge Capital, which was up to now invisible, into a tangible form.”

    As a fun aside, that last line is essentially the corporate-speak definition of “experimental capture.” 🙂 To do this, I’ll be drawing on interview techniques from Rachel Strickland’s Portable Portraits, as well as some “Knowledge Modeling” techniques from the corporate article, such as the “phenomena model,” “concept model,” and “history model.”

    • Phenomena model: describe the events that need to be controlled/known/triggered/moderated to complete the task
    • Concept model: mental maps – may ask him to draw a diagram of how he does a task
    • History model: learn more about the “evolution of knowledge.” Ask why – hear the story behind a particular artifact, etc.
    • TL;DR – the capture technique is a mixture of video, drawing, and interview, depending on the task (and whether or not video creates too many technical difficulties for a 90-year-old :))


  3. Transfer of the Knowledge Capital. This is all about how the Knowledge Corpus is disseminated. It usually takes the form of a “Knowledge Book” (or a “Knowledge Portal” if it’s online). Furthermore, to ensure successful transfer, it’s often good practice for the recipient to do something actionable with the Knowledge Corpus.
    • Existential Question: since I am literally a body of genetic transfer… am I the recipient, or am I the actual Knowledge Corpus itself?
    • Either way, I plan to take action by recreating (and recording) the task(s) myself, relying on my own memory as a source of tacit knowledge. If I am the “recipient” of the book/portal, this will be my way of proving successful transfer via action. If I am the corpus itself, this will be a final capture method to convert tacit knowledge into explicit knowledge and create the book/portal.


Ta-da! I have created a kind of fucked up heirloom.