Person In Time Project Ideas

I’m pretty excited about this project and sort of in awe of the possibilities; I haven’t really gathered my thoughts enough to have fully fleshed-out ideas, but some areas of interest that have stuck with me:

Elder Wisdom/Intergenerational Communication

  • Marcell Esterhazy’s timelapse of his grandfather eating a bowl of soup has seriously stuck with me since we watched it in January. It’s so captivating to me.
  • My family is originally from Pittsburgh (despite this year being my first time living here) – my family roots are all over the city: my parents met here, my grandparents and great-grandparents lived a few blocks from where I live now, etc. Not very many people are left… (though they are all buried in the same Jewish cemetery in Millvale). But this is a special place for me to be living right now. This project might be an opportunity to capture that somehow… though I’m not sure what that would look like.
  • I’ve been thinking a lot about intergenerational communication lately, passing down knowledge, etc. How might I capture elder wisdom in a novel way? This was somewhat inspired by a previous classmate who captured elder wisdom through a series of interviews. With the tools in this class, I could definitely go beyond audio.
  • To build upon Marey’s motion capture suit study of the “average male gait” (see photo below), I am interested in capturing the gaits of other bodies – seniors, children, etc. – through chronophotography or stroboscopy.
Marey’s chronophotography in the original motion capture suit


The Digital Grotesque

I’m also very interested in this concept of the digital grotesque, and am really drawn to artists using motion capture to distort the human form. Cool 3D World projects, copy/pasting parts of the human body (like adding an arm to a knee), the walk cycle projects, and animations of bodies sort of “melting” or being pulled through holes in the floor…all of these are super intriguing to me.

I don’t have a specific idea yet for this one, but I would love to learn how to work with motion capture and particles and distortions in general. (I remember Nica saying something about knowing how to do this…we should talk :))

People in Time Project Ideas

Here are a few ideas I’m mulling over for the project. For this project I’m really interested in exploring the unique dynamics and kinematics of the human body. Though there are some interesting ideas listed below, I don’t feel that my final framing is contained within them.

  1. Walking to a Beat*: A video processing algorithm that generates skeletons for walking people, measures the frequency of the gait, and picks a song whose beat matches that frequency. Songs would be presorted by tempo. (See the first sketch after this list.)
    1. https://www.youtube.com/watch?v=6Ron-Ikenfc : Films often put music over someone walking or doing some other mundane task to give it this sense of being intense, important, or intentional — this scene from Spider-Man shows what happens when that audio is removed. What would happen if we created a system where that audio was added back in?
    2. https://github.com/aubio/aubio
  2. Movie Path*: An algorithm that processes scenes from movies and uses photogrammetry combined with skeleton tracking to figure out the exact orientation and path of the camera relative to someone in the scene. Use that information to generate a path for the robot arm and film someone to add into the original scene.
    1. Which films?
    2. How contemporary would the films be? — There would definitely be a bias towards contemporary film.
    3. Scale? Films use giant booms to hold and guide cameras – would a UR5 robot arm be able to achieve the necessary motion for this project?
      1. Maybe we could put a squirrel into the scenes?
    4. Similar Idea: Use the algorithm to look at individual movie scenes and encode camera position relative to the floor and the focal plane — sort all scenes this way. Make this searchable by camera position. Use this to find movie scenes that could accept characters from other videos — put squirrels into films? 
  3. On-Body Camera Rigs: There are a lot of camera rigs that stabilize the jitter and motion inherent in handheld filming — could I create a system that does the opposite, one that moves and responds to human input? The goal here would not be to create a jittery, noisy video, but rather a system that dances and moves and rotates in response to some human input.
    1. Maybe a slow-motion camera would be good?
    2. Could a gimbal system be repurposed to do this?
    3. https://www.videomaker.com/buyers-guide/camera-rig-buyers-guide
    4. https://www.adorama.com/cdagcjib.html?gclid=EAIaIQobChMIkfjP2_WB6AIVlozICh36ZgzDEAQYAiABEgJjV_D_BwE&utm_source=adl-gbase
  4. Long Exposure People Pixel Averaging: Capture the trails of people by making an algorithm that averages pixels over time, weighting the pixels where people have been (a pixel has both a position in the grid and a position in time). (See the second sketch after this list.)
  5. People responsive systems*: Set up a set of highly responsive systems that will rotate at the slightest movement of air in a populated place; set up a video camera to capture the rotations and frame the entire scene through these responsive systems.
    1. Also: put responsive things into the air that people could move, like smoke or micro-bubbles or dust
  6. Dust cam: Create a camera setup in front of a light source at just the right angle so that it picks up the microscopic dust particles in the air.
    1. As people walk past, the dust particles would get disturbed in a distinct way.
  7. Novel shots:
    1. Building cutaway like the hallway scene in Oldboy
      1. Creating a very flat, side-scroller way of interacting with this
      2. Could an algorithm take like 3 wide angle shots and stitch together a super flat image?
      3. http://www.cs.technion.ac.il/~ron/PAPERS/ieee_mds1.pdf
      4. https://mathematica.stackexchange.com/questions/34264/how-to-remap-a-fisheye-image
      5. https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html?highlight=remap
      6. Maybe video photogrammetry (where a set of frames for a photogrammetric model is generated at each point in time)
    2. Other shots that might make people look a little funky/be fun:
      1. Fisheye lens
      2. Parallelized imagery with hyperbolic mirrors (see below)
      3. Schlieren optics
        1. https://en.wikipedia.org/wiki/Schlieren_photography
        2. https://pdfs.semanticscholar.org/3267/9a2ab1d35774a4b859323e5d7548efb45660.pdf
    3. Side Scrolling Everywhere* (connected to the Novel Shots idea above):
      1. Use a series of cameras and a flattening/warping algorithm to create a super flat scene that looks like a side-scroller game

Hallway scene from Oldboy
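
Below are rough sketches for two of these ideas. First, Walking to a Beat: a minimal sketch of the core loop, assuming MediaPipe for the skeleton tracking and a small hand-labeled song table (the aubio library linked above could generate those tempo labels automatically). The file names and the `SONGS_BY_BPM` table are hypothetical:

```python
# Sketch: estimate a walker's step tempo from ankle motion, then pick the
# song whose BPM is closest. SONGS_BY_BPM is a hypothetical, hand-labeled
# table; aubio's tempo detection could build it automatically.
import cv2
import numpy as np
import mediapipe as mp

SONGS_BY_BPM = {90: "ambler.mp3", 110: "stroller.mp3", 130: "strider.mp3"}

pose = mp.solutions.pose.Pose()
cap = cv2.VideoCapture("walker.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)

ankle_ys = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks:
        # Landmark 27 is the left ankle in MediaPipe's pose topology.
        ankle_ys.append(result.pose_landmarks.landmark[27].y)

# Dominant frequency of the ankle's vertical oscillation = stride frequency.
ys = np.asarray(ankle_ys) - np.mean(ankle_ys)
spectrum = np.abs(np.fft.rfft(ys))
freqs = np.fft.rfftfreq(len(ys), d=1.0 / fps)
stride_hz = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

gait_bpm = 2 * stride_hz * 60  # two steps (left + right) per stride
song = min(SONGS_BY_BPM, key=lambda bpm: abs(bpm - gait_bpm))
print(f"gait is about {gait_bpm:.0f} steps/min -> play {SONGS_BY_BPM[song]}")
```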
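
Second, Long Exposure People Pixel Averaging: a sketch that weights the running average toward pixels where motion has been, using OpenCV's MOG2 background subtractor as a stand-in for real person detection:

```python
# Sketch: accumulate a long-exposure image weighted toward pixels where
# people (well, any motion) have been. Background subtraction stands in
# for proper person segmentation.
import cv2
import numpy as np

cap = cv2.VideoCapture("crowd.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2()

accum = None   # running, per-pixel weighted sum of frames
weight = None  # how much "person time" each pixel has seen

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame) / 255.0  # ~1.0 where motion was detected
    w = 0.05 + mask                         # person pixels weigh ~20x more
    if accum is None:
        accum = np.zeros(frame.shape, dtype=np.float64)
        weight = np.zeros(frame.shape[:2], dtype=np.float64)
    accum += frame * w[..., None]
    weight += w

trail = (accum / weight[..., None]).astype(np.uint8)
cv2.imwrite("people_trails.png", trail)
```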


PersonInTime – Ideas

3 Ideas I had for the project:

  • Similar to the Phodography work we saw in class, hook up a camera around a person’s neck and have it take a picture every time their heart rate goes up (sketched below). The photos can then be grouped together based on the person’s heart rate to show which settings/scenes lead to a raised heart rate, or a calmer one.
  • Capture the heart rate of a person over time, as well as their geolocation. Overlay the data on Google Maps to identify which locations are the most stressful for a person. Potentially create a database of results for people to see where the generally stressful locations are.
  • Capture the conversations of a person to find how frequently they talk during the day. Potentially combine with GPS to find where the person talks.
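
For the first idea, the trigger logic could be as simple as comparing the current heart rate to a rolling baseline and firing the camera when it spikes. A minimal sketch, where `read_heart_rate()` and `take_photo()` are hypothetical stand-ins for whatever sensor and camera APIs get used:

```python
# Sketch: fire a wearable camera whenever heart rate rises well above its
# recent baseline. read_heart_rate() and take_photo() are hypothetical
# stand-ins for the actual sensor and camera APIs.
import time
from collections import deque

baseline = deque(maxlen=60)  # the last ~60 one-second BPM samples

while True:
    bpm = read_heart_rate()  # hypothetical sensor call
    if len(baseline) == baseline.maxlen:
        avg = sum(baseline) / len(baseline)
        if bpm > avg * 1.15:  # 15% above the rolling baseline = "raised"
            take_photo(tag=f"bpm{bpm:.0f}")  # hypothetical camera call
    baseline.append(bpm)
    time.sleep(1)
```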

Person in Time Project Ideas

These all feel only tangentially related to the prompt, but I also think they all sound like a lot of fun, so…

  1. Exquisite Choreo – You know the game where someone draws a picture, then someone else describes the picture, then a third person draws a picture based just on the description, then someone else describes that, and so on? It’s like that but with dance. I would do a few rounds: someone dances to a clip of music and I record them, then someone else watches that video and tries to write a description of the movement. I show that description to another dancer, et cetera. Alternatively, I just do a series of videos with the instruction “Copy this Movement,” like telephone. Alternatively, I just do Exquisite Corpse but with dance, asking “What should come next?” with limited context.
  2. Maybe more of a typology, but… there is this awful door into my residence hall that abruptly stops after you pull it a few inches open. I have enjoyed watching people’s disappointed faces when trying to open this door, especially when I can tell this has happened to them before. I’d love to photograph/film a series of reactions to this. Maybe I could even rig something to do that automatically?
  3. Study for Fifteen Points but YOU are the points – I am currently obsessed with Study for Fifteen Points and I thought it could be fun to recreate it with people instead of robot arms. There are a few ways to do this: firstly, I could give each person a stick with a light at the end, teach them how they should move it, and have them “perform” this somewhere. The “person” they depict could be larger than life, outdoors in the evening somewhere on campus. Could be fun. Alternatively, I could make this something of a game for fewer people, where they are each controlling an interesting point or two and trying to make something that a computer will recognize as walking. This would force people to consider how their joints move.

15 Bricks


Summary

“15 Bricks” is a partial typology of Lego airplanes built from the same 15 Legos. I gave 20 different CMU students the same 15 Lego pieces and asked each of them to use all of the pieces to build an airplane. With so little to work with, they had to reduce the shape of a plane to its most important features, and each person had a unique approach.

Development Process

Initial Ideation/Planning

Of all the typology examples in lecture, I was particularly inspired by the ones that involved many people each contributing something creative, like Kim Dingle’s “The United Shapes of America.” I think these can serve as indirect portraits of the individual participants while also revealing patterns in our collective thoughts about something. I brainstormed things I could ask people to do and/or materials I could ask them to work with, and I landed on Lego as a fun, accessible medium that’s well suited for 3D shapes.

My initial vision, as described in my proposal, was that I would have a large (hopefully effectively limitless) pool of Legos that participants could use. I would then ask each one to build a spaceship, and I’d record a video of their construction process. I expected highly varied ships, and I wanted to really let people’s creativity run wild. Under this scenario, Lego would simply be the medium people were using to construct their spaceships, and it would be the theoretical spaceship designs that were ultimately being recorded and compared in the typology. That didn’t make as much use of the unique properties of Lego as I’d have liked, so I moved away from it and towards the version I ended up using: one where the limited Legos pose a very tight constraint on the construction.

Playtesting and Fine-Tuning Procedure

Now I had to make a few decisions, including what I was going to ask people to make and what bricks I was going to give them to do it. I decided on the first answer in a discussion with Golan. We went with “airplane” because it’s a fairly simple, familiar, and well-defined shape. Unlike spaceships or cars, there isn’t too much variance in commercial plane design, so I can be pretty sure that most participants have the same thing in mind when they begin (though it’s okay if not all do). This would make it easier to meaningfully compare people’s constructions.

As for what Legos to use, I started by creating a sample set from a box of Legos Golan loaned me. I had a few heuristics in mind: if I believed that a certain piece would only be used one way by everyone (i.e. a wheel or windshield), then it was off limits. I wanted to stick to pretty classic, blocky Legos. I also used only white pieces so color wasn’t a factor; I wanted people to focus on shape. I also wanted the set to have enough pieces that I got highly varied solutions (can you imagine how boring the 2-piece solutions would be?) but few enough that people had to sacrifice some details.

At about this time, I decided that I wanted to require that all Legos be used in the final construction. I thought that as long as this didn’t significantly hamper people’s creative freedom, it would be a lot more satisfying. The results become “ways to rearrange these same Legos into airplanes” instead of just “many Lego airplanes.”

A potential Lego set I was considering.
A beautiful biplane made by a pilot participant.
This pilot participant forgot to include wings on his airplane.

I tried out my brick set on many people before finding the one I ended up using. One very long flat piece from the original set was used by everyone as part of the wings, so it was no longer interesting, and I removed it. I also reduced the number of bricks slightly, as most people were finishing with bricks to spare and struggling to find uses for the leftovers. (I liked a little bit of this, as it gave me interesting features like wing flaps, exhaust trails, and wheels made out of bricks, but it got to be too much with some of my sets.) I was also very pleased with my preliminary results, since the airplanes I was getting looked quite different from each other.

Interestingly, after I decided on my 15-brick set, I found this video of a similar 15-brick Lego building experiment (in this one, participants are given 15 random bricks and told to make anything they want, something I definitely did not want to do). So I guess I’m not the only one who found that to be a pretty good number!

Data Collection

When I first started collecting my airplane designs formally, I wasn’t sure how I would present them. So, just to be safe, I thought I should capture the entire building process, in case I wanted to display the videos of every plane being created or make a list of all the steps people took.

My very enticing CUC table

I set up a table in the University Center with the very enticing poster “Play with Lego, Get Candy.” I had a camera set up pointing at the Lego-building area, and I would record any time someone was building anything. I also had people sign a release saying I could use their design in this project and optionally use the footage of their hands in my documentation (which I ended up not doing). I got about fifteen volunteers this way. Some of them found very creative approaches to the problem that weren’t considered “valid” Lego constructions. Initially, I allowed people to do this, but I later decided that I shouldn’t, so a few contributions had to be thrown out.

Once I knew that all I was going to need from people was their plane design (not the footage of their hands), collection was even easier. I’d carry around my little bag of Legos, and when I had a moment, would ask the people around me to make airplanes with them. Then I’d photograph every angle of the planes that were made. After this, I had a total of 20 valid airplanes.

Encoding the Airplanes

In my initial research for this project, I came across Mecabricks, an in-browser tool for designing Lego models. It has thousands of Lego bricks that you can use, a great interface for moving them and snapping them together, and it’s totally free. Best of all, it has an extension for Blender that lets you animate and render nearly photo-realistic images of your Lego models (if I had bought the paid version, I could have chosen just how scuffed up I wanted the surface of the bricks to look, and how many fingerprints they had on them–how cool!). If you want to make a Lego animation for some reason, I can’t recommend this enough. My only complaint is that the documentation for some features is pretty poor.

Me using Mecabricks in a browser

I decided to model and render people’s airplanes in Blender because I wanted their contribution to be not the physical figure they created, but their design. Mecabricks models let one uniquely describe a Lego creation with no extraneous detail – exactly what I wanted. It also allowed me to render them all with exactly the same lighting/camera conditions, which is always good for a typology. Finally, it’s a format that people can potentially explore in 3D (Mecabricks has a model-sharing website for this), without the loss I would incur doing something like photogrammetry. The only downside of this decision is that it meant I had to learn Blender, which, while probably valuable, was not very much fun.
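
I did all of this through the Mecabricks/Blender GUI, but for anyone repeating the process, the batch rendering could in principle be scripted with Blender’s Python API. A rough sketch, assuming each airplane model has been imported into its own collection named “plane_NN”:

```python
# Sketch: render every airplane collection with the same camera and lights.
# Assumes each plane was imported into its own collection named "plane_NN",
# and that the shared camera/lights live outside those collections.
import bpy

planes = [c for c in bpy.data.collections if c.name.startswith("plane_")]

for plane in planes:
    # Show only the current plane in the render.
    for other in planes:
        other.hide_render = (other != plane)
    bpy.context.scene.render.filepath = f"//renders/{plane.name}.png"
    bpy.ops.render.render(write_still=True)
```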

Animating in Blender

After I put all the models in Blender, I had the idea of animating transitions between them to really drive home the combinatorial element of this: it’s all the same blocks being rearranged every time. I ordered the airplanes based on my own subjective preferences, putting two next to each other when they had something interesting in common that I wanted to highlight. Then I animated all of the bricks moving, which took WAY longer than I thought, but also was pretty rewarding! I find the resulting video (top of this post) extremely satisfying to watch.

Claimed a row of computers for rendering

Results and Discussion

Below is an image of all 20 rendered airplane designs, ordered by participant first name.

I have really enjoyed comparing and contrasting them so far. Here are some things I’ve noticed:

  • Most people used the small sloping bricks as the nose of the plane, but a few did put them on the wings or tail.
  • Many people used the two 2×6 flat bricks as the wings, but they were also popular as support bricks for the body of the plane. The 2×8 flat brick was a common substitute, but some people made their wings out of thicker pieces.
  • People had pretty different approaches to the tail end of their airplanes, with many adding some small crossbar, some making a very pronounced tail, and some just letting their airplanes taper off.
  • People LOVE to make their airplanes symmetric. 19 of the planes are symmetric up to overall shape, and 18 of these are symmetric even down to the exact Lego (somewhat impressive given the fact that I included some odd-sized Legos with no matching partner).
  • A few categories of airplane have emerged, like those that are just plus signs, or those that are triangular.

The similarities/patterns above are especially evident in the animated video, I think. You can, for example, watch the slope pieces sit there near the front for many planes in a row.

I’m really happy with this project, but I do wish there were a little more to it. I would love to try this again with a different prompt. Maybe I can get 20 people each to make a Lego chair? Or maybe I can keep scaling down the number of Legos and see at what point they converge with many people making the same thing? Hopefully I can pursue this in the future–I’ve had a lot of fun with this project so far!

Blink

Fair warning: this post contains flashing images.

Disclaimer: I do not own any of the footage used; it is the property of the associated film studios listed below:

The Godfather (1972)- Paramount Pictures, Raging Bull (1980) – United Artists, The Matrix (1999) – Warner Bros., Whiplash (2014) – Sony Pictures Classics, Jaws (1975) – Universal Pictures


For my typology machine I wanted to explore Walter Murch’s theory (as proposed in his book In the Blink of an Eye) that if a film captures its audience, the viewers will blink somewhat in unison. Taking this, I expanded it and asked a series of questions:

1.) Will the audience blink in unison?

2.) Does Genre/The content of the film affect the rate at which people blink?

3.) Are there specific moments/edits you would expect the viewer to blink on?

4.) To what extent can the frames that one misses when one blinks summarise the watched scene?

How did I do it? 

Using Kyle McDonald’s BlinkOSC, I set up as below (just in a different space):

I had four participants watch the footage for me. These four cases were all recorded in the same room; I sat on the same side of the table as them, about 3 feet down, so that they couldn’t see my screen but I could see the monitor and make sure the face capture was stable. I modified BlinkOSC so that whenever the receiver registered a blink, it would save a PNG of the frame at the instant of the blink. On top of this, I had another version of the receiver going which overlaid a red screen whenever someone blinked, which was screen-recorded.
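
For the curious, the receiver-side logic amounts to something like the sketch below, assuming python-osc for the OSC listening and mss for the screen grab (the `/blink` address is my assumption here; BlinkOSC’s actual message format may differ):

```python
# Sketch: save a PNG of the screen whenever a blink message arrives over OSC.
# Assumes a "/blink" address; BlinkOSC's real message format may differ.
import time
import mss
import mss.tools
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

sct = mss.mss()

def on_blink(address, *args):
    # Grab the monitor showing the film and timestamp the saved frame.
    shot = sct.grab(sct.monitors[1])
    mss.tools.to_png(shot.rgb, shot.size,
                     output=f"blink_{time.time():.2f}.png")

dispatcher = Dispatcher()
dispatcher.map("/blink", on_blink)
BlockingOSCUDPServer(("127.0.0.1", 8000), dispatcher).serve_forever()
```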

I had the viewers watch 5 clips in this order:

1.) The opening sequence of The Godfather (1972) (WHY? This was edited by Walter Murch, so as to test Murch’s theory on his own work)

2.) The montage sequence from Raging Bull (1980) (WHY? Raging Bull is widely considered one of the best-edited movies ever (according to a survey by the Motion Picture Editors Guild), and I chose this montage sequence not only as a very specific kind of editing, but also as a kind of editing that is fairly familiar to people (the home video))

3.) The bank sequence from The Matrix (1999) (WHY? A very action-heavy scene featuring not only human but environmental destruction)

4.) The ‘Not Quite My Tempo’ sequence from Whiplash (2014) (WHY? Not only a very emotionally heavy scene, but also a scene from a contemporary movie lauded for its editing)

5.) The shark reveal jump-scare from Jaws (1975) (WHY? A jump-scare, and another movie lauded for its editing)

They watched these with slight breaks in between (time not only for me to set up the receiver for the next scene, but also for the viewers to somewhat emotionally neutralise).

The Results 
Firstly: Will the Audience blink in Unison?

The short of it: no.

At no point in any of the five clips did I have all 4 people blink together, though I had a couple of instances of doubles. I sorted these out by editing the videos into a 4-channel video and then scrubbing through for all the instances where overlapping blinks happened. [The videos are too large for this website but they will live here for interested parties.]

Godfather:

Raging Bull:

The Matrix:

Whiplash:

Jaws:

Does Genre affect blinking?

To crunch the numbers on this:

For The Godfather I had an average of 109 blinks over the 6:20 video = 0.29 blinks per second

For Raging Bull I had an average of 48 blinks over the 2:34 video = 0.31 blinks per second

For The Matrix I had an average of 79 blinks over the 3:18 video = 0.4 blinks per second

For Whiplash I had an average of 80 blinks over the 4:21 video = 0.3 blinks per second

For Jaws I had an average of 17 blinks over the 1:24 video = 0.27 blinks per second

From this I can make the somewhat tenuous conclusion (I’d need to do the actual statistics) that action does indeed cause us to blink more.

Are there specific moments or edits you expect a viewer to blink on?

I would hazard a tenuous no.

The ‘jump-scare’ from Jaws:

The chair throwing scene from Whiplash:

While in this second case it is harder to distinguish an exact moment, I feel that the small sample (only 2 real responses at a time) is somewhat of a factor in deciding that there is no definitive communal moment when everyone blinked.

Summarising the scenes?

Now we get into a much more subjective domain. The below gifs are from all the collected blink frames.

We can see an individual’s sense of what was missed:

or

The shared communality of what is missed. (These are all presently different speeds, but I will re-edit them later)

On top of this, we can also scale this back up to the original frame rate, which on one level explores notions of linearity and, indeed, the functioning of our eyes.

Then there is the collective quality of individually viewed content, which I feel also opens a potentially interesting discourse: comparing what we collectively view through what we individually missed.

Going further:

1.) I had the stark realisation after I had recorded all my content that all of the clips I chose were predominantly filled with men. Thelma Schoonmaker (Raging Bull) and Verna Fields (Jaws) were both the main editors of their respective movies, but I feel this isn’t enough of a hallmark to balance it out. As such, a more varied/inclusive content range should be a must going forward.

2.) This in turn suggests that different kinds of dramatic content might be worth looking at, i.e. how different kinds of emotion might affect how we blink.

3.) I feel more abrupt jump-scares, and horror as a genre overall, would be worth looking at. The jump-scare itself has evolved a fair amount through time, and contemporary horror might be worth comparing to older horror.

4.) All of my clips were viewed on a monitor in a singular format and then compared. While this is more indicative of how technology has affected the way we view film today, I feel the group aura that is part of viewing film in a cinema is worth looking at. This would entail a much larger-scale capture, and most likely a re-orientation of the technology I use.


Typology of Lenses

Many people are completely dependent on corrective lenses without knowing how they work; this project examines the invisible stresses we put in and around our eyes every day.
I have worn glasses since I was 5 years old, and contacts since I was 8, but I’ve never stopped to think about their construction. In learning about polarized light, however, I realized that transparent plastic is not perfect; in fact, it is subject to stress that can be seen clearly when placed between two perpendicular polarizing filters.
Source: Google Images
The goal: view the stresses in corrective lenses.
I: Contact Lenses
What are the stresses on my contact lenses? Is there a difference in stress between a fresh pair and a used pair? 
I wear daily lenses, which means I’m supposed to throw them out every night. Instead of throwing them out, I saved my used contacts for a week so I could compare their stresses with fresh ones.
The Machine
The machine was pretty simple:
  • Soft contact lenses (used & unused)
  • Linear polarizing filters (2)
  • DSLR with macro lens
  • Light table

I was excited to use polarizing filters because of how clearly they highlight points of stress; I hoped to see the tiny stresses in the things I put in my eyes every single day.
So I put a contact lens between 2 filters and…nothing happened.
Thus entered the experimentation phase.
Experiments
First, I did some research – why did this not work? (As far as I can tell, soft contacts are mostly-water hydrogels, so they don’t hold the kind of internal stress that shows up as birefringence between polarizing filters.) Despite the objective “failure,” it was honestly really fascinating to learn how contacts are built.
Next, I tried a bunch of ideas:
  • What happens if you squish it? Stretch it?
  • Let it dry out & shrivel up?
  • Let it dry out and shatter it?
  • What about in the packaging?
At least the packaging looked pretty cool!
stretched, dried, and shattered lenses next to a hard plastic “fake” lens (which worked properly)
Somehow, even shattering the lens did not constitute “stress” for it. Crazy, right?
II: Glasses
Luckily, I had also brought my glasses as a backup plan.
What are the stresses on my glasses lenses? What do lens stresses look like for different people, types of frames, prescriptions, etc.? 
The Machine: 
  • Glasses (15 distinct pairs)
  • Linear polarizing filters (2)
  • DSLR with macro lens; f/8 & lowest ISO setting
  • Light table
process sketch


These photos were immediately more interesting. The hard plastic of the glasses made for better results with the polarization filters, plus the frames themselves exert stress on the lenses.
To me, the most interesting part was not the stresses of a single frame, but the comparison between them. Different frames show stress in drastically different ways. Some key interesting ones:
No imperfections at all, save for a tiny dent in the lower left. (This might be because of how the lens is attached – it rests on a little metal shelf, rather than being screwed directly into the frame.)
Wire frames tended to yield some more colorful results:
Thank you to everybody who donated your glasses to the cause – I really appreciate it!
III: Insights & Reflections
  • Glasses are a proxy for people. There’s a story to each lens/frame/wearer. Everyone apologizes for their glasses being dirty.
  • We put a lot of trust in glasses without knowing how they really work. They are not as perfect as we think.
  • “Experimental capture” is no joke – it truly is an experiment.
  • I succeeded in keeping it simple. Throughout my master’s program, I’ve learned the importance of “doing one thing well” rather than trying to do it all. Especially in this class, there’s a lot of pressure to use all of the tools. I’m grateful I made a relatively low-tech machine; this helped me learn, adapt, and focus on the storytelling.
  • I failed in anticipating some of the challenges that could arise – I gave myself enough time for plan A but could have left more for plan B.
  • Selecting which frames to photograph is part of the machine. With more time, I could have been more strategic about location and context for choosing subjects.
IV: Future Opportunities
Now that I’ve learned a lot about lenses (for better and for worse), I’m curious about their imperfections. How might I visualize the stresses I found? Does this stress affect what the viewer actually sees?
Seam Carving
Seeing these distortions through the polarizing filters, I was immediately reminded of a funhouse mirror.
Source: Google Images
Thanks to a suggestion from Kyle, I learned about seam carving and explored it a bit. Seam carving is usually used for resizing an image without losing “important” data; it preserves the regions with the most “energy” and algorithmically carves away the less valuable seams in between.
For scaling down, this pretty much looked like a “crop” – not very interesting here. But for scaling up, I started to see some fun distortions.
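
For reference, the core of seam carving is short enough to sketch in numpy: compute an energy map from the image gradients, then use dynamic programming to find the connected vertical path of least total energy, which gets removed (for scaling down) or duplicated (for scaling up):

```python
# Sketch: find and remove one low-energy vertical seam from a grayscale image.
# Repeating this shrinks the image; duplicating seams instead stretches it.
import numpy as np

def remove_seam(gray):
    h, w = gray.shape
    # Energy: gradient magnitude. Smooth areas are "cheap" to carve away.
    energy = np.abs(np.gradient(gray, axis=0)) + np.abs(np.gradient(gray, axis=1))

    # Dynamic programming: cost[i, j] = cheapest seam ending at (i, j).
    cost = energy.copy()
    for i in range(1, h):
        left = np.roll(cost[i - 1], 1)
        right = np.roll(cost[i - 1], -1)
        left[0] = right[-1] = np.inf  # don't wrap around the image edges
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)

    # Backtrack from the cheapest bottom pixel upward.
    seam = np.zeros(h, dtype=int)
    seam[-1] = np.argmin(cost[-1])
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + np.argmin(cost[i, lo:hi])

    # Delete the seam pixel from every row.
    mask = np.ones_like(gray, dtype=bool)
    mask[np.arange(h), seam] = False
    return gray[mask].reshape(h, w - 1)
```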

UV Experiments
In trying to figure out why the polarizing filters didn’t work, I also learned that some contacts have UV blockers. Here’s what a few different brands look like under UV light:
source: google images
With a UV camera, some interesting future experiments could arise. If I took a UV self-portrait, what would my eyes look like?

The Virtual Artifact Gallery

This virtual gallery displays tangible objects captured from real life, accompanied by their mundane and iconic representations from popular video games.


I originally had an idea to curate a similar gallery, which would include samples of real textures alongside their renderings in different styles of illustration. I wanted to capture the process of reconstructing tangible perception in a medium as abstracted as illustration. I eventually found that the variations between styles weren’t cohesive enough to create the type of collection I wanted, so I thought of collecting 3D models, specifically low-polygon models created for video games. This type of modeling is a medium that is abstracted enough by its limited detail, and can assume endless varieties of recognizable forms. It also incorporates some of the gestural aspects of illustration that I was trying to capture, through both the applied image textures and the virtual forms themselves.

I thought of using video game models for my typology when I was browsing The Models Resource looking for models to use for a different experiment. The Models Resource is a website that hosts 3D models that have been ripped from popular video games. I first noticed that a lot of the models had been optimized for speed and had interesting ways of illustrating their detail with a limited amount of information.

The following are some of the titles from which I sourced models:

  • Mario Party 4 [2002] – Wario’s Hamburger
  • SpongeBob SquarePants Employee of the Month [2002] – Krusty Krab Hat
  • SpongeBob SquarePants Revenge of the Flying Dutchman [2002] – Krusty Krab Bag
  • Scooby-Doo Night of 100 Frights [2002] – Hamburger
  • Garry’s Mod [2004] – Burger
  • Mario Kart DS [2005] – Luigi’s Cap
  • Dead Rising [2006] – Cardboard Box
  • Little Big Planet [2008] – Baseball Cap
  • Nintendogs Cats [2011] – Cardboard Box, Paper Bag
  • The Lab [2016] – Cardboard Box
  • Animal Crossing Pocket Camp [2017] – Paper Bag

I decided to use photogrammetry to bring physical objects as seamlessly as I could into virtual space. I wanted to contextualize these artifacts which had been stripped from their intended contexts alongside virtual objects which we might consider to be “more real” due to being representations of photography, rather than abstract visualizations.

I think that this collection of hamburgers is the most jarring, as it frames a convincing representation of something commonly eaten and digested as the plastic, rigidly designed product it truly is. The bun of this McDonald’s double cheeseburger was stamped by a ring out of a sheet of dough, the patty was stamped into a circle by another mold, and the cheese was cut or sliced into a regular square – all automated processes of physical manufacturing done with the intent of assembling this product for somebody’s enjoyment.

I noted that many of the examples I found were mostly or entirely symmetrical. I think this symmetry reflects our physical ideals of the products we interact with and consume. A virtual object can exist in an absolutely defined state, rather than as an expression of that definition which is only similar enough within manufacturing tolerances. These manufacturing tolerances exist in most of the objects we interact with, and even in the food we consume. We seem to use symmetry to indicate the intentionality of an object’s existence – a device that, coincidentally or not, nature also uses in making most organic bodies. When a video game character is designed asymmetrically, it is usually done deliberately, to call attention to some internal imbalance or juxtaposition.

In retrospect, the disruptions of the photogrammetry – especially in the empty box model – seemed to manifest the essence of our fleeting, tangible perceptions of certain objects compared to others. The items that served as packaging for a product, to be discarded or recycled, are missing information as if easily forgotten.


Sciurious

My typology machine is a system for capturing ‘human speed’ videos of squirrels in my backyard interacting with sets, allowing their behaviors to be anthropomorphized in a humorous and uncanny manner. It was important to me that the sets and the animals’ interactions felt natural but off, playing with the line between reality and fiction.

I used the Edgertronic high-frame-rate camera to capture these images, and over the course of 4 weeks I trained the squirrels in my yard to repeatedly come to a specific location for food.

Initial questions: What lives in my backyard? What brings them out of their holes? And can I get them to have dinner with me?

Inspiration

Typology

Project link: https://www.lumibarron.com/sciuridaes
instagram @sciuriouser_and_sciuriouser

Process

Bloopers:      https://www.lumibarron.com/copy-of-sciuridaes

Sets Images:

Set Up:

Observation window setup – a 100ft Ethernet cable leads from the camera in the greenhouse to a laptop, with the image projected onto an external monitor for a larger view and to allow other work to be done while waiting for the squirrels to make their way to the table.

An initial table setup – while still attempting to get the squirrels to sit in chairs. I soon realized that this was not feasible. Squirrels are incredibly stretchy.

Challenges:

One of the greatest challenges arose from the Edgertronic’s limits on recording time and the speed of processing/saving to the memory card. At full resolution (1280×1280) and 400 fps, the longest possible recording time capped out at 12.3 seconds (presumably the camera’s internal buffer filling up: 1280 × 1280 pixels × 400 fps × 12.3 s is roughly 8 × 10⁹ pixel-samples, about 8 GB at one byte per pixel). The processing time for this footage then took just over 5 minutes. This heavily limited the amount of footage or interaction captured, as often, once the video was done saving to memory, the squirrel was long gone with much of my tiny china in tow.

I ended up setting the image capture time to a shorter amount – approx. 2 seconds. This made for a significantly faster processing time and more opportunities to capture multiple shots of a squirrel’s interaction with the set. It also meant, however, that if an interaction was captured too soon or too late, the interesting, looked-for moment was often cropped short or missed entirely. With this camera, a motion-tracking trigger system would not have been a useful tool for this project.

Another limitation/challenge was the dependency on very bright light for good images with high-frame-rate recording. Clouds, indirect sunlight, shadows from trees, or too little sunlight all significantly darkened the images. I was not able to find a setup for the external light that both kept it safe from the elements for an extended time and successfully lit the set. Footage had to be taken in a limited time window, and the luck of the weather played a very large part in the success of the images.

Initial ideation:

Project Continued

This is a project that I will continue to work on throughout the semester for as long as the camera is available. The squirrel feed has recently attracted deer, and I would like to begin to play with sets for different backyard creatures at different frame rates (from insects to deer, chipmunks, and groundhogs). This is a project that I have enjoyed immensely, and I have set up a system that allows me to maintain my other work activities while simultaneously observing the animals.

24 Hour Thermal Time Lapse

24 Hour Thermal Time Lapse captures the thermal flow of spaces people actively inhabit and presents the 24-hour-long findings in 2-minute compressions, revealing movement that would otherwise be invisible across both time and temperature.

Joseph Amodei
MFA Video/Media Design in the School of Drama


My project was a process of taking the technology of thermal capture and employing it over an entire day to create time-lapse observations of spaces full of human activity. I often begin my work with a specific, pointed conceptual and political outcome; here I wanted to disrupt my normal way of working by foregrounding a capture process that observed spaces of interaction and showed them in a manner that would not otherwise be visible – through the movement of temperature, and through the cycle of an entire day compressed into two minutes. The spaces I captured were a costume production shop, a design studio, KLVN coffee shop, and the space outside of my studio window, which is a street on a bus line on the edge of Pittsburgh and Wilkinsburg.

Thermal Capture System at KLVN Coffee:


My Thermal Time Lapse Capture System:

Thermal Capture System

My process developed through the technical hurdles and limitations of discovering how to capture a space over 24 hours. I used the FLIR E30bx thermal camera in conjunction with the media design software Millumin to stream the footage into a Mac Mini, and from there I recorded the output. This meant I needed my setup to be connected to constant power and to sit in a fixed position, which limited my work to spaces where I could safely leave such a capture system running undisturbed, in a safe and dry location.
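
The compression itself is just arithmetic: 24 hours into 2 minutes is a 720× speed-up, i.e. keeping one frame out of every 720 (at 30 fps output, one frame per 24 seconds of real time). A sketch of that subsampling with OpenCV, separate from the Millumin capture chain I actually used:

```python
# Sketch: compress a long recording 720x (24 h -> 2 min) by keeping
# every 720th frame. This is separate from the Millumin chain I used.
import cv2

SPEEDUP = 720

cap = cv2.VideoCapture("thermal_24h.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("thermal_2min.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

i = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if i % SPEEDUP == 0:
        out.write(frame)
    i += 1
out.release()
```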

In the end I think these images are somewhat interesting, and they are not visible without the machine situation I arranged. When I consider remaining opportunities for this project, I think I could record a narrower, more pointed set of places. I also would have loved to figure out how to record outdoors, too. To that end, picking a process/subject that ran the risk of being uninteresting was part of my goal for myself, and I am happy I took the time to explore thermal imaging, time lapse, and the potentially banal.