Person In Time WIP

My current plan is to create an installation of a box with a peephole and a circular plane mounted on top. When users place their eye in front of the peephole, a proximity sensor will detect their presence and send the data to my Unity project. Unity will receive the signal when someone is at the peephole, then record a video of their eye and capture the blinking.
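Since the whole interaction hinges on the sensor handoff, here's a minimal sketch of how the signal could reach Unity, assuming an Arduino-style board printing distance readings over serial and Unity listening on a local UDP port (the port names, threshold, and message strings are all placeholder assumptions, not a finished design):

```python
# Hypothetical sensor-to-Unity bridge: read distances from a serial
# port, send a UDP message to Unity only when presence changes.
import socket
import serial

SENSOR_PORT = "/dev/ttyUSB0"        # placeholder serial port
UNITY_ADDR = ("127.0.0.1", 5005)    # UDP port Unity listens on
THRESHOLD_MM = 50                   # "eye at the peephole" distance

ser = serial.Serial(SENSOR_PORT, 9600, timeout=1)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

present = False
while True:
    line = ser.readline().strip()
    try:
        distance = int(line)        # sensor prints distance in mm
    except ValueError:
        continue                    # skip startup noise/partial lines
    near = distance < THRESHOLD_MM
    if near != present:             # only message on state changes
        present = near
        msg = b"EYE_PRESENT" if near else b"EYE_GONE"
        sock.sendto(msg, UNITY_ADDR)  # Unity starts/stops eye recording
```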

The Unity project will project the captured eye videos onto the circular plane in real time, arranging them into a collage that forms a circular shape. The eyes will blink in sync every second, mimicking the movement of a clock’s second hand.

Quiddity Proposal

For this assignment, I’d like to create a website selling some kind of stereoscopic footage or other “micro-media” I’ve captured of micro worlds, or worlds happening at timescales other than ours, with a bit of a twist.

When users buy, they are taken through several steps that highlight non-human (including what we consider living and non-living under non-animist frameworks) and human labor processes. As a simple example:

This is a video clip of a small worm wriggling in some kind of soil substrate. The final price might include:

1 min x $20/60 min worm acting fee = $0.33 (fee is arbitrary at the moment, but I will justify it somehow for the completed project, though there are obvious complications/tensions around the determination of this “price”)

1 min x $20/60 min soil modeling fee = $0.33

5 min x $20/60 min computer runtime fee = $1.67

5 min x $20/60 min artist labor fee = $1.67

Etc
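To make the arithmetic concrete, here's a toy version of the line-item calculator (the flat $20/hour rate and the durations are placeholders carried over from the example above, not settled prices):

```python
# Toy line-item pricing for one clip, assuming a flat $20/hour rate
# for every participant. Rates and durations are placeholder values.
HOURLY_RATE = 20.0

line_items = [
    ("worm acting fee", 1),       # minutes of worm screen time
    ("soil modeling fee", 1),
    ("computer runtime fee", 5),
    ("artist labor fee", 5),
]

total = 0.0
for label, minutes in line_items:
    cost = minutes * HOURLY_RATE / 60
    total += cost
    print(f"{label}: {minutes} min x ${HOURLY_RATE}/60 min = ${cost:.2f}")
print(f"total: ${total:.2f}")  # -> total: $4.00 for this example
```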

The calculation of costs will help bring attention to aspects of the capture process that people might not normally think about (e.g. are we mindlessly stepping into another’s world?), but I think care needs to be taken with the tone.

Since some of the “participants” can’t be “paid” directly, we mention to buyers that this portion of the cost will be allocated to e.g. some organization that does work related to the area. For instance, the worm/soil might link to a conservation organization in the area. The computer runtime step might link to an organization that relates to the kinds of materials extracted to make computers and the human laborers involved in that process (e.g. Anatomy of an AI). There will also be information about the different paid participants (e.g. information about the ecosystem/life in the ecosystem the video was filmed in, something of an artist bio for myself in relation to the artist labor).

I will aim for final prices that make sense for actual purchase, as a way of potentially raising real money for these organizations. If the totaled labor costs come out too high, I will probably offer a coupon/sale.

To avoid spamming these organizations with small payments, the payments will be allocated into a fund that gets donated periodically.

Besides the nature of the footage/materials sold, a large part of this project would be thinking/researching about how I actually derive some of these prices, which organizations to donate to, and the different types of labor that have gone into the production of the media I’m offering for sale.

Background from “existing work” and design inspirations:

Certain websites mention “transparent pricing”

Source: Everlane 

Other websites mention that X amount is donated to some cause:

I’m also thinking of spoofing some kind of well-known commerce platform (e.g. Amazon). One goal is to challenge the way these platforms promote a race to the bottom in pricing in ways completely detached from the original materials and labor. If spoofing Amazon, for instance, instead of “Buy Now,” there will be a sign that says “Buy Slowly.”  

Nica had mentioned the possible adjacencies to sites selling pornography (where else do you buy and collect little clips like that?) and NFTs. And in considering this project, I’m also reminded of the cabinet of curiosities. Ultimately, all of these (including stereoscopy) touch on a voyeuristic desire to look at and own.

What I initially had in mind for this project was a kiosk in an exhibition space where visitors can buy art/merchandise. I’m still thinking about how to make the content of the media more relevant to the way in which I want to present it, so I’m open to suggestions/critical feedback!! I think there are a couple of core threads I want to offer for reflection:

1. Desire to look and own, especially in a more “art world” context (the goal would be to actually offer the website/art objects in an exhibition context to generate sales for donations). What would generate value/sales? Would something physical be better (e.g. Golan had mentioned View-Master disks)?

2. Unacknowledged labor, including of the non-human

3. Building in the possibility for fundraising and general education. Thanks!

Person In Time WIP

Topic 1: Connection – Digital Mirror in Time

A website acting as a mirror as well as a temporal bridge between its visitors. This real-time website, tentatively named “Digital Mirror in Time”, aims to explore shared human expressions and interactions over time using AI technologies like the Google AI Edge API, MediaPipe Tasks, and Face Landmark Detection. The project transforms your computer’s camera into a digital mirror, capturing and storing facial data and expression points in real time.

Inspiration: Sharing Face. Visiting the installation at either location would match your expression and pose in real time with photos of someone else who once stood in front of the installation. Thousands of people visited the work and saw themselves reflected in the face of another person.

Q&A: Can I get exactly the 478 landmark results? Currently, in my experiments, I can only get face_blendshapes as output.
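For what it's worth, in the Python flavor of the MediaPipe Tasks API (the web JS API is organized the same way), the per-point landmarks arrive in result.face_landmarks regardless of whether blendshapes are enabled; a minimal sketch, with the model and image paths as placeholders:

```python
# Minimal FaceLandmarker sketch: landmarks and blendshapes are separate
# fields on the result. "face_landmarker.task" is Google's downloadable
# model bundle; the image path is a placeholder.
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

base_options = python.BaseOptions(model_asset_path="face_landmarker.task")
options = vision.FaceLandmarkerOptions(
    base_options=base_options,
    output_face_blendshapes=True,  # blendshapes do not replace landmarks
    num_faces=1,
)
detector = vision.FaceLandmarker.create_from_options(options)

image = mp.Image.create_from_file("frame.png")  # one camera frame
result = detector.detect(image)

landmarks = result.face_landmarks[0]  # 478 normalized (x, y, z) points
print(len(landmarks))                 # -> 478
blendshapes = result.face_blendshapes[0]  # named expression scores
```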

  1. Facial Data Capture:
    The website uses the computer’s camera to detect a user’s face, leveraging the MediaPipe Face Landmark Detection model. This model identifies key facial landmarks (such as eyes, nose, mouth, etc.), storing this data along with the corresponding positions on the screen.
  2. Expression Storage (potential: Firebase)
    The user’s facial expressions are stored in a premade database, including information like facial positions, angles, and specific expressions (smiling, frowning, etc.). This creates a digital archive of faces and expressions over time.
  3. Facial Expression Matching and Dynamic Interaction (potential: Next.js + Prisma)

When a new user visits the website, their live camera feed is processed in the same way, and the system searches the database for expressions that match the current facial landmarks. When a match is found, the historical expression is retrieved and displayed on the screen, overlaid in the exact position.
This creates an interactive experience where users not only see their own reflection but also discover others’ expressions from different times, creating a temporal bridge between users. The website acts as a shared space where facial expressions transcend individual moments.
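One plausible way to implement the matching step, wherever the archive lives (Firebase or otherwise): flatten each face's 478 landmarks into a vector and find the nearest stored expression. This is a brute-force sketch; a real build would also normalize for head pose and scale, and use a proper index instead of scanning every row:

```python
# Hypothetical expression matching: nearest neighbor over flattened
# landmark vectors (478 points x 3 coords = 1434 dims per face).
import numpy as np

def to_vector(landmarks):
    """Flatten 478 normalized landmarks into one 1434-dim vector."""
    return np.array([c for p in landmarks for c in (p.x, p.y, p.z)])

def best_match(query_landmarks, stored_vectors):
    """Index of the stored expression closest to the live face.

    stored_vectors: (N, 1434) array of previously archived faces.
    """
    q = to_vector(query_landmarks)
    dists = np.linalg.norm(stored_vectors - q, axis=1)
    return int(np.argmin(dists))
```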

Concept: We often perceive the images of people on our screens as cold and devoid of warmth. This project explores whether we can simulate the sensation of touch between people through a screen by combining visual input and haptic feedback.

  1. Using a Hand Landmarker API, the system recognizes and tracks the back of the user’s hand in front of a camera. The user places their palm on a Sensel Morph (or a similar device) that captures pressure data, creating a heatmap of the touch.
  2. The pressure data is then stored in a database, linked to the visual representation of the hand. Implement algorithms to match the hands of future users with those previously recorded, based on hand shape and position.
  3. When another user places their hand in the same position on the screen, the system matches their hand’s position and visual similarity to the previous user. Display the hand pressure heatmap on the screen when a matching hand is detected, simulating the sensation of touch visually.
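For the matching in steps 2 and 3, a hedged sketch of pose-invariant hand comparison, assuming MediaPipe's 21 hand landmarks (the Sensel heatmap itself would just be stored alongside and re-displayed when a match is found):

```python
# Hypothetical hand comparison using MediaPipe's 21-point hand
# landmarks: center on the wrist and scale by hand size, so two hands
# match on shape and pose rather than raw pixel position.
import numpy as np

def normalize_hand(landmarks):
    """21 landmarks -> (21, 2) array, wrist at origin, unit hand size."""
    pts = np.array([(p.x, p.y) for p in landmarks])
    pts -= pts[0]                    # landmark 0 is the wrist
    scale = np.linalg.norm(pts[9])   # wrist to middle-finger knuckle
    return pts / scale if scale > 0 else pts

def hand_distance(a, b):
    """Smaller means the two hands have more similar shapes/poses."""
    return np.linalg.norm(normalize_hand(a) - normalize_hand(b))
```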

Topic 2: Broken and Distorted Portrait

What I find interesting about this theme is the distorted portrait. I thought of combining it with sound. As water is added to a cup, the refraction changes, and the portrait seen through it changes too. At the same time, the sound of striking the container can be recorded.

Birds or Bio Drawing – Marc

My first idea is to do light painting with my 3D printer. By sending direct G-code instructions, I can direct my printer to move around in 3D space, and I can capture that motion over time with long exposure. For my light source, I originally wanted to use a firefly, but getting fireflies or glow worms is unethical, so I can’t do that. Instead I was thinking about using motion-sensitive, glow-in-the-dark algae.
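Driving the printer along an arbitrary path is the straightforward part; here's a minimal sketch of streaming G-code over USB serial, assuming a Marlin-style firmware that acknowledges each line with “ok” (port, baud rate, coordinates, and feed rate are placeholders):

```python
# Hypothetical G-code streaming for a light-painting path. Assumes a
# Marlin-style firmware that replies "ok" after each accepted line.
import time
import serial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)
time.sleep(2)  # many boards reset on connect; let the firmware boot

def send(gcode):
    ser.write((gcode + "\n").encode())
    while True:
        reply = ser.readline().strip()
        if reply.startswith(b"ok"):   # wait for the firmware ack
            break

send("G28")   # home all axes
send("G90")   # absolute positioning
# Trace a slow square at z = 50 mm while the camera shutter is open.
for x, y in [(50, 50), (150, 50), (150, 150), (50, 150), (50, 50)]:
    send(f"G1 X{x} Y{y} Z50 F1200")
```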

- Is this feasible?
- Will it be sufficiently disturbed to glow?

- Is this still a “creature in time”?
- My thought here is that as the algae’s glow reacts to being disturbed, the feedback between the machine moving it and its bio-glow is the quiddity.

- Could a different light source be better?


My second idea for my project is to use a webcam and OpenCV or other software to detect birds outside my window. When it does, I will record the location of the birds with my AxiDraw (rough sketch after the questions below).

- How should I represent the birds?
- Trace their motion?
- Draw them at a single frame in an abstract style?

- What software can I use to detect birds?
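On the software question: as a hedged starting point, plain OpenCV background subtraction can stand in for a real bird detector (a model actually trained on birds would be a drop-in upgrade), with each detection dotted onto paper through the AxiDraw interactive API. The camera index, area threshold, and paper size are assumptions:

```python
# Sketch: motion blobs from a window-facing webcam, dotted onto paper
# with the AxiDraw interactive API. MOG2 is a crude stand-in for a
# trained bird detector; thresholds and paper size are placeholders.
import cv2
from pyaxidraw import axidraw

ad = axidraw.AxiDraw()
ad.interactive()
ad.connect()

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
PAPER_W, PAPER_H = 8.0, 5.0  # drawing area in inches

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    h, w = frame.shape[:2]
    for c in contours:
        if cv2.contourArea(c) < 500:  # skip small flutter/noise
            continue
        x, y, bw, bh = cv2.boundingRect(c)
        cx, cy = (x + bw / 2) / w, (y + bh / 2) / h
        ad.moveto(cx * PAPER_W, cy * PAPER_H)         # pen up, travel
        ad.lineto(cx * PAPER_W + 0.02, cy * PAPER_H)  # tiny dot

ad.disconnect()
```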

Person in Time WIP – A Portrait of Rumination

For this project, I’m really interested in creating a self portrait that captures my experience living with mental illness.

I ended up being really inspired by the Ekman Lie Detector that Golan showed us in class. While I’m not convinced by his application, I do think there’s a lot to be learned from people’s microexpressions.

I was also deeply inspired by a close friend of mine, who was hospitalized during an episode of OCD-induced psychosis earlier this year. She shared her experience publicly on Instagram, and watching her own and be proud of her story felt like she lifted a weight off of my own chest. I hope that by sharing and talking about my experience, I might lift that weight off of someone else, and perhaps help myself in the process.

Throughout my life I’ve struggled with severe mental illness. Only recently have I found a treatment that is effective, but it’s not infallible. While lately I’ve been functional and generally happy, I would say I still spend on average 2-3 hours each day ruminating in anxiety and negative self talk, despite my best efforts. These thought patterns are fed by secrecy, embarrassment, and shame, so I would really like to start taking back my own power, and being open about what I’m going through, even if it’s really hard for me to tell a bunch of relative strangers something that is still so taboo in our culture.

So, personal reasons aside, I think it would be really interesting to use the high speed camera to take portrait(s) of me in the middle of ruminating, and potentially contrast that with a portrait of me when I’m feeling positive. Throughout my life folks have commented on how easy it is to know my exact feelings about a subject without even asking me, just based off my facial expressions, because I wear my feelings on my sleeve (despite my best efforts!). I’ve tried neutralizing my expressions in the past, but I’ve never really been successful, so I’m hoping that’s a quality that will come in handy while making this project. If being overly emotive is a flaw, I plan to use this project to turn it into a superpower.

I’ve also contemplated using biometric data to supplement my findings, like a heart rate or breathing monitor, but I’m not totally married to that idea yet. I think the high speed camera might be enough on its own, but the physiological data could be a useful addition.


3DGS stopmotion/timelapse

TLDR: Using the UArm 5, capture photos to make an animated, interactive 3D splat of a mound of kinetic sand/play doh that is manipulated by a person or group of people.

The above video has the qualities of interactivity and animation that I’d like to achieve with this project.

Current workflow draft:

  1. Connect camera to robot arm and laptop.
  2. Write a Python script that moves the robot arm in a set path (recording the coordinates of the camera/end effector) that loops at a set time interval. Every execution of the path results in ~180 photos per new mound (3 camera heights, one photo every 6 degrees, i.e. 60 photos per revolution) that will then be turned into a splat; see the waypoint sketch after this list.
  3. Conduct a first test of animation/training data by pinching some sand/playdoh, collecting images for 5 splats. Write a Python script to automatically run all 5 splats overnight.
  4. Come back the next morning and check for failure. If there’s no failure and I have 5 splats, (here’s the hard part) align all splats and create a viewer that loops these “3D frames” and allows the audience to interact with the camera POV. Ways I think I could align each “3D frame” and build a viewer that plays all the frames:
    1. Unity?
    2. Writing code to do this (idk how tho)
    3. Ask for help from someone with more experience
  5. If the above step is successful, ask people to swing by the SoCI to participate in a group stop motion project. I’ll probably set constraints on what people can do to the mound, most likely restricting the change to a single-digit number of pinches, flattenings, etc.
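Here's the waypoint sketch referenced in step 2: generating the 180 camera stops per mound. The radius and heights are placeholder numbers, and the actual move/shutter calls depend on the arm's SDK:

```python
# Hypothetical capture path: 3 camera heights x 60 azimuth stops
# (one every 6 degrees) = 180 waypoints around the mound per splat.
import math

RADIUS_MM = 250                 # distance from mound center (placeholder)
HEIGHTS_MM = [120, 200, 280]    # three camera elevations (placeholder)
STEP_DEG = 6

waypoints = []
for z in HEIGHTS_MM:
    for deg in range(0, 360, STEP_DEG):
        rad = math.radians(deg)
        waypoints.append((RADIUS_MM * math.cos(rad),
                          RADIUS_MM * math.sin(rad),
                          z))

print(len(waypoints))  # -> 180
# For each waypoint: move the arm there, aim the camera at the mound
# center, trigger the shutter, then continue to the next stop.
```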

I am very very very open to ways to simplify this workflow. Basically, distilling this idea even further while preserving the core ideas of time flow in 3D work (aka a 4D result).

Slightly less open to changes in concept. I’m kinda set on attempting to figure out how to use the robot and piecing together splats to make a stop motion animation, so the process and result are kinda set. I’m a little unsure if this concept of “people manipulating a mound” even fits this “person in time” theme, but I’m open to ideas/thoughts/opinions/concepts that aren’t too difficult.

edit: should I capture people’s nails? like shiny sparkly nail art?

Person in time WIP — smalling (?)

Smalling/being smalled… the title is a WIP – the way the body contorts when centered on itself. This is to say, there’s a focus on the motion of making oneself physically small, and on that motion as one that can only occur when one is very much aware of their own body.

Capturing people (or myself) while physically making oneself small — either naturally (due to environmental needs or lack of awareness), or artificially (forcing oneself into a small space intentionally).

Not sure what the output is here; I have a few ideas:

  • smallness as form and a formal quality of an object or person. This would probably be me shoving myself into tight corners, shot with a nicer camera setup.
  • smallness as a kind of game? I don’t think I want it to be too game-y, but image segmentation is an interesting option, and I talked very briefly to Leo and Golan about attempting to measure in real time how small someone has made themselves or how much their form has reduced (rough sketch below). I would prefer not having to solely photograph myself, so I’m leaning towards this.
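The measurement itself could be as simple as the fraction of the frame covered by a person-segmentation mask. A minimal sketch using MediaPipe's selfie segmentation as a stand-in for whatever segmenter Leo's demo used (this assumes a fixed camera, since stepping closer or farther would change the score):

```python
# Rough "smallness" metric: share of the frame occupied by the person
# mask. MediaPipe selfie segmentation is a stand-in segmenter here.
import cv2
import mediapipe as mp

segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(
    model_selection=1)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    mask = segmenter.process(rgb).segmentation_mask  # float per pixel
    person_fraction = float((mask > 0.5).mean())
    # Lower person_fraction = you have made yourself smaller in frame.
    print(f"frame coverage: {person_fraction:.3f}")
```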

Some semi-successful (and mostly not great or all that pretty) examples of both of the above options:

Detection of small body on goofy prefabbed segmentation thing Leo sent me:

That, but in video: SORRY I DONT KNOW WHY MY NORMAL GIF SITES AREN’T WORKING

Ok. Now photos:

Person in Time proposal

Seconds, minutes, hours. Familiar units of time are arbitrarily defined. The experience of time is different.

In the project:

1. Capture different movements of the body (breath, blinking, pacing, heartbeats) as the units of time.

blinking: small camera mounted in front of the head

walking: similar but maybe on legs

breath: rise and fall of a capture device while lying down

heartbeat: ultrasound

(Revealing Invisible Changes in the World)
(Pulse Room)

2. Software component: finding timestamps of the actions and slicing + assembling accordingly (e.g. clips would start blinking at the same time: a page of eyes blinking in unison, but with different intervals). See the blink-detection sketch below.

3. Open to other ways of assembling

The time of different body parts, synced.
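For the timestamp-finding in the software component, one standard approach to blink detection is the eye aspect ratio (EAR) computed from six landmarks around each eye; a sketch (the 0.2 threshold is a common default, not a calibrated value):

```python
# Blink timestamps via eye aspect ratio (EAR): the ratio of vertical
# to horizontal eye-landmark distances collapses when the eye closes.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) points around one eye, in standard EAR order."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def blink_starts(ear_per_frame, threshold=0.2):
    """Frame indices where a blink begins (EAR drops below threshold).

    Clips can then be sliced/shifted so everyone's first blink aligns.
    """
    closed = np.asarray(ear_per_frame) < threshold
    return np.where(~closed[:-1] & closed[1:])[0] + 1
```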

Outwards from June – Report 4

Approaches to temporal capture and portraiture that interest me

01 NYT ATHLETE PORTRAITS

I’ve enjoyed NYT articles in recent years that show, freeze, and analyze an athlete’s movements over time. Here are two articles about Olympic runners, titled “How Noah Lyles Won the Men’s 100-Meter Gold by a Fraction” and “How Julien Alfred Beat Sha’Carri Richardson for Gold.” I like these pieces because they freeze, analyze, and animate a person. I think this could be fun to do on a subject who is not a star athlete and not on a race track. What if we took this serious analytical approach to the movements of common people (non-Olympic athletes)? I think we could make something interesting and fun! I also like that these articles present line graphs.


I also enjoy “Richarlison, Messi and Pulisic: Three Stunning Goals Frozen in Time” as a piece that not only shows and stops goals in a soccer game, but also allows us to quickly pan around a 3D view of that critical moment and understand it in a way that is impossible for a spectator in real time. I do like soccer, but I think we could isolate and analyze other human, or maybe even animal movements (or plant movements?) in this way and find something really interesting.

02 NICHOLAS FELTON’S 2010 ANNUAL REPORT

I was incredibly moved by Nicholas Felton’s careful crafting with various forms of data to tell a story about his father, who had passed away the previous year, in an “annual report.” On a 99% Invisible episode, Felton describes his information design work and the annual reports he creates. “He took 4,348 of his father’s personal records and created an intimate portrait of a man, using only the data he left behind.” I love this project because the subject is so personal and the presentation is so clear.

Overall, I think Felton’s other annual reports are versions of a person in time (himself usually, over the course of a year). It was interesting to learn how the collection of data for these reports influenced and perhaps interfered with Felton’s life. Though maybe not as extreme, it reminded me of Tehching Hsieh’s One Year Performance 1980–1981 (Time Clock Piece).

03 LOOPING TIME

I really loved Naren Wilks’ One Man, Eight Cameras, because it shows how a person and their movements can be rearranged and synchronized, especially to some kind of music (for what purpose, I’m not sure, but it’s so pleasing to see). Here we get to take a subject and the path of their movements as a material. Then we get to see how we can array and recombine those movements. Time loops are often explored in literature and cinema, and I think they are very interesting to consider. Sometimes our lives feel like time loops.

In landscape architecture, I think one of the big challenges is to intentionally choreograph possibilities for change and movement within a setting. People try to choreograph the bloom times of plants for example, but you could also try to choreograph people and their movements in space. This makes me think of the landscape architect Lawrence Halprin, and his wife, the dancer and choreographer Anna Halprin. Here’s an interview with the couple where Anna Halprin talks about her study of the nature of movement and the nature of the human body.