VW Person in Time WIP

Take my heart-rate data live from a Fitbit. Find some things in my house that tick. Have the house tick in sync with my heartbeat.

Expanding:

I already have a Fitbit, and I'm in progress getting the API to work. I'm pretty sure I can get data at the rate I'd want. If not, I can give up on it and use another heart-rate monitor (Arduino, whatever).
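For concreteness, a minimal sketch of the kind of polling I have in mind, assuming a registered Fitbit app with an OAuth2 access token (the token is a placeholder until the API access actually works):

```python
# Minimal sketch, assuming a registered Fitbit app with an OAuth2 access
# token (the token is a placeholder). Pulls today's intraday heart-rate
# series at 1-second detail and turns the latest reading into a tick rate.
import requests

ACCESS_TOKEN = "..."  # placeholder; comes out of Fitbit's OAuth2 flow

resp = requests.get(
    "https://api.fitbit.com/1/user/-/activities/heart/date/today/1d/1sec.json",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
dataset = resp.json()["activities-heart-intraday"]["dataset"]

if dataset:
    bpm = dataset[-1]["value"]  # most recently synced sample
    print(f"{bpm} bpm -> tick every {60 / bpm:.2f} s")
```

One caveat: intraday data is only as fresh as the watch's last sync, so if that lag kills the live feeling, the other-monitor route is the fallback.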

The heart rate could also be swapped for breathing.

“Tick” means like: clock hand, beep, water drip, quick power on/off.

I will have to experiment to find out what I can make tick. I am allowed to be selective about what household appliances/subjects are incorporated. The end result does not have to be “my house, 100% accurate to how it is everyday”, it can be staged.

The end result of this is an experience for me to be in. That will probably manifest as video documentation for class. I imagine I would be in the ticking space, because I imagine I would respond to hearing my own heart rate. An alt (not for class) version would be an actual installation of a ticking room that runs constantly, with me not there.

The original idea involved sleep/REM data and dreamwalking.

Alt: Getting a UV light and sunbleaching my shadow into a piece of paper for like a hundred-something hours. I just think it'd be fun to make that much time translate into something really stupidly subtle.

Alt alt: Grid of office cubicles. In each cubicle is one chair, a controller to pan/zoom/tilt a ceiling-mounted camera, and a monitor displaying that camera’s feed. IDK office-core panopticon.

 

Work in Progress

People’s interactions in a room


Smokers Allowed reference: In the "Smokers Allowed" episode of Nathan For You, Nathan Fielder helps a struggling bar owner increase business by creating a designated smoking area inside the bar, circumventing smoking laws. His solution involves reclassifying the bar as a "theatrical performance" in which smoking is allowed as part of the act. Patrons are told they are watching a play, though it's just regular bar activity. The episode escalates when Fielder decides to refine this into an art piece, staging intricate performances based on the footage recorded during the regular bar activity, down to the detail of the patrons' conversations, under the guise of theater.

People in a room / everything happening at the same time (being able to zoom into people's private interactions: adjusting volume levels, highlighting certain areas)

play intermissions, retail store, orchestra/band?

Quiddity Proposal

For this assignment, I'd like to create a website selling some kind of stereoscopic footage or other "micro-media" I've captured of micro worlds, worlds happening at timescales other than ours, with a bit of a twist.

When users buy, they are taken through several steps that highlight non-human (including what we consider living and non-living under non-animist frameworks) and human labor processes. As a simple example:

Say this is a clip of a small worm wriggling in some kind of soil substrate. The final price might include:

1 min x $20/60 min worm acting fee = $0.33 (fee is arbitrary at the moment, but I will justify it somehow for the completed project, though there are obvious complications/tensions around the determination of this “price”)

1 min x $20/60 min soil modeling fee = $0.33

5 min x $20/60 min computer runtime fee = $1.67

5 min x $20/60 min artist labor fee

Etc
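A toy version of the line-item math, for concreteness; the $20/hr rate and the fee labels are the placeholders from the example above:

```python
# Toy sketch of the itemized pricing above. The $20/hr rate and the fee
# labels are placeholders; the real fees still need justification.
RATE_PER_HOUR = 20.0

def fee(minutes, label, rate=RATE_PER_HOUR):
    """Return a (label, cost) line item billed by the minute."""
    return label, round(minutes * rate / 60, 2)

line_items = [
    fee(1, "worm acting fee"),       # $0.33
    fee(1, "soil modeling fee"),     # $0.33
    fee(5, "computer runtime fee"),  # $1.67
    fee(5, "artist labor fee"),      # $1.67
]

for label, cost in line_items:
    print(f"{label}: ${cost:.2f}")
print(f"total: ${sum(cost for _, cost in line_items):.2f}")  # $4.00
```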

The calculation of costs will help bring attention to aspects of the capture process that people might not normally think about (e.g. are we mindlessly stepping into another's world?), but I think care needs to be taken with the tone.

Since some of the “participants” can’t be “paid” directly, we mention to buyers that this portion of the cost will be allocated to e.g. some organization that does work related to the area. For instance, the worm/soil might link to a conservation organization in the area. The computer runtime step might link to an organization that relates to the kinds of materials extracted to make computers and the human laborers involved in that process (e.g. Anatomy of an AI). There will also be information about the different paid participants (e.g. information about the ecosystem/life in the ecosystem the video was filmed in, something of an artist bio for myself in relation to the artist labor).

I will aim for final prices that make sense for actual purchase, as a way of potentially actually raising money for these organizations. If the final totaled labor costs come out too high, I will probably provide a coupon/sale.

To avoid spamming these organizations with small payments, the payments will be allocated into a fund that gets donated periodically.

Besides the nature of the footage/materials sold, a large part of this project would be thinking/researching about how I actually derive some of these prices, which organizations to donate to, and the different types of labor that have gone into the production of the media I’m offering for sale.

Background from “existing work” and design inspirations:

Certain websites mention “transparent pricing”

Source: Everlane 

Other websites mention that X amount is donated to some cause:

I’m also thinking of spoofing some kind of well-known commerce platform (e.g. Amazon). One goal is to challenge the way these platforms promote a race to the bottom in pricing in ways completely detached from the original materials and labor. If spoofing Amazon, for instance, instead of “Buy Now,” there will be a sign that says “Buy Slowly.”  

Nica had mentioned the possible adjacencies to sites selling pornography (where else do you buy and collect little clips like that) and NFTs. And in considering this project, I’m also reminded of the cabinet of curiosities. Ultimately, all of these (including stereoscopy) touch on a voyeuristic desire to look at and own.

What I initially had in mind for this project was a kiosk in an exhibition space where visitors can buy art/merchandise. I’m still thinking about how to make the content of the media more relevant to the way in which I want to present it, so open to suggestions/critical feedback!! I think there are a couple core threads I want to offer for reflection:

1. Desire to look and own, especially in a more "art world" context (the goal would be to actually offer the website/art objects in an exhibition context to generate sales for donations). What would generate value/sales? Would something physical be better (e.g. Golan had mentioned View-Master disks)?

2. Unacknowledged labor, including of the non-human

3. Building in the possibility for fundraising and general education. Thanks!

3DGS stopmotion/timelapse

TLDR: Using the UArm 5, capture photos to make an animated, interactive 3D splat of a mound of kinetic sand/play doh that is manipulated by a person or group of people.

The above video has the qualities of interactivity and animation that I’d like to achieve with this project.

Current workflow draft:

  1. Connect camera to robot arm and laptop.
  2. Write a Python script that moves the robot arm along a set path (recording the coordinates of the camera/end effector) and loops at a set time interval. Every execution of the path results in ~180 photos (3 angles, a photo taken every 6 degrees) per new mound state, which will then be turned into a splat. (A rough sketch of this capture loop follows the list.)
  3. Collect a first test animation/training set by pinching some sand/playdoh, capturing images for 5 splats. Write a Python script to automatically process all 5 splats overnight.
  4. Come back the next morning and check for failures. If there are none and I have 5 splats, (here's the hard part) align all the splats and create a viewer that loops these "3D frames" and allows the audience to control the camera POV. Ways I think I can align each "3D frame" and build a viewer that plays all the frames:
    1. Unity?
    2. Writing code to do this (idk how tho)
    3. Ask for help from someone with more experience
  5. If the above step is successful, ask people to swing by the SoCI to participate in a group stop-motion project. I'll probably set constraints on what people can do to the mound, most likely restricting each change to a single-digit number of pinches, flattenings, etc.
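The promised sketch of the step-2 capture loop. The `arm.move_to()` call is a hypothetical stand-in for whatever the UArm 5's Python SDK actually exposes, the camera trigger shells out to the gphoto2 CLI, and the radii/heights are made-up numbers:

```python
# Rough sketch of the step-2 capture path. `arm.move_to()` is a hypothetical
# wrapper for the UArm 5's actual Python SDK; the camera trigger shells out
# to the gphoto2 CLI. 3 rings x one photo every 6 degrees = 180 photos per
# mound state, with each camera pose logged for later splat alignment.
import math
import subprocess

RADIUS_MM = 300                    # orbit radius around the mound (made up)
RING_HEIGHTS_MM = [150, 250, 350]  # the 3 camera heights/angles (made up)
STEP_DEG = 6

def capture_photo(path):
    """Trigger the tethered camera and download the frame via gphoto2."""
    subprocess.run(
        ["gphoto2", "--capture-image-and-download", "--filename", path],
        check=True,
    )

def capture_mound(arm, mound_id):
    shots = []
    for ring, z in enumerate(RING_HEIGHTS_MM):
        for deg in range(0, 360, STEP_DEG):
            x = RADIUS_MM * math.cos(math.radians(deg))
            y = RADIUS_MM * math.sin(math.radians(deg))
            arm.move_to(x, y, z)  # hypothetical SDK call; aim at the mound
            path = f"mound{mound_id:02d}_r{ring}_{deg:03d}.jpg"
            capture_photo(path)
            shots.append((path, (x, y, z, deg)))  # pose log for alignment
    return shots  # 3 rings x 60 stops = 180 photos
```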

I am very, very open to ways to simplify this workflow: basically, distilling this idea even further while preserving the core idea of time flow in 3D work (aka a 4D result).

I'm slightly less open to changes in concept. I'm kinda set on figuring out how to use the robot and piecing together splats to make a stop-motion animation, so the process and result are kinda set. I'm a little unsure whether this concept of "people manipulating a mound" even fits the "person in time" theme, but I'm open to ideas/thoughts/opinions/concepts that aren't too difficult.

edit: should I capture people’s nails? like shiny sparkly nail art?

Person in time WIP — smalling (?)

Smalling/being smalled… the title is a WIP. The subject is the way the body contorts when centered on itself. This is to say, there's a focus on the motion of making oneself physically small, and on that motion as one that can only occur when one is very much aware of one's own body.

Capturing people (or myself) physically making themselves small, either naturally (due to environmental needs or lack of awareness) or artificially (intentionally forcing oneself into a small space).

Not sure what output is here, have a few ideas:

  • Smallness as form, a formal quality of an object or person. This would probably be me shoving myself into tight corners, shot with a nicer camera setup.
  • Smallness as a kind of game? I don't think I want it to be too game-y, but image segmentation is an interesting option. I talked very briefly to Leo and Golan about attempting to measure in real time how small someone has made themselves, or how much their form has been reduced (a rough sketch of this follows the list). I would prefer not having to solely photograph myself, so I'm leaning towards this.
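The promised sketch of the real-time measurement, using MediaPipe's selfie-segmentation model as a stand-in for the prefab segmentation thing Leo sent (any person-segmentation model would do); "smallness" here is just one minus the fraction of webcam pixels the person covers:

```python
# Rough sketch of the real-time "smallness" measurement. MediaPipe's
# selfie-segmentation model stands in for any person-segmentation model.
import cv2
import mediapipe as mp

segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)
cap = cv2.VideoCapture(0)  # webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    mask = segmenter.process(rgb).segmentation_mask  # float32 in [0, 1]
    person_fraction = float((mask > 0.5).mean())     # share of frame that is person
    smallness = 1.0 - person_fraction
    cv2.putText(frame, f"smallness: {smallness:.3f}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("smalling", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```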

Some semi-successful and mostly not great or all that pretty examples of both of the above options:

Detection of small body on goofy prefabbed segmentation thing Leo sent me:

That, but in video (sorry, my usual GIF hosts aren't working):

Ok. Now photos:

Looking Outwards 04

 

1. Paul Sermon’s Telematic Dreaming (1992)

 

Paul Sermon’s Telematic Dreaming is an interactive installation that allows two individuals to interact virtually in separate locations via video projection onto a bed. Participants see themselves and their remote counterpart lying together in the same space, despite being physically distant. This project fascinates me because it explores intimacy and presence in a virtual world, emphasizing the ethereal nature of human connection in an increasingly digital age. It captures not only a moment in time but also transcends physical boundaries, creating a dreamlike experience of togetherness.

2. Stefan Draschan’s Photography of Museum Visitors

https://www.dw.com/en/people-matching-artworks/video-52219715


Stefan Draschan’s photography is renowned for capturing museum visitors whose clothing or posture coincidentally aligns with the artwork they’re observing. Draschan patiently waits for these serendipitous moments where the viewer unintentionally becomes part of the artwork itself. I find this project interesting because it plays with the idea of temporality and synchronicity, turning candid, fleeting moments into art. The subjects aren’t posing, yet their interactions with the art they’re viewing create natural, yet curated, compositions that offer new ways of perceiving both viewer and artwork.

3. Carl Knight’s Moving Pictures 

https://www.instagram.com/p/C8aC05Uu7ll/?igsh=MTBoZTU1MGZ2N2J6bQ%3D%3D&img_index=1 

In Carl Knight’s Instagram series Moving Pictures, the artist creates cinemagraphs—still images with subtle, repeated motion. These moving portraits capture a moment in time that feels continuous, as if frozen in an ongoing loop. In this example, the use of motion adds a hypnotic quality to a seemingly static scene. I am drawn to this project for its ability to merge stillness with movement, capturing time in a way that feels both suspended and in constant flux, offering a deeper reflection on the passage of time within portraiture.


Looking Outwards 4

Quantified Self Portrait (One Year Performance). Michael Mandiberg. 2017.

 

Quantified Self Portrait (One Year Performance) is a frenetic stop-motion animation composed of webcam photos and screenshots that software captured from the artist's computer and smartphone every 15 minutes for an entire year, a technique used for surveilling remote computer labor.

 

Quantified Self Portrait (Rhythms). Michael Mandiberg. 2017.

Quantified Self Portrait (Rhythms) sonifies a year of the artist’s heart rate data alongside the sound of email alerts. Mandiberg uses himself as a proxy to hold a mirror to a pathologically overworked and increasingly quantified society, revealing a personal political economy of data. The piece plays for one full year, from January 1, 2017 to January 1, 2018, with each moment representing the data of the exact date and time from the previous year.

 

Excellences and Perfections. Amalia Ulman. 2014.

https://webarchives.rhizome.org/excellences-and-perfections/

A four-month performance on Instagram that fooled her followers into believing in her character and following her journey from 'cute girl' to 'life goddess'. It brought fiction to a platform designed for supposedly "authentic" behaviour, interactions, and content.

EEG AR: Things We Have Lost. John Craig Freeman. 2015. 
https://johncraigfreeman.wordpress.com/lacma-art-technology/

Freeman and his team of student research assistants from Emerson College interviewed people on the streets of Los Angeles about things, tangible or intangible, that they have lost. A database of virtual lost objects was created based on what people said they had lost, along with avataric representations of the people themselves.

I thought these might not be related but had them on anyway:

The Clock. Christian Marclay. 2010.

Twenty-four hours long, the installation is a montage of thousands of film and television images of clocks, edited together so they show the actual time.

Cleaning the Mirror. Marina Abramovic. 1995.

Different parts of a skeleton (the head, the chest, the hands, the pelvis, and the feet) appear on five monitors stacked on top of each other, forming a slightly larger-than-life (human) body. On the parts of the skeleton one sees the hands of the artist scrubbing the bones with a floor brush.

 

Sunday Ingredients

Practice part 1: Ingredients

In this project, titled Sunday Ingredients, I photographed the objects I touched with my hands throughout the day and arranged them sequentially based on the time of interaction, creating a list of objects. These items constitute my day on that Sunday: The Ingredients of 9.29.2024.

I believe that daily life is made up of countless ordinary, trivial matters, and each of these small events is triggered by an object. For instance, drinking water as an activity might be initiated by a bottle of mineral water, or perhaps by a combination of a cup and a faucet. Therefore, the mineral water bottle itself, or the combination of a cup and a faucet, can imply an action: in this case, drinking water. When we document a series of objects, we can imagine and deduce a sequence of behavioral trajectories. For example, if a table shows remnants such as a candy wrapper, a book, a pair of glasses, and a mouse, we might visualize that the owner of these objects was perhaps sketching while eating candy at some point. However, we cannot definitively state that the owner removed their glasses, read a book, ate candy, and then used the mouse that morning. This is because these objects form an overlapping trace of different moments, each leaving behind its own mark.

Desk Composition

The hand is the agent of intentional choice. It serves as the bridge connecting my mind to the outside world. If I were an ant, the hand would be my antenna; if I were an elephant, it would be my trunk. The act of touching an object with the hand represents the embodiment of will. When an object is actively touched by the hand, it reflects the need of that particular moment. Thus, when the touched objects are linked together in chronological order, they record my life. If I touch the faucet, refrigerator door handle, milk, cup, and bread in succession, these objects collectively suggest the event of breakfast.

Breakfast Sequential Items

In this project, I documented the main objects my hands actively touched on Sunday, 9.29, starting from when I woke up until I lay in bed preparing to sleep. I sorted them chronologically and divided them into eight time segments.

The Ingredients of 9.29.2024

Practice part 2: Collage

After collecting these objects that reflect my activities throughout the day, I wanted to use these “ingredients” to create a collage that represents my day. I took a photo of my hand from that day and used all the objects I interacted with as materials to collage an image of my hand. This collage, titled Hand of 9/29/2024, symbolizes my day.

HANDS OF 9.29.2024

 

I took 65 photos of the objects I touched that day, cropped each one into a 5×5 grid of small squares, and obtained a total of 1,625 material images.

65 Ingredient Item Photos

Library Resource

For the target image, I converted it into a grayscale image and divided it into 15×20 small squares. Then, I compared each target square with the material squares in the library one by one and selected the closest match to replace the original square.

In my comparison function, I used OpenCV’s Canny Edge Detection, ORB (Oriented FAST and Rotated BRIEF), and SSIM (Structural Similarity Index Measure) to perform a comprehensive similarity analysis between the target image and images from the library.

        1. Convert the target image and library images into edge maps using Canny edge detection.
        2. Use ORB to compare the similarity between the edge maps of the target image and the library images.
        3. If the ORB similarity score is greater than the set threshold value (e.g., 0.5), then proceed to evaluate further.
        4. Scoring mechanism: calculate the SSIM value between the grayscale version of the original target square and the grayscale library image, then combine the SSIM score with the ORB score using a weighted formula. The weights are yet to be determined and are represented as variables for now (see the sketch after this list).
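For concreteness, a sketch of the comparison function as described in steps 1-4. `target_tile` and every image in `library` are assumed to be same-sized grayscale (uint8) arrays; the 0.5 gate and the weights are the still-undetermined variables:

```python
# Sketch of the comparison function in steps 1-4. Tiles are assumed to be
# same-sized grayscale arrays; threshold and weights are placeholders.
import cv2
from skimage.metrics import structural_similarity as ssim

ORB_THRESHOLD = 0.5       # gate from step 3
W_ORB, W_SSIM = 0.4, 0.6  # placeholder weights (step 4)

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def edge_map(gray):
    """Step 1: Canny edge map of a grayscale tile."""
    return cv2.Canny(gray, 100, 200)

def orb_similarity(edges_a, edges_b):
    """Step 2: ORB match ratio between two edge maps, in [0, 1]."""
    _, des_a = orb.detectAndCompute(edges_a, None)
    _, des_b = orb.detectAndCompute(edges_b, None)
    if des_a is None or des_b is None:
        return 0.0  # too few features to compare
    matches = bf.match(des_a, des_b)
    return len(matches) / max(len(des_a), len(des_b))

def best_match(target_tile, library):
    """Steps 3-4: gate by ORB score, then rank by weighted ORB + SSIM."""
    target_edges = edge_map(target_tile)
    best, best_score = None, -1.0
    for candidate in library:
        orb_score = orb_similarity(target_edges, edge_map(candidate))
        if orb_score <= ORB_THRESHOLD:
            continue  # step 3: skip weak edge matches
        score = W_ORB * orb_score + W_SSIM * ssim(target_tile, candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best  # None if nothing passed the gate
```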

Photogrammetry of UV Coloration with Flowers (ft. Polarizer)

Initially, I took an interest in polarizer sheets, which let me separate diffused light from specular light. I was particularly interested in the look of an image containing only the diffused light.

(Test run image: Left (original), Right (diffused only))

My original pipeline was to extract materials from real-life objects and bring them into a 3D space with the correct diffuse and specular maps.

After our first crit class, I received a lot of feedback on my subject choice. I could not think of a subject I was particularly interested in, but after seeking advice, I grew interested in capturing flowers and how some of them display different patterns when absorbing UV light. Therefore, I wanted to capture and display the UV coloration of different flowers that we normally do not see.

I struggled mostly with finding the "correct" flower. Other problems that came with my subject choice were that flowers wither quickly, are very fragile, and are quite small.

(Flower I found while scavenging the neighborhood with the bullseye pattern, but it withered soon after.)

After trying different programs to conduct photogrammetry, RealityScan worked the best for me. I also attached a small piece of polarizer sheet in front of my camera because I wanted the diffused image for the model; there was not a significant difference since I couldn’t use a point light for the photogrammetry.

Here is the collection:

Daisy mum

(Normal)

(Diffused UV)

 

Hemerocallis

(Normal)

(Diffused UV)

(The bullseye pattern is more visible with the UV camera)

 

Dried Nerifolia

(Normal)

(Diffused UV)

My next challenge was to merge two objects with different topologies and UV maps so that one model would carry two materials. Long story short, I learned that it is not possible… :')

Some methods I tried in Blender were

  • Join the two objects into one, bring the two UV maps together, then swap them
  • Transfer Mesh Data + Copy UV Map
  • Link Materials

They all resulted in a broken material like so…

The closest to the result was this, which is not terrible.

Left is the original model with the original material; middle is the original model with the UV material "successfully" applied; and right is the UV model with the UV material. However, the material still looked broken, so I thought it was best to keep the models separated.