That’s the post. Pretend I am in the Frank-Ratchye studio giving my project presentation; I am not getting paid enough to greenscreen that in as a background.
My proposal is to get liquid emulsion, cover large surfaces with it, and then test naturally found shutters.
Ex: Opening a door, walking through it, and shutting the door again is a natural shutter. The capture happens if the room I exit is lit, the room I enter is dark, and the wall opposite the doorway in the dark room has an emulsion surface.
A motion sensor light that times out after a while is a natural shutter.
The sun rising and setting is a natural shutter.
I think something standardized might be nice if I wanted to compare captures within a typology. Something I mixed & applied myself would probably not be standardized enough, so I might have to buy something premade, but that might not work at scale.
I am also looking at other light-sensitive options like natural dyes and cyanotypes; I will be doing media studies & trying to get the cost down.
The idea was to take things in my house that I could get to “tick” and have them all tick at the same tempo as my body. Examples of a “tick”: a light on a smoke-detector blinks, a curtain sways, a hinge creaks, a faucet drips.
Why these two tempos are important:
Heartbeat and breath rate are the body’s cycles that indicate it’s alive. These are fairly regular cycles, but some are more irregular (ex. walking vs. not walking).
The house’s ticks indicate the house also has some kind of natural rhythm (ex. a clock on a wall). Some of the rhythms indicate a human is living inside (ex. stepping on a floorboard causes noise), and so may be more irregular (but I might creak the same floorboard when I get out of bed the same time each day).
I think the cycles of these two things are comparable.
For the project:
The person in this would be myself– or more “my body” than “my-self”. The quiddity is the specific rhythms of this body, and/or the marks the body leaves on its environment. (A rhythm of the body may be a heartbeat. A mark made by the body may be the faucet dripping; that would not happen if I had not just run the sink.)
If a body can leave marks on a home, then the two sets of cycles are also linked / influence each other. I wanted to explore the question of “What happens/what does it mean if these are 1:1?” I’m assuming that means there’s no more barrier between body and house.
Part of that question is: “Why are they linked?” Over time, if you inhabit a house enough, it’s changing you as much as you’re changing it. You automatically duck under a low overhang, you know where to reach for light switches in the dark. This is the house changing you. Over time, you indent the middle of the stairs and smudge the wall around the light switches where you feel for them in the dark. This is you changing the house. Given enough time, these rates of changing each other might become the same/synonymous.
There’s an assumption that in the relationship between a person and their house, the person is the one with agency, and the house is passively changing in response. Say that isn’t true, or that the opposite is true: “What happens if the house has all the agency, and the human body is passive?” I think places can be sentient in the sense of having their own agency. There are some scenarios where they can feed off a person inhabiting them.
The third possible answer to “why are they linked?” is that you are not the same thing as your body. Your body is a thing you inhabit, much like a house. “Why is the end of the skin the end of your body, if you have similar levels of agency over and privacy within your body as your house?”
The thing I am producing is sort of a depiction of the fantasy where one or more of these possible answers comes true.
Output, Attempt 1:
I spent the majority of my time trying to get data from a Fitbit. I started there because it could give me sleep stages on top of heart rate. This sucked:
These are examples of requesting my live heart BPM via the Fitbit Web API. It does not give me live data and it’s finicky. I had to figure out requesting and verifying tokens via the URL to get permission from the app; it was a lot of work, I do not know why it is like this, and the behavior is unacceptable.
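For future-me, the request itself boils down to something like this; a minimal sketch, assuming you have already survived the OAuth dance and hold a valid access token, and that your app type is allowed intraday data (endpoint and field names are my reading of the Fitbit Web API docs, not verified against my exact setup):

```python
# A minimal sketch of the request, assuming a valid OAuth 2.0 access token and an
# app type that is allowed intraday data (heart-rate intraday time series,
# 1-second detail level, per the Fitbit Web API docs).
import requests

ACCESS_TOKEN = "..."  # whatever came out of the token dance above

resp = requests.get(
    "https://api.fitbit.com/1/user/-/activities/heart/date/today/1d/1sec.json",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()

# The intraday dataset is a list of {"time": "HH:MM:SS", "value": bpm} samples,
# synced after the fact from the watch rather than streamed live.
for sample in resp.json()["activities-heart-intraday"]["dataset"][-5:]:
    print(sample["time"], sample["value"])
```

Even when this works, the samples only show up after the watch syncs, which is the whole “no live data” problem.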
Output: F-it, Arduino:
After that did not work, here’s what I could get done with an Arduino in like 10 days:
I figured syncing heartbeats to something would be my best bet, although I also tested a BMP280 air pressure sensor & a DS18B20 temperature sensor (both are Arduino inputs) to capture breathing. The air pressure sensor gave an output that’d be usable under good circumstances.
Out of all the “ticks” I’ve mentioned previously, I chose a dripping faucet because it’s a subtle life-sign, it easily matches the type of pulse I get from a heartbeat, and it doesn’t have the same definite connotation as something like a clock.
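The host-side logic for the sync is small. Here is a minimal sketch, assuming a hypothetical serial protocol where the Arduino prints one BPM reading per line from the pulse sensor and opens the solenoid valve for one drip whenever it receives the byte "d" (port name and baud rate are placeholders):

```python
# A minimal sketch of the heartbeat -> drip sync, assuming a hypothetical serial
# protocol: the Arduino prints one BPM reading per line from the pulse sensor, and
# sending the byte b"d" makes it open the solenoid valve for one drip.
import time
import serial  # pyserial

PORT = "/dev/ttyACM0"  # placeholder; whatever port the Arduino shows up on

with serial.Serial(PORT, 9600, timeout=0.05) as ard:
    bpm = 60.0  # fallback tempo until the first reading arrives
    next_drip = time.monotonic()
    while True:
        line = ard.readline().decode(errors="ignore").strip()
        if line.isdigit():
            bpm = max(30.0, min(180.0, float(line)))  # clamp nonsense readings
        if time.monotonic() >= next_drip:
            ard.write(b"d")          # one drip
            next_drip += 60.0 / bpm  # schedule the next drip one beat later
```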
Here’s the thing installed:
Here is a final-output video with it:
Final thoughts:
I think this went okay, even though the final vision that was in my head is not present here, and I didn’t do everything I wanted. There are also approx. 7 extra pages of notes overthinking it, so this was a good thumbnail for exploring the fuller idea further and for finding where the technical limitations are. (Link to my notes, although they’re mostly incoherent.)
I think this project connects to other ideas I care about exploring– I wanted to see a depiction where “I” am specifically “my-body” and not “my-self”. I usually ask to see some kind of dispersal or relocating of that “self”. I like that this condenses many hours of work into something very tiny & unimportant (or it would, if the solenoid were quiet). The previous project I’d tried to do for this class was on technophony: transcribing background noises into nonsense text. These projects are both essentially about giving human agency to the environments humans inhabit.
Because my project is similar to some other people’s: I do not care about heart rate in terms of tracking or counting. I care about this body-rhythm because it indicates life.
Take my heart-rate data live from a Fitbit. Find some things in my house that tick. Have the house tick in sync with my heartbeat.
Expanding:
I already have a Fitbit, and I am already in progress getting the API to work. I’m pretty sure I can get data at the rate I’d want. I can also give up and use some other heart-rate monitor, an Arduino, whatever.
The heart rate could also be swapped for breathing.
“Tick” means like: clock hand, beep, water drip, quick power on/off.
I will have to experiment to find out what I can make tick. I am allowed to be selective about what household appliances/subjects are incorporated. The end result does not have to be “my house, 100% accurate to how it is everyday”, it can be staged.
The end result of this is an experience for me to be in. That will probably manifest as video documentation for class. I imagine I would be in the ticking space, because I imagine I would respond to hearing my own heart rate. An alt (not for class) version would be an actual installation of a room ticking, one that runs constantly while I’m not there.
Original idea involved sleep/REM data, and dreamwalking.
Alt: Getting a UV light & sun-bleaching my shadow into a piece of paper for like a hundred-something hours. I just think it’d be fun to make that much time translate into something really stupidly subtle.
Alt alt: Grid of office cubicles. In each cubicle is one chair, a controller to pan/zoom/tilt a ceiling-mounted camera, and a monitor displaying that camera’s feed. IDK, office-core panopticon.
Probably coming from the Artist’s Grandfather Eating a bowl of Soup work, I am thinking of homes and very slow accumulation, and also still about “passive capture” techniques. A few examples of sun-bleaching from r/mildlyinteresting:
I think sun-bleaching is good at translating immense amounts of time into incredibly subtle things. I like the idea that a shadow isn’t passive/ephemeral. Mrs. Homegrown posts about dust.
The one by Benglis (above) is her narrating over old videos of family, and has a quality of dissociation both through time and through the motif of watching her life play back through a screen. It’s interesting that she’s more personal with the viewer than with her own experiences. [link to full video, requires login]
The one by Steele (above) functions a lot like a combination of sun-bleached home-places and Benglis’ narrative style. Bodies and houses are treated as fairly equivalent, so I’m sure finding the specific differences leads somewhere. [link to full video, requires login]
A few more pieces that came to mind mostly thinking about time-passage capture-methods:
Portrait of many people, representing any individual’s draw toward/likelihood of standing in the shade. [link]
Using Google Street View as a camera. Proves that the Street View photos aren’t ambiguous in time. [link]
Election cycles as a time-measuring metric. Also yard-signs as passive-capture of a house’s residents. [link]
So, what’s my project? Alright, well, the first part of it was spending a lot of time nitpicking over ideas.
I knew I wanted to do something with sound, and then I knew I wanted to do something on technophony. If a soundscape is composed of biophony, geophony, and anthrophony, then technophony is electro-mechanical noise, a subcategory of human noise.
To me technophony seems to fall into two categories: background-noise/noise-pollution, and sounds meant to communicate with humans. Ex. a ventilation drone vs. a synthetic voice or beep.
The first is interesting to me because of how subtle and unnoticed yet constant and invasive it is. The second is interesting because by giving machines sensors and reactive cues, they gain a sense of agency where they otherwise shouldn’t have any (language is typically considered a trait only of things with sentience). If these two ideas are combined, you’re presented with a world where you’re constantly engulfed in sentient actors that are completely invisible.
There’s a point that centralized A/C isn’t particularly human-reactive or communicative: it’s only sensing room temp, and it doesn’t have a voice. However, a lot of A/Cs will cycle on and off in a pattern, which creates an image where you are inside of a giant thing that is breathing very, very slowly. There are other things like this: streetlights that turn on and off at dusk and dawn have nocturnal sleep-wake cycles.
What I’ve ended up doing — I’ve been trying to get speech-to-text transcription to work on technophonic noises.
An extremely subtle sound is indistinguishable from room tone, and it feels like I am not recording any one specific technophonic source.
Okay, I can abandon interviewing A/C units, and try particularly clangy radiators or faulty elevators. Great, yup.
Whisper (Python library) transcriptions will give wildly different results on the same file; if the input and the process are both random, it’s bunk.
Vosk (Python library) transcriptions vary little enough that it’s an actual methodology, but that means it’s too good at filtering out anything that isn’t human speech.
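For reference, the Vosk pass is roughly this; a minimal sketch, assuming a downloaded Vosk model folder and a 16-bit mono WAV recording (filenames are hypothetical):

```python
# A minimal sketch of the Vosk pass, assuming an unpacked Vosk model in ./model
# and a 16-bit mono WAV recording (filenames are hypothetical).
import json
import wave
from vosk import Model, KaldiRecognizer

wav = wave.open("radiator_clang.wav", "rb")
rec = KaldiRecognizer(Model("model"), wav.getframerate())
rec.SetWords(True)  # keep word-level timestamps, not just the text

words = []
while True:
    data = wav.readframes(4000)
    if len(data) == 0:
        break
    if rec.AcceptWaveform(data):
        words += json.loads(rec.Result()).get("result", [])
words += json.loads(rec.FinalResult()).get("result", [])

# Each entry looks like {"word": ..., "start": ..., "end": ..., "conf": ...}
with open("radiator_clang.txt", "w") as f:
    f.write(" ".join(w["word"] for w in words))
```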
Where I’m actually at:
I can generate spectrograms (librosa).
I can transcribe a file via Whisper or Vosk to a .txt file.
I can input an audio file, output a .mp4 with captions time-stamped to word-level (via vosk in videogrep + moviepy).
Getting word-level timestamps out of a Whisper .json fucking sucks, dude.
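One thing that might sidestep the .json wrangling: newer openai-whisper builds accept word_timestamps=True directly in transcribe(), and pinning the temperature also cuts down (but doesn’t eliminate) the run-to-run randomness from earlier. A minimal sketch, assuming a recent Whisper version; the filename is hypothetical:

```python
# A minimal sketch, assuming a recent openai-whisper build that supports
# word_timestamps=True in transcribe() (filename is hypothetical).
import whisper

model = whisper.load_model("base")
result = model.transcribe(
    "elevator.wav",
    word_timestamps=True,
    temperature=0.0,                   # pin sampling so reruns vary less
    condition_on_previous_text=False,  # stop earlier text from steering later text
)

for segment in result["segments"]:
    for w in segment.get("words", []):
        print(f'{w["start"]:7.2f}  {w["end"]:7.2f}  {w["word"]}')
```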
I decided I wanted to do something with audio as my jumping-off point. Currently my favorite idea is a typology of non-human voices in public spaces (ex. loudspeakers, elevators, buses). And then if they are all collected together, I can let them talk to each other. My other favorite idea was a typology of room tones or A/C, vents, and pipes; like, subtle background noises. I have been unsure whether I’d rather begin with concept or with technology, but I think at this point, I would like to find the piece of tech that best complements the loudspeaker idea.
I am partway through reading a book that’s giving some good ideas, and I have attached a pdf of some of my notes/thinking: typology notes.
Edit: After having the meetings, I also have this idea: trying to get “noise pollution”-type sounds to be transcribed by a speech-to-text program into words or phonetics. I like using “noise pollution”-type sounds because these sounds are all caused by humans, but ignored/disliked by humans, and they are very subtle but very constant. I can see one vision of this where the end result is: a video of the Object that makes the noise, where the video’s audio is the noise the Object makes, run through speech-to-text, and the transcription is read out (either by synthetic or human voice) in sync with the video.
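One possible way to keep that read-out in sync with the video is to dump the word-level timestamps (from either recognizer above) into a subtitle file; a minimal sketch, with a made-up word list standing in for real recognizer output:

```python
# A minimal sketch: turn (word, start, end) timestamps into an .srt file so the
# read-out transcription can play back in sync with the video. The word list here
# is a made-up stand-in for real recognizer output.
def to_timestamp(seconds: float) -> str:
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

words = [("the", 0.4, 0.6), ("vents", 0.6, 1.1), ("hum", 1.3, 1.9)]

with open("object.srt", "w") as f:
    for i, (word, start, end) in enumerate(words, start=1):
        f.write(f"{i}\n{to_timestamp(start)} --> {to_timestamp(end)}\n{word}\n\n")
```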
First I was playing around with slit-scan stuff. I was trapped in the Ideate Basement, so I did not have great material to capture.
Then I was allowed to leave, and I made some 3D scans of some things at home. It was challenging to find an app that was not behind a paywall, but the one I’m using could export directly to a SketchFab account, from which I could download the scans into Blender. The first scan captured a lot of the background in a weird way, so for the second one I tried giving it a simpler backdrop. This backfired, and I ended up with more of the environment than the subject. I think the warped space is interesting, but I didn’t play around enough to figure out how to purposefully distort it. I chose to scan some of my favorite toys/dolls because I enjoy them and I felt like people on the internet would also like to play with my toys with me. (This was the slit-scan app. This is the 3D scan one. The following 2 pics link to SketchFab.)
Something interesting or new: I hear about photography from people who do it as an art medium, but really the main argument I hear for photography there is that “the art is in finding the right shot”. Ex. finding the right moment, right view-angle, & composition. I know I’ve seen people assemble collections of photos to be treated kind of like a dataset, but I’ve never known anyone who was in photography without wanting the photo/aesthetics of it to be a part of the final work. I think it’s interesting that the writing gives “images made to be measured, like earthquake tremors or spectrograms” as types of photography. I wonder what else technically counts as photography, especially things I might’ve seen before where neither I nor the person introducing me to them recognized them as photography.
An artistic opportunity, made possible: I think it’s interesting comparing Raman spectroscopy, where the photos are discarded, with visual datasets, where the photos are kept; the difference being the level of abstraction from the human eye, and one being unreadable until it’s converted. I’ve seen a lot of art that exists in its documentation (I was just looking at Gordon Matta-Clark, whose building pieces were torn down after). I’m trying to think of whether there’s a type of photography, or something adjacent, that somehow isn’t documentation. It was interesting to hear about the attempts at standardizing photos of Venus in 1874, and how the photographers had to be militarized in performing the documentation.
(Could not tell if these were 2 different prompts in the phrasing!)
I’m making a post for a few little apps. The Slit Scan Camera app just seems kind of novel & like it’d be fun to play with for a minute. It demonstrates the idea of a photo being not-instantaneous + time corresponding to space.
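For reference, the slit-scan trick itself is small enough to sketch: take the same one-pixel column from every frame of a clip and lay the columns side by side, so the horizontal axis of the image becomes time. A minimal sketch with OpenCV; the filename is hypothetical:

```python
# A minimal slit-scan sketch: grab the center column of every frame and stack the
# columns left to right, so the x-axis of the output image is time instead of space.
import cv2
import numpy as np

cap = cv2.VideoCapture("walk.mp4")  # hypothetical clip
columns = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    columns.append(frame[:, frame.shape[1] // 2])  # one-pixel-wide vertical slice
cap.release()

if columns:
    cv2.imwrite("slitscan.png", np.stack(columns, axis=1))  # (H, num_frames, 3)
```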
The other app I posted was the Spirit Talker™, which lets you talk to ghosts. I absolutely love ghost hunting television. I get it if you’re skeptical that your phone has actual sensors for real ghosts, but with the equipment they use in the TV shows, it’s such a creative theatrical use of imperfect sensors. Even if some of it is possibly staged (unlikely), they’ve still figured out how to mess with the sensors’ reliability while remaining off camera (minus post-production). It’s actually a very nuanced take on mass media through the lens of old-time spirit photography practices. I would be very disappointed if the Spirit Talker™ app relies solely on some RNG algorithm and doesn’t incorporate sensor data at least a little. I have had similar phone apps actually scare me before. Attached is a relevant image of Zak Bagans from Ghost Adventures. I hope he brightens your day.