Typology Machine: Cellphone Obsession and Trajectories from Above

Newest Concept Based on Feedback & Critique (Same Theme/Diff Output)

Concept: I plan on capturing campus students who use their cellphones while walking. I will primarily be using the CUT, standing on both the CUC-to-Doherty pathway and the CFA-to-Drama pathway. I will act as a stationary collision piece, waiting to see if each individual notices that I am in front of them. A camera strapped to my shirt will capture their reaction. I will then ask the following question: "I'm doing a research experiment; can I ask what you were doing on your phone?" I'm capturing their reaction in real time to the underlying question: what was so important on your phone that you couldn't properly walk or pay attention? If I can also get more qualitative insights and interviews with individuals as they walk to class, that would be wonderful.

The final output will be an image that memorializes the moment they look up from their phone and right at me. Their interview answers will be typed out in quotation marks at the bottom of the image.

OLD CONCEPT BELOW:

Concept:
I plan on capturing campus students who use their cellphones while walking, recording the imagery from a surveillance/rooftop angle. As I walk to class, I notice an enormous number of people completely consumed by their phones, often bumping into others as they go. I want to show just how embedded phones have become in our lives, specifically the lives of young college students, such that we can't even put the phone down as we walk. I've seen so many individuals hunched over their phones even while walking down stairs. TL;DR: an exploration of human behavior and our relationship with technology, especially for individuals who grew up alongside the evolution of the iPhone/handheld phone.

Collection of Individuals Walking on their Phones

Capture Method:
1. Set up a high-resolution camera on a tripod on a single rooftop overlooking the CUT, OR use a drone. (Camera areas are listed further below.)
2. Capture time-lapse footage from 8 am to 6 pm, covering the rush-hour periods.
3. Use computer vision algorithms to identify and track individuals using cell phones.
A. I would need to develop a system that classifies specific behaviors:
* Walking while texting
* Stationary phone use
* Phone calls on the move
* Group interactions involving phones
4. Have lines that follow each individual: as each person passes through the time-lapse frame, a line follows them as they walk and stays. You see the build-up of phone use throughout the day and the paths people take!
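
The behavior classification and trail-line steps could be sketched roughly as below, assuming a tracker already produces per-frame centroids for each person. The tracks here are synthetic, real detections would come from something like an OpenCV or YOLO person detector, and the movement threshold is an assumed tuning value:

```python
import numpy as np

# Hypothetical sketch: classify tracked pedestrians as walking or stationary
# from their per-frame centroids, and accumulate persistent "trail" pixels.
# Real centroids would come from a person detector; these are synthetic.

MOVE_THRESHOLD = 2.0  # pixels per frame; assumed tuning value

def classify_track(centroids):
    """Label a track 'walking' or 'stationary' by mean frame-to-frame displacement."""
    pts = np.asarray(centroids, dtype=float)
    if len(pts) < 2:
        return "stationary"
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return "walking" if steps.mean() > MOVE_THRESHOLD else "stationary"

def draw_trails(tracks, shape=(100, 100)):
    """Accumulate every centroid onto one canvas so paths build up over the day."""
    canvas = np.zeros(shape, dtype=np.uint8)
    for centroids in tracks:
        for x, y in centroids:
            canvas[int(y), int(x)] = 255
    return canvas

walker = [(10, 10), (15, 12), (21, 14), (27, 17)]   # steady motion
lurker = [(50, 50), (50, 51), (51, 50), (50, 50)]   # barely moves

print(classify_track(walker))   # walking
print(classify_track(lurker))   # stationary
canvas = draw_trails([walker, lurker])
print(int(canvas.sum() // 255)) # number of trail pixels lit: 7
```

The dotted-line idea from the final output could reuse the same canvas, writing a different pixel value whenever the phone is detected as down.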

Final Output:
1. The final time-lapse should be dense with lines that map out the direction and duration of each individual's phone use. A line would become dotted if an individual picked up and put down their phone.
2. A large image that shows all captured images of every individual tracked by the algorithm.

Where are my possible camera locations?
Since I need a high surveillance angle, I think a drone during the busiest hours/rush hours may prove to have the most merit. I think I could use the corner of Shatz Dining Room, so I can get a clear overhead view of the CUT from 3 different directions. The rest of the test images I captured didn't seem high enough, and other areas encounter more obstruction. There is also the 3rd-floor study area, which has a nearly perfect view of the CUT aside from a few trees, and where I can position the camera without needing explicit permission. I know there are also surveillance cameras on campus; it would be SO interesting if I could use the security cameras to capture and analyze from different angles. I'm not sure CMU police would let me, but it could certainly be worth a try if I'm allowed to use footage from weeks before!

Most promising view: Shatz, but I want more of an aerial view of the CUT!!

A closer hope of capturing on the CUT, but with more of an aerial, bird's-eye view.

An even more aerial, bird's-eye view; this would be the hope for a clear view of individuals as they walk.

What tools do I need?
Most likely a drone, because I need more aerial footage. Also: a camera, tripod, and battery; signs that say "Do Not Touch. Work in Progress. Questions? Please call [my number here]"; and a signed document or confirmation that I can use a specific area, cleared by whoever is in charge of that room (i.e. Shatz).

Ethical Considerations
1. I'll blur faces if it's too obvious who someone is; however, the phones do a great job of hiding individuals, since most people are looking down, not up!
2. Obtain the necessary permissions for window surveillance or any provided video footage.

Typology

For my typology project, I wanted to capture the inner perspective of different garments. If done right, I thought it would be an interesting concept to compare expensive vs. cheap clothes and details such as linings and finishes that are not apparent from the outside. This could extend to a brand study (maybe focusing on one particular brand).

Reference Images:


My attempt so far with this setup, using a hairdryer lol:

I might need a bigger air source, like a fan, and a better lighting environment with better equipment to improve the picture quality.

Comments:

  • Transparent mannequin(?)
  • Use gravity!
  • Narrow the subject scope to what type of garments I want to capture: things I wore during a certain week, garments from a certain price point, something said about capturing the "inside" of something, thrift stores, intimate stories
  • Wet/dry vacuum, air-tight plastic inflatable

Collage Camera

Collage Camera Workflow:

  1. Capture a static initial image and apply stylization:
    • Use a camera to capture a static image as the original material.
    • Apply stylization to the image, enhancing the contrast between light and dark areas, simplifying details, and creating more abstract, block-like color areas. This step aims to highlight the outer contours and reduce complexity in the image, making it easier for subsequent contour extraction.

2. Block segmentation and line information analysis:

  • Segment the stylized image into multiple blocks, with each block containing part of the image.
  • Analyze and process the line information within each segmented block. This may involve edge detection, shape recognition, or further geometric feature analysis within the blocks. The results of the analysis guide how the image is reassembled or used for subsequent collage creation.

3. Prepare a collage material library:

  • The images in the collage material library should contain minimal information to simplify the composition.

4. Stylize the images in the collage material library:

  • Apply stylization to the images in the library, extracting their primary outer contours.

5. Match line features:

  • Compare the line features of each block from the target image with those of the images in the collage material library, selecting the image with the most similar line features to replace the block.

6. Collage assembly:

  • Assemble the collage by pasting the selected replacement images into the corresponding blocks of the target image.
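
A minimal sketch of the matching logic in steps 2-6, assuming grayscale NumPy arrays. The "line features" here are a simple gradient-orientation histogram standing in for whatever edge or shape analysis is ultimately used:

```python
import numpy as np

# Sketch of block matching: describe each block by a histogram of its
# gradient orientations, then pick the library tile with the closest
# histogram. A real version might use Canny edges or Hough lines instead.

def orientation_histogram(block, bins=8):
    """Magnitude-weighted histogram of edge orientations in a grayscale block."""
    gy, gx = np.gradient(block.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi          # orientation, ignoring direction
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

def best_match(block, library):
    """Return the index of the library tile whose line features are closest."""
    target = orientation_histogram(block)
    dists = [np.linalg.norm(target - orientation_histogram(t)) for t in library]
    return int(np.argmin(dists))

def assemble(blocks, library):
    """Replace every block of the target image with its best library tile."""
    return [[library[best_match(b, library)] for b in row] for row in blocks]

# Tiny demo: a tile with horizontal stripes vs. one with vertical stripes.
stripe = np.repeat([0, 255] * 4, 2).astype(np.uint8)   # 0,0,255,255,...
horiz = np.tile(stripe[:, None], (1, 16))              # horizontal stripes
vert = horiz.T.copy()                                  # vertical stripes

library = [horiz, vert]
print(best_match(horiz, library))  # 0
print(best_match(vert, library))   # 1
```

Step 1 (stylization) would happen upstream; posterizing and boosting contrast before this matching should make the histograms more decisive.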

1000 Pennies

My plan for the typology is to continue my exploration of pennies from previous studios. Since this is a typology, I plan to focus on scanning a large number of pennies and trying to learn something about the 'penny space', i.e. the visual space pennies occupy.

^ penny

Below is my planned pipeline:

(1) scan pennies with a flatbed scanner –> (2) Python OpenCV edge detection to extract individual pennies –> (3) rotation correction (also OpenCV?) –> (4) perform UMAP on the penny images –> (5) interpolate the output to a grid (RasterFairy?)

^ gridded t-SNE from ml4a

I also want to see if interesting patterns emerge from computing and visualizing the norm between each penny and the average penny, to visualize both the individual wear on each penny and the average wear across all pennies. This would use the same first three steps, then require a separate step for computing the norms (which I think would be pretty easy).
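
That norm step could look something like this sketch, where random arrays stand in for real aligned penny crops:

```python
import numpy as np

# Hedged sketch of the wear-norm idea: stack aligned penny crops, take the
# mean penny, and measure each coin's L2 distance from it. The pennies here
# are random arrays standing in for the real cropped, rotation-corrected scans.

rng = np.random.default_rng(0)
pennies = rng.integers(0, 256, size=(50, 32, 32)).astype(float)  # 50 fake pennies

mean_penny = pennies.mean(axis=0)
norms = np.linalg.norm(pennies - mean_penny, axis=(1, 2))  # one scalar per coin

print(mean_penny.shape)        # (32, 32)
print(norms.shape)             # (50,)
print(int(np.argmax(norms)))   # index of the most atypical (most "worn") penny
```

Visualizing `norms` sorted, or as a heatmap of `pennies - mean_penny`, would show where on the coin face the wear concentrates.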

I've written the OpenCV script for separating the pennies and plan on scanning some pennies and testing it this week; then I have to write the rotation correction. I found and played around a bit with this open-source UMAP Python package and I think it'll work well, but I still have to figure out how to do the grid interpolation. That same package does have an aligned-UMAP API, but I'm not sure that'll accomplish what I want.
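
The separation step could be sketched like this, with a pure-NumPy flood fill standing in for the actual OpenCV contour extraction (cv2.findContours would do the real work) and a synthetic "scan" in place of flatbed data:

```python
import numpy as np
from collections import deque

# Hedged sketch of pulling individual pennies out of a flatbed scan.
# Assumes a dark scanner background so a simple threshold separates coins.

def extract_blobs(img, thresh=128, min_area=4):
    """Label connected bright regions and return their bounding-box crops."""
    mask = img > thresh
    seen = np.zeros_like(mask, dtype=bool)
    crops = []
    h, w = mask.shape
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        # BFS flood fill over 4-connected neighbors
        queue, pixels = deque([(sy, sx)]), []
        seen[sy, sx] = True
        while queue:
            y, x = queue.popleft()
            pixels.append((y, x))
            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        if len(pixels) >= min_area:
            ys, xs = zip(*pixels)
            crops.append(img[min(ys):max(ys)+1, min(xs):max(xs)+1])
    return crops

# Synthetic "scan": two bright square coins on a dark bed.
scan = np.zeros((20, 20), dtype=np.uint8)
scan[2:6, 2:6] = 200
scan[10:15, 12:18] = 220
coins = extract_blobs(scan)
print(len(coins))                 # 2
print([c.shape for c in coins])   # [(4, 4), (5, 6)]
```

Rotation correction could then align each crop, e.g. by the orientation of its second-order image moments, before the UMAP stage.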

TypologyMachineWIP

I have two concepts in mind that I'm still developing. The first is about a tree hollow where audiences can whisper secrets. These whispers would be quiet, so a special microphone designed to capture such faint sound waves is needed. The captured waves would then be visualized as ring-like patterns, similar to tree rings, and projected inside the tree hollow. The final deliverable would include the individual visualized sound waves, as well as a combined projection of all the whispers inside the tree hollow as an overall display of the sound collected.
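
One speculative sketch of the ring visualization, assuming each whisper arrives as a NumPy array of samples: wrap the loudness envelope around a circle so each recording becomes one wobbling ring, and overlay rings by summing images. The synthetic "whisper" and all the sizing constants below are illustrative assumptions:

```python
import numpy as np

# Sketch: map a whisper's loudness envelope onto a tree-ring-like circle.

def loudness_envelope(signal, n_segments=360):
    """RMS loudness in n_segments slices, one per degree of the ring."""
    chunks = np.array_split(np.asarray(signal, dtype=float), n_segments)
    return np.array([np.sqrt(np.mean(c**2)) for c in chunks])

def render_ring(envelope, size=200, base_radius=60, gain=30):
    """Rasterize the envelope as a closed ring into a square image."""
    img = np.zeros((size, size), dtype=np.uint8)
    angles = np.linspace(0, 2*np.pi, len(envelope), endpoint=False)
    env = envelope / (envelope.max() or 1.0)   # normalize to [0, 1]
    r = base_radius + gain * env               # louder -> ring bulges outward
    xs = (size/2 + r*np.cos(angles)).astype(int)
    ys = (size/2 + r*np.sin(angles)).astype(int)
    img[ys, xs] = 255
    return img

# Synthetic whisper: a 200 Hz tone with a slowly pulsing volume.
rate = 8000
t = np.arange(rate) / rate
whisper = np.sin(2*np.pi*200*t) * (0.3 + 0.7*np.abs(np.sin(2*np.pi*2*t)))

ring = render_ring(loudness_envelope(whisper))
print(ring.shape)              # (200, 200)
print(int((ring > 0).sum()))   # number of lit ring pixels
```

Stacking each new whisper at a slightly larger `base_radius` would grow the projection outward like real tree rings.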

The second idea explores eye contact. Inspired by the Apple Vision Pro's features, which project a video of your eyes and face to make communication more natural while wearing the headset, I want to look into the sensation of communicating with someone who's there but not fully present, which can create an uncanny effect. In this piece, I envision an audience peering through a peephole into a space. As they look through, their eyes are captured by a hidden camera and projected onto an average-looking face (I haven't really fleshed out this part). The projection would create an experience where they appear to make eye contact with themselves, but something feels slightly off. I'd like to capture and analyze the subtle movements of their eyes as part of this experience.

Typology Machine: Work in progress

Garbage ASMR?

  1. Recording the sound of garbage bags dropping down the trash tube in my apartment, ideally using multiple microphones to create a sound field, or securely attaching a binaural microphone to the wall of the tube on the top or bottom floor.

2. An infrared or other specialized camera (e.g. a wildlife camera) that records a short video of the trash bags falling, positioned at the top of the tube. (It's unclear whether filming the contents of the trash is legal.)

3. Ideally, matching the audio with the image.

Ideas from lecture:

  1. Subtitles of the contents of the trash paired with the image of trash falling.
  2. Garbage remix.
  3. Presentation.
  4. Android phones taped to wall.
  5. Making a spy bug with a Circuit Playground or some other small physical-computing hardware?

Roadkill Toll

Idea 1: 

I have this idea about capturing the “roadkill” cost of a given trip (vs e.g. toll or gas). Context: last summer to fall, I spent a lot of time driving between Pittsburgh and NYC, and I was struck by the amount of roadkill I’d see. I was also especially affected because I had spent the year prior to that thinking about the consequences of the Anthropocene and the reconfigurations of our transit infrastructures for automobility. 

I started looking for examples of previous work done on this subject and found these quotes

  • The great irony of roadkill is this: Its most conspicuous victims tend to be those least in need of saving. Simple probability dictates that you’re more likely to collide with a common animal—​a squirrel, a raccoon, a white-​tailed deer—​than a scarce one. The roadside dead tend to be culled from the ranks of the urban, the resilient, the ubiquitous.
  • But roadkill is also a culprit in our planet’s current mass die-​off. Every year American cars hit more than 1 million large animals, such as deer, elk, and moose, and as many as 340 million birds; across the continent, roadkill may claim the lives of billions of pollinating insects. The ranks of the victims include many endangered species: One 2008 congressional report found that traffic existentially threatens at least 21 critters in the U.S., including the Houston toad and the Hawaiian goose. If the last-ever California tiger salamander shuffles off this mortal coil, the odds are decent that it will happen on rain-​slick blacktop one damp spring night.
  • Astronomical as those numbers for larger animals may be, they pale in comparison with the amounts of insects and other smaller creatures that perish on the road. To get a handle on that, Arnold van Vliet of Wageningen University & Research in the Netherlands and his colleagues devised a citizen science project specifically focused on insect mortality. Drivers were asked to take a daily photograph of all the insects squished on their license plates, record their car’s mileage and then scrub the license plate to start with a clean slate the next day. By extrapolating from the nearly 18,000 dead insects thus tallied, the group came up with estimates that, if extended globally, would mean that 228 trillion insects are killed each year on the world’s 36 million kilometers of roads.
  • Here are the relevant links for the quotes above:

I talked with Nica, and we concluded that focusing on insects would perhaps capture the "least visible / highest impact" angle with a relatively simple setup. I thought about capturing the exact moments that different insects collide with my windshield. I still have to think through the exact capture system, but the car environment does help with some of the space/power logistics. One constraint I would like to impose is that I will not intentionally drive around to collect data for this project; rather, I'll collect data through regular use of my car. I'd like to think of the capture system as something that could allow people to collect open-source, crowd-sourced data about this topic. Visually, I want to represent the collected data in a way similar to the road conditions shown on Google Maps. Overall, I hope this project will serve as a first iteration of a setup for collecting roadkill data on future trips (personally, but perhaps in a more public way as well) and uploading visual representations of that data.

TypologyMachineWIP

Topic 1: Paper Chromatography Experiment – inks are almost always made up of multiple colors, which spread at different rates. Technique: Time-lapse + Playback

Plan 1-A: Through paper chromatography, the final color spectrum shows the ink separated by color and contour. Restore the single original color by playing the recording back in reverse ("flashback").

Step 1: Write the answer on paper.

Step 2: Perform paper chromatography and record the whole process with a camera.

Step 3: Produce the video using time-lapse photography and reversed playback.
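
The time-lapse + flashback step could be sketched as below, assuming the recording is already available as a list of frames; synthetic arrays stand in for real video frames here:

```python
import numpy as np

# Sketch: keep every k-th frame for the time-lapse, then reverse the kept
# frames for the "flashback" where the spread colors run back into the ink.

def timelapse_flashback(frames, keep_every=10):
    kept = frames[::keep_every]      # time-lapse: subsample the recording
    return kept[::-1]                # flashback: play it backwards

# Synthetic recording: 100 frames whose brightness encodes elapsed time.
frames = [np.full((4, 4), i, dtype=np.uint8) for i in range(100)]
flashback = timelapse_flashback(frames)
print(len(flashback))            # 10
print(int(flashback[0][0, 0]))   # 90  (the last kept frame comes first)
```

Writing the reversed frames out with OpenCV's VideoWriter (or ffmpeg) would produce the final clip.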

Plan 1-B: Color reconstruction of plants. No two leaves in the world are identical, yet we tend to assume all leaves are the same green. Chromatography is used to analyze the color structure of different leaves to demonstrate the uniqueness of each leaf.

Step 1: Select the leaf types.

Step 2: Grind the leaves into powder and filter them.

Step 3: Record the whole chromatography process of the leaf juice with time-lapse photography.

* Kitchen filter paper can be used for experiments, but the colored markers I picked didn't have great color separation, so I need to do more experimentation.

 

Topic 2: Bubble Photography. The features of bubbles that attract me are as follows: (1) the infinite possibilities of color and combination under light, (2) their instantaneous existence and how easily they break, and (3) fisheye imaging.

I have a series of possibilities I want to explore; you can choose one to start exploring:

Plan 2-A: "Capturing the disappearance of bubbles" – capturing the moment bubbles pop in different ways. Technology: using an Edgertronic high-speed camera.

Plan 2-B: Bubbles produced by special liquids, such as an oil-water mixture.

Plan2-C: Bubbles and smoke.

The invisible audio spectrum

I've recently been really interested in microphones and how they are designed to amplify very specific things. For example, contact mics are meant for capturing the sound of whatever is physically interacting with the mic. They don't have the best sound quality and are inexpensive, since they're meant for general capture rather than fidelity.

Although ultrasonic microphones have some of the highest sample rates of any microphone, they aren't necessarily the best microphones for music production or radio broadcasting, for example. Depending on its purpose, a microphone is biased toward certain portions of the frequency spectrum over others, and I find the technology that allows this to be very compelling.

My idea is to mic a long list of things and create a library of sounds that are constantly being recorded but are never heard within the average human hearing range. I plan to record sounds, places, people, and everyday interactions at the highest sample rate the ultrasonic microphone supports. To avoid aliasing, an initial hard filter will remove everything beyond 192 kHz. After that, I will filter out the audible information from 0 to 20 kHz. Then I will pitch the remaining content, from 20 kHz up to 192 kHz, down several octaves until all of the information that was inaudible becomes audible, and the initially audible portion of the frequency spectrum is no longer audible.
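
A rough sketch of that chain, with a synthetic 40 kHz test tone in place of a real recording; the FFT-based high-pass and the linear-interpolation octave drop are illustrative assumptions, not a finished DSP design:

```python
import numpy as np

# Sketch: high-pass away the audible band, then drop the ultrasonic
# remainder by whole octaves via time-stretching (equivalent to playing
# the samples back as if they were recorded at a lower rate).

RATE = 192_000          # assumed capture sample rate
CUTOFF = 20_000         # bottom of the "inaudible" band
OCTAVES_DOWN = 2        # each octave halves every frequency

def highpass(signal, rate, cutoff):
    """Zero out FFT bins below the cutoff frequency."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    spectrum[freqs < cutoff] = 0
    return np.fft.irfft(spectrum, n=len(signal))

def octave_down(signal, octaves):
    """Stretch the waveform 2**octaves times longer; pitch falls by that factor."""
    factor = 2 ** octaves
    old_idx = np.arange(len(signal))
    new_idx = np.linspace(0, len(signal) - 1, len(signal) * factor)
    return np.interp(new_idx, old_idx, signal)

def dominant_freq(signal, rate):
    """Frequency of the strongest FFT bin."""
    spectrum = np.abs(np.fft.rfft(signal))
    return np.fft.rfftfreq(len(signal), d=1.0 / rate)[np.argmax(spectrum)]

t = np.arange(RATE) / RATE                                  # one second
mix = np.sin(2*np.pi*1_000*t) + np.sin(2*np.pi*40_000*t)    # audible + ultrasonic

ultra = highpass(mix, RATE, CUTOFF)          # the 1 kHz tone is gone
shifted = octave_down(ultra, OCTAVES_DOWN)   # 40 kHz falls to ~10 kHz
print(round(dominant_freq(ultra, RATE)))     # ~40000
print(round(dominant_freq(shifted, RATE)))   # ~10000
```

Note that time-stretching also slows the recording down by the same factor; a phase vocoder would be needed if the original durations should be preserved.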

robot camera?

Research:

https://www.yuhanhu.net/

ur5 realtime control in python

privacy at cmu

robot arm video

One of my biggest current motivations is exploring how industrial machines can be used in an artistic setting, and using a robot arm would achieve just that. For this project, I was thinking of recording a series of images (panoramas or long-exposure photos) of the shadows in the studio, to examine how light works in such a space. I'm still working on getting the code to work with the simulation software I downloaded, as the robot arm hasn't been installed quite yet.