Final Project – Printed Impressions. & Identity Collections

Hello!!

Welcome to my final project post – it’s so sad that the class is coming to an end.

Since I experimented with two projects for the final output, I’ll be walking through Identity Collections (the original aim, capturing faces) first.

Inspiration

After creating two projects focused on surveillance, where I was merely the observer putting myself in situations to capture something invasive, I wanted to push both myself and the piece I was making. I wanted to incorporate projection and interaction to make the viewer directly engage with the work, where they are forced to think about the implications of their own face being used or kept within a collection, and how they feel about that idea.

The questions I really wanted to investigate/answer were:

How would people react if their face or identity was captured in a way that blurred the boundaries of consent?

Would the invasive nature of this act stay with them afterward, influencing their thoughts or feelings?

Would they reflect on the ethical implications of image capture and its invasive presence in everyday life?

Brainstorming Concept

The concept drastically changed throughout the semester.

I’ve always had this idea of capturing people’s faces in real time and manipulating their mouths to speak. This is technically my idea for senior capstone/studio, but I wanted to see if I could get something up and running in this class to capture faces in real time. After lots of really valuable conversations with Golan and Nica, we came to the following conclusions:

1. There is a strong possibility that it just won’t work: I won’t get the reaction that I’m looking for, and I may run into a lot of difficulties I didn’t ask for, both backlash from the viewer and/or technical problems.

2. It is possible to make it work (reverse projection would be key), but it is beyond my current technical limit, especially with the time we have left; I didn’t know Touch Designer well before we started.

So we changed it, and after talking with Golan, he had an interesting idea about collecting faces in a terrarium-like way:

I loved this idea of collection, moving the faces from one place to another, as a way to test my technical skills while also capturing faces, the essence of the capture. I set off to capture the faces dynamically into a grid, and felt the terrarium would be the next level of the project if I could get there.

Concept

I’m deeply interested in using the viewer as the central entity, folding their interaction and engagement directly into the performance. I aimed to use real-time facial capture in Touch Designer to create a visual experience that lets viewers directly interact with the captured collection of faces, extracted from the viewers that came before them, and forces them to think about the implications of their own face being used or kept within a collection.


Capture System

I played around with what the rig would look like – shoutout Nica for saving the wooden box, and thank you Kelly for donating the wooden box to my cause (along with the wood glue, thank you <3 )

STEP 1 – Visit the rig (Wooden Box)

The idea with this step is that it forces the viewer to choose: do you want to interact or not? And if you do, there’s a slight price that comes with it. On the outside of the box, where the face goes, I could write out the words “putting your face in this hole means you give consent to have your face captured and kept” (in need of more eloquent wording, but you get the picture). This was an element Nica & I talked about often, and I loved the subtlety of the words outside of the box.

Recent Find: In my senior studio, I actually had a live stream feed into the piece I was working on, and I wrote “say hi to the camera.” Most people were completely fine with looking into the camera to see themselves in the piece; however, 2 or 3 of the 45 who came to visit were adamant about NOT having themselves on screen. Once they were assured it wasn’t actually recording, they looked at the camera and viewed the piece for themselves.

TLDR: I think most people would stick their head in?…

STEP 2 – Touch Designer does its thing!

More in-depth info in the Challenges section below.

STEP 3 – Projected Face Grid – Dynamically updates!

 

Project Inspiration

Tony Oursler, Face to Face, 2012

 

Michael Naimark, Talking Head, 1979

 

Challenges

Now here’s where I’ll talk about Touch Designer in depth.

The piece needed to complete the following:

    1. Capture a video in real time using an HDMI webcam!
    2. Capture the video on cue; I decided to use a toggle button, meaning I am manually tapping record
    3. Video saves with a unique suffix to a folder on my desktop (person1, person2, etc.)
    4. Touch Designer could read ALL the files in order on my desktop AND could read the most recently added file
    5. Grid dynamically changes based on how many videos there are (rows/cols change), as sketched below!
    6. Most recent video is read and viewed in the Table COMP
    7. Videos within the grid play on a loop and don’t stop even when the grid changes dynamically

Now 1-5 ✅ were completed as seen in the video! But 6-7📍were insanely aggravating.
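Before digging into what went wrong, here’s roughly what the dynamic grid sizing from step 5 boils down to in Touch Designer’s Python. This is only a sketch under my assumptions: the operator and parameter names (a Folder DAT called ‘faces_folder’, a grid container ‘face_grid’ with custom Rows/Cols parameters) are hypothetical stand-ins for whatever your network actually uses.

```python
# A sketch only: assumes a Folder DAT named 'faces_folder' pointed at the
# Facial_Capture directory (header row on), and a grid container 'face_grid'
# with custom integer parameters Rows and Cols driving the layout.
# All names are hypothetical. Intended to live in a DAT Execute DAT
# watching the Folder DAT.

import math

def onTableChange(dat):
    n = max(dat.numRows - 1, 0)   # subtract the Folder DAT's header row
    if n == 0:
        return

    # Keep the grid roughly square as more faces are collected.
    cols = math.ceil(math.sqrt(n))
    rows = math.ceil(n / cols)

    grid = op('face_grid')
    grid.par.Rows = rows          # hypothetical custom parameters
    grid.par.Cols = cols
    return
```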

The biggest problem was that I should have switched to the PC earlier, as soon as I first saw the challenge of making the recording on the Mac.

TLDR: TD was recording in Apple ProRes, but it couldn’t read the recording back when I needed to load it into the grid.

When I first encountered the challenge of recording in Touch Designer and uploading the video to the desktop, I talked with M (thank you M), and we changed the recording codec from H.264 to Apple ProRes (the only codec that could be recorded using a Mac). So at this point, we got the *recording* of the video to work and upload to the desktop. I thought we’d be fine!

Nope!

I had the TD code working: I used a Select DAT to read the most recent file in the Facial_Capture folder, and a Movie File In TOP to read the video and load it into the Table COMP. But Touch Designer threw “Warning: Failed to Open File” whenever the file was Apple ProRes.
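For context, the “read the newest file” step itself is tiny, roughly the sketch below, with hypothetical operator names (‘faces_folder’, ‘moviefilein_latest’) standing in for mine, which is exactly why the codec error was so maddening:

```python
# A sketch only: 'faces_folder' is a Folder DAT (assumed to have its default
# 'path' column), and 'moviefilein_latest' is a Movie File In TOP feeding the
# newest cell of the grid. Names are hypothetical.

import os

def load_latest_capture():
    folder = op('faces_folder')
    if folder.numRows < 2:        # only the header row: nothing captured yet
        return

    # Gather full paths (skipping the header row) and ask the OS which one
    # was modified most recently.
    paths = [folder[r, 'path'].val for r in range(1, folder.numRows)]
    newest = max(paths, key=os.path.getmtime)

    movie = op('moviefilein_latest')
    movie.par.file = newest       # this is the step where the ProRes .movs refused to open
    movie.par.play = 1
```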

Turns out, Touch Designer needs to be able to both encode and decode the recording, and there are some (idk) licensing complications with Apple, which makes it SO difficult.

https://forum.derivative.ca/t/td-wont-read-quicktime-apple-pro-res-422/172458

SO I tried to record with NotchLC & Hap/HapQ, but even though the files save to the desktop, the recordings DON’T OPEN OR READ!!!!!! TD just won’t read them back in either.

This raised the question: could I export the file manually to an MP4 or a friendlier format and then send it back into TD? Nope. That also somewhat defeats the purpose of real time.

Then I found out that the newest version of Touch Designer SHOULD be able to encode/decode Apple ProRes, but I realized I am already on the newest version. The article below seems to say it needs Apple Silicon to work? I didn’t think that would make such a big difference, but it may, since the article specifically mentions the Apple Silicon chip.

“On macOS, both the Movie File In TOP and Movie File Out TOP have the additional benefit of supporting hardware decoding (depending on the Mac’s hardware). This means that Mac users will be able to take full advantage of Apple Silicon’s media engine for encoding and decoding.” https://interactiveimmersive.io/blog/outputs/apple-prores-codec-in-touchdesigner/

Anywho, these are just glimmers of the tech difficulties, but the most solvable takeaway is that I just needed to test and play on a PC. I was so focused on getting the code to work before I switched over to a PC, and looking back I think I was approaching the steps a little too methodically; I should have played/experimented with the PC from the beginning. Now I need a PC and will borrow one from Bob for the entirety of next semester, so I won’t have this problem again. Even though it was a pain, I’m now pretty confident finding my way around Touch Designer, which is quite exciting, and I did achieve part of the goal on this assignment.

Outcome & Next Steps

As for the design/materials for the rest of the rig, I have everything:

    1. Kelly’s box, black felt, paint, sharpie (rig)
    2. Transparent mask, frosted spray paint (reverse projection)
    3. Mini projector, webcam, tripod stands, extension cords (electronics)

I’m excited to play around with this next semester and see what can come about, since I’m hoping to present something similar at Senior Studio. I’m pretty confident I can get it to work knowing what I know now. Hopefully I can get more technical and explore more interactive works.

 

Now let’s move on to what was actually presented…

 

Printed Impressions.

I wanted to make sure I showed something at the final exhibition, and went back to edit my second project.

See Project Comments in-depth: https://courses.ideate.cmu.edu/60-461/f2024/11/11/person-in-time-printed-impressions/1875/

Research Question

How does the sequence of the printing process vary between individuals, and to what extent are students aware of the subtle surveillance from a nearby individual when completing their routine actions?

What event did I choose to capture? Why this subject?

I chose to capture Carnegie Mellon students who are printing out assignments, surveilling their behaviors and habits in this mundane yet quick routine. After a lot of trial and error, I realized that the simplicity of observing individuals in this sequence (entering the print station, scanning their card, waiting, foot-tapping, and either taking the time to organize their printouts or leaving their account open) creates an impression of each person over time and reveals subtle aspects of their attention to their surroundings, their attention to detail, and visual signals of stress.

I chose this particular subject because not many people would be aware of subtle surveillance while doing such a mundane task. I thought it might also provoke more of a reaction, but to my surprise, most people were unaware of my intentions.

 

What were the project updates?

The biggest takeaway I heard from the critiques was to “extract more of the absurdities,” meaning: how could I zero in on what made the people interesting as they printed? Did they fidget? How did they move? How did they interact with the printer? Were they anxious? Calm? Were they quick? Were they curious as to why I was in the space? I really loved the concept’s simplicity around something so mundane and was interested to see just how different people were.

Here were some comments that stood out to me:

“Key question: do people print their documents in different ways? Is there something unique about the way people do such a banal thing? It’s been said that “the way you do anything is the way you do everything”. Did you find this to be true during these observations? ”

“I’m glad that you found such a wonderfully banal context in which to make discoveries about people. (I’m serious!) Be sure to reference things like Kyle McDonald’s “exhausting a crowd”, the Georges Perec “An Attempt at Exhausting a Place in Paris”. ”

“I agree that it is hard, like Golan said, to extract information and the interesting bits out of the videos themselves. Maybe what Kim did using voice overs with the Schenley project or subtitles, stalking(?) observations, anthropological notes? But I wonder what more can come out of the project and how can discussions of surveillance and privacy begin from what’s been captured. ”

“I admire you for being a dedicated observer, but I wonder if the privacy issues of this project have been discussed.”

“The collection of people is so interesting because I think I know like 2-3 of them. I think your presence still effects the results and how these people act.”

Video Reasoning

When I went to revise, I realized I had to make the videos more focused on the act itself, so I re-edited them to highlight the similarities within each piece: did they print the same way, fidget similarly, look at the paper in the same manner, glance back to see what I was doing there? I reorganized the videos, but I had a lot of trouble deciding how to organize them, and wondered if I was doing the project an injustice by stitching them all together this way. My biggest question was how to show just how powerful these little moments are, and how to guide the viewer toward what they need to focus on. That thought translated into finding the similarities. I considered narration, but I was unsure how to write it for this format of stitched-together videos and worried it would be confusing, which is why I didn’t take that route.

Outcome & Next Steps

After talking with Golan, we agreed that the piece wasn’t strong just as a visual video and actually fell flat, because viewers just couldn’t understand what was happening. They needed clear guidance, which translated into the need for an audio narration or captions narrating what was going on. He also explained that it was perfectly fine to focus on each video instead of stitching the moments all together, literally creating a typology of videos accompanied by narrating captions/audio. I really like this collected idea and want to focus on it for my next revision. I think the stitching in this way takes away from the magic of the creep cam and the work. In order to truly focus on the piece, I need to emphasize each video and person.

 

WIP Final Project – Identity Collections

Concept

Collecting faces, identities.

I’m interested in illustrating facial collection in real time as a visual experience allowing viewers to interact with the captured collection of faces, extracted from viewers that came before them.

Attempted Output

STEP 1 – Visit the rig (Wooden Box)

STEP 2 – Touch Designer does its thing!

Touch Designer

Recording is OFF

 

Recording is ON
MOVs saved to file!
File names & Video Cells

 

 

STEP 3 – Projected Face Grid – Dynamically updates!

 

Tony Oursler, Face to Face, 2012

 

Michael Naimark, Talking Head, 1979

 

 

Process – Touch Designer

Pseudocode ARGH

✅How to capture the video in real time – webcam!

✅How to capture the video on cue – a toggle button, meaning I am manually tapping record (roughly sketched after this list). *I cannot figure out automation, and it isn’t important to me in this iteration.

✅Video saves with a unique suffix to a folder on my desktop (person1, person2, etc.)

✅Touch Designer can read ALL the files in order on my desktop

✅Grid dynamically changes based on how many videos there are!!!!!!! (rows/cols change)

📍I NEED TD TO PULL THE VIDEO INTO THE TABLE COMP

📍PLAY THE VIDEO ON A LOOP!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

THEN I’M DONE!!!!!!
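For reference, the manual record toggle is about this much scripting in Touch Designer’s Python. This is a rough sketch under my assumptions: ‘record_button’ and ‘moviefileout_capture’ are hypothetical names, and the Movie File Out TOP’s unique-suffix option is assumed to be what produces person1, person2, and so on.

```python
# A sketch only: assumes a Button COMP ('record_button') wired into a CHOP
# Execute DAT, and a Movie File Out TOP ('moviefileout_capture') writing into
# the Facial_Capture folder with its unique-suffix option turned on, so clips
# come out as person1, person2, ... Names are hypothetical.

def onValueChange(channel, sampleIndex, val, prev):
    out = op('moviefileout_capture')
    out.par.record = 1 if val == 1 else 0   # toggle on = start a new clip, toggle off = finish it
    return
```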

Next Steps

  1. Fix the Rig ft. Kelly’s Wooden Box
    1. Cut out the holes for the face
    2. Cut out the holes for the reverse projection sculpture
    3. Put black felt/paper inside so it’s super dark!
    4. Write some words on the outside of the face hole, along the lines of “putting your face inside here means you consent to your video being saved” (something more ethical and better worded)
  2. Test Projection / Installation
    1. Make sure the flow of the installation is clear to the viewer
    2. Is it a surprise if they see other people’s faces on the screen???

Thank you Kelly you kind soul for donating your wood box – ILYYYYYYY <3

 

 

 

Final Project Proposal – Identity Collections

TLDR: I’m interested in illustrating surveillance and facial collection in real time as a visual experience allowing viewers to interact with the captured collection of faces, from viewers that came before them.

Inspiration

Tony Oursler, Face to Face, 2012

 

Michael Naimark, Talking Head, 1979

 

Tony Oursler, Obscura, 2014

Nancy Burson, Human Race Machine, 2000

 

Concept

The idea of taking something fun that makes people want to “stick” their head into some contraption, and then doing a complete 180 that illustrates the subtle ways of surveillance, is interesting to me.

 

 

Possible Process?

Golan recommended Touch Designer for this particular project, but I’m debating about the possibility of Open Processing and/or Python in VSCode.

 

Reverse Projection Sculpture could be the Initial Capture!

Nica & I talked about projecting the individual’s face onto a sculpture in front of them in real time. The plan: build a cardboard box with black fabric on the inside/outside, so that when the individual sticks their face inside, they see a reverse projection of their face onto the frosted mask. Sticking their head inside is the “opt in,” and the video would most likely capture their shock and confusion as they try to make out who it is. The mask is a fixed size, so no matter who sticks their face in, they will look different, especially if I apply a face filter. I know if I stuck my face in and saw a reverse projection, my mouth would drop open, I’d say “woah, that’s crazy,” and I likely wouldn’t recognize myself at first; I’d also try to see it replicate my mouth movements. That video of me trying to piece together my reflection and process what I see in the box would be projected onto the grid and would become the continuous video of me.

 

In terms of ethics, I’ll ask people I know to stick their head in. If I show this at open studio, I can write that the faces will be deleted after the performance, or I could ask for signatures of individuals willing to let me “keep their face” for art purposes. Lots to think of here…

 

 

Recording faces in real time will be annoying, so I need to be able to build a rig that does the following:

    1.   How to capture the video (Reverse Projected Sculpture)
      1. I think Golan’s open processing holds a lot of promise, esp OpenCV!!
      2. FaceWork – https://openprocessing.org/sketch/2219047
      3. Alternate possibilities include more hardcore facial imaging:
        1. Google Image Segmenter – Extract specific regions of image you want
          1. https://ai.google.dev/edge/mediapipe/solutions/vision/image_segmenter
        2. Google Face Landmark Detection Guide
          1. https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker
    2. Crop video?
      1. If everything’s set at a specific camera and the rig consistently captures each individual using Tony Oursler’s method, it could be okay!
    3. Develop rules – does the video capture for 5 seconds? When does it start? After 1s of detection? When does it end? Can I make it automatic, or do I need to be there like I’m a fairgrounds person? (See the rough sketch after this list.)
    4. Save the video to a folder on the desktop
    5. Develop another set of code that reads the most recent video featured on the desktop
    6. Adds the most recent video to a projection grid of faces
      1. I would love to randomize faces, but I’m not sure that’s realistic in this timeframe.
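To make rule #3 concrete, here is a loose Python sketch of the timing logic. This is hedged: OpenCV’s built-in Haar cascade is used as a stand-in for the FaceWork / MediaPipe options linked above, and the folder name, clip length, and thresholds are placeholders I made up.

```python
# Rough sketch of rule #3: start recording after ~1s of continuous face
# detection, stop after 5s, save as personN.mp4. Haar cascade stands in for
# fancier face detection; OUT_DIR and timings are made-up placeholders.

import time
import cv2

OUT_DIR = "Facial_Capture"        # hypothetical folder on the desktop (must already exist)
DETECT_HOLD = 1.0                 # seconds of continuous detection before recording
CLIP_LENGTH = 5.0                 # seconds per saved clip

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)
person, first_seen, writer, started = 1, None, None, None

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)

    if writer is None:
        if len(faces) > 0:
            first_seen = first_seen or time.time()
            if time.time() - first_seen >= DETECT_HOLD:      # face held long enough: start a clip
                h, w = frame.shape[:2]
                writer = cv2.VideoWriter(f"{OUT_DIR}/person{person}.mp4",
                                         cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
                started = time.time()
        else:
            first_seen = None                                 # detection broke; reset the timer
    else:
        writer.write(frame)
        if time.time() - started >= CLIP_LENGTH:              # clip done: close it, wait for next person
            writer.release()
            writer, first_seen = None, None
            person += 1

cam.release()
```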

 

Golan recommended Touch Designer, and after much research, I found I can add videos in real time using two options (thank you to this anonymous person):

“You could use the Folder DAT to monitor the files in the network and load them into individual Movie File In TOPs are you want to show them. In this case you wouldn’t be using the ‘Sequence of Images’ mode in the Movie File In TOP whereby it’s pointing at a directory.

Another option is to use the ‘Specify Index’ mode of the Movie File In TOP instead of ‘Sequential’, which allows you to set the index manually. This way when you reload the node it won’t start from the beginning again.” https://forum.derivative.ca/t/realtime-update-folders-contents/7744
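My rough reading of the first option, as Touch Designer Python (again, hypothetical names: a Folder DAT called ‘faces_folder’ and one pre-made Movie File In TOP per grid cell, moviefilein1 through moviefileinN):

```python
# Option 1 from the forum reply, very roughly. Each file the Folder DAT sees
# gets pushed into its own Movie File In TOP for the grid. Names are hypothetical.

def refresh_grid():
    folder = op('faces_folder')
    for i in range(1, folder.numRows):          # row 0 is the Folder DAT header
        movie = op('moviefilein' + str(i))
        if movie is None:
            break                                # ran out of pre-made grid cells
        movie.par.file = folder[i, 'path'].val
        movie.par.play = 1
```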

 

 

Ideas for capturing the video consistently:

Tony Oursler’s method of facial capturing

Isla gave an interesting idea for making it fun for people to stick their heads in, almost like at a fair. Drastic contrast for the idea of capturing people’s faces.

 

Quick Mockup of Possible Rig Process

Person in Time – Printed Impressions

Printed Impressions.

Research Question

How does the sequence of the printing process vary between individuals, and to what extent are students aware of the subtle surveillance from a nearby individual when completing their routine actions?

What event did I choose to capture? Why this subject?

I chose to capture Carnegie Mellon students who are printing out assignments, surveilling their behaviors and habits in this mundane yet quick routine. After a lot of trial and error, I realized that the simplicity of observing individuals in this sequence (entering the print station, scanning their card, waiting, foot-tapping, and either taking the time to organize their printouts or leaving their account open) creates an impression of each person over time and reveals subtle aspects of their attention to their surroundings, their attention to detail, and visual signals of stress.

I chose this particular subject because not many people would be aware of subtle surveillance while doing such a mundane task. I thought it might also provoke more of a reaction, but to my surprise, most people were unaware of my intentions.

 

Inspirations

1. Dries Depoorter, The Follower (2023-2024)

https://driesdepoorter.be/thefollower/

I found this absolutely incredible project (for my Looking Outwards 4!) by an artist who takes an individual’s Instagram picture and uses a surveillance camera company called EarthCam to show the real-time process of that individual attempting to capture the very image they posted to Instagram. Depoorter essentially exposes the real-time workflow behind an Instagram capture, illustrating the ironic relationship between personal photo consent and public surveillance consent.

The-Follower-Dries-Depoorter-04.gif

B’s Typology on Surprise, capturing people’s surprise reactions as they opened a box.

and… Special shout out to Hysterical Literature & Paul Shen’s Two Girls One Cup

Initial Project Focus & Redirection 🙂

I was initially really obsessed with subtle surveillance that could capture a series of fear or shock reactions from an unknowing individual, and even capture some information about them that isn’t revealed verbally.

However, much conversation and redirection from Golan and Nica was exceptionally helpful in uncovering what exactly a quiddity is, and how I should capture a person over time. I was really grateful for my talk with Nica because they reminded me of the importance of trial and error, to quite literally go out and figure out what works well (and what doesn’t) with the creep cam. Golan’s reminder that the simpler project is often the most powerful was really helpful, too.

I did a LOT of overthinking since I felt like it was too close to the first project, but I’m confident that although I’m using the same capture method, its intention relates to a completely different kind of capture, one that simply observes 🙂

So with their wonderful advice, I got to work!

Capture System – Creep Cam

    1. Discuss the system you developed to capture this event. Describe your workflow or pipeline in detail, including diagrams if necessary. What led you to develop or select this specific process? What were your inspirations in creating this system?

I built this capture method for my previous class project, where I would bump into people who were on their phones and then ask, “what are you doing on your phone?” I wanted to keep the Creep Cam but turn it toward capturing something more positive, while still staying in line with subtle surveillance, since the purpose of the project is to subtly film the entire footage without asking for permission.

What is a Creep Cam, you may ask? It’s a small 45-degree prism used by predators on the subway to subtly look up people’s skirts. So, why not repurpose this unfortunately very clever idea in a completely new way?

Examples of Creep Cam online, there are more sophisticated versions that are much smaller and undetectable!
Reflective Prisms: The science behind the madness!

Ethics was heavily discussed, and since the footage is in a public place, turns out it’s all good – no permission required. After discussing my concerns with Golan, he suggested the PERFECT solution: a Creep Cam.

Close Ups of the Rig!

 

Set Up – Creep Cam in the Wild

When employing the creep cam to capture the event, I need to wear black clothing. On the day of, I wore all black: dark pants, top, coat – the real deal, to make sure the camera was as unnoticeable as possible! (Pictures don’t show the outfit I wore the day of.)

I chose my location in the UC Web Printer, an alcove that’s nice and small to make sure my creep cam could see all the little details.

 

 

Printed Impressions

I was pretty shocked to see the results to say the least… I was able to record 12 people!!! My personal favorites are 5, 9, 11, 13, 14!

Insights & Findings

    1. Advantages of the Dual Camera 

A big comment on my previous project was that I should learn to take advantage of the creep cam’s dual camera. Since the dual camera captures both the top and bottom half of a person, it perfectly captured the stress you could feel from them having to wait for a printed page. Many of them would tap their fingers on the table, tap their foot, or even sigh! I was really excited because the environment perfectly captured each individual’s process at an interesting angle.

2. Most Individuals did NOT realize I was filming them

Most people who went to use the printer would ask if I was printing something out; when I said no, they would quickly go print their stuff. After reviewing the footage, I think some people were somewhat aware, but weren’t fully registering the possibilities of what was going on.

3. Person in Time is Clear!!!
I was most excited that it perfectly captured a quiddity, a person completing an action in time, and you could witness the varying levels of stress and the habits of different individuals. Printing is such a mundane task, one that requires someone to wait for a physical output in such a digital world, that it really boils down to the individual’s focus on the task at hand.

4. Happy Accidents

I was extremely happy with the added aspect of surveillance revealing personal information, since most people using the printer failed to log out of their accounts. I was able to use this to catch a glimpse of who they might be and what they’re working on!

 

 

now for the project 🙂

 

Person 1 – Liam C. – First Year, Dietrich College

 

 

 

Person 2 – Unknown

 

Person 3 – Unknown

 

Person 4 – Unknown

 

 

Person 5 – Unknown

Person 6 – Tomas R. – Sophomore, Business

 

Person 7 – Unknown

Person 8 – Unknown (Again)

Person 9 – Nick B,  Junior, Business

 

Person 10 – Amy C, First Year, CFA

 

Person 11 – Kobe Z, 5th Year, Electrical & Computing Engineering

 

Person 12 – Nikolaj H, Masters, Mechanical Engineering

 

 

Person 14 – Unknown

 

 

 

Evaluation of Printed Impressions

 

Person in Time WIP – Echos of Self

Concept (TLDR)

I want to explore individuals’ reactions of shock or fear when encountering real-time, invasive surveillance, where private information is verbally revealed by the viewer’s own face (think Deepfakes!). I’ll be using Golan’s Face Extractor & FaceWork code from p5.js to capture the individual’s face in real time and manipulate their mouth movements into saying something they don’t believe in OR speaking about their personal information.

Inspiration

I have been playing with projected installations where I project recorded faces onto a white sculpture of a face, and the face begins talking to the viewer. I love provoking a reaction of shock, confusion, any type of reaction that pushes the viewer into reflecting on the message through a bit of realization or even fear…

 

How will I do it?

I’ve modified Golan’s p5.js code to focus on and manipulate the mouth movements. The code also makes sure the face is centralized in the middle of the screen for easy, stabilized projection!!!!! SCORE!!!! Thank you Golan!

Face Extractor & FaceWork

https://openprocessing.org/sketch/2066195

https://openprocessing.org/sketch/2219047

 

Where I’m unsure…

I’m not exactly sure what I want the individual’s face to say once I’m able to manipulate their mouths.

A long-time idea has been to have the projected face say something the person likely doesn’t agree with, but I think this idea requires more thought & attention than just a few weeks.

I’m leaning towards the face revealing the individual’s personal information of some sort:

Address, phone number (this would require me paying for some governmental record)

Instagram or Twitter

Or I let the individual type a secret into the computer and the face verbally reveals it.

With all of these ideas, the main goal is to make sure I can capture the individual’s live reaction to their own projected face almost betraying them in some way.

 

I’m also unsure of HOW I need to manipulate my code.

I can either pre-write the code to manipulate the mouth with pre-written phrases OR with private information I already know about the individual and I just capture their reaction.

OR

I code the mouth alphabet, and write in sentences in real time. Not sure of the fluidity of the reaction.

Ethical Considerations for this project

I’m somewhat unsure of the capture method.

I think this project would most definitely need verbal consent, or their consent by typing their name into the computer.

Verbal consent would likely come from friends, family, classmates, and professors. I’m not sure random viewers would be as willing, unless I just brought my computer to the Cut? Debating how to gather individuals for the capture experience.

 

Looking Outwards: Temporal Captures

**Redid Looking Outwards for MORE explorative projects**

1. Dries Depoorter, The Follower (2023-2024)

https://driesdepoorter.be/thefollower/

I found this absolutely incredible project by an artist who takes an individual’s Instagram picture and uses a surveillance camera company called EarthCam to show the real-time process of that individual attempting to capture the very image they posted to Instagram. Depoorter essentially exposes the real-time workflow behind an Instagram capture, illustrating the ironic relationship between personal photo consent and public surveillance consent.

The-Follower-Dries-Depoorter-04.gif
The Follower

It delves into the disconcerting nature of surveillance, revealing that not only are individuals monitored in their everyday lives, but the very moment they take a photo to later upload for their followers is also captured and watched. It visualizes the capture process, an individual walking around, chasing poses, checking out the camera, and highlights the damaging implications of surveillance in connection to social media. I absolutely LOVE this project because it underscores the irony that, while taking an Instagram photo, individuals often remain unaware of the broader watchful eyes observing them. It’s so easy to piece together someone’s digital footprint, which is scary given the possible implications: stalking, or being used as a resource by police to track convicted individuals or solve a case.

 

To perform a version of the artist’s experiment, a New York Times photographer was captured, Instagram ready, in popular tourist spots
A screen grab from an EarthCam camera in Times Square showing the photographer, John Taggart, to the right of the TKTS sign.

Depoorter would download public photos from Instagram using the locations individuals tagged in their posts. Depoorter collected the live online feeds from EarthCam over a two-week period. He then developed software that compares the Instagram images with the EarthCam recorded footage.

Interestingly enough, Instagram responded saying that ” ‘collecting information in an automated way’ is a violation of the company’s terms of use and can get a user banned… We’ve reached out to the artist to learn more about this piece and understand his process. Privacy is a top priority for us, as is protecting people’s information when they share content on our platforms.”

 

Mr. Taggart on Mulberry Street in Little Italy in Manhattan

 

A screen grab from EarthCam. Mr. Taggart is wearing black with his hands up.Credit…

When I logged onto EarthCam.com, I was amazed to discover it provides the public with real-time surveillance from various landmark locations worldwide. Originally intended (with good intentions) to allow people to experience places they may never visit in person, I was shocked to learn that it also features surveillance of major landmarks not only in the U.S. but also in many countries abroad, such as Brazil and Bali. EarthCam’s live broadcasting of people’s activities in public spaces without their knowledge—and making this technology, and livestream footage from the previous days, publicly accessible—is absolutely insane. When probed by the New York Times to answer questions on the project and the risk their cameras pose to individuals’ privacy, the marketing director only said that Mr. Depoorter used the “imagery and video without authorization and such usage is in violation of our copyright” (NYT). Depoorter responded by saying that “It’s not only EarthCam… There are many unprotected cameras all over the world.”

From the comfort of my couch, I’m currently watching individuals in real time on Bourbon Street in New Orleans on a Saturday night. EarthCam operates cameras in all 50 states, and Pittsburgh has several, including one at Andy Warhol’s grave, where unknowing visitors come to pay their respects, while others who know about EarthCam turn around and wave. Depoorter’s project forces us to confront the uncomfortable truth about our digital lives and the extent to which we are all under surveillance, raising critical questions about privacy and consent in an age dominated by social media.

Bourbon Street

https://www.earthcam.com/usa/louisiana/neworleans/bourbonstreet/?cam=bourbonstreet

Couple waving at Warhol’s grave in Bethel Park

https://www.earthcam.com/usa/pennsylvania/pittsburgh/warhol/?cam=warhol_figmentstream

2. Max_woanygolf (TikTok), Two Guys, Golf, & an Egg (2023-2024)

I came across this moving video on Twitter where two guys are playing golf, aiming to hit an egg. The camera is positioned directly behind the egg, making it feel like you’re standing right there, with the golf balls flying toward your face. As I watched, I couldn’t help but twitch and flinch, instinctively pulling my head back, thinking a golf ball was coming straight at my face. I even had to remind myself to breathe! I bring up this particular piece because the placement of the camera induces the stress of the experience: you keep feeling like the object is coming right toward you, fueling the adventure with adrenaline and capturing an experience that most people wouldn’t normally willingly choose to be part of. The moment I realized I had let out a breath was when the egg was finally hit. The camera angle, not necessarily a moving camera, captures the movement of the golf ball, almost watching as it slows down in the air. Not only is the action caught by the single camera behind the egg, but there is another camera capturing the up-close reactions of the golfers as they hit the ball.

Moment of Implosion

https://x.com/JohnCremeansUSA/status/1843819501068116445/video/1

@tsn

CAN THEY HIT THE EGG?! 😳🥚🎯 (Via: max_woanygolf/TT) #golf #trickshot #challenge #egg


OLD

1. Anish Kapoor, Descension (2015)

Anish Kapoor’s Perpetual Black Water Whirlpool Installed in the Floor of a Former Movie Theater in Italy

 

2. Néle Azevedo, Minimum Monument (2014)

Hundreds of Melting Ice Figures Echo the Intensifying Threat of the Climate Crisis in Néle Azevedo’s Public Works

 

 

3. Bruce Nauman, Walks in Walks Out (2015)

https://www.artbasel.com/catalog/artwork/55855/Bruce-Nauman-Walks-In-Walks-Out?lang=en

Excerpt of Bruce

Best Video of Bruce from Art Basel Exhibition https://www.facebook.com/M23Projects/videos/bruce-nauman-walks-in-walks-out-2015-brucenauman-tate-tatemodern-london-03-octob/415913112189000/

 

 

4. Random International, Rain Room (2012)

https://www.random-international.com/rain-room-2012

One of my FAVORITE projects of all time, I HAD to include it.

https://www.youtube.com/watch?v=EkvazIZx-F0

Typology: Cellphone Obsessions, Surveillance, Collisions

TLDR: I take an embodied approach by intentionally physically colliding with students on campus who are enraptured by their devices, to capture their reactions in real time once they’ve escaped the digital landscape, as a way of revealing how deeply technology influences our awareness and everyday interactions, and how it infiltrates our lives through subtle surveillance.

Research Question

How does cellphone use impact college students’ awareness of their surroundings and interactions with others while walking to class, especially their perception of subtle digital surveillance?

Inspirations

My project was inspired by several works brought up at the project’s introduction as examples of what a typology can be. I was drawn to these typologies primarily because they focused on capturing individuals’ emotional reactions in real time within consistent photographic environments, where the only thing that changes is the individual depicted. The capture of spontaneous emotions like shock or surprise was something I was dying to emulate in my own work.

Bill Sullivan’s More Turns, a study of individuals passing through subway turnstiles.

B’s Typology on Surprise, capturing people’s surprise reactions as they opened a box.

Rachel Strickland, Portable Portraits, cinéma vérité interviews with individuals about the contents of their bags.

Later that week, after Nica and Golan introduced the project, I found campus students on their phones repeatedly running into me, AND an overwhelming majority never said sorry for the collisions. I kept noticing a ridiculous number of people completely absorbed in their devices, visibly bumping into random individuals as they walked. This made me want to illustrate just how embedded phones have become in our lives, especially for young college students, who can’t even put their phones down while walking. Our generation has literally grown up alongside technology’s rapid evolution into incredibly smart, handheld devices, and I wanted to explore this phenomenon of our attachment to technology and its silent surveillance over us.

Initially, I wanted to “go big or go home” and capture the paths of individuals on their phones using a drone to create a timelapse of their movements. However, critiques grounded me and encouraged a more interactive approach to capturing the collisions. One critique group reminded me of the effectiveness of using my own body as the medium that collides with these individuals, allowing my body to literally become part of the experiment—a point that Nica heavily encouraged 🙂

After the critiques, my idea shifted drastically: I would use my body to become the obstacle.

I wanted to gather some data to analyze the project in some way. Tega and the critique groups also introduced the element of the qualitative interview, allowing me to ask the single question, “Could I ask what you were doing on your phone just now?” before walking away to see what happens. This added element of qualitative research was something I was truly SO eager to incorporate, aiming to understand what was truly capturing individuals’ attention and to analyze whether there was any overlap.

Then came the question of how to capture this. I grappled with using a camera with a 360-degree lens or strapping one on, wondering how I could subtly film the entire footage without asking for permission. Ethics was heavily discussed, and since the footage is in a public place, turns out it’s all good – no permission required. After discussing my concerns with Golan, he suggested the PERFECT solution: a Creep Cam.

What is a Creep Cam, you may ask? It’s a small 45-degree prism used by predators on the subway to subtly look up people’s skirts. So, why not repurpose this unfortunately very clever idea in a completely new way?

Examples of Creep Cam online, there are more sophisticated versions that are much smaller and undetectable!
Reflective Prisms: The science behind the madness!

Now that I had my concept and method of capture, it was time to bring it to life.

 

How Did I Develop My Machine?

Phone Rig
Visual of Rig from the Side
Inner Workings of the Rig
Rig before it’s taped to my phone

I was going to 3D print mirrored acrylic, but encountered challenges with permission and printing on a weekend. Instead, I used a small mirror from my NARS blush compact, cardboard, black electrical tape, tape to cover 2/3 of my iPhone camera, and my personal phone to achieve the machine. I combined the cardboard with the compact mirror at a 45-degree angle and used black electrical tape to make the machine blend in with my phone. I also made sure that on the day I went to capture, I was wearing a black top, so it was even more difficult to tell that I was subtly filming people. Additionally, I wanted to look as put together as possible, aiming to draw people’s attention upwards rather than downwards. A little manipulative, but a success all the same!

What people would see walking towards them!
What I could see -> Creep Cam in Action!!!!!!!!!!!!

How Did I Develop My Workflow?

Experimentation & a Small Set of Rules
I limited myself to the area between the CUT and the Mall, specifically near Doherty Hall to the UC. I needed to make sure I wasn’t actively making or forcing a collision; as long as I set myself on the same path, starting to walk and looking down at my phone—and capturing the video of anyone bumping into me or avoiding my path—that was fair game. I wasn’t sure if I needed to stay in place, be more active during rush hours, or walk around the cut, but I needed to experiment and see.

I wore an empty backpack to fit the part, then started to walk around to see what happened and what individuals said.

Part of the process that was way more complex than I thought

1. More Time = More Awareness

It turns out that individuals are a bit more aware than I expected because I gave them more time to notice. What does this mean? Walking around the CMU grounds takes some time, and moving from one end of Doherty to the UC gives individuals more time to spot me than I would have liked. Interestingly, even though I provided them with this extra time, most people didn’t care or would purposely walk toward me to assert their presence and make a point of getting off their phones.

Earlier tests show I’m TOO far away!

2. Green Tape on CMU Sidewalks Controls Traffic Flow

During rush hour, people behaved much like cars on either side of the road; the green tape on the CMU sidewalks made it easier for traffic to flow, creating an unspoken rule of staying in your lane that wasn’t there before. I had much more success colliding with people when I wasn’t fighting with the green tape separating the lanes. I often caught people on the diagonal paths from CFA to CS or from CFA to Doherty, or just in the Doherty area where more people were coming in or out right at rush hour.

The green tape covers Doherty to the UC & Doherty to CFA

3. Big Hallways don’t work!

I also attempted to capture interactions in hallways, but the larger hallways like Doherty and Baker contributed to the “stay in your lane” mentality. 

4. Where did most of the bumps occur?

I realized that most of the bumps occurred when I was bombarded during rush hour by a variety of people, or during ridiculously slow times of the day when almost everyone was in class, a time when individuals knew it was okay to let their guard down.

 

Did I Succeed or Fail?

Overall, I would consider the project successful.

A series of the most “successful” reactions

I had several people run into me, many react and walk around to avoid my path, and numerous individuals told me what they were looking at on their phones without questioning what I was doing (except for one person out of 30). This could be attributed to my nonchalant approach when asking the question, but I believe most people were still in a state of shock or confusion, which made it difficult for them to process my question. Even more, most people would turn their phone to show me, a true stranger, what they were looking at on their phone!!!!! I may just have a truly trusting demeanor haha! A majority of the individuals who answered my question said “grade scope” (CMU’s grading portal for exams/quizzes), “Checking my grades,” “finding my computer,” “scrolling instagram,” “texting my friend”.  I would love to continue the project to gather more quantitative and qualitative insights: does time affect the answers gathered? Are there more phones out during exam week? More phones in the morning than later in the day? How many individuals will say they were on the same application?

The most fascinating aspect was that almost NO ONE recognized the invisible camera. No one asked me what I was up to or if I was recording them. No one told me I couldn’t do it, simply because they didn’t notice the camera! When I reflect on how I conducted the experiment, I’m completely sure that my behavior was a bit odd: just walking up to someone, asking a question, saying thank you, and then walking away. I was genuinely surprised that no one questioned me. The subtlety of not noticing the camera was a bit scary as it relates to the subtle surveillance perspective. However, when I shared my project with friends, they said, “That’s my worst nightmare, never do that to me!” The subtlety of the camera and the angle of looking down at my phone without anyone noticing created a strong impact for the piece, emphasizing the subtlety of surveillance and the impact of technology on our daily lives.

Where do I go from here?

I really want to continue this project!!! More quantitative/qualitative aspects to play with and analyze, more reactions of shock/confusion to capture. I think this project, after having all this time to test and experiment, would benefit from just overloading on typological captures, similar to the initial inspirations I stated in the beginning. The only hindrance is being known as the person waiting to bump into people on the way to class…

This guy was the only one who clocked the camera

 

CLICK HERE to view ALL of the videos (successful & unsuccessful) & photos on the Google Drive! 

*Elements of this blog post have been updated regarding feedback received during critique*

Typology Machine: Cellphone Obsession and Trajectories from Above

Newest Concept Based on Feedback & Critique (Same Theme/Diff Output)

Concept: I plan on capturing campus students who use their cellphones while walking. I will primarily be using the CUT, standing on both the CUC to Doherty pathway and the CFA to Drama pathway. I will act as a stationary collision piece, waiting to see if the individual notices that I am in front of them. A camera strapped to my shirt will capture their reaction. I will then ask the following question: “I’m doing a research experiment, can I ask what you were doing on your phone?” I’ll be capturing their reaction in real time to the question of what was so important on their phone that they couldn’t properly walk or pay attention. If I can get more qualitative insights and interviews with the individual as they walk to class, that would be wonderful.

The final output will have an image that memorializes their realization of looking up from their phone, and right at me. Their interview answers will be typed out in quotations on the bottom of the image.

 

 

 

OLD CONCEPT BELOW:

Concept:
I plan on capturing campus students who use their cellphones while walking, and recording the imagery from a Surveillance/Rooftop Angle. As I walk to class, I notice an enormous number of people walking to class completely consumed by their phones, often bumping into random people as they walk. I want to show just how embedded phones have become in our lives, specifically the lives of young college students, that we can’t even put the phone down as we walk. I’ve seen so many individuals walking down the stairs hunched over their phones. TLDR: an exploration of human behavior and our relationship with technology, especially for individuals who grew up with the evolution of the iPhone/handheld phone.

Collection of Individuals Walking on their Phone

 

Capture Method:
1. Set up a high-resolution camera on a tripod on a single rooftop overlooking the CUT, OR use a drone. (Camera areas listed further below)
2. Capture time-lapse footage from 8 am to 6 pm to cover the rush hours.
3. Use computer vision algorithms to identify and track individuals using cell phones.
A. I would need to develop a system that classifies specific behaviors:
* Walking while texting
* Stationary phone use
* Phone calls on the move
* Group interactions involving phones
4. Have lines that follow each individual, so as you watch every individual go past on the timelapse screen, a line follows them as they walk and stays (see the rough sketch after this list). You see the build-up of phones throughout the day and the paths they take!
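A loose sketch of what step 4’s accumulating trail overlay could look like in Python/OpenCV. Hedged: simple background subtraction stands in for a real phone-use classifier, and ‘cut_timelapse.mp4’ plus the blob-size threshold are made-up placeholders.

```python
# Every detected walker leaves a dot each frame, so paths accumulate over
# the timelapse. Background subtraction is a stand-in for a real classifier.

import cv2
import numpy as np

cap = cv2.VideoCapture("cut_timelapse.mp4")        # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=False)
trails = None                                      # persistent overlay the paths build up on

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if trails is None:
        trails = np.zeros_like(frame)

    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 400:               # ignore tiny blobs (leaves, noise)
            continue
        x, y, w, h = cv2.boundingRect(c)
        cv2.circle(trails, (x + w // 2, y + h), 2, (0, 255, 0), -1)   # dot at the feet

    cv2.imshow("paths", cv2.addWeighted(frame, 0.7, trails, 1.0, 0))
    if cv2.waitKey(1) == 27:                       # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```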

Final Output:
1. The final time-lapse should have so many lines that they map out the direction and length of time during which individuals had their phones out. A line would become dotted if an individual picked up and put down their phone.
2. A large image that shows all captured images of every individual tracked by the algorithm.

Where are my possible camera locations?
Since I need a high surveillance angle, I think a drone during the busiest/rush hours may prove to have the most merit. I think I could use the corner of Shatz Dining Room, so I can get a clear overhead view of the CUT from 3 different directions. The rest of the test images I captured didn’t seem high enough, and other areas encounter more obstruction. There is also the 3rd floor study area, which has a perfect view of the Cut (though with a few trees in the way), where I can position the camera without needing explicit permission. I know there are also surveillance cameras on campus; it would be SO interesting if I could use the security cameras to capture and analyze from different angles. Not sure if CMU police would let me, but it could certainly be worth a try if I am allowed to use footage from weeks before!

Most Promising View: Shatz, but I want more of an aerial view of the cut!!

Closer Hope of Capturing on the Cut, but more of an aerial, birds eye view.

Even more an aerial, birds eye view, but this would be the hope of clear view of individuals when they walk.

What tools do I need?
Most likely a drone because I need more aerial footage. Camera, Tripod, Battery, Signs that say “Do Not Touch – Work in Progress, questions please call [my number here]”, Signed document or confirmation that I could use a specific area or that it is cleared by whoever is in charge of that specific room (ie. Shatz)

Ethical Considerations
1. I’ll blur faces if it’s too obvious who it is, however the phones do a great job of hiding individuals since most people are looking down, not up!
2. Obtain necessary permissions for window surveillance or provided video tapes

Capturing Mundane Moments: Timelapse, Slo Mo, Skeletal Tracking

 

I wanted to choose subjects that focused on constant movement—not just capturing my own movement, but also the motion of the world around me. Specifically, I aimed to document the flow of everyday life using different techniques. There’s something fascinating about speeding up or slowing down the mundane—the things we do every day without often appreciating. Every interaction and emotion we experience becomes part of that unnoticed routine.

Time Lapse – Roundtrip walk to CVS for Cough Medication

The first capture is a time-lapse of my roundtrip walk to CVS to pick up some cough medication. I thought it was interesting because, with each new street, a different scene unfolds. I chose this subject because I find time-lapse tricky to keep engaging—if it doesn’t feel like the scenery is changing, the footage can seem stagnant, which becomes boring quickly. While I enjoyed the overall result and various scenes experienced, the camera focus was slightly off throughout the video, and I couldn’t adjust it, so the footage wasn’t as sharp as I’d hoped. I also realized I’m quite a bumpy/uneven walker!

 

Slo Mo – Monday Morning CMU Rush Hour

The second capture is a slow-mo of people walking around me. After watching life sped up, I wanted to see how it looked slowed down. We’re always living life at such a fast pace that we rarely slow down to notice the details—exactly what I wanted to capture. I filmed during the busiest part of the day, when students rush to their next class. Slow-mo is great for picking up small, intimate details, but I forgot just how slow it really is. I caught a funny moment of a freshman struggling with his backpack strap, clearly wrestling with it in slow motion (0:25). Another part of the video, where the sun completely overshines the person in front of me, made it look like they were walking into heaven (2:10). I loved observing each person’s reactions and interactions—there’s something fascinating about seeing these small, often unnoticed moments up close. It highlights actions we might hope go unseen, but in slow motion, these movements become more pronounced and scrutinized. As a critique, my footage was a bit shaky, and I focused too much on the person in front of me, missing some great interactions happening elsewhere.

 

Skeletal Tracking – Monday Morning CMU Rush Hour

Finally, sticking with the theme of people and movement, I experimented with the Ghost Vision app, which uses skeletal tracking to detect human figures in real time. I was curious to see how well it could capture motion and wanted to test its limits in terms of the number of individuals it could track. When I tried recording horizontally, it didn’t work, but switching to vertical mode captured almost everyone around me impressively well. It was fun to see how accurately it outlined people walking past. Although the heat-sensing features and other gadgets cost extra, I’d consider paying the $1.99 if I needed those functions. But even without them, I was really impressed—it was very accurate with its tracking!

Opportunities within Photogrammetry, Long Exposure, & Emulsions

1. Something Interesting or New
I was surprised to learn that photogrammetry was used as early as the 1900s for crime scene observations. I had no idea that such a scientific approach was used so early in criminal investigations, and it’s impressive how these early methods laid the foundation for modern crime scene analysis. Reading about Alphonse Bertillon’s contribution to this process, especially by reducing the manual work needed through orientation points, was interesting.

I’m curious how long it took to set up the process of documenting scenes and capturing images within a grid for precise measurements and accurate scene recreations. I often get measurements wrong, so I can’t imagine the pain of remeasuring and redoing the entire process for the perfect picture of a crime scene.

[Album of Paris Crime Scenes], attributed to Alphonse Bertillon (French, 1853–1914), gelatin silver prints, 1901–8.

Link: https://www.metmuseum.org/art/collection/search/284718

 

2. Interesting Artistic Opportunity
One artistic opportunity made possible by scientific approaches to imaging is the ability to capture the passage of time, as seen in early photographs of solar eclipses in the 1800s. These images didn’t just record the eclipse but allowed for the observation of celestial movement over time. I’m incredibly interested in space, and if I remember correctly, images around this time period were even used to calculate the measure of distance traveled at the time. The ability to document these distant phenomena so early on, deeply impacted humanity’s understanding of space! Henry Rowland’s 1888 photographic map of the solar spectrum is a great example of how extended exposure and manipulation of light-sensitive emulsions provided a detailed glimpse into natural phenomena otherwise invisible to the naked eye. Technology has advanced so far — we now have solar glasses to observe eclipses and photography tools that illustrate greater detail and color!

I find the idea of observing processes over time, especially in space, fascinating—how light influences these processes and how different stages can be documented. The emulsion process and long-exposure photographs, in particular, offer an incredibly unique way to visualize phenomena that lie beyond our natural perception.

 A long-exposure photograph reveals the apparent rotation of the stars around the Earth. (Photograph ©1992 Philip Greenspun.) Link: https://earthobservatory.nasa.gov/features/OrbitsHistory