eyecrusty – The Claw

Throughout my childhood, the one thing that I had always loved was the arcade. I remember it with a great fondness – how the bright lights, speedy energy, and overall environment always seemed to bring out the excitement in everyone, child or adult. I sought solace in this space and its positive aura. I would always be grinning from ear to ear as I skipped from one side of the arcade to another, playing game after game. I was happy. I was calm. I was content.

But at one point in the night I would always reach one game in particular. A game that made my blood boil; a game that was relentless. It was the bane of my existence, my Achilles heel – the dreaded claw machine.


Growing up and never winning a prize at said game has probably manifested into this all-consuming need I have to see others experience a similar suffering. I initially planned to collect facial expressions that convey the emotion of frustration by recording people the moment the item they are trying to win during a claw machine game slips out of the claw. However, this idea soon evolved into capturing visualizations of a much more weighted reaction – the general feeling of loss. While the game is simple and lighthearted in nature, I feel like these specific, fleeting moments carry a much deeper implication.

Unexpectedly, the limitations of the toy claw machine that I used (i.e. each round only lasting 60 seconds) granted me the ability to diversify my captures: I could record not only reactions to loss, but reactions to different kinds of it.

I split my data collection into two parts – reactions to the loss of time and reactions to the loss of an object (in this instance – the paper balls). I used three cameras – a GoPro, a Z Cam, and a Nikon DSLR – all pointed at the subject from different angles and distances to record them playing. Each person recorded was given three coins, each of which granted them a minute of play time. They had to collect three balls of paper in a single round to win a prize (a mini pack of M&Ms). The final video consists of four parts – the intro, the control group (footage of how people look when just navigating the claw around), the captures of the loss of time, and the captures of the loss of an object.

(I also placed a trick ball inside of the machine – a piece of tape that I stuck to the floor – to make the game even more confusing for some!)

My setup looked like this:

All cameras in their positions.
GoPro mounted on the claw machine.
GoPro mounted on claw machine, Z Cam placed in front of subject.

I was mostly inspired by two pieces of work. Conceptually, I was really interested in Shooter (2000) by Beate Geissler & Oliver Sann because I thought the idea of capturing a particular moment in time, when emotions and concentration are equally at their highest, was very intriguing. In terms of putting the footage together, I was really inspired by how Charles and Ray Eames approached displaying a typology through a montaged film like Tops (1969). I used the actual audio that the claw machine itself emits as the backing track, and since the tempo of the claw machine changed as players progressed through a round, the footage also needed to be paced accordingly. This format made it easiest to sculpt that timing and narrative.

If I could do this over again, I would make sure that every participant was seated in the same position for each take and that I stabilized the lighting so there were fewer inconsistencies in the overall composition, but I am overall pretty content with the end result!

**Content Warning: Profanity/Swearing**

bumble_b-TypologyMachine

My project, conducted in Carnegie Mellon’s School of Drama, was intended to capture surprise.

This idea went through many iterations. Originally wanting to capture curiosity, I thought of hanging a weird-looking box on a wall that people could peek into, or putting a box on a table that people would open.

I eventually landed on the box on a table idea because I was really interested in capturing photos at this angle:

The next challenge I faced was… what would be in the box? What would be worth my participants’ while? With Nica’s help, we landed on a kind of ridiculous idea that also shifted my project from the concept of “curiosity” to “surprise.”

I would be inside the box.

The next challenge was… how the hell do I get inside a box?

I decided to construct my own trick table out of a big cardboard box my roommate happened to be throwing away (she bought a bookshelf or something). I cut holes in it for my head and hands (hands to hold the camera) and also bought some tablecloths from Target that I could cut holes into to match.

I stole a box from the School of Drama building (sorry), cut a hole in the bottom of it, and boom.

I had my table…

…with a little surprise.

I recruited my friend Reiley to help me get participants since, you know, I was in a box.

We set up a decoy camera at a different angle to make people think that was what we were recording with, since we needed to ask if they were okay being recorded, and it would have been suspicious to ask that with no camera in sight. (We did actually record from that angle so we had a record of consent, which proved helpful in more ways than one, as you will see at the end.)

We did a couple of trial runs, only to realize that me snapping pictures from that angle was capturing the edge of the box more than anything, and not really a clear picture of the participants’ faces.

There was also that ugly light in the way that we had to do some finessing to fix.

After more trials, we realized that people were interacting with me and the box in more significant and interesting ways than what a picture could capture, so I reimagined my setup.

We turned the table around so the camera was above me, and we let it record video of the interactions so I could have a more impactful snapshot of my participants.

Enough rambling… here are some of the actual results:

The really heartbreaking part about all of this is that, the next day, when I went to move all of my videos from the SD card onto my hard drive to begin working, something went wrong and about three-quarters of the footage got corrupted, even though every file had played fine the day before (I think something went wrong during the transfer).

I spent an entire day scouring the internet on how to get them back but couldn’t figure it out. I was so heartbroken because this was such a wonderful and fun experiment that I put so much thought into, and I was so thrilled with the interactions and results I got out of it.

Most of the usable and interesting footage was gone, like when somebody opened the box and kissed my forehead, or when somebody kept opening and closing the box like they were playing peek-a-boo with me.

I still have a little hope that I can get the footage back someday (though I did have a nightmare that the SD card snapped in half and there was no hope left), but thankfully, all the interactions were recorded on the second view. Though the second view wasn’t set up to take a good shot or be a typology in any way, I’m happy I at least have proof of the cute interactions I had.

This is definitely a pretty heartbreaking end to one of my favorite projects I’ve ever done, but like I said, I’m an optimist and have a little hope left in me!

That’s all…

…for now!

shrugbread- Cemetery Mushrooms

In this project I created a typology of spore prints from wild mushrooms foraged in Homewood cemetery, and displayed them with the names of the buried they were found closest to.

My question was to see if I could capture the environmental impact of human practices around death. This question became a typology of how to represent life coming out of death.

Mushroom life cycle — Science Learning Hub

In the life cycle of a mushroom, the purpose of the fruiting body’s stem-and-cap shape is to drop spores from the gills on the underside of the cap. Almost all mushrooms spread hundreds of thousands of small spores to be carried by the wind and reproduce away from the fruiting body. Spore printing takes advantage of this unique method of reproduction.

How to: Make Spore Prints - Milkwood

Spore printing is done by cutting off the stem, placing the cap gill-side down on paper, covering it, and letting it sit for 2-24 hours as the spores fall. Spore printing is mainly used by foragers for mushroom identification, as the color of a mushroom’s spores may be one of the only differences between an edible mushroom and a poisonous lookalike. Spore printing is also a tool for archiving and preservation, as the spores can lie dormant for years in a well-maintained spore print, ready to restart the life cycle of the fungus.

I took inspiration from the cyanotype photography of Anna Atkins and the snowflake photomicrographs of Wilson “Snowflake” Bentley. Both dealt with very delicate natural phenomena. I find a specific kinship with Bentley because of the delicate nature of both spore prints and snowflakes: in order to capture them, you have to disturb them as little as possible. A single finger smudge can completely ruin any image you’re trying to create. I also find kinship with Anna Atkins’s photography, as it is specifically about the relationship to ecology and setting. Mushroom and plant species vary across the world, but only in western Pennsylvania will you find this specific set of mushrooms.

Anna Atkins | Spiraea aruncus (Tyrol) | The Metropolitan Museum of Art
Snowflakes: Wilson Bentley’s Civil War | Harvard Art Museums

The hardest part of this project was the organization of my assets. I lost track of which prints were produced by which mushrooms multiple times; had I set up a system of documentation early on, I could have saved myself a lot of pain later. Another challenge with the spore printing method is that it depends on the age of the mushroom collected. A mushroom picked too young won’t drop its full set of spores, while a mushroom picked too old won’t show up on the paper at all. I ended up gathering twice as many mushrooms as spore prints.


I missed many opportunities for a more refined and full typology by being almost entirely focused on foraging and scanning. I didn’t have enough time to consider presentation and storytelling fully, and finding a connection between the prints and the gravesites remains a challenge. Despite this, I gained a strong connection to the land in Pittsburgh, as well as connections with other local mushroom hunters.


Muted Identities

Introduction:

This project selects, explains, and visualizes – through AI image generation – the often-ignored Chinese names of international students.

link to “Muted Identities” project website

Inspiration:

As an international student from China, one of the things I hear the most while studying in the US is that “Chinese students tend to be shy and quiet”. In reality, the personalities of international students are often “muted” when they speak another language. While they might be sharp, funny, or wild in their mother language, such sparkling qualities do not always get “translated” when they’re living in an English-speaking country.

This project attempts to recover the identities these students lose to the language barrier while in a foreign country by unveiling their original names, which are often ignored and forgotten in an English-speaking environment.

Process:

The participants were asked to sign their English and Chinese names digitally in their usual handwritten style. They were then asked to describe the meaning of their name.

The description of their name was then given as a prompt to the Midjourney AI image generator, which produced a corresponding image.

The participants would then give feedback on the generated image, such as preferred composition, specific art style, or color scheme, until the image was fine-tuned to their own interpretation of their name.

Example: The Chinese name of Bella is 刘宇辰, which means the universe, stars, and dragon. The description “A Chinese dragon in a starry universe” was given as a prompt to the Midjourney AI generator. The image went through several iterations until the name’s owner was satisfied with the result.


Explanation:

The website displays the English names of eight international students. The flip cards “reveal” the true names of these students upon hovering. Further explanation and visualization of each name can be found when a flip card is clicked.

An interesting observation can be made when comparing their given name in Chinese and their chosen name in English. It subtly reveals a process of choosing a self-identity, a “doppelgänger”, that represents them in a new country. The one that chose to be the feminine “Bella” was given a masculine name at birth, for example, and the one that chose to be a self-created word “Rigeo” was given a commonly used name at birth.

Feedback:

The project, while very simple conceptually, resonated with people more than I expected. When the project was shown in class and in private to my friends, many more international students – including Korean and Japanese students – showed interest in adding their names to the list. They were enthusiastic about creating visual representations of their names.

ultrablack-TypologyMachine

Journal of breath

 Qixin Zhang, 2022, mixed media – audio, paper drawing

My project aims at capturing different breath patterns by putting myself in different scenarios, in the process revealing my mysterious life.

The breath of sweet sleep, the breath of anxiety, the breath of talking to someone you have a crush on, or even the breath of making love? Every moment of breathing is a different pattern, a different physical experience, a time experience, and an emotional experience. Your breath leads you, ahead of your judgment; it is essential, it keeps you alive.

I brought a Zoom H6 recorder with me every day, everywhere I went. It represents my ordinary daily life, and also shows my tendency to look for drama and performative/ritual experiences.

To add visual elements, I made a drawing to go along with each piece. The drawings represent the experience in another way.


The challenge was to be in the moment and also outside of it – to remind myself to connect the device and start recording. Sometimes I was so immersed in my emotions that I missed the chance to record. This setup also didn’t allow me to record my breath during big movements. But the recording itself proved easier than expected, and the audio quality is quite nice.

The opportunity that remains is to use this medium to augment storytelling. There’s a way to make myself into a character and live my life as performance art, following scenarios that are partly predesigned and partly unknown. Secondly, I’m interested in instrumental performers – their body movements, their breath, the music they make out of movement and the music they hear – and how, in the moment of performing, they forget themselves.


Advert Inc Typology Machine

My typology machine is a little online game which asks players working at the poorly managed Advert Inc to name knock-off products for resale. The only catch… the objects don’t make sense and the boss hates all their ideas!

You can try the game for yourself here… or keep reading for a collection of my favorite answers. 

Still from the start of the game.

.gif of nonsense object 1

and suggested names:

  • Strangest octopus 
  • Lube
  • Bubble creator 
  • Blip Blop 
  • Hole Jelly

.gif of object 2

suggested names:

  • Slow blender
  • Film canister 
  • Screw 
  • Sprocket 
  • Turny wheel 
  • Cylinder boy 

.gif of part of the gameplay.

I staged the images and videos for this game in my living room and captured them using an iPhone and an app called iMotion. I then compiled these images, videos, and my script into an in-browser game. I coded the frontend in HTML, CSS, and JavaScript (this is what allows you to see the graphics and click), and then learned to write some basic Python 3 to connect it to Flask and LMDB (this is what allows you to see the answers of other people who have also played the game), as sketched below.
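
For the curious, the server side of that is only a few lines. Here is a minimal sketch of the Flask + LMDB plumbing, with illustrative endpoint names, file path, and key scheme (not the game’s exact code):

```python
# Minimal sketch: one POST route stores a suggested product name in LMDB,
# one GET route returns every stored answer for display in the game.
from flask import Flask, jsonify, request
import lmdb
import time

app = Flask(__name__)
env = lmdb.open("answers.lmdb", map_size=10 * 1024 * 1024)  # 10 MB is plenty

@app.route("/submit", methods=["POST"])
def submit():
    name = (request.get_json(silent=True) or {}).get("name", "").strip()
    if name:
        with env.begin(write=True) as txn:
            # Nanosecond-timestamp keys keep answers in submission order.
            txn.put(str(time.time_ns()).encode(), name.encode("utf-8"))
    return jsonify(ok=bool(name))

@app.route("/answers")
def answers():
    with env.begin() as txn:
        return jsonify([value.decode("utf-8") for _, value in txn.cursor()])
```

Gunicorn then fronts the app in production (e.g. `gunicorn app:app`).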

The aesthetic tries to draw on the feeling of older video games (think Pokémon on the Game Boy), personal websites, and digital zines. In short, it leans into the ugly and potentially unmarketable.

In keeping with its inspiration I wanted this typology to have the potential for humor. Seeing these objects and speculating about what they’re for is fun. But I also wanted to poke fun at hyper commodification and pointless jobs. The objects don’t make sense and yet they’re being sold (and you have to sell them). Jim is overly familiar and unprofessional and you still need to try and please him (but nothing you say is correct). This typology collecting machine lives online because I’ve been thinking a lot about how this hyper commodification has been affecting internet spaces… Increasingly I’ve watched censorship for the sake of advertisement make social media more sterile and more difficult to navigate. I guess in some ways this work is nostalgic for an idealized version of the internet I didn’t experience. Or… it could just be some silly names for objects. I think either is fine.  

Apps: iMotion. Languages: HTML, CSS, Python 3, JavaScript

Libraries: Flask, Gunicorn, LMDB

Specific Tutorials: 

Important/extensively referenced Documentation:

People to thank: 

Noah, for listening to what I wanted to make, sending me some amazing tutorials, and catching so many bugs as I learned about LMDB for the first time (oh, and for accepting payment in cookies)

Golan and Nica, for listening to a million iterations of this idea and saying things like, “couldn’t it be weirder?” 

My classmates, for really helping shape this idea and encouraging me to have fun with it (and being oh so willing to help)

My classmates (again) and friends (and everyone else) for playing the game and being part of the typology

Facial Recognition Recognition … but with drawing

I proposed to create a typology of surveillance power by building facial recognition recognition. That is, I planned to build a computer vision pipeline that can automatically recognize clippings of surveillance cameras in images of street scenes, and an image classifier that categorizes these segmented images by the kind of camera, their technical capability for facial recognition, and the institution they are owned by and send data back to. This would enable me to automate the creation of both power maps and topographic maps of surveillance – that is, both representations of the relationships between the institutions of power the surveillance acts on behalf of, and representations of the spatial locations where different kinds of surveillance can be found.
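
As a sketch of what that first stage could look like: detect candidate cameras in a street photo, then hand crops to a downstream classifier. This assumes a Faster R-CNN fine-tuned on hand-labeled surveillance-camera examples – the stock COCO weights have no such class, and the checkpoint name below is hypothetical:

```python
# Sketch of the pipeline's first stage: find surveillance cameras in a street
# scene and return crops for a second classifier (camera type, facial
# recognition capability, owning institution). Assumes a Faster R-CNN
# fine-tuned on hand-labeled camera images; "camera_detector.pt" is a
# hypothetical checkpoint, not an existing model.
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

detector = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)  # background + camera
detector.load_state_dict(torch.load("camera_detector.pt"))
detector.eval()

def find_cameras(path, threshold=0.7):
    """Return cropped images of likely surveillance cameras in a photo."""
    image = Image.open(path).convert("RGB")
    with torch.no_grad():
        (prediction,) = detector([to_tensor(image)])
    return [
        image.crop(tuple(box.tolist()))
        for box, score in zip(prediction["boxes"], prediction["scores"])
        if score >= threshold
    ]
```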

This project is a step toward that. I am inspired by the maps of Manhattan surveillance cameras in the Institute for Applied Autonomy’s Routes of Least Surveillance. This work is also inspired by the Pittsburgh Surveillance Walking Tour made by the Coveillance Collective (of which I am a member). The idea of an abstracted map came from “Learning from Las Vegas” by Denise Scott Brown, Robert Venturi, and Steven Izenour, which depicts the street signs visible from the Las Vegas strip (shown below).

While similarly enthralled with the geography of surveillance and with exhaustive geographic accounts of specific features of a place, my new work seeks to build on these projects by recognizing that surveillance cameras are not singular, but varied in their age, their technical capabilities, their owner, and who they do and do not share data with. This work seeks to draw tentative answers from a detailed study of cameras’ outward characteristics.

I had originally intended to build a computer vision algorithm that would recognize surveillance cameras, and classify them according to their capacity to support facial recognition and resulting data owner. However, after attempting to collect training data for this idea, I realized that a more detailed examination of the physicality of the varied cameras that appear in the Pittsburgh landscape was needed. I therefore decided to create drawings of these cameras in three short stretches of street across Pittsburgh. 


***

Border of The Hill and Duquesne University

Context: The Hill District is a historically Black neighborhood. Just at its southwestern tip lies Duquesne University, a private, predominantly white university.



Appears to be mostly older-generation cameras, of uncertain facial recognition ability.  

Hardened, placed high up on private businesses. 

PNC bank had a camera swiveling loudly on a timer. 

Duquesne has a “Blue Light Pole” of dubious effectiveness with a camera towering atop.

***

***


Finally, by placing the camera drawings on corresponding maps and comparing across these three neighborhoods, I reached tentative hypotheses. Besides being presented here, these maps, drawings, and hypotheses appear on a poster.

As Kelley Wilder notes in her book Photography and Science, cameras are often used to record the particular, whereas drawing provides a medium suited to depicting the “ideal specimen” by abstracting away extraneous visual information. Also, noticing small differences between ostensibly similar cameras requires a long period of focused looking – the same kind of looking that is required to produce a drawing. In this way, drawing is the appropriate capture technique for this project.


***

Bloomfield

Bloomfield is an "up and coming" historically Italian neighborhood, now home to hip bars and restaurants. 

Many newer generation IoT cameras, higher resolution and internet-connected. 

Footage could be shared with law enforcement directly through partner programs.

Often lower to the ground, more at eyeline.

Some video doorbells.

***

***

To produce this project, I meticulously drew every visible surveillance camera on three stretches of street: in Bloomfield, on Walnut Street, and at the border of The Hill District and the Duquesne University campus. I then carefully digitized these drawings and placed them on a map showing exactly where each camera exists. Finally, I wrote a short analysis of the kinds of surveillance technology characteristic of each neighborhood, presented on the poster and interspersed in this blog post.

***

Walnut Street.

Context: this is the bougie shopping street. Think Apple, Banana Republic, and overpriced cocktail bars. 

Lower density of privately-owned cameras. 


City-owned cameras on utility poles. 

Highly visible street signs announcing this surveillance. 

Who gets this data? How?

***

***


I have not yet achieved my goal of facial recognition recognition: a computer vision system able to recognize cameras capable of facial recognition. However, this project is a step in that direction, and it opens new opportunities, chiefly: when I do build such a system, what can we learn about the different modalities and owners of surveillance across different neighborhoods?

See the entire full-res poster, including the maps side by side, here.

***

David Gray Widder is a Doctoral Student in the School of Computer Science at Carnegie Mellon University where he studies how people creating “Artificial Intelligence” systems think about the downstream harms their systems make possible. He has previously worked at Intel Labs, Microsoft Research, and NASA’s Jet Propulsion Laboratory. He was born in Tillamook, Oregon and raised in Berlin and Singapore. You can follow his research on Twitter, art on Instagram, and life on both.

***

bonus @Bankrupt Bodega:


Vitruvian Self-Portraits : Typology Machine


“Leonardo envisaged the great picture chart of the human body he had produced through his anatomical drawings and Vitruvian Man as a cosmografia del minor mondo (cosmography of the microcosm). He believed the workings of the human body to be an analogy for the workings of the universe.”

We can find out a lot about ourselves when given an opportunity to play with an Etch-A-Sketch and a microscope.

This project is a typology crafted through an apparatus that is able to collect Etch-a-Sketch self-portrait drawings from passersby.

Please consider changing the playback resolution to 4K for an optimal viewing experience.

I’ve enjoyed the freedom of viewing this project from the perspective of an alien and observing human behavior through an anthropological lens. What was immediately exciting to me about this prompt was giving myself the means to build an “alien machine” – it allowed an excuse to engage with the public in a more direct way. I first set out to somehow capture individuals’ drawing processes over time. I attempted to draw under the MacroZoom MZ .7 to 5x Zoom microscope, but alas, pens bleed too much…

I’ve also been reflecting on Western culture’s complicated dynamic with structure and order. In fact, strangely enough, the drawing of the Vitruvian Man comes up when you Google “Western culture”. My goal was to capture a large number of strangers all throughout Pittsburgh (and hopefully beyond). With that being the case, I had to prioritize a system portable enough that I could at least set up in some larger public and liminal spaces throughout CMU.

Each video portrait was approximately 2-3 minutes long.

Script:
“Are you willing to participate in a series of self-portraits? I’m asking people to draw the universe in relationship with your body through an Etch-a-Sketch. You’re going to stand here and try to work in this little region of the microscope.  Try to look ahead as much as possible. You can look down if you need to see if you’re out of bounds. And with that, you have two minutes and your time starts now.”

click here for more Vitruvian Self-Portraits

A special thank you to Nica Ross, Golan Levin, and the Studio for Creative Inquiry for providing the space and resources to make this project happen.

marimonda – TypologyMachine

My First ASMR

This project is a typology of human expressions to reactive (and potentially disturbing) auditory content.

(For the best experience, please wear headphones)

For this project, I was interested in exploring the variation that exists across the sensory experiences of different human beings. As someone who is rather sensitive to sound due to misophonia, I was interested in how different people react to unusual sounds. For example, chewing sounds might be appealing to someone who experiences ASMR, or absolutely distressing (and even enraging) for someone who has misophonia. Most people lie somewhere between these two experiences, and through their faces they reveal a lot about how they experience these distinct auditory inputs.

Throughout multiple iterations of this project, I was interested in exploring the variety that exists in human emotions – whether they be my own or someone else’s. While exploring auditory input, I was introduced to this project by Olivia Cunnally, which captures Grima – a mostly universal response to deeply unsettling sounds. Since sound is something I struggle with, I decided to double down on this idea.

To create this project, I first had to generate the audio using a 3Dio binaural microphone. A binaural microphone accurately places auditory input in 3D space, making it feel like sounds are closer to or further from you. With a combination of hand sanitizer, water, my mouth, and a pin, I was able to generate an array of sounds to sample from. I ended up with approximately 30 minutes of audio. I then chose the most disturbing audio samples (through the feedback of my friends Bella and Lauren) and used them to create a 5-minute video.

I then recorded various reactions from unsuspecting participants using my phone’s 240 fps slow-motion camera. I told each person that they would be listening to 5 minutes of potentially distressing audio and that I would be recording their reactions. Finally, I compiled 12 of the video reactions I recorded into a 4 × 3 grid, with their reactions synchronized to the audio. The video and audio were slowed to half speed to make the expressions more dramatic.
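
(For anyone wanting to reproduce the compositing step: the grid and half-speed effect can also be scripted with ffmpeg’s xstack filter rather than a video editor. A minimal sketch, assuming twelve equal-length, pre-synchronized clips and the final audio mix under hypothetical filenames:)

```python
# Sketch: tile 12 reaction clips into a 4x3 grid and slow video + audio to
# half speed with ffmpeg. Filenames and the 480x270 tile size are assumptions.
import subprocess

clips = [f"reaction_{i:02d}.mp4" for i in range(12)]
inputs = [arg for clip in clips for arg in ("-i", clip)] + ["-i", "binaural_mix.wav"]

# Scale every clip to the same tile size so the grid cells line up.
tiles = "".join(f"[{i}:v]scale=480:270,setsar=1[v{i}];" for i in range(12))

# xstack layout: x_y offsets for 4 columns x 3 rows of equal-sized tiles.
layout = (
    "0_0|w0_0|w0+w0_0|w0+w0+w0_0|"
    "0_h0|w0_h0|w0+w0_h0|w0+w0+w0_h0|"
    "0_h0+h0|w0_h0+h0|w0+w0_h0+h0|w0+w0+w0_h0+h0"
)

filter_graph = (
    tiles
    + "".join(f"[v{i}]" for i in range(12))
    + f"xstack=inputs=12:layout={layout},setpts=2.0*PTS[grid];"  # 2x timestamps = half speed
    + "[12:a]atempo=0.5[slow]"  # matching half-speed audio, pitch preserved
)

subprocess.run(
    ["ffmpeg", *inputs, "-filter_complex", filter_graph,
     "-map", "[grid]", "-map", "[slow]", "grid_reactions.mp4"],
    check=True,
)
```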

Overall, I am very pleased with how this project turned out. I am not someone who regularly works with video or audio, so I know there is a lot of room for improvement in how I conducted my project. First, I think there might be a couple of points where the video isn’t perfectly synchronized to the audio. I think this is because I synchronized the videos using the original audio of the clips themselves (via a click) before joining them. Instead, I should have used visual cues to make this easier for myself (having people clap, for instance). I do think this is something I can revise, but I have been having issues with the size of my project, so exporting and editing have been taking a very long time (I will not use Lightworks again). In the end, I am still very pleased with the final outcome, but I have learned to be more cognizant of my videography in the future!
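
(The click-based alignment can also be automated: cross-correlating each clip’s audio against the reference track gives the offset to trim by. A minimal sketch, assuming mono WAVs at the same sample rate extracted from each clip, e.g. with `ffmpeg -i clip.mp4 -ac 1 clip.wav`:)

```python
# Sketch: estimate how far a clip's audio lags a reference track by
# cross-correlation; the peak marks where the shared click lines up.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

def offset_seconds(reference_wav, clip_wav):
    """Seconds by which clip_wav lags reference_wav (negative = leads)."""
    rate, ref = wavfile.read(reference_wav)
    _, clip = wavfile.read(clip_wav)
    ref = ref.astype(np.float32)
    clip = clip.astype(np.float32)
    correlation = fftconvolve(clip, ref[::-1], mode="full")
    lag = int(np.argmax(correlation)) - (len(ref) - 1)
    return lag / rate
```

Each clip can then be trimmed by its measured offset before compositing.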

Thank you:

Lauren, Golan, Bella, Shelly, Leo, Matthew, Qixin, Neve, Joyce, Sarah, Hima, Emmanuel, Milo, Will, Ashley, and Mikey, for allowing me to take footage of them!

Nica, Cassie, Bishop and Harrison for listening to the audio.

typology project where I react to short-form garbage

tldr: I strained myself to enjoy the feed of an Instagram bot, and I did.

I find the ecosystem of low-effort algorithmic Instagram content pretty interesting. It’s a mess of scams and thirst traps and fake engagement and the like, all driven by bots driven by people driven by some profit incentive. It spits out a bizarre blend of bot- and human-made content that exists quite far outside any social media feed I’ve ever experienced.

After seeing some of the strange forms this content can take, I was inspired to inject myself into this ecosystem and create a bot to aggregate and somehow classify the content. I still think this premise would be fun to fully pursue, but a consistent critique was “you get a bot that creates a river of shit. Are we supposed to find that shit fascinating?” After some contemplation and feedback, it became clear that most people do not find all this inherently interesting, so the question became “how can I communicate that interest?” The answer became “I’ll just tell them what I find interesting.”

Step one was to carve a river of shit. I recovered an Instagram account I’d made in middle school (old accounts are less likely to get banned) and acted like a bot. I went on popular hashtags inundated with bot posts and comments advertising promo services, promoting scams, or trying to gain followers quickly through mutual-follow tags. The vast majority of accounts that follow bots are bots themselves, so I sought to recreate the kind of feed they would see. Doing so wasn’t too difficult.

Step two was to record myself reacting to that river. Initially I interacted with the feed without restrictions on myself, sometimes exploring the comments and profiles associated with posts, but following more feedback, I tightened it down to a series of around sixty 10-second recordings.

I still feel the set of recordings I made without restrictions paints a much more interesting picture of the ecosystem I wanted to explore. However, the shorter recordings are more digestible and a tad more engaging, so they are the deliverable for this project.