person in time: 3 project ideas

Making “poop-kin.” Inspired by Donna Haraway’s “Making Kin in the Chthulucene,” this project aims to observe the conversation between me and my microbiome, a non-human set of organisms that researchers have described as the human body’s 12th organ. The conversation is evidenced by what I eat and what I defecate.

Over several days, I’d collect as much information as possible about “inputs” and “outputs,” visualizing those captures over time.

The human body has more fungi, bacteria, and virus cells than human cells. From a presentation by Julie Segre, PhD, researcher at the National Human Genome Research Institute.

A day of fluids. We are goopy, mushy, watery meat bags, yet these movements are invisible within our own bodies, or abstracted away by modern conveniences. We are rarely confronted with what fluids actually flow through us. What are the cycles of fluid/goo happening in our bodies? In the span of one 24-hour period, I will collect time/volume/imagery of as many fluids as possible. This information can be used to create a “fluid homunculus,” an abstracted visual of the cyclical fluid processes happening within and without us every single day.

Luckily, I already have data from a previous data visualization project at the NIH that shows how much blood my heart pumps over the course of a heartbeat, which I can incorporate.

Eyeballs. What patterns are there in the eyes over the course of the day? Using video processing methods/machine learning, can I reveal the ebbs and flows of the eyes in one day? Will blood vessels/flow grow and shrink? Am I glazed over for the whole day or actively looking?

Sonogram portraits of heartbeats

When a sonogram is used as a device for portraiture, what nuances in the motion of the heart can be seen across people? When people encounter a sonogram-enabled view of their heart, it’s usually in a scary, small, cold room. Our relationship with it is a clinical one: terrifying and disembodying.

source: Oklahoma Heart Hospital

However, there is value in re-encountering the heart outside of a medical context. Contact with hidden parts of the body can become, as opposed to an experience of fear, one of joyful exploration. We can observe our hearts moving with rhythm and purpose; a thing of beauty. Conversely, we can appreciate the weird (and meaty) movement that is (nearly) always there with us. A collection of hearts helps us explore the diversity in this motion.

Check out the interactive: cathrynploehn.com/heart-sonogram-typology

Toolkit: wireless ultrasound scanner by Sonostar Technologies Co., Adobe After Effects, ml5.js

Workflow

I created three deliverables for this project, using the following machine:

In developing this process, I encountered the following challenges:

Legibility. Learning how to operate a sonogram and make sense of the output was a wild ride. I spent several hours experimenting with the sonogram on myself. Weirdly, after this much time with the device, I became accustomed to interpreting the sonogram imagery. In turn, I had to consider that others might not catch on to the sonogram images as quickly; I might need to provide some handholds.

Consistency. Initially, I was interested in a “typology of typologies,” in which people chose their own view of their heart to capture. I was encouraged instead to consider the audience and the lack of legibility of the sonograms. I asked myself what was at the core of this idea: a relationship with the body. The magic lay not in the act of choosing an esoteric, illegible sonogram view, but in the new context for the sonogram. Further, I realized it would be more compelling to make the sonograms themselves legible, so that the motion of the heart could be explored across different bodies. That’s where the playfulness could reside.

Reframing the relationship with our bodies. Feedback from peers centered around imbuing the typology with playfulness, embracing the new context I was bringing to the sonogram. Instead of a context of diagnosis and mystery, I wondered how to frame the sonogram images in a more playful way. One aspect was to shy away from labeling, and to embrace interactivity. Hence, I created an interactive way to browse the typology.

Once I focused on a simple exploration of the motion of the heart, the process of capture and processing of the sonograms became straightforward, allowing me to explore playful ways of interacting with the footage.

Deliverables

Looping GIF of single hearts

 

First, I produced a looping video for each person’s sonogram. Some considerations at this stage included:

Using a consistent view. In capturing each sonogram, I positioned the probe to capture the parasternal long axis view. This view is easy to access with the sonogram, shows several chambers of the heart in cross section, and provides a clear silhouette of the heart.

Cropping the video length to one beat cycle. One key to observing motion is to get a sense of it through repetition. In order to create perfectly looping GIFs, I cropped the video length to exactly one beat cycle. I scrolled through the footage frames, searching for where the heart was maximally expanded. Each beat cycle consists of an open-closed-open heart.

one beat cycle

Masking the sonogram to just the heart. Feedback made it clear that the sonogram’s fan shape was a little distracting. Meanwhile, labeling the chambers of the heart struck some as making the sonograms sterile. For these reasons I masked the sonogram footage to the silhouette of the heart. The silhouette provided some needed legibility without the need to label the heart. Further, it became easier to compare hearts to one another by size.

Here’s the final product of that stage:

a complete heart sonogram loop

Looping GIF of all hearts, ordered by size

Next, it was time to compare hearts to one another. As I put together a looping video of all the sonograms, I considered the following:

Ordering by size. One cool relationship that emerged was the vast difference in heart size across people. Mine (top left) was coincidentally the smallest (and most Grinch-like).

Scaling all heart cycles to a consistent length. At the time of sonogram recording, each person’s heart rate was different. As I was seeking to explore differences in motion, not heart rate, I scaled all heart cycles to the same number of frames.

remapping all videos to the same length in After Effects
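Conceptually, this normalization is just a resampling of each beat cycle onto a common timeline. Outside of After Effects, a rough JavaScript sketch of the idea might look like the following (the frame arrays and target length are illustrative placeholders, not my actual footage):

```javascript
// Sketch: normalize every heart's beat cycle to the same frame count,
// so differences in motion (not heart rate) are what you end up comparing.
// "frames" is a placeholder for an array of extracted video frames.

const TARGET_FRAMES = 48; // hypothetical common cycle length

function resampleCycle(frames, targetLength = TARGET_FRAMES) {
  // Nearest-neighbor resampling: for each position in the normalized cycle,
  // pick the source frame at the matching position in its own cycle.
  const resampled = [];
  for (let i = 0; i < targetLength; i++) {
    const t = targetLength > 1 ? i / (targetLength - 1) : 0; // 0..1 through the cycle
    const srcIndex = Math.round(t * (frames.length - 1));
    resampled.push(frames[srcIndex]);
  }
  return resampled;
}

// e.g. one heart captured at 31 frames per beat, another at 52:
// const heartA = resampleCycle(framesA); // both now TARGET_FRAMES long,
// const heartB = resampleCycle(framesB); // so they stay synchronized when looped
```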

Differences in motion. With all other variables held consistent, neat differences in heart motion emerge. Compare the dolphin-like movement of the heart in the top row, second column, to the jostling movement of the heart in the top row, fourth column.

All heart sonograms, synchronized by beat cycle, ordered from left to right and top to bottom by size.

An interactive to explore heart motion

A looping video does the trick for picking out differences in motion. Still, I wondered whether more nuanced and (hopefully) embodied ways of exploring a heart’s pumping movement existed.

As I edited the footage in After Effects, I found myself scrolling back and forth through frames, appreciating the movement of the heart. Scrubbing through the movement was compelling, allowing me to speed up or slow down the motion. An alternative gesture came to mind: a hand squishing a perfume pump.

The “squishing” hand gesture was inspired by these scenes from The Matrix Reloaded, in which Neo resurrects Trinity by manually pumping her heart (I think):

Perhaps because portraiture via sonogram is heavily discouraged by the FDA, sonogram-based inspirations were sparse.

So, after briefly training a neural network (with ml5.js, using featureExtractor to create a regression that detects an open/closed hand gesture), you can physically pump the hearts with your hand on this webpage.
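The setup follows ml5.js’s standard featureExtractor regression pattern. A minimal sketch is below; the webcam element ID and the pumpHearts() function are hypothetical stand-ins for the actual page:

```javascript
// Sketch of an ml5.js featureExtractor regression used to "pump" the hearts.
// Assumes a <video id="webcam"> showing the webcam and a pumpHearts() function
// (hypothetical) that scrubs the heart loops to a point in the beat cycle.

let regressor;
const video = document.querySelector('#webcam');

const featureExtractor = ml5.featureExtractor('MobileNet', modelLoaded);

function modelLoaded() {
  // Build a regressor on top of MobileNet features from the webcam feed
  regressor = featureExtractor.regression(video, () => console.log('video ready'));
}

// While training, I add webcam samples for the two ends of the gesture:
// open hand ~ 0, closed ("squished") hand ~ 1.
function addOpenSample()   { regressor.addImage(0); }
function addClosedSample() { regressor.addImage(1); }

function train() {
  regressor.train((loss) => {
    if (loss === null) predictLoop(); // training finished
  });
}

// Continuously predict the hand value and drive the heart animation with it.
function predictLoop() {
  regressor.predict((err, result) => {
    if (!err && result) pumpHearts(result.value); // value roughly in [0, 1]
    requestAnimationFrame(predictLoop);
  });
}
```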

I also tried classification with featureExtractor and KNNClassifier, although it seemed to choke the animation of the hearts. The movement can also be activated by scrolling, which is neat when you change speed.
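The scroll version is even simpler: it just maps how far you’ve scrolled onto a frame of the beat cycle. A minimal sketch, assuming the loop is exported as numbered still frames (the element ID, frame count, and file names are made up for illustration):

```javascript
// Sketch: drive the heart loop by scroll position instead of the hand model.
// Assumes the loop exists as a sequence of still frames and that an
// <img id="heart-frame"> shows the current one (both hypothetical).

const FRAME_COUNT = 48; // matches the normalized beat cycle length
const frameEl = document.querySelector('#heart-frame');

window.addEventListener('scroll', () => {
  // Fraction of the page scrolled: 0 at the top, 1 at the bottom
  const maxScroll = document.documentElement.scrollHeight - window.innerHeight;
  const t = maxScroll > 0 ? window.scrollY / maxScroll : 0;

  // Wrap around so repeated scrolling keeps the heart beating
  const frame = Math.floor(t * FRAME_COUNT) % FRAME_COUNT;
  frameEl.src = `frames/heart-${String(frame).padStart(3, '0')}.png`;
});
```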

A note about tools

Hearts repeat the same basic movement in our bodies ad infinitum (or at least until we die). Thus, a looping presentation seemed natural. For producing the video loops, After Effects and Photoshop were natural choices for manipulating the sonogram screen captures and creating GIFs.

Still, I’m exploring ways to incorporate gesture (and other more embodied modes) in visualizing this data. An obvious result of this thought is to allow a more direct manipulation of the capture with the body (in this case, the hand “pumping” the heart). Other tools (like Leap Motion) exist for this purpose, but the accessibility of ml5.js and the unique use of its featureExtractor made it a feasible path to explore.

Final thoughts

In general, I’m ecstatic that I was able to get discernible, consistent views of the heart with this unique and esoteric device. Opportunities remain to:

Explore other devices for interaction with the heart. Though my project focused on making the viewing of the data accessible to anyone with a computer, device-based opportunities for exploring the sonogram data are plentiful. For example, a simple pressure sensor embedded into a device might provide an improved connection to the beating hearts.

Gather new views of the heart with the sonogram. What can be observed about the motion of the heart through those new angles?

Explore the change in people’s relationship to their bodies. Circumstances prevented me from gathering people’s reactions to their hearts in this new context. Whereas this project focused on the motion of the heart itself, I would like to incorporate the musings of participants as another layer in this project.

Explore other medical tools. Partnerships with institutions that have MRI machines or other advanced medical tools for viewing the body would be interesting. MRI in particular is quite good at imaging heart motion.

MRI of my heart

Heart ultrasound as portraiture

Experiences with the normally hidden aspects of our body can feel peeled from a thriller film. We may find ourselves in a small, white room with a stranger examining us with a cold instrument. We don’t know whether we’ll be okay. Images are flashed on screen for a few seconds, a diagnosis made, a fate delivered. A sense of disconnect. This is a script for experiencing a sonogram.

Taken out of this medical context, the sonogram can become a tool for portraiture. Nevertheless, entertainment, or “keepsake,” ultrasounds have historically been discouraged by the FDA, particularly for creating images of fetuses:

use of ultrasound solely for non-medical purposes such as obtaining fetal ‘keepsake’ videos has been discouraged

In some states, sonograms for entertainment are even illegal (though not the state in which this project is being created).

However, there is value in re-encountering the heart outside of a medical context. Contact with hidden parts of the body can become, as opposed to an experience of fear, one of joyful exploration. We can observe our hearts moving with rhythm and purpose; a thing of beauty. Conversely, we can appreciate the weird (and meaty) movement that is (nearly) always there with us. A collection of hearts helps us explore diversity in this motion. This project asks: what similarities and differences in the motion of the heart can be seen between people via sonogram?

The Machine

I am reaching out to my familiars, taking their portraits casually in a private space. I chose to capture the parasternal long axis view of the heart, one that is easily accessible and produces a semi-legible cross section:

Hearts will be annotated with structure labels in motion. These are the structures visible in the parasternal long axis view with the sonogram device available to me:

As an ensemble, this is what multiple hearts captured with the sonogram look like:

 

Some inspiration

It looks like Memo Akten is getting into medical imaging with these MRI videos, ostensibly processed with some kind of machine learning:

 

SEM Nail

Nail cross-section, (semi) recognizable scale
Nail cross-section, close up
Nail cross-section, very close up
Nail cross-section, anaglyph

This is a cross section of my fingernail. I chose this item because I wanted to see weird, alternative views of my body. We see our fingernails every day; clipping them is sort of a chore. I wanted to explore what kind of physical record the fingernails produce (thinking about how rings within trees can be read as a long-term record). They’re made of keratin, and, according to Donna Beer Stolz, these keratin structures form differently in the nails, horns, etc. of different organisms. It was amazing to see the layers and cavernous spaces in my fingernail clipping. What kind of differences could there be between human nails? Donna and I discussed possible differences across different people’s nails, especially how the surface of the nail might change.

The “other” that was always there: Zylinska’s “non-human” photography

In Nonhuman Photography, Joanna Zylinska discusses a “nonhuman photography”: the idea that photography always includes a non-human aspect, separate from human vision and agency. I remember creating my own pose-recognition project using JavaScript and the ml5.js library, the two working in concert to form my own interpretation of the bodies in the world. Called PoseNet, the underlying model was originally ported to TensorFlow.js and then to ml5.js. The pose library detects the position of certain features of the human body (knee, shoulder, etc.). Working through my project, I had to negotiate with how the algorithm could see the world. I relied upon the computer’s understanding of where these body parts were, determinations occurring beyond my own capabilities (at least, my capability to estimate where parts of the body are in real time).
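For reference, the basic ml5.js PoseNet setup in that project looked something like the sketch below; the webcam element and the interpret() function are placeholders for my own layer of interpretation:

```javascript
// Sketch of an ml5.js PoseNet setup like the one in that pose-recognition project.
// The <video id="webcam"> element and the interpret() function are stand-ins
// for whatever the project actually drew on top of the detected bodies.

const video = document.querySelector('#webcam');

const poseNet = ml5.poseNet(video, () => console.log('PoseNet model loaded'));

poseNet.on('pose', (poses) => {
  // Each result contains estimated keypoints (nose, shoulders, knees, ...)
  // with positions and confidence scores; the algorithm decides where a body
  // is, and I negotiate with those decisions.
  poses.forEach(({ pose }) => {
    pose.keypoints
      .filter((kp) => kp.score > 0.5) // keep only confident detections
      .forEach((kp) => interpret(kp.part, kp.position.x, kp.position.y));
  });
});

// Placeholder for my own interpretation of the detected body parts
function interpret(part, x, y) {
  console.log(`${part} at (${Math.round(x)}, ${Math.round(y)})`);
}
```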

In this view, the non-human in photography shows us the world occurring at different scales of time and space, traces of our earthly context beyond the scope of our view. In this sense, Zylinska argues that “photography based on algorithms, computers, and networks merely intensifies this condition, while also opening up some new questions and new possibilities.” This is true. As we create machines (for producing visual understandings of the world) of increasing complexity, the separation between the human and the creation of the image becomes more visible. Still, that non-human component was always there, serving as mediator between the world and the human eye.

What else is possible, though, with the increasing complexity of our visual machinery? One possibility is humility, or what Donna Haraway calls a “wound” that decenters human ways of knowing. The complexity and relative ungraspability of algorithmic ways of seeing force us to appreciate those other scales of time and space, and our smallness in the context of the forces of the environment.

 

Observation is never objective: Thoughts on Photography and Observation

In my mind, the scientific method depends on consistency and repeatability: given a set of variables held consistent, similar results should occur. For example, given a consistent emulsion in photography, one could predict the kind of image that would be produced from the same lighting, objects, etc. However, the inconsistent emulsion practices and the retouching of photographs in the late nineteenth century demonstrated that photography is easily, and more often than not, inconsistent: “a malleable medium,” in the words of Kelley Wilder (director of the Photographic History Research Centre in the UK). Photography, then, or any capture method, is only as scientifically reliable and predictable as the practices that surround it.

Scientific assessment of a capture medium such as photography belies the larger question of whether objectivity is attainable in the act of observation. I argue that it is not, because the act of observing always comes from the perspective of an observer. In other words, a person must decide to photograph, to point the camera at a subject. Even further, a medium of capture prescribes a way of seeing: how one can know about the world and the possible things that can be observed. In this way, the relationship between observer and observed depends on the nature of that medium of capture. What can that medium of capture afford? What does it not afford? How we see determines how we relate to the object we’re seeing, and thus how we can act toward that object.

‘Triple Chaser’ by Forensic Architecture | Addon

What does it look like to wield machine learning algorithms for good? In “Triple Chaser,” the act of building an image recognition system serves as a way to level power differentials between innocent victims, activists, and global arms traders. In this case, the knowledge of where a company’s weapons are being distributed can be wielded against that company’s reputation in the pursuit of justice.

What’s fascinating about Forensic Architecture at large is the use of footage to reconstruct the layers of events that have been lost. Indeed, the leveling power in Forensic Architecture’s approach is this notion of revealing unseen, or intentionally suppressed, relationships. In this sense, I ask what power dynamics reside in the ability to know, uncover, or suppress these relationships?

Speaking of power, the democratization of machine learning processes in libraries like ml5.js (among others) serves as a way to open up the privilege of new ways of knowing about the world. In what ways can we, as educated technologists, further open up the power of computer vision to artists, activists, and non-technologists?

Original post:

Other articles reviewed:

 

New algorithms, same quandaries: the AI-augmented camera

What is the role of a photographer when the camera takes its own photos? Christian Ervin, Design Director at Tellart, writes about the matter with a dusting of disempowerment in his piece The Camera, Transformed by Machine Learning. To him, examples of newfangled AI-powered cameras suggest “…a camera with a very different kind of relationship to its operator: a camera with its own basic intelligence, agency, and access to information.”

I would disagree with this characterization of a limp operator, which stems from how Ervin defines the act of photography. In discussing science fiction writer Bruce Sterling’s musings on futuristic computational photography, he describes photography’s “core action” as “the selection of a specific vantage point at a specific moment in time.”

Instead, I would suggest the act of photography begins when one selects the camera. The operator selects and sets up a capture device based on how they want to see the world, how they want to understand reality. Indeed, they’ve decided to be an operator of a camera in the first place, to comprehend and remember the world through visual moments.

Then, the nature of the relationship between operator and “cameras with agency” can be boiled down to the size of the black box. How apprised is the operator of the inner workings of the camera? How intimate is the operator with how that sensor understands the world, and how much knowledge do they have to alter it?

This gulf between operator and self-aware sensors is not the chasm that it seems. Indeed, a camera has always been a black box to its casual users, the inner workings of light capture obscured away within it. Similarly, I can’t be bothered to consider PNG/JPEG compression algorithms when using a smartphone camera. But conceptually, I understand the nature of capture that I should expect from a Polaroid instant camera versus a digital smartphone camera. Likewise, in the case of Google Clips, I understand that an algorithm trained on well-composed images will capture similar kinds of images. Further, I still decide to photograph my world through that lens and place the camera in a location of my choosing; curatorial tasks of the photographer abound.

The Capture of Gesture: How We Act Together by Lauren Lee McCarthy and Kyle McDonald

McCarthy and McDonald explore the meaning of small gestures in How We Act Together, an art installation built upon machine learning algorithms that track body movements. In the piece, participants are commanded to perform and repeat specific gestures, such as a scream or a nod, to the point of exhaustion. As long as participants keep performing the gestures, they can view previous performers of the gesture onscreen. If they repeat the gesture longer than all previous participants, their recording is added. Thus, the work is an accumulation of those who endured these performances the longest. The work was on display at Schirn Kunsthalle in Frankfurt, Germany in 2016, but also allowed remote entries through a website.

The hyperfocus on seemingly inconsequential gestures allows us to consider them in a new, awkward way. It’s like the digital art installation version of fumblecore. It’s like kerning a word in Illustrator for so long that it loses its meaning. In this piece, the small movements we take for granted are exposed, made absurd. In this new context of absurdity and exhaustion, new encounters with these gestures (their associated emotions, meanings) are possible. The piece, then, forces us to appreciate the semantics of our “cultural body” experientially. The piece helps us notice our bodies in a way that reminds me of why I’m interested in meditation and yoga: what are these relationships we have with our bodies?