Postphotography

An example of “nonhuman” photography that you have either experienced or read about

Reading Zylinska’s “Postphotography” excerpt, the first project that comes to mind for me is Deep Dream. I’m not sure how well it fits into the “photography” category, but the method of creating these images mirrors Zylinska’s accounts of “algorithmic and computational” means of capture.

Respond to Joanna Zylinska’s observation that “Photography based on algorithms, computers, and networks merely intensifies this condition, while also opening up some new questions and new possibilities.”

Zylinska says this when discussing the recent “reconceptualization of photography in algorithmic and computational terms,” arguing that photography has always, in a sense, arisen through human-nonhuman collaboration. She seems to argue that this modern iteration is as close as photography has ever been to its origins in “fossils, analog snapshots, and lidar-produced photomaps.” I would agree that both this modern sort of “algorithmic photography” and its analog counterparts are methods of experimental (yet literal) capture, aimed at taking an accurate snapshot of what’s “out there.”
It’s just that this time around, what’s “out there” can be less subjective than ever before, because we can capture things that completely evade human perception. As Zylinska writes, “Nonhuman photography can allow us to unsee ourselves from our parochial human-centered anchoring, and encourage a different vision of both ourselves and what we call the world.” From a cognitive science standpoint, that’s incredibly satisfying to me. I hope to do more work that challenges our species’ limited view of the world.

Postphotography? Response

This article made me think about human gait recognition and its use as a means of biometric identification. That led me to stumble across a paper on free-view gait recognition, which addresses the problem that gait normally has to be observed from multiple views to be measured and identified effectively, and which proposes using the many cameras in city-wide security systems to build such identification. This combination of surveillance infrastructure and algorithmic identification, which pins down the specificity of individual human beings by the ways they move through those systems, seems like an act of “nonhuman” photography that arises out of the combination of smaller systems of “nonhuman” photography. I am interested in what other collisions of “nonhuman” photographic systems can intersect with each other to create unintended image portraits.

gait imagery

 

“Nonhuman photography can allow us to unsee ourselves from our parochial human-centered anchoring, and encourage a different vision of both ourselves and what we call the world.” I think that the above example concerned with gait identification is about trying to see something specific, and I am interested in the fragments and detritus that could be created from this (and other such collisions of systems). For example, in the picture below one can see the compiled images and the lines they create that allow the gait detection to be accurate, but this raises an aesthetic question for me: what does the imagery needed to identify people by their gait look like? What are the human portraits created out of surveillance and algorithmic identification?


Postphotography: Response

One example of a human-nonhuman process that Zylinska talks about is the search for dark matter. I had very recently attended a lecture describing the locating of possible dark matter, something that leaves a gravitational imprint but cannot be detected via any other known method. Scientists use advanced telescopes and other astronomical measurements (whose details I unfortunately did not catch) to produce an immensely detailed view of a small snippet of the universe. Using the human eye, we can look at galaxy shapes and other patterns in galaxy formation to hypothesize the whereabouts of dark matter, then hand the task back to the computer for finer details about the actual amount and the coordinates of its possible location. This process of non-human to human to non-human and back again filters out the visual information deemed unnecessary in order to hone in on the invisible.

Reading 03-PostPhotography

 

Reading this article made me recall the candid Machinery lecture and the collection of weird images taken from Google Earth, I think it was. Even though the images were captured by a satellite, a human still needs to go through the stills to pick out the ones that indicate some strange phenomenon. The camera itself isn’t able to distinguish between normal and strange behavior, even though it still has the agency to “take photos.” I think my big takeaway when it comes to photography based on algorithms and computational methods is that it still needs a conscious human brain to make sense of the images and ascribe significance to them. I don’t think I could say that the relationship between the user and the camera could ever be separated completely.

Reading 03

As discussed in the reading, photograms (silhouette images made using light-sensitive paper) are a form of nonhuman photography. Bruce Conner, a pop/abject/assemblage artist active in the ’60s-’80s, made a series of photograms called Angels capturing the human figure in unfamiliar, ghostly poses. Due to the scattering of light, parts of the body that were closer to the capture surface left a brighter mark, making it look like the figures were glowing or emitting superhuman power. Despite the human subjects, the title (literally a divine being) and technique make this work nonhuman.

Bruce Conner Angel photogram

Zylinska writes that postphotography helps us see the medium as “an assemblage of ‘surface-marking technologies’” rather than through a “semiotic/indexical understanding of images.” This wording was familiar to me from more traditional art classes, where we were taught to think about 2D art as ‘imagemaking’ rather than separating it into categories like drawing and painting, which can be constrictive.

I agree with Zylinska that computational and algorithmic techniques are not changing photography fundamentally; rather, they are “intensifying [the nonhuman] condition.” Computation has expanded the field and is helping make complex photographic techniques more accessible. It’s true that computing has enabled three-dimensional, generative, and extrasensory imaging, but at the highest level these practices have existed for a long time (for example, if you conceive of Sol LeWitt’s instructional wall drawings as generative).
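
To make that last point concrete, here is a toy sketch of how an instructional drawing can be read as a generative program. The “instruction” is made up for illustration, not an actual LeWitt text, and the code is only a sketch of the idea, not anyone’s actual practice.

```js
// A toy "wall drawing" instruction treated as a generative program
// (a made-up instruction for illustration, not an actual LeWitt piece):
// "Draw fifty straight lines between randomly chosen points."
const WIDTH = 800;
const HEIGHT = 600;

function randomPoint() {
  return { x: Math.random() * WIDTH, y: Math.random() * HEIGHT };
}

// Each execution of the instruction yields a different drawing,
// which is the sense in which the practice is generative.
const lines = Array.from({ length: 50 }, () => ({
  from: randomPoint(),
  to: randomPoint(),
}));

lines.forEach(({ from, to }, i) =>
  console.log(
    `line ${i}: (${from.x.toFixed(0)}, ${from.y.toFixed(0)}) -> (${to.x.toFixed(0)}, ${to.y.toFixed(0)})`
  )
);
```

The point is only that the instruction, rather than any single execution of it, is the work, which is what makes the comparison to today’s generative imaging plausible.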

The “other” that was always there: Zylinska’s “non-human” photography

In Nonhuman Photography, Joanna Zylinska discusses a “nonhuman photography”: the idea that photography always includes a non-human aspect, separate from human vision and agency. I remember creating my own pose-recognition project using JavaScript and the ml5.js library, working in concert with it to form my own interpretation of the bodies in the world. The model, called PoseNet, was originally ported to TensorFlow.js and then to ml5.js; it detects the positions of certain features of the human body (knee, shoulder, etc.). Working through my project, I had to negotiate with how the algorithm could see the world. I relied upon the computer’s understanding of where these body parts were, determinations occurring beyond my own capabilities (at least, when it comes to estimating where parts of the body are in real time).
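
For context, a minimal sketch of that kind of setup, written against the classic ml5.js PoseNet API (ml5 v0.x), might look like the following. It is an illustration rather than the original project code, and it assumes ml5.js has already been loaded on the page via a script tag.

```js
// Minimal sketch of webcam pose detection with the classic ml5.js PoseNet API
// (ml5 v0.x). Assumes ml5.js is already loaded on the page via a <script> tag.
const video = document.createElement('video');
video.width = 640;
video.height = 480;
video.muted = true; // the element does not need to be added to the page

navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  // Start the webcam feed that PoseNet will watch.
  video.srcObject = stream;
  video.play();

  // Hand the video element to PoseNet; the callback fires once the model loads.
  const poseNet = ml5.poseNet(video, () => console.log('PoseNet model loaded'));

  // Each 'pose' event delivers an array of detections; every keypoint has a
  // body-part name, an (x, y) position, and a confidence score.
  poseNet.on('pose', (results) => {
    for (const { pose } of results) {
      for (const kp of pose.keypoints) {
        if (kp.score > 0.5) {
          console.log(`${kp.part}: (${kp.position.x.toFixed(0)}, ${kp.position.y.toFixed(0)})`);
        }
      }
    }
  });
});
```

Even in a sketch this small, the judgment calls (the confidence threshold, which keypoints to trust) are exactly the negotiation with the algorithm’s way of seeing described above.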

In this view, the non-human in photography shows us the world occurring at different scales of time and space, traces of our earthly context beyond the scope of our view. In this sense, Zylinska argues that “photography based on algorithms, computers, and networks merely intensifies this condition, while also opening up some new questions and new possibilities.” This is true. As we create increasingly complex machines for producing visual understandings of the world, the separation between the human and the creation of the image becomes more visible. Still, that non-human component was always there, serving as a mediator between the world and the human eye.

What else is possible, though, with the increasing complexity of our visual machinery? One possibility is humility, or what Donna Haraway calls a “wound” that decenters human ways of knowing. The complexity, and the relative ungraspability, of algorithmic ways of seeing force us to appreciate those other scales of time and space, and our smallness in the context of the forces of the environment.

 

Reading 03 Response: Olivia Cunnally

An example of nonhuman photography I’ve been in close proximity to is photography made using computer algorithms. One example that I’ve experienced, but am not sure counts, is generative images. Since they use real images to create a completely new one, they may not qualify exactly, but they are a type of not-completely-human art form that I think of often in these kinds of conversations. I definitely agree with Zylinska that these new forms of capture open up new questions and opportunities, such as the new questions regarding the history of Angkor Wat described in the article. I believe that having new forms of capture that may be “nonhuman photography” does not destroy photography or ruin any art form; instead, as Zylinska states, it creates new opportunities for the discovery of information and for humans to work alongside technology. Just because humans may be giving up some control does not invalidate the future of photography, capture, or art. Everything in the article, I believe, only adds to the excitement for a future where technology and humans can work together to discover new information, opportunities, and perspectives.

Postphotography Response

While not exactly an example of non-human photography as described in the chapter, I think these satellite photos of planes are particularly interesting. They reveal the way the image is created from red, green, and blue bands that are layered together. Each band is captured by the satellite sequentially, so there’s a slight delay between them. For things on the ground that are relatively still, this is fine, but for fast-moving objects like planes the frames don’t exactly line up. This creates a kind of rainbow shadow. It reveals something about the process that can’t be found in other forms of visible-light photography. It’s only possible because of the computational process behind how the images are captured, and it is in a sense a kind of glitch, but a very nice one. Clearly, the author’s observation is correct in that the algorithmic approach intensifies the “nonhuman entanglement,” in addition to revealing something new.

There’s a full album of these planes on Flickr.
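
To make the mechanism concrete, here is a toy sketch of the effect (an illustration only, not the satellite’s actual processing pipeline): three single-band exposures of a moving object, taken one after another, are stacked into a single RGB image, and the object’s displacement between exposures shows up as separated color fringes.

```js
// Toy illustration of sequential band capture producing color fringes.
// Not real satellite processing: a 1-D scene with one bright moving object.
const WIDTH = 24;

// Capture a single-band "frame": background is 0, a bright object of width 3
// sits at position `pos`.
function captureBand(pos) {
  const band = new Array(WIDTH).fill(0);
  for (let x = pos; x < pos + 3 && x < WIDTH; x++) band[x] = 255;
  return band;
}

// The object moves 4 pixels between the sequential red, green, and blue exposures.
const red = captureBand(4);
const green = captureBand(8);
const blue = captureBand(12);

// Layer the three bands into RGB pixels, the way the exposures are composited.
const rgb = red.map((r, x) => [r, green[x], blue[x]]);

// A stationary scene would line up and read as white/gray; here the object
// appears as separate pure-red, pure-green, and pure-blue "ghosts".
rgb.forEach(([r, g, b], x) => {
  if (r || g || b) console.log(`x=${x}: rgb(${r}, ${g}, ${b})`);
});
```

In a real image the displaced channels trail the plane in the order the bands were exposed, which is the rainbow shadow described above.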

Postphotography?

The concept of nonhuman photography is fascinating. As the author proposes, nonhuman photography can offer new methodologies of perception, displacing the human-centered perspective of the world. New imaging technologies enable the capture of traces and information invisible to human eyes. This could be considered a way to open up our view to other kinds of realities that lie outside our field of vision: “As a practice of incision, photography can help us redistribute and re-cize the human sensible to see other traces, connections, and affordances.”

Somehow, these new imaging processes and devices can make us aware that we are not the only subjective core of reality; there are so many other forces, phenomena, and beings that operate in the construction of the world.

Furthermore, with the emergence and use of machine learning algorithms and software, the role of the operator has shifted within the photographic experience. A human entity is no longer required for the photographic act. As the quote states, this intensifies the idea of nonhuman photography, as there might be a sentient system or technology that can make decisions about capture and image processing. These new ideas challenge the definition of photography, turning it into an assemblage or collaboration of diverse nonhuman actors.

Last year, I experienced a total solar eclipse in Chile. During the eclipse, as sunlight came through branches and tree leaves, they transformed into thousands of pinhole cameras, allowing the crescent image of the sun to be projected onto a surface. I believe this phenomenon embodies the essential concepts of imaging or photography. Though there is no human intervention, no device, the image comes alive by itself like a mirage.

Although this example does not completely match the theoretical premises of the text (it is closer to Roberto Huarcaya’s mesmerizing Amazograms), I believe it is a poetic example of a phenomenon that aligns with the essence of image-making, or even with a more radical definition of the nonhuman.

Here are some of the images I took during the eclipse. It was July 2nd, 2019.


Non-human photography reading response

I find the term ‘nonhuman photography’ misleading in that it seems to suggest that photography before computation was somehow ‘more human’ and had more human involvement than after. Perhaps it is true that photography has been “reconceptualized in algorithmic and computational terms,” but it has always been a practice that requires a collaborative relationship between human and machine, and the history of photography has always been a narrative of human and machine mutually shaping each other’s identity and potential. I believe the reconceptualization is happening to the extent that there is a shift in labor: computers are now smarter and can do much more, and the individual human photographer becomes a team of “engineers, photographers, pilots, coders, archaeologists, data scientists,” a “human-nonhuman assemblage,” to borrow Zylinska’s words. I do think that photography based on algorithms, computers, and networks intensifies and expands the condition of inventiveness and nonhumanness that has always been there for the medium, but I don’t think the change is from ‘human’ to ‘non-human.’
This is not really an example of “nonhuman” photography, but I skimmed through some course materials from the Computational Photography course offered at CMU. The slides for the first lecture, for example, give a quite comprehensive overview of the types of technologies that are making photography a more computational, algorithmic practice, sometimes with networking properties.