Reading 03: Postphotography

My first instinct when thinking about nonhuman photography is to think of the digital tools we use in everyday life. So many of the processes we go through are facilitated by technology and computer vision, taking images that are never seen by an organic eye. My first thought was barcode and QR scanning. Every grocery store has both a physical inventory and a digital representation of those items. The digital items can be organized and accessed far more easily; however, there needs to be a way for the digital brain to be aware of the physical items, which are ultimately what the human users engage with. To buy an item from a store, it has to be documented in a way that distinguishes it from every other item in inventory. Barcode scanning can then be seen as a specialized form of photography that holds no visual representation or significance for humans, but allows a digital brain to be aware of real physical objects.
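As a rough illustration of that last point, here is a minimal sketch of how a scanned code could map a physical item to its digital record. The inventory dictionary, the item record, and the `scan` helper are hypothetical; the check-digit rule is the standard UPC-A one.

```python
# Minimal sketch: a scanned barcode bridges a physical item and its digital record.
# The inventory contents below are made up for illustration.

def upc_a_is_valid(code: str) -> bool:
    """Validate a 12-digit UPC-A code using its check digit."""
    if len(code) != 12 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    odd = sum(digits[0:11:2])   # digit positions 1, 3, 5, 7, 9, 11
    even = sum(digits[1:11:2])  # digit positions 2, 4, 6, 8, 10
    check = (10 - (3 * odd + even) % 10) % 10
    return check == digits[11]

# Hypothetical digital inventory keyed by barcode value.
inventory = {
    "036000291452": {"name": "Facial tissue", "price": 2.49, "stock": 14},
}

def scan(code: str):
    """Return the item record the 'digital brain' associates with a scan, if any."""
    if not upc_a_is_valid(code):
        return None
    return inventory.get(code)

print(scan("036000291452"))
```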


Amy Wassum, Barcode (2016)

Postphotography

Nonhuman photography, as described in the article, is the ability of a sensor to capture data that would normally be invisible or inconspicuous to a human. An example of this nonhuman photography would be an MRI scanner, which reveals the inside of a human in slices of volume. A less practical example is Benedikt Groß's "Who Wants to Be a Self-Driving Car?", which explores what the data self-driving cars parse looks like. Photography from these nonhuman conditions becomes interesting in that it can enhance data that we would never notice, or obscure data that would normally dominate one's understanding. These images are useful as a lens of understanding, but also as a generator of artistic ideas.

Policarpo Baquera’s ‘Postphotography?’

While reading Joanna Zylinska's text on 'post-photography', I kept thinking about how cameras have always been systems that combine natural intelligence with human agency to capture some features of our reality. Aerial LIDAR systems, apart from capturing very accurate models of our environment, have proven useful for revealing long-term patterns on the earth's surface and furthering our understanding of past civilizations. These systems rely heavily on our direction to work, but I agree that they are better categorized not as cameras but as capturing frameworks.

In this effort to record the nonhuman, the University of Vienna used acoustic cameras to measure elephant vocalizations in Nepal. Acoustic cameras use a geometric array of microphones, strategically oriented, to construct a two-dimensional representation of loud areas in the chosen direction. Aside from location information, the camera was also able to precisely visualize low-frequency sounds in the presence of ambient noise. The research showed that the sounds are produced in the elephants' mouths rather than at the end of their trunks, which helped the researchers better understand how they communicate. The difficulty of classifying this system as either sonic or visual encourages the institution of a post-photographic discipline charged with recording complex phenomena of our environment that no longer fall within the threshold of our senses.
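As a rough sketch of the delay-and-sum idea behind such acoustic cameras (not the Vienna group's actual pipeline), the snippet below steers a microphone array over a grid of directions and records the summed signal power for each one; brighter cells in the resulting grid correspond to louder directions. The function name, array geometry, and direction grids are illustrative assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum_map(signals, mic_positions, fs, azimuths, elevations):
    """signals: (n_mics, n_samples) array; mic_positions: (n_mics, 3) in metres.
    Returns an (n_elevations, n_azimuths) grid of summed-signal power."""
    signals = np.asarray(signals, dtype=float)
    mic_positions = np.asarray(mic_positions, dtype=float)
    power = np.zeros((len(elevations), len(azimuths)))
    for i, el in enumerate(elevations):
        for j, az in enumerate(azimuths):
            # Far-field unit vector pointing toward the candidate source direction.
            u = np.array([np.cos(el) * np.cos(az),
                          np.cos(el) * np.sin(az),
                          np.sin(el)])
            # Mics closer to the source hear the wavefront earlier; delay each
            # channel by its head start so all channels line up for this direction.
            delays = mic_positions @ u / SPEED_OF_SOUND          # seconds per mic
            shifts = np.round(delays * fs).astype(int)
            # np.roll wraps at the edges, which is acceptable for a sketch.
            aligned = np.stack([np.roll(s, k) for s, k in zip(signals, shifts)])
            power[i, j] = np.mean(np.sum(aligned, axis=0) ** 2)  # louder = brighter
    return power
```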

 

Reading 03 – Postphotography

I think that one of the largest projects of "nonhuman" photography that I have experienced and am aware of is Google Earth. Considering what archaeologists can observe of past civilizations, and the excitement of discovering irrigation systems and gaining deeper understandings of the lives of those people, the amount of information that satellite photography provides about the constructions of our current civilization is insane. The sheer scale and detail of this snapshot of the Anthropocene, constantly built upon and updated into an incredibly detailed 3D model of the planet, offers us an unprecedented view and map of our current age. This type of mapping has led, and will continue to lead, to new questions and possibilities.

Response: Nonhuman Photography

One particular example of nonhuman photography I have experienced is the kind of camera people strap to trees in their backyards, which takes photos when a motion sensor is tripped. The idea is that people can get photos of animals in their backyards without having to physically be there to take the photo. I remember having one on a tree in the woods behind my house when I was a kid, but I honestly don't remember getting any photos from it, despite there being a ton of deer and other animals in the area I live in. In theory, though, these cameras allow people to capture images that they couldn't capture if they were present. A deer or a coyote might be scared off by the presence of a person, so a candid image of that animal might not be possible for a human photographer who doesn't have a lot of familiarity photographing wildlife.

The use of algorithms, computers, and networks in modern nonhuman photography intensifies the entanglement between the human and nonhuman. It raises questions about intent and control, about how much control a photographer exerts over their camera (I'm using these terms loosely) when the taking of pictures is defined by a script rather than simply the press of a shutter. I would argue that perhaps the photographer is exerting more control when the taking of the photo is scripted. The photography is precise and consistent, something that is incredibly hard, if not impossible, for someone to do by hand.
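As a small illustration of that "scripted shutter," here is a minimal sketch in the spirit of the trail camera described above. The sensor and camera classes are hypothetical stand-ins rather than any real hardware API, and the polling interval and cooldown are exactly the kind of control the script, rather than a finger on a button, encodes.

```python
# Sketch of a motion-triggered, scripted capture loop. The sensor and camera
# are fake stand-ins so the example runs anywhere; real hardware would differ.
import random
import time
from datetime import datetime

class FakeMotionSensor:
    """Stand-in for a PIR/heat sensor; randomly 'detects' motion."""
    def motion_detected(self) -> bool:
        return random.random() < 0.2

class FakeCamera:
    """Stand-in for a camera; just logs where a frame would be saved."""
    def capture(self, path: str) -> None:
        print(f"captured frame -> {path}")

def run_trail_camera(sensor, camera, checks: int = 20, cooldown_s: float = 1.0):
    """Poll the sensor; when tripped, capture a timestamped frame, then rest."""
    for _ in range(checks):
        if sensor.motion_detected():
            stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
            camera.capture(f"trailcam-{stamp}.jpg")
            time.sleep(cooldown_s)   # avoid bursts of near-duplicate frames
        time.sleep(0.1)              # polling interval chosen by the script, not a person

run_trail_camera(FakeMotionSensor(), FakeCamera())
```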

Postphotography

I wasn’t totally sure about other examples of postphotography, so I looked at some other classmates’ responses, a lot of which were astronomy-related. This made me realize a cool example: how astrophotographers colorize their photos of space, especially of galaxies and phenomena from light-years away. We obviously cannot see these objects because they are too far away, but even if they were close enough, our eyes still aren’t complex enough to understand space in much color. The technique that astrophotographers use to add color to their images is similar to what we did in Photoshop to create the anaglyphic versions of our SEM images. They take one set of three different photos of space, each one filtering only red, green, or blue light, and often a second set as well, which filters for light coming from places with hydrogen, oxygen, and sulfur (I’m not exactly sure how that part works). These are then combined to create a single RGB image. Read here.
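As a toy sketch of the broadband half of that process (real workflows stack many calibrated frames and handle the narrowband hydrogen/oxygen/sulfur mapping separately, which this ignores), the snippet below normalizes three filtered exposures and stacks them into one RGB image. The file names are hypothetical.

```python
# Toy RGB combination of three filtered monochrome exposures.
import numpy as np
from PIL import Image

def load_mono(path: str) -> np.ndarray:
    """Load one filtered exposure as a float array stretched to [0, 1]."""
    img = np.asarray(Image.open(path).convert("F"), dtype=np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12)

# Hypothetical exposures taken through red, green, and blue filters.
r = load_mono("m51_red_filter.tif")
g = load_mono("m51_green_filter.tif")
b = load_mono("m51_blue_filter.tif")

rgb = np.dstack([r, g, b])  # (height, width, 3), values in [0, 1]
Image.fromarray((rgb * 255).astype(np.uint8)).save("m51_rgb.png")
```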

As for Zylinska’s paper: I appreciate how she has invented a subcategory for this kind of media, as I don’t think photogrammetry, LIDAR imaging, etc. should be grouped with traditional photography. This may just be my biased opinion as a relatively experienced photo-taker (“photographer” these days seems to be a loaded label), but as I said in a previous response, I believe the relationship between the human and non-human in postphotography isn’t as close as Zylinska claims, and this paper somewhat scares me as a result. I find myself defensive on analog photography’s behalf. I am absolutely supportive of exciting technology, and a lot of my own art uses it, but my wish is that all these data-driven capture techniques get their own name without “photography”.

Postphotography

At certain points in Zylinska’s article I find a clear stating of the obvious that, while important in raising awareness, doesn’t in my mind present anything that is all that new. For example, the notion of “nonhuman photography” being “very dependent on the human element” is somewhat of a circular argument with regard to the human extraction of this photography for the sake of viewership. There is an undeniable element of the human in extracting work for viewing, with the subjective framework of curation connected to this.

The sentiments about capturing the past and future of humanity, however, strike me as an altogether more interesting line of argument. I couldn’t help but feel a strong connection to notions of science fiction in reading this article, particularly in relation to both Asimov and Heinlein. The idea of positing a system of conceptual and moral discussion such as Asimov’s laws of robotics lends itself to this greatly, with regard to a system of moral capture that evolves through time. In my mind, this lends itself well to the ongoing development of photography, with the idea of capturing the development of a medium by using that medium itself being a powerful parallel to much of human endeavour. Therein, however, lies the darker overall message of the article in its presentation of the human impact on the natural world, as presented in the before and after of us. In relating this to the notion of algorithmic photography, technology is undeniably continually providing “new questions” and indeed new solutions. I say technology primarily as a way of abstracting away from photography; in my mind, within this discussion, photography is applied technology, with the novelty of the medium cast aside for the sake of capture.

With regard to nonhuman photography I have encountered, Véronique Ducharme’s Encounters is a work that I connect to. Using “a remote motion-detecting hunting camera that uses heat to trigger the photographic exposure,” Ducharme creates a series of animal portraits that seek to subvert the natural photographer. This not only documents, in my eyes, a more accurate depiction of the natural world; it also plays with the notion of the hunting camera as a tool to photographically capture these animals rather than lead to their untimely deaths, subverting the initially expected use of the technology.

 

Reading 03 – Postphotography

Algorithmic photography “opens up questions and possibilities” by uncovering the hidden structures and invisible networks of the world around us. Human photography is limited to the multiple perspectives of the naked eye; however, a lot of our surroundings cannot be readily seen. In an environment conducted by time and filled with patterns and energies, nonhuman photography is relied upon to investigate the conditions surrounding us.

In the Postphotography reading, it is noted that laser imaging technology was used to discover an Angkor Wat temple complex and its sophisticated water management system beneath the floor of the Cambodian jungle, and I find that truly fascinating. Using nonhuman photography to “denaturalize nature” is something that could not be done through pure human intervention unless the earth “chose” to expose itself. This discovery is similar to the use of LiDAR (Light Detection and Ranging) technology to reveal ruins of the Mayan civilization in the Guatemalan jungle. Even systems of highways, temples, and waterways made by prior human civilizations eventually morph back into the hidden structure of the Earth, and nonhuman intervention is needed to access human invention. Ironically, the increased activity of humans calls for more advanced nonhuman technology to understand the effect of people on nature.

Postphotography

I completely agree with Zylinska’s comment. The distinction between human and nonhuman photography, at least as those terms are used in this reading, is blurry at best. There have always been humans involved at some point in the photography process (in most cases; see the last reading), and there have always been machines and technical processes. The procedure for developing negatives is in its own way an algorithm, but no one would call that “postphotography.” However, using technology to capture images that humans physically cannot see with our own eyes does create exciting opportunities that would not have been possible with only traditional photography. One example that comes to mind is the recent image of the black hole M87* that was captured by a team of scientists.

They could not simply point a camera at the black hole and take a picture, because no light can escape from a black hole. So they took a series of photos with a network of telescopes, combined them and filtered out noise using algorithms they developed, and ultimately obtained an image of the black hole’s shadow against the light of all the luminous gas around it. Though this was a very complex technical process, it was also the result of a lot of human work and ingenuity; not truly “postphotography” or “nonhuman photography,” then. Just a new method of capture.

Postphotography

An example of “nonhuman” photography that you have either experienced or read about

Reading Zylinska’s “Postphotography” excerpt, the first project that comes to mind for me is Deep Dream. I’m not sure how well it fits into the “photography” category, but the method of creating these images mirrors Zylinska’s accounts of “algorithmic and computational” means of capture.

Respond to Joanna Zylinska’s observation that “Photography based on algorithms, computers, and networks merely intensifies this condition, while also opening up some new questions and new possibilities.”

Zylinska says this when discussing the recent “reconceptualization of photography in algorithmic and computational terms,” arguing that photography has always, in a sense, arisen through human-nonhuman collaboration. She seems to argue that this modern intent is as close as it has ever been to its origins in “fossils, analog snapshots, and lidar-produced photomaps.” I would agree that both this modern sort of “algorithmic photography” and its analog counterparts are methods of experimental (yet literal) capture, aimed at taking an accurate snapshot of what’s “out there.”
It’s just that this time around, what’s “out there” can be less subjective than ever before, because we can capture things that completely evade human perception. “Nonhuman photography can allow us to unsee ourselves from our parochial human-centered anchoring, and encourage a different vision of both ourselves and what we call the world.” From a cognitive science standpoint, that’s incredibly satisfying to me. I hope to do more work that challenges our species’ limited view of the world.