Photography and Observation

How can the medium of capture influence a typology?

A typology is just as much about how it's captured as what's captured. The Venus typology in the chapter is a good example: a collection of photographs of Venus, it is about Venus and also about the way Venus has been captured, since the method of capture is what produces the representation of Venus we see. The medium of capture is necessarily a part of the typology.

It is also a part of the typology in the sense that the capture medium plays a role in what's included in it. Placing people's forearms under a camera reveals a typology of the visible exterior of the forearm, while placing them under an X-ray reveals a typology of the bones inside. In this case the typology is dictated by the device doing the capturing and what it's able to "sense."

Is the medium objective? Objectivity is the separation of interpretation and bias from perception: pure perception, without any top-down influence. The capture method's perception of something is objective in this sense, but what is being captured is not. The capture method has been designed to do something, to have a particular kind of perception. There's nothing objective about that: it was designed. The capture itself, however, is perfectly objective, a transformation of information without interpretation. Yet it exists within a system that was created to do something, biased in what it perceives.

Another way to think about this is through an aspect of human perception: seeing. There is seeing in the sense of light hitting the cones and rods in the retina, and then there is what is seen after the brain has processed it, with all the top-down influences humans have. Seeing is the process of converting light to action potentials, while the experience of seeing is something different, biased by other aspects of human cognition. I think we can separate the capture from the perception, the purely objective from the interpretive, even though they exist within the same system.

Response to Sean Leo’s Rapid Recap

The idea of using a surveillance tool for image making is quite interesting to me, and the Rapid Recap system is a good example. It reminded me of a talk by James Bridle, who makes projects out of systems that weren't originally designed for creative use and are in fact often used for tracking and data collection. He discusses a few in this lecture; one of the more interesting ones uses a network of people and services that track planes to find out how migrants are deported from the UK.

Another, somewhat different project of his uses satellite views and the ghosting they give to moving objects frozen in time. This kind of imaging is a bit like Rapid Recap in that it provides an overlay of an object's positions over time, in one case over a long period and in the other over a very short one.
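As a rough intuition for how an overlay like this might be built (a minimal sketch, not how Rapid Recap or the satellite systems actually work), the Python snippet below blends every frame of a clip into one image by keeping the brightest pixel seen at each location, so anything that moves leaves a trail of its positions. The file names are placeholders.

```python
import cv2
import numpy as np

VIDEO_PATH = "clip.mp4"  # placeholder: any short clip with moving objects

cap = cv2.VideoCapture(VIDEO_PATH)
composite = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = frame.astype(np.float32)
    if composite is None:
        composite = frame
    else:
        # "Lighten" blend: keep the brightest value seen at each pixel,
        # so a moving object is overlaid at every position it occupied.
        composite = np.maximum(composite, frame)

cap.release()
if composite is not None:
    cv2.imwrite("overlay.png", composite.astype(np.uint8))
```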

James Bridle, ‘On the Rainbow Plane’ from AQNB on Vimeo.


Reviewed:

Steven’s post on Marmalade Type

Huw’s post on the Non-Euclidean Renderer

Spoon’s post on the Non-line-of-sight Camera

Jacqui’s post on the Unpainted Sculpture

The Camera, Transformed by Machine Learning

When I read this article, I was reminded of another article I'd read a while ago about a woman who explores the world in Google Street View and takes screenshots. This is her photography, and she sells it online. It presents an interesting question of authorship as well, since she was not the one to take the original photographs; she is curating certain frames from the larger collection of photos taken by Google's cameras for Street View. In this case, I think it's pretty clear that she is still the photographer and has authored the work; there was intention behind the selection and composition of each of her frames.

The Google Clips camera, on the other hand, doesn't even require someone to curate the photos; the machine learning model does that. So who's the author? I think the answer is the authors of all the photographs the model was trained on. The algorithm has been trained on what constitutes a moment worth photographing, learned from others' choices of such moments. I don't think we need to worry about machines making intentional choices about a "decisive moment" yet, given that all the model is really doing is predicting what best matches the existing data it was trained on.
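To make that concrete, here's a minimal, hypothetical sketch of the kind of prediction involved (the model, its predict() method, and the scoring heuristic are all placeholders, not Google Clips' actual system): a trained model assigns each candidate frame a score, and "curation" is nothing more than keeping the top-ranked frames.

```python
import numpy as np

class DummyModel:
    """Placeholder for a trained moment-scoring network."""
    def predict(self, frame):
        # Stand-in heuristic: prefer brighter frames. A real model would
        # instead reproduce patterns learned from human-chosen photos.
        return float(np.mean(frame)) / 255.0

def keep_best(frames, model, k=3):
    # The "curation" is just ranking frames by predicted score.
    scores = np.array([model.predict(f) for f in frames])
    best = np.argsort(scores)[::-1][:k]
    return [frames[i] for i in best]

# Usage with synthetic frames standing in for a camera buffer:
frames = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
          for _ in range(10)]
picks = keep_best(frames, DummyModel(), k=3)
```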

While the definition of a camera may be in constant flux due to changes in technology, I think a steady definition of a photographer or author could be useful. Intentionality is definitely part of it, and is probably the most inclusive criterion, encompassing everything that has historically been thought of as authored by somebody. Even photographs taken by a machine learning model can be thought of as authored, just not by the machine itself.

Joiners

This is a piece called The Desk, from a collection of works David Hockney created in the '80s that he calls "joiners." They're collages of many photographs of a single subject or scene taken from different angles. Hockney combines these photographs into a composite that shows a very different view of its subject than a traditional photograph does.

Hockney made quite a few of these joiners in the '80s. Some of the later ones get very complex, showing many different perspectives and distances from objects in the scene.

I think joiners are particularly interesting because they take the very familiar medium of photography and produce something totally different in their view of the world. No fancy equipment or techniques are required, just a lot of photos and some clever piecing-together.
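As a sketch of what that piecing-together could look like in code (assuming a folder of overlapping photos of one scene; the paths, canvas size, and random jitter are my own placeholders, not Hockney's method), each photo is pasted onto a large canvas at a roughly gridded position with a little rotation and offset, so the edges overlap and misalign the way a joiner's do.

```python
import glob
import random
from PIL import Image

PHOTOS = sorted(glob.glob("photos/*.jpg"))  # placeholder: overlapping shots
canvas = Image.new("RGB", (3000, 2000), "white")

cols, cell_w, cell_h = 6, 480, 420
for i, path in enumerate(PHOTOS):
    photo = Image.open(path).resize((600, 450))
    # A little rotation and jitter so adjacent photos overlap and
    # misalign, giving the fractured multi-viewpoint look of a joiner.
    photo = photo.rotate(random.uniform(-6, 6), expand=True)
    x = (i % cols) * cell_w + random.randint(-60, 60)
    y = (i // cols) * cell_h + random.randint(-60, 60)
    canvas.paste(photo, (x, y))

canvas.save("joiner.jpg")
```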