Reading 03: Postphotography

My first instinct when thinking about non-human photography is to think of the digital tools we use in everyday life. So many of the processes we go through are facilitated by technology and computer vision, taking images that are never seen by an organic eye. My first thought was barcode and QR scanning. Every grocery store has both a physical inventory and a digital representation of those items. The digital items can be organized and accessed far more easily; however, there needs to be a way for the digital brain to be aware of the physical items, which are ultimately what the human users engage with. To buy an item from a store, it has to be documented in a way that distinguishes it from every other item in inventory. Barcode scanning can then be seen as a specialized form of photography that holds no visual significance for humans, but allows a digital brain to be aware of real physical objects.


Amy Wassum, Barcode (2016)
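As a concrete illustration of that machine-readable layer: a retail barcode ultimately encodes just thirteen digits, and the last digit is a checksum that lets the scanner confirm it "saw" the item correctly. A minimal sketch of EAN-13 check-digit validation (my own toy example, not tied to any particular scanner):

```python
def ean13_is_valid(code: str) -> bool:
    """Validate an EAN-13 barcode string using its check digit.

    The first 12 digits are weighted 1, 3, 1, 3, ...; the 13th digit
    is chosen so the total weighted sum is a multiple of 10.
    """
    if len(code) != 13 or not code.isdigit():
        return False
    weighted = sum(int(d) * (1 if i % 2 == 0 else 3)
                   for i, d in enumerate(code[:12]))
    check = (10 - weighted % 10) % 10
    return check == int(code[12])

print(ean13_is_valid("4006381333931"))  # a commonly cited valid EAN-13
```

The scanner's "photograph" of the stripes reduces to this digit string; everything visual is discarded the moment the checksum passes.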

Reading 02: Photography and Observation

I’m interested in the transition from observing as an art form to observing as a passive act of science. It’s inspiring to know that something as culturally concrete as physical perception can be flipped on its head by the invention of tools. The medium a photograph is captured in can have obvious effects, like whether the data being collected even describes the state being recorded. This seems to be the overarching goal of these scientists and artists experimenting with capture media: to convert world data into visual representation by whatever means necessary. The objective value we ascribe to sight is what gives photography so much power. We can believe that what we see is what is real, but we only see the abstracted information our eyes and brains (organic cameras) are able to immediately present to us in forming our perception of physical space and the events that take place within it. It makes me wonder whether the invention of photographs and illusions was necessary before we could recognize our own sense of sight as a similarly constructed phenomenon.

Photography is now so widely available and consumed that the realization of sight as a medium doesn’t even expose its fragility in the same way. Being born into a culture guided by manufactured icons, we can’t help but associate them with natural reality. If you had to struggle to see things, or if you were made immediately aware of the great lengths and processes needed to produce such images, you would understand the nuanced reasons why photographs don’t express reality, and are only estimations with limited focus. Because photography aims to transcend process and just present a neatly bundled product, these considerations aren’t necessary in its everyday application.

ProjectToShare Reviews

Looking at Experiments with Security Camera by Olivia.

I was trying to think of why thermal cameras are so mesmerizing to me. Photographs deliver visible light to our eyes, so the obvious first step in inventing photography was to record this visible light. Heat is not ordinarily visible to us, but it radiates as infrared light beyond our range of vision, which can still be recorded in the same way.

There are similar processes for capturing energy beyond our range of visible light, but the difference is that heat is something both tangible and ephemeral. We use it as a tool for physically altering materials, and we are very familiar with its sensation, using it constantly to navigate and understand the world. This sensation is not what a thermal photograph delivers, however. Is it possible to create a photo that possesses the tangibility of heat with less of its ephemerality? An image that is physically cold in places where it recorded cold, and hot in places where it recorded hot?
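The step a thermal camera does automate, turning invisible heat readings into visible tone, comes down to a simple mapping from temperature to brightness. A minimal sketch, with made-up temperature values and a plain grayscale ramp standing in for a real false-color palette:

```python
def to_grayscale(temps, t_min=None, t_max=None):
    """Map a 2D grid of temperature readings to 0-255 pixel values.

    Hotter cells become brighter pixels, mimicking the normalization
    step a thermal camera performs before display.
    """
    flat = [t for row in temps for t in row]
    t_min = min(flat) if t_min is None else t_min
    t_max = max(flat) if t_max is None else t_max
    span = (t_max - t_min) or 1.0          # avoid dividing by zero
    return [[round(255 * (t - t_min) / span) for t in row] for row in temps]

# Hypothetical 2x2 frame in degrees C: cool wall vs. warm skin.
frame = [[20.0, 21.5], [30.0, 36.5]]
print(to_grayscale(frame))  # [[0, 23], [155, 255]]
```

Nothing in this step carries the sensation of heat itself, which is exactly the gap the question above is pointing at: the temperature data survives only as tone.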

The photos by Clelia Knox weird me out existentially. Dissociating the photos from what it must have looked like to record them, you see human energy being radiated into a cold environment. It captures the constant generation of life in relation to an imposing environment. In this way a new sensation is captured, of vitality, beyond the energy used to make these scenes visible.

I also looked at:

The Non-Line-of-Sight Camera from Spoon

Joiners from Christian Broms

Marmalade Type from Steven Montinar

Tea Ceremony from Stacy Kellner

Reading 01: The Camera Transformed by Machine Vision

The Clips camera system makes me pessimistic about humans’ developing relationship with artificial intelligence. If photography is an expressive outlet, or a way to communicate perspective, what purpose does this system serve in terms of capturing humanity? We have all of these tools that let us take photos, which have been developed to be “easier” to use, but this only serves to eliminate the control a person has over the tool (there is still control over where the camera is pointed, although that would probably be eliminated too if it were possible). I think a system like Clips also strengthens the cognitive barriers hindering our ability to consider our senses from beyond themselves. It’s designed to deliver a product that most closely resembles how something would look if you saw it in person, so the user never has to consider the media as the result of real, physical processes, only a reflection of what they expect. Not only that, but it decides what is significant content, preventing the user from having to consider what it is about reality that makes content significant. Thinking of the future, if everything we do is aided by artificial guides and standard parameters, what will we all be doing that’s so important to photograph, beyond the state’s interest in surveillance?

The Pinterest Lens works in a similar way, but I don’t find it as offensive because it’s not a photography tool. It seems to serve more as a search engine that uses image data to associate concepts and things. In other words, this system would work similarly if seeing the photos were removed from the process altogether. You are pointing a device at an object or space, and finding posts about similar objects or spaces.

Bruce Sterling’s concept of future imaging as only computation is intriguing, and similarly unsettling. The implication is that this system of cloud computing is aware of the physical arrangement of matter, meaning whoever controls this system can observe any event taking place, no matter how private. It also means that they can observe people’s dreams, ideas, and feelings via the arrangement and activation of neurons in the brain. Similar things are already being implemented, using machine learning to reconstruct the image a subject is seeing from recordings of their brain activity.

Non-Euclidean Renderer

I was considering whether rendering engines are a form of photography. An ordinary 3D renderer simulates our own visual perception of real, physical space. If photography documents real light and real space, how does the staging of a set with models, props, and light sources expand the definition? Both 3D visual artists and photographers work by manipulating arrangements of color values to form specific representations; the difference is that photographers record physical light, while 3D digital artists don’t need physical light to form their arrangements (despite looking through an illuminated monitor the entire time).

Either way, I found a project by Charles Lohr where he implemented a unique 3D renderer with the ability to simulate non-Euclidean space. In Euclidean physical space (and most 3D renderers), straight lines remain consistently straight, and the volume a space occupies has perceivable consistency, not changing in relation to the space around it. In non-Euclidean spaces, it’s possible for relatively straight lines to appear inconsistent: lines which are similar or parallel across some areas may not be parallel across others. The engine made by Charles Lohr allows volumes of space to be “stretched” to varying dimensions. Examples in this demo include a tunnel that appears longer from the outside than it does traveling through its center, and houses that appear small from the outside and very large once you enter them. The engine uses ray tracing to generate images of how these theoretical spaces would look through our eyes.

Non Euclidean GPU Ray Tracing Test
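The core trick, distances that depend on where you are, can be sketched in one dimension: march along a ray and weight each step by a local stretch factor. This is my own toy stand-in for a spatially varying metric, not Lohr's actual GPU implementation:

```python
def marched_length(x0, x1, scale, steps=1000):
    """Approximate how long a segment 'feels' when space is locally stretched.

    Integrates the stretch factor along the ray using the midpoint rule,
    so a region with scale 3.0 takes three times as long to traverse
    as its outside appearance suggests.
    """
    dx = (x1 - x0) / steps
    return sum(scale(x0 + (i + 0.5) * dx) for i in range(steps)) * dx

# Hypothetical tunnel occupying x in [0, 1]; space inside is stretched 3x.
stretch = lambda x: 3.0 if 0.0 <= x <= 1.0 else 1.0

outside_length = 1.0                             # as measured from outside
inside_length = marched_length(0.0, 1.0, stretch)  # ~3.0 when walked through
print(inside_length)
```

A full renderer does the same accumulation per pixel in 3D, which is why a house can occupy one footprint from the street and a much larger one once the camera crosses the threshold.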

This project broke down some of my subconscious assumptions about 3D rendering. Ordinarily, 3D engines are designed to simulate physical stages as accurately as possible; it’s assumed that artists want to simulate or imitate perspective photography. For example, I never really considered how the perceived straightness of a line is an illusion resulting from how our space is composed, or how things that seem universally constant are only the result of specific circumstances. A 3D digital artist is able to directly control every pixel and create fantastical settings, yet it’s rare that this art strays from the low-level consistencies of perspective photography.