Postphotography

I wasn’t totally sure about other examples of postphotography, so I looked at some of my classmates’ responses, a lot of which were astronomy-related. This made me realize a cool example — how astrophotographers colorize their photos of space, especially of galaxies and phenomena light-years away. We obviously cannot see these objects because they are too far away, but even if they were close enough, our eyes still wouldn’t be sensitive enough to perceive space in much color. The technique astrophotographers use to add color to their images is similar to what we did in Photoshop to create the anaglyphic versions of our SEM images. They take one set of three different photos of space, each one filtering only red, green, or blue light, and often a second set as well, which filters for light emitted by hydrogen, oxygen, and sulfur (I’m not exactly sure how that part works). Then they are combined to create a single RGB image. Read here.
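
As a rough illustration of that compositing step, here is a minimal sketch of stacking three aligned, filtered grayscale exposures into one RGB image with numpy. This is my own toy example, not any astrophotographer’s actual pipeline; the filenames and the simple percentile stretch are assumptions.

```python
import numpy as np
import imageio.v3 as iio

def stretch(channel, lo=1, hi=99):
    """Percentile stretch: map one grayscale exposure into the 0..1 range."""
    low, high = np.percentile(channel, [lo, hi])
    return np.clip((channel - low) / (high - low), 0.0, 1.0)

# Three aligned grayscale exposures, one per color filter (hypothetical filenames).
red   = stretch(iio.imread("galaxy_red.png").astype(float))
green = stretch(iio.imread("galaxy_green.png").astype(float))
blue  = stretch(iio.imread("galaxy_blue.png").astype(float))

# Stack the filtered exposures into one RGB image and save it.
rgb = np.stack([red, green, blue], axis=-1)
iio.imwrite("galaxy_rgb.png", (rgb * 255).astype(np.uint8))

# The narrowband set combines the same way, just with a palette choice:
# the "Hubble palette", for example, maps sulfur -> R, hydrogen-alpha -> G, oxygen -> B.
```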

As for Zylinska’s paper — I appreciate how she has invented a subcategory for this kind of media, as I don’t think photogrammetry, LIDAR imaging, etc. should be grouped with traditional photography. This may just be my biased opinion as a relatively experienced photo-taker (“photographer” these days seems to be a loaded label), but as I said in a previous response, I believe the relationship between the human and non-human in postphotography isn’t as close as Zylinska claims, and this paper somewhat scares me as a result. I find myself defensive on analog photography’s behalf. I am absolutely supportive of exciting technology, and a lot of my own art uses it, but my wish is that all these data-driven capture techniques get their own name, one without “photography” in it.

SEM Experience

I crushed up some pills of different drugs into a single powder. I was curious if the difference between chemical compositions would be obvious when viewed at this scale. It is not.

I had an incredible time at the SEM. Donna is super sweet, she explained everything very carefully to me, and chatting with her about her background in microscopy-related subjects was almost as cool as my specimen itself. This experience opened up a whole new world of photography to me, and honestly just a whole new world. I understand my existence as a human differently now–suddenly space seems bigger, and I want to explore literally everything around me at a molecular level.

Photography and Observation

In the context of this class, a capture is synonymous with a piece of data. It seems, then, that any collection of data could be considered a typology. For this reason I think it is inaccurate to consider “capture” a single medium–perhaps it generally categorizes the devices we use to make typologies, but I think the type of data itself should be considered the medium.

The article’s definition of objective seems to be “a recognizably real representation” of the world. By this definition, I don’t think many contemporary capture technologies create typologies that are necessarily recognizable or representational, but they are definitely real. This makes me realize it’s interesting to think about what is “objective” and what is real. Is reality the realness of objects in physical space? Or of our eyes perceiving the signifier and our brains interpreting that as a signified? Or of the schema we’ve learned to remember at the cognition of the object? Does an object have to be a physical thing, or can it be a thought? There are layers and layers of reality.

The question of reliability is a bit more complicated. I think it comes down to the purpose of the typology. If the data in the typology was created purely for measurement, then it is reliable: it exists only to describe, and it is generally predictable because it describes reality. But when capturing became an art, and emotion and interpretation got thrown into the mix, art typologies could no longer simply exist; they commented, caused, and affected. A typology created for art has opinions of its own, and it shows the world through a filtered lens.

Addition

In response to Izzy, who shared Learning to See:

I remember seeing Memo Akten’s lecture as well, but that happened when I was a wee first-semester freshman, unknowledgeable and unappreciative of the power of machine learning. I had completely forgotten about this project, and I’m delighted that viewing it for a second time felt completely different. Maybe I’ve just become a more philosophical person within the past year and a half in general, but this video creates a rollercoaster of emotion in me that did not happen back then. There was a strange empathy and attachment I felt for it; I could almost feel a humanness coming from the program because I was watching it learn, be resourceful, and try. It felt like I was witnessing a child do something for the first time, and I was proud of it. But then I snapped back to reality and realized that a computer was behind all of this, and it made me kind of fearful. It shows how close we are to faking humanness in computers, which is a dangerous place to be. Even though using machines to make art may be inspiring and beautiful on a surface level, it’s important that we also consider the repercussions of these tools. It’s ironic and terrifying to think that some random experimental media artist could be the culprit of the singularity.

Izzy, I know you’ve been at the STUDIO a while so you may have already thought about this, but I also want to note that Learning to See reminds me of Xoromancy, a collab by Gray Crawford and Aman Tiwari, two CMU alumni. I wonder now if the two of them took inspiration from Akten, as Learning to See was made a year or so before Xoromancy.

Other posts I read:

Steven

Huw

Jacqui

Stacy

Response: The Camera, Transformed By Machine Learning

When making our list of expectations of “cameras” and “capturing”, we discussed the idea that there has to be a human photographer when an image is captured. While I agree this is no longer strictly true, I don’t think owning an image-capturing machine makes someone talented, and I don’t think machine-captured images can be beautiful, because they are not purposeful. Projects like Google Clips eliminate the need for a photographer, which, as one myself, really upsets me. I already get annoyed by the number of people who claim they are “photographers” but hardly know how to handle a DSLR off of auto, and I think smart cameras would create even more of these people. I imagine that training a neural net to take “beautiful” photos (well-composed, well-lit, with strong subjects) isn’t that far off, and it’s reasonable to assume those who own such cameras would claim the computational excellence as their own expertise. It’s important to highlight the difference between ownership and authorship here–I think owning a device that takes a photo, and even using it, does not make you a photographer. It would feel so wrong to credit the owner of a smart camera for a “beautiful” photo–in that situation, the human hardly even touches the camera. But I realize the opposite claim, that the camera should get credit for its own photos, is not necessarily correct either, because the camera has no intuition about beauty–its capturing strategy comes from an algorithm. There is no reason why the camera takes a photo other than that it senses something it recognizes. And the people who coded that algorithm shouldn’t get credit for the beauty; they weren’t even at the scene of the photo. The smart camera seems to take authorship, or uniqueness, out of photography, which is essentially what makes photography a valid art practice. In my opinion a good photographer is one who provides a unique perspective and voice to the world through their images. The smart camera will strip the world of these unique perspectives. That makes me so sad.
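
To make concrete what I mean by the camera “sensing something it recognizes”: below is a toy sketch of what a smart camera’s trigger loop might look like. The aesthetic-scoring model here is a stand-in stub, and none of this reflects how Google Clips actually works; the point is just that the “decision” to shoot reduces to a score crossing a threshold.

```python
import numpy as np

def aesthetic_score(frame):
    """Stand-in for a learned model that rates a frame's 'beauty' from 0 to 1.
    A real smart camera would run a trained neural network here; this is a stub."""
    return float(np.random.rand())

def capture_loop(frame_source, threshold=0.9):
    """Keep any frame whose score crosses the threshold.
    The 'decision' to shoot is just a comparison against a number."""
    keepers = []
    for frame in frame_source:
        if aesthetic_score(frame) >= threshold:
            keepers.append(frame)
    return keepers

# Fake stream of 100 random "frames" standing in for a live viewfinder.
frames = (np.random.rand(480, 640, 3) for _ in range(100))
best = capture_loop(frames)
print(f"captured {len(best)} frames, no photographer involved")
```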

My Admiration — Shining360, Claire Hentschker

Claire Hentschker, a former CMU art student, makes a lot of work with photogrammetry. A couple years ago she used this method to reconstruct a 360 video version of the hotel from Kubrick’s The Shining. She ran roughly 30 minutes’ worth of frames through photogrammetry software and stitched the reconstructions together, creating an interactive exploration of the scenes in the movie (either in video or VR form). The path along which the user is taken is the same as that of the camera in the movie, but the user is free to look around in any direction. The black space indicates there was no documentation of that area in the original footage.

This project is interesting to me because it made me realize that photogrammetry is kind of like the reversal of photography. Taking a photo flattens kinetic, 3D reality into static 2D, while running photogrammetry on 2D photos projects them back into an interactive 3D space. The two allow the user to jump back and forth between dimensions. I’m curious what would happen if one completed many cycles of taking screenshots of a photogrammetry VR environment and then passing them through photogrammetry software, creating yet another VR environment.
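
Here is a sketch of what that feedback loop could look like, assuming COLMAP is installed for the reconstruction step. The render_views helper is a placeholder I made up, since grabbing screenshots of the reconstructed scene would depend on whatever viewer or renderer is used.

```python
import subprocess
from pathlib import Path

def reconstruct(image_dir: Path, workspace: Path) -> None:
    """Run COLMAP's automatic reconstruction on a folder of screenshots.
    Assumes the `colmap` command-line tool is installed and on PATH."""
    workspace.mkdir(parents=True, exist_ok=True)
    subprocess.run([
        "colmap", "automatic_reconstructor",
        "--workspace_path", str(workspace),
        "--image_path", str(image_dir),
    ], check=True)

def render_views(workspace: Path, out_dir: Path) -> Path:
    """Placeholder: take 'screenshots' of the reconstructed scene from new
    viewpoints, e.g. with a headless 3D viewer. Entirely hypothetical."""
    raise NotImplementedError("swap in a real renderer here")

def photogrammetry_cycles(start_images: Path, cycles: int = 3) -> None:
    """2D -> 3D -> 2D -> 3D ...: each generation is reconstructed from
    screenshots of the previous generation's reconstruction."""
    images = start_images
    for i in range(cycles):
        workspace = Path(f"generation_{i}")
        reconstruct(images, workspace)
        images = render_views(workspace, Path(f"screenshots_{i}"))
```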