Photogrammetry Workshop

I realized I forgot to post my work from the photogrammetry workshop with Claire Hentschker a few months ago.

Weirdly enough, I’ve tried photogrammetry-ing myself a couple of times since, but the results have never been this good. I think setting the quality to medium is actually best.

Person In Time: Draft Ideas

  1. Use photogrammetry on home videos from the early 2000s to reconstruct the actions of a deceased family member, whose memory is fading from my mind.
  2. Do some sort of data visualization (t-SNE/UMAP, a GAN, a searchable archive?) on 20K+ images downloaded from the Tumblr I used consistently from ages 13-19.
  3. Do something with my girlfriend’s cooking skills. Maybe I could film her through the heat camera while she sautés some crazy thing, or film the fermentation of kimchi for a timelapse…
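For idea 2, the core of the visualization step is embedding high-dimensional image features into 2D for plotting. Here is a minimal numpy sketch using PCA as a simple stand-in for t-SNE/UMAP (which capture nonlinear neighborhood structure much better); the random matrix stands in for real image features, which in practice would come from something like a pretrained CNN, with `umap-learn` doing the projection:

```python
import numpy as np

def embed_2d(features: np.ndarray) -> np.ndarray:
    """Project an (n_images, n_features) matrix to 2D via PCA.

    A linear stand-in for t-SNE/UMAP: project onto the two
    directions of greatest variance in the feature space.
    """
    centered = features - features.mean(axis=0)
    # Principal axes are the right singular vectors of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T  # (n_images, 2) coordinates for plotting

# Fake "image features" standing in for embeddings of the Tumblr images
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 64))
coords = embed_2d(feats)
print(coords.shape)  # (200, 2)
```

The resulting `coords` could then be scattered with thumbnails at each point to make the browsable image map.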

f(orb)idden orb

A completely self-centered world emerges from a mirrored sphere. You cannot escape the center of the orb.


Cognitive neuroscience researchers studying spatial cognition have identified two frames of reference that humans use to understand space: allocentric (world-based) and egocentric (self-based). These drawings depict the ultimate egocentric frame of reference: a spherified world emanating from the viewer at its center.

I used the NeoLucida portable camera lucida to project the contents of a mirrored sphere onto paper. I traced the projection in pencil to capture the rough location and distortion of objects in the world. Then, I inked in the sketch freehand, filling in details as I had seen them in the camera lucida and the photographs I took through it.

Objects perpendicular to my gaze (i.e. to the side of the orb) became stretched beyond recognition. Even small things, like the bottle in the third image, were spaghettified, as if the image were an inverse black hole. Some objects, no matter how hard I looked, were simply unidentifiable. Dents and other imperfections of the orb distorted the image, creating miniature whorls and spirals throughout the scene.

Preliminary sketch of the orb as it sat placidly in a floral porcelain bowl.

I wanted to see how different environments would appear in the orb, and how it would distort the human form into something unnerving yet familiar. Golan pointed out the unintuitive (at least to me) fact that one’s vantage point (eye or camera lucida) will always be located at the exact center of the sphere. This is oppressive, but unfortunately cannot be stopped.

Pathological mental states like anxiety separate the brain user from their environment, creating a whirlpool of self-centered obsessions and paranoias. Sometimes, one’s own brain is the only thing that doesn’t feel distorted and unfamiliar, when in fact it is more beneficial for the self to open up and bleed into the environment. The second image shows how sheer force of will allowed me to escape the reverse singularity of the orb. Unfortunately, my facial features did not stay intact and it was somewhat painful.

Measurements used to draw a circle 9″ in diameter. Also note that the camera lucida should be level with the equator of the orb in both dimensions. Lights can be used to equalize illumination; in this picture I have several lights pointed at the ceiling.

If I were to do this project again, I’d have a more methodical setup from the start. I’d also develop a more portable/generalizable and less opportunistic rig, although clipping the camera lucida to whatever was available in the environment did help immerse it into the scene. Each drawing was relatively time-consuming, but I would love to make some more.

Reading 03

As discussed in the reading, photograms (silhouette images made using light-sensitive paper) are a form of nonhuman photography. Bruce Conner, a pop/abject/assemblage artist active in the ’60s-’80s, made a series of photograms called Angels capturing the human figure in unfamiliar, ghostly poses. Due to the scattering of light, parts of the body that were closer to the capture surface left a brighter mark, making it look like the figures were glowing or emitting superhuman power. Despite the human subjects, the title (literally a divine being) and technique make this work nonhuman.

Bruce Conner Angel photogram

Zylinska writes that postphotography helps us see the medium as “an assemblage of ‘surface-marking technologies'” rather than a “semiotic/indexical understanding of images.” This wording was familiar to me from more traditional art classes where we were taught to think about 2D art as ‘imagemaking’ rather than separating out categories like drawing, painting, and such which can be constrictive.

I agree with Zylinska that computational and algorithmic techniques are not changing photography fundamentally; rather, they are “intensifying [the nonhuman] condition”. Computation has expanded the field and is helping make complex photographic techniques more accessible. It’s true that computing has enabled three-dimensional, generative, and extrasensory imaging, but at the highest level these practices have existed for a long time (e.g., if you conceive of Sol LeWitt’s instructional wall drawings as generative).

Reading #2

This reading covered a myriad of capture technologies and how the variability and idiosyncrasies of each technique influenced the result, concluding that photography is far from ‘objective.’ In contemporary capture, the medium is far more predictable than it was over a century ago. We no longer have to deal with film irregularities and the like, and cameras with simple UIs are available to anyone with a cellphone rather than only to experts. One quote I particularly enjoyed described how photography was seen as a rational, enlightened pursuit:

“Observing was considered an art, reserved for those exceptional, diligent and above all sharp-eyed devotees of the natural world and the heavens.”

As pretentious and idealistic as these concepts seem today, I understand the defensive attitudes of individual photographers and the discipline as a whole. Considering the scientific method was not as widely accepted as the foundation of truth (not that it necessarily is now), any lapse in the objectivity and reliability of measurements, like wobbly photos of the moon, could undermine the entire endeavor. It’s interesting to imagine what photography would look like today if it had originated as a tool of religious and faith-based belief systems rather than scientific ones.

However, photography is no more objective than it ever was. If it weren’t painfully obvious that images can be used to manipulate ‘truth,’ we wouldn’t have this highly complex political cartoon (only intellectuals can understand, sorry).

There genuinely are different kinds of truth. One of my favorite ideas from the reading was botanical illustrations that showed a flower in all stages of growth, collapsing time and idealizing a living thing:

“These images were true and scientific representations of nature, just like photographs, but they relied on an entirely different understanding of the truth claim.”

Finally, a quote on what makes a photograph interesting. Maybe it’s the failures of a photograph as a tool that make it a good way to make art.

“Measuring photographs that also offer accidental pictorial details are, on the other hand, more likely to survive in archives.”

Response: Non-line-of-sight Camera

I wanted to respond to Spoon’s post on the Non-Line-of-Sight Camera, one of the more technical projects shared, but one that nonetheless has artistic and philosophical implications. As the spoon pointed out, this technology has many practical applications, but it also changes how we think about cameras capturing the reality that is ‘available to’ the photographer. What if all possible views of reality become available?

Recently, I read an incredible book of science fiction short stories by Ted Chiang, Exhalation. A couple of the stories deal with how it’s essentially impossible to reconcile the predetermined nature of the universe with notions of free will. It’s a bit of a stretch, but a camera that can decode reflections is on the way to simply modeling the position of every atom in the universe. Once we know all those positions (setting aside the information-storage problem, like in the Borges story with the map), we’ll be able to determine all their future positions and thus know everything that has happened and will happen. Anyways, that’s my startup idea based on this research.

Other Projects I Reviewed:

Project to Share: Learning to See

Learning to see: Gloomy Sunday from Memo Akten on Vimeo.

I didn’t want to post GAN art because it’s very predictable of me, but I thought it was interesting how the ‘Camera Transformed’ article conceived of them as capturing unseen possible aspects of reality through the latent space of an image dataset.

In Learning to see, artist Memo Akten trained a generative adversarial network (GAN), a type of machine learning model, using the Pix2Pix (image-to-image translation) technique. With Pix2Pix, the model learns a statistical mapping between representation A of an image and representation B. In this case, Akten used photographs of plants and landscapes as one representation and line drawings of the same scenes (generated by a simple edge-detection algorithm) as the other. Thus, the trained model could pick up the general edge arrangement of a mundane scene, like car keys and wires on a table, and translate it into the visual representation it was familiar with (natural scenes).
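The edge-drawing half of each training pair can be approximated with a basic gradient filter. This is a minimal numpy sketch using Sobel kernels and a threshold; Akten’s actual edge detector isn’t specified here, so treat this as illustrative of the general idea (photo in, line-drawing-like edge map out):

```python
import numpy as np

def sobel_edges(img: np.ndarray, thresh: float = 0.25) -> np.ndarray:
    """Crude edge map: Sobel gradient magnitude, normalized, then thresholded.

    Each (photo, edge map) pair would then train the Pix2Pix mapping
    from edges back to a plausible natural-looking image.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")  # replicate borders so output size matches
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()  # horizontal gradient
            gy[i, j] = (patch * ky).sum()  # vertical gradient
    mag = np.hypot(gx, gy)
    mag /= mag.max() or 1.0  # normalize to [0, 1]
    return (mag > thresh).astype(np.uint8)

# Toy "photo": a bright square on a dark background
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
edges = sobel_edges(img)  # 1s trace the square's outline, 0s elsewhere
```

A real pipeline would vectorize the convolution (or just use an off-the-shelf detector like Canny), but the input/output relationship is the same.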

Philosophically, the piece is interesting because it shows how preconceived notions of reality affect capture: art is never neutral. As Akten notes, the model “can only see what it already knows, just like us.” For this model, even a decidedly un-sublime environment doesn’t incite boredom and despair. Learning to see suggests seeking beauty and inspiration even in the winding shape of an electrical cord.

Reading 01

One of the assumptions we compiled in class was something along the lines of ‘photos are an intentional choice made by the author.’ The way this article conceives of certain image-making systems like GANs or Google Clips shows how this assumption can be challenged (beyond just ‘accidental photos’ in the sense of your finger slipping on the phone camera’s shutter button). As in all algorithmic art, the author concedes aesthetic control but retains a certain degree of conceptual control. Though the author themself doesn’t decide exactly what dog a GAN will generate or what Google Clips will deem salient, the author (or the authors who wrote the program) is still setting the rules for what the system will do. The conceptual space of possible images is limited.

I’ve never understood the concept of ‘creativity’, but maybe in order to have true authorship, there must have been the ability to make something else instead. A GAN (in 2020, at least) can’t be a true author because it doesn’t make true choices (and therefore, by some definitions, is not conscious). (Also, I don’t know where to draw the line on what makes a ‘true’ choice given the predictability of human behavior, but I definitely don’t think we’ve reached it.) This is why the whole Obvious/Christie’s thing was so silly. AI is not yet a creator; it is still a tool for creators, because its ‘own’ decisions are in fact highly predetermined. Actually, my whole argument is falling apart since I don’t believe in free will, never mind.

As the theorist and artist Grimes once noted, we may be nearing “the end of human-only art.” I’m not sure how we’ll know when it happens, exactly, but there will be a point when it can no longer be denied that AI cameras are making their own decisions, and have thus become photographers.