Typology Machine Project Proposal

A Collection of Liquids Going from Cold to Hot, then (if possible) Hot to Cold.

Sink Test

Shower Test

For my typology project, I am going to capture the thermal changes of liquids going from hot to cold, and then (if possible) cold to hot. I am interested in capturing not just the liquids, but also the surrounding environments that change in response to them. I feel like I often make art projects with a specific (political) aim in mind from the outset. For this project, I will try a different approach and use the time to document a phenomenon I found interesting while playing around and experimenting with this heat camera. The FLIR E30bx Thermal Camera will be the capture device. All of this is unclear and ‘invisible’ without this type of capture. I will let this inquiry drive my project instead of a more preconceived idea.

Questions I have:

  • Is this remotely interesting? Why do this?
  • How can I narrow or expand the scope of my inquiry?
  • A few technical/aesthetic questions:
    • In my examples above I am using timelapse; how are you responding to that? Is something being missed by the slow change of the sides? I have the original version on my computer and will also show it for comparison during our discussion.
    • What about seeing the heat information on the image?
    • The capture is only 160×120 pixels. I am capturing it by recording the live stream off of my computer; is the above imagery too grainy? Should I keep them smaller?
  • How am I going to display my typology project?
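On the graininess question: one option for display is to upscale the 160×120 frames with nearest-neighbor interpolation, which keeps the thermal pixels as crisp blocks instead of smearing them into blur. A minimal sketch using Pillow (the blank frame here is a stand-in for an actual FLIR capture, not real footage):

```python
from PIL import Image

# Stand-in for one captured thermal frame; the FLIR E30bx outputs 160x120.
frame = Image.new("L", (160, 120))

# Nearest-neighbor resampling duplicates pixels rather than blending them,
# so the low resolution reads as deliberate blocks instead of grain.
upscaled = frame.resize((160 * 4, 120 * 4), resample=Image.NEAREST)
print(upscaled.size)
```

The same call with `Image.BICUBIC` would instead smooth the frame, which is worth comparing side by side before deciding how large to show the clips.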

Places/things I will film:

  • a variety of sinks
  • a variety of showers
  • ice melting
  • water boiling
  • rivers/streams at sunset/sunrise
  • coffee dispensing from a large, shop-size dispenser
  • hoses
  • public and private locations of the above.
  • park faucet
  • eyewash station
  • water fountain
  • What about more gaseous liquid situations, like steam? Or car exhaust? Does this ‘count’?
    • OK, and some of these ideas towards the end here are just water dispensers, and the change in temperature might be very minute (if visible at all); should I include these? (Re: possibly narrowing or expanding my scope, from the question above.)

 

Some related things I found inspiring/interesting/surprising:

SEM – Dried Sweetened Orange Slice

For my visit to the Pitt Center for Biologic Imaging I cut a tiny slice of a dried sweetened orange slice.

Dried Sweetened Orange Slice-01

millimeter scale (above)

Dried Sweetened Orange Slice-02

nanometer scale (above)

anaglyph-01

anaglyph (above)

 

I was really blown away by this process! It was fascinating to be able to see something already so tiny even further up close. The larger squares in the first image are sugar crystals. In the more zoomed-in images, it looks like a weird, swiss-cheese moonscape. Upon my arrival I learned that there was some concern that this item would not work, as there may have been a small amount of moisture in even the dried fruit, but it did work (Donna tested this and knew this before my arrival). I am looking forward to seeing what other people captured. During my visit we discussed what objects/processes might work with these types of images and how photogrammetry might apply to them. We ended up discussing what bugs might look like under the Scanning Electron Microscope, and then the conversation turned to photograms.

 

Other photos below:

Dried Sweetened Orange Slice-03

Dried Sweetened Orange Slice-04

Dried Sweetened Orange Slice-05

Dried Sweetened Orange Slice-06

Postphotography? response

This article made me think about human gait recognition and its use as a means for biometric identification. This led me to stumble across this paper on free-view gait recognition, an attempt to solve the problem that gait recognition needs multiple views to measure and identify effectively, and the use of multiple cameras in city-wide security systems to build such identification. This combination of using security systems and then deploying algorithmic identification to pinpoint the specificity of human beings by the ways in which they move through such surveillance systems seems like an act of “nonhuman” photography that arises out of the combination of smaller systems of “nonhuman” photography. So I am interested in what other collisions of “nonhuman” systems of photography can intersect with each other to create unintended image portraits.

gait imagery

 

“Nonhuman photography can allow us to unsee ourselves from our parochial human-centered anchoring, and encourage a different vision of both ourselves and what we call the world.” I think that the above example concerned with gait identification is about trying to see something specific, and I am interested in the fragments and detritus that could be created from this (and other systems of collision). For example, in the picture below one can see the images compiled and the lines they create for the gait detection to be accurate, but this raises, for me, an aesthetic question: what does the imagery needed to identify people via their gait look like? What are the human portraits created out of surveillance and algorithmic identification?

 

 

Reading response #2

In contemporary captures I don’t consider anything objective. After reading about the history and development of different photographic processes, it seems it never was objective. What seems to have been objectively present was the quest for a capture situation that would be objective: a desire, in the western tradition of the enlightenment and its relation to the scientific method, to make something verifiable by reproducibility and replication. So, while there may be tools that can now generally reproduce roughly the same image in a repeatable way, it is not that the process is objective, but rather that the journey towards seeking something objective has resulted in something that is close to agreeable in a more generalized way. This idea about process in relation to identifying an objective image is different from a predictive one. While objectivity, I think, is inevitably wrapped up in the human-perception-centric sense of the creators of the image technology, I think there are at this point chemical processes for capturing light (or other capture techniques) that can be predictable, or likely to be repeatable. And is predictability what is tied to the veritas of objectivity? That is the question I would posit at the end of this reading.

ProjectToShare Reviews

 

I decided to look at and comment on Huw’s post about non-euclidean renders (and other alternative rendering systems) as a form of photography. In general, I am inclined to support an expanded understanding of photography – so if someone wants to claim it as photography, then I will hear out what they have to say. Thinking about non-euclidean render systems strikes me almost as more of a camera-making process that then takes photos via the render. It creates the system for the possible capture in the digital world and then renders it. It also reminds me of Vito Acconci’s work where he would walk down the street and snap ‘random’ photos on film and then develop them later. In a way, the outcome of the non-euclidean render system might not be identifiable until the render, much like the chance photography of Vito Acconci (and others).

Here’s an example of someone’s ‘impossible space’ render system created for Unity.

The projects I looked at:

 

‘Triple Chaser’ by Forensic Architecture

‘Triple Chaser’ by Forensic Architecture

Forensic Architecture's 'Triple Chaser,' model being trained

In 2018 the US fired tear gas canisters at civilians at the USA/Mexico border, and images identifying these munitions as ‘triple chaser’ tear gas grenades emerged. These are produced by the Safariland Group, which is owned by Warren B. Kanders, who is the vice chair of the board of trustees of the Whitney Museum of American Art. Why mention all of this?

During the 2019 Whitney Biennial Exhibition, Forensic Architecture decided to create a camera that employed machine learning and computer vision techniques to scour the internet and determine in what conflicts (often states oppressing people) these tear gas canisters were being used. This information is not easy to find, and there were not even enough images of the canisters available to create an effective computer vision classifier, so they had to build models and make their own training data so the classifier could work and then be used to identify real-world examples of where the canisters were being used. The results and process of this investigation were then shown at the exhibition. They eventually withdrew the work from the exhibition in response to inaction on their findings.

What I love about this work is the use of inventive technology to create a solution for justice-oriented journalism/activism that would be impossible without the tools it created. There would be no other way to sift through endless footage to put together a damning case about the widespread use of these tear gas devices and their relationship to a board member of the Whitney without creating a process, or a performance, that can operate on its own. Also, the idea of creating the data needed to mine real-world data in an effective way seems like an interesting process for thinking about capture and how to deal with intentionally created blind spots. How do you teach something to see as an act of art and activism?

Link to the video they created and presented is here.

Link to their methodology and explanation for creating the object identifier here.

 

 

Amodei’s response to The Camera, Transformed by Machine Vision

In some of the speculative cameras described in the article, the user/operator and camera/sensor relationship moves very far away from the traditional relationship that point-and-shoot cameras had between their user and camera. One way in which this seems to be happening is along the lines of the agency of the situation. In more traditional photo cameras, the agency of the capture is at the behest of a user’s specific action and choice to engage with the camera – simply having access to the device does not allow a capture to occur. In some of the situations discussed by Ervin, the agency shifts to the beginning of the situation, where the terms for a potentiality of photographic situations are created. If one has a camera that can take pictures on its own, learn from its actions, and is always operating, then the last opportunity for the agency of the capture (in the 20th-century sense of the photographic capture) occurs at the onset instead of at the moment of the performative ‘click’ of the camera. So here we have a situation where the device is always performing at the user’s initial request, and performing on its own in a way that used to require a union of performative relations between person and machine.

This question of the performative agency happening at the beginning of the situation opens up larger questions about the relationship to agency amidst  the user/operator and cameras/sensors relationship as cameras out in public begin to take on the computational qualities described by Ervin. What happens to the relationship when the consent of situational agency is removed by increased proliferation of these devices in an unregulated manner that arises out of the conditions of America’s late capitalism? When will this capture performance start? What is the Amazon Ring gonna do to us (performatively)?