Photogrammetry – Watch or Egg?

For the photogrammetry workshop I took six photos of my broken watch to see how the software would reconstruct a model from so few images. Though the result resembles an egg or some other amorphous object, it is still a reasonable 3D model of the general attributes of the scene. I’ve never used these kinds of techniques before, but it’s exciting to think about the possibilities for robotic control and navigation – all you would need is a moving camera to generate a rough model of a space that could be good enough for navigating it.
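As a rough illustration of what this kind of software is doing under the hood (not the exact pipeline the workshop tool uses), the sketch below matches features between two overlapping photos with OpenCV, estimates the relative camera motion, and triangulates a sparse cloud of 3D points. The filenames and camera intrinsics are made up for the example.

```python
import cv2
import numpy as np

# Two overlapping photos of the object (hypothetical filenames).
img1 = cv2.imread("watch_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("watch_02.jpg", cv2.IMREAD_GRAYSCALE)

# Assumed pinhole intrinsics; a real pipeline would calibrate the camera.
K = np.array([[1200.0, 0.0, img1.shape[1] / 2],
              [0.0, 1200.0, img1.shape[0] / 2],
              [0.0, 0.0, 1.0]])

# Detect and match ORB features between the two views.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the relative camera pose, then triangulate a sparse point cloud.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
points_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points_3d = (points_h[:3] / points_h[3]).T  # N x 3 sparse reconstruction

print(f"Recovered {len(points_3d)} 3D points from two views")
```

A full photogrammetry package repeats this across every pair of views, refines the result with bundle adjustment, and then builds a dense mesh – which is part of why six photos of a shiny watch still come out looking like an egg.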

Review Project to Share

I thought the project Variable 4 by James Bulley and Daniel Jones was extremely interesting in its use of data, sometimes unseen, to create music. A similar project is the musical work of Brian Foo, who creates songs from data, often data that goes unseen. He has built up a substantial catalog of songs, but I find Music Eclipticalis especially intriguing: Foo uses the positions of stars to generate the piece. Not all of his work is groundbreaking or pushed to its farthest limits, but he has a lot of interesting ideas in this realm of music from data, so I’d recommend checking him out.

Website: https://datadrivendj.com/
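This is not Foo’s actual process, but a toy sketch of the same idea: take a handful of made-up star positions, sort them by right ascension so they play out in “sky order,” map declination to pitch, and write the result to a short WAV file.

```python
import wave
import numpy as np

# Hypothetical star data: (right ascension in hours, declination in degrees).
# Foo's piece draws on a real star catalog; these values are invented.
stars = [(0.5, 29.1), (2.1, -8.2), (5.9, 7.4), (10.1, 11.9), (14.3, -60.8)]

SAMPLE_RATE = 44100
NOTE_SECONDS = 0.5

def declination_to_frequency(dec_degrees):
    """Map declination (-90..90 degrees) onto two octaves above middle C."""
    semitone = round((dec_degrees + 90) / 180 * 24)  # 0..24 semitones
    return 261.63 * 2 ** (semitone / 12)             # C4 is roughly 261.63 Hz

# Sort by right ascension so the stars become a melody in "sky order".
samples = []
for ra, dec in sorted(stars):
    freq = declination_to_frequency(dec)
    t = np.linspace(0, NOTE_SECONDS, int(SAMPLE_RATE * NOTE_SECONDS), endpoint=False)
    samples.append(0.3 * np.sin(2 * np.pi * freq * t))

pcm = (np.concatenate(samples) * 32767).astype(np.int16)

with wave.open("star_melody.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)          # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(pcm.tobytes())
```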

Projects I Reviewed:

  • Unpainted Sculpture by Charles Ray
  • Escape Route by Dirk Koy
  • The Clock by Christian Marclay
  • Marmalade Type by Rus Khasanov
  • Variable 4 by James Bulley and Daniel Jones

Photogrammetry – Crystal Bird

I brought in a crystal bird figurine to use in the photogrammetry workshop, before I found out that shiny & transparent things are literally the worst possible subjects to use. Whoops!

You can see the thumbnails of the real thing in this photo:

And here’s how it turned out:

So…kind of a failure, but also kind of fun that there’s at least a little stump-like thing showing where the bird is *supposed* to be.

‘Triple Chaser’ by Forensic Architecture

Forensic Architecture's 'Triple Chaser,' model being trained

In 2018 the US fired tear gas canisters at civilians at the USA/Mexico border, and images emerged identifying these munitions as ‘Triple Chaser’ tear gas grenades. They are produced by the Safariland Group, which is owned by Warren B. Kanders, who is the vice chair of the board of trustees of the Whitney Museum of American Art. Why mention all of this?

During the 2019 Whitney Biennial, Forensic Architecture created a kind of camera that employed machine learning and computer vision techniques to scour the internet and determine in which conflicts (often states oppressing people) these tear gas canisters were being used. This information is not easy to find, and there weren’t even enough images of the canisters available to train an effective computer vision classifier, so they had to build models of the canister and generate their own training data before the classifier could work and be used to identify real-world examples of where the grenades were being deployed. The results and process of this investigation were then shown at the exhibition. They eventually withdrew the work from the exhibition in response to inaction on their findings.
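Forensic Architecture’s actual pipeline is more involved (and documented in the methodology link below), but the general move of training a classifier on synthetic imagery can be sketched roughly like this, assuming a hypothetical folder of rendered images split into ‘triple_chaser’ and ‘background’ classes:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical layout: synthetic_renders/{triple_chaser,background}/*.png
# The renders stand in for scarce real-world photos of the grenade.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("synthetic_renders", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and retrain only the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# The fine-tuned classifier can then be run over scraped footage to flag
# frames that may contain the canister for human review.
torch.save(model.state_dict(), "triple_chaser_classifier.pt")
```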

What I love about this work is the use of inventive technology to create a solution for justice-oriented journalism/activism that would be impossible without the tools it created. There would be no other way to sift through endless footage and put together a damning case about the widespread use of these tear gas devices, and their relationship to a board member of the Whitney, without creating a process, or a performance, that can operate on its own. The idea of fabricating the data needed to mine real-world data effectively also seems like an interesting way to think about capture and how to deal with intentionally created blind spots. How do you teach something to see as an act of art and activism?

Link to the video they created and presented is here.

Link to their methodology and an explanation of how they created the object identifier is here.

Reading01

The Camera, Transformed by Machine Learning

The act of taking a photo is a very personal, intimate thing. It is a mechanical representation of your perspective, adjusted and dialed in to best record what you are seeing. It is an act of labor that extends past the body in order to capture that moment in time. The camera is both tool and partner in the creation of the image, and often allows the user to extend their vision in ways that biology cannot (zoom, exposure, depth of field, etc.). With cameras becoming more autonomous, I believe that the relationship between user and tool remains intact. Perhaps it has moved into more of a platonic partnership than an intimate romance, but the authorship remains the same: artworks are credited to the user and their materials. The more autonomous the imaging system, the more dependence and trust the user has to place in it. As machine learning advances, we may have to credit these systems as full-fledged co-authors.