Reading02-Tagbamuc

Thinking back to the era of scientific observation before photography, I can imagine that limited capture methods would make typologies less effective if the goal is to compare slight variations. The reading notes that recording sketches or notes required looking away from the subject, whereas today we can easily snap something we wish to observe at a later time. If someone were to record by hand how a bunny reacted to different vegetables, they would miss crucial moments of the experiment because they had to look away, and those moments would be absent from the documentation; or the time it took to sketch the subject would draw attention away from the experiment as a whole. In addition, the entries might vary in consistency, making comparison challenging if one is unsure whether a variation is a human error or actually existed at the moment of capture. The camera does indeed allow uninterrupted viewing and capture, but the variation that human error introduces into the product can, in fact, be the machine that drives the typology. We saw this in Kim Dingle’s study of students’ drawings of the United States; if she had asked the students to capture an image of their town with a camera, we would lose a significant element of the typology that only exists because of human error and an imperfect understanding of what the United States looks like.

 

As for contemporary captures being predictable, I think that’s kind of the point of typologies: to be predictable (at least on a basic level). They challenge you to pay attention to what each entry has in common with its peers and to spot the variation. In Aertryck & Danielson’s Ten Meter Tower, the audience picks up on the pattern and predictability of what the next subject will do, but that’s also part of the amusement. I wouldn’t call it scientifically reliable, since the capture covers only a small sample that probably wasn’t chosen to represent a broader population.

Project to Share Review: Marianne

I wanted to expand upon Marianne’s “Project to Share” entry on Olafur Eliasson’s Water Pendulum. I agree that this work is phenomenal. After the photogrammetry workshop yesterday with Claire Hentschker, my mind kept going back to the lightning bolt she showed us that was captured from two different angles and turned into an object. This got me wondering whether the stills from this video could be used to make a 3D capture. I’m not sure it would be possible using the photogrammetry methods we learned in class on 1/21, because the water is in motion, but it’s an interesting thought to ponder; a rough sketch of the first step, pulling stills out of the video, is below.
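Out of curiosity, here is a minimal sketch of that first step: extracting still frames from the video so they could be handed to photogrammetry software. This assumes OpenCV is available; the clip name water_pendulum.mp4 and the every-tenth-frame interval are just placeholders, and whether the moving water would actually reconstruct is a separate question.

```python
# Minimal sketch: pull still frames from a video for a photogrammetry attempt.
# Assumes OpenCV is installed; filename and frame interval are hypothetical.
import cv2

video = cv2.VideoCapture("water_pendulum.mp4")  # hypothetical source clip
frame_index = 0
saved = 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    # Keep every 10th frame so consecutive stills aren't nearly identical.
    if frame_index % 10 == 0:
        cv2.imwrite(f"still_{saved:04d}.png", frame)
        saved += 1
    frame_index += 1

video.release()
print(f"Saved {saved} stills")
```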

Other projects reviewed:

Steven

Lumi

Joyce

Oscar

Joseph

 

 

Reading01-The Camera, Transformed by Machine Learning

The article notes that computers aren’t exactly learning to “see” the way humans do. Instead, they’re learning to “see” like computers, and that’s precisely what makes them useful to humans. Pinterest Lens is one example: an on-the-go research aid that functions as an active database, recording what it “sees” and providing feedback. Although it uses a camera, I’d say the computer is merely using it as a communication device with the human. The camera simply gives the computer “eyes” to assist us with our query, instead of just a brain and words the way a search engine would.

This ever-growing fear of machine learning and of computers with agency superior to humans is pinpointed in a design decision on the Google Clips camera: the designers included a camera button not for functionality, but for the familiarity and comfort of the user. Ordinarily, this makes sense, as any design should appeal to its target audience. In this case, though, a device that doesn’t require human input is a terrifying thought for many; we see this in shows like Black Mirror too. Humans like to feel in control and to feel that they accomplished something when capturing a photograph, or when using any kind of technology. There is value placed on the act of creation, or authorship. I wouldn’t say cameras yet have any degree of independent agency that isn’t pre-programmed by humans, but they are certainly advancing and becoming smarter tools in our everyday lives. The camera’s trajectory seems to be following the advancement path of the telephone.

Project to Share: Quayola & Memo Akten, “Forms”

For the 2012 London Olympics, Quayola and Memo Akten collaborated on Forms, a piece that uses motion capture, animation, code, and footage of Olympic events to create dynamic electronic art. The piece begins with static images from the video, and out of context they don’t look like much. The forms are then set in motion, and the viewer can clearly track the human body through space. In the original installation, the source footage was displayed on a small nearby screen rather than embedded in the corner of the work. The piece was on display at the National Media Museum in Bradford, England, in 2012.

Original Source

Reference

I’m always fascinated by the human body and its capabilities, and this work really showcases that! This piece was so intriguing to me because it abstracts the body within the frame and captures the essence of the athlete’s feat by emphasizing the motion footprint as the body moves through space. I’m not sure whether the artists did this intentionally, but I found it interesting that in some of the shots it’s not obvious where the body is or what the action was, while in others it’s much clearer. I’m curious how my viewing experience would change if I were able to observe the piece without the source playback.