Project To Share – The Enclave, 2013

Richard Mosse, The Enclave, 2013

This film was shot on 16mm color infrared film, a technology developed primarily for camouflage detection and used by the military during World War II. The film stock is able to capture infrared light, which is invisible to the human eye, and renders the infrared reflected by the chlorophyll in green plants, painting them in bright pink. The artist talks about his choice to look at the humanitarian crisis in the Congo, which is “invisible” to the rest of the world, using a film that “registers the invisible, and makes visible the unseeable”.

I first saw this work presented at the Portland Art Museum years ago, and I find myself continually returning to it. I am drawn not only to the beauty of the images, and to the way that simply changing a color in a landscape so dramatically shifts the perception of the world it portrays, but also to the way this draws a viewer into the subject matter. I am not entirely sure where I stand on whether this lens, and the ‘otherness’ it creates, appropriately approaches the crisis it is observing (especially since the work was created by an artist outside of the conflict itself). In this case, I think the piece presents an example of a photographic technology that produces a beautiful image of visually expanded and otherwise unseen elements of the world, while also drawing attention to the potential issues in, and significance of, the relationship between the lens, the image, and the subject it portrays.

images link

Project to Share

Rapid Recap

While not an artistic project, the end result can have creative implications. BriefCam is a video security system that compiles hours of video from a static vantage point and overlays moving subjects, each tagged with its timestamp, to create a condensed overview of events happening before the camera. In the available demo footage, the result shows people occupying the same space at different times, but within the same frame. This allows the viewer to sift through hours of footage much more quickly, and reveals the motion patterns of those it captures.
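BriefCam’s actual algorithm is proprietary, but the basic idea of collapsing subjects from different times into one frame can be sketched with background subtraction: estimate a static background from the frame stack, then let each pixel of the composite keep the frame value that deviates most from that background. The function below is my own illustrative sketch, not BriefCam’s method.

```python
import numpy as np

def synopsis(frames):
    """Toy video-synopsis composite: per pixel, keep the value (from any
    frame) that deviates most from the median background, so moving
    subjects from different times land in a single frame."""
    stack = np.stack(frames).astype(float)          # (T, H, W)
    background = np.median(stack, axis=0)           # static scene estimate
    deviation = np.abs(stack - background)
    winner = np.argmax(deviation, axis=0)           # which frame "wins" each pixel
    return np.take_along_axis(stack, winner[None], axis=0)[0]
```

Running this over frames of a person walking through a static scene would paint several time-separated copies of them into one image, much like the demo footage.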

The “Rapid Recap” System for Condensing Hours of Security Footage Into Seconds – Link to demo videos

Thinking about re-purposing this surveillance technology for more creative uses is what caught my attention. Viewing duplicates of a person shown over time, transposed onto the same space, can create an eerie visual effect, as well as bring to mind how our environments shape our physical interactions. Specifically, it reminded me of the graphic novel HERE by Richard McGuire, which illustrates events taking place in the same spot over the course of thousands of years. From primordial swamps to a rendered future, people, objects, and animals are transposed on top of one another, cutting through time, offering only glimpses of moments.

Richard McGuire, HERE, 2014


Joiners

This is a piece called The Desk, from a collection of works David Hockney created in the ’80s that he calls “joiners”: collages of many photographs taken from different angles of a single subject or scene. Hockney combines these photographs into a new kind of composite that offers a very different view of its subject than a traditional photograph does.

Hockney made quite a few of these joiners in the 80’s. Some of the later ones get very complex, showing many different perspectives and distances from objects in the scene:

I think joiners are particularly interesting because they take the very familiar medium of photography and produce something totally different in their view of the world. There’s no fancy equipment or techniques required, just a lot of photos and some clever piecing-together.


Project To Share: Quayola & Memo Akten: “Forms”.

For the London Olympics, Quayola and Memo Akten collaborated on Forms, a piece that uses motion capture, animation, code, and Olympic events to create dynamic electronic art. The piece begins with static images from the video, and out of context, they don’t look like much. The forms are then set in motion, and the viewer can clearly track the human body through space. In the original installation, the source footage was displayed on a nearby small screen instead of embedded in the corner of the work. The piece was on display at the National Media Museum in Bradford, England in 2012.

Original Source

Reference

I’m always fascinated by the human body and its capabilities, and this work really showcases that! This piece was so intriguing to me because it abstracts the body within the frame and captures the essence of the athlete’s feat by emphasizing the motion footprint as the body moves through space. I’m not sure if this was intentional on the artists’ part, but I found it interesting that in some of the shots it’s not obvious where the body is or what the action was, while in others it’s much clearer. I’m curious how my viewing experience would change if I were able to observe the piece without the source playback.


Project To Share: Non-Line-of-Sight Camera

A group out of the CMU Robotics Institute, along with a couple of other universities, is working on a project that allows the capture of visual data outside the line of sight. The method works by capturing light reflected or otherwise displaced onto a wall that is within the line of sight, and then using that information to reconstruct an image of the object on the other side of the obstructing surface. It works because an object outside the line of sight still reflects and absorbs light, and that reflection/absorption subtly alters the light bouncing off the wall that the camera can see. (You can read more about it here if you’re interested.)
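The real system uses carefully modeled light transport (and often time-resolved measurements), but the core inversion idea can be sketched as a linear inverse problem: if we assume we know how much light each hidden patch contributes to each visible wall patch (a transport matrix A), the hidden scene can be recovered from the wall image by regularized least squares. Everything below (the sizes, the random A) is an illustrative assumption, not the researchers’ actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed-known toy transport: each visible wall patch receives a weighted
# mix of light from every hidden scene patch.
n_hidden, n_wall = 8, 32
A = rng.uniform(0.0, 1.0, size=(n_wall, n_hidden))

x_true = np.zeros(n_hidden)
x_true[2] = 1.0                                  # one bright hidden patch
y = A @ x_true + rng.normal(0, 1e-3, n_wall)     # noisy wall observation

# Reconstruct the hidden scene with ridge-regularized least squares:
# x_hat = argmin ||A x - y||^2 + lam ||x||^2
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_hidden), A.T @ y)
```

Even from light that never reaches the camera directly, the bright hidden patch is recoverable, which is the sense in which the wall acts as a blurry “mirror” that computation can sharpen.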

I find the project really interesting because it overturns one of our fundamental assumptions about what a camera can do: that it can only capture images of what lies within its line of sight. This has potential implications, for example, in driving, by allowing drivers to see around corners at intersections and avoid potential crashes.


My Admiration — Shining360, Claire Hentschker

Claire Hentschker, a former CMU art student, makes a lot of work with photogrammetry. A couple of years ago she used this method to reconstruct a 360° video version of the hotel from Kubrick’s The Shining. She took depth data from 30 minutes’ worth of frames and stitched them together, creating an interactive exploration of the scenes in the movie (in either video or VR form). The path the user is taken along is the same as that of the camera in the movie, but the user is free to look around in any direction. The black space indicates areas that were never documented in the original footage.

This project is interesting to me because it made me realize that photogrammetry is, in a way, the reversal of photography. Taking a photo flattens kinetic, 3D reality into a static 2D image, while applying photogrammetry to 2D photos projects them back into an interactive 3D space. Together, the two let a user jump back and forth between dimensions. I’m curious what would happen if one ran many cycles of taking screenshots inside a photogrammetry VR environment and passing them back through photogrammetry software, creating yet another VR environment.

Post to Share – Experiments with Security Camera

These images are from a project by Carnegie Mellon student Clelia Knox, who experimented with images captured from a security camera. In the project, Clelia “Experiments with a thermal security camera. Draped an ice cold wet sheet over [their] body to create a blurred effect of [their] body moving below the surface.” I find this project very interesting, but my context on it is very limited. I found it on the artist’s public Instagram, and I am not aware of other contexts in which these images were shown or were meant to be shown. Additionally, I am not sure who is in possession of the security camera, nor how the artist acquired the images. The project interests me because, to me, it seems to comment on the surveillance we are all under, all the time. Moreover, images from security cameras are often treated as evidence of truth; this project reveals the subjectivity of the images these devices capture.

Non-Euclidean Renderer

I was considering whether rendering engines are a form of photography. An ordinary 3D renderer simulates our own visual perception of real, physical space. If photography documents real light and real space, how does the staging of a set with models, props, and light sources expand the definition? Both 3D digital artists and photographers work by manipulating arrangements of color values to form specific representations; the difference is that photographers record physical light, while 3D digital artists don’t need physical light to form their arrangements (despite looking at an illuminated monitor the entire time).

Either way, I found a project by Charles Lohr in which he implemented a unique 3D renderer able to simulate non-Euclidean space. In Euclidean physical space (and most 3D renderers), straight lines remain consistently straight, and the volume a space occupies has perceivable consistency, not changing in relation to the space around it. In non-Euclidean spaces, it’s possible for straight lines to behave inconsistently: lines that are parallel across some regions may not be parallel across others. Lohr’s engine allows volumes of space to be “stretched” to varying dimensions. Examples in his demo include a tunnel that appears longer from the outside than it does while traveling through its center, and houses that appear small from the outside but very large once you enter them. The engine uses ray tracing to generate images of how these theoretical spaces would look to our eyes.
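The “tunnel that is longer outside than inside” effect can be sketched in one dimension: give space a position-dependent scale factor and march a ray through it, accumulating perceived distance as the integral of that factor. The numbers and the `scale` field below are my own toy assumptions, not anything from Lohr’s engine, which does full GPU ray tracing in 3D.

```python
import numpy as np

def scale(x):
    # Metric scale factor: a hypothetical "compressed" tunnel occupies x in [2, 4].
    # Inside it, each unit of coordinate distance counts as only half a unit of
    # travel, so the tunnel is shorter to pass through than it looks from outside.
    return 0.5 if 2.0 <= x <= 4.0 else 1.0

def traversed_distance(x0, x1, steps=10_000):
    # Perceived travel distance = integral of the scale factor along the ray,
    # approximated with a midpoint Riemann sum.
    dx = (x1 - x0) / steps
    xs = x0 + (np.arange(steps) + 0.5) * dx
    return sum(scale(x) for x in xs) * dx

exterior_length = 4.0 - 2.0                     # what an outside observer measures
interior_length = traversed_distance(2.0, 4.0)  # what a traveler experiences
```

Flipping the scale factor above 1.0 would give the opposite illusion, i.e. the houses that are bigger on the inside; a renderer like Lohr’s effectively does this per ray, per pixel.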

Non Euclidean GPU Ray Tracing Test

This project broke down some of my subconscious assumptions about 3D rendering. Ordinarily, 3D engines are designed to simulate physical stages as accurately as possible; it’s assumed that artists want to simulate or imitate perspective photography. I had never really considered, for example, how the perceived straightness of a line is an illusion resulting from how our space is composed, or how things that seem universally constant are only the result of specific circumstances. A 3D digital artist can directly control every pixel and create fantastical settings, yet it’s rare for this art to stray from the low-level consistencies of perspective photography.

Project To Share – Light Field Photography with a Hand-held Plenoptic Camera

https://graphics.stanford.edu/papers/lfcamera/lfcamera-150dpi.pdf

The above paper was one of the founding papers in micro-lens camera technology. Researchers at Stanford wanted to create a system that replicated bug eyes, and ended up using an array of micro-lenses, each of which captures a small patch of the image. These small patches overlap with their neighbors, and can then be used in post-processing to exhibit parallax in the image. Having multiple micro-lenses also allows for depth detection of objects (since the amount of parallax movement corresponds to depth), a technology Google currently employs: rather than using multiple cameras as iPhones do, single-lens Google Pixel devices have a micro-lens array behind the camera that is used for depth detection and background blurring.

This research is interesting because it provides a more hardware-efficient way of capturing depth and parallax without the need for multiple cameras. It also lets photographers re-adjust the focal point of an image in post-processing: the camera captures more data than a conventional camera does, and only decides which regions of each micro-lens image to sample when reconstructing a new image. Thus, it uses a large sample of data to create a conventional photo.
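The refocusing step can be sketched as shift-and-add over the sub-aperture views: shift each view in proportion to its position in the aperture, then average, which brings one depth plane into focus and blurs the rest. This is a simplified 1D stand-in for the paper’s actual integration over the light field; the function and variable names are my own.

```python
import numpy as np

def refocus(views, slope):
    """Shift-and-add synthetic refocusing over 1D sub-aperture views.
    views: dict mapping aperture offset u -> 1D image array.
    slope: pixels of shift per unit of aperture offset; choosing
    slope = -disparity of a depth plane brings that plane into focus."""
    acc = np.zeros_like(next(iter(views.values())), dtype=float)
    for u, img in views.items():
        acc += np.roll(img, int(round(slope * u)))  # align this view's parallax
    return acc / len(views)
```

A point at the in-focus depth lines up across all views and sums to a sharp peak; points at other depths land in different places per view and smear out, which is exactly the post-capture focus control described above.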


The Capture of Gesture: How We Act Together by Lauren Lee McCarthy and Kyle McDonald

McCarthy and McDonald explore the meaning of small gestures in How We Act Together, an art installation built upon machine-learning algorithms that track body movements. In the piece, participants are commanded to perform and repeat specific gestures, such as screaming or nodding, to the point of exhaustion. As long as participants keep performing the gesture, they can view previous performers of it onscreen. If they repeat the gesture longer than all previous participants, their own recording is added. Thus, the work is an accumulation of those who endured these performances the longest. The work was on display at the Schirn Kunsthalle in Frankfurt, Germany in 2016, but also allowed remote entries through a website.

The hyperfocus on seemingly inconsequential gestures allows us to consider them in a new, awkward way. It’s like the digital-art-installation version of fumblecore, or like kerning a word in Illustrator for so long that it loses its meaning. In this piece, the small movements we take for granted are exposed and made absurd. In this new context of absurdity and exhaustion, new encounters with these gestures – their associated emotions and meanings – become possible. The piece, then, forces us to appreciate the semantics of our “cultural body” experientially. It helps us notice our bodies in a way that reminds me of why I’m interested in meditation and yoga: what are these relationships with our bodies?