Bubble Faces

Photographing bubbles with a polarization camera reveals details we can't see with our bare eyes, including strong abstract patterns, many of which look like faces.

I wanted to know what bubbles looked like when photographed with a polarization camera. How do they scatter polarized light? I became interested in this project after realizing that the polarization camera was a thing. I wanted to see how the world looked when viewed simply through the polarization of light. The idea to photograph bubbles with the camera came out of something I think I misunderstood while digging around on the internet. For some reason I was under the impression that soap bubbles specifically do weird things with polarized light, which is, in fact, incorrect (it turns out they do interesting things, but not crazy unusual things).

To dig into this question, I took photographs of bubbles under different lighting conditions with a polarization camera, varying my setup until I found something with interesting results. As I captured images, I played around with two variables: the polarization of the light shining on the bubbles (no polarization, linearly polarized, circularly polarized), and the direction the light was pointing (light right next to the camera, or light to the left of the camera shining perpendicular to the camera's line of sight).

I found that placing the light next to the camera with a circular polarization filter produced the cleanest results, since putting the light perpendicular to the camera resulted in way too much variation in the backdrop, which made a ton of visual noise. The linear polarization filter washed the image out a little bit, and unpolarized light again made the background a bit noisy (though not as noisy as with the light placed perpendicular to the camera).

The photo setup I ended up using: the polarization camera, the light with a circular polarization filter, and my bubble juice.

My bubble juice (made from dish soap, water, and some bubble stuff I bought online that’s pretty much just glycerin and baking powder)

I recorded a screen capture of the capture demo software running on my computer (I didn't have enough time to actually play with the camera's SDK). I viewed the camera output through a four-plane image showing what each subpixel of the camera was capturing (90, 0, 45, and 135 degrees of polarization).

An image of my arm I captured from the screen recording.
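For context on what that four-plane view represents: polarization cameras like this one tile the sensor with a repeating 2×2 grid of micro-polarizers, and the demo just shows each of those subpixel channels as its own quarter-resolution plane. Here's a rough sketch of that split in Python (not something I actually ran, and the channel ordering inside the 2×2 block is an assumption that depends on the specific sensor):

```python
# Rough sketch: split a raw polarization mosaic into four quarter-resolution
# planes. The 2x2 ordering used here (90/45 over 135/0) is an assumption and
# varies by sensor.
import numpy as np

def split_planes(raw):
    """Return the 0, 45, 90, and 135 degree planes from a raw sensor frame."""
    i90  = raw[0::2, 0::2]
    i45  = raw[0::2, 1::2]
    i135 = raw[1::2, 0::2]
    i0   = raw[1::2, 1::2]
    return i0, i45, i90, i135
```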

I grabbed stills from that recording and then ran them through a Processing script I wrote that cut out each of the four planes and used them to generate an angle of polarization image (a black and white image mapped to the direction the polarized light is pointing), a degree of polarization image (a black and white image showing how polarized the light is at any point), and a combined image (using the angle of polarization for the hue, and mapping the saturation and brightness to the degree of polarization).

Degree of polarization image of my arm

Angle of polarization image of my arm

Combined image of my arm

It ended up being a little more challenging than I had anticipated to edit the images I had collected. To do this properly, I should have captured the footage with the SDK instead of screen recording the demo software, because I ended up with incredibly low resolution results. Also, I think the math I used to get the degree and angle of polarization was a little off, because the images I produced looked pretty different from what I could see when I viewed the same conditions through the demo software's degree and angle presets (I captured the four-channel raw data instead because it gave me the most freedom to construct different representations after the fact).
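For reference, here's roughly what that math should look like, sketched in Python/numpy rather than the Processing I actually used. It assumes the four cropped planes are already loaded as same-sized float arrays (the names i0, i45, i90, and i135 are just placeholders):

```python
# Minimal sketch of the standard degree/angle of polarization math from the four
# analyzer channels. Not my actual Processing script.
import numpy as np
from matplotlib.colors import hsv_to_rgb

def polarization_images(i0, i45, i90, i135, eps=1e-6):
    # Linear Stokes parameters from the four analyzer orientations
    s0 = (i0 + i45 + i90 + i135) / 2.0   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135

    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)   # 0 = unpolarized, 1 = fully polarized
    aolp = 0.5 * np.arctan2(s2, s1)              # angle of polarization, in [-pi/2, pi/2]

    # Combined image: angle -> hue, degree -> saturation and brightness
    hue = (aolp + np.pi / 2) / np.pi
    hsv = np.stack([hue, np.clip(dolp, 0, 1), np.clip(dolp, 0, 1)], axis=-1)
    return dolp, aolp, hsv_to_rgb(hsv)
```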

While I got some interesting results (I wasn't at all expecting to see the strange outline patterns in the degree of polarization (DoLP) shots of the bubbles), the results were not as interesting or engaging as they maybe could have been. I think I was primarily limited by the amount of time I had to dedicate to this project. With more time, I would have explored further through additional photographing sessions, experimenting with even more variations on lighting, and, most importantly, actually making use of the SDK to get the data as accurately as possible (I imagine there was a significant amount of data loss as images were compressed/resized going from the camera to screen output to recorded video to still image to the output of a Processing script, all of which could have been avoided by doing everything in a single script built on the camera SDK).

Project proposal

I’m blowing big bubbles and capturing them with the polarization camera.

Bubbles do interesting things with light because the thickness of the bubble film is comparable to the wavelengths of visible light (hence the iridescence of bubbles). While bubbles don't necessarily do anything to the polarization of the light, because they interfere with the light passing through them, I might get interesting effects by shining polarized light through them. I'm hoping that, by shining polarized light at the bubbles, I'll be able to get interesting images with the polarization camera.

^ that’s what bubbles do to light

 

^ and this is what that looks like (these are visible colors, not any polarity magic)
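For a rough sense of the numbers, here's a tiny back-of-the-envelope sketch (Python, with made-up values) of the thin-film interference behind that iridescence: at normal incidence, reflection off a film of thickness d and refractive index n is strongest at wavelengths where 2·n·d = (m + ½)·λ, the extra half coming from the phase flip at the front surface.

```python
# Back-of-the-envelope thin-film interference for a soap film at normal incidence.
# The thickness below is illustrative, not a measurement.
n = 1.33      # refractive index of soapy water
d = 500e-9    # film thickness in meters (~500 nm)

for m in range(1, 6):
    wavelength = 2 * n * d / (m + 0.5)
    if 380e-9 <= wavelength <= 750e-9:
        print(f"constructive reflection at ~{wavelength * 1e9:.0f} nm (m = {m})")
```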

I’m going to blow big bubbles in the photo studio in Margaret Morrison, lit with polarized light, and then I’m going to photograph the bubbles with the polarization camera. Hopefully variations in shape and size (because big bubbles can be weird lumpy shapes) will provide some differentiation between the different images.

Response: Nonhuman Photography

One particular example of nonhuman photography I have experienced is the cameras that people strap to trees in their back yards, which take photos when a motion sensor is tripped. The idea is that people can get photos of animals in their back yards without having to physically be there to take the photo. I remember having one on a tree in the woods behind my house when I was a kid, but I honestly don't remember getting any photos from it, despite there being a ton of deer and other animals in the area where I lived. In theory, though, these cameras allow people to capture images that they couldn't capture if they were present. A deer or a coyote might be scared off by the presence of a person, so a candid image of that animal might not be possible for a human photographer without a lot of familiarity photographing wildlife.

The use of algorithms, computers, and networks in modern nonhuman photography intensifies the entanglement between the human and nonhuman. It raises questions about intent and control, about how much control a photographer exerts over their camera (I'm using these terms loosely) when the taking of pictures is defined by a script rather than simply the pressing of a shutter. I would argue that perhaps the photographer is exerting more control when the taking of the photo is scripted. The photography is precise and consistent, something that is incredibly hard, if not impossible, for someone to do by hand.

SEM – Fabric

I put three fabrics in the SEM yesterday: neoprene, felt, and some synthetic woven fabric (I don't remember what it was). The neoprene ended up being the most interesting for me to look at, so that's what I took the majority of my images of.

The woven fabric

Image with woven fabric on top and the felt on the bottom.

Image with the woven fabric on the bottom left and the neoprene. You can see the two surface layers of the neoprene as well as the middle layer.

The top knit layer of the neoprene

Close-up of the cut edge of the neoprene.

 

Close-up side view of the neoprene, but in stereo 🙂

The same stereo image as above, but as a gif.

Photography and Observation Response

The methods employed to capture a typology, when applied to a newly defined typology, can define how that group is perceived. For example, many of the telescopes used to capture images of planets within our solar system capture images in wavelengths outside of the visible spectrum and translate those into visible light. Because of this, we often imagine planets in colors that don't match what they'd look like if we were actually looking at them with our eyes.
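As a toy illustration of that kind of false-color mapping (not any particular telescope's actual pipeline), you could take three bands captured outside the visible spectrum and simply assign them to the red, green, and blue channels of an output image; the band names below are made up:

```python
# Toy false-color composite: three non-visible bands mapped straight onto RGB,
# each normalized independently. Purely illustrative.
import numpy as np

def false_color(long_band, mid_band, short_band):
    def norm(b):
        b = b.astype(float)
        return (b - b.min()) / (b.max() - b.min() + 1e-6)
    return np.stack([norm(long_band), norm(mid_band), norm(short_band)], axis=-1)
```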

Capture techniques can always be used to demonstrate a particular thing or to make a certain point, and because of this, anything captured can be subjective. In fact, I would argue that it is incredibly challenging to produce an objective means of capturing something. Because the capture of something is shaded by our interest in capturing a certain aspect of that object or event, it is extremely rare that a captured image represents the totality of the object, making it very unlikely for that image to be objective.

If well-standardized, a capture under the right conditions can be incredibly predictable. One could potentially capture the same aspect of an entire typology, which could then demonstrate whatever observation the individual is trying to show. Because of this predictability, capture is an essential tool in scientific research. While it might not be entirely objective, it is often necessary for a subjective representation to exist to highlight a deeper, objective truth.

Response: Christian Marclay's The Clock

In addition to reading about Marmalade Type, Tom Sachs' Tea Ceremony, Water Pendulum, and Rapid Recap, I chose to respond to Philippe's post on Christian Marclay's The Clock.

Being able to compile a full 24-hour clock from movie clips demonstrates how important time, and checking clocks, is to us and to progressing the plot of a film. It's interesting because the same does not go for equally ubiquitous (if not more ubiquitous) tools in our culture such as smartphones, which seem to make incredibly rare appearances in most contemporary, mainstream film, especially when compared to the portion of our daily time that is sunk into using them. Obviously the smartphone is a lot newer than the clock, so only films released within the past decade or so would feature smartphones at all, but I'd be willing to bet clocks still play a larger role in film than smartphones do. The juxtaposition here shows how clocks have cemented themselves as an experience of information that warrants capturing. They can show something necessary that the audience is also interested in. The vast majority of phone usage (emails, social media updates, texts, etc.), despite maybe making a film more realistic, does nothing to progress a plot and is of little interest to the audience despite its obvious relatability.

Reading 01

We often think of cameras as devices that are commanded by the photographer, but they've been slowly moving further and further away from that definition. Most, if not all, of the students in the class grew up around cameras that had auto-focus and auto-aperture/exposure capabilities. But now that cameras can press the shutter (or the digital equivalent) on their own, we begin to question the role of authorship over the photograph. At this point, I don't think we need to be asking that question yet. There is still an authorship role present in the use of these cameras: they still need to be placed somewhere, and the resulting photographs need to be curated. While the actual photo taking is mindless for the photographer, there is still a substantial amount of agency and consideration present in the act of setting up the camera and selecting the resulting photographs. Because of this, and I think this can also be extended to most automated art, the user of a Clip camera or any other camera that takes its own photos still has a role of authorship over the resulting art. That role may be different than it was in the past, but it has yet to vanish.

Project To Share: Non-Line-of-Sight Camera

A group out of the CMU Robotics Institute and a couple of other universities is working on a project that allows the capture of visual data that is outside of the line of sight. The method works by capturing light reflected or otherwise scattered off a wall that is within the line of sight, and then using that information to reconstruct an image of the object on the other side of the obstructing surface. The method works because something outside of the line of sight will still be reflecting/absorbing light, and that reflection/absorption will affect what light is absorbed/reflected from the wall within the line of sight. (You can read more about it here if you're interested.)
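As a very loose sketch of the general idea (a toy back-projection for a confocal setup where the laser and sensor aim at the same wall point, not the group's actual algorithm), each time-resolved sample measured at a wall point gets smeared back over all the hidden-scene voxels at the right round-trip distance:

```python
# Toy NLOS back-projection, ignoring the filtering and calibration a real system needs.
# transients[i, t] is the photon count at wall point i and time bin t.
import numpy as np

C = 3e8  # speed of light, m/s

def backproject(transients, wall_points, voxels, bin_width):
    """Accumulate each transient sample into the hidden-scene voxels it could have come from."""
    volume = np.zeros(len(voxels))
    for i, w in enumerate(wall_points):
        # round-trip distance from this wall point to every candidate voxel and back
        dist = 2 * np.linalg.norm(voxels - w, axis=1)
        bins = np.round(dist / C / bin_width).astype(int)
        valid = bins < transients.shape[1]
        volume[valid] += transients[i, bins[valid]]
    return volume
```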

I find the project really interesting because it challenges one of our fundamental assumptions about what a camera can do: that it can only capture images of things within its line of sight. This has potential implications, for example, in driving, by allowing drivers to see around corners at intersections and avoid potential crashes.