Fan-tastic Fan Cam & Fan Slit Scans

I’ve been interested in using my ceiling fan as a motion guide for my phone camera. I started by mounting my phone to the blades of the fan and capturing a slow-motion video.

I’ve started to play with building rigs off of the fan:

In addition to the simple rigging systems, I explored using the fan for slit scanning, with some intriguing results. To see the original, non-compressed versions of these, click here.
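The core of a slit scan is simple: sample a single pixel column from every video frame and stack the columns side by side, so one image axis becomes time. Here’s a minimal sketch of that compositing step in Python with OpenCV; this is an illustration rather than my exact pipeline, and the file names are hypothetical.

```python
# Minimal slit-scan compositor: take one pixel column from each frame
# of a video and lay the columns side by side, so x becomes time.
import cv2
import numpy as np

def slit_scan(video_path, slit_x=None, out_path="slitscan.png"):
    cap = cv2.VideoCapture(video_path)
    columns = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        x = frame.shape[1] // 2 if slit_x is None else slit_x  # default: center slit
        columns.append(frame[:, x:x + 1].copy())  # one-pixel-wide column
    cap.release()
    if columns:
        cv2.imwrite(out_path, np.hstack(columns))

slit_scan("fan_clip.mp4")  # hypothetical clip shot from the fan rig
```

With the camera spinning on the fan, the sampled column sweeps the room, which is what produces the smeared geometry in the scans above.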

 

Some photos of the slit-scan setup:

People in Time Project Ideas

Here are a few ideas I’m mulling over for the project. For this project I’m really interested in exploring the unique dynamics and kinematics of the human body. Though there are some interesting ideas listed below, I don’t feel that my final framing is contained within them.

  1. Walking to a Beat*: A video processing algorithm that generates skeletons for walking people, measures the frequency of the gait, and picks a song whose beat matches that frequency. Songs would be presorted by tempo (a sketch of the matching step appears after this list).
    1. https://www.youtube.com/watch?v=6Ron-Ikenfc : films often put music over someone walking or doing some other mundane task to give it a sense of being intense, important, or intentional; this scene from Spider-Man shows what happens when that audio is removed. What would happen if we created a system where that audio was added back in?
    2. https://github.com/aubio/aubio
  2. Movie Path*: An algorithm that processes scenes from movies and uses photogrammetry combined with skeleton tracking to figure out the exact orientation and path of a camera relative to someone in the scene. Use that information to generate a path for the robot arm and film someone to add into the original scene.
    1. Which films?
    2. How contemporary would the films be? — There would definitely be a bias towards contemporary film.
    3. Scale? Films use giant booms to hold and guide cameras – would the UR5 robot arm be able to achieve the necessary motion for this project?
      1. Maybe we could put a squirrel into the scenes?
    4. Similar Idea: Use the algorithm to look at individual movie scenes and encode camera position relative to the floor and the focal plane — sort all scenes this way. Make this searchable by camera position. Use this to find movie scenes that could accept characters from other videos — put squirrels into films? 
  3. On-Body Camera Rigs: There are a lot of camera rigs that stabilize the jitter and motion inherent in handheld filming. Could I create a system that does the opposite, one that moves and responds to human input? The goal here would not be to create a jittery, noisy video, but rather a system that dances, moves, and rotates in response to some human input.
    1. Maybe a slow-motion camera would be good?
    2. Could a gimbal system be repurposed to do this?
    3. https://www.videomaker.com/buyers-guide/camera-rig-buyers-guide
    4. https://www.adorama.com/cdagcjib.html?gclid=EAIaIQobChMIkfjP2_WB6AIVlozICh36ZgzDEAQYAiABEgJjV_D_BwE&utm_source=adl-gbase
  4. Long-Exposure People Pixel Averaging: capture the trails of people by making an algorithm that averages pixels over time, weighting the pixels where people have been (a pixel has both a position in the grid and a position in time). See the averaging sketch after this list.
  5. People-Responsive Systems*: set up a set of highly responsive systems that will rotate at the slightest movement of air. In a populated place, set up a video camera to capture the rotations and frame the entire scene through these responsive systems.
    1. Also: put responsive things into the air that people could move, like smoke, micro-bubbles, or dust.
  6. Dust Cam: Create a camera setup in front of a light source at just the right angle so that it picks up the microscopic dust particles in the air.
    1. As people walk past, the dust particles would get disturbed in a distinct way.
  7. Novel shots:
    1. Building cutaway, like the hallway scene in Oldboy
      1. Creating a very flat, side-scroller way of interacting with this
      2. Could an algorithm take, say, 3 wide-angle shots and stitch together a super-flat image? (A toy remap example appears below, after the Oldboy still.)
      3. http://www.cs.technion.ac.il/~ron/PAPERS/ieee_mds1.pdf
      4. https://mathematica.stackexchange.com/questions/34264/how-to-remap-a-fisheye-image
      5. https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html?highlight=remap
      6. Maybe video photogrammetry (where a set of frames for a photogrammetric model is generated at each point in time)
    2. Other shots that might make people look a little funky/be fun:
      1. Fisheye lens
      2. Parallelized imagery with hyperbolic mirrors (see below)
      3. Schlieren optics
        1. https://en.wikipedia.org/wiki/Schlieren_photography
        2. https://pdfs.semanticscholar.org/3267/9a2ab1d35774a4b859323e5d7548efb45660.pdf
    3. Side-Scrolling Everywhere* (connected to the novel shots idea above):
      1. Use a series of cameras and a flattening/warping algorithm to create a super-flat scene that looks like a side-scrolling game
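To make the tempo-matching step of idea 1 concrete, here’s a rough sketch using the aubio library linked above: estimate each song’s BPM once, then pick the track closest to a measured gait frequency. The skeleton tracking and gait measurement are assumed to happen elsewhere, and the file names are hypothetical.

```python
# Sketch for idea 1: presort songs by tempo with aubio, then pick the
# track whose BPM best matches a gait frequency (in steps per minute).
import aubio

def song_bpm(path, win_s=1024, hop_s=512):
    src = aubio.source(path, samplerate=0, hop_size=hop_s)
    tempo = aubio.tempo("default", win_s, hop_s, src.samplerate)
    while True:
        samples, read = src()      # read one hop of audio
        tempo(samples)             # feed the beat tracker
        if read < hop_s:           # end of file
            break
    return tempo.get_bpm()

def pick_song(gait_steps_per_min, library):
    # library: a list of audio file paths
    bpms = [(song_bpm(p), p) for p in library]
    return min(bpms, key=lambda bp: abs(bp[0] - gait_steps_per_min))

print(pick_song(112.0, ["song_a.wav", "song_b.wav"]))
```

In practice you’d compute the BPMs once, store them, and also consider half/double-tempo matches, since a 56-step-per-minute amble fits a 112 BPM song.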
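Idea 4 can also be sketched quickly: keep a weighted running average of the frames, where pixels flagged as motion (people) get extra weight so their trails persist in the final long exposure. The weighting factor and the background-subtractor choice here are assumptions, not a worked-out design.

```python
# Sketch for idea 4: long-exposure average that up-weights pixels where
# motion (i.e., people) was detected, leaving ghostly trails.
import cv2
import numpy as np

cap = cv2.VideoCapture("crowd.mp4")        # hypothetical input clip
bg = cv2.createBackgroundSubtractorMOG2()  # flags moving pixels
acc, wsum = None, None                     # weighted sum, weight sum

while True:
    ok, frame = cap.read()
    if not ok:
        break
    f = frame.astype(np.float64)
    motion = bg.apply(frame).astype(np.float64) / 255.0  # ~1 where people move
    w = (1.0 + 9.0 * motion)[:, :, None]   # motion pixels weigh up to 10x
    acc = f * w if acc is None else acc + f * w
    wsum = w.copy() if wsum is None else wsum + w

cap.release()
if acc is not None:
    cv2.imwrite("trails.png", (acc / wsum).astype(np.uint8))
```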

Hallway scene from Oldboy
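Flattening or de-fishing shots like this ultimately comes down to a per-pixel resampling, which is what the cv2.remap call in the OpenCV docs linked above performs. Below is a toy example: the radial coefficient is made up, and a real pass would derive the maps from the actual lens model or scene geometry.

```python
# Toy cv2.remap example: build per-pixel (x, y) lookup maps and resample
# the image through them. A real flattening pass would compute the maps
# from calibration data rather than this made-up radial model.
import cv2
import numpy as np

img = cv2.imread("hallway.jpg")            # hypothetical input frame
h, w = img.shape[:2]
ys, xs = np.indices((h, w), dtype=np.float32)

cx, cy = w / 2.0, h / 2.0
r = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2) / max(cx, cy)  # normalized radius
scale = 1.0 + 0.3 * r ** 2                 # 0.3 is an illustrative coefficient
map_x = cx + (xs - cx) * scale             # sample farther out near the edges
map_y = cy + (ys - cy) * scale

flat = cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                 borderMode=cv2.BORDER_CONSTANT)
cv2.imwrite("hallway_flat.png", flat)
```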

 

Robo-grammetry of My Worn-Out Stuff

Project Statement

In this project I explored the emergent textures of my worn-out stuff – an overused toothbrush, a pill bottle that’s been in my bag for way too long, a charging cord that’s slowly busting out of its casing. I explored the textures of these things through a robot-controlled photogrammetric system (robo-grammetry).

Project Motivations

I began this project with the following project statement:

“I interact with a myriad of objects in a single day; oftentimes these are things I carry with me at all times – such as my pencil, sketchbook, or a bone folder – what patterns might emerge if I sample each of these objects and put them on display?”

As I worked on this project my goals for and understanding of it developed. This project has become an exploration of the unique signature of wear and degradation I leave on some of my things.

Building a Robo-grammetric System:

This project emerged from my learning goal of gaining familiarity with the Studio for Creative Inquiry’s Universal Robots arm. I was able to make the robot step through a spherical path, staying oriented on one point in a working frame, while an Arduino-controlled shutter release triggered a camera in sync with the robot’s motion. The robot script was laid out so I could calibrate the working frame with the plane of my cutting mat. The goal of all this was to create highly standardized photogrammetric compositions.

My capture machine was a robotically guided camera that snapped 144 distinct photos of a sampled object while stepping through a spherical path.

An Arduino was used to open the shutter of the camera every 3 seconds, while the robot moved to a new point every 3 seconds. The two processes were synced at the beginning of every capture.
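This is not the actual UR5 script, but a sketch of the capture geometry it implemented: 144 viewpoints on a spherical cap, each aimed at one calibrated target point on the cutting mat, with one move (and one shutter fire) every 3 seconds. The radius, elevation range, and ring counts below are illustrative.

```python
# Sketch of the capture path: 8 elevation rings x 18 azimuth steps = 144
# viewpoints on a sphere around a target point, visited every 3 seconds.
import math

TARGET = (0.0, 0.0, 0.0)   # calibrated point on the cutting mat (meters)
RADIUS = 0.35              # camera-to-target distance (assumed)
N_RINGS, N_STEPS = 8, 18   # 8 * 18 = 144 photos per capture
STEP_SECONDS = 3.0         # matches the Arduino shutter cadence

def viewpoints():
    for i in range(N_RINGS):
        elev = math.radians(15 + 60 * i / (N_RINGS - 1))  # 15..75 deg above mat
        for j in range(N_STEPS):
            azim = 2 * math.pi * j / N_STEPS
            x = TARGET[0] + RADIUS * math.cos(elev) * math.cos(azim)
            y = TARGET[1] + RADIUS * math.cos(elev) * math.sin(azim)
            z = TARGET[2] + RADIUS * math.sin(elev)
            yield (x, y, z)  # at each point the camera axis aims at TARGET

for k, p in enumerate(viewpoints()):
    print(f"t={k * STEP_SECONDS:6.1f}s  move to ({p[0]:.3f}, {p[1]:.3f}, {p[2]:.3f})")
```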

Why Photogrammetry + Robot Arm?

The goal of this project was to capture the texture of my everyday objects and to see what might emerge from that capture. Somewhere along the line my focus moved to the identifiable aspects of wear and degradation of the objects I interact with. Photogrammetry allowed me to capture these textures in a unique and dimensional way, and the robot arm allowed for extreme standardization of capture.

The Models

The full library of models can be accessed here.

Project Evaluation

With respect to my technical and learning goals this project was a success.

In terms of content and articulation this project feels unfinished – I spent much of this project focusing on the system of capture, and only in the last week have I been able to see the output of this machine. This system has the potential to capture the distinct degradation of people’s things – I believe a larger set of models would highlight greater implications about how one considers and interacts with their stuff, and the typology would be more meaningful for it.

This project was difficult because, at moments, it felt like a self-portrait of my neurotic tendencies: my typology machine on one side, and my neglected, worn-down objects, the captured subjects, on the other.

Project Inspirations

As stated in the project proposal, this project was inspired by a sequence of photos in the film 20th Century Women. The photos are a typology of one character’s things: lipstick, a bra, her camera, etc. Through the composition and curation of objects a unique portrait emerges. My goal in this project was to use my objects to create a self-portrait.

In addition, this project’s commitment to standardization is connected to the use of cameras and capture in the scientific process.

Typology of Textures on Everyday Objects through Robot Controlled Photogrammetric Macro Photography

Concept Statement

The theme of my typology is Texture on My Everyday Objects, seen through a high-quality macro lens and rendered into three dimensions using photogrammetry. I interact with a myriad of objects in a single day; oftentimes these are things I carry with me at all times – such as my pencil, sketchbook, or a bone folder – what patterns might emerge if I sample each of these objects and put them on display?

 

Inspiration

Stills from 20th Century Women

In Mike Mills’s film 20th Century Women there’s a scene in which Abbie, played by Greta Gerwig, explains her most recent photography project to her housemate. The project is a typology of her things: lipstick, a bra, her camera, etc. In the film we see quick cuts between the images, which are composed and lit in the same way and vary only in content. In some ways the objects are mundane, but through the composition and curation an intrigue is given to them. I’m hoping to explore a similar vein in my typology.

Link to an article on Mike Mills’s use of objects as an inspiration for his film:

https://www.thecut.com/2017/01/mike-mills-20th-century-women-artifacts.html

Horse in Motion

From its inception, photography has been connected with the world of science and structured observation. The typology of The Horse in Motion exemplifies this use case: a horse was photographed in profile as it galloped, and the frames were laid out on a grid to allow people to see the true nature of the horse’s gait.

 

Implementation

My goal is to create a super-standardized photographic composition by building a highly controlled system for holding and imaging samples. I will construct a small platform to hold my samples and calibrate the robotic arm to take an image sweep across the plane of the mounting plate. My final setup will be a structure similar to a microscope, which often has fixturing for samples on a stable bed. I will use the robot arm in tandem with the Arduino-controllable Blackmagic camera to produce high-quality video of the surface of the object, which can then be turned into a 3D model by photogrammetry software.

 

Curation

My goal is for the typology to be viewable in the Looking Glass – maybe built into a box that the viewer can hold? By turning a knob the viewer will be able to cycle through the models displayed.

 

Tools

  • Studio Robot arm – running simple linear trajectories calibrated with respect to the sample platform
  • Blackmagic Film Camera + Arduino
  • Integrated lighting & object fixturing system – a platform for ‘samples’ with some kind of integrated lighting system – so that lighting doesn’t change between many samples

Postphotography

I had a hard time deciphering the writer’s definition of the term “non-human” photography. Is “non-human” referring to the subject matter? Does “non-human” have something to do with the mode of capture? I wasn’t sure.

From a very simple standpoint, a “non-human” thing is something that is not human – the photographic equipment, the computers for processing the captured data, the helicopter carrying the camera equipment, etc. The text implies that as technology advances, the process of imaging becomes more dominated by these “non-human” elements; in some ways contemporary photography is less human. But photography has always been defined by the image creator’s relationship to some technology. The writer suggests that the new technologies for capture are intensifying and changing the photographer’s relationship to that technology.

The most compelling implication of the text was the idea that photographers have always been inventors; in many ways experimental capture is not a new pursuit, rather the world of experimental capture is just expanding as we’re presented with new technologies.

Photography & Observation Response

The reading discusses the use of photography for both measurement and information gathering, as well as representation and articulation. The text points to how photography was used in an effort to observe the transits of Venus, and it uses this example to illustrate how difficult it was to use photography in astronomical observation. In the case of the transits of Venus, the variations in the photographic plates made the comparison and controlled measurement of the data very difficult. This exemplifies the relationship between capture and the subject captured: in the process of capturing something, the collection tool leaves a signature on the stored images. Though technology has changed drastically since people were working to capture the transit of Venus, our devices for capture still leave an imprint on the images that we create. Technology has empowered us with greater control over the process of capture, yet the traces of our craft and of our tools are always left in the scenes we store.

With reference to a typology, the reading implies that our typological machine leaves an imprint on the multiples it captures. With contemporary photography we can achieve predictable and precise results, but it’s a stretch to say that the images we make are objective, because we make so many choices in the process of capture and in the curation of multiples.

Response to ‘Olafur Eliasson, Water Pendulum (2010)’

As Marianne pointed out in her response to Olafur Eliasson’s work ‘Water Pendulum,’ the piece uses a strobe to suspend a chaotic system in separate moments. The strobe light allows the artist to stop the piece, for an instant, in time. In essence the artist can change the frame rate at which we observe the system, so that the intermediate frames, where the system travels from one state to another, are cut away. We are left with a jittery lightning bolt hanging for only an instant in space. This piece offers a unique perspective on the change of a system in time.

 

Additional Post reviewed:

Photogrammetry – Watch or Egg?

For the photogrammetry workshop I took 6 photos of my broken watch to see how the software would reconstruct a model from so few images. Though the model resembles an egg or some other amorphous object, it is still a reasonable 3D model of the general attributes contained in the scene. I’ve never used these kinds of techniques before, but it’s exciting to think about the possibilities for robotic control and navigation: all you would need is a moving camera to generate a general model of space good enough for navigation.

Response: The Camera, Transformed by Machine Vision

This article highlights a number of examples where a user is acting as a trainer for a computer that then acts as an image capture or generation device. The article seems to imply that in the future of image capture we may cede control and authorship to our computational tools.

 

The article begins by presenting common icons for, and 128,706 sketches of, the camera, each of which tends to look like a classic point-and-shoot camera. The article begins this way so that, in its larger discussion of contemporary imaging, it can show that the concept of the camera, or the things we take images with, is shifting toward devices with more agency and intelligence of their own.

 

The key subjects the article traverses are YOLO real-time object detection, Google Clips, Pinterest visual search, and machine learning tools for image generation. What becomes evident is that the key paradigm shift is about computers’ ability to interpret and modify the images produced by cameras. In the case of object detection, Google Clips and Pinterest use algorithms to understand the contents of an image; the computer generates a different result depending on the contents of the images it sees. The end of the article is where some more troubling subjects are explored. The article highlights how machine learning algorithms can generate photorealistic images. This computational power not only has the possibility of altering our understanding of cameras and photo equipment, but could also begin to drastically alter our understanding of the truth of images.

Hans Haacke – Environmental & Kinetic Sculptures

I was recently at the New Museum in New York City. I had been drawn there by the work of Hans Haacke, specifically his kinetic and environmental art. The museum had devoted one large space to the pieces in question: the room was filled with undulating fabric, a balloon suspended in an airstream, acrylic prisms filled to varying degrees with water, and more. Each of these pieces represented a unique physical system that was dependent on the properties of the physical space it was curated in or on the viewers that interacted with it. One piece that caught my eye was a clear acrylic cylinder, filled almost completely with water and suspended from the ceiling: in essence a pendulum with water sloshing around inside. I found myself intrigued by the way the water moved and the air bubbles oscillated as the larger system swung. There’s a highly complex relationship between the dynamics of the fluid and those of the pendulum.

Connecting this back to experimental capture: how can we capture and highlight this kind of complex dynamic system? What if a robot-driven camera followed the motion of the watery pendulum? What if you used a really big camera that highlighted the act of the robot capturing the pendulum? If you curated the robot, the pendulum, and the captured video together, how would that change the piece?

I get excited by art that can spur ideas and inspiration in me – that’s why I got excited about Hans Haacke’s kinetic art.

Link to New Museum Show: https://www.newmuseum.org/exhibitions/view/hans-haack

Water pendulum – Viewers were prompted to agitate the piece and watch it respond.