The Annoying Orchestra

Users interact with an emulated theremin using colorful objects. A brightly colored object is tracked with a webcam and its movement is translated into tones of varying frequency.

I had a few different concepts in mind going into this final project, but I decided to implement a tool I’ve been wanting to make for a while: one that uses physical motion in real space to generate tones, so that the program can be played and sampled like any other digital instrument.

The program has a few basic functions. First, the user selects a color by clicking on the area of the video feed where their brightly colored object is. Once the color is calibrated, a function computes a position as the weighted average of every pixel’s coordinates, weighted by how close each pixel is to that color. The user can then move the object around in front of the camera, and the tone output from the program reflects its position through pitch and volume.
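For reference, here is a minimal sketch of how that weighted-average tracking could be done, assuming the video feed is read through OpenCV. The function names, the Gaussian weighting, and the softness parameter are illustrative placeholders rather than the exact implementation.

```python
import cv2
import numpy as np

state = {"frame": None, "target": None}   # latest frame and the calibrated color

def on_click(event, x, y, flags, param):
    """Calibrate the tracked color from the pixel the user clicks."""
    if event == cv2.EVENT_LBUTTONDOWN and state["frame"] is not None:
        state["target"] = state["frame"][y, x].astype(np.float32)

def weighted_position(frame, target, softness=40.0):
    """Average of all pixel coordinates, weighted by closeness to the target color."""
    dist = np.linalg.norm(frame.astype(np.float32) - target, axis=2)
    weights = np.exp(-(dist / softness) ** 2)          # closer colors weigh more
    ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    total = weights.sum() + 1e-9
    return (xs * weights).sum() / total, (ys * weights).sum() / total

cap = cv2.VideoCapture(0)
cv2.namedWindow("feed")
cv2.setMouseCallback("feed", on_click)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    state["frame"] = frame
    if state["target"] is not None:
        x, y = weighted_position(frame, state["target"])
        # x could be mapped to pitch and y to volume of the synthesized tone
        cv2.circle(frame, (int(x), int(y)), 8, (0, 255, 0), 2)
    cv2.imshow("feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The audio side is omitted here; the tracked (x, y) position would simply be fed to whatever synthesizer generates the tone.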

I wasn’t able to experiment with the final setup as much as I would have liked, but I put together a short edit compiling some of the sounds I recorded.

I think this piece functions best when the user is focused only on the object they’re manipulating, and not having to worry about what it looks like on the screen. It would help to create some physical frame or boundary that the user can operate within, rather than having to consider the video feed itself. I also think that the layering of sounds is an important element of the interaction. In developing this further, I would add functionality for live looping or more finely controlled sampling and compositing.

Work In Progress – April 21st

I have been experimenting with different ways of implementing the 3D rolling shutter effect using the Intel RealSense depth camera. So far I’ve been able to modify an existing example file, which lets you view the live depth feed using the Python library.

The following is what the raw output from the camera looks like. As you can see, it displays a point cloud of the recorded vertices.

From here I wrote a function to modify the vertices being displayed in the live view. Vertices from each new frame are positioned in a queue based on their distance from the lens, and the live view only shows the points at the front of the queue. After each frame this queue shifts forward, so vertices farther from the camera get fed to the live view a little while after the vertices right in front of the lens.
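Stripped of the RealSense boilerplate, the delay queue boils down to something like the following sketch, which assumes each frame arrives as an (N, 3) vertex array like the one produced by the Python point-cloud example. The bin count, maximum depth, and function names here are placeholder choices, not necessarily the values in my actual script.

```python
from collections import deque
import numpy as np

NUM_BINS = 30          # the farthest vertices are delayed by this many frames
MAX_DEPTH = 4.0        # meters; depths beyond this get clipped into the last bin

# future[i] holds the vertices scheduled to appear i frames from now
future = deque([[] for _ in range(NUM_BINS)])

def push_frame(verts):
    """Distribute this frame's vertices into the queue by their distance from the lens."""
    z = np.clip(verts[:, 2], 0.0, MAX_DEPTH)
    bins = (z / MAX_DEPTH * (NUM_BINS - 1)).astype(int)   # near -> 0, far -> NUM_BINS-1
    for i in range(NUM_BINS):
        future[i].append(verts[bins == i])

def pop_frame():
    """Return the vertices due for display now and shift the queue forward one frame."""
    due = future.popleft()
    future.append([])                                      # fresh empty slot at the back
    return np.concatenate(due) if due else np.empty((0, 3), dtype=np.float32)
```

In the example’s render loop this just means calling push_frame(verts) on each new frame and drawing pop_frame() instead of verts; flipping the bin index (NUM_BINS - 1 - bins) gives the reversed sampling shown below, where farther vertices appear sooner.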

This video shows the adjustments made to the 3D vector array, which aren’t reflected by the color map, resulting in this interference color pattern. In this example the sampling is reversed, so that farther vertices appear sooner than closer ones.

The main issue I’ve come across is the memory usage of doing these computations live. Because of the dropped frame rate, this clip was sped up several times to restore some fluidity of motion. The next thing I plan on doing is getting the raw data into Unity to make further changes with the code I wrote previously.

End-Of-Semester Plan

I’ve been struggling to come up with interesting projects or endeavors these past few weeks, though I’m hoping I’ll arrive at some discovery as I experiment with a few things.

One of these is an Intel RealSense depth camera (borrowed from the studio), for which I’ve been working on ways of moving camera data, live or recorded, into Unity. I have a few ideas about algorithmically distorting this depth data and warping the interaction between the camera’s sense of space and time.

Less related to “capture” is a speed tufting tool I’ve been learning to use, which I will be rigging to tuft automatically within 2D coordinates. I’ve had an idea for a while of creating a live interaction between performance or real-world movement and embroidery or other forms of textile fabrication. I think this would be possible using the depth camera, which does a good job of interpreting 3D activity, and whatever configuration of motors I decide on for the project. This is a concept I’ve been considering working into a final project for another course I’m currently taking, Algorithmic Textiles, which would involve writing path algorithms specifically catering to the automated tufting system.

Aside from these, I have found the offerings so far to be refreshing and stimulating exercises, so I wouldn’t mind continuing with a set of those as I work on some of this stuff.


WalkOutside: This One River

I started with the intention of documenting this nature trail in my town. The trail used to be a Short Line that supplied materials to factories along the 3-mile stretch from 1840 until 1981. I wanted to document some of the more interesting remnants of these old industrial structures.

Taking the time to document this place that I had grown so familiar with, I was taken back to when my friends used to come here a lot, mostly in middle school. I decided to continue up the river as it meanders through our town, visiting other spots we spent a lot of time skating at.

Starting at what was the end of the river for us – we never really thought to follow the river into the woods beyond this bridge, and so this is as far as it reaches.

This wall is the first structure you see walking from the north end of the trail. These “Tree Sweaters” you’ll see along the trail appeared last year. I wouldn’t be surprised if they were made from acrylic yarn.
This bridge was originally part of the Skaneateles Short Line.

From here I meandered away from the walking path to take pictures of some of the other bridges and brickwork on this side of the river – you can see the previous bridge on the right, and the next bridge on the left.
As I was walking through this area I was struck by how the snow obscures and partially “erases” the present chaos of the natural landscape. This seemed to emphasize the regularities in these deteriorating human-made structures. It made me think of the effect of missing data from our photogrammetry workshop. The slight irregularities in the deteriorating structures seem to match the irregularities of the blank snow, reinforcing the cohesiveness of this illusion.

I thought about how the river’s movement was probably the most striking disruption of the blank snow canvas – cutting through it so violently in some parts and tepidly in others.
This waterfall is iconic. This bench is iconic.
I paused here to think about how it took hundreds of years for these stone structures to be deteriorated by entropy and reclaimed to the extent that they have by organic materials. It took less than a decade for this bench to be claimed by the organic material of a shit head with a lighter.
I know who did this – he beat me up before soccer practice in 8th grade. Knowing him, I honestly see this as an act of neuroticism more than an act of vandalism – but there may be an inherent connection between the two.
View from the bench.

Natural properties of a filter. This is where Doug found the frisbee.
An interesting feature of these woods is the rows of support pillars that used to hold up mills. They are always arranged in straight lines that seem to cut through the landscape at weird angles – directions that used to disappear seamlessly and intentionally.

The bottom of this area and the next is actually a concrete platform. In the Summer the river dries up and only continues below the platform, letting us walk across to poke around inside these buildings.

The Dock

Another bridge along the Skaneateles Short Line. This is the southern-most bridge of the nature trail.
This is where the river leaves the nature trail and cuts under roads and behind houses into the village.
Creekside
We used to skate a lot in these parking lots behind CVS.
This bank was real nice to skate when there weren’t trucks blocking it and people telling us to leave. The river is back there on the left in this picture.
Old Stone Creamery. We clocked a lot of hours behind here in the Summer. I remember the water being pretty gross here but you could jump off the dock and then lay out on that metal pipe that got ripping hot sitting in the sun all day. This is where my friend lost his frisbee. When we saw Doug with it, he said it couldn’t be ours because he found it at the nature trail.
If you go down this hole you can wander through the sewer system beneath the village.

This is the other side of that filter shown in the previous pics. There’s usually a landmass of gunk and trash here in the summer.
The first bridge that the river crosses from the lake.

It was pretty common to run and jump off this boat house in the summer.
Where the river begins.


[I just found out the whole thing’s on Google Maps]

Person in Time Early Ideas

[Draft – two main ideas]

  1. Isolating interactions with specific objects, maybe the manipulation of things like keys or peeling an orange, and digitally removing the objects from the final capture. I’m particularly interested in eating, in moving objects from the world to a specific point on your body, and in focusing intensely on the minute motions of how our hands and bodies manipulate our environment.
  2. I was also thinking about expressing the effects of relativity and the delay of perception due to the finite speed of light, scaled down to alter how we perceive bodies in motion. This idea doesn’t apply as directly to a specific human motion as of now, but I may work on incorporating a more specific connection.

The Virtual Artifact Gallery

This virtual gallery displays tangible objects captured from real life, accompanied by their mundane and iconic representations from popular video games.


I originally had an idea to curate a similar gallery, which would include samples of real textures alongside their renderings in different styles of illustration. I wanted to capture the process of reconstructing tangible perception in a medium as abstracted as illustration. I eventually found that the variations between styles weren’t cohesive enough to create the type of collection I wanted, so I thought of collecting 3D models, specifically low-polygon models created for video games. This type of modeling is a medium that is abstracted enough by its limited detail, and can assume endless varieties of recognizable forms. It also incorporates some of the gestural aspects of illustration that I was trying to capture, through both the applied image textures and the virtual forms themselves.

I thought of using video game models for my typology when I was browsing The Models Resource looking for models to use for a different experiment. The Models Resource is a website that hosts 3D models that have been ripped from popular video games. I first noticed that a lot of the models had been optimized for speed and had interesting ways of illustrating their detail with a limited amount of information.

The following are some of the titles from which I sourced models:

  • Mario Party 4 [2002] – Wario’s Hamburger
  • SpongeBob SquarePants Employee of the Month [2002] – Krusty Krab Hat
  • SpongeBob SquarePants Revenge of the Flying Dutchman [2002] – Krusty Krab Bag
  • Scooby-Doo Night of 100 Frights [2002] – Hamburger
  • Garry’s Mod [2004] – Burger
  • Mario Kart DS [2005] – Luigi’s Cap
  • Dead Rising [2006] – Cardboard Box
  • Little Big Planet [2008] – Baseball Cap
  • Nintendogs Cats [2011] – Cardboard Box, Paper Bag
  • The Lab [2016] – Cardboard Box
  • Animal Crossing Pocket Camp [2017] – Paper Bag

I decided to use photogrammetry to bring physical objects as seamlessly as I could into virtual space. I wanted to contextualize these artifacts, which had been stripped from their intended contexts, alongside virtual objects we might consider “more real” because they are representations of photographs rather than abstract visualizations.

I think this collection of hamburgers is the most jarring, as it frames a convincing representation of something commonly eaten and digested as the plastic, rigidly designed product it truly is. The bun of this McDonald’s double cheeseburger was stamped out of a sheet of dough by a ring, the patty was stamped into a circle by another mold, and the cheese was cut or sliced into a regular square – all automated processes of physical manufacturing done with the intent of assembling this product for somebody’s enjoyment.

I noted that many of the examples I found were mostly or entirely symmetrical. I think this symmetry reflects our physical ideals of the products we interact with and consume. A virtual object can exist in an absolutely defined state, rather than as an expression of that definition which is only similar enough within manufacturing tolerances. These tolerances exist in most of the objects we interact with, and even in the food we consume. We seem to use symmetry to indicate the intentionality of an object’s existence, which, coincidentally or not, is also how most organic bodies are made. When a video game character is designed asymmetrically, it is usually done deliberately to call attention to some internal imbalance or juxtaposition.

In retrospect, the disruptions of the photogrammetry – especially in the empty box model – seemed to manifest the essence of our fleeting, tangible perceptions of certain objects compared to others. The items that served as packaging for a product, to be discarded or recycled, are missing information as if easily forgotten.


Typology Machine Proposal

This typology machine will capture recurring forms in architecture around Pittsburgh. I would like to create an animation incorporating photographs of residential and commercial buildings, sequenced to create a fluid movement of commonalities through space. I think it’s interesting how we can recognize similar objects as iterations of one another, or as iterations of some common master or reference production. This is a property of capitalism that allows for a simulation we take part in, where a single idea of a product can be shared with an entire consumer base through its automated reproduction. I’d like to apply this concept to the physical rhythms of architecture, expanding it to a larger and more organized display of repetition.

I was inspired by a similar piece done by video artist Kevin McGloughlin.

In this piece, McGloughlin has taken pieces of these architectural frames (usually kinetic) and composited them into a new collage, presenting the patterns in infrastructure as isolated fractals built upon themselves.

For my typology I want to show only a series of photos I’ve taken, rather than compositing something like these. The way I order and arrange these clips is where the machine comes into play.

I plan on finding simple geometric shapes within the architecture, either manually or automatically, and labeling a dataset with the combined image and shape data. I would then write a program to find similar shapes across frames and place those images next to each other in sequence, repositioning, rotating, and scaling each one to match the shapes as closely as possible and keep everything centered in the frame. The result would be a semi-fluid animation, rapidly exploring the physical repetitions and commonalities throughout the city.
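As a rough sketch of the automatic route, the shape finding and matching could look something like this with OpenCV; the Canny thresholds, the Hu-moment similarity measure, and the centering transform are placeholder choices, and rotation alignment is left out for simplicity.

```python
import cv2
import numpy as np

def find_shapes(image, min_area=500):
    """Detect candidate geometric shapes as simplified contours."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    shapes = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        shapes.append(approx)
    return shapes

def shape_distance(a, b):
    """Lower is more similar; invariant to translation, scale, and rotation."""
    return cv2.matchShapes(a, b, cv2.CONTOURS_MATCH_I1, 0.0)

def center_on_shape(image, shape, canvas=(1080, 1080)):
    """Scale and translate the image so the matched shape sits centered in the canvas."""
    (cx, cy), radius = cv2.minEnclosingCircle(shape)
    scale = (min(canvas) * 0.3) / max(radius, 1.0)        # shape spans ~60% of the canvas
    M = np.float32([[scale, 0, canvas[0] / 2 - scale * cx],
                    [0, scale, canvas[1] / 2 - scale * cy]])
    return cv2.warpAffine(image, M, canvas)
```

Sequencing would then just be a matter of greedily ordering the labeled images so that each frame’s matched shape has the smallest shape_distance to the shape in the previous frame.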

SEM: Coffee

I had originally prepared a sample containing a crushed Benadryl tablet and sea salt; however, there were complications in setting up the SEM and I ended up capturing another sample (which I believe was Philippe’s – coffee).

900x magnification
1300x magnification
7000x magnification
22x magnification stereo pair
22x magnification anaglyphic stereo view