This project was an exploration of how the Kinect might be used to map pre-rendered and generative audio-responsive projections onto the faces of instruments.
The patch uses adjustable minimum and maximum Kinect depth thresholds to isolate the desired object/projection surface, then uses the resulting depth map as a mask, so that the video content shows through only in the shape of the isolated surface.
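The original patch environment isn't specified, but the thresholding-as-mask idea can be sketched in NumPy. Everything here (function names, the millimeter units, the toy depth values) is my own illustration, not the author's implementation:

```python
import numpy as np

def depth_mask(depth_mm, near_mm, far_mm):
    """Binary mask: True where depth falls inside [near_mm, far_mm].

    depth_mm is an HxW array of Kinect depth readings in millimeters
    (assumed; a Kinect depth value of 0 marks an invalid pixel and
    falls outside any positive threshold range automatically).
    """
    return (depth_mm >= near_mm) & (depth_mm <= far_mm)

def apply_mask(video_frame, mask):
    """Let the video frame show through only where the mask is True."""
    out = np.zeros_like(video_frame)   # black everywhere else
    out[mask] = video_frame[mask]
    return out

# Toy example: a 4x4 depth map where the center region sits at the
# instrument's depth and the border is the far background.
depth = np.array([
    [2000, 2000, 2000, 2000],
    [2000, 1200, 1250, 2000],
    [2000, 1180, 1220, 2000],
    [2000, 2000, 2000, 2000],
], dtype=np.float32)
frame = np.full((4, 4, 3), 255, dtype=np.uint8)  # all-white video frame

mask = depth_mask(depth, near_mm=1000, far_mm=1500)
masked = apply_mask(frame, mask)
```

In this sketch the adjustable thresholds are just the `near_mm`/`far_mm` arguments; in a live patch they would be mapped to sliders so the performer's surface can be dialed in by eye.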
The patch is less precise on instruments that leave much of the player's body exposed: a guitarist's legs, for example, tend to sit in the same depth range as the face of the guitar. It is better suited to instruments with larger faces that obscure most of the body, such as cellos and upright basses.
While attempting to map onto the surface of a guitar, I also experimented with other uses for the patch, including this animated depth shadow, which projects a video-mapped shadow of the performer onto the wall, opening up the potential for visual duets between any number of performers and mediated versions of their bodies.
I plan to continue exploring how to make this patch more precise on a variety of instruments, possibly by pairing this existing process with computer vision, motion tracking, and/or IR sensor elements.
The gist is here.