The project is an interactive experience that uses the Kinect 2 to transform each participant’s head, hands, and feet into paintbrushes for drawing colored lines and figures in space. Each session lasts one minute, after which the patch clears the canvas, allowing a new user to take over and begin again.
Participants might attempt to draw representational shapes, or perhaps dance in space and see what patterns emerge from their movements.
The user’s head draws in green, the left hand in magenta, the right hand in red, the left foot in orange, and the right foot in blue.
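As a rough illustration of the drawing logic (the actual work is a Max/Jitter patch, and the joint names and RGB values below are placeholders), a minimal Python sketch of the joint-to-brush mapping and the once-per-minute reset:

```python
# Hypothetical sketch of the joint-to-brush mapping described above;
# joint names and colors are stand-ins, not the patch's actual values.
BRUSH_COLORS = {
    "head":       (0, 255, 0),      # green
    "left_hand":  (255, 0, 255),    # magenta
    "right_hand": (255, 0, 0),      # red
    "left_foot":  (255, 165, 0),    # orange
    "right_foot": (0, 0, 255),      # blue
}

# One growing polyline per joint; each new tracked position extends its line.
trails = {joint: [] for joint in BRUSH_COLORS}

def add_frame(joint_positions):
    """joint_positions: dict mapping joint name -> (x, y) screen position."""
    for joint, pos in joint_positions.items():
        if joint in trails:
            trails[joint].append(pos)

def clear_canvas():
    """Called once per minute so a new participant can start fresh."""
    for trail in trails.values():
        trail.clear()
```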
Body Paint will be installed in the Museum in late January for a currently undefined period of time, free for participants to wander up to, discover, and experience during their visit.
Visual documentation of the patch in presentation and patcher modes, along with a video recording of my body drawing in space, is below.
The gist is here.
The patch uses adjustable maximum and minimum Kinect depth map thresholds to isolate the desired object/projection surface, and then uses the resulting depth map as a mask. This forces the video content to show through only in the shape of the isolated surface.
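The thresholding step itself can be sketched outside of Max; a minimal example with numpy, assuming a depth map in millimeters and purely illustrative near/far values:

```python
# Hedged sketch of the depth-threshold masking described above (the project
# itself does this in Max/Jitter); threshold values are illustrative.
import numpy as np

def mask_video(depth_mm: np.ndarray, video_rgb: np.ndarray,
               near_mm: float = 800.0, far_mm: float = 1500.0) -> np.ndarray:
    """Keep video pixels only where the depth reading falls between the
    adjustable minimum and maximum thresholds."""
    mask = (depth_mm >= near_mm) & (depth_mm <= far_mm)   # isolated surface
    return video_rgb * mask[..., None]                    # black elsewhere

# Example with random stand-in data at Kinect 2 depth resolution (512x424).
depth = np.random.uniform(500, 3000, size=(424, 512))
video = np.random.randint(0, 256, size=(424, 512, 3), dtype=np.uint8)
masked = mask_video(depth, video)
```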
The patch is less precise on instruments that leave much of the body exposed; legs, for example, tend to occupy the same depth range as the face of a guitar. It is better suited to instruments with larger faces that obscure most of the body, such as cellos and upright basses.
While attempting to map to the surface of a guitar, I also toyed with other uses for the patch, including this animated depth shadow, which projects a video-mapped shadow of the performer onto the wall and creates the potential for visual duets between any number of performers and mediated versions of their bodies.
I plan to continue exploring how to make this patch more precise on a variety of instruments, possibly by pairing this existing process with computer vision, motion tracking, and/or IR sensor elements.
The gist is here.
I began by reviewing Jesse’s original shapes/ducks/text patch and reconstructing the components of the PFFT~ one by one in order to understand how they work together before being written into a matrix. I then created a system outside of the PFFT~ subpatch that randomly pulls a series of lines from Woolf’s novels and renders them as 3D text to a jit.gl sequence.
The only extant recording of Woolf speaking, in which she discusses the identities of words, activates the PFFT~ process, and the resulting signals control the scale of the text. The movement of the text uses Jesse’s original Positions subpatch, filtered through a new matrix that controls the number of lines appearing at any given time.
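Conceptually, the voice-to-scale mapping amounts to turning each frame’s spectral energy into a scale value. A rough Python sketch of that idea (not the PFFT~ subpatch itself; the window size, normalization, and scale range are assumptions):

```python
# Rough conceptual sketch of driving text scale from spectral energy; the
# actual project does this inside a PFFT~ subpatch in Max. Window size and
# scale range are placeholder assumptions.
import numpy as np

def frame_to_scale(samples: np.ndarray, min_scale: float = 0.5,
                   max_scale: float = 3.0) -> float:
    """Map one windowed audio frame's FFT magnitude to a text scale."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    energy = min(spectrum.mean() / (len(samples) / 2), 1.0)  # crude 0-1 normalization
    return min_scale + energy * (max_scale - min_scale)

# Example on a stand-in 1024-sample frame of noise.
print(frame_to_scale(np.random.uniform(-1, 1, 1024)))
```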
At the top of her recording, Woolf says, “Words…are full of echoes, memories, associations…” and I aimed to create a visual experience which reflects this statement as an interplay between her own spoken and written words.
I spent some time altering various parameters (the size of the matrices, the size of the text, the amount of time allotted to the trail, swapping the placement and scaling matrices, and so on) in order to achieve different effects. Some examples of those experiments are below.
Here’s the recording of Virginia Woolf.
The Recorded Voice Of Virginia Woolf

The patch with these amendments looks like this:
This playlist documents each original IR signal, each original Psycho excerpt, and all of the potential matches between an IR signal and a Psycho excerpt.
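For reference, convolving a dry excerpt with an impulse response can be sketched outside of Max; a minimal Python example, assuming scipy and soundfile are available and using hypothetical filenames rather than the actual project files:

```python
# A minimal sketch (not the actual patch): convolve a dry excerpt with an
# impulse response (IR) using scipy. Filenames are hypothetical placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("psycho_excerpt.wav")
ir, sr_ir = sf.read("space_ir.wav")
assert sr == sr_ir, "resample so both files share a sample rate"

# Fold multi-channel files down to mono to keep the example simple.
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

wet = fftconvolve(dry, ir)            # the convolution itself
wet /= np.abs(wet).max()              # normalize to avoid clipping
sf.write("excerpt_through_ir.wav", wet, sr)
```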
Just for fun, and because I’ve been toying around with the jit.gl tutorial patches, I created an animated audio-responsive “3D speaker” to accompany and visualize each convolution as it plays. Here’s a short video of how that appears.
And here’s the gist for the whole shebang.
For this assignment, I attempted to bring them together by creating a patch that live-remixes the music video for Max Richter’s composition Mrs. Dalloway in the Garden by mapping theta brainwaves to the jit.matrixset’s frame buffer. Theta waves are indicative of mind-wandering, so the experience of remixing the video is meant to reflect the journey of Mrs. Dalloway herself, who spends the majority of Woolf’s book with a mind that meanders back and forth between the past and the present.
The Muse headset streams data to a terminal script, which converts it into OSC messages that Max can read. (The patch features a test subpatcher for testing without the Muse headset.)
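A minimal sketch of that bridge, assuming the python-osc package; the port, OSC address, and the random stand-in value are assumptions rather than the actual script’s settings:

```python
# Hypothetical sketch of the terminal-script bridge: send theta values to Max
# as OSC messages. The port and address are placeholders, and a random value
# stands in for the real headset stream.
import random
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)   # Max side: [udpreceive 7400]

while True:
    theta = random.uniform(0.0, 1.0)          # stand-in for a Muse theta reading
    client.send_message("/muse/theta", theta)
    time.sleep(0.1)                           # ~10 messages per second
```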
As mind-wandering, and thus theta wave activity, increases, the number of frames of delay increases, producing more aggressive movement across time. The theta values are also inverse-mapped to a jit.brcosa object, so as the user concentrates on the present environment and mind-wandering/theta activity decreases, the video fades closer and closer to black. The video only fully appears when the user’s mind is wandering freely.
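The two mappings can be sketched as simple scalings. A hedged Python example, assuming theta is already normalized to 0–1; the delay and brightness ranges are illustrative, not the patch’s actual values:

```python
# Hedged sketch of the two mappings described above; ranges are illustrative
# placeholders, not the values used in the actual patch.

def theta_to_delay_frames(theta: float, max_delay: int = 60) -> int:
    """Higher theta (more mind-wandering) -> more frames of delay."""
    theta = min(max(theta, 0.0), 1.0)
    return int(round(theta * max_delay))

def theta_to_brightness(theta: float) -> float:
    """The video only reaches full brightness when mind-wandering (theta)
    is high; as theta drops, the image fades toward black."""
    return min(max(theta, 0.0), 1.0)   # 0.0 = black, 1.0 = fully visible

print(theta_to_delay_frames(0.8), theta_to_brightness(0.2))
```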
Here’s a video documenting a full cycle of the patch in action while I wore the Muse.
And here’s the gist.
As the signal decayed, the video slowed, stuttered, grew higher in contrast, and eventually became darker and darker, creating the interesting illusion that the scene takes place later in the day with each subsequent rendering. After 18 iterations, the film became almost completely black, with only what appear to be small fiery flares visible in the darkness.
The grid below is a composite of all 18 passes shown simultaneously; best viewed in full screen!