
Adam J. Thompson – Final Project – Body Paint

Body Paint is the visual component of a commission from the Pittsburgh Children’s Museum in collaboration with three sound artists from the School of Drama.

The project is an interactive experience that uses the Kinect 2 to transform each participant’s head, hands, and feet into paintbrushes for drawing colored lines and figures in space. Each session lasts for one minute, after which the patch clears the canvas, allowing a new user to take over and begin again.

Participants might attempt to draw representational shapes, or perhaps dance in space and see what patterns emerge from their movements.

The user’s head draws in green, the left hand in magenta, the right hand in red, the left foot in orange, and the right foot in blue.
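For a sense of the underlying logic, here is a minimal Python sketch of the joint-to-color trail drawing described above – not the Max patch itself; the `get_joint_positions`, `draw_line`, and `clear_canvas` callbacks are hypothetical stand-ins for the Kinect 2 skeleton data and the Jitter drawing surface.

```python
import time

# Color assignments from the patch: each tracked joint paints in its own hue.
JOINT_COLORS = {
    "head":       (0, 255, 0),      # green
    "left_hand":  (255, 0, 255),    # magenta
    "right_hand": (255, 0, 0),      # red
    "left_foot":  (255, 165, 0),    # orange
    "right_foot": (0, 0, 255),      # blue
}

SESSION_SECONDS = 60  # each participant paints for one minute


def run_session(get_joint_positions, draw_line, clear_canvas):
    """Track joints for one session, drawing a colored trail per joint,
    then clear the canvas for the next participant.

    get_joint_positions() -> dict mapping joint name to (x, y) coordinates
    draw_line(color, p0, p1) -> draws a line segment in the given color
    clear_canvas()          -> wipes the drawing surface
    """
    previous = {}
    start = time.time()
    while time.time() - start < SESSION_SECONDS:
        current = get_joint_positions()
        for joint, color in JOINT_COLORS.items():
            if joint in current and joint in previous:
                draw_line(color, previous[joint], current[joint])
        previous = current
    clear_canvas()

# Example wiring with trivial stand-ins (a real setup would pass Kinect data
# and a rendering context instead):
# run_session(lambda: {"head": (0, 0)}, lambda c, a, b: None, lambda: None)
```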

Body Paint will be installed in the Museum in late January for a currently undefined period of time, free for participants to wander up to, discover, and experience during their visit.

Visual documentation of the patch in presentation and patcher modes and a video recording of the results of my body drawing in space are below.

The Gist is here.

Project 1 – Kinect Depth Projection Mapping – Adam J. Thompson

This project was an exploration of how the Kinect might be used to map pre-rendered and generative audio-responsive projections onto the faces of instruments.

The patch uses adjustable maximum and minimum Kinect depth thresholds to isolate the desired object/projection surface, then uses the resulting depth map as a mask so that the video content shows through only in the shape of the isolated surface.
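As a rough illustration of that thresholding-and-masking step, here is a short Python/NumPy sketch rather than the actual Jitter chain; the depth and video arrays (and the near/far values in millimeters) are placeholder assumptions.

```python
import numpy as np

def depth_mask(depth_mm, video_frame, near_mm=800, far_mm=1500):
    """Keep video pixels only where the Kinect depth falls inside the
    adjustable [near_mm, far_mm] window; everything else goes black.

    depth_mm:    (H, W) array of depth values in millimeters
    video_frame: (H, W, 3) array of video pixels
    """
    mask = (depth_mm >= near_mm) & (depth_mm <= far_mm)
    return video_frame * mask[..., np.newaxis]

# Synthetic stand-ins for the Kinect 2 depth map (512 x 424) and a video frame.
depth = np.random.randint(500, 3000, size=(424, 512))
frame = np.random.randint(0, 256, size=(424, 512, 3), dtype=np.uint8)
masked = depth_mask(depth, frame)
```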

The patch is less precise on instruments that expose a large part of the body, since, for example, a performer’s legs tend to fall within the same depth range as the face of a guitar; it is better suited to instruments with larger faces that obscure most of the body, such as cellos and upright basses.

While attempting to map to the surface of a guitar, I also toyed around with other uses for the patch, including this animated depth shadow, which places a video-mapped shadow of the performer on the wall and creates the potential for visual duets between any number of performers and mediated versions of their bodies.

I plan to continue exploring how to make this patch more precise on a variety of instruments, possibly by pairing this existing process with computer vision, motion tracking, and/or IR sensor elements.

The gist is here.

Assignment 4 – Adam J. Thompson

I’ve used this assignment as an opportunity to continue my explorations of the writings of Virginia Woolf as transformed by digital mediums, as well as to better understand how to use a spectral system to create audio-responsive visuals.

I began by reviewing Jesse’s original shapes/ducks/text patch and reconstructing the components of the PFFT~ one by one in order to understand how they work together to write the analysis into a matrix. I subsequently created a system outside of the PFFT~ subpatch which randomly pulls a series of lines from Woolf’s novels and renders them as 3D text to a jit.gl sequence.

The only extant recording of Woolf speaking about the identities of words activates the PFFT~ process, and the resulting signals control the scale of the text. The movement of the text uses Jesse’s original Positions subpatch, filtered through a new matrix that controls the number of lines appearing at any given time.
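As a loose analogy for how spectral analysis can drive the text scale, here is a Python sketch that reduces each analysis frame to a single magnitude and maps it onto a scale range; the frame size, hop, and scale bounds are illustrative assumptions rather than the values used in the patch.

```python
import numpy as np

def frame_scales(audio, frame_size=1024, hop=512,
                 min_scale=0.5, max_scale=3.0):
    """Map per-frame spectral magnitude of a mono signal to a text scale.

    Analogous in spirit to driving text scale from a spectral analysis:
    louder spectral frames produce larger text.
    """
    window = np.hanning(frame_size)
    scales = []
    for start in range(0, len(audio) - frame_size, hop):
        frame = audio[start:start + frame_size] * window
        scales.append(np.abs(np.fft.rfft(frame)).mean())
    scales = np.array(scales)
    # Normalize to [0, 1], then stretch into the desired scale range.
    scales = (scales - scales.min()) / (scales.max() - scales.min() + 1e-9)
    return min_scale + scales * (max_scale - min_scale)

# Synthetic stand-in for the recording: one second of noise at 44.1 kHz.
audio = np.random.randn(44100)
print(frame_scales(audio)[:5])
```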

At the top of her recording, Woolf says, “Words…are full of echoes, memories, associations…” and I aimed to create a visual experience which reflects this statement as an interplay between her own spoken and written words.

I spent some time altering various parameters – the size of the matrices, the size of the text, the amount of time allotted to the trail, swapping the placement and scaling matrices, etc. – in order to achieve different effects. Some examples of those experiments are below.

Here’s the recording of Virginia Woolf.

Here’s the gist.

Project 1 – Adam J. Thompson

I am interested in creating a patch that uses a Kinect, other infrared cameras, and/or infrared sensors to map audio-responsive video onto moving targets. I’ve been toying around with the potential for mapping video onto musical instruments, with the video content generating and animating in response to the music the instruments are playing. As the instruments inevitably shift in space during any given performance, the video sticks to them through IR mapping. I’m curious, also, about how the video content might represent and shift according to the harmonies at work in the music itself.

Assignment 3 – Adam J. Thompson

I’m an unabashed Alfred Hitchcock fanboy, and with October just around the corner, I’ve been in a Psycho mood. For this project, I decided to expand a bit on the requirements and build a 4 x 4 convolution mix-and-match patch. It contains the required four IR signals (In the Dryer, Studio 201, In the Shower, and Crunching Leaves) and lets the user pair each of them with any of four short excerpts from Psycho (A Boy’s Best Friend, Harm a Fly, Oh God, Mother!, and Scream).
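For anyone curious about the operation underneath the mix-and-match, here is a hedged Python sketch of convolving a dry excerpt with an impulse response; the patch does this with Max objects, and the signals below are synthetic stand-ins for the actual recordings.

```python
import numpy as np

def convolve_ir(dry, ir):
    """FFT-based convolution of a dry excerpt with an impulse response,
    normalized so the result does not clip."""
    n = len(dry) + len(ir) - 1
    wet = np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)
    return wet / (np.max(np.abs(wet)) + 1e-9)

# Synthetic stand-ins: four "excerpts" and four "IRs" give the 16 pairings
# the patch lets a user mix and match.
excerpts = [np.random.randn(44100) for _ in range(4)]
impulse_responses = [np.random.randn(22050) for _ in range(4)]
all_pairings = [convolve_ir(e, ir) for e in excerpts for ir in impulse_responses]
print(len(all_pairings))  # 16
```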

The patch with these amendments looks like this:

This playlist documents each original IR signal, each original Psycho excerpt, and all potential matches between an IR signal and Psycho excerpt.

Just for fun, and because I’ve been toying around with the jit.gl tutorial patches, I created an animated audio-responsive “3D speaker” to accompany and visualize each convolution as it plays. Here’s a short video of how that appears.

And here’s the gist for the whole shebang.

Assignment 2 – Adam J. Thompson

I’ve been exploring two bodies of work outside of class: digital interpretations of the novels of Virginia Woolf and interactive experiences/environments triggered by a user’s brainwaves using a hacked Muse headset.

For this assignment, I attempted to bring them together by creating a patch that live-remixes the music video for Max Richter’s composition Mrs. Dalloway in the Garden by mapping theta brainwaves to the jit.matrixset’s frame buffer. Theta waves are indicative of mind-wandering, so the experience of remixing the video is meant to reflect the journey of Mrs. Dalloway herself, who spends the majority of Woolf’s book with a mind that meanders back and forth between the past and the present.

The Muse headset sends its data through a terminal script, which converts it into OSC messages that Max reads. (The patch features a test subpatcher for testing without the Muse headset.)

As mind-wandering, and thus theta wave activity, increases, the number of frames of delay grows, producing more aggressive movement across time. The theta values are also inversely mapped to a jit.brcosa object: as the user becomes more present and concentrates on the immediate environment, mind-wandering and theta activity decrease, and the video fades closer and closer to black. The video only fully appears when the user’s mind is wandering freely.
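Here is a small Python sketch of those two mappings, theta to frame delay and theta to brightness; the normalization range and maximum delay are assumptions for illustration, since the real values live in the Max patch.

```python
def map_theta(theta, theta_min=0.0, theta_max=1.0, max_delay_frames=120):
    """Map a theta reading to the two video parameters described above.

    - Frame delay grows with theta: more mind-wandering means more
      aggressive movement across time in the frame buffer.
    - Brightness follows theta as well, so the video fades toward black
      as the user becomes more present and theta activity drops.
    """
    t = (theta - theta_min) / (theta_max - theta_min)
    t = max(0.0, min(1.0, t))  # clamp to [0, 1]
    delay_frames = int(t * max_delay_frames)
    brightness = t
    return delay_frames, brightness

print(map_theta(0.7))  # (84, 0.7): wandering mind, mostly visible video
print(map_theta(0.1))  # (12, 0.1): focused mind, video near black
```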

Here’s a documentary video of a full cycle of the patch in action as influenced by me wearing the Muse.

And here’s the Gist.

Assignment 1 – Adam J. Thompson

This is the opening sequence of Alfred Hitchcock’s film, Rear Window. The original digitized (via YouTube) film clip was played on my computer screen using MPEG Streamclip and simultaneously screen recorded using QuickTime. The newly recorded version was then played via MPEG Streamclip and again screen recorded using QuickTime, and so on.

As the signal decayed, the video slowed, stuttered, became higher in contrast, and eventually darker and darker, resulting in the interesting illusion of the scene taking place later in the day with each subsequent rendering. After 18 iterations, the film became almost completely black, with only the appearance of what seem to be small fiery flares visible in the darkness.

The grid below is a composite of all 18 passes playing simultaneously – best viewed in full screen!