Adam J. Thompson – Final Project – Body Paint
https://courses.ideate.cmu.edu/18-090/f2017/2017/12/03/adam-j-thompson-final-project-body-paint/
Mon, 04 Dec 2017

Body Paint is the visual component of a commission from the Pittsburgh Children’s Museum in collaboration with three sound artists from the School of Drama.

The project is an interactive experience that uses the Kinect 2 to transform each participant’s head, hands, and feet into paintbrushes for drawing colored lines and figures in space. Each session lasts for one minute, after which the patch clears the canvas, allowing a new user to take over and begin again.

Participants might attempt to draw representational shapes, or perhaps dance in space and see what patterns emerge from their movements.

The user’s head draws in green, the left hand in magenta, the right hand in red, the left foot in orange, and the right foot in blue.
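
For illustration, here is a minimal Python sketch of the mapping described above; the joint names, the tracking stub, and the frame rate are assumptions, since the actual patch does this with Kinect 2 skeleton data inside Max/Jitter.

```python
import time
import random

# Assumed joint-to-color mapping, mirroring the description above.
BRUSH_COLORS = {
    "head":       (0, 255, 0),      # green
    "left_hand":  (255, 0, 255),    # magenta
    "right_hand": (255, 0, 0),      # red
    "left_foot":  (255, 165, 0),    # orange
    "right_foot": (0, 0, 255),      # blue
}

SESSION_SECONDS = 60  # each participant paints for one minute

def get_joint_positions():
    """Stand-in for Kinect 2 skeleton tracking: returns an (x, y) per joint."""
    return {joint: (random.random(), random.random()) for joint in BRUSH_COLORS}

trails = {joint: [] for joint in BRUSH_COLORS}  # the shared "canvas"
session_start = time.time()

while True:  # tracking loop; runs until the installation is shut down
    if time.time() - session_start > SESSION_SECONDS:
        # Session over: clear the canvas so a new participant can begin.
        trails = {joint: [] for joint in BRUSH_COLORS}
        session_start = time.time()
    for joint, position in get_joint_positions().items():
        trails[joint].append((position, BRUSH_COLORS[joint]))  # extend the line
    time.sleep(1 / 30)  # poll at roughly 30 fps
```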

Body Paint will be installed in the Museum in late January for a currently undefined period of time, free for participants to wander up to, discover, and experience during their visit.

Visual documentation of the patch in presentation and patcher modes, along with a video recording of my body drawing in space, appears below.

The Gist is here.

Project 1 – Kinect Depth Projection Mapping – Adam J. Thompson
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/30/project-1-kinect-depth-projection-mapping/
Mon, 30 Oct 2017

This project was an exploration of how the Kinect might be used to map pre-rendered and generative audio-responsive projections onto the faces of instruments.

The patch uses adjustable maximum and minimum Kinect depth thresholds to isolate the desired object/projection surface, then uses the resulting depth map as a mask, forcing the video content to show through only in the shape of the isolated surface.
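
A rough NumPy sketch of that masking stage follows; the threshold values, array shapes, and synthetic data are placeholder assumptions rather than values from the patch.

```python
import numpy as np

def depth_mask_composite(depth_mm, video_rgb, near_mm=800, far_mm=1500):
    """Keep video only where the Kinect depth falls between the two
    adjustable thresholds, mimicking the patch's masking stage.

    depth_mm:  (H, W) uint16 depth map in millimeters
    video_rgb: (H, W, 3) uint8 video frame to be masked
    """
    mask = (depth_mm >= near_mm) & (depth_mm <= far_mm)   # isolate the surface
    return video_rgb * mask[..., None].astype(np.uint8)   # show video only there

# Example with synthetic data standing in for live Kinect and video feeds:
depth = np.random.randint(400, 4000, (480, 640), dtype=np.uint16)
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
masked = depth_mask_composite(depth, frame)
```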

The patch is less precise with instruments that leave much of the performer’s body exposed, since, for example, legs tend to occupy depth ranges similar to the face of a guitar; it is better suited to instruments with larger faces that obscure most of the body, such as cellos and upright basses.

While attempting to map to the surface of a guitar, I also toyed with other uses for the patch, including this animated depth shadow, which places the video-mapped shadow of the performer on the wall and creates the potential for visual duets between any number of performers and mediated versions of their bodies.

I plan to continue exploring how to make this patch more precise on a variety of instruments, possibly by pairing this existing process with computer vision, motion tracking, and/or IR sensor elements.

The gist is here.

Assignment 4 – Adam J. Thompson
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/15/assignment-4-adam-j-thompson/
Sun, 15 Oct 2017

I’ve used this assignment as an opportunity to continue my explorations of the writings of Virginia Woolf as transformed by digital mediums, as well as to better understand how to use a spectral system to create audio-responsive visuals.

I began by reviewing Jesse’s original shapes/ducks/text patch and reconstructing the components of the PFFT~ one by one in order to understand how they work together to write spectral data into a matrix. I subsequently created a system outside of the PFFT~ subpatch which randomly pulls a series of lines from Woolf’s novels and renders them as 3D text in a jit.gl scene.

The only extant recording of Woolf speaking about the identities of words activates the PFFT~ process, and the resulting signals control the scale of the text. The movement of the text uses Jesse’s original Positions subpatch, but filtered through a new matrix used to control the number of lines which appear at any given time.
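
A loose Python analogue of that signal-to-scale mapping, reducing one FFT frame to a single energy value that drives the text size, might look like this; the window, normalization, and sensitivity constant are assumptions rather than the PFFT~ subpatch’s actual internals.

```python
import numpy as np

def spectral_scale(audio_block, base_scale=1.0, sensitivity=4.0):
    """Map the spectral energy of one audio block to a text scale factor,
    loosely analogous to driving 3D text size from PFFT~ output."""
    windowed = audio_block * np.hanning(len(audio_block))
    spectrum = np.abs(np.fft.rfft(windowed))
    energy = spectrum.mean() / (len(audio_block) / 2)  # rough normalization
    return base_scale + sensitivity * energy

# A 512-sample block of a 440 Hz tone at 44.1 kHz:
t = np.arange(512) / 44100.0
block = np.sin(2 * np.pi * 440 * t)
print(spectral_scale(block))  # louder, denser spectra yield larger text
```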

At the top of her recording, Woolf says, “Words…are full of echoes, memories, associations…” and I aimed to create a visual experience which reflects this statement as an interplay between her own spoken and written words.

I spent some time altering various parameters – the size of the matrices, the size of the text, the amount of time allotted to the trail, swapping the placement and scaling matrices, etc. – in order to achieve different effects. Some examples of those experiments are below.

Here’s the recording of Virginia Woolf, “The Recorded Voice of Virginia Woolf.”

Here’s the gist.

Project 1 – Adam J. Thompson
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/01/project-1-adam-j-thompson/
Mon, 02 Oct 2017

I am interested in creating a patch that uses a Kinect, other infrared cameras, and/or infrared sensors to map audio-responsive video onto moving targets. I’ve been toying with the potential for mapping video onto musical instruments, with the video content generating and animating in response to the music the instruments are playing. As the instruments inevitably shift in space during a performance, the video sticks to them through IR mapping. I’m also curious about how the video content might represent, and shift according to, the harmonies at work in the music itself.

Assignment 3 – Adam J. Thompson
https://courses.ideate.cmu.edu/18-090/f2017/2017/09/30/assignment-3-adam-j-thompson/
Sun, 01 Oct 2017

I’m an unabashed Alfred Hitchcock fanboy, and with October just around the corner, I’ve been in a Psycho mood. For this project, I decided to expand a bit on the requirements and build a 4 x 4 convolution mix-and-match patch which contains the required four IR signals (In the Dryer, Studio 201, In the Shower, and Crunching Leaves) and allows the user to pair each of them with any of four short excerpts from Psycho (A Boy’s Best Friend, Harm a Fly, Oh God, Mother!, and Scream).

The patch with these amendments looks like this:

This playlist documents each original IR signal, each original Psycho excerpt, and all potential matches between an IR signal and Psycho excerpt.
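
Offline, any one pairing from the 4 x 4 grid could be rendered with a straightforward FFT convolution; this Python sketch uses hypothetical filenames and assumes mono WAV files sharing one sample rate.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

def to_mono(x):
    """Downmix to mono so the 1-D convolution below is valid."""
    x = x.astype(float)
    return x.mean(axis=1) if x.ndim == 2 else x

def convolve_pair(excerpt_path, ir_path, out_path):
    """Convolve one Psycho excerpt with one IR signal, as the 4 x 4 patch
    does for whichever pairing the user selects."""
    sr, dry = wavfile.read(excerpt_path)
    _, ir = wavfile.read(ir_path)        # assumed to share the excerpt's rate
    wet = fftconvolve(to_mono(dry), to_mono(ir))
    wet /= np.abs(wet).max()             # normalize to avoid clipping
    wavfile.write(out_path, sr, (wet * 32767).astype(np.int16))

# Hypothetical filenames for one of the sixteen possible pairings:
convolve_pair("harm_a_fly.wav", "in_the_shower.wav", "harm_a_fly_shower.wav")
```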

Just for fun, and because I’ve been toying around with the jit.gl tutorial patches, I created an animated audio-responsive “3D speaker” to accompany and visualize each convolution as it plays. Here’s a short video of how that appears.

And here’s the gist for the whole shebang.

Assignment 2 – Adam J. Thompson
https://courses.ideate.cmu.edu/18-090/f2017/2017/09/18/assignment-2-adam-j-thompson/
Mon, 18 Sep 2017

I’ve been exploring two bodies of work outside of class: digital interpretations of the novels of Virginia Woolf, and interactive experiences/environments triggered by a user’s brainwaves using a hacked Muse headset.

For this assignment, I attempted to bring them together by creating a patch that live-remixes the music video for Max Richter’s composition Mrs. Dalloway in the Garden by mapping theta brainwaves to the jit.matrixset’s frame buffer. Theta waves are indicative of mind-wandering, so the experience of remixing the video is meant to reflect the journey of Mrs. Dalloway herself, who spends the majority of Woolf’s book with a mind that meanders back and forth between the past and the present.

The Muse headset sends data via a terminal script, which transforms it into OSC messages that Max reads. (The patch features a subpatcher for testing without the Muse headset.)

As mind-wandering, and thus theta wave activity, increases, the number of frames of delay increases, leading to more aggressive movement across time. The theta values are also inversely mapped to a jit.brcosa object, so as the user becomes more present and concentrates on the immediate environment, mind-wandering decreases and the video fades closer and closer to black. The video only fully appears when the user’s mind is wandering freely.
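
A minimal Python sketch of that OSC-to-parameter mapping appears below; the OSC address, port, theta range, and delay ceiling are all assumptions about what the terminal script sends.

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

MAX_DELAY_FRAMES = 60  # assumed ceiling for the jit.matrixset frame buffer

def on_theta(address, theta):
    """Map theta power (assumed 0.0-1.0) to frame delay and brightness."""
    delay_frames = int(theta * MAX_DELAY_FRAMES)  # more wandering, more delay
    brightness = theta  # inverse map: a present, focused mind fades to black
    print(f"delay {delay_frames} frames, brightness {brightness:.2f}")

dispatcher = Dispatcher()
# The address below is an assumption about the script's output format.
dispatcher.map("/muse/elements/theta_absolute", on_theta)
BlockingOSCUDPServer(("127.0.0.1", 5000), dispatcher).serve_forever()
```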

Here’s a video documenting a full cycle of the patch in action, driven by me wearing the Muse.

And here’s the Gist.

Assignment 1 – Adam J. Thompson
https://courses.ideate.cmu.edu/18-090/f2017/2017/09/05/assignment-1-adam-j-thompson/
Tue, 05 Sep 2017

This is the opening sequence of Alfred Hitchcock’s film Rear Window. The original digitized (via YouTube) film clip was played on my computer screen using MPEG Streamclip and simultaneously screen-recorded using QuickTime. The newly recorded version was then played via MPEG Streamclip and again screen-recorded using QuickTime, and so on.

As the signal decayed, the video slowed, stuttered, became higher in contrast, and eventually grew darker and darker, creating the interesting illusion that the scene takes place later in the day with each subsequent rendering. After 18 iterations, the film became almost completely black, with only what appear to be small fiery flares visible in the darkness.

The grid below is a composite of all 18 passes playing simultaneously – best viewed in full screen!
