Proposal 1 – Willow Hong

I want to capture motion data using a camera or Kinect, and translate that data into audio signals using Max.

More specifically, I’m interested in using different audio patterns to represent the qualities of people’s movement, that is, how people move between two points in time. For example, dancers need to move their bodies from one gesture to another between two beats. The two ends of this movement are fixed by the choreography, but how the dancers travel from one end to the other can vary: the movement can be smooth or jerky, accelerated or decelerated, soft or hard…

Since the differences between movement qualities might be too subtle for the eye to catch, I want to see whether I can analyze the speed, or the changes in speed, of the body parts and map them to different notes/melodies to help people better understand movement qualities. I want to make this project a real-time piece.
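As a very rough, hypothetical sketch of the kind of mapping I have in mind (the tracked point, speed range, and note range below are all placeholders, not decisions I’ve made yet):

```python
# Hypothetical sketch: map the speed of one tracked point to a MIDI-style note.
# Positions would come from the camera/Kinect tracker; everything here
# (speed range, note range) is a placeholder, not a design decision.
import math

def speed_to_midi(prev_pos, cur_pos, dt, lo_speed=0.0, hi_speed=2.0,
                  lo_note=48, hi_note=84):
    """Scale speed (units/second) into a MIDI note number within a chosen range."""
    dx = cur_pos[0] - prev_pos[0]
    dy = cur_pos[1] - prev_pos[1]
    speed = math.hypot(dx, dy) / dt
    t = min(max((speed - lo_speed) / (hi_speed - lo_speed), 0.0), 1.0)
    return round(lo_note + t * (hi_note - lo_note))

# A faster move between two frames (1/30 s apart) maps to a higher note.
print(speed_to_midi((0.0, 0.0), (0.05, 0.02), 1 / 30))
```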

Assignment 3 – Adam J. Thompson

I’m an unabashed Alfred Hitchcock fanboy, and with October just around the corner, I’ve been in a Psycho mood. For this project, I decided to expand a bit on the requirements and make a 4 x 4 convolution mix-and-match patch, which contains the four required IR signals (In the Dryer, Studio 201, In the Shower, and Crunching Leaves) and allows the user to pair each of them with any of four short excerpts from Psycho (A Boy’s Best Friend, Harm a Fly, Oh God, Mother!, and Scream).

The patch with these amendments looks like this:

This playlist documents each original IR signal, each original Psycho excerpt, and all potential matches between an IR signal and Psycho excerpt.
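Conceptually, each pairing is just the excerpt convolved with the IR. A rough offline sketch of that step (in Python rather than Max, with placeholder file names standing in for the actual recordings) might look like this:

```python
# Rough offline sketch of one excerpt/IR pairing (done outside Max);
# the file names are placeholders standing in for the actual recordings.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("harm_a_fly.wav")          # one Psycho excerpt (placeholder)
ir, sr_ir = sf.read("in_the_shower.wav")     # one IR recording (placeholder)
assert sr == sr_ir, "resample first if the sample rates differ"

if dry.ndim > 1:
    dry = dry.mean(axis=1)                   # mono for simplicity
if ir.ndim > 1:
    ir = ir.mean(axis=1)

wet = fftconvolve(dry, ir, mode="full")      # convolve the excerpt with the IR
wet /= np.max(np.abs(wet))                   # normalize to avoid clipping
sf.write("harm_a_fly_in_the_shower.wav", wet, sr)
```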

Just for fun, and because I’ve been toying around with the jit.gl tutorial patches, I created an animated audio-responsive “3D speaker” to accompany and visualize each convolution as it plays. Here’s a short video of how that appears.

And here’s the gist for the whole shebang.

Proposal #1 – Alex Reed

As an interaction designer, interfacing the virtual world with the physical is super interesting to me. For this first project I would like to see how Max interacts with sensors and microcontrollers like Arduino.

I’ll be using physical inputs, like motion, light, buttons, potentiometers, etc., to “dj” a short piece. Kind of like an unnecessarily complicated MIDI board interface.
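One possible route for getting those sensor readings into Max is over serial and OSC. This is only a sketch under assumptions (pyserial and python-osc installed; the serial port, OSC port, and address are placeholders, not a committed design):

```python
# Hedged sketch: read the Arduino over serial and forward values to Max as OSC.
# Assumes pyserial and python-osc; port names and the OSC address are placeholders.
import serial
from pythonosc.udp_client import SimpleUDPClient

port = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
client = SimpleUDPClient("127.0.0.1", 7400)    # Max would listen with [udpreceive 7400]

while True:
    line = port.readline().decode(errors="ignore").strip()
    if line.isdigit():                         # e.g. a 0-1023 potentiometer reading
        client.send_message("/sensor/pot", int(line))
```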

Assignment 3 – Willow Hong

For this project I narrated a horror story. Below are the steps I took:

  1. Wrote a short horror story and recorded it in CFA’s sound recording studio;
  2. Recorded balloon-popping sounds in the Scott Hall elevator (IR1) and the CFA Atrium (IR3);
  3. Downloaded a garden ambience sound (IR2) and a scary background sound (IR4) online;
  4. Edited the original voice and the IRs in Audacity and convolved them in Max (IR1=bedroom, IR2=garden, IR3=basement, IR4=horror movie sound effect);
  5. Added an audio time-shifting + feedback effect to heighten the scary atmosphere a bit more (a rough sketch of this kind of effect follows the list);
  6. Exported the final audio file from Max using the sfrecord~ object.
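For reference, the time-shift + feedback effect from step 5 could be sketched outside Max roughly like this (the delay time, feedback amount, and file names are illustrative only):

```python
# Minimal sketch of a time-shift + feedback effect like the one in step 5,
# assuming numpy and soundfile; delay time, feedback amount, and file names
# are illustrative values only.
import numpy as np
import soundfile as sf

audio, sr = sf.read("narration_convolved.wav")     # placeholder file name
if audio.ndim > 1:
    audio = audio.mean(axis=1)                     # mono for simplicity

delay = int(0.35 * sr)        # 350 ms delay line
feedback = 0.45               # how much of the delayed signal feeds back
out = np.copy(audio)
for n in range(delay, len(out)):
    out[n] += feedback * out[n - delay]            # each echo feeds back again

out /= np.max(np.abs(out))                         # normalize before writing
sf.write("narration_spooky.wav", out, sr)
```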

Assignment 2 — Jonathan Cavell

Video Synth

Below is a patch which utilizes color information from a video to generate synth sounds through a set of delay channels.

The patch starts by sending the RGB matrix from a video source to a series of jit.findbounds objects, which locate color values within each plane of the matrix (this includes the alpha plane, but it is not used here, since I am specifically interested in capturing the red, green, and blue values).
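In numpy terms, what jit.findbounds does for each plane is roughly this: find the bounding box of cells whose values fall inside a given range (the frame and thresholds below are made up for illustration):

```python
# Conceptual sketch of per-plane bounds finding; not the Max object itself.
import numpy as np

frame = np.random.randint(0, 256, (240, 320, 4), dtype=np.uint8)   # fake ARGB frame

def find_bounds(plane, lo=200, hi=255):
    """Return (x_min, y_min, x_max, y_max) of cells within [lo, hi], or None."""
    ys, xs = np.where((plane >= lo) & (plane <= hi))
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

red_bounds = find_bounds(frame[:, :, 1])     # plane 1 is red in ARGB ordering
green_bounds = find_bounds(frame[:, :, 2])   # plane 2 is green
blue_bounds = find_bounds(frame[:, :, 3])    # plane 3 is blue
```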

There is a series of poly synthesizers, one for each set of bounds the jit.findbounds objects send out, with one channel for the x value and one for the y value. The synthesizers treat these numbers as MIDI data and turn them into frequencies.

The values that are actually turned into frequencies are scaled to a less jarring range of possible pitches using a scale object, and the transitions between frequencies are smoothed using the line~ object.
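A minimal sketch of that scale-and-smooth step outside Max (the input range, pitch range, and ramp length are illustrative assumptions, not values from the patch):

```python
# Map a bound coordinate into a narrower pitch range (like [scale]) and ramp
# toward the new frequency (roughly what line~ does).
import numpy as np

def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linear remap, like Max's [scale] object."""
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def mtof(m):
    """MIDI note number to frequency in Hz."""
    return 440.0 * 2 ** ((m - 69) / 12)

x_value = 250                                  # e.g. an x coordinate in 0-320
midi_note = scale(x_value, 0, 320, 48, 72)     # confine to a gentler pitch range
target_hz = mtof(midi_note)

current_hz = 220.0
ramp = np.linspace(current_hz, target_hz, num=441)   # ~10 ms ramp at 44.1 kHz
```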

Finally, the frequencies are sent into a delay patch that uses two different controls. One is a static delay time for each channel, set in the tapout~ object. The other is an adjustable integer value that adds extra delay on top of those preset amounts.
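Roughly, the two delay controls behave like this sketch of a multi-tap delay (the tap times and extra offset are illustrative values, not the ones in the patch):

```python
# Fixed per-channel tap times (as in tapout~) plus one adjustable offset
# added to every tap.
import numpy as np

sr = 44100
static_taps_ms = [125, 250, 375]     # one preset delay per channel
extra_ms = 60                        # the changeable integer control

def delayed_copies(signal, taps_ms, extra_ms, sr):
    """Sum delayed copies of `signal`, one per tap, each shifted by tap + extra."""
    longest = int((max(taps_ms) + extra_ms) / 1000 * sr)
    out = np.zeros(len(signal) + longest)
    for tap in taps_ms:
        offset = int((tap + extra_ms) / 1000 * sr)
        out[offset:offset + len(signal)] += signal
    return out

tone = np.sin(2 * np.pi * 330 * np.arange(sr) / sr)   # 1-second test tone
wet = delayed_copies(tone, static_taps_ms, extra_ms, sr)
```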

Since I wanted to use a clear video feed to create an additional feedback effect (an extra layer of delay driven by a randomly generated number value), I added a control to adjust the saturation levels. This is helpful because, depending on what is in frame, the change between values can be too subtle to produce a recognizable variation in pitch; manipulating the saturation gets around this issue. This control can also provide sweeping effects by changing the set of values sent to the synths.
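The saturation control amounts to something like this per-pixel sketch (the boost amount is an arbitrary example):

```python
# Boosting saturation makes color differences (and thus the pitches derived
# from them) more pronounced. Operates on a single pixel for clarity.
import colorsys

def boost_saturation(r, g, b, amount=1.5):
    """r, g, b in 0-1; return the same hue with increased saturation."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(h, min(s * amount, 1.0), v)

print(boost_saturation(0.6, 0.5, 0.45))   # a muted tone becomes more distinct
```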

The final product provides a crude sonic impression of the colors moving within the digital image.

 

Assignment 2 – Bri Hudock

For this assignment, I wanted to be able to control the color of the time-shifted frames. I used jit.scalebias and jit.gradient to convert my delayed jitter matrix to black and white and then replace those black-and-white values with two colors on a gradient. The two colors can be chosen by the viewer using the color-picker GUI.
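Conceptually, the scalebias/gradient step works something like this numpy sketch (the two colors and the test frame are placeholders for whatever the viewer picks):

```python
# Collapse a frame to grayscale, then map the gray values onto a gradient
# between two chosen colors.
import numpy as np

frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)   # fake RGB frame
color_a = np.array([20, 0, 60], dtype=float)       # dark end of the gradient
color_b = np.array([255, 200, 80], dtype=float)    # bright end of the gradient

gray = frame.mean(axis=2, keepdims=True) / 255.0   # rough luminance, 0-1
tinted = (color_a * (1 - gray) + color_b * gray).astype(np.uint8)
```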

assignment2

I also played with the time-shifting patch we made in class and replaced the delay variable with a random number from 0 to 400.  This had a similar effect to the glitch patch we received after last Wednesday’s class.  Instead of including a random frame from the past couple of seconds, however, it superimposed a random frame from the last 400 frames onto the current frame.  I liked the contrast between how smooth the regularly-colored current video was versus how spazzy and terrifying the colored, delayed glitch frames were.  The effect made it look like I had demons inside me that were clawing to get out.  As for the validity of that perception, I provide no comment.
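The random time shift boils down to something like this sketch: keep a buffer of recent frames and blend in a randomly chosen one (the 50/50 mixing weights here are an assumption, not the patch’s exact behavior):

```python
# Keep a ring buffer of recent frames and superimpose a randomly chosen one
# (up to 400 frames back) on the live frame.
import random
from collections import deque

import numpy as np

history = deque(maxlen=400)

def glitch(frame):
    history.append(frame)
    past = random.choice(history)                  # random frame from the buffer
    return (0.5 * frame + 0.5 * past).astype(frame.dtype)

# Example with synthetic frames standing in for the camera feed:
for _ in range(10):
    live = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
    out = glitch(live)
```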

assignment 2 – random

github normal timeshifting: https://gist.github.com/anonymous/6bbd03bd0424ff4fab5939ac9f42e873

github glitch timeshifting: https://gist.github.com/anonymous/378d66d7059e8ba8934092f7192e79fb

 

Also, shout out to jit.charmap-twotone.maxpatch in the Cycling ’74 Jitter examples for help on how to use jit.scalebias and jit.gradient.

Assignment 2 – Matthew Xie

An audio-processing patch that lets the user control the time-shifted delay effect on both the high end and the low end of an audio track through filters. The high and low ends are separated so that different amounts of time shifting can be applied, and they are then played back together through the main audio channel.
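A rough sketch of the idea outside Max, assuming scipy/soundfile and with an arbitrary crossover frequency, delay times, and file names:

```python
# Split a track into low and high bands, delay each band by a different
# amount, then sum them back together.
import numpy as np
import soundfile as sf
from scipy.signal import butter, lfilter

audio, sr = sf.read("track.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)                     # mono for simplicity

b_lo, a_lo = butter(4, 300 / (sr / 2), btype="low")
b_hi, a_hi = butter(4, 300 / (sr / 2), btype="high")
low = lfilter(b_lo, a_lo, audio)
high = lfilter(b_hi, a_hi, audio)

def shift(x, ms):
    """Delay a signal by prepending silence."""
    return np.concatenate([np.zeros(int(ms / 1000 * sr)), x])

low_d, high_d = shift(low, 150), shift(high, 400)  # different shift per band
n = max(len(low_d), len(high_d))
mix = np.pad(low_d, (0, n - len(low_d))) + np.pad(high_d, (0, n - len(high_d)))
sf.write("track_shifted.wav", mix / np.max(np.abs(mix)), sr)
```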

 

Assignment 2 – Tanushree Mediratta

For this assignment, I used the concepts discussed in class and also added some aspects of my own. The basic idea was to split the four planes of a matrix (ARGB) into individual layers using jit.unpack, and then manipulate their values (which range from 0 to 255) by performing different mathematical operations on them using jit.op. Once these modified ARGB layers were packed back together and a time delay was added through feedback, the result was a daze-like effect. The gist is given below.
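Conceptually (this is not the gist itself), the plane-wise manipulation plus feedback might look something like this numpy sketch; the specific operations and the feedback amount are illustrative choices, not values from the patch:

```python
# Unpack ARGB planes, apply a different arithmetic operation to each, repack,
# and blend in the previous output as feedback.
import numpy as np

prev_out = None

def process(frame, feedback=0.6):
    """frame: uint8 array of shape (height, width, 4) in ARGB order."""
    global prev_out
    a, r, g, b = [frame[:, :, i].astype(int) for i in range(4)]   # unpack planes
    r = np.clip(r + 80, 0, 255)          # e.g. add a constant to red
    g = (g * 2) % 256                    # wrap green around
    b = 255 - b                          # invert blue
    out = np.stack([a, r, g, b], axis=2).astype(np.uint8)         # pack planes
    if prev_out is not None:             # time delay via feedback blend
        out = (feedback * prev_out + (1 - feedback) * out).astype(np.uint8)
    prev_out = out
    return out
```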