Assignment 2 – Jonathan Cavell

Video Synth

Below is a patch which utilizes color information from a video to generate synth sounds through a set of delay channels.

The patch starts by sending the RGB matrix from a video source to a series of jit.findbounds objects, which locate color values within each plane of the matrix (the alpha plane is included but unused here, since I am only interested in the red, green, and blue values).

There is a series of poly synthesizers, one for each set of bounds the jit.findbounds objects send, with one channel for the x value and one for the y value. The synthesizers treat these numbers as MIDI data and turn them into frequencies.

The values that are actually turned into frequencies are scaled to a less jarring range of possible pitches using a scale object, and the transition between frequencies is smoothed using the line~ object.
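For readers who want to see that mapping in code, here is a minimal Python sketch of the scale-then-MIDI-to-frequency stage; the 320x240 matrix size and the 48-72 MIDI range are illustrative assumptions, not values from the original patch.

import numpy as np

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Same idea as Max's scale object: linear remap between two ranges."""
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def mtof(midi_note):
    """MIDI note number to frequency in Hz."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

# Say jit.findbounds reports a bounds corner for the red plane at x=212, y=87
# on a 320x240 matrix; each axis feeds one channel of its synth.
x, y = 212, 87
freq_x = mtof(scale(x, 0, 320, 48, 72))
freq_y = mtof(scale(y, 0, 240, 48, 72))

# line~-style smoothing: ramp from the previous frequency over 50 ms
prev_freq, sr, ramp_ms = 330.0, 44100, 50
ramp = np.linspace(prev_freq, freq_x, int(sr * ramp_ms / 1000))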

Finally, the frequencies are sent into a delay patch with two controls. One is the static delay time set for each channel in the tapout~ object; the other is a changeable integer value that adds extra delay on top of those preset amounts.
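A rough model of that two-part delay control, with made-up tap times standing in for the patch's actual tapout~ settings:

import numpy as np

def delay_channels(signal, sr=44100, base_ms=(125, 250, 375), extra_ms=0):
    """tapin~/tapout~-style delays: each channel keeps its preset delay,
    and a single adjustable offset (extra_ms) is added on top of all of
    them. The base_ms values are placeholders, not the patch's tap times."""
    outs = []
    for ms in base_ms:
        d = int(sr * (ms + extra_ms) / 1000)
        outs.append(np.concatenate([np.zeros(d), signal])[:len(signal)])
    return np.stack(outs)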

Since I wanted to use a clear video to create an additional feedback effect (an extra layer of delay using a randomly generated number value), I added a control to adjust the saturation level. This is helpful because, depending on what is in frame, the change between values can be too subtle to produce a recognizable variation in pitch; manipulating the saturation gets around this issue. The same control can also produce sweeping effects by changing the set of values sent to the synths.
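One way to picture that saturation control (the patch does this inside Jitter; the HSV round-trip below is only a stand-in for whichever object the patch actually uses):

import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def boost_saturation(frame_rgb, amount=2.0):
    """Scale saturation so a washed-out frame spreads its color values
    enough for the findbounds stage to report clearly different positions.
    frame_rgb is a float array in [0, 1] with shape (height, width, 3)."""
    hsv = rgb_to_hsv(frame_rgb)
    hsv[..., 1] = np.clip(hsv[..., 1] * amount, 0.0, 1.0)
    return hsv_to_rgb(hsv)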

The final product provides a crude sonic impression of the colors moving within the digital image.

 

Assignment 2 – Bri Hudock

For this assignment, I wanted to be able to control the color of the time-shifted frames. I used jit.scalebias and jit.gradient to convert my delayed Jitter matrix to black and white and then replace those black-and-white values with two colors on a gradient. The two colors can be chosen by the viewer using the color-picker GUI.
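A small Python sketch of the same two-stage idea, with assumed luminance weights and example colors standing in for the jit.scalebias and jit.gradient settings in the actual patch:

import numpy as np

def two_tone(frame_rgb, color_a, color_b):
    """Reduce the delayed frame to a single black-and-white value, then use
    that value to blend between two viewer-chosen colors."""
    lum = frame_rgb @ np.array([0.299, 0.587, 0.114])   # to grayscale
    lum = lum[..., None]                                 # shape (h, w, 1)
    return (1.0 - lum) * np.asarray(color_a) + lum * np.asarray(color_b)

# e.g. dark areas become deep blue, bright areas become pink
out = two_tone(np.random.rand(240, 320, 3), (0.0, 0.0, 0.4), (1.0, 0.6, 0.8))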


I also played with the time-shifting patch we made in class and replaced the delay variable with a random number from 0 to 400. This had a similar effect to the glitch patch we received after last Wednesday's class. Instead of including a random frame from the past couple of seconds, however, it superimposed a random frame from the last 400 frames onto the current frame. I liked the contrast between how smooth the regularly colored current video was and how spazzy and terrifying the colored, delayed glitch frames were. The effect made it look like I had demons inside me clawing to get out. As for the validity of that perception, I provide no comment.
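The frame-buffer trick is easy to model outside Max; here is a sketch with a 400-frame history and an assumed 50/50 mix standing in for however the patch actually composites the two frames:

import random
from collections import deque

import numpy as np

history = deque(maxlen=400)      # ring buffer of the last 400 frames

def glitch(frame):
    """Superimpose a randomly chosen past frame (anywhere from 0 to 400
    frames back) on the current one."""
    history.append(frame)
    past = history[random.randrange(len(history))]
    return np.clip(0.5 * frame + 0.5 * past, 0.0, 1.0)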


github normal timeshifting: https://gist.github.com/anonymous/6bbd03bd0424ff4fab5939ac9f42e873

github glitch timeshifting: https://gist.github.com/anonymous/378d66d7059e8ba8934092f7192e79fb

 

Also, shout out to jit.charmap-twotone.maxpatch in the Cycling '74 Jitter examples for help on how to use jit.scalebias and jit.gradient.

Assignment 2 – Kun Peng

In this assignment, I used time-shifting techniques to simulate multiple people reading the same text.

The first component of the patch is a phasor, which can generate a pitch change when used in combination with a delay window (tapin~ and tapout~). I think the output of this approach sounds less robotic than that of gizmo~ or freqshift~.
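For anyone curious how a phasor plus a delay window produces a pitch change, here is an offline Python sketch of the idea; the window length and transposition are illustrative, not values from the patch.

import numpy as np

def phasor_pitch_shift(x, sr=44100, window_ms=100.0, transpose=-3.0):
    """Model of the phasor~ + tapin~/tapout~ trick: a sawtooth sweeps the
    delay time across a short window, so the read head moves at a different
    rate than the write head and the pitch shifts."""
    window = int(sr * window_ms / 1000.0)
    ratio = 2.0 ** (transpose / 12.0)
    f = (1.0 - ratio) * sr / window          # phasor rate for this ratio
    n = np.arange(len(x))
    phase = (f * n / sr) % 1.0               # 0..1 ramp, like phasor~
    delay = phase * window                   # swept delay in samples
    read = np.clip(n - delay, 0, len(x) - 1)
    lo = np.floor(read).astype(int)
    frac = read - lo
    hi = np.minimum(lo + 1, len(x) - 1)
    return (1 - frac) * x[lo] + frac * x[hi] # linear interpolation

A real patch normally runs two taps and crossfades them to hide the click each time the phasor wraps; the single-tap version above just keeps the math visible.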

The second component adds a small randomized time shift to each track/simulated voice so that the "multiple people's" voices are not completely synchronized with each other. I also want the time-shift value to vary over time, since if one track is always x milliseconds behind another, it is pretty easy for the audience to notice the hard-coded delays. To work around this, I feed the volume of each track back to itself: when there is a silence or a sentence break in the audio, the patch generates a new time-shift value.
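A compact sketch of that volume-feedback idea, with an assumed RMS threshold and maximum shift, since the writeup does not give the patch's own values:

import numpy as np

def retrigger_delay(block, state, sr=44100, threshold=0.01, max_shift_ms=120):
    """When one voice's block falls below the silence threshold (a sentence
    break), pick a fresh random time shift for that voice, so no track stays
    a fixed x milliseconds behind the others."""
    state.setdefault("delay_samples", 0)
    rms = np.sqrt(np.mean(block ** 2))
    if rms < threshold:
        state["delay_samples"] = np.random.randint(0, int(sr * max_shift_ms / 1000))
    return state["delay_samples"]

# one state dict per simulated reader
voice_state = {"delay_samples": 0}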

The attached audio is a sample from an audiobook. The same technique can also be applied on the fly.

Assignment 2 – Tanushree Mediratta

For this assignment, I used the concepts discussed in class and added aspects of my own. The basic idea behind my assignment was to split the four planes of a matrix (ARGB) into individual layers using jit.unpack, and then manipulate their values (which range from 0 to 255) by performing different mathematical operations on them with jit.op. Once these changed ARGB layers were packed back together and a time delay was added through feedback, the patch produced a daze-like effect. The gist is given below.
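As a rough Python model of that unpack/op/pack/feedback chain (the specific operations and feedback amount below are placeholders, not the ones in the patch):

import numpy as np

def process(frame_argb, previous_out, feedback=0.6):
    """Sketch of the jit.unpack -> jit.op -> jit.pack -> feedback chain.
    frame_argb is a uint8 array of shape (h, w, 4) with planes in
    A, R, G, B order; previous_out is the last output, floats in [0, 1]."""
    a, r, g, b = [frame_argb[..., i].astype(np.int32) for i in range(4)]
    r = (r + 64) % 256        # placeholder jit.op-style operations; the
    g = 255 - g               # actual operators in the patch may differ
    b = (b * 2) % 256
    packed = np.stack([a, r, g, b], axis=-1).astype(np.float32) / 255.0
    # the time delay through feedback: blend with the previous output
    return (1 - feedback) * packed + feedback * previous_out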

 

 

Assignment 2 – Adam J. Thompson

I’ve been exploring two bodies of work outside of class: digital interpretations of the novels of Virginia Woolf and interactive experiences/environments triggered by a user’s brainwaves using a hacked Muse headset.

For this assignment, I attempted to bring them together by creating a patch that live remixes the music video for Max Richter’s composition Mrs. Dalloway in the Garden through the mapping of theta brainwaves to the jit.matrixset’s frame buffer. Theta waves are indicative of mind-wandering, so the experience of remixing the video is meant to reflect the journey of Mrs. Dalloway herself who spends the majority of Woolf’s book with a mind that meanders back and forth between the past and the present.

The Muse headset sends data via a terminal script, which transforms the data into OSC messages that are read by Max. (The patch includes a test subpatcher for testing without the Muse headset.)

As mind-wandering, and thus theta wave activity, increases, the number of frames of delay increases, leading to more aggressive movement across time. The waves are also inversely mapped to a jit.brcosa object, so as the user becomes more present and concentrates on the immediate environment, mind-wandering/theta activity decreases and the video fades closer and closer to black. The video only fully appears when the user's mind is wandering freely.
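A simple sketch of the two mappings; the theta range and maximum frame delay are assumptions, since the writeup does not give the actual numbers:

def map_theta(theta, theta_min=0.0, theta_max=1.0, max_delay_frames=60):
    """Map an incoming theta reading (an OSC float) to the two controls:
    more mind-wandering means more frames of jit.matrixset delay and a
    brighter picture; as theta drops, the brightness falls toward black."""
    t = min(max((theta - theta_min) / (theta_max - theta_min), 0.0), 1.0)
    delay_frames = int(t * max_delay_frames)   # frame-buffer read offset
    brightness = t                             # jit.brcosa-style brightness scaler
    return delay_frames, brightness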

Here’s a documentary video of a full cycle of the patch in action as influenced by me wearing the Muse.

And here’s the Gist.

Assignment 2 – Taylor Tabb

For assignment two, I used a lot of what we practiced in class on Wednesday, along with jit.chromakey and jit.alphablend. jit.alphablend might reasonably be used where one video feed contains a static black or white area, since another feed can be superimposed just over that area. jit.chromakey can reasonably be used to put a frame from a source image over a video. In this case, though, I used the seminal classic "Keyboard Cat," along with iSight input, to create a moderately horrifying result. I delayed the Keyboard Cat video much as we did in class, but I shifted which color was alpha-blended with the iSight feed, then sent the video back through jit.chromakey with the original video. The result is a funky-looking Keyboard Cat whose appearance is a function of the current state of the iSight imagery and its state when the delay was initiated.
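For reference, a bare-bones stand-in for the chromakey step (the color-distance test and tolerance are assumptions; jit.chromakey exposes more parameters than this):

import numpy as np

def chromakey(foreground, background, key_rgb, tol=0.15):
    """Pixels of the foreground that sit within tol of the key color are
    replaced by the background pixel. Arrays are float RGB in [0, 1]
    with matching shapes."""
    dist = np.linalg.norm(foreground - np.asarray(key_rgb), axis=-1)
    mask = (dist < tol)[..., None]
    return np.where(mask, background, foreground)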

 

https://gist.github.com/taylortabb/85da84f5ce01b877b3a96e898aabb568

Assignment 2 – Will Walters

For this assignment, my first attempt was to filter video feedback through a convolution matrix that could be altered by the user, allowing variable effects such as edge detection, blurring, and embossing to be fed back on themselves. However, using common kernels for this with the jit.convolve object yielded transformations too subtle to be fed back without being lost in noise. (The system I built for this is still in the patch, off to the right.)

My second attempt was to abandon user-defined transforms and instead use Max's built-in implementation of the Sobel edge-detection kernel. However, applying the convolution to the feedback itself meant the edge detection was run on its own output, causing values in the video to explode. I solved this by applying the edge detection to the input instead, and then adding the camera footage back in before the final output. (It arguably looks cooler without the original image added, depending on the light, so I included both outputs in the final patch.)
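A quick Python sketch of the working approach, using standard Sobel kernels as a stand-in for Max's built-in implementation:

import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def edges_plus_camera(gray_frame, mix=1.0):
    """Run Sobel edge detection on the input frame rather than on the
    feedback (which is what made values blow up), then add the original
    camera image back in before output. Set mix=0 for the edges-only look."""
    gx = convolve(gray_frame, SOBEL_X)
    gy = convolve(gray_frame, SOBEL_Y)
    edges = np.hypot(gx, gy)
    edges = edges / (edges.max() + 1e-8)     # normalize to [0, 1]
    return np.clip(edges + mix * gray_frame, 0.0, 1.0)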