
Assignment 2 — Jonathan Cavell

Video Synth

Below is a patch that uses color information from a video to generate synth sounds through a set of delay channels.

The patch starts by sending the RGB matrix from a video source to a series of jit.findbounds objects, which locate color values within each plane of the matrix (this includes the alpha plane, but it is unused here, since I am specifically interested in capturing the red, green, and blue values).

There is one poly synthesizer for each set of bounds the jit.findbounds objects send, with one channel for the x value and one for the y value. The synthesizers “see” these numbers as MIDI data and turn them into frequencies.

The values that are actually turned into frequencies are scaled to a less jarring range of possible pitches using a scale object, and the transitions between frequencies are smoothed using the line~ object.
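The scale-and-smooth step can be sketched in Python. This is a rough stand-in for the Max objects, not the actual patch; the 0–319 input range, the two-octave MIDI range, and the ramp length are illustrative assumptions.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear range mapping, like Max's [scale] object."""
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def mtof(midi_note):
    """Standard MIDI-note-to-frequency conversion (A440 tuning)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def line_ramp(start, end, steps):
    """Crude stand-in for [line~]: a linear ramp between two values."""
    return [start + (end - start) * i / (steps - 1) for i in range(steps)]

# An x coordinate from jit.findbounds (assuming a 320-pixel-wide frame)
x = 160
note = scale(x, 0, 319, 48, 72)   # constrain to a gentler two-octave range
freq = mtof(round(note))          # the synth's MIDI-to-frequency step
```

In the patch itself the mapping happens at message rate and line~ does the smoothing at signal rate; the ramp above only illustrates the idea.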

Finally, the frequencies are sent into a delay patch with two controls: a static delay time for each channel, set in the tapout~ object, and an adjustable integer value that adds extra delay to those preset amounts.
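A tap with a fixed delay plus an adjustable offset can be sketched in pure Python. This is only an illustration of the structure; tapin~/tapout~ operate on audio signals with delay times in milliseconds, and the sample counts here are made up.

```python
from collections import deque

def delay_line(samples, base_delay, extra_delay=0):
    """Fixed tap time plus an adjustable offset, like tapout~ with an
    added integer delay. Delays are in samples for simplicity."""
    total = base_delay + extra_delay
    buf = deque([0.0] * total, maxlen=total)  # ring buffer of silence
    out = []
    for s in samples:
        out.append(buf[0])  # oldest sample falls out of the tap
        buf.append(s)
    return out
```

Increasing `extra_delay` pushes the tap further back, just as the changeable integer adds to the preset tapout~ times.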

Since I wanted to use a clear video to create an additional feedback effect (an extra layer of delay driven by a randomly generated number), I added a control to adjust the saturation level. This helps because, depending on what is in frame, the variation between color values can be too subtle to produce a recognizable change in pitch; manipulating the saturation gets around this. The same control can also produce sweeping effects by shifting the whole set of values sent to the synths.
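The idea behind the saturation control can be sketched with the standard HSV manipulation, using Python's colorsys as a stand-in for whatever object the patch actually uses. The input color and boost factor are illustrative.

```python
import colorsys

def boost_saturation(r, g, b, factor):
    """Scale a pixel's HSV saturation, clamped to 1.0.
    Pushing saturation up spreads the RGB values apart, which makes
    the per-plane bounds (and thus the pitches) vary more visibly."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    s = min(1.0, s * factor)
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)
```

A washed-out reddish pixel becomes pure red when its saturation is doubled, widening the spread between the R, G, and B planes.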

The final product provides a crude sonic impression of the colors moving within the digital image.


Assignment 2 – Tanushree Mediratta

For this assignment, I used the concepts discussed in class and added some aspects of my own. The basic idea was to split the four planes of the matrix (ARGB) into individual layers using jit.unpack, manipulate their values (which range from 0 to 255) by performing different mathematical operations on them with jit.op, and then pack the changed ARGB layers back together. With a time delay added through feedback, this produced a daze-like effect. The gist is given below.
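The unpack / per-plane operation / repack chain can be sketched in pure Python, with tiny tuples standing in for jit matrices. The specific operations below are illustrative, not the ones in the patch.

```python
def unpack_argb(pixels):
    """Split a list of (a, r, g, b) tuples into four plane lists,
    like jit.unpack splitting a 4-plane matrix."""
    return [list(plane) for plane in zip(*pixels)]

def op_plane(plane, fn):
    """Apply a jit.op-style scalar operation, clamped to 0-255."""
    return [max(0, min(255, fn(v))) for v in plane]

def pack_argb(planes):
    """Recombine the four planes, like jit.pack."""
    return list(zip(*planes))

pixels = [(255, 10, 20, 30), (255, 200, 100, 50)]  # two sample pixels
a, r, g, b = unpack_argb(pixels)
r = op_plane(r, lambda v: v + 100)   # e.g. brighten the red plane
g = op_plane(g, lambda v: 255 - v)   # e.g. invert the green plane
out = pack_argb([a, r, g, b])
```

The feedback delay that produces the daze-like effect is not reproduced here; it would feed `out` back in as the next frame's input after a delay.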



Assignment 2 – Will Walters

For this assignment, my first attempt was to filter video feedback through a convolution matrix which could be altered by the user, allowing for variable effects, such as edge detection, blurring, and embossing, to be fed back on themselves. However, using common kernels for this system with the jit.convolve object yielded transformations too subtle to be fed back without being lost in noise. (The system I built for doing this is still in this patch, off to the right.)

My second attempt was to abandon user-defined transforms and instead utilize Max’s built-in implementation of the Sobel edge detection kernel to create the transform. However, applying the convolution to the feedback itself meant the edge detection was run on its own output, causing values in the video to explode. I solved this by applying the edge detection to the input instead, then adding the camera footage back in before the final output. (It looks maybe cooler without the original image added, depending on the light, so I included both outputs in the final patch.)
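The working approach can be sketched in pure Python: run the Sobel kernels on the input frame and add the original image back before output. This is a toy stand-in for the jit objects; the frame size and values are illustrative.

```python
# Standard Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve3x3(img, kernel):
    """Convolve a grayscale image (nested lists, 0-255) with a 3x3 kernel,
    leaving a one-pixel zero border."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

def sobel_magnitude(img):
    """Gradient magnitude from the two Sobel responses, clamped to 255."""
    gx, gy = convolve3x3(img, SOBEL_X), convolve3x3(img, SOBEL_Y)
    return [[min(255, int((gx[y][x] ** 2 + gy[y][x] ** 2) ** 0.5))
             for x in range(len(img[0]))] for y in range(len(img))]

def edges_plus_original(img):
    """Edge detection on the *input*, with the camera image added back,
    rather than convolving the feedback (which blows up)."""
    edges = sobel_magnitude(img)
    return [[min(255, edges[y][x] + img[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]

# A vertical edge: left half dark, right half bright.
frame = [[0, 0, 255, 255] for _ in range(4)]
```

Running Sobel on its own output squares up gradient values every pass, which is why the feedback version explodes; detecting edges on the fresh input each frame keeps the values bounded.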

Assignment 2 – Sarika Bajaj

For my time-shifting assignment, I decided to make a “horror movie” webcam filter that takes in a webcam image and 1) plays normal video until it time-shifts 700 frames back for about 100 frames (to give the unsettling video playback effect that some horror movies have), and 2) turns the RGB values from the webcam into pure luminance values, then filters the luminance to create a grainy, distorted image.
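The RGB-to-luminance step can be sketched with the standard Rec. 601 luma weights. This is an assumption about what the patch computes, and the “grain” below is only a toy quantizer, not the actual distortion filter.

```python
def rgb_to_luma(r, g, b):
    """Standard Rec. 601 luma: perceptual weights for R, G, B (0-255)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def grainy(luma, step=32):
    """Toy stand-in for the distortion: quantize luminance into
    coarse bands, which reads as a posterized, grainy image."""
    return (int(luma) // step) * step
```

In the patch, the same conversion collapses the three color planes into one before the filtering is applied.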

Gist of code: gist:6f11dc358eabb3a5d7dc3f2dba39f493

Assignment 1 – Neeraj Sharma

This is a simple example of feeding output back to input in software we use very often: file zipping.

Ever seen a file’s size increase when you zip it? In the video below, I have tried to give an example. Interestingly, if we zip an already zipped file, something unusual happens, and the algorithm’s erratic behavior continues as we keep going around this loop of “zipping a zip”.

Assignment 1 – Amanda Sari Perez

In the above video, I demonstrate the gradual degradation of the speech-to-text feature on my Android phone. Starting with the original text from our assignment, I used Convert Text to Speech V2 on my laptop to convert the text to a sound file. I played the sound from my laptop speakers and recorded it directly into speech-to-text on my phone via Facebook Messenger (just for speedy transfer back to my laptop).

I fed this text back into the Convert Text to Speech V2 software and repeated the loop for 20 iterations. I had the phone on my lap for most of the trials; around version 14, I shifted positions and leaned back in my chair. That change of distance from the laptop speakers introduced more errors into the speech-to-text, and maintaining that distance for the rest of the trials led to weirder misinterpretations. At the end of the video you can see how drastically the text had changed from the original passage.