
Proposal 1 – Anish Krishnan

The cv.jit.faces computer vision object allows me to track faces and find their positions. I want to apply the lessons we have learned over the past few weeks to alter the face, or the pixels surrounding it, in a live camera feed or video, which would let me do things such as:
– Blur all faces in any video for online publishing (see the sketch after this list)
– Enhance lighting around face and darken background
– Use convolution and time-shifting to make the frames of the face lag, while the rest of the video progresses normally
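The actual project would be built in Max around cv.jit.faces, but as a rough illustration of the first idea, here is a minimal Python/OpenCV sketch of blurring every detected face in a live camera feed. It is not the project code; the Haar cascade file and parameter values are just standard defaults.

```python
import cv2

# Stand-in for cv.jit.faces: OpenCV's bundled Haar cascade face detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # live camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Blur only the pixels inside each detected face rectangle.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    cv2.imshow("blurred faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```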

Project 1 Proposal – Sarika Bajaj

For my first project, I would like to use the Kinect, Max, and a projector to create a generative art piece that enables some sort of live interaction. Preferably, I would like to create a piece similar to the one depicted in this picture:

Depending on whether particle interaction, shape manipulation, etc. proves easier to get up and running, I might pivot the project in one of those directions. At a minimum, however, I would like to create a real-time interactive piece with the Kinect and Max.

Assignment 3 – Anish Krishnan

For this assignment, I transformed an audio signal of my friend singing “Sorry” by Justin Bieber by convolving it with four different impulse responses (IRs). The first IR was a balloon popping in the CFA building, and the second was a balloon popping in Porter Hall. For the third IR, I recorded a speaker playing the sound of a balloon popping inside a closed room. The fourth IR was a recording of footsteps on hardwood.
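The convolution itself was done in Max, but the underlying operation is simply convolving the dry recording with each impulse response. A minimal Python sketch of that idea, with placeholder file names rather than the actual recordings:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Placeholder file names; the real recordings are attached below.
rate, dry = wavfile.read("friend_singing.wav")
_, ir = wavfile.read("balloon_pop_cfa.wav")

# Assume mono; take the first channel if the files are stereo.
if dry.ndim > 1:
    dry = dry[:, 0]
if ir.ndim > 1:
    ir = ir[:, 0]

# Convolve the dry signal with the impulse response (FFT-based for speed).
wet = fftconvolve(dry.astype(np.float64), ir.astype(np.float64))

# Normalize to avoid clipping, then write the result.
wet /= np.max(np.abs(wet))
wavfile.write("convolved_cfa.wav", rate, (wet * 32767).astype(np.int16))
```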

I have attached the respective audio recordings and the gist for my project below.

Original Audio Track (Friend singing):

 

IR1 and Convolved Signal (Balloon popping in CFA):

 

IR2 and Convolved Signal (Balloon popping in Porter Hall):

 

IR3 and Convolved Signal (Speaker playing balloon pop sound):

 

IR4 and Convolved Signal (Footsteps on hardwood):

 

Code:

Assignment 3 – Alex Reed

Mostly Screaming

Proposal #1 – Alex Reed

As an interaction designer, I find interfacing the virtual world with the physical super interesting. For this first project I would like to see how Max interacts with sensors and microcontrollers like the Arduino.

I’ll be using physical inputs like motion, light, buttons, potentiometers, etc. to “DJ” a short piece, kind of like an unnecessarily complicated MIDI controller.
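One possible shape for the plumbing, assuming the Arduino simply streams sensor readings over serial: a small Python bridge that turns a potentiometer value into MIDI control changes that Max could pick up. The port name, baud rate, and MIDI output name below are hypothetical and would need to match the actual setup.

```python
import serial
import mido

# Hypothetical serial port and MIDI output; adjust for the real hardware.
arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
midi_out = mido.open_output("IAC Driver Bus 1")

while True:
    line = arduino.readline().decode(errors="ignore").strip()
    # The Arduino sketch is assumed to print one 0-1023 pot reading per line.
    if not line.isdigit():
        continue
    value = int(line)
    cc_value = value * 127 // 1023          # scale 0-1023 down to MIDI's 0-127
    midi_out.send(mido.Message("control_change", control=1, value=cc_value))
```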

Assignment 2 — Jonathan Cavell

Video Synth

Below is a patch which utilizes color information from a video to generate synth sounds through a set of delay channels.

The patch starts by sending the RGB matrix from a video source to a series of jit.findbounds objects, which locate color values within each plane of the matrix (this includes the alpha plane, but it is not utilized here since I am interested in capturing red, green, and blue color values specifically).
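jit.findbounds is a Max object, but the idea it implements, finding the bounding region of cells whose values fall inside a range within one plane, can be sketched in a few lines of NumPy. The threshold values here are assumptions, not the ones used in the patch.

```python
import numpy as np

def find_bounds(plane, lo=200, hi=255):
    """Return (min_x, min_y, max_x, max_y) of cells in [lo, hi], or None."""
    ys, xs = np.where((plane >= lo) & (plane <= hi))
    if xs.size == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()

# frame stands in for a (height, width, 4) ARGB Jitter-style matrix.
frame = np.random.randint(0, 256, (240, 320, 4), dtype=np.uint8)
for name, plane in zip("rgb", (frame[..., 1], frame[..., 2], frame[..., 3])):
    print(name, find_bounds(plane))
```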

There is a series of poly synthesizers, one for each set of bounds the jit.findbounds objects send, with one channel for the x value and one for the y value. The synthesizers “see” these numbers as MIDI data and turn them into frequencies.

The values that are actually turned into frequencies are scaled to a less jarring range of possible pitches using a scale object, and the transitions between frequencies are smoothed using the line~ object.
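In rough terms, each bound value is rescaled into a friendlier MIDI range and then converted to a frequency, with transitions ramped rather than jumped. A back-of-the-envelope Python sketch of those three steps, with made-up range endpoints:

```python
def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linear rescale, in the spirit of Max's scale object."""
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def mtof(midi_note):
    """MIDI note number to frequency in Hz."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# e.g. an x position of 0-320 pixels mapped into a two-octave MIDI range
x = 212
note = scale(x, 0, 320, 48, 72)
freq = mtof(note)

# Crude stand-in for line~'s ramp: move toward the target in small steps.
current = 440.0
for _ in range(100):
    current += (freq - current) * 0.05
```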

Finally, the frequencies are sent into a delay patch that uses two different controls. One is a static delay time for each channel, set in the tapout~ object. The other is an adjustable integer value that adds extra delay on top of those preset amounts.
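Conceptually, the tapin~/tapout~ pair amounts to a delay line read at a fixed offset plus an adjustable one. A toy NumPy version of the same idea, with placeholder delay times in samples:

```python
import numpy as np

def delayed(signal, base_delay, extra_delay=0):
    """Shift a signal by base_delay + extra_delay samples, zero-padding the front."""
    d = base_delay + extra_delay
    return np.concatenate([np.zeros(d), signal])[:len(signal)]

sig = np.random.randn(44100)                                  # one second at 44.1 kHz
channel_a = delayed(sig, base_delay=2205)                     # fixed 50 ms tap
channel_b = delayed(sig, base_delay=4410, extra_delay=2205)   # preset plus extra delay
```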

Since I wanted to use a clear video to create an additional feedback effect (for an extra layer of delay using a randomly generated number value), I added a control to adjust the saturation levels. This is helpful as, depending on what is in frame, the change between values is too subtle to produce a recognizable variation in pitch. By manipulating the saturation, you can get around this issue. Additionally, this control can provide sweeping effects by changing the set of values sent to the synths.

The final product provides a crude sonic impression of the colors moving within the digital image.

 

Assignment 2 – Tanushree Mediratta

For this assignment, I used the concepts discussed in class and added aspects of my own. The basic idea was to split the four planes of a matrix (ARGB) into individual layers using jit.unpack, and then manipulate their values (which range from 0 to 255) by performing different mathematical operations on them using jit.op. Once these altered ARGB layers were packed back together and a time delay was added through feedback, the result was a daze-like effect. The gist is given below.
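Outside of Max, the same pipeline of splitting ARGB planes, doing per-plane arithmetic, repacking, and mixing in the previous frame as feedback might look roughly like this in NumPy. The specific operations and feedback amount are placeholders, not the values from the patch.

```python
import numpy as np

def process(frame, previous, feedback=0.6):
    """frame, previous: (h, w, 4) uint8 ARGB matrices."""
    a, r, g, b = [frame[..., i].astype(np.int32) for i in range(4)]  # like jit.unpack

    # Per-plane math in the spirit of jit.op (placeholder operations).
    r = (r + 64) % 256
    g = 255 - g
    b = (b * 2) % 256

    repacked = np.stack([a, r, g, b], axis=-1).astype(np.float64)    # like jit.pack
    # Feedback: blend with the previous output frame for the delayed, dazed look.
    out = (1 - feedback) * repacked + feedback * previous.astype(np.float64)
    return out.astype(np.uint8)
```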

 

 

Assignment 2 – Will Walters

For this assignment, my first attempt was to filter video feedback through a convolution matrix that could be altered by the user, allowing variable effects such as edge detection, blurring, and embossing to be fed back on themselves. However, using common kernels for this system with the jit.convolve object yielded transformations too subtle to be fed back without being lost in noise. (The system I built for doing this is still in the patch, off to the right.)
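For reference, the kind of user-editable kernel convolution described here can be sketched outside Max like so; the kernels are the standard textbook ones, not necessarily the exact values tried in the patch.

```python
import numpy as np
from scipy.ndimage import convolve

KERNELS = {
    "edge":   np.array([[-1, -1, -1], [-1,  8, -1], [-1, -1, -1]]),
    "blur":   np.ones((3, 3)) / 9.0,
    "emboss": np.array([[-2, -1,  0], [-1,  1,  1], [ 0,  1,  2]]),
}

def apply_kernel(gray_frame, name):
    """Convolve a grayscale frame with one of the user-selectable kernels."""
    return convolve(gray_frame.astype(np.float64), KERNELS[name], mode="nearest")
```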

My second attempt was to abandon user-defined transforms and instead use Max’s built-in implementation of the Sobel edge detection kernel to create the transform. However, applying the convolution to the feedback itself meant the edge detection was run on its own output, causing values in the video to explode. This was solved by applying the edge detection to the input instead, and then adding the camera footage back in before the final output. (It maybe looks cooler without the original image added, depending on the light, so I included both outputs in the final patch.)
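That second approach, running Sobel edge detection on the incoming frame and then adding the original image back before output, could be approximated as follows. OpenCV stands in for Max's built-in Sobel, and the blend weights are guesses rather than values from the patch.

```python
import cv2
import numpy as np

def edges_plus_input(frame):
    """Sobel edge detection on the input frame, optionally with the camera image added back."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(np.sqrt(gx ** 2 + gy ** 2))
    edges_bgr = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
    # Return both the edge-only output and the edges blended with the original footage.
    return edges_bgr, cv2.addWeighted(edges_bgr, 0.7, frame, 0.3, 0)
```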