
Air-DJ, A Final Project by Anish Krishnan

As a fairly heavy music listener, I have always wondered whether I could mix a few songs together and create a mashup of my own. After eagerly searching the web for an app that would let me do just that, I quickly realized that a mouse and keyboard are not the right interface for working with music. This is exactly why DJs use expensive instruments with knobs and dials: they can quickly achieve the effect they are going for. For my final project, I built an Air-DJ application in Max that lets you manipulate your music in a variety of ways using only your hands, never touching the mouse or keyboard. Using a Leap Motion sensor, I mapped different gestures to different aspects of a song.

After selecting a song to play, you can use your left hand to add beats. Three different beats are available, triggered by moving your hand forward, backward, or to the left. Raising and lowering your hand changes the volume/gain of the beat.

Your right hand controls the main track. Again, raising and lowering it controls the volume/gain of the song. Pinching your fingers lowers the cutoff frequency of a low-pass filter. I also implemented a phase multiplier controlled by moving your right hand toward and away from the screen (along the z-axis). Finally, moving your right hand sideways increases the delay time of a built-in delay effect.
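The right-hand mapping above can be sketched roughly in Python. Everything here is a hypothetical stand-in: the function name, the assumption that coordinates are normalized to [0, 1], and the parameter ranges are illustrative only, since the actual Max patch reads raw Leap Motion data and scales it internally.

```python
# Hypothetical gesture-to-parameter mapping; ranges are illustrative,
# not the values used in the actual patch.

def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def hand_to_params(y, z, x_offset, pinch):
    """Map right-hand features (assumed normalized to 0..1) to audio parameters.

    y        -> gain (higher hand = louder)
    pinch    -> low-pass cutoff (tighter pinch = lower cutoff)
    z        -> phase multiplier (hand closer to screen = higher multiple)
    x_offset -> delay time (further sideways = longer delay)
    """
    gain = clamp(y)                                     # 0..1 linear gain
    cutoff_hz = 200.0 + (1.0 - clamp(pinch)) * 19800.0  # 200 Hz .. 20 kHz
    phase_mult = 1 + int(clamp(z) * 7)                  # integer multiple 1..8
    delay_ms = clamp(x_offset) * 500.0                  # 0..500 ms
    return gain, cutoff_hz, phase_mult, delay_ms
```

Each gesture axis is independent, which mirrors how the patch lets you adjust one effect without disturbing the others.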

Here are a few screenshots of the patch:






And here is the video of the whole thing!

Original song:

Air-DJ’d Song:

All the important files are below:

Google Drive link containing all files:

Github Gist:

Project 1: Enhance It! – Anish Krishnan

As I make a lot of videos and short films in my free time, anything related to video processing excites me, so I really wanted to learn how to use the computer vision objects built into Max. For this project I used the cv.jit.faces object to alter a face in a movie by either blurring it or placing a virtual spotlight on it. First, I downscale the image to 1/5th of its original size, convert it to greyscale, and run it through the cv.jit.faces object. I use the output matrix to determine the position of the face, then either composite a blurred image (with an alpha layer that I made) on top of the face or add a spotlight. I hope you like my project!
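For readers without Max, the compositing step can be approximated with NumPy. This sketch assumes the detector has already reported a face box at 1/5 scale (the scale the greyscale detection pass runs at) and box-blurs the corresponding full-resolution region; the function name and blur size are hypothetical.

```python
import numpy as np

DOWNSCALE = 5  # detection runs on a 1/5-scale greyscale copy

def blur_face(frame, face_box, ksize=5):
    """Box-blur the face region of a frame in place.

    frame:    H x W x 3 uint8 array (full-resolution colour frame)
    face_box: (x, y, w, h) as reported at 1/5 scale by the detector
    """
    x, y, w, h = (v * DOWNSCALE for v in face_box)
    region = frame[y:y + h, x:x + w].astype(np.float32)
    pad = ksize // 2
    padded = np.pad(region, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    # average every ksize x ksize neighbourhood by summing shifted copies
    out = np.zeros_like(region)
    for dy in range(ksize):
        for dx in range(ksize):
            out += padded[dy:dy + h, dx:dx + w, :]
    frame[y:y + h, x:x + w] = (out / (ksize * ksize)).astype(np.uint8)
    return frame
```

The key detail, matching the patch, is that detection coordinates come back at the reduced resolution and must be multiplied back up before touching the full-size frame.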

Original Image:

Blurred Face:

Enhanced Face/Spotlight:

Google Drive Link to Code AND Necessary Media:


The code:

The helper patch “process”:

Assignment 4 – Anish Krishnan

For this assignment, I used the pfft~ Fourier transform object to cut out certain frequencies in an audio file, controlled by a slider. I combined the resulting audio with a modified version of the sound visualizer that we developed in class. As you move the slider up and down, you will notice a change in the quality of the audio, which is also reflected in the behavior of the moving shapes in the visualizer.
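Conceptually, the frequency gate works like the single-frame Python sketch below. In Max, pfft~ performs this per overlapping windowed frame, but the bin-zeroing idea is the same; the function name and the slider-as-fraction-of-Nyquist convention are my own stand-ins, not the patch's actual scaling.

```python
import numpy as np

# Offline sketch of a frequency gate: bins above a slider-controlled
# cutoff are zeroed before resynthesis. One frame stands in for the
# frame-by-frame processing that pfft~ does in real time.

def frequency_gate(frame, slider):
    """Zero all FFT bins above `slider` (a 0..1 fraction of Nyquist)."""
    spectrum = np.fft.rfft(frame)
    cutoff_bin = int(slider * len(spectrum))
    spectrum[cutoff_bin:] = 0.0
    return np.fft.irfft(spectrum, n=len(frame))
```

With the slider all the way up nothing is removed; as it comes down, progressively more of the high end disappears, which is the dulling you hear (and see in the visualizer).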

Input Audio:

Output Audio:


Main Patch:

Frequency Gate Patch:

Sound Visualization Patch:

Proposal 1 – Anish Krishnan

The cv.jit.faces computer vision object lets me track faces and find their positions. I want to apply the lessons we have learned over the past few weeks to alter the face, or the pixels surrounding it, in a live camera feed or video. This would enable many cool effects, such as:
– Blurring all faces in any video (for online publishing)
– Enhancing the lighting around a face while darkening the background
– Using convolution and time-shifting to make the frames of the face lag while the rest of the video progresses normally

Assignment 3 – Anish Krishnan

For this assignment, I transformed an audio recording of my friend singing “Sorry” by Justin Bieber by convolving it with four different impulse responses (IRs). The first IR was a balloon popping in the CFA building, and the second was a balloon popping in Porter Hall. For the third IR, I recorded a speaker playing the sound of a balloon popping inside a closed room. The fourth IR was a recording of footsteps on hardwood.
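The convolution itself can be reproduced offline in a few lines of NumPy. This is a generic FFT-based convolution sketch, not the exact patch I used; the function name is hypothetical, and the peak normalization at the end is just there to avoid clipping when writing the result out.

```python
import numpy as np

# Convolution reverb sketch: every sample of the dry signal is replaced
# by a scaled copy of the room's impulse response. FFT-based convolution
# keeps this fast even for IRs that are several seconds long.

def convolve_ir(dry, ir):
    n = len(dry) + len(ir) - 1  # length of the full linear convolution
    wet = np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet  # normalize to avoid clipping
```

A balloon pop approximates an ideal impulse, which is why the first three IRs capture each room's reverberant character so directly; the footsteps IR is a deliberately "wrong" impulse that smears the voice in a more rhythmic way.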

I have attached the respective audio recordings and the gist for my project below.

Original Audio Track (Friend singing):


IR1 and Convolved Signal (Balloon popping in CFA):


IR2 and Convolved Signal (Balloon popping in Porter Hall):


IR3 and Convolved Signal (Speaker playing balloon pop sound):


IR4 and Convolved Signal (Footsteps on hardwood):



Assignment 1 – Anish Krishnan

The system I chose to work with was YouTube. I clicked on the #1 video in the trending section, which was “Taylor Swift – …Ready For It? (Audio).” I then clicked the 5th video in the up-next sidebar. I repeated this process 25 times, skipping any video I had already chosen. The 25th video was “Teen Titans Go Transforms Baby Raven Starfire Growing Up Surprise Egg and Toy Collector SETC,” which had no relation to Taylor Swift, or even to music for that matter. Through this feedback process, every trace of the original video was destroyed.

Links to first and last videos: