agkrishn@andrew.cmu.edu – 18-090 Twisted Signals
https://courses.ideate.cmu.edu/18-090/f2017

Air-DJ, A Final Project by Anish Krishnan
https://courses.ideate.cmu.edu/18-090/f2017/2017/12/04/air-dj-a-final-project-by-anish-krishnan/
Mon, 04 Dec 2017 05:42:24 +0000

As a pretty heavy music listener, I have always wondered whether I could mix a few songs together and create a mashup of my own. After eagerly surfing the web for an app that would let me do just that, I quickly realized that a mouse and keyboard are not the right interface for working with music. This is exactly why DJs use expensive instruments with knobs and dials: they let you achieve the effect you are going for quickly. For my final project, I built an Air-DJ application in Max that lets you process your music in a variety of ways with your hands, never touching the mouse or keyboard. Using a Leap Motion sensor, I mapped a number of different gestures to different aspects of a song.

After selecting a song to play, you can use your left hand to add beats. There are three different beat types, triggered by moving your hand forward, backward, or to the left. Raising and lowering your hand changes the volume/gain of the beat.

Your right hand controls the main track. Again, raising and lowering it controls the volume/gain of the song. With a pinch of your fingers, you can decrease the cut-off frequency of a low-pass filter. I also implemented a phase multiplier controlled by moving your right hand toward and away from the screen (on the z-axis). Finally, moving your right hand sideways increases the time of a built-in delay.
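At their core, these gesture mappings are just range-scalings from sensor coordinates to effect parameters. Here is a minimal Python sketch of the idea; the ranges and the inverse pinch-to-cutoff mapping are illustrative assumptions, not the exact values in my patch:

```python
def hand_to_gain(y_mm, y_min=100.0, y_max=400.0):
    """Map Leap Motion palm height (in mm, hypothetical range) to a
    0..1 gain, clamped at the ends of the range."""
    t = (y_mm - y_min) / (y_max - y_min)
    return max(0.0, min(1.0, t))

def pinch_to_cutoff(pinch, f_min=200.0, f_max=8000.0):
    """Map pinch strength (0 = open hand, 1 = full pinch) to a
    low-pass cutoff in Hz: more pinch -> lower cutoff."""
    return f_max - pinch * (f_max - f_min)
```

The same pattern (scale, then clamp) covers the delay-time and phase-multiplier gestures as well, just with different input axes and output ranges.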

Here are a few screenshots of the patch:


And here is the video of the whole thing!

Original song:

https://drive.google.com/open?id=1Z7nWcNn6fCZ3dw5nnWZ5tU52breicnIr

Air-DJ’d Song:

https://drive.google.com/open?id=1KseRhpuURgx3AZ6PN6Z14abrB5dDtoBS


All the important files are below:

Google Drive link containing all files: https://drive.google.com/open?id=1FmMiDLyB4gIbOK6bx0KgIbESSKyNBcA1

Github Gist: https://gist.github.com/anonymous/4570d6ae97e13fe29337a57a97fb81e5

Project 1: Enhance It! – Anish Krishnan
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/29/project-1-enhance-it-anish-krishnan/
Sun, 29 Oct 2017 23:32:09 +0000

As I make a lot of videos and short films in my free time, anything related to video processing excites me, so I really wanted to learn how to use the computer vision objects built into Max. For this project I used the cv.jit.faces object to alter a face in a movie, either blurring it or placing a virtual spotlight on it. First, I downscale the image to 1/5th of its original size, convert it to greyscale, and run it through cv.jit.faces. I then use the output matrix to determine the position of the face, and accordingly either place a blurred image with an alpha layer that I made on top of the face or add a spotlight. I hope you like my project!
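For the blurring step itself, here is a rough NumPy analogue of what the patch does once a face rectangle is known. The detection is left out; assume `x, y, w, h` come from the cv.jit.faces output matrix and the frame is greyscale:

```python
import numpy as np

def blur_region(img, x, y, w, h, k=9):
    """Box-blur the (x, y, w, h) face rectangle in-place with a
    k x k kernel, leaving the rest of the frame untouched."""
    face = img[y:y+h, x:x+w].astype(float)
    pad = k // 2
    padded = np.pad(face, ((pad, pad), (pad, pad)), mode='edge')
    out = np.zeros_like(face)
    # Average the k*k shifted copies of the region (a box filter).
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy+face.shape[0], dx:dx+face.shape[1]]
    img[y:y+h, x:x+w] = (out / (k * k)).astype(img.dtype)
    return img
```

The Max patch composites a pre-blurred alpha-layer image instead of filtering live, but the visual effect on the face region is the same idea.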

Original Image:

Blurred Face:

Enhanced Face/Spotlight:


Google Drive Link to Code AND Necessary Media:

https://drive.google.com/drive/folders/0B_T97VaALHA0U1Z6bVllU0MzWEE?usp=sharing

 

The code:

The helper patch “process”:

Assignment 4 – Anish Krishnan
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/15/assignment-4-anish-krishnan/
Mon, 16 Oct 2017 02:42:07 +0000

For this assignment, I used the pfft~ Fourier transform object to cut out certain frequencies in an audio file, controlled by a slider. I combined the output audio with a modified version of the sound visualizer that we developed in class. As you move the slider up and down, you will notice a change in the quality of the audio, which is also reflected in the characteristics of the moving shapes in the visualizer.
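The frequency-cutting idea can be sketched offline in NumPy. This is a one-shot, whole-signal FFT rather than pfft~'s windowed real-time analysis, but the gating step (zeroing bins above the slider's cutoff) is the same:

```python
import numpy as np

def spectral_gate(signal, sr, cutoff_hz):
    """Zero all FFT bins above cutoff_hz and resynthesize the signal."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    spec[freqs > cutoff_hz] = 0.0          # the "gate": kill high bins
    return np.fft.irfft(spec, n=len(signal))
```

In the patch, the slider value plays the role of `cutoff_hz`, and the gating happens per analysis window inside pfft~.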

Input Audio:

Output Audio:

 

Main Patch:

Frequency Gate Patch:

Sound Visualization Patch:

Proposal 1 – Anish Krishnan
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/01/proposal-1-anish-krishnan/
Mon, 02 Oct 2017 02:38:43 +0000

The cv.jit.faces computer vision object lets me track faces and find their positions. I want to apply the lessons we have learned over the past few weeks to alter a face, or the pixels surrounding it, in a live camera feed or video. Some of the cool things this could do:
– Blur all faces in any video (for online publishing)
– Enhance the lighting around a face and darken the background
– Use convolution and time-shifting to make the frames of the face lag while the rest of the video progresses normally

Assignment 3 – Anish Krishnan
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/01/assignment-3/
Sun, 01 Oct 2017 21:21:09 +0000

For this assignment, I transformed an audio recording of my friend singing “Sorry” by Justin Bieber by convolving it with four different impulse responses (IRs). The first IR was a balloon popping in the CFA building, and the second was a balloon popping in Porter Hall. For the third IR, I recorded a speaker playing the sound of a balloon popping inside a closed room. The fourth IR was a recording of footsteps on hardwood.
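Convolution reverb of this kind amounts to multiplying the spectra of the dry signal and the IR. A minimal NumPy sketch (FFT-based and peak-normalized; the Max patch itself may differ in the details):

```python
import numpy as np

def convolve_ir(dry, ir):
    """Full linear convolution of a dry signal with an impulse
    response, done via the FFT, normalized to peak +/-1."""
    n = len(dry) + len(ir) - 1                     # full-convolution length
    wet = np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet
```

Each of the four recordings below is the dry vocal run through this process with a different IR, so the room (or the footsteps) becomes the reverb.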

I have attached the respective audio recordings and the gist for my project below.

Original Audio Track (Friend singing):

 

IR1 and Convolved Signal (Balloon popping in CFA):

 

IR2 and Convolved Signal (Balloon popping in Porter Hall):

 

IR3 and Convolved Signal (Speaker playing balloon pop sound):

 

IR4 and Convolved Signal (Footsteps on hardwood):

 

Code:

Assignment 2 – Anish Krishnan
https://courses.ideate.cmu.edu/18-090/f2017/2017/09/17/assignment-2/
Sun, 17 Sep 2017 18:31:46 +0000

In my program, I time-shifted the individual color layers (RGB) of the camera feed to create a delayed, vibrant effect. I also implemented code to randomly change the dimensions of the camera feed every 0.5 seconds, and used feedback to create an echo-like audio effect.
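The channel time-shift can be sketched as a small ring buffer of frames. The delay lengths and the choice of which channels lag are illustrative, not the exact values from my patch:

```python
from collections import deque
import numpy as np

def make_rgb_delay(delay=5):
    """Return a per-frame processor: red stays live, green lags by
    `delay` frames, blue lags by 2*delay frames."""
    history = deque(maxlen=2 * delay + 1)   # oldest entry is 2*delay back
    def process(frame):
        history.append(frame.copy())
        out = frame.copy()
        g = history[max(0, len(history) - 1 - delay)]   # `delay` frames ago
        out[..., 1] = g[..., 1]
        out[..., 2] = history[0][..., 2]                # oldest stored frame
        return out
    return process
```

Feeding the live camera frames through `process` one at a time gives moving objects the trailing color fringes described above.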

Assignment 1 – Anish Krishnan
https://courses.ideate.cmu.edu/18-090/f2017/2017/09/04/assignment-1-anish-krishnan/
Mon, 04 Sep 2017 21:02:24 +0000

The system I chose to work with was YouTube. I clicked on the #1 video in the trending section, which was “Taylor Swift – …Ready For It? (Audio).” I then clicked the 5th video in the up-next sidebar. I repeated the process 25 times, skipping any videos I had already chosen. The 25th video was “Teen Titans Go Transforms Baby Raven Starfire Growing Up Surprise Egg and Toy Collector SETC,” which had no relation to Taylor Swift, or even to music for that matter. Through this feedback process, I destroyed any trace of the original video.

Links to first and last videos:

https://www.youtube.com/watch?v=T62maKYX9tU

https://www.youtube.com/watch?v=I4HLidGdVmY
