Final Project – Isha Iyer

For my final project I decided to explore more ways of using the Leap Motion sensor to control different elements of drawing. I made a game in which the coordinates of a hand are tracked and translated into both a rotation and a resizing of a square, with the goal of matching a target square. When the squares match closely enough, the game moves on to a new target. I have attached a demo of me playing this.
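The game itself runs as a Max patch, but the core mapping is simple enough to sketch outside it. Here is a minimal Python sketch of the idea, with made-up coordinate ranges and tolerances (the real patch's values differ): palm x maps to rotation, palm y maps to size, and a match within tolerance spawns a new target.

```python
import random

# Hypothetical ranges: Leap Motion palm position in millimeters.
HAND_X_RANGE = (-200.0, 200.0)   # maps to rotation
HAND_Y_RANGE = (100.0, 400.0)    # maps to size

def scale(value, src, dst):
    """Linearly map value from the src range into the dst range."""
    lo, hi = src
    t = (value - lo) / (hi - lo)
    return dst[0] + t * (dst[1] - dst[0])

def square_from_hand(x, y):
    """Map palm x to rotation (degrees) and palm y to side length (pixels)."""
    rotation = scale(x, HAND_X_RANGE, (0.0, 90.0))
    size = scale(y, HAND_Y_RANGE, (20.0, 200.0))
    return rotation, size

def is_match(rotation, size, target_rotation, target_size,
             rot_tol=5.0, size_tol=10.0):
    """The squares 'match' when both parameters are within tolerance."""
    return (abs(rotation - target_rotation) <= rot_tol
            and abs(size - target_size) <= size_tol)

def new_target():
    """Spawn the next target square once the current one is matched."""
    return random.uniform(0.0, 90.0), random.uniform(20.0, 200.0)
```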

I was also very interested in learning more about different uses for the machine learning patch. I trained it on Leap Motion data to detect three different hand gestures: a palm facing down, a fist, and a “C” for camera. As shown in the demo below, when I make a “C” with my hand, I am able to take pictures with the camera. I can then use my left hand to distort the image that was taken. This distortion was influenced by this tutorial.
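The classification itself was done by the machine learning patch; purely to illustrate how a trained model can separate three poses like these, here is a toy nearest-centroid classifier in Python. The features and training values are invented for the example.

```python
import math

# Hypothetical training data: (palm_normal_y, grab_strength) per gesture.
# palm_normal_y ~ -1 when the palm faces down; grab_strength ~ 1 for a fist.
TRAINING = {
    "palm_down": [(-0.95, 0.10), (-0.90, 0.20), (-0.98, 0.05)],
    "fist":      [(-0.30, 0.95), (-0.20, 0.90), (-0.40, 1.00)],
    "c_shape":   [(-0.10, 0.50), (0.00, 0.45), (-0.20, 0.55)],
}

def centroid(points):
    """Average each feature over a gesture's training examples."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(features):
    """Return the gesture whose training centroid is closest."""
    return min(CENTROIDS, key=lambda lbl: math.dist(features, CENTROIDS[lbl]))

# A "C" gesture triggers the camera, mirroring the patch's behavior.
if classify((-0.15, 0.5)) == "c_shape":
    print("take picture")
```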

Here is a link to all the final files I used for this project, including the machine learning data and model for convenience. I have also included all the gists below.

Draw Game:

ML patch:

Distortion Patch:

Project 1 – Isha Iyer

For Project 1, I used a Kinect to control the motion of a particle system with my hand. I am very interested in different applications of motion tracking, and this was a good introduction to how the Kinect works. Here is a download link to a video showing my project in real time: IMG_1479

I used this tutorial to help me create particles from an image that I could then control with input from the Kinect. To read the Kinect, I used output from the dp.kinect2 object, which took me a while to set up initially. I wanted the system to use the Kinect’s real-time camera feed as the source image, but that did not work quite as I wanted, so I stuck to using one preset image.
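The patch does all of this in Max/Jitter, but the particle behavior can be approximated in a few lines. In this Python sketch (all constants are illustrative, and random points stand in for the image pixels the real patch samples), the hand repels nearby particles each frame while a spring pulls them back to their home positions.

```python
import random

# Stand-in for pixel sampling: in the real patch the particle home
# positions come from an image; here we just scatter them on a 640x480 grid.
HOMES = [(random.uniform(0, 640), random.uniform(0, 480)) for _ in range(500)]
positions = [list(p) for p in HOMES]
velocities = [[0.0, 0.0] for _ in HOMES]

def step(hand_x, hand_y, repel_radius=80.0, repel_force=4.0,
         spring=0.02, damping=0.9):
    """One simulation step: repel particles near the hand, spring them home."""
    for (hx, hy), pos, vel in zip(HOMES, positions, velocities):
        dx, dy = pos[0] - hand_x, pos[1] - hand_y
        dist = (dx * dx + dy * dy) ** 0.5
        if 0 < dist < repel_radius:            # push away from the hand
            vel[0] += repel_force * dx / dist
            vel[1] += repel_force * dy / dist
        vel[0] += spring * (hx - pos[0])       # pull back toward home pixel
        vel[1] += spring * (hy - pos[1])
        vel[0] *= damping                      # damp so motion settles
        vel[1] *= damping
        pos[0] += vel[0]
        pos[1] += vel[1]
```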

Here is the gist for my code:


Project Proposal 1 – Isha Iyer

I have two ideas for projects that I am interested in.

The first involves analyzing the frequencies and rhythms from an audio file to create special effects with the lights on the ceiling in the Media Room that correspond with the tempo and other elements of the audio.

After seeing the video in class last week of a saxophone sound being constructed, I also became interested in learning how to use Fourier transforms to reconstruct the sounds of different musical instruments. I am not sure exactly how this would work, so the first idea may be more doable for Project 1.
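One simple version of that idea would be additive resynthesis: take the FFT of a recorded note, keep only the strongest partials, and invert the transform. This is only my sketch of a possible starting point, not the method from the class video; the toy signal below stands in for a real instrument recording.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def resynthesize(signal, n_partials=12):
    """Rebuild a quasi-periodic tone from its strongest FFT partials."""
    spectrum = np.fft.rfft(signal)
    # Keep only the n strongest bins; zero out the rest.
    strongest = np.argsort(np.abs(spectrum))[-n_partials:]
    pruned = np.zeros_like(spectrum)
    pruned[strongest] = spectrum[strongest]
    return np.fft.irfft(pruned, n=len(signal))

# Toy example: a tone made of a few harmonics of 220 Hz plus noise.
t = np.arange(SR) / SR
tone = sum(a * np.sin(2 * np.pi * 220 * k * t)
           for k, a in enumerate([1.0, 0.5, 0.3, 0.2], start=1))
noisy = tone + 0.1 * np.random.randn(len(t))
rebuilt = resynthesize(noisy)  # close to the clean tone
```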

Assignment 3 – Isha Iyer

I used a recording of Prokofiev’s Romeo and Juliet Suite by the London Symphony Orchestra as my original signal. The two impulse response (IR) “pop” recordings I used were taken outside CFA and in the CFA hallway. There did not seem to be much difference between the two resulting signals. The main difference was volume: the signal convolved with the pop taken outside CFA was much quieter than the one convolved with the CFA hallway pop. This does not seem to come through in this post for some reason.

Original Signal:

CFA exterior followed by convolved original:

CFA hallway followed by convolved signal:

The other two recordings I took were of wind while standing outside and water being poured out of a water bottle into the sink. I then experimented with recordings of fire and glass that I found online.

Fire:

Glass:

Water:

Wind:

Convolving with glass, water, and wind added interesting effects to the original piece. Convolving with fire turned the original signal into a cacophony.

Here is the modified version of 00 Convolution-Reverb I used for this assignment. I kept all my impulse responses and the original signal in a folder named “impulses”.
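The patch does the convolution inside Max, but the underlying operation is just convolving the dry signal with the impulse response. As a rough offline equivalent (file names are placeholders, and I am assuming the soundfile and scipy libraries are available):

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Placeholder file names; the real files live in the "impulses" folder.
dry, sr = sf.read("impulses/original.wav")
ir, _ = sf.read("impulses/cfa_hallway_pop.wav")

# Mix down to mono so the two arrays have the same shape.
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

wet = fftconvolve(dry, ir)        # convolution reverb
wet /= np.max(np.abs(wet))        # normalize to avoid clipping
sf.write("convolved.wav", wet, sr)
```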


Assignment 2 – Isha Iyer

I used the concept of time shifting, delaying a sine wave, to demonstrate noise cancellation. By setting the number of samples of delay to half the wave’s period, the shifted wave is exactly out of phase with the original, so the two sum to zero. My code and a demonstration of the patch are shown below.
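As a numerical illustration of the same cancellation (my own sketch, separate from the Max patch): pick a frequency whose half period is a whole number of samples, delay by that amount, and add the two signals.

```python
import numpy as np

SR = 44100      # sample rate
FREQ = 441.0    # chosen so the half period is a whole number of samples

t = np.arange(SR) / SR
wave = np.sin(2 * np.pi * FREQ * t)

# Half of one period, in samples: (SR / FREQ) / 2 = 50 samples here.
delay = int(round(SR / FREQ / 2))
delayed = np.roll(wave, delay)   # circular shift stands in for a delay line

residual = wave + delayed
print(np.max(np.abs(residual)))  # ~0: the shifted wave cancels the original
```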


Assignment 1 – Isha Iyer

I used two phones to distort an image I took a couple of years ago. Using one phone, I took a picture of the image displayed on the second phone, then AirDropped the new photo to the second phone to display it, and repeated the process. After 50 iterations the image became mostly one color.
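The degradation here is physical (camera, screen, lens), so it cannot be reproduced exactly in software, but a crude digital analogy is to re-encode an image lossily 50 times. A sketch using Pillow, with a placeholder file name:

```python
from io import BytesIO
from PIL import Image

img = Image.open("original.jpg").convert("RGB")  # placeholder file name

for _ in range(50):
    # Lose detail by downsampling and upsampling, loosely mimicking
    # re-photographing a screen, then recompress as a low-quality JPEG.
    small = img.resize((img.width // 2, img.height // 2))
    img = small.resize((img.width, img.height))
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=30)
    buf.seek(0)
    img = Image.open(buf).convert("RGB")

img.save("generation_50.jpg")
```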