Author Archives: tmedirat@andrew.cmu.edu

Project 2: Object Generator- Tanushree Mediratta

For my second project, I decided to continue using the Leap Motion device, but this time for visual purposes: I created an object generator. The object's position, size, and color are all manipulated through hand gestures and hand positions. I was also able to incorporate topics we learned in class, such as machine learning and OpenGL, into the project.
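The gesture-to-parameter mapping itself lives in the Max patch, but the basic idea can be sketched in a few lines of Python (a rough illustration only; the coordinate ranges and parameter names below are assumptions, not values taken from the patch):

    # Minimal sketch of mapping hand data to an object's position, size, and color.
    def scale(value, in_lo, in_hi, out_lo, out_hi):
        # Linearly map value from [in_lo, in_hi] to [out_lo, out_hi], clamped.
        t = (value - in_lo) / (in_hi - in_lo)
        t = max(0.0, min(1.0, t))
        return out_lo + t * (out_hi - out_lo)

    def hand_to_object(palm_x, palm_y, palm_z, grab_strength):
        x = scale(palm_x, -200, 200, -1.0, 1.0)           # left/right -> object x
        y = scale(palm_y, 50, 400, -1.0, 1.0)             # height above sensor -> object y
        size = scale(palm_z, -150, 150, 0.1, 1.0)         # depth -> object size
        hue = scale(grab_strength, 0.0, 1.0, 0.0, 360.0)  # open/closed hand -> color
        return {"x": x, "y": y, "size": size, "hue": hue}

    print(hand_to_object(10, 220, -40, 0.7))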

Here is a short demonstration:

 

As in Project 1, I created my main patch from scratch:

I modified the visual subpatch of the Leap Motion help file:

This is a modified version of the machine learning sam starter and training patch:

Project 1- Tanushree Mediratta

I took Project 1 as an opportunity to explore controlling systems with nothing but hand gestures. I used a Leap Motion device to detect my hand gestures and movements, and used them to control different aspects of the granular synthesis of an audio signal. While one hand controlled the pitch rate, grain size, and speed of the synthesis, the motion of the other hand was used to choose the audio file on which the synthesis was performed.
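The synthesis itself happens in the modified sugarSynth sub-patch, but the underlying technique can be sketched in code (a minimal numpy granulator; the grain size, pitch, and speed values here are placeholders for what the hand-tracking data would supply, not numbers from the patch):

    import numpy as np

    def granulate(signal, sr, grain_ms=80, pitch=1.2, speed=0.5, out_sec=4.0):
        # Overlap-add windowed grains read from 'signal', transposed by 'pitch'
        # and moving through the source at 'speed' times real time.
        grain_len = int(sr * grain_ms / 1000)
        hop = grain_len // 2                       # 50% overlap between grains
        out = np.zeros(int(sr * out_sec) + grain_len)
        window = np.hanning(grain_len)
        read_pos = 0.0
        for start in range(0, len(out) - grain_len, hop):
            idx = read_pos + np.arange(grain_len) * pitch
            idx = np.clip(idx, 0, len(signal) - 1).astype(int)
            out[start:start + grain_len] += signal[idx] * window
            read_pos = (read_pos + hop * speed) % (len(signal) - grain_len * pitch)
        return out

    # Example on a test tone; the real patch reads an audio file chosen by the other hand.
    sr = 44100
    tone = np.sin(2 * np.pi * 220 * np.arange(sr * 2) / sr)
    grains = granulate(tone, sr, grain_ms=80, pitch=1.5, speed=0.25)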

Here is a short video:

I created the main patch entirely from scratch. I used a modified sugarSynth patch as a sub-patch, and I also used the Leap Motion patches for collecting and routing data.

This is the gist of my main patch:

This is the modified sugarSynth patch:

I modified the visual subpatch and the fingers subpatch in the Leap Motion patch:

  1. Visual subpatch
  2. Fingers subpatch


Assignment 4- Tanushree Mediratta

This assignment required us to use fft~, an object that separates out the different frequencies a signal is composed of. I used Alvin Lucier’s “I Am Sitting in a Room” as my audio signal and got rid of frequencies above a certain threshold. I then delayed this signal and added feedback, which gave Lucier’s voice a certain villainous tone. I also modified the patch we made in class to create an audio visualizer that shows the audio signal before and after the fft~ manipulation.
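In the patch this all happens inside pfft~, but the same idea can be sketched offline in a few lines (a rough numpy illustration; the cutoff frequency, delay time, and feedback amount here are made-up values rather than the ones in my patch):

    import numpy as np

    def lowpass_fft(signal, sr, cutoff_hz=800):
        # Zero every frequency bin above cutoff_hz (a crude brick-wall low-pass).
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
        spectrum[freqs > cutoff_hz] = 0
        return np.fft.irfft(spectrum, n=len(signal))

    def delay_with_feedback(signal, sr, delay_sec=0.3, feedback=0.6, repeats=5):
        # Approximate a feedback delay by mixing in decaying, delayed copies.
        d = int(sr * delay_sec)
        out = np.copy(signal)
        for i in range(1, repeats + 1):
            if i * d >= len(signal):
                break
            shifted = np.zeros_like(signal)
            shifted[i * d:] = signal[:len(signal) - i * d]
            out += (feedback ** i) * shifted
        return out

    sr = 44100
    voice = np.random.randn(sr * 2)            # stand-in for the Lucier recording
    villainous = delay_with_feedback(lowpass_fft(voice, sr), sr)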

The code for the main pfft~ can be found here:

The code for the sub-patch is here:

Assignment 3- Tanushree Mediratta

I decided to use one of my favorite songs, We No Speak Americano (by Yolanda Be Cool & DCUP), as my original signal. In addition to the convolution, I also periodically varied the amplitude of the convolved signal, which added an interesting “twisted” effect to the resulting signal.
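The processing was done in Max, but the same chain could be sketched in Python roughly as follows (assuming mono WAV files with stand-in names; the 0.5 Hz tremolo rate is a guess, not the value I actually used):

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    # Hypothetical file names standing in for the song and the recorded IR.
    sr, song = wavfile.read("we_no_speak_americano.wav")
    _, ir = wavfile.read("balloon_pop_cfa.wav")
    song = song.astype(np.float64)
    ir = ir.astype(np.float64)
    if song.ndim > 1:
        song = song[:, 0]                      # keep things mono for simplicity
    if ir.ndim > 1:
        ir = ir[:, 0]

    # Convolve the song with the impulse response, then normalize.
    wet = fftconvolve(song, ir)
    wet /= np.max(np.abs(wet))

    # Periodically vary the amplitude of the convolved signal (a slow tremolo).
    t = np.arange(len(wet)) / sr
    twisted = wet * 0.5 * (1 + np.sin(2 * np.pi * 0.5 * t))

    wavfile.write("twisted.wav", sr, (twisted * 32767).astype(np.int16))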

The original version of the song sounds like this:

(you can also hear it on YouTube)

 

The first IR I used was a recording of a balloon popping in the CFA hallway, which sounded like this:

 

After convolving my original signal with the first IR, the resulting music sounded like it was coming from a distance or reverberating in a large room:

 

The second IR was a recording of a balloon popping near the stairwell in Baker Hall:

 

The convolution of We No Speak Americano with IR2 was less muffled, since the ‘pop’ of the balloon was more defined and the sound was contained within a shorter span of time:

 

For my third IR, I recorded myself clapping twice, with the first clap being louder than the second:

 

When I convolved my original signal with the third IR, it produced an echo effect, because I was essentially convolving my signal with two impulses (the claps) occurring at different times:
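That is easy to verify in code: convolving with an IR made of two impulses simply produces two scaled, delayed copies of the input (a tiny numpy check with made-up delay and amplitude values):

    import numpy as np

    x = np.random.randn(1000)             # any input signal

    # An "IR" of two claps: a loud impulse, then a quieter one 300 samples later.
    ir = np.zeros(400)
    ir[0], ir[300] = 1.0, 0.5

    y = np.convolve(x, ir)

    # The result is the signal plus a delayed, quieter copy of itself: an echo.
    expected = np.zeros(len(y))
    expected[:1000] += 1.0 * x
    expected[300:1300] += 0.5 * x
    print(np.allclose(y, expected))        # True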

 

My fourth IR was created by extracting a piano piece from another song in my playlist, a mashup created by Conor Maynard:

 

On convolving this piano piece with my song, I got an abstract sound that was pleasing to the ear:

 

Lastly, for fun, I decided to convolve my original signal with an already convolved signal: as my IR, I used the signal produced by convolving my original song with my claps. The end product was surprisingly good:

 

Below is my Max code:

 

Assignment 2- Tanushree Mediratta

For this assignment, I used the concepts discussed in class and also added some aspects of my own. The basic idea was to split the 4 planes of a matrix (ARGB) into individual layers using jit.unpack, manipulate their values (which range from 0-255) by performing different mathematical operations on them with jit.op, and then pack the changed ARGB layers back together. Adding a time delay through feedback then produced a daze-like effect. The gist is given below.
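Written out in code rather than jit objects, the idea is roughly the following (a numpy sketch; the specific per-plane operations and the blend amount are placeholders for the jit.op math and feedback settings in the actual patch):

    import numpy as np

    def process_frame(frame, previous):
        # frame, previous: uint8 arrays of shape (height, width, 4) in ARGB order.
        a, r, g, b = [frame[:, :, i].astype(np.int32) for i in range(4)]

        # Placeholder per-plane math, standing in for the jit.op operations.
        r = (r + 60) % 256                 # shift the red plane
        g = 255 - g                        # invert the green plane
        b = (b * 2) % 256                  # wrap the blue plane

        packed = np.stack([a, r, g, b], axis=2).astype(np.uint8)

        # Feedback: blend the processed frame with the previous output frame,
        # which is what gives the delayed, daze-like trails.
        return (0.7 * packed + 0.3 * previous).astype(np.uint8)

    h, w = 240, 320
    previous = np.zeros((h, w, 4), dtype=np.uint8)
    frame = np.random.randint(0, 256, (h, w, 4), dtype=np.uint8)
    previous = process_frame(frame, previous)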


Assignment 1- Tanushree Mediratta

FOUND SYSTEM- ONLINE THESAURUS

 

The system I chose was an online thesaurus, Thesaurus.com. The original input word, chosen at random, was ‘candy’. Once this word was fed into the system (the thesaurus), synonyms were generated. The first word from this list was chosen and fed back into the system. One rule I applied to prevent an infinite loop was to exclude any word that I had already chosen earlier. The end result was a word that had no relation to the original word whatsoever, thus destroying the original word and its meaning through feedback.
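The loop itself amounts to a few lines of code (a sketch only: get_synonyms below is a hypothetical stand-in for the Thesaurus.com lookup, which I actually performed by hand):

    def get_synonyms(word):
        # Hypothetical lookup, standing in for a manual Thesaurus.com search.
        raise NotImplementedError("look the word up by hand or with an API")

    def thesaurus_feedback(start_word, steps=10):
        seen = {start_word}
        word = start_word
        chain = [word]
        for _ in range(steps):
            # Take the first synonym that has not been used before,
            # so the loop can never revisit a word.
            candidates = [w for w in get_synonyms(word) if w not in seen]
            if not candidates:
                break
            word = candidates[0]
            seen.add(word)
            chain.append(word)
        return chain

    # e.g. thesaurus_feedback("candy") walks further and further from the original meaning.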