Category Archives: Assignments

Assignment 2 — Delay Party!

I have created a patch that works with any audio file or a live input, but it is set up for a stereo environment. The left channel goes into four state-variable filters: a low-pass at 250 Hz, a band-pass at 1 kHz, a band-pass at 4 kHz, and a high-pass at 8 kHz. The output of each filter goes into a live.gain~ object, and the amplitude level at any moment is sent into a sub-patcher that slides the amplitude and then scales it into a delay time. Through early iterations, I found that while the sound file is playing, the level spends most of its time between -30 and -10 dB, so that is the range I based the scale object on. However, this also produced negative delay times, so I take the absolute value of the scale object's output. The result is then sent into a line~ object that smooths it over 50 ms. The only difference between the left and right channels of audio is the direction of the mapping: on the left channel, -30 dB gives you a delay time of 10 ms and -10 dB gives you 1000 ms; on the right channel, -30 dB gives you 1000 ms of delay while -10 dB gives you 10 ms.
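The level-to-delay mapping can be sketched outside Max like this (a Python stand-in for the scale-then-abs chain described above; the function names are my own, and the line~ smoothing stage is omitted):

```python
def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linear mapping, like Max's [scale] object (no clamping)."""
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def delay_ms_left(level_db):
    # Levels outside [-30, -10] dB extrapolate and can go negative,
    # hence the abs(), mirroring the patch.
    return abs(scale(level_db, -30.0, -10.0, 10.0, 1000.0))

def delay_ms_right(level_db):
    # Same mapping, inverted: quiet passages get long delays.
    return abs(scale(level_db, -30.0, -10.0, 1000.0, 10.0))
```

For example, a level of -40 dB would extrapolate to -485 ms on the left channel, which the absolute value folds back to a usable 485 ms.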

Messing around with songs – Assignment 2

The following patch helps mess around with audio files by adding time delays, shifting pitch and adding feedback. This, of course, resulted in a lot of interesting experimentation.

The first video shows how the ‘chipmunk’ voice filter can be achieved. The pitch shift for heavy voices generally worked with a 1.8x transposition factor, and for lighter voices 1.4x usually gave a good chipmunk voice. The songs used in this video are Pillowtalk by Zayn and Hello by Adele. While experimenting, I also found that a time delay between 50 ms and 200 ms usually complemented the original. You can see me messing around with feedback as well.

While experimenting with the pitch shift, I remembered a song, ‘Cinema’ by Skrillex, which had a kid-like voice before the dubstep drop. I decided to find out what the voice could have actually sounded like before it was modified and here it is. The first one is the original, followed by what I thought would be the actual voice-
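Recovering the "original" voice is just the reciprocal transposition ratio. As a rough sketch (naive resampling in Python, which shifts pitch and speed together; this is not necessarily how the Max patch shifts pitch):

```python
import numpy as np

def transpose(signal, ratio):
    """Resample so the result plays back `ratio` times higher (and shorter)."""
    n_out = int(len(signal) / ratio)
    src_idx = np.arange(n_out) * ratio
    # Linear interpolation at fractional read positions.
    return np.interp(src_idx, np.arange(len(signal)), signal)

# A 1.8x 'chipmunk' shift is roughly undone by transposing at 1/1.8.
```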

Continuing on the path of exploration, I encountered a slight reverb, or concert-hall, effect while using feedback. Here is how the effect of slight echoes is created-

In the end, I just had fun with the iPhone ringtone, creating drumbeats (you may need to increase the volume), really cool sounds, and some other effects-

 

-Tushhar Saha

Assignment 2: Negative Thinking

What I wanted to do with this assignment was to use delay to isolate movement within a video, and show only the subject on an empty background. I got something pretty cool – but only by accident.

I started with the first video, which is from Generate by Rasmus Ott (on YouTube). By delaying the initial matrix and then subtracting the original matrix from the delay, I got the second iteration. Pretty cool, and I sort of isolated the subject, but it wasn’t what I wanted. Then, when I unlinked the delayed matrix from the jit.expr object, it froze it and left only the (anti-?)silhouette of the original behind. I really like the aesthetic of a moving subject revealing the background, but I couldn’t figure out how to replicate this in a non-janky way. Anyway, here’s the gist:
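The delay-and-subtract step can be sketched in a few lines of Python (numpy standing in for jit.matrix and jit.expr; the frame delay length here is arbitrary):

```python
import numpy as np

def motion_mask(frames, delay_frames=5):
    """Subtract a delayed copy of the video from itself.

    frames: array of shape (n_frames, height, width), values in [0, 1].
    Static background cancels to ~0; a moving subject leaves a
    positive/negative 'ghost' pair, like the second iteration described.
    Note: np.roll wraps at the start; a real delay line would pad
    the first delay_frames with black instead.
    """
    delayed = np.roll(frames, delay_frames, axis=0)
    return frames - delayed
```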

Assignment 2 – Trippy Time Machine

Hello, for the time machine assignment, I decided to experiment with several different effects. I tried to mess with the color of the video feed, and I split the Jitter video into two halves to work with. Each half was passed through a feedback time loop, and the feedback in each loop was the video received from the other half. I used a motion detector to tinge the moving elements of the final frame yellow. I had to tinker with the values of feedback, transparency, and color in order to get the intended effect.
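That cross-feedback structure can be sketched like this (grayscale numpy frames instead of Jitter matrices; the feedback amount of 0.6 and all names are made up for illustration):

```python
import numpy as np

def cross_feedback(frames, fb=0.6):
    """frames: (n, height, width) grayscale video with values in [0, 1].

    Each half of the frame runs through a feedback loop whose
    feedback signal is the *other* half's previous output.
    """
    mid = frames.shape[2] // 2
    out = np.zeros_like(frames, dtype=float)
    prev_left = prev_right = None
    for i, frame in enumerate(frames):
        left = frame[:, :mid].astype(float)
        right = frame[:, mid:].astype(float)
        if prev_left is not None:
            left = np.clip(left + fb * prev_right, 0, 1)    # fed by right's past
            right = np.clip(right + fb * prev_left, 0, 1)   # fed by left's past
        prev_left, prev_right = left, right
        out[i] = np.concatenate([left, right], axis=1)
    return out
```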

Assignment 2: Rampsmooth

Hi!

For the second assignment I decided that I wanted to play around with rampsmooth and randomly generated tones.

And here’s the overview of my patch:

What I tried to create was a normal speaker that would randomly generate tones. I then used tapin~ and tapout~ to create a delay for the final signal. I also multiplied the signals, and used rampsmooth~ and kink~ to help get rid of the pops in the speaker. What ends up being made is a much smoother version of the first thing we made in class, which I think sounds pretty chill!
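For anyone curious how ramping kills the pops: a pop is just an abrupt jump in the signal, and rampsmooth replaces the jump with a short linear ramp. Here is my own rough Python approximation of that behavior (a single ramp length instead of separate ramp-up/ramp-down times):

```python
def rampsmooth(signal, ramp=64):
    """When the input jumps, ramp linearly toward the new value
    over roughly `ramp` samples instead of jumping instantly."""
    out = []
    cur = 0.0
    step = 0.0
    prev = None
    for x in signal:
        if prev is None or x != prev:
            step = (x - cur) / ramp   # new target: recompute the slope
            prev = x
        if step == 0 or abs(x - cur) <= abs(step):
            cur = x                   # close enough: snap to the target
        else:
            cur += step               # otherwise keep ramping
        out.append(cur)
    return out
```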

 

Also here’s the code for it from GitHub!:

<pre><code>
----------begin_max5_patcher----------
1491.3oc6Z0sapiCD9Z3oHJWthEY67K6c6ywpipBfK0mlXG4D5VNGUd1WGON
A3PRvcw.mKpTaP1Nwy7My7YOiS94zI9KEuSq789Ku+waxjeNcxDcWMcLwzdh
eQ16qxypz2lOm9uhke2eFLTM88Zc2xrhxpBgn9k8djH0es2QojVQ40Y0LA+I
IcUMHrz34Qy7HKHyQy7PlKdey7P7sELdNsVKx.SmOK307rBpVd+sjkk6e31E
aqauebqjypW8Biu4HohwIMhhDm17CFq+gPNRxr05oWAw+LnCCvjWuqjBSieE
aCWI9tmpQ0pX+POLtARP2U06xA00uoiOlNs4xLKszzerNa0d+9LIjdANZXfG
GrnwdGFDp+IpA2MW6C2jzVY9Ba8ZJ+Xa50gnAhcxEYqKnUUd3dAK9y5kMfM.
BtRHi4jIC5iO3ccsYnLWTu2MPM.AtRhlBgI5VInAv5BK.aQlbCiaVR.2wNw8
PQKELNngggXTWP+1kM.raQkNifZrUhbgDF.MObV6kjHs.Zm6lXiWXqdkSACF
dNoa.sLe5f8mK3T+tQyYb5QC1zr5vnJUdIUN3C+LKulJ6Yf0hhLFW8bYRsYG
0RvOLFkuFzTz4O0wZSl7roUYvekJqZsJZyrxQ17CbMHQ+i9RLzEzS541Mybl
msjlalxiFUlw2P6GGvP8.i1mYHT.i2GHHGT6yUT3wFPOERlJF7o2+Ukzz+NS
+P2eL06aOVRXXTf8jPL4KR3WjvuHgZbrRTTnRN7rLB9LIBLRVOQo.wDG27S.
j62P7xP+y1b+EZdoJfzXh7cH.QtEfIAWFfAn6I.gLj8XUdPUAz0dPPa9NOwa
TYSQBUpZFxoUyG0VXaV+nKmjzhPs8IIcTyTWr2MIo9ARAt7krJgbuW37Qy22
Ak.YLFAQ5EHf.mAq.h7Xq.5yVq4sprwvHcMDlBJtTYiAOVi1PLxisZkT4yBY
QkgSZ3nJP6I3dYbOFubasGnt2K5YTfYQr3QomgWfd55h0ZJLMaC87snTABHO
bj5eUtdnOQY5iQNi00mmpMHlkqFpv0XKxi71TttJlY+.0p2OhIiPsHXfSsPu
Dc7nPN7RLqYd9KU4zbiM.cKVG5J2NIApbPy.RFc4Eh8KubS.+qL9q6cDrMqm
lhfUWGE1jGKr+i8MIS6JVNrqabLT13n3F+Xw8pcqxUrchS83DfzGEMJxQ+F3
wibCtaO24.zk833jeO73tZOsVriznNdzE2vQOZetavLQkRSDbhP52xg95PX9
Ayvc2KYvjXrA0lpQGB08TMpCAUcVoRm26EgPt4L9LXqYoaE1RFM0rEOVGpB6
LtJAMj6vt1UFfWbYrmNH1Up0JAmqm0aa.MMOamiHx3EI5iPKlb4zSuWIlUK1
rIm5TWKI7zq8xXGNrlwOxmVlopzjVSkOQ4YKAbdSON.kvjhlvcmteE3wiGME
E7v0f4zxPzGY9mvgStH5vv9vvg.MD7FoByyJz5140WtstVvcZ39hjKGtGdeb
sCcBW70hhA285+aLMN.d0TiGTaGMuebqG17JeN4iJQqTM8epwnRrUtpc9aKu
06fdslVUy35OdjiuIxI2TAa8g2BlpHi.82VBF3tgHbWqAdGEVqaHazM7X5lQ
aHQX3rtHcstVcCagt0jpoGdP6FnMAAlEEh5Zck5F1FcKcLyFAa9hUfMfiVbd
q.D7M8PzNecqqTsWXiEEchEUHWCu7R704LsINyXvtJA07INYqm45BARrPPsZ
y0AIaV.ASbgjrwM8KpyuR4v5y6MHECoWh6ZcdHE51qrjSC4cTzbrMNDW35I1
HolCQ95kTns6RguVIYkeC4BJpMqEzxiu8Krh6OTDceVRePFKlj.eagpcGOW6
HWk1kZg1svEKbYgbbAKAGeuDjU90XGHIa38XWfHKjy43Ax9Nqr7MprxbyZQn
J536v27UxLcSFGZpeI49R5ar16OR2SlTUBQsp9gsRHI+2SgpA8KDpXc9VlYe
Akj+X5+wRUmXn
-----------end_max5_patcher-----------
</code></pre>

Assignment 2: Pitch shifts and accidental overtones

Hey all!

For my second assignment, I wanted to create a Max patcher that would create echoes, but with specified pitch shifts. Then, I decided to do some experimentation, which led me to accidentally discover cool ways to create overtones just by taking the same audio and changing the delay for L and R.

Here’s the high level view of my patch.

The input to my subpatcher is:

  • A mono input using ezadc~
  • Delay L (ms)
  • Delay R (ms)
  • Decay (0-1)
  • Pitch shift (by ratio)

The output is an audio file with echoes. Each echo is scaled by the decay and pitch-shifted by the pitch-shift ratio. Each echo is then scaled and pitch-shifted again; this means that we hear multiple successive pitches for each note sung.

My favorite intervals to play with were the m3, M3, and M2. Setting the decay scaling value high made the higher pitches more audible. Setting delay L and delay R slightly different from each other was a fun way to hear phasing. While phasing, I noticed that the noise level grew as my voice seemed to interfere with itself, which makes sense to me: noise is audible at all times, so it is understandable that it would compound.
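What the echo chain does to a single note can be sketched numerically (a toy model of the audible result, not the signal processing itself; names are my own):

```python
def echo_series(freq_hz, amp, decay, ratio, n_echoes=4):
    """Return (frequency, amplitude) pairs for a note and its echoes.

    Each successive echo is scaled by `decay` and transposed by
    `ratio` again, so one sung note becomes a series of pitches.
    """
    series = [(freq_hz, amp)]
    for _ in range(n_echoes):
        freq_hz *= ratio   # pitch-shifted again each pass
        amp *= decay       # scaled again each pass
        series.append((round(freq_hz, 2), round(amp, 4)))
    return series

# e.g. a minor third up each echo, with decay 0.7, starting from A440:
# echo_series(440, 1.0, 0.7, 2 ** (3 / 12))
```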

Here is the subpatcher:

Here is an audio demo + the patch for anyone interested.

 

 

Assignment 1 – Washing Machine

This idea was inspired by my experience of washing papers left in my pants pockets, which I used to do all the time when I was younger. Sometimes I could salvage the papers and they would come out mostly untouched, and sometimes they'd look like a melted wax version of whatever I'd originally put on them. So the very beginning of my idea was using paper (and the image upon it) as my signal and medium.

I decided I couldn't just run papers through a washing machine repeatedly, because of all the potential water usage, so I decided to use a crockpot. With a total of about 80 ounces of water (I had to refill several times) and half a bowl each of coffee and lemon juice (to alter the colors further, although I'm not sure it had any effect), I took 17 x 11 inch sheets (from an old issue of The Economist), repeatedly folded them into fourths, layered them, steamed them within the crockpot for 8 minutes, and then unfolded and dried them before repeating the process. This started with the cover, and I managed to get through about 20 sheets before the inner layers began completely breaking down. Each iteration had 3 layers: the sheet or sheets previously steamed, plus an unsteamed sheet from The Economist on each side of the central one.

The unfolding and folding process ensured that it wasn't just a giant stack of paper stuck together: by spontaneously and randomly damaging the sheets, introducing rips and tears, I was able to produce some funky-looking sheets that, as I went further and further on, deviated very much from their original rectangular shape. The end results are probably not too surprising, but nonetheless, by the 20th iteration very little of the original sheet was left (a lot of chunks of paper just fell off).

 

Links:

Before the 1st iteration:

https://ibb.co/m9i4ee

After each iteration:

https://ibb.co/eU6CXz

What folding looks like:

https://ibb.co/cKTRsz

5th iteration:

https://ibb.co/jRAHze

15th iteration:

https://ibb.co/mj6CXz

20th iteration:

https://ibb.co/deX85K

 

Assignment 1 – Layered Vocals

For this project I took a song (Mr. Blue Sky by Electric Light Orchestra), isolated the vocals through Audacity and then layered this new vocals track on to create a new “mix”. I then repeated this process by isolating the vocals from the new mix and again layering it on. I did this 30 times.

I was curious to see how feasible it is for a program like Audacity to separate vocals from everything else, and use the program as a feedback system to create new mixes.

To create mix #1, I separated the vocals and instrumentals individually and then layered them together.
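Vocal separation in a tool like Audacity typically leans on the fact that lead vocals are usually mixed dead center. A minimal sketch of that mid/side idea (one common technique, not necessarily what Audacity's effect does internally):

```python
import numpy as np

def split_center(left, right):
    """Return (center, sides) for a stereo pair.

    center: what both channels share, which is mostly the vocal
    sides:  what cancels between channels, mostly the instruments
    """
    center = (left + right) / 2
    sides = (left - right) / 2
    return center, sides
```

Re-layering the extracted center onto the mix, as described above, boosts whatever the separation thinks is "vocal" on each pass, so the separation errors compound with every iteration.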

Original recording by Electric Light Orchestra:

 

Mix #1

 

Mix #10

 

Mix #30

 

I have never worked with sound media or used any sort of audio software before, so I had no idea what to expect. You can clearly hear how the original signal is destroyed, particularly at around 1:02. The results were certainly interesting, and I’m sure that continuing to isolate and layer the vocals would eventually produce something quite different!