Category Archives: Assignments

Video Noise Convolution

 

This assignment convolves two matrices. The first matrix is an RGB white-noise grid, and the second is a grayscale grid at a scaled-down proportion. When the two matrices are convolved, the size difference creates a shimmering effect once both grids have a high number of rows/columns. A MIDI controller changes the grid sizes. When the viewer claps, the grayscale matrix's values are reset to 0, which allows the full RGB matrix to show through.
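The matrix convolution at the core of the patch can be sketched outside of Max/Jitter. This is a minimal numpy/scipy version, not the actual patch; the grid sizes and the per-channel approach are illustrative assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

# RGB white-noise grid (rows x cols x 3), values in [0, 1)
rgb = rng.random((64, 64, 3))

# Smaller grayscale grid acting as the convolution kernel
gray = rng.random((8, 8))
gray /= gray.sum()  # normalize so the output stays in range

# Convolve each color channel with the grayscale kernel
out = np.stack(
    [convolve2d(rgb[..., c], gray, mode="same", boundary="wrap")
     for c in range(3)],
    axis=-1,
)
```

Growing both grids makes the blur kernel sweep across more noise per frame, which is one way to read the shimmering effect described above.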

 

Code:

https://github.com/TyVdz/Assignment-2/blob/master/Assignment3

Convolution with My House

For the IR, I recorded the sound of a balloon popping in both my room and a fully tiled public bathroom. I also used a recording of the character Navi from The Legend of Zelda, and a recording of a chord played on the melodica.
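Applying any of these IRs to a dry signal is a single convolution. A minimal sketch of convolution reverb, with a synthetic tone and an exponentially decaying noise burst standing in for the real recordings:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
t = np.linspace(0, 1, fs, endpoint=False)

# Stand-ins for the real recordings: a 440 Hz tone as the dry signal,
# and exponentially decaying noise as a rough room-like impulse response
sig = np.sin(2 * np.pi * 440 * t)
ir = np.exp(-6 * t) * np.random.default_rng(1).standard_normal(fs)

# Convolution reverb: output length is len(sig) + len(ir) - 1
wet = fftconvolve(sig, ir)
wet /= np.max(np.abs(wet))  # normalize to avoid clipping on export
```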

For my original signal, I used a recording of myself speaking the opening line from I Am Sitting in a Room. Here is the result:

  • Nick Castellana

Dreams

The signal I convolved was a recording of Shia LaBeouf saying “Don’t let your dreams be dreams.”

The four signals I used for convolution were:
1. IR of metal structure outside Carnegie Museum of Art
2. IR of Cathedral of Learning Stairwell G, 4th floor
3. Recording of person falling down steps (www.freesound.org/people/cinevid/sounds/144160/)
4. Recording of single note from a synthesizer (www.freesound.org/people/modularsa…s/sounds/285519/)

The reverb from recording (1) had a lot of destructive interference which quickly killed the sound, especially compared to that of recording (2), which echoed back from 12 floors.

Recordings (3) and (4) were interesting to me because the outputs sounded pretty similar at first despite the differences in convolution signals. But closer listening showed the different frequency content in the output.

https://soundcloud.com/bobbie-chen/dreams

PastPresentFuture

I jumped headfirst into this project the other day when I came across an article on Audible Range Magazine’s site about what scholars and linguists think English will sound like a century from now. They provided three audio clips from separate readings of A Tale of Two Cities: one spoken in the English of approximately 200 years ago, one in contemporary English, and one in a projected estimate of English in 100 years’ time. The first and third iterations are mostly indistinguishable from our modern-day utterance of the language, highlighting the enormous transformation English has undergone within the last 3 centuries alone.

I thought that using the multi-convolve function in Max would be a lit way to draw a sonically driven depiction of linguistic evolution over time. I started by playing around with some impulses that I thought would have an interesting, discordant texture (sawing through high-density foam, slamming my metal tabaret drawers shut, etc.). To tie a consistent sonic progression to the evolution of the English language, I directed my exploration of impulses toward musical relationships. I strummed the strings of a piano, which I divided into thirds: the old English was convolved with the lower pitches, the contemporary with the middle ones, and the future with the highest pitches. I recorded the convolved audio together but staggered the files so that the separate convolutions would not create an overwhelming output (I still definitely did not achieve this, lmao).

To take the theme of evolution one step further, I decided to convolve the old English narration of A Tale of Two Cities with Nicki Minaj saying the word “ass” in her straight banger, “Anaconda”. It’s a playful juxtaposition that emphasizes the spectrum of functions and contexts English has served from the past up until the modern day. I also honestly just wanted to see what it would sound like.

The four latest tracks:

Assembling the World’s Most Complex Machine

The source came from some experimenting I was doing with machine sounds. I started putting one sound next to another, and it began to make me think of putting together some sort of high-tech machine. Then I got to imagining what assembling the most complex machine there was would sound like. The result is the first recording in the playlist above.

I recorded two balloon pops: one was in the hallway outside my apartment—fun for the neighbors; the second was in a long corridor outside the theater at the ETC. The first, when convolved with the machine sounds, created a reasonable reverb for putting together the machine in a normal room space. The second seemed comically exaggerated, as if trying to be quiet while constructing the machine in a large auditorium.

The experimental impulse response recordings were a sample of a staccato note on an upright bass, and the end of a timpani roll. (Actually, this to me would pass as the end of a timpani roll; however, it was really just some experimenting with a bow on the bass.) Using the bass produced an interesting percussive texture with the original timbre. The “timpani” seemed to conjure a more abstract result—something I could see using in an ambient piece or as source material in sound design.

I Was Really Bored…

For my project, I adapted the convolution patch shown in class to take an audio clip and convolve it, at different gain levels, with 4 impulse response clips. The gain levels of the audio file are controlled by the x and y coordinates of a 2D slider (the output is a sinusoidal function of the distance of the coordinates from the corners). I also added a delay line. (I will upload the patches once I figure out how to do that. Sorry!)
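The corner-distance gain mapping can be sketched in a few lines. This is my reading of the description above, not the actual patch: each corner of the unit square owns one IR, and its gain falls off sinusoidally with distance from that corner.

```python
import math

# Each corner of the 2D slider "owns" one of the 4 impulse responses
# (assumed interpretation of the patch described above)
CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def ir_gains(x, y):
    """Return 4 gains in [0, 1] for a slider position (x, y) in the unit square."""
    gains = []
    for cx, cy in CORNERS:
        d = math.dist((x, y), (cx, cy))        # distance to this corner
        d = min(d / math.sqrt(2), 1.0)         # normalize to [0, 1]
        gains.append(math.cos(d * math.pi / 2))  # 1 at the corner, 0 at the far side
    return gains
```

Moving the slider toward one corner then cross-fades the mix toward that corner's IR.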

The source sound is Ter Vogormia by Diamanda Galás. My 4 impulse responses are a bottle cap falling, a whistle, and a couple of other sounds I had lying around on my computer. Here are all the original files:

Finally, I used a Leap Motion to obtain gestural data from my fingers to control the inputs to the convolution patch. To do this, I downloaded a third-party Max object that processes the raw Leap data. I then used this object to extract the 75-dimensional vector that contains all the finger data at any moment and sent it via an OSC message to Wekinator, a GUI that allows you to build regression models using neural networks. I then trained the model, mapping the 75 inputs to 3 outputs, which I sent back to my original Max patch (via OSC), scaled, and smoothed logarithmically. These 3 outputs controlled the x and y values of my slider along with my delay time. Here is the final outcome:

that’s really convoluted!
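For anyone curious about the Max-to-Wekinator hop described above: a Wekinator input frame is just an OSC message, and one can be packed by hand with the standard library. The address /wek/inputs and port 6448 are Wekinator's documented defaults; the zero-filled frame below is a placeholder for real Leap data:

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """NUL-terminate and pad OSC string data to a multiple of 4 bytes."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, floats) -> bytes:
    """Build a minimal OSC message carrying float32 arguments."""
    addr = osc_pad(address.encode())
    tags = osc_pad(("," + "f" * len(floats)).encode())
    args = b"".join(struct.pack(">f", f) for f in floats)
    return addr + tags + args

# Wekinator's defaults: inputs arrive at /wek/inputs on UDP port 6448
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frame = [0.0] * 75  # placeholder for one 75-dimensional Leap finger frame
sock.sendto(osc_message("/wek/inputs", frame), ("127.0.0.1", 6448))
```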

IRs in order:
-a click through a Pignose™ guitar amp at full volume, recorded directly out of the output jack.
-clapping backstage in the Chosky Theater between two parallel walls.
-a reversed recording of a balloon popped at the other end of a long lighting pipe.
-a clip of the noise my car stereo makes when I plug my phone in to charge while it’s plugged into the sound system.

Test signal: a free drum loop from a looploft sample pack.

inspired results.  @____@

get the gist? -> https://gist.github.com/nikerk34/36983305044aa23809d6f1225e33ec98

Baker Hall Shenanigans

I recorded 2 balloon pops in Baker Hall, because I’ve always loved the incredible reverb you can get if you stand in just the right spots. The first was recorded directly under the center of the dome just inside the main Baker doors, with the Zoom recorder about 2 feet from the balloon, pointed directly at it (which probably wasn’t the best orientation in retrospect). The second was recorded from all the way down at the other end of the building in Porter Hall, with the balloon in the same spot. The sounds I got were surprisingly different, so I decided to use both. The third sound is actually me tapping directly on the microphone of the media lab computer, recorded straight into Audacity. The fourth IR is an excerpt from Symphonie fantastique by one of my favorite composers, Hector Berlioz. Here they are in order:

 

The signal I convolved was an audio clip of me attempting to play a staccato version of Mac Miller’s ROS on the piano:

 

 

Here’s a playlist of the audio clip convolved with the 4 different IR recordings:

 

 

Convolve it!

Hi there,

I created a few impulses using Apple’s Impulse Response Utility software, which is slightly different from the balloon trick used in class. The Impulse Response Utility sends a 20 Hz to 20 kHz sweep through the room, which is then recorded and deconvolved into an impulse.
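The sweep-to-impulse step can be sketched numerically. Below is a minimal version of the standard exponential-sine-sweep technique; the parameters are illustrative assumptions, and Apple's exact processing is not public. In practice you convolve the room recording of the sweep with the inverse filter; here the dry sweep stands in for the room recording, which collapses to an impulse-like peak:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48000
T = 4.0                   # sweep duration in seconds (assumed)
f1, f2 = 20.0, 20000.0    # the 20 Hz to 20 kHz range
t = np.arange(int(T * fs)) / fs

# Exponential (log) sine sweep
R = np.log(f2 / f1)
sweep = np.sin(2 * np.pi * f1 * T / R * (np.exp(t * R / T) - 1))

# Inverse filter: time-reversed sweep with an amplitude envelope that
# compensates for the sweep spending more time at low frequencies
inv = sweep[::-1] * np.exp(-t * R / T)

# Convolving the (room-recorded) sweep with the inverse filter
# "flattens" the sweep into the room's impulse response
ir = fftconvolve(sweep, inv)
```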

slide

The first IR is of the Great Hall in the CFA:

great-hall

The second is through my VOX AC30 amplifier’s spring reverb. I also included another file where the source was a clap; the strong transients really make the spring reverb sound interesting.

vox

For the experimental pieces, I chose a vinyl-noise sound from the internet; I thought it would add that vinyl character to a modern recording. For my last convolution, I used a lion’s roar, which created these very interesting clouds of tones from the Cherokee talking clip. Here are all of the recordings:

 

Have a good one!

 

-Steven Krenn

Convolution and Tom’s Diner

For this assignment I chose the classic “Tom’s Diner” as my original signal, given its long history of being used to test compression as well as to tune sound systems. This is not the exact recording, but I have no way of hosting the original MP3 without it being removed for copyright reasons.

 

For my traditional signals, I wanted to find more acoustically interesting spaces around campus. First I chose the stairwell in the CUC; I have always loved the echoes in that space. I also felt the new locker rooms would be an interesting space, though they turned out to be less reverberant than I expected.

For my third recording, I thought ambiance might create a cool effect. I tried recording in Starbucks, but the background music kind of ruined what I was going for, so I just sat at the black chairs in the CUC and recorded the space.

For my final recording I chose a flushing toilet; I thought the gurgling of the water and the tons of little peaks and valleys could create a cool effect.

Below is a playlist of all recordings produced. First the IR is played, and then the original signal convolved with that IR. For time’s sake, only the first verse was run through the system.