
Dreams

The signal I convolved was a recording of Shia LaBeouf saying “Don’t let your dreams be dreams.”

The four signals I used for convolution were:
1. IR of metal structure outside Carnegie Museum of Art
2. IR of Cathedral of Learning Stairwell G, 4th floor
3. Recording of person falling down steps (www.freesound.org/people/cinevid/sounds/144160/)
4. Recording of single note from a synthesizer (www.freesound.org/people/modularsa…s/sounds/285519/)

The reverb from recording (1) had a lot of destructive interference which quickly killed the sound, especially compared to that of recording (2), which echoed back from 12 floors.

Recordings (3) and (4) were interesting to me because the outputs sounded fairly similar at first despite the differences in the convolution signals, but closer listening revealed the different frequency content in each output.
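Under the hood this is just discrete convolution of the dry recording with each impulse response (or arbitrary signal). Here is a minimal sketch of the same idea in Python; it is not the exact tool I used, the file names are placeholders, and it assumes both files share a sample rate.

import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Dry signal and one of the four convolution signals (placeholder file names)
dry, sr = sf.read("dreams_dry.wav")
ir, ir_sr = sf.read("stairwell_ir.wav")   # assumed to share the sample rate

# Mix down to mono so the convolution stays one-dimensional
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

# "full" mode keeps the reverb tail that rings past the end of the dry signal
wet = fftconvolve(dry, ir, mode="full")

# Normalize to avoid clipping and write the result
wet /= np.max(np.abs(wet))
sf.write("dreams_convolved.wav", wet, sr)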

https://soundcloud.com/bobbie-chen/dreams

Time Delay based on Fundamental Frequency

For this assignment, I created a system that takes the fundamental frequency of the audio signal and, after scaling it, uses it to set a time delay. The same time delay also controls how often the fundamental frequency is sampled to set the next delay.
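The system itself is a Max patch (shown further down), but the logic is roughly the following offline sketch in Python. The pyin pitch estimator and the pitch-to-delay scaling here are stand-ins I chose for illustration, not the objects or values in the actual patch, and the file name is a placeholder.

import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("bechet.wav", sr=None, mono=True)  # placeholder file name

out = np.zeros_like(y)
pos = 0
win = 2048  # analysis window for the pitch estimate

while pos + win < len(y):
    # Estimate the fundamental frequency of the current window
    f0, voiced, _ = librosa.pyin(y[pos:pos + win],
                                 fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C7"),
                                 sr=sr)
    hz = np.nanmean(f0) if np.any(voiced) else 200.0  # fall back when unvoiced

    # Scale the pitch into a delay time (assumed mapping: higher pitch, shorter delay)
    delay = int(np.clip(sr * 20.0 / hz, 256, sr // 4))

    # Write the delayed audio, then wait `delay` samples before re-sampling the pitch
    src = max(pos - delay, 0)
    n = min(delay, len(y) - pos)
    out[pos:pos + n] = y[src:src + n]
    pos += delay

sf.write("bechet_stutter.wav", out, sr)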

The result is a new signal that stutters, especially in passages with large variation in fundamental frequency. The stuttering effect is compounded in this particular recording (Sidney Bechet’s “Si Tu Vois Ma Mère”, from Midnight in Paris) because of its call-and-response structure, especially around 1:38 in the SoundCloud link.

https://soundcloud.com/bobbie-chen/bechet-stutter-1

From the beginning to 3:13, the original recording plays in the left channel and the output of the system in the right channel; from 3:14 to the end, only the output plays. I tried to include the original recording as a reference, but SoundCloud removed it for copyright reasons. Here is a YouTube link.

The Max patch is below.

Sitting Stretched

For this assignment, I took the eponymous line of Lucier’s “I am sitting in a room” and repeatedly processed it using a few of Audacity’s built-in features.

Originally, I had intended to simply change the tempo of the recording and add echo with each iteration, but I found that doing so led to an extremely uncomfortable beating effect as the sampling rate became perceptible. I switched to using Paulstretch, which allows for extremely long stretching without losing (subjective) quality. I also added a low-pass filter at 5000 Hz to prevent the high frequencies in the “s” sound from dominating the recording.

On each iteration, using Audacity (a rough code sketch of this chain follows the list):
Paulstretch – Stretch Factor 1.15, Time Resolution 2s
Echo – Delay Time 0.35s, Delay Factor 0.15
Low-pass filter – Cutoff Frequency 5000 Hz, Rolloff 6 dB per octave
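For reference, here is a rough Python approximation of one pass through that chain, looped twenty times. It is only a sketch under some assumptions: Audacity’s Paulstretch is replaced by librosa’s phase-vocoder time stretch (which sounds different), the echo is simplified to a single delayed copy, a first-order Butterworth filter stands in for the 6 dB-per-octave rolloff, and the file names are placeholders.

import numpy as np
import librosa
import soundfile as sf
from scipy.signal import butter, lfilter

y, sr = librosa.load("sitting.wav", sr=None, mono=True)  # placeholder file name

b, a = butter(1, 5000, btype="low", fs=sr)  # first-order low-pass, ~6 dB per octave
delay = int(0.35 * sr)                      # echo delay time in samples
decay = 0.15                                # echo delay factor

for _ in range(20):
    # Stretch by a factor of 1.15 (a rate below 1 slows the audio down)
    y = librosa.effects.time_stretch(y, rate=1 / 1.15)

    # Single echo: add a delayed, attenuated copy of the signal to itself
    echoed = np.copy(y)
    echoed[delay:] += decay * y[:-delay]
    y = echoed

    # Low-pass filter at 5000 Hz
    y = lfilter(b, a, y)

sf.write("sitting_stretched_20.wav", y / np.max(np.abs(y)), sr)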

The end result after twenty iterations (all included in the link) is a soundscape in which the simple original message is completely unrecognizable.