Category Archives: Assignments

Back it up Terry, Impulse Response

Using the latest and greatest video, featuring Terry, I ran a convolution reverb plugin to generate my recursive art. To make the piece recursive, I used the original audio from the video:

  • Run the original audio through the convolution reverb, using that same audio as the impulse response.
  • Record the output.
  • Make that output the new impulse response AND the file being fed into the reverb plugin.
  • Repeat steps 2 and 3.

(Note: I did some EQ’ing of the reverb output to tame the high frequencies that were feeding back.)

Using the output of the reverb device as both the impulse response and the content being fed into the reverb really expedited the process; it didn’t take many iterations to lose sight of the original altogether.
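For reference, the loop is easy to sketch in code. Here is a minimal Python version (an illustrative sketch, not my actual plugin chain: scipy’s fftconvolve stands in for the convolution reverb, the filename is hypothetical, and the EQ step is omitted):

```python
# Minimal sketch of the recursion. Assumes a source file named
# "terry.wav" (hypothetical); scipy's fftconvolve stands in for the
# convolution reverb plugin, and no EQ stage is included.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

audio, sr = sf.read("terry.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)            # mix to mono for a 1-D convolution

signal = audio
for i in range(5):
    # The signal is both the impulse response and the content fed in.
    out = fftconvolve(signal, signal, mode="full")
    out /= np.max(np.abs(out)) + 1e-12    # normalize to avoid clipping
    sf.write(f"iteration_{i + 1}.wav", out, sr)
    signal = out                          # output becomes the next input/IR
```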

 

Enjoy!

Assignment 1 – Trout Filming in America

This weekend, my family gathered in the North Georgia mountains for a memorial. The rolling hills and creeks of the Southeastern US will always feel like home in many ways, and this trip was no different. The pricks of light through the trees, the swirl of insects, the smell of damp earth – these are deeply embedded sensations. In my family, at least for the last few decades, we do death slowly, and no one gets buried. Whatever kind of peculiar spirituality my family has feels indelibly tied to inland bodies of water. So this time, we gathered on a lake, a place that has seen many family events over the years, and said goodbye to my cousin. Later, my immediate family whispered our own devotions (or tolerated them) near a creek. It was as much a grounding as it was a departing.

This video is a family affair. Everyone was there for its making. The system takes in a short guitar clip (from my sister’s music) and stretches it recursively until the notes all but disappear into each other. The video is a tiny glimpse of a creek that also stretches, slows, and eventually barely crawls. None of it is recorded in great quality. To further break up the imagery, I also applied an effect called “Scatter” in increasing amounts (doubled every time) to the video. Seemed appropriate. The sound editing was done in Reaper (by my sister) and the video was put together in After Effects. I stopped when AE quit letting me slow down the video (at 9000 percent). The title is a reference to the novel “Trout Fishing in America,” which I was mostly unaware of, but which my father says should now be “Trout Filming in America,” because at a river, we all just get out our phones. No fishing rods in sight.

I’m sure something like this could be done in Max. I tooled around in the interface, but settled on the Adobe standby for this week. Ideally, I would like to stretch this out into a long drone that could go on forever. A little like water.

Here are some screenshots from the software:

Reaper:

After Effects:

Assignment 1 – Noise Reduction

My roommate and I bought an elliptical trainer about a year ago, and as its parts have gotten old, it now makes obnoxious noises every time we use it. Here is an audio recording of my roommate using the elliptical trainer in the background while I watch an Oreo tasting video on YouTube:

 

For this assignment, I thought it would be interesting to see how well the noise could be reduced using Audacity, the audio editing software I am most familiar with. By performing noise reduction over and over again, I got to explore how the correction of irrelevant “background” signals can actually turn the foreground into noise.

First, I had to sample the noise just by itself, as shown in the following snippet:

 

Then, I performed noise reduction on the entire recording using the default parameters in Audacity (Sensitivity = 6.00, Frequency Smoothing (bands) = 3):

Here is what it sounds like after 1 noise reduction:

 

After 5 iterations:

 

After 10 iterations:

 

After 25 iterations:

 

After 40 iterations:

 

After 50 iterations:

 

Throughout the 50 iterations, I could see from the waveform that the amplitudes were reduced, and the few seconds of “pure noise” at the beginning were almost completely erased. After the 50th iteration, the lady’s words are basically incoherent.
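Scripted outside Audacity, the whole experiment might look roughly like this (a sketch using the third-party noisereduce package, whose spectral gating is similar in spirit to Audacity’s noise reduction; its parameters do not map one-to-one onto Audacity’s Sensitivity and Frequency Smoothing settings, and the filename and noise-clip length are assumptions):

```python
# Iterated noise reduction, sketched with the noisereduce package.
import soundfile as sf
import noisereduce as nr

audio, sr = sf.read("elliptical_oreo.wav")   # hypothetical filename
if audio.ndim > 1:
    audio = audio.mean(axis=1)               # mix to mono for simplicity
noise_clip = audio[: int(2 * sr)]            # opening seconds of "pure noise"

for i in range(1, 51):
    # Each pass subtracts the noise profile again, eroding the
    # foreground a little more every time.
    audio = nr.reduce_noise(y=audio, sr=sr, y_noise=noise_clip)
    if i in (1, 5, 10, 25, 40, 50):
        sf.write(f"reduced_{i:02d}.wav", audio, sr)
```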

I then tried the same process with another set of parameters (Sensitivity = 8.00, Frequency Smoothing (bands) = 5).

After 10 iterations:

 

After 25 iterations:

 

After 40 iterations:

 

This time, the audio is destroyed even more quickly.

 

Original video:

 

Evie Bot Recursion – Assignment 1

Evie Bot is an online chatbot programmed to be smart enough to hold good conversations with people. The concept of feedback sparked an idea: what if I made Evie have a conversation with herself? So whatever answer one Evie bot gave, I directed toward the other Evie bot. As expected, the conversation grew pretty interesting. As it progressed, we could observe the conversation dying down and one of the Evie bots changing the topic or asking a different question. Some flaws in the programming of the online bot could be seen, particularly with its memory; for example, it would ask the same question again: “What is your favorite book?” After having a look at the video, the kind of conversation they are having becomes clearer. (Feel free to increase the playback speed.) Since the file was large, I uploaded the video to YouTube. Here’s the link:

https://youtu.be/u6ym9nC0z7s
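For the curious, the feedback loop itself is trivial to express in code. This is only a sketch: Eviebot has no official API that I know of, so the get_reply function here is a hypothetical stand-in for what I did by hand (copying each answer into the other bot’s chat window):

```python
# Two-bot feedback loop. get_reply is a placeholder for the Eviebot
# web interface, which I drove manually; here it just echoes.
def get_reply(session: str, message: str) -> str:
    # A real version would submit `message` to one Eviebot session
    # (e.g., via browser automation) and return the bot's answer.
    return f"[{session} would answer: {message}]"

utterance = "Hi!"                                # seed the conversation
for turn in range(20):
    utterance = get_reply("Evie A", utterance)   # A answers B's last line
    print(utterance)
    utterance = get_reply("Evie B", utterance)   # B answers A, closing the loop
    print(utterance)
```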

Feeding back the responses resulted in confusing, humorous, complimentary, sensible, serious, random, and creepy dialogues. One can observe the Evie bots being confused about their gender and making assumptions. Then you can see them making a joke about having a brain disease without having a brain. They call each other nice one second and are rude the next. From serious conversations about being a machine with the avatar of a human to a random comment about being asexual, we can all agree that Evie bot was creepy (she agreed as well).

What we learn from this is that feedback results in a lack of control: the conversation occasionally loses its meaning and thus doesn’t replicate a real conversation. However, the conversation is really unique and goes to show the kind of programming underlying the Evie bot.

Here is the link to the Evie bot if you want to have a chat with her: https://www.eviebot.com/en/

Assignment 1 – Audacity is a Good Program

For this assignment I have taken the Pokemon Theme Song and subjected it to a strange phenomenon with the audio editing program Audacity. If I shift the pitch of a sound up 10 semitones and then down 10 semitones, you would think I get the same sound back. Wrong! Audacity does not perfectly convert the waves through this shift, resulting in a slow degradation of the sound. [In retrospect, using high-quality stretching may have prevented this issue. I’m sorry, Audacity, you aren’t as bad a program as I said you were.]

I took the original song and subjected it to this treatment 99 times in Audacity, and then took the first 81 clips and created a “timeline” version of the song.
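The same round trip can be reproduced outside Audacity; here is a rough sketch using librosa’s pitch shifter (the filename is a placeholder, and any non-ideal pitch shifter should accumulate artifacts in a similar way):

```python
# Shift up 10 semitones, shift back down 10, and repeat; each round
# trip leaves a little resampling damage behind.
import librosa
import soundfile as sf

y, sr = librosa.load("pokemon_theme.wav", sr=None)   # hypothetical filename

for i in range(1, 100):
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=10)
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=-10)
    sf.write(f"iteration_{i:03d}.wav", y, sr)
```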

Here’s the original nostalgia trip (Loud Noise Warning).

10th Iteration

40th Iteration

75th Iteration

100th Iteration

 

“TIMELINE” (Sound warning again.) [Note: audio clipping is present; I had trouble fixing it without altering the whole song.]

Robert Keller – Feedback using a phase vocoder

Hi, for my feedback with found systems assignment, I used a phase vocoder that I improved (starting from http://sethares.engr.wisc.edu/vocoders/matlabphasevocoder.html). A phase vocoder is essentially a system that slows down or speeds up audio without altering its pitch. Designing an industry-standard phase vocoder is rather difficult, and the implementation below is not without its shortcomings. Namely, slowing down audio with the vocoder below introduces a “phasey” aspect to the resulting audio. The phasey artifacts are initially passable, but if one were to, say, slow the initial audio down by a factor of 2, then speed it up by a factor of 2, and then repeat the process 20+ times, one would drastically alter the original quality of the audio. The final result is incredibly phasey. The initial piece of audio I used was from the song “Message in a Bottle” by The Police. Below are the resulting audio and the Matlab code.
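(For orientation, the slow-down/speed-up round trip is easy to sketch with librosa’s built-in phase vocoder. This Python sketch is an illustrative stand-in, not the Matlab implementation referenced below, and the filename is hypothetical.)

```python
# Slow the audio down by 2, speed it back up by 2, and repeat 20 times;
# the phase errors compound into the "phasey" sound described above.
import librosa
import soundfile as sf

y, sr = librosa.load("message_in_a_bottle.wav", sr=None)

for _ in range(20):
    slow = librosa.istft(librosa.phase_vocoder(librosa.stft(y), rate=0.5))
    y = librosa.istft(librosa.phase_vocoder(librosa.stft(slow), rate=2.0))

sf.write("after_20_round_trips.wav", y, sr)
```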

Here is the script I used to generate the audio:

Original Audio:

Audio passed through phase vocoder once:

Audio passed through twice:

Audio passed through three times:

Audio passed through nine times:

Audio passed through fifteen times:

Audio passed through twenty times:

Audio passed through thirty times:

Hoffer_Assignment1_feedback-recursion

I AM SITTING AT MY COMPUTER. ALL DAY.

Similar to the YouTube codec edition of “I am sitting in a room,” I decided to focus on encoding and decoding within the computer world. No real things! Not allowed. Except the original sample loop, which is real:

This loop is a simple keyboard figure and some tin cans with piezo elements taped to them, looped and run through an eighth-note delay. It’s a little noisy, has a mix of a room mic and a direct line– something pleasantly human, ready to be turned into Robot Time.

TEST 1 – AUDIO TO MIDI

Ableton Live has a cool feature where you can take any audio source and turn it into MIDI notes. The software comes with settings for Melody, Harmony, and Drums– so I assume it works a little better when you have an audio source that is one of those things. Instead, we have this one loop. The Harmony setting seemed to result in the largest number of notes getting encoded from the loop, so I decided to start there.

So we get a scrambly pile of weird MIDI notes. Rather than play the MIDI back through an external instrument, I thought it would be more true to the project to somehow use the initial audio with the resulting MIDI to make the next generation of audio. My first idea was to use a vocoder on the original audio, using the MIDI pile as the carrier– that way I could use the full original audio, and avoid the intrusion of a software instrument. G1 sounded like this:

Cool! Sort of like a Harry Partch version. But I got sidetracked by using the original audio as a sample to be triggered by the MIDI. This would mean triggering pitched versions of the first few fractions of a second of the original audio… Maybe less faithful to Lucier’s idea of serially processing the entire original signal, but it sounded nuts so I couldn’t resist. Here’s the full output, from the original through seven regenerations:

So the natural state for this system is… silence. This is because of the MIDI encoder– it tends to leave areas silent if the audio is too fast, or too complex, or not percussive enough… I think. Anyway, it starts to insert rests after a few generations, including at the beginning of the loop. To keep things from falling silent too quickly, I allowed myself a cheat of setting the sampler to play back starting at the first recognizable audio, rather than from the now-silent front of the original loop. Also, oddly, the MIDI’s pitch tended to come out a perfect fifth above the original file. Recursion of that feature also took us out of the piano range pretty quickly, so I corrected the resulting audio down 7 semitones with every generation as well.

TEST 2 – STRETCH CONTRACT

Over to Logic! I’m really into using the natural glitchiness of pitch-maintaining time-stretch algorithms when they are stretched over-long. I wondered what would happen if I used Logic’s Stretch Time to stretch the loop, printed it, then used the same process to contract it. I’d then print the contracted version, stretch it again, etc. etc. The time stretcher maintains sample rate, so when the loop is stretched to double its length, you get an equal number of original samples and algorithmically generated “guess” samples. The contraction would then erase half of those samples. Would the system selectively remove the guess samples, or would it get a mixture? Would I eventually get a loop that was mostly guesses? I decided to go with a 2:1 stretch.

After eight regenerations, the short answer is “no”. Hard attacks were blurred by the time stretch algorithm– no surprise to me– but a lot of the other primary sounds sounded pretty much the same after a bunch of iterations. Fascinatingly, the biggest change was a reduction in the noise floor: the final loops feel like a gated version of the original, but like a really transparent gate.

TEST 3 – MP3-O-RAMA

Again, inspired by the YouTube iteration file. In the misty past, I’ve tried recursive mp3 encodings– and indeed, after a few hundred recursions you sound like a total monster. I thought it might be interesting to see what the difference was between an mp3 and the original, and somehow iterate that process.

I created an mp3 version of the loop, inverted it, and combined it with the original. Therefore, anything that made it into the mp3 identical to the original will be cancelled out, and we should be left with just the difference between the tracks. I’d then take that resulting ‘difference track’ and use it as the original audio for the same process, over and over. Through this process, we should eventually get something more and more like the character of the mp3 encoder itself. In the photo above, the top track is the generations of “original” audio, and the bottom is the mp3 versions. Here’s what the top line sounds like, at 20 regenerations:

Spooky. But not exactly the distilled nature of mp3. That, it turns out, was what started to come out of the bottom line. Mp3 re-encodings of the vague rhythmic noise heard above resulted in some pretty isolated examples of mp3 random pitch generation:

Some technical notes: to maximize the amount of audio that would show up in the inversions, I used a very low-kbps mp3 encoder. In addition, after I was finished with recursion, I normalized the resulting samples to boost them up to a more audible level– boosting overall level, but more or less maintaining dynamic range.
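A rough sketch of one difference-extraction pass, for anyone who wants to try it (pydub with ffmpeg handles the mp3 round trip; the filenames are placeholders, 16-bit source audio is assumed, and the crude end-trim alignment is an assumption, since mp3 encoders pad the stream):

```python
# Encode at a very low bitrate, decode, invert, and sum with the
# original; whatever the codec preserved cancels out.
import numpy as np
from pydub import AudioSegment

seg = AudioSegment.from_wav("loop.wav")
seg.export("loop_lo.mp3", format="mp3", bitrate="32k")
lossy_seg = AudioSegment.from_mp3("loop_lo.mp3")

orig = np.array(seg.get_array_of_samples(), dtype=np.float32)
lossy = np.array(lossy_seg.get_array_of_samples(), dtype=np.float32)
n = min(len(orig), len(lossy))             # crude alignment: trim the padding

diff = orig[:n] - lossy[:n]                # invert-and-sum as a subtraction
diff /= np.max(np.abs(diff)) + 1e-12       # normalize, as described above
out = (diff * 32767).astype(np.int16)
AudioSegment(out.tobytes(), frame_rate=seg.frame_rate,
             sample_width=2, channels=seg.channels
             ).export("difference.wav", format="wav")
```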

 

Assignment 1 – Famous Dex likes liquids

For my first assignment, I decided to expand on “I am sitting in a room.” I wanted to compare how a feedback loop would be impacted by placing various drinks between the speaker and mic.

My setup was as follows. I had a cardboard box. In opposite corners, I placed a diaphragm speaker and a simple cardioid mic with a foam filter. I would create my wall of various drinks between the speaker and mic, and close the box. For the input sound, I used a clip of a trap artist named Famous Dex recording ad-libs for a song of his. Why? Cuz it’s funny.

For the liquids, I chose Spindrift and almond milk. Why? Cuz I like those drinks. The containers and the method by which they were set up were also different, which I expected to impact the output.

Here were the results of each after 10 iterations.

SPINDRIFT

Audio:

Spectral analysis:

ALMOND MILK

Audio:

Spectral analysis:

 

My breakdown

With the Spindrift, you can see many spikes in the frequency analysis. The result is also somewhat intelligible, and you can hear the original sound to some degree. I think this is because of how I set up the cans of Spindrift– there were three different media through which sound could pass: aluminum, the liquid, and air (there were gaps between the cans). Because of this, many different frequencies were able to find ways through the barrier and stay inside the system. In fact, I had to turn down the amplitude often to avoid clipping, because the system’s amplitude was not stable.

With the almond milk, there was less flexibility in the ways the sound could stay in the system. Because of this, there is a spike at a particular frequency (about A#7), which, I guess, is the almond milk frequency. You can hear this in the sound as a cool whistling tone. In contrast with the Spindrift, I actually had to turn up the amplitude of the sound to prevent it from becoming too quiet. I think that may be the reason you can hear some glitches in the sound. Also, since most frequencies are quickly dampened by the system, Dexter’s voice is practically unrecognizable.

I found this to be a super cool project; one variable I did not consider was that the amplitude would be unstable. In retrospect, this totally makes sense mathematically– unless the amplitude is totally unchanged in each iteration, it will exponentially grow or decay. I expect to use these principles in some of my music; feedback systems in live performance, in particular, seem like an area of interest for me.
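For anyone rebuilding the rig, the play/record/re-level loop might look like this (a sketch using the sounddevice package; device selection is omitted, the filename is hypothetical, and the peak normalization stands in for the manual gain-riding described above):

```python
# Play the current clip through the speaker in the box, record the mic,
# re-level, and repeat. Without the re-leveling, any loop gain other
# than exactly 1 grows or decays exponentially.
import numpy as np
import sounddevice as sd
import soundfile as sf

audio, sr = sf.read("famous_dex_adlibs.wav")

for i in range(1, 11):
    recorded = sd.playrec(audio, samplerate=sr, channels=1)
    sd.wait()                                      # block until the pass finishes
    recorded = recorded[:, 0]
    recorded /= np.max(np.abs(recorded)) + 1e-12   # normalize each round
    sf.write(f"round_{i:02d}.wav", recorded, sr)
    audio = recorded
```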

More audio clips

Spindrift rounds 4, 6, 8.

Almond milk rounds 4, 6, 8.

Assignment 3: Convolve it

Transform an audio signal by convolving it with four different impulse response (IR) recordings. You should make at least two of your IR recordings by using portable audio recorders to record the reverberations of a “pop” in two different acoustic spaces. Try to find unique acoustic spaces that will create interesting reverberations. The other two IR recordings can be more experimental. For example, one can get interesting results by treating musical sounds as if they were IR recordings.

To deliver your work present:

• The original signal
• The original signal convolved through 4 different IR recordings
• The 4 IR recordings, and a brief description of how they were produced

Convolution of images is acceptable as well if you’re interested in doing a visual version of this project.
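For reference, the core audio operation is a single convolution; here is a minimal Python sketch (mono WAV files and placeholder filenames assumed):

```python
# Convolve a dry signal with a recorded impulse response.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("original.wav")          # the original signal
ir, _ = sf.read("stairwell_pop.wav")       # one IR, e.g. a recorded "pop"

wet = fftconvolve(dry, ir, mode="full")    # apply the space to the signal
wet /= np.max(np.abs(wet)) + 1e-12         # normalize to prevent clipping
sf.write("convolved.wav", wet, sr)
```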