Category Archives: Assignments

Assignment 3 – Supposition of Eggs in D♭ Minor, alternate title: Eggs Over my Guitar

I was cooking eggs and really liked the sound they were making in the pan. I then recorded my brother asking me if I was cooking eggs and decided to use that as my original signal.

This is the original signal:

For the two “normal” IRs, I decided to convolve my eggs with a recording of me knocking my hand on my ceramic bathtub and one of me clapping my hands in my basement.
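For anyone curious what the convolution itself boils down to, here is a minimal offline sketch in Python — the scipy/soundfile approach and the file names are stand-ins for illustration, not the exact patch used for these renders:

```python
# Minimal convolution-reverb sketch; "eggs.wav" and "bathtub_knock.wav" are
# placeholder names for the original signal and the IR (both assumed mono).
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

signal, sr = sf.read("eggs.wav")          # original signal
ir, ir_sr = sf.read("bathtub_knock.wav")  # impulse response
assert sr == ir_sr, "resample one file so the sample rates match"

wet = fftconvolve(signal, ir)   # every sample of the signal excites the IR
wet /= np.max(np.abs(wet))      # normalize so the result doesn't clip
sf.write("eggs_in_bathtub.wav", wet, sr)
```
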
The eggs in my bathtub sound like this:

The eggs in my basement sound like this:

I had recorded the actual sound of eggs frying in a pan, so I decided to use this as my IR. It created a nutritious soundscape.
This is the IR:

This is the soundscape:

Here is an ambient piece called Eggs Over my Guitar. The IR is a recording of me playing this aimless twinkly guitar noodle:

Eggs Over my Guitar:

Here is a bonus track called Playing the Guitar with my Dead, Dried Flowers. I had originally recorded myself caressing a vase of dead flowers to use as my IR for the eggs, but it sounded much cooler over this cheesy guitar phrase I played:

This is the IR of the dead flowers:

And finally the bonus track:

Project 1 Proposal – Kevin Darr

I would like to make a project involving the use of a MIDI Fighter 3D to control lights in the media lab as well as trigger samples. This project is inspired by the work of Shawn Wasabi, an electronic artist and performer known for his work with MIDI Fighter controllers. Here is a link to one of his works using the MIDI Fighter 64.
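As a rough sketch of the triggering half, pad presses could be read over MIDI and mapped to samples — the libraries, device handling, and file names below are my assumptions, not a final design:

```python
# Hypothetical sketch: read pad presses from the MIDI Fighter 3D and fire samples.
import mido
import simpleaudio  # one possible playback library

# pad MIDI note -> preloaded sample (names are placeholders)
pads = {36: simpleaudio.WaveObject.from_wave_file("kick.wav")}

with mido.open_input() as port:  # first available MIDI input device
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0 and msg.note in pads:
            pads[msg.note].play()  # a light cue could be sent here as well
```
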


Assignment 3 – Kevin Darr

I decided to write a short drum+synth loop in Ableton Live to use as the original audio to be convolved. Here is the loop (2 iterations).

To get my impulse responses, I traveled into Schenley Park. This impulse was recorded on the trail underneath Panther Hollow Road.

Here is what the original sounds like under the bridge:


The next impulse was recorded near Panther Hollow Lake. Notice the background insect/bird noises.

Here is the original played next to the lake:


For this “impulse” I recorded the sound of the stream that flows into the lake.

The convolution (my favorite of all the convolutions):


Finally, I attempted to convolve the original with the sound of wind, but I didn’t feel like finding a good sample, so instead I used the original recording as its own IR and convolved it with itself. Here is the audio:


(Shout out to my trusty assistant Ben for being the balloon popper.)
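As a footnote, that no-wind version above is just self-convolution, which is only a few lines outside of Max — a sketch, assuming the loop lives in a file called original.wav (a placeholder name):

```python
# Self-convolution sketch: the original loop acts as its own impulse response.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

x, sr = sf.read("original.wav")
if x.ndim > 1:
    x = x.mean(axis=1)      # fold to mono so the convolution stays 1-D
y = fftconvolve(x, x)       # convolve the loop with itself
y /= np.max(np.abs(y))      # pull the peak back into range
sf.write("self_convolved.wav", y, sr)
```
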

Project 1 – Adam J. Thompson

I am interested in creating a patch that uses a Kinect, other infrared cameras, and/or infrared sensors to map audio-responsive video to moving targets. I’ve been toying around with the potential for mapping video to musical instruments, with the video content generated and animated in response to the music the instruments are playing. As the instruments inevitably shift in space during any given performance, the video sticks to them through IR mapping. I’m also curious about how the video content might represent, and shift according to, the harmonies at work in the music itself.

Proposal 1-Jonathan Namovic

For Project 1 I would like to combine the things we learned in the past units to make a homemade effects launchpad. It will use impulse signals to generate different tones and noises and filter or convolve them to create different effects. I would also include a time-shifting component allowing users to create loops and then play over them as a sort of controlled infinite feedback. The pad will also have a basic record function so people can export the music they create.
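A bare-bones sketch of that controlled-feedback looping idea, independent of any particular tool (the block-based structure and parameter values are assumptions):

```python
# Minimal overdub looper: playback is fed back into the loop buffer at less
# than unity gain, so layers pile up but decay instead of running away.
import numpy as np

class Looper:
    def __init__(self, loop_len, feedback=0.8):
        self.buffer = np.zeros(loop_len)  # one loop's worth of samples
        self.pos = 0
        self.feedback = feedback          # < 1.0 keeps the feedback controlled

    def process(self, block):
        """Mix incoming audio with the loop and overdub it into the buffer."""
        out = np.empty_like(block)
        for i, x in enumerate(block):
            y = self.buffer[self.pos]                      # loop playback
            self.buffer[self.pos] = y * self.feedback + x  # overdub new input
            out[i] = x + y
            self.pos = (self.pos + 1) % len(self.buffer)
        return out
```
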

Assignment 3-Jonathan Namovic

I decided to base my project around my trip to 18-090 from my first class in Wean. The signal I chose to convolve is a sound clip of me running with a backpack on.

Me running

I took my two real world impulse recordings from the entrance to Baker Hall and my classroom in Wean.

Wean Classroom

Running in Wean

Baker Hall Entrance

Running in Baker


I then decided to use a snippet from a song called “Prom Night” by Anamanaguchi for its airy sound to emulate the sound of running in a dream.

Snippet from “Prom Night”

Running in a dream

I tried to use the sound of a toilet flush as my last sound to create the sound of running in a cave, but it ended up sounding stuffier than originally planned, so I decided to call the last convolution Running in a Nightmare.

Toilet Flush

Running in a Nightmare

For recording, I used the convolution patch we studied in class with an sfrecord~ object to capture the audio.

Project Proposal 1 – Isha Iyer

I have two ideas for projects that I am interested in.

The first involves analyzing the frequencies and rhythms from an audio file to create special effects with the lights on the ceiling in the Media Room that correspond with the tempo and other elements of the audio.
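A sketch of how the analysis half of that might look, assuming librosa for the tempo and beat detection (the file name is a placeholder):

```python
# Pull a global tempo and per-beat timestamps that light cues could fire on.
import librosa

y, sr = librosa.load("song.wav")                    # placeholder audio file
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)  # tempo estimate + beat frames
beat_times = librosa.frames_to_time(beats, sr=sr)   # beat positions in seconds
print("estimated tempo:", tempo)
print("first few light cues at:", beat_times[:4])
```
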

After seeing the video in class last week of a saxophone sound being constructed, I also became interested in learning how to use Fourier transforms to reconstruct the sounds of different musical instruments. I am not sure exactly how this would work, so maybe the first idea would be more doable for Project 1.
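From what I understand so far, a basic version of that reconstruction would analyze one sustained note, keep its strongest partials, and resynthesize them as sine waves — a sketch, with the file name and partial count as placeholders:

```python
# Crude additive resynthesis: FFT one frame of a note, keep the 20 strongest
# partials, and rebuild two seconds of tone from those sinusoids alone.
import numpy as np
import soundfile as sf

x, sr = sf.read("sax_note.wav")
if x.ndim > 1:
    x = x.mean(axis=1)                   # analyze a mono signal
frame = x[:4096] * np.hanning(4096)      # one windowed analysis frame
spectrum = np.fft.rfft(frame)
freqs = np.fft.rfftfreq(4096, 1 / sr)

strongest = np.argsort(np.abs(spectrum))[-20:]  # indices of the top partials
t = np.arange(2 * sr) / sr                      # two seconds of output
tone = sum(np.abs(spectrum[k]) * np.cos(2 * np.pi * freqs[k] * t
                                        + np.angle(spectrum[k]))
           for k in strongest)
sf.write("resynthesized.wav", tone / np.max(np.abs(tone)), sr)
```
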

Assignment 3 – Isha Iyer

I used a recording of Prokofiev’s Romeo and Juliet Suite by the London Symphony Orchestra as my original signal. The two IR “pop” recordings I used were taken outside CFA and in the CFA hallway. There did not seem to be much difference between the two resulting signals. The main difference was volume: the signal convolved with the pop taken outside CFA was much quieter than the one convolved with the hallway pop. This does not seem to come through in this post for some reason.

Original Signal:

CFA exterior followed by convolved original:

CFA hallway followed by convolved signal:

The other two recordings I took were of wind while standing outside and water being poured out of a water bottle into the sink. I then experimented with recordings of fire and glass that I found online.

Fire:

Glass:

Water:

Wind:

The resulting signals after convolving with glass, water and wind added interesting effects to the original piece. Convolving with fire turned the original signal into cacophony.

Here is the modified version of 00 Convolution-Reverb I used for this assignment. I kept all my impulse responses and my original signal in a folder named “impulses”.
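For reference, what the patch does for each file corresponds to something like this offline loop — a sketch assuming the same “impulses” folder layout, with file names as stand-ins:

```python
# Convolve the original signal with every IR in the "impulses" folder.
from pathlib import Path
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

def mono(a):
    return a.mean(axis=1) if a.ndim > 1 else a

signal, sr = sf.read("impulses/original.wav")  # placeholder file name
signal = mono(signal)
for ir_path in Path("impulses").glob("*.wav"):
    if ir_path.name == "original.wav":
        continue
    ir, _ = sf.read(ir_path)
    wet = fftconvolve(signal, mono(ir))
    wet /= np.max(np.abs(wet))  # normalizing also evens out the volume gap
    sf.write(f"convolved_{ir_path.stem}.wav", wet, sr)
```
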

Proposal 1 – Willow Hong

I want to capture motion data using a camera or a Kinect and translate that data into audio signals using Max.

More specifically, I’m interested in using different audio patterns to represent the qualities of people’s movement, that is, how people move between two points in time. For example, dancers need to move their bodies from one gesture to another between two beats. The two ends of this movement are fixed by the choreography, but how the dancers move from one end to the other can vary: the movement can be smooth or jerky, accelerated or decelerated, soft or hard…

Since the differences between movement qualities might be too subtle for the eye to grasp, I want to see if I can analyze the speed (or the changes in speed) of the body parts and map them to different notes/melodies to help people better understand movement qualities. I want to make this project a real-time piece.
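As a first stab at that mapping, frame-to-frame speed could be scaled onto a note range — a sketch, with the position source, frame rate, and ranges all assumed:

```python
# Map per-frame joint speed onto MIDI note numbers: still -> low, fast -> high.
import numpy as np

def speed_to_midi(positions, fps=30, low=48, high=84, max_speed=3.0):
    positions = np.asarray(positions, dtype=float)  # shape (frames, 2) of (x, y)
    speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps
    norm = np.clip(speeds / max_speed, 0.0, 1.0)    # 0 = still, 1 = very fast
    return (low + norm * (high - low)).astype(int)

# e.g. a hand drifting slowly and then darting:
print(speed_to_midi([(0.0, 0.0), (0.01, 0.0), (0.05, 0.02), (0.4, 0.3)]))
```
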

Assignment 3 – Adam J. Thompson

I’m an unabashed Alfred Hitchcock fanboy, and with October just around the corner, I’ve been in a Psycho mood. For this project, I decided to expand a bit on the requirements and make a 4 x 4 convolution mix-and-match patch which contains the required 4 IR signals (In the Dryer, Studio 201, In the Shower, and Crunching Leaves) and allows the user to match them with each of four short excerpts from Psycho (A Boy’s Best Friend, Harm a Fly, Oh God, Mother!, and Scream).

The patch with these amendments looks like this:

This playlist documents each original IR signal, each original Psycho excerpt, and all potential matches between an IR signal and Psycho excerpt.

Just for fun, and because I’ve been toying around with the jit.gl tutorial patches, I created an animated audio-responsive “3D speaker” to accompany and visualize each convolution as it plays. Here’s a short video of how that appears.

And here’s the gist for the whole shebang.