Proposal 1 – Anish Krishnan

The cv.jit.faces computer vision object lets me track faces and find their positions. I want to apply the lessons we have learned over the past few weeks to alter the face, or the pixels surrounding it, in a live camera feed or video. That would make many cool things possible, such as (a rough sketch of the first idea follows the list):
– Blur all faces in any video (for online publishing)
– Enhance lighting around face and darken background
– Use convolution and time-shifting to make the frames of the face lag, while the rest of the video progresses normally
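
This isn't the Max patch itself, but a minimal Python/OpenCV sketch of the face-blur idea, using OpenCV's bundled Haar cascade as a stand-in for cv.jit.faces; the camera index, kernel size, and detector parameters are illustrative.

```python
# Face-blur sketch: detect faces per frame, blur only those pixels.
# OpenCV's Haar cascade stands in for Max's cv.jit.faces here.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # live camera; pass a filename for a video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Replace each detected face region with a heavily blurred copy.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    cv2.imshow("blurred faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The same detected boxes could drive the other two ideas as well: brighten inside the box and darken everything outside it, or feed the box region through a frame delay so the face lags the rest of the video.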

Project 1 Proposal – Sarika Bajaj

For my first project, I would like to use the Kinect, Max, and a projector to create a generative art piece that enables some sort of live interaction. Ideally, I would like to create a piece similar to the one depicted in this picture:

Depending on whether particle interaction, shape manipulation, or something else proves easier to bring up, I might pivot the project in one of those directions. At a minimum, however, I would like to build a system that creates a real-time interactive piece with the Kinect and Max.

Assignment 3 – Kevin Darr

I decided to write a short drum+synth loop in Ableton Live to use as the original audio to be convolved. Here is the loop (2 iterations).

To get my impulse responses, I traveled into Schenley Park. This impulse was recorded on the trail underneath Panther Hollow Road.

Here is what the original sounds like under the bridge:


The next impulse was recorded near Panther Hollow Lake. Notice the background insect/bird noises.

Here is the original played next to the lake:


For this “impulse” I recorded the sound of the stream that flows into the lake.

The convolution (my favorite of all the convolutions):


Finally, I attempted to convolve the original with the sound of wind, but I didn't feel like finding a good sample. So instead I used the original recording itself as the IR and convolved the loop with itself. Here is the audio:


(Shout out to my trusty assistant Ben for being the balloon popper.)
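
The class patch does all of this in Max, but the underlying operation is just convolving the dry loop with each recorded impulse response. Here is a minimal offline sketch in Python, assuming the scipy and soundfile packages and placeholder file names:

```python
# Offline convolution reverb: convolve the dry loop with a recorded IR,
# then peak-normalize so the result does not clip.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("drum_synth_loop.wav")      # placeholder file names
ir, sr_ir = sf.read("panther_hollow_ir.wav")
assert sr == sr_ir, "resample first if the sample rates differ"

# Mix to mono for simplicity; a full patch would convolve per channel.
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

wet = fftconvolve(dry, ir)        # output length: len(dry) + len(ir) - 1
wet /= np.max(np.abs(wet))        # peak-normalize
sf.write("loop_under_bridge.wav", wet, sr)
```

The last experiment above, using the recording as its own impulse response, is the same call with the loop passed in for both arguments.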

Project 1 – Adam J. Thompson

I am interested in creating a patch that uses a Kinect, other infrared cameras, and/or infrared sensors to map audio-responsive video to moving targets. I've been toying with the potential for mapping video onto musical instruments, with the video content generated and animated in response to the music the instruments are playing. As the instruments inevitably shift in space during a performance, the video sticks to them through IR mapping. I'm also curious about how the video content might represent, and shift according to, the harmonies at work in the music itself.

Proposal 1 – Jonathan Namovic

For Project 1, I would like to combine the things we learned in the past units to make a homemade effects launch pad. It will use impulse signals to generate different tones and noises, and filter or convolve them to create different effects. I would also include a time-shifting component that lets users create loops and then play over them, as a sort of controlled infinite feedback (sketched below). The pad will also have a basic record function so people can export the music they create.
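
As a rough illustration of the looping idea (not the actual pad, which would live in Max), here is a sketch of a loop buffer with overdub and a feedback gain below 1, which is what keeps the "infinite" feedback controlled:

```python
# Looper sketch: a circular buffer holds the loop; each input block is
# overdubbed onto it with feedback < 1 so old layers decay slowly
# instead of building up without bound ("controlled infinite feedback").
import numpy as np

class Looper:
    def __init__(self, loop_len, feedback=0.9):
        self.buf = np.zeros(loop_len)   # one loop's worth of samples
        self.pos = 0
        self.feedback = feedback

    def process(self, block):
        out = np.empty_like(block)
        for i, x in enumerate(block):
            y = self.buf[self.pos]                       # play back the loop
            out[i] = y + x                               # loop + live input
            self.buf[self.pos] = y * self.feedback + x   # overdub new layer
            self.pos = (self.pos + 1) % len(self.buf)
        return out
```

A basic record/export function would just append each processed block to an array and write it to a file at the end.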

Assignment 3 – Jonathan Namovic

I decided to base my project around my trip to 18-090 from my first class in Wean. The signal I chose to convolve is a sound clip of me running with a backpack on.

Me running

I took my two real-world impulse recordings at the entrance to Baker Hall and in my classroom in Wean.

Wean Classroom

Running in Wean

Baker Hall Entrance

Running in Baker


I then decided to use a snippet from the song "Prom Night" by Anamanaguchi, for its airy sound, to emulate the sound of running in a dream.

Snippet from “Prom Night”

Running in a dream

I tried to use the sound of a toilet flush as my last sound, hoping to create the sound of running in a cave, but it ended up sounding stuffier than planned, so I decided to call the last convolution "Running in a Nightmare."

Toilet Flush

Running in a Nightmare

For recording, I used the convolution patch we studied in class with an sfrecord~ object to capture the audio.

Assignment 3 – Anish Krishnan

For this assignment, I transformed an audio recording of my friend singing "Sorry" by Justin Bieber by convolving it with four different impulse responses. The first IR was a balloon popping in the CFA building, and the second was a balloon popping in Porter Hall. For the third IR, I recorded a speaker playing the sound of a balloon popping inside a closed room. The fourth IR was a recording of footsteps on hardwood.

I have attached the respective audio recordings and the gist for my project below.

Original Audio Track (Friend singing):


IR1 and Convolved Signal (Balloon popping in CFA):


IR2 and Convolved Signal (Balloon popping in Porter Hall):


IR3 and Convolved Signal (Speaker playing balloon pop sound):


IR4 and Convolved Signal (Footsteps on hardwood):


Code:

Project Proposal 1 – Isha Iyer

I have two ideas for projects that I am interested in.

The first involves analyzing the frequencies and rhythms of an audio file to create special effects with the lights on the ceiling of the Media Room that correspond to the tempo and other elements of the audio (one way to extract such a control signal is sketched below).
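
As a hypothetical starting point, a short-time RMS envelope of the audio rises and falls with the rhythm and could be mapped to brightness; how the values actually reach the Media Room lights depends on their control protocol, so this sketch just prints them. The file name and frame size are placeholders.

```python
# Envelope-follower sketch: per-frame RMS of the audio becomes a 0..1
# brightness value that a light controller could consume.
import numpy as np
import soundfile as sf

audio, sr = sf.read("piece.wav")        # placeholder file name
if audio.ndim > 1:
    audio = audio.mean(axis=1)

frame = 1024                            # about 23 ms at 44.1 kHz
rms = np.array([
    np.sqrt(np.mean(audio[i:i + frame] ** 2))
    for i in range(0, len(audio) - frame, frame)
])
brightness = rms / rms.max()            # normalize to 0..1
for b in brightness[:10]:               # stand-in for the light protocol
    print(f"light level: {b:.2f}")
```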

After seeing the video in class last week of a saxophone sound being constructed, I also became interested in learning how to use Fourier transforms to reconstruct the sounds of different musical instruments. I am not sure exactly how this would work (one possible approach is sketched below), so the first idea may be more doable for Project 1.
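
One possible approach, under the assumption that a sustained instrument tone is well approximated by its strongest harmonics: take an FFT of one note, keep the loudest partials, and rebuild the sound as a sum of sines (additive resynthesis). File names and the partial count are illustrative.

```python
# Additive-resynthesis sketch: analyze a sustained note with an FFT,
# keep the N strongest partials, and resynthesize them as sines.
import numpy as np
import soundfile as sf

note, sr = sf.read("sax_note.wav")      # placeholder: one sustained note
if note.ndim > 1:
    note = note.mean(axis=1)

spectrum = np.fft.rfft(note * np.hanning(len(note)))
freqs = np.fft.rfftfreq(len(note), 1 / sr)

n_partials = 20
peaks = np.argsort(np.abs(spectrum))[-n_partials:]   # strongest FFT bins

t = np.arange(len(note)) / sr
resynth = sum(
    np.abs(spectrum[k]) * np.cos(2 * np.pi * freqs[k] * t + np.angle(spectrum[k]))
    for k in peaks
)
resynth /= np.max(np.abs(resynth))      # peak-normalize
sf.write("sax_resynth.wav", resynth, sr)
```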

Assignment 3 – Isha Iyer

I used a recording of Prokofiev's Romeo and Juliet Suite by the London Symphony Orchestra as my original signal. The two IR "pop" recordings I used were taken outside CFA and in the CFA hallway. There did not seem to be much difference between the two resulting signals. The main difference was volume: the signal convolved with the pop taken outside CFA was much quieter than the one convolved with the hallway pop. This does not seem to come through in this post for some reason.

Original Signal:

CFA exterior followed by convolved original:

CFA hallway followed by convolved signal:

The other two recordings I took were of wind while standing outside and water being poured out of a water bottle into the sink. I then experimented with recordings of fire and glass that I found online.

Fire:

Glass:

Water:

Wind:

The resulting signals after convolving with glass, water and wind added interesting effects to the original piece. Convolving with fire turned the original signal into cacophony.

Here is the modified version of 00 Convolution-Reverb I used to do this assignment. I kept all my impulse responses and the original signal in a folder named "impulses".
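
The patch itself is a Max document and isn't reproduced here; as a rough offline equivalent, here is a Python sketch that convolves the original signal with every impulse response in the "impulses" folder, peak-normalizing each result (which would also even out the volume difference noted above). File names and folder layout are assumptions.

```python
# Batch convolution: run one signal against every IR in "impulses".
from pathlib import Path
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

sig, sr = sf.read("impulses/original.wav")   # placeholder file name
if sig.ndim > 1:
    sig = sig.mean(axis=1)

for ir_path in Path("impulses").glob("*.wav"):
    if ir_path.name == "original.wav":
        continue                             # skip the dry signal itself
    ir, _ = sf.read(ir_path)
    if ir.ndim > 1:
        ir = ir.mean(axis=1)
    wet = fftconvolve(sig, ir)
    wet /= np.max(np.abs(wet))               # peak-normalize each result
    sf.write(f"convolved_{ir_path.stem}.wav", wet, sr)
```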


Assignment 3 – Alex Reed

Mostly Screaming