Category Archives: Assignments

Assignment 3 – Supposition of Eggs in D♭ Minor, alternate title: Eggs Over my Guitar

I was cooking eggs and really liked the sound they were making in the pan. I then recorded my brother asking me if I was cooking eggs and decided to use that as my original signal.

This is the original signal:

Audio Player

For the two “normal” IRs, I decided to convolve my eggs with two recordings: knocking my hand on my ceramic bathtub, and clapping my hands in my basement.
The eggs in my bathtub sound like this:

Audio Player

The eggs in my basement sound like this:

Audio Player

I had recorded the actual sound of eggs frying in a pan, so I decided to use this as my IR. It created a nutritious soundscape.
This is the IR:

Audio Player

This is the soundscape:

Audio Player
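For anyone curious what the convolution step itself is doing: multiplying the spectrum of the dry recording by the spectrum of the IR and inverting the transform is equivalent to convolving the two signals. Here is a minimal Python/NumPy sketch of the idea — an illustration, not the Max patch actually used for these clips:

```python
import numpy as np

def convolve_ir(signal, ir):
    """Convolve a dry signal with an impulse response via FFT,
    then normalize the result to avoid clipping."""
    n = len(signal) + len(ir) - 1              # full convolution length
    wet = np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(ir, n), n)
    return wet / np.max(np.abs(wet))

# Toy example: a single click convolved with a short decaying "room" tail.
dry = np.zeros(8)
dry[0] = 1.0                                   # unit impulse (a "pop")
ir = 0.5 ** np.arange(4)                       # exponential decay: 1, .5, .25, .125
wet = convolve_ir(dry, ir)
```

Convolving a bare click with an IR just returns the IR itself, which is why a balloon pop or a hand clap makes such a convenient impulse for capturing a room.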

Here is an ambient piece called Eggs Over my Guitar. The IR is a recording of me playing this aimless twinkly guitar noodle:

Audio Player

Eggs Over my Guitar:

Audio Player

Here is a bonus track called Playing the Guitar with my Dead, Dried Flowers. I had originally recorded myself caressing a vase of dead flowers to use as my IR for the eggs, but it sounded much cooler over this cheesy guitar phrase I played:

Audio Player

This is the IR of the dead flowers:

Audio Player

And finally the bonus track:

Audio Player

Project 1 Proposal – Kevin Darr

I would like to make a project involving the use of a MIDI Fighter 3D to control lights in the media lab as well as trigger samples. This project is inspired by the work of Shawn Wasabi, an electronic artist and performer known for his work with MIDI Fighter controllers. Here is a link to one of his works using the MIDI Fighter 64.
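One way the pad-to-light-and-sample routing could work is to map each incoming MIDI note number to a grid cell, so a single pad press addresses both a sample slot and a matching light. A rough Python sketch of that mapping — the base note and 4×4 grid size are my assumptions here, not confirmed MIDI Fighter 3D specifics:

```python
def route_pad(note, base_note=36, grid=4):
    """Map a pad's MIDI note number to a (row, col) grid cell.

    The same cell index can then drive both a sample player and a
    light fixture. base_note=36 (C1) is an assumed lowest pad.
    Returns None for notes outside the pad grid.
    """
    index = note - base_note
    if not 0 <= index < grid * grid:
        return None
    return divmod(index, grid)  # (row, col)
```

In Max the equivalent would be a `notein` feeding arithmetic objects, but prototyping the mapping as a pure function makes it easy to test before wiring up hardware.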

 

Assignment 3 – Kevin Darr

I decided to write a short drum+synth loop in Ableton Live to use as the original audio to be convolved. Here is the loop (2 iterations).

Audio Player

To get my impulse responses, I traveled into Schenley Park. This impulse was recorded on the trail underneath Panther Hollow Road.

Audio Player

Here is what the original sounds like under the bridge:

Audio Player

 

The next impulse was recorded near Panther Hollow Lake. Notice the background insect/bird noises.

Audio Player

Here is the original played next to the lake:

Audio Player

 

For this “impulse” I recorded the sound of the stream that flows into the lake.

Audio Player

The convolution (my favorite of all the convolutions):

Audio Player

 

Finally, I attempted to convolve the original with the sound of wind, but I didn’t feel like finding a good sample. So instead I used the original recording as its own IR and convolved it with itself. Here is the audio:

Audio Player

 

(Shout out to my trusty assistant Ben for being the balloon popper.)

Project 1 – Adam J. Thompson

I am interested in creating a patch that uses a Kinect, other infrared cameras, and/or infrared sensors to map audio-responsive video to moving targets. I’ve been toying around with the potential for mapping video to musical instruments, with the video content generating and animating in response to the music that the instruments are playing. As the instruments inevitably shift in space during any given performance, the video sticks to them through IR mapping. I’m also curious about how the video content might represent and shift according to the harmonies at work in the music itself.

Proposal 1-Jonathan Namovic

For Project 1 I would like to combine the things we learned in the past units to make a homemade effects launch pad. It will use impulse signals to generate different tones and noises, and filter or convolve them to create different effects. I would also include a time-shifting component allowing users to create loops and then play over them, as a sort of controlled infinite feedback. The pad will also have a basic record function so people can export the music they create.
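The loop-plus-controlled-feedback idea can be sketched as a fixed-length buffer that sums each new pass onto what is already there, with a feedback gain below 1 so older layers decay instead of blowing up. A minimal Python/NumPy sketch of that structure (my own illustration, not the planned Max implementation):

```python
import numpy as np

class Looper:
    """Fixed-length loop buffer with overdub and decaying feedback.

    Each overdub attenuates the existing material by `feedback`
    and adds the new pass on top -- "controlled infinite feedback".
    """

    def __init__(self, length, feedback=0.8):
        self.buffer = np.zeros(length)
        self.feedback = feedback

    def overdub(self, new_audio):
        # Old layers fade by the feedback factor; the new pass is added.
        self.buffer = self.buffer * self.feedback + new_audio[: len(self.buffer)]
        return self.buffer

# Two passes over a 4-sample loop with 0.5 feedback.
loop = Looper(4, feedback=0.5)
loop.overdub(np.array([1.0, 0.0, 0.0, 0.0]))
out = loop.overdub(np.array([0.0, 1.0, 0.0, 0.0]))
```

With `feedback=1.0` the loop never decays (a classic looper); values below 1 give the "controlled" behavior the proposal describes.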

Assignment 3-Jonathan Namovic

I decided to base my project on my trip to 18-090 from my first class in Wean. The signal I chose to convolve is a sound clip of me running with a backpack on.

Me running

Audio Player

I took my two real world impulse recordings from the entrance to Baker Hall and my classroom in Wean.

Wean Classroom

Audio Player

Running in Wean

Audio Player

Baker Hall Entrance

Audio Player

Running in Baker

Audio Player

 

I then decided to use a snippet from a song called “Prom Night” by Anamanaguchi for its airy sound, to emulate the sound of running in a dream.

Snippet from “Prom Night”

Audio Player

Running in a dream

Audio Player

I tried to use the sound of a toilet flush as my last sound, hoping to create the sound of running in a cave, but it ended up sounding stuffier than originally planned, so I decided to call the last convolution Running in a Nightmare.

Toilet Flush

Audio Player

Running in a Nightmare

Audio Player

For recording, I used the convolution patch we studied in class with an sfrecord~ object to capture the audio.

<pre><code>
----------begin_max5_patcher----------
1922.3oc0Z07iiZCE+7rR8+ADpmpxFYavfcOsG5kJsUUZuUsZ0JmDGF1RvTf
jYlcUm+16y1.Ax.IjLjraSTBeXSd98688y4a+zatycg5QYgqyu57Qm6t6avc
tybO8ctq9F24tQ73xDQgYhtkpnnDo6rpwxD4hMxRY9mkohEv.vbP0Cp1VlHK
KeJSZogabZoqympGNc6l3TXBleXb8ciWYniZwWdKNzs0bs+Zcmblnb48woQe
NWtrzRDBmMGMywmDpOP7q+dOcKJextPccM24e+o2nOBGlMZXXirnPDsGGJkO
VZW2YxT2AAfg3dRebO4B3duffVbumECHSM2mJe.Vguj4KVCqDU9pmcHCiAEw
QohjyBIdEpAbj9PH91BDqjIhmbnHzvvvBQZz4.BdzK.EngDid.mpOD3ciUGb
Jjkti1bmvGfCQmlCqU3CtsbXdGNbzl6n93duKQKGwmyPvKuYNXd3bON7xGvA
+aKN7KO6.1YzoznmbQV8A9yCo.+SQVq+++CCnK.FvXi89UFEjeckX4ytimUn
Wf8slU.ApuuwcNMn96okURh2ImGIhS2yN6D4oPpMGL7Gwe5n49rWpjKKjokh
xXUZK1IfYCKgLRFq7wi0lcFRmYV6yzeVmnDk5SRhKZmVkJOtlvcVQmV9DLf7
gNBUMabVb.+kbTw8pGpgxFQcgXmb0mEkk4wK1VJ2eVQkbpVPokDIakp002uY
f1RfDUZzwEWclNrjxK6Y98N4ZYApuA2.fpdv2FpAfdmfvn5Ez+vwowkwZWBZ
vD0B15LqsvrZzy8qlPkh99SlzDYg.aNX38u+g299YN1qH5q9vzjdKw+UFui3
QZh2g8uwo1T5r.dm3jbhD7ls+n4yYT8CYn7+8OB.E5YcoXsJA+KzaIpDmtV8
LniLLnz0oUauXi4D8mysFR1.nH+HnXPPUwilbKwnarx0hsqWKye1XrAT+XUQ
z.MGqbhdwkgBFSNBtv5TcIFE7cCWd+UCW7u.bIzZn0fKWoruG1WcVRboEVlN
Gz3Kofj.pWqBNYzqCPXBXuJWkMb1ZcF8nYp8RPpqi5DYjL0BK+F7iBHry8xb
4IpqckboZkbC7YPMMbGA4RUhJuJmJMpglyC4zP9LSIEThtrFbafb7pmTlwOV
.yDMvlCcP2VhcsyFqs33TYhM3bqkQ98mH0t3h3Zg6TlhzPtifT7y.MCG3PhX
o77s55SqfeIc4wJewTyAd32KuxP9gWIuxL2KQq2F115M9lGDeyiewAfl4+Y1
QSH5bPgvK2gbUnI1MtuPa1lTFuTktSkrS9roBhuJyUtiuX2y.dtjLZpKe0pc
fAWjgWkLlAODOsPj6dzfNmYpsdWN+FXBw3QrIvwuc6OfFGz5AuSrcUrZcbhz
IeaZJrHwyePry4cIJHDKdLJHmaCzIWNbUcnpdfgzNVIWK.8c2tAUARC+bGKh
5BQQ7xxsZPvDcyuoQAtqUIIpGhRTKDIkxMYpN8g.FNeiHERcHWujqZ3y9gqy
9vRglPp5NDEECnXhLMp79JukDvCAIL.CIb3B1r+cwdrc+SzrFvjVs6vMKFPt
gVE+yVAjc5S8sPJigTZKykvSW0.DmOccaoQQ7WkN+LdhxP9RBVWUgNgYNDbk
xPFrkjZsimGQv3ibRcU5CgJMcFzlDPSdbsh301Bbn.XAutjWGDuryA3hzWrM
2FJoG3.TrPsMeYMDU4k0oEwVAJrfkPsZ9Ga7szn61q3ZzjiLFxQmLxgGE8vS
G8BuszS2SaHbyHnGtcGzWIMkhglpUvX43qxJfOlE.q6j1DuJSA4iTY1P4Xs4
GlYrBobRyUShThMJgDaxzJnigdD7jRuS6CAMYzyabDjOslYmlfzAzxw2LKcB
c.S8IYI3eacdSFWrI+oifiJZwgZeG3KIziO2rE8lRtnLbyUS2Zz6jNSNwZz9
GFxydfEZ1CZyUS2Zjbx0n+wWiAnlU0LyUnIeMdZmxdGeMVsC2TiHOHfXk07I
JtAAOJCfJOCSAAQilfST9RiJ3MY5x+zDkM3jh8oKzAdTNwBmN5MJcl94up5I
DYY6j4EUOgkTP4Wew1+9vY1qiSsW6YuNWp6Os8QrMv1UjCE6TBU5rM2V+xiA
U69joz87zsw0glzLqlzlZcNrVllxt17jsVHKu0zPhtHRylMnaYNvFxl+6AyZ
8UmsZXsJsoG8..8GpTwRUqZBWD0ytW.uB8qOiiO3mbQz53jjlGqSCQpKR0MJ
WrR+mYYeGCZQGzbBGg4AZR3g3Lju8L3VGPrpGCW+bLJFY1WElOg3aOyCg7I8
9XjVrUmuZMQQZjsDTRXm9hjqxT4MsDYtGe+SrsT0vdMI52tuGypUrygozpF2
p6aZsZyS10XnQgv1Ht+Rp6hTyei1AzKzBYc6QpXWnpaJGgX6EYhkKkokcj.L
BEGpwuPFlgsaUEm5WuUUSIq7f1TwYiJUUjse2VlNc793twomzUoDpJhQzSkv
X3JcRJ1y6PC.YhbygDLjg3XyivPHBlYosGNf8ZLHAKPFg66Y2SQFkFZOiPvj
eXrJGkWitVj.1f8BQF0NFILvyhbf.fv+Qz.UaO.G9O.64yKB
-----------end_max5_patcher-----------
</code></pre>

Project Proposal 1 – Isha Iyer

I have two ideas for projects that I am interested in.

The first involves analyzing the frequencies and rhythms of an audio file to create special effects with the lights on the ceiling of the Media Room that correspond to the tempo and other elements of the audio.
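The rhythm-analysis half of this idea can be prototyped outside Max before any lights are involved: track the frame-to-frame rise in signal energy and treat the peaks as beat candidates. A rough Python sketch of that idea — a simplification of real onset detection, which would typically use per-band spectral flux:

```python
import numpy as np

def onset_envelope(x, frame=512):
    """Crude beat/onset strength: the positive change in energy
    between consecutive frames. Rises in energy mark candidate
    beats that could trigger light cues."""
    frames = x[: len(x) // frame * frame].reshape(-1, frame)
    energy = (frames ** 2).sum(axis=1)
    flux = np.diff(energy, prepend=energy[0])
    return np.maximum(flux, 0.0)   # keep only increases in energy

# Toy signal: silence with two loud bursts -> two peaks in the envelope.
x = np.zeros(8192)
x[1024:1536] = 1.0
x[5120:5632] = 1.0
env = onset_envelope(x)
```

Each nonzero entry in `env` marks a frame where the audio suddenly got louder, which is exactly the moment a light effect would fire.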

After seeing the video in class last week of a saxophone sound being constructed, I also became interested in learning how to use Fourier transforms to reconstruct the sounds of different musical instruments. I am not sure exactly how this would work, so maybe the first idea would be more doable for Project 1.
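One starting point for the Fourier-reconstruction idea: keep only the strongest frequency bins of a recorded tone and rebuild the sound from those alone. This is a crude sketch of additive resynthesis in Python/NumPy — my own illustration, not the method from the class video:

```python
import numpy as np

def resynthesize(x, keep=4):
    """Rebuild a sound from only its `keep` strongest frequency bins.

    A toy version of additive resynthesis: prune the spectrum down
    to its loudest partials, then inverse-FFT back to audio.
    """
    spec = np.fft.rfft(x)
    pruned = np.zeros_like(spec)
    top = np.argsort(np.abs(spec))[-keep:]   # indices of the loudest partials
    pruned[top] = spec[top]
    return np.fft.irfft(pruned, len(x))

# A pure tone sitting exactly on an FFT bin survives heavy pruning intact.
n = 2048
tone = np.sin(2 * np.pi * 20 * np.arange(n) / n)   # 20 cycles in the window
rebuilt = resynthesize(tone)
err = np.max(np.abs(rebuilt - tone))
```

A real instrument would need many more partials plus an amplitude envelope per partial, but this shows the basic keep-the-loudest-bins mechanic.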

Assignment 3 – Isha Iyer

I used a recording of Prokofiev’s Romeo and Juliet Suite by the London Symphony Orchestra as my original signal. The two IR “pop” recordings I used were taken outside CFA and in the CFA hallway. There did not seem to be much difference between the two resulting signals. The main difference was volume: the signal convolved with the pop taken outside CFA was much quieter than the one convolved with the hallway pop. This does not seem to come through on this post for some reason.

Original Signal:

Audio Player

CFA exterior followed by convolved original:

Audio Player Audio Player

CFA hallway followed by convolved signal:

Audio Player Audio Player

The other two recordings I took were of wind while standing outside and water being poured out of a water bottle into the sink. I then experimented with recordings of fire and glass that I found online.

Fire:

Audio Player Audio Player

Glass:

Audio Player Audio Player

Water:

Audio Player Audio Player

Wind:

Audio Player Audio Player

The resulting signals after convolving with glass, water and wind added interesting effects to the original piece. Convolving with fire turned the original signal into cacophony.

Here is the modified version of 00 Convolution-Reverb I used for this assignment. I kept all my impulse responses and my original signal in a folder named “impulses”.

<pre><code>
----------begin_max5_patcher----------
2688.3oc0bs0iiaaE94I.4+ffRKBPwrdDotuMuDjffVfzFzz1GJRBFPKSayc
kDcnnFudC57auGRJIK40RVislX2cwZIQQIdNe7bmT6u84e1c1y4efVXa8Vqe
x5t69Mnk6zsoZ4t5FtyNi7gjTRgti1I7rLZtz99paJoePpuw+tfZIWyJrfaW
ZI4VIq4PSKXKWREvSXUvKyWTnuCO+Id5SzYVMulTVNMA5f9c4V25FhLYMKe0
iBZhzPmHrOdl+8Vdg9ybt2B46nN3EMyw5WpeL1BMEwm+t2faFgB4tTpt8llx
KyX4oTolyPsZkWJqa1Q25+8y+L0Q3v8iFo1jR1kxJjO2LdlWqb2FpgWrKXqx
Io12e7yT+aAKQx34DwN6872QgEMNf7hagJtN8fJHzkBK90MufHIUnPMLbmcR
JaSibU66.2aIKklSxLi4FA+87kL5Sy1Rdpl.p606Y4FRlTtfwUsztGEzTpFa
Liig8awuZoJNeidRrUafvmDDGerPRjzZJuCIBu6MT5hJfs6qDnLdZJe6pT9b
RpjlsgWM9c60uVRRYxcUyyyIErD6t8fKXqXvLMfFqjqawD1RVx6KNn2EokBI
Ki1zuCIJQFIW1CI2cnxJ54kjwWPGffqFiDtPzA3O9X0BZzxlc6k98KKyAgXS
e77NnGaXfL9IFKceJVyV1GiqPrBofBc63u.PEs4FMsWorWcZ8Mpa8bLEzqQS
3nfTX8FqesDlzs.lkKVXOfddnCVIiDzVMG2mZdX70y5WurbA6iTK9Rq4kJGC
CwqQwdZ1KvaTLa30iYAorBxJ5mxrXT.dFJLHJDcbWc3d8MLr89HWC13GnNTY
tOnOnY7dAwGGZPmMzLuTJA02dYy4DkIfAcsU4TyMRe.6U+6w4UmKUL3740b5
VfD9To.AIeAOyxseTfojFFCH34pC6I.oAAbOfPPz0aBuWcge19ad6CPThhhG
XEqIrcTwC+PN8aErmnO7cf2IP2FE9v+ZKDyDcg0+TGGTwCecgJhHksDK2GXY
aJSKnEO7sjuqL+8yHrk+r8w0rPdmopEDgoFi88C2qagc5EqCudXM8iKHIOOj
MTriWKdwyu92iyJWrchWA+EjRHigMbXdGhWyBxu.hdbdJHfvxg7MfVxgn0Rj
bwILvdzYZ2vVJU0QM2moE+3aPOLBJDDSB05OfrOW4cjqqJiJejFLBBGzzhev
0SbuTkZ4vFQqRa5TbrgUqygzwYXV1qYLUBi0xhclmGs.ACBLtJ.b6u469ZSd
OPdvpefqe7u.FB2R10o8kLAcV1F2lFVofjNsjwEcdjCRrpp0s.YK57bagbrz
MzFvDP5YPGezno0NApClJ2mntftjom6lRi71Su5hgPslTeQ84ABe1Jjw9JER
2HmVw+1azcw2f95++JX1X8GabQ1KL6cE8y2Wfk.LugluvpR86bABmPS.1XMP
DMrC.Ouau.rk7hcYy4oW.B3oDEpT07iGFAvWODXBRmxAYJYJZDYS4gt8lrK1
H.O8Ks9iE6cTboy6NZYdHIqAm3cCu8PiTNYgxLrEvdqOeb.D4AbvTvkngE+c
cudnP+UURYFTkNPkKIquTgGeoZ0FTKJglmoB0kqnRcGU2WUGppySVeFqEAJx
UCXdiYoHbuhqEQuFN2iMCH6b5.pcBz5QpLIZpNYuBP3KVMpwAaA4oZp7K96J
5rCCYY+mU+7E+C0uf.hokpB5Nohe63kh5E3BFZp8vJaybig+3cfjSuk1D4bC
VG2u5qrbEPnGDgbmEPk.+YsTvyrVKkad6COrc61YftELeTLiKV8vKO8bbbjY
UrviI67faOsqEzTxNKeGmKwesenl8cM1lCbG11r+sWPpEVETo8o4PObPc.3C
Y7H91adVzgCeg9dwNwyhbf+3Bx3wAMlFbGNVD738B6760L8e5Y0hJ5ex06dX
7.bkDpRKqpf2mvWB51SfeRfADx2XtaTnfy0CERgb4mshvx2WG5mHhlU3e+s+
Izu7x1ID5ReAw2pqpmZmTLNDyD9JJnZQi5CxBNA9brxfgZUuKHhSIQszzsHg
fHsGKOSAEq7p2wgUwZ91Zrw4DasBtfUOFcq0HDzyhGIRofAoCR2eVwmrQLdh
jVR4Ka1lCMaxgVrWJOe0vSWc5NvAB4Q5+Q6b8LrywtYF.6pa9lPEPczNPzhd
AG+1rbljozi9zcyPqdUB8pQovqpCMKw+krp98V8KvafEB96e8Gey2euk4Jr5
pe7rcR.YqW6j.6E13j.4Mb7.XuaOyiRq4veSsROQTQ2u+3oy.Iz0DcnICdPy
yeHT4hyDyaxQEV9R9yfLR+fRWSgssMNlST+6jqxbP0h+XpMDxYXgKTzkBiwS
NLZ1UIOqU1TKriyHvySGCdTPPqHTgrcGFW7u37aeMwkue5vkPihVCtLbHqHu
qGtzus5MoLoAVtDCzA9tsxRKxeXfv81yp7dADvU0jIf3aJPF1XI4TFTht8za
x9v6T6TsY+vlAsMOJgiJ0jng0RtAqwbVYpjUuoseVGLyGoB9KHZ9QE3dUcdB
hz4+0+xweKZEAd0p8034uKLpBfAaVJtfSX.YxWyWyvnqM2geP.ZhUciCPlBd
oHol4pTeAYiF5cAsPxxI628tMAMzraaO5bvnGO0V8q0N5smALX5Fv5W0vCnZ
6hLQCnZy1b5A7.pJisXCGB0qnYmCps936Y1h7NUaYztWhLYM6Gq+3JzWMMbf
2XDJT6gt18hKVPEmnVcSLIbHT2PBnofDFCEnJk0TI2LFwFryjMddduDEiWEH
dbjf2qoflZeALBRX5LO3gFy.pVu1IZ.UK.9H3P7zMfiSRNd5FP2QMfgS1.hC
GisA2Ib.GkOk.7qnt5KfDdszUwiyh0zg5iStZ5rMfGkwnC0uNH1gPW81Bw0W
GYteDp4poiFcOYDimfFC0EqxUWDYqnPyd5KzYJoQ7IoQugow.mFp5d8UNSNM
d5HucGlFqVec+HEoFDfMy0vUSCMNJeUZKCnoY.cF8.NMyBn3QMfSmyQcpTAm
Z.CeMMiNNR3PjYRclfFkk0oy+IBc9dNpxxlrYySTQQ0SXFJ6Lx63ZPI7dy0r
by0lcflsf9Dq9QLeZS1DQxZljlHKElR.7gfpRrp+VdE4krZTVwrpgVWtfCyv
uY07x1YJmfg2.VaIoLUd.hLe0RVZZBO0PdcVuw5xcXuRPVvZsugTe20UOfZs
5vwNn3..hl45DG43YNCZBcvGws4wP0OWjOxIV04HOL1yblqiiG9nOFt4yB99
t+zpij7Ul5mfauJjpuehMbQ8jJPbw6ehRIug8ZzipQXyWJcdyJjBxC+MdNIg
2p7OpxFoIvGY4JAAZqu968+zlNAPuE9gL8QsFfgd0mEWCc0xb2WKXq9eegOo
LQlh50P+cUFZDHLUD7+PUex6uAMrbghqUEAq9aqZlieriSzddfjjPykcjChv
9nPEGDFghPZF2O12y+UfU1pTUrx347hMjjaVY7gmaOp7M.iHHXBM7EgCCb0O
G.nQ33eOD26v0vnFgUiONJBUwz9HW2C3fWccD.3.12yUCggQ99glyvXzAFKN
pb43ryXSSoYG9rgQNwHyTfiCFEYdMtnfnKTpV8jvg+GPCK52W
-----------end_max5_patcher-----------
</code></pre>

 

 

Proposal 1 – Willow Hong

I want to capture motion data using a camera or Kinect, and translate those data into audio signals using Max.

More specifically, I’m interested in using different audio patterns to represent the qualities of people’s movement, that is, how people move between two time points. For example, dancers need to move their bodies from one gesture to another between two beats. The two ends of this movement are fixed by the dance choreography, but how the dancers travel from one end to the other can vary: the movement can be smooth or jerky, accelerated or decelerated, soft or hard…

Since the differences between movement qualities might be too subtle for the eye to catch, I want to see whether I can analyze the speed, or the changes in speed, of the body parts and map them to different notes/melodies to help people better understand movement qualities. I want to make this project a real-time piece.
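The speed-to-notes mapping could start very simply: compute a joint's speed from two successive position samples, then scale that speed linearly onto a MIDI note range so faster movement sounds higher. A Python sketch of that mapping — the speed and note ranges are made-up placeholders, not Kinect specifics:

```python
def joint_speed(p1, p2, dt):
    """Speed between two 3D joint positions sampled dt seconds apart."""
    return sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5 / dt

def speed_to_midi(speed, lo=0.0, hi=4.0, note_lo=48, note_hi=84):
    """Map a body-part speed onto a MIDI note range.

    Speeds are clamped to [lo, hi] (assumed m/s) and scaled linearly
    from C3 (48) up to C6 (84): faster movement -> higher pitch.
    """
    clamped = max(lo, min(hi, speed))
    frac = (clamped - lo) / (hi - lo)
    return round(note_lo + frac * (note_hi - note_lo))
```

Differences in movement quality (jerky vs. smooth) would then show up as how the pitch jumps or glides between frames, which is exactly what the piece wants to make audible.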

Assignment 3 – Adam J. Thompson

I’m an unabashed Alfred Hitchcock fanboy, and with October just around the corner, I’ve been in a Psycho mood. For this project, I decided to expand a bit on the requirements and make a 4 x 4 convolution mix-and-match patch that contains the required 4 IR signals (In the Dryer, Studio 201, In the Shower, and Crunching Leaves) and allows the user to match them with each of four short excerpts from Psycho (A Boy’s Best Friend, Harm a Fly, Oh God, Mother!, and Scream).

The patch with these amendments looks like this:

This playlist documents each original IR signal, each original Psycho excerpt, and all potential matches between an IR signal and Psycho excerpt.

Just for fun, and because I’ve been toying around with the jit.gl tutorial patches, I created an animated audio-responsive “3D speaker” to accompany and visualize each convolution as it plays. Here’s a short video of how that appears.

And here’s the gist for the whole shebang.