Category Archives: Assignments

Project 1 – Interactive Vocoder

For this project, I wanted to try something interactive beyond mouse control, so I decided to incorporate data from a Leap Motion sensor into a vocoder.

The super handy patch shown above was made by Jules Françoise. The Leap Motion sensor reports data including the coordinates of each finger, the coordinates of the palm, the rotation of the palm, and more.

The Leap Motion data processing patch is as follows:

I took the y-coordinates (heights) of both index fingers directly, and calculated the 3-D distance between the index and middle fingers to define a gesture: when only your index finger is pointed out, the patch encodes only pitch, whereas when you give a high-five, the left hand creates a delay and the right hand encodes feedback intensity.
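The gesture detection is really just a distance threshold. A minimal Python sketch of that logic (the threshold value and the function names here are illustrative assumptions, not values from the patch):

```python
import math

# Hypothetical threshold separating "fingers together" (pointing)
# from a spread, high-five hand; the real patch uses scaled Leap data.
HIGH_FIVE_THRESHOLD = 30.0

def finger_distance(index_tip, middle_tip):
    """3-D Euclidean distance between the index and middle fingertips."""
    return math.dist(index_tip, middle_tip)

def classify_gesture(index_tip, middle_tip):
    """A spread hand reads as a high-five (delay/feedback control);
    fingers held together read as pointing (pitch control only)."""
    if finger_distance(index_tip, middle_tip) > HIGH_FIVE_THRESHOLD:
        return "high-five"
    return "point"
```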

Here is the main audio processing patch. I combined the vocoder patch from the Delicious Max Tutorials, which allows two tones to be defined by two fingers, with the delay+feedback patch we made in class, and had both receive the scaled-down data from the Leap Motion patch.

The pfft~ patch is pretty straightforward:

Here are two demo videos with different types of source audio, enjoy!

Leap Motion patch:

Audio processing main patch:

FFT patch:

Project 1 — Bird People

The Second Commandment in Jewish tradition bans drawing/art of anything heavenly or earthly.

Thou shalt not make unto thee any graven image, or any likeness [of any thing] that [is] in heaven above, or that [is] in the earth beneath, or that [is] in the water under the earth.

However, early Israelites wanted to make art! Who doesn’t? To avoid breaking the above commandment, many artists drew characters that were a mixture of human and bird: they were not from this earth, and they were not from the heavens, so they were fair game for early orthodox artists. The most prominent of these examples is the Birds’ Head Haggadah (a Haggadah is a book that a family reads from together during Passover). Other Ashkenazi Hebrew texts likewise drew characters that were neither fully human nor fully bird.

Reading this passage during my last Passover, I pondered the fact that bird-human hybrids were used as a way to depict human experiences. I was curious whether I could do the same sonically: use bird-human noises to depict human experiences.

For my project, I made a patch that downloads bird noises from online and loads them in for granular synthesis. I then used a combination of the source materials and synthesized bird noises to create an ambient piece, with the intention of capturing the wonders of exploration.

I chose to use Xeno-Canto, an online database of bird noises. In the research phase, I found that many unedited bird noises sounded really weird, almost granular on their own. For example, the woodpecker’s call sounds quite a bit like a 50ms-ish grain repeating over and over, and calls with a “vibrato” sound like short grains repeating. It was very cool to play around with this concept. Because of this, the piece also serves as a challenge: listeners may have trouble telling whether some clips are doctored or not.
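To make the grain analogy concrete, here is a small numpy sketch of chopping a call into fixed 50 ms grains (a stand-in illustration, not part of the actual patch):

```python
import numpy as np

def slice_into_grains(signal, sr, grain_ms=50):
    """Chop a mono signal into fixed-size grains, dropping any remainder.
    grain_ms=50 mirrors the woodpecker-like repetition described above."""
    grain_len = int(sr * grain_ms / 1000)
    n_grains = len(signal) // grain_len
    return signal[:n_grains * grain_len].reshape(n_grains, grain_len)

sr = 44100
sig = np.random.randn(sr)        # one second of noise as a stand-in call
grains = slice_into_grains(sig, sr)
# 50 ms at 44.1 kHz is 2205 samples per grain, 20 grains per second
```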

Here is my patch: https://gist.github.com/KenTheWhaleGoddess/dcdb026ae2af56d5d316c1886315f53e

At the top are links to bird sounds. The ULDI object downloads a bird sound when sent the message “download (URL)”. It outputs a message with a 1 if the download is successful and a 0 otherwise. Once the file is downloaded, we go to the path of the downloaded file and start grain sampling.
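The download-and-report flow can be sketched in Python, with urllib standing in for ULDI (the function name and the `(1, path)` / `(0, None)` return shape are my own stand-ins for ULDI's success/failure messages):

```python
import os
import tempfile
import urllib.request

def download_call(url, dest_dir):
    """Fetch a recording and report success ULDI-style:
    returns (1, path) on success, (0, None) on failure."""
    path = os.path.join(dest_dir, os.path.basename(url) or "call.wav")
    try:
        urllib.request.urlretrieve(url, path)
        return 1, path
    except OSError:
        return 0, None
```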

For the grain sampling, I used the stutter~ object. On top of it I built an interface for granular sampling that takes in a grain size and a sample rate, converting between them with a mathematical expression I wrote using expr.

I recorded audio from the patch using the sfrecord~ object. Then, I made a ~3 minute ambient piece to demonstrate the capabilities of my patch, and to explore how I could manipulate and compose with my chosen source material.

Assignment 4 – Nasally Controlled Filter

Hello!

For my pfft~ project, I wanted to experiment with facial tracking and use information from my webcam to control parameters of a pfft~. The x and y coordinates of my nose provided high and low cutoffs for the frequency bins of the original signal. As an additional point of control, I added an adc~ object to take in audio and alter the magnitude and phase of the original signal in the frequency domain. Below is a clip of the original audio that was used:
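In spirit, the nose-to-filter mapping looks like this numpy sketch: the two coordinates pick a low and a high bin, and everything outside that band is zeroed (the names, normalization, and bin count are assumptions, not the actual patch):

```python
import numpy as np

def nose_to_cutoffs(x, y, n_bins=512):
    """Map normalized nose coordinates (0..1) to a low/high bin pair."""
    low = int(x * n_bins)
    high = int(y * n_bins)
    return min(low, high), max(low, high)

def bandpass_bins(spectrum, low, high):
    """Zero every bin outside [low, high), like gating bins inside pfft~."""
    out = np.zeros_like(spectrum)
    out[low:high] = spectrum[low:high]
    return out
```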

Now here is a clip of the audio after I used my nose to alter it (the adc~ picked up a bit of the feedback from my speakers as well):

Here’s a video of me using the patch:

Below is the gist for the top level patch:

And finally, here is the pfft I created for use in this project:

 

Assignment 4 – pfft~

For this assignment, I created an fft~ subpatch that uses delay and feedback to change the input signal. I then used it in a patch where the resulting sound changes the matrix values of an audio-reactive jitter visualization. The patch takes audio input from the built-in microphone and uses both that and a loaded sound file.
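The core of a spectral delay-plus-feedback is mixing each incoming FFT frame with a decayed copy of the previous output. A rough Python sketch of that idea (an illustration, not the actual fft~ subpatch):

```python
import numpy as np

class SpectralDelay:
    """One-frame spectral delay with feedback: each new FFT frame is
    summed with a scaled copy of the previous output frame, per bin."""
    def __init__(self, n_bins, feedback=0.5):
        self.feedback = feedback
        self.prev = np.zeros(n_bins, dtype=complex)

    def process(self, frame):
        out = frame + self.feedback * self.prev
        self.prev = out           # store for the next frame (feedback path)
        return out
```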

Here’s a video:

And here’s the code for the full patch:

<pre><code>
----------begin_max5_patcher----------
2265.3oc2ZssjiZqE8Y2eET9wS0sKjDhKmmb9.xWPpTtjwxzZFYDAv8kjJy2
9YKI.C1fC1FRO0YlpArDRZuWZeW7WOsX4V0G7hkN+WmeyYwh+5oEKLMoaXQ0
uWr7.6iXIqv7ZKk723xC7Rd9OV9rsewNSOpse6ELotwziGDoRdoYTnSMpNVd
dqGX4emmWQEu369ryKdgvEhObA6AWPX3B7KnKemeuZX1Yp7yLtcjKEokKa5s
ZROVvMTWXUyEkeJMu+xZBMiUF+pHMYSNOtzNSDDcErTdgX8MDNTey2akqyuq
Gye+zS5KOORDKk+N.M0KWI+CyprbGKtWDD2KBh6EAcuEthh8Mbkqq9FIPeEi
uSl5.unfkvufqTY7z93JzMvUng1fOs6NNFNvrMh7oS.COvtHaWu6hH+aPO.O
D+VHRRYxkO27zsw+jvHCK6GLA7+1ikkp91Z8Q2gJ+Er5VVZxMxdAAF4YTn4F
1q95Tt8BF5xUNHW2dXbZzTHS+.LtGwXWhNKJxwRNKuOt16KPSNHv0t6RmSIY
J9KRRNLBY4n+Mjj6UT18KRTthyqzgCPOlI5iG1x6Sl0K5DIjyLAurgmx1ZoO
2IvRs1F88v99FUXBwHWScmCOTwpio.C67BEhipOvg16dOsW.v6Zwd8rEGz+0
ITrwAEVqY3JIgn4.Jx1uu7GNvkWXG2ITuTbbqgR5yANcJzHtK21TWrUnvpZP
HODTTpRRj793PxDoTfFU732BqicCmOCgE6yjrO+gyZoRk4zqFwMr0OhX2tC6
BdQVbfZTChZ4uqdVXuw2sAHVXHaXkk4BvqlMQtEMvA.PrBQb4wTglB.sWups
0EK2qjR06IR0VlrjeHS0ZSW2a9AVZYrJWSSBvaY6dOn1Y4.yzurtYUtHQ.7r
jmlT9pkQPgQggqhnDDlBPQoH96EMXwogzPAHLXCT+uv5WIS.ny.DxebjIEke
1CsTJf3ZJy4ZkaCgTIBYkglbyIuIJ.RowdhCEg6Si6VhT18NRXk3YjWPzvGx
jw0Swm9+CY36gLVU8vj5T6Az6dwquIJWEykxsRU726aeuI5p8pzxTvVqoqeI
WnMPX6IW89433kldhUx2E6r5VDRq4bOKtiUacaEh+jaUoV4da9xkhhNNyOA4
.AXIHp+ULh6aLd4iMxjQDiLIweZshm6rsGjt+Lxbm0bS7BqBdyZklNGAtnkv
Nv.q7e3rVK+3TDyjbjqyZMc6rWpXkDryZvyVJ2DvmCwY8NwA8NUu93Bmh5S.
j0FKY0UL4lvMqMKRvr.bYNGTqxTEBsuihdvAxjHxzBHt0.dLXPj4VX3E94MC
Axo4L+56EZaz4EUNDQ0t8XYYsZdQqgngquoLSTvyMMIRsMQZZJmC9xpFOsoU
VNP3k.UeL2xReDVWfqp3AxSOJLjhsQXi6o5I7TD9cplpqsDqtm3VXONQaB0Z
OuwKutjhhzrbdAOsjcQP.636YGkka52lW29OyNY2N60v7hkI4hcpTScMaC05
lqWNHNGakFosYFyajxx5Yv1.2FnyBfIOVrkkq2IpBGG2DXiRI61Uy3j78kUc
mIRSOCEKUYC2IDF1qWYraUPmGt1ba5oXywTaua.8xxM5fT69dPvgUJpcm9OX
oBP+gqCayvttMcZSI40h3bHh0N7qsm25omcfPbLuwUoaagA30EY0BQKa1k2I
Rf.F61VIKonaKWnSCMUEyGj8jNJVIvEcegNmvRaMx1lz5z90Lscoeg+iy52X
RGzp1uQekhdv3JF1j+MY1e.D5pV9vndK7yoH0WT6DXFPNv0PgH0Y89b9e3Dt
B7XVjw46.4E7PXIcPrD8EhkHe+UP5UUGWk81WJXhVgOAlDxPno2OmnoqMKJa
4GPgQeQv42T.dRzNLGB+vChejubM6pR5FQ+xEFcWQOILhFBKI+TJKVgkTzbp
YaI8A.FR3c.LtOfGgHS7gD25qc.tX0gC7zxKlNQ5N9Gsh+rCtbJjLoHcHewF
BR2e+.Vg5Xdb8VasaUmtDGDbQoHsIB0e6D.5L1MralJnijJz1JbvyEU3cKTA
ZtnBxsPEy1NBdrTg+fTQSsJusp8dSA2ZK8aUnqMo9bsHcOEN7jWI0hdqlCxa
9pWZUnRF6pTzbUylD4pCP5khLI2YcIKOgaxxzI93V9mzdX4fo3nGtihwXs85
GZc2DQmQ.wjV7qrL.Qr2zfA7rthV5nEa9acrRpxAyEZG35+PNqyUPxw7O97O
0mjsNMQ8myjyZ1QH+TPAKFTnV2BgcVmojetQWfBXzM0ASqNapRVaXr8mzxUk
65TngvAJt57TyLj87wBr4KAPf86TKZpOaveBOZPDxqhmmuiFTKg9tJWty4Ut
Tp.wGKWqk7L0XEHI8yZwsMuxR2Y6hmyJ3arRqtl+2WEWukC+g7O9cFbGxNU.
nG1dxG3d0wMyY23jr.549Aqf0K7+E0182.t9ZEQzEabiccBGw5DXi1nFLy2Y
qjJZ1WX8I9z5kZVY2GZk0eoc+iKM0aB.W5X1EqImGak7FCbRmhUBOhUxaRVI
2wrOgmfUxKZrnG5QWI5XPunofm92RffDMRSHO7BMFNZBVG7XrOLElG5l66.K
DlLG1.G0RiP8uzOlgezXTAn8ux34ekw3uLlFi62Qq67uzCsUSdrkdLprm4jb
p.7QszyR7EiZo8lEA7NxOCrSeokKavpmcn2503rC69rC59xC4d3C397C21TS
Fa.2mEfbSMTdWOSNGTophL84Ma3p5SZtKPny7vj2vFQplGpvDSJNmtfZ8AAz
9fpAf4WUorX0oLA1lrWHklo77pCUmEwxjb1NgtHp0E1o400qLNxEEo+PrVQb
iBc8rOAMg5PHUiBUkaQEYd5xYuGtd5C8wAdT8jFF3RCCsOE5QNe5gjbpN56f
VmeaVtJSk27M.rhD079PNYMbVspvoRn0gGcQ9tQQFzEgBoHyStDB0qKXykbc
0l6L1fPBjBkdDAQg9DySgdHLNn6X2lzYXQAATTTE9.+LhR7IUi3oSDpNgaXE
OOgtklpHUyVlBD92O8+f99WHp
-----------end_max5_patcher-----------
</code></pre>

 

Assignment 4: Luma Convolution

I was really interested in the idea of mapping video to meshes, so I iterated on the patch we built in class to feed webcam video into the color array. I then created a sawtooth wave, used the mean luma value of the webcam video to control a bandpass filter on it, and used pfft~ to convolve the wave with the input signal (either from the mic or an audio file). The result is an interactive video of… yourself!
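The luma-to-cutoff mapping amounts to a weighted average and a linear scale. A small numpy sketch (the Rec. 601 luma weights are standard, but the frequency range here is an assumption, not the patch's values):

```python
import numpy as np

def luma_to_cutoff(frame_rgb, f_min=200.0, f_max=2000.0):
    """Scale the mean luma (Rec. 601 weights) of an RGB frame in [0, 1]
    to a bandpass centre frequency in Hz."""
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    luma = (0.299 * r + 0.587 * g + 0.114 * b).mean()
    return f_min + luma * (f_max - f_min)
```

Covering the lamp with a book, as in the demo video, lowers the mean luma and therefore the cutoff.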

Here’s a video demonstration of the patch. I used a lamp to light the scene up, and covered it with a book to alter the cutoff of the filter.

And here’s the gist:

Assignment 4: some vocoder thing

I was messing around with frequency bins and managed to generate this patch that makes everything sound like it’s being played through a vocoder. The concept itself is pretty simple and just required some futzing around with freq bin offset values, but I think the results sound pretty cool.
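Bin-offset games like this boil down to shifting the spectrum and zero-filling what falls off the end. A minimal numpy stand-in for the idea (an illustration, not the actual patch):

```python
import numpy as np

def shift_bins(spectrum, offset):
    """Shift every frequency bin by `offset` positions, zero-filling
    the vacated bins; positive offsets move energy upward."""
    out = np.zeros_like(spectrum)
    if offset >= 0:
        out[offset:] = spectrum[:len(spectrum) - offset]
    else:
        out[:offset] = spectrum[-offset:]
    return out
```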

Code for the inner fft patch:


----------begin_max5_patcher----------
608.3oc0WtsbaBCDF9Z7SgFtsTO5.J3165yQmNcHX4T4.RLfH0oYhe1qzJH1
ICFiC1Ns2fGsdM+69wd.+zrfva0aD0gnuh9NJH3oYAAfImgf1yAgEoaxxSqA
2BUhequccXj+qLhMFv7m1hn747N65FStvXdrT3u2g0x6To4gnez5PYpI6WR0
c+rRjY79vv2LGGgVfcW4D2UJcN9kehbInjU8OeSmN0lGyAIB6rnZJjJq1PzR
2YzGQfUhy3yyl4tDMszdszLuTeuXKxmfHBBePFLX1yW3RXBOF9ffGJ8Y+qj9
k5biNKsZ6wdtGMpJfXN1CgDnDfAG9R+LHtSxUZkQkV3wv2pjN0NI53tA0x+.
dZEqWlQOeLaUkMXSyxZJ1hHSoegy7EIXnxIYPZwmLsHm.sN2MXBw8ipAaWw1
vkYz1lLJLugOD3Rtpf6bVlsxXuw6Uh0GHtA.ACGuaj6g.AAeUa3vmORXmOYz
1IUmogTj1BFJzv0N19PPi+e6TpUFoZDCnhdezi6RATb7fv6516w1CdP3DlKU
u8cifLxY+0DsV2Tk0Ql1WPAsKoVJpszL0H0p87Iw6SuOyFqNNFgnGQnWGL5p
khpC2hcRJSNhxwWBk4ifstFu8htWTlbwUNteguBvl0OroSWY76S4oA63Qjxt
kRNmljPIi7o5T6UiGScK9ZMTfcIZPFU0Bg+gMO5MiEJjKK0RkocHOEm3euPX
yNkwlyaOMLm7aKRKKePTU2JHDm1Msq0f6IQvQoxejAGqDOH67mCVRqr6KM1k
kMU9sZaV3+qtgEZqvpFYq1Vked1eArbfaJF
-----------end_max5_patcher-----------

Code for main patch:


----------begin_max5_patcher----------
791.3ocuWssbaBCD8Y7WACO1wMCRfM39V+NxzwiLVlnDgDUHRrSl3u8JsBry
E.y3PyCQIrZEm8bzdg7xLufMx8zp.+e4equm2Ky77.SVCdMO6ETP1mwIUfaA
B5SxM2GL2skltWCl+wQeTXqUYslS05CkT2aNnhkKH7.++z3PIQmcGSjuVQyz
NeVt3lv49nzH6uhVYWw3aBOcD1V.GC1+LsEmJ8AN.QPqEQcASXvFhU7YitHB
rhrFec1L6x7QR56Y5axnb9FtL6gdYImUoClaClleFhunDfooI103PDP63tI7
xVH2IEZAovw4eqXFMsYmLI2w4v3zFSJ4SNB+lCuijAGNbH8x5XE6YvQj4NnS
YL9JjwdxcrpaAQqX68cIJ9X+cbIQGgA9zqfaN3Z2Aunbiiwt7KPuQKGLAKdj
IXnNUF7zoLk61oO5aWWyDBhZqISY0R+3KUncI0vUrgwgfXXS65WLh91EibBS
bbDTjIzCxxEo.uRfq7TnBCgB6lk39YYIQYJ4zT0Zpfrg2e8yzo.zm2RxNFbI
pEifdHwKZW6hYH7WpcY3UD9ZYdtAo9tAuz8VynfPWZ4o0NIG5+1810LmnfVU
QxoepNVVRE8JGCOlHb0o7Vy.hjgpTW8sOWrm1VU6J4jCG8winH97esgHxGSd
wBPJVLXG7jujTD0dXxizsqMuOSHrlnMCZ1Tqcetj2IEwnQjJVltVvrwuYxXb
SVlcRJmKeJmK2P3ZZQo7M4f1cUEDgNSprbjIEua2B4VWfCu9fVyREKmYTLNU
jquqY31JbXHzJHPyxdn5jJd18Snifl9M6VxLhbO3+2ZBmoOzQHnYl7bshZNq
CeyU.rEj8zlCAwf4ahDe76KgaVq82mXUIqUYs4GMoy9mub2ZfzPj1v71S2ys
n+oT2whS5HvIZBvIYD3jNQ3f9FvIdD7Y4DfSzHvAOA3fGANn26jTskp5u44j
iL5yHG9kP1N+7JJxb01jxxGoppFmALLiGtWBwUxb3Qlv8HzRMPQej05+BvBQ
Y5xqMsepUttc6Sc+2NP+Oknl0HuFjec1+HBXmoD
-----------end_max5_patcher-----------

Here is Mr. Blue Sky by ELO run through the patch:

Assignment 4: October Evening Stroll

The recent temperature drop has finally allowed us to feel the chill of October!

To be reminded of, and to capture, the sentiment of taking a nice stroll on an autumn evening, I decided to walk around my neighbourhood and record the sounds I encountered, both natural and manmade.

I took both active and passive roles in listening to and capturing sounds, before taking total control of manipulating them by deciding which parts to edit, keep, and discard. The absence of the original visual information not only allows the viewer to focus on the auditory elements, but also prevents the site-specific details of my neighbourhood from becoming distractions. That focus is in turn supported by visuals formed from the collected, familiar sounds.

The sounds, which had been creations at the time of my evening stroll, have now become creators:

 

<pre><code>
----------begin_max5_patcher----------
1448.3oc0ZssaihCF95zmBKtbTlHr4PR1qx9brZUjC3j5Vvl01zzril4Ye8A
HMsk3RFfVspsPv1w+9+ye+mL8G2MKXG+YhL.7Gf+BLa1Ota1LaSlFl077rfR
7yYEXocXALxQ9tGBl65RQdVYaNGm8q1FkpSEDaqssvpKorBhxNEnWZjWqZaM
ro0JrJ6dJ6vVAIS4VXIQQKBmCfwqL2hVZthPKBA+cy2glakldc88z.SS+7t6
LWl2Scpp.epfJU8UCfcpAIdzfT6Rek8JLIzpGgcqAIshzM0pSUD2jDHoGX3h
f4c+Iye4zLEkyvhSAmm5brB2n6MJ+rfrBZ04c8K5P20dZAggKcZOOSsk7Dgo
0lEGwO0rxZG1iTlaYiqyobSKWLfBNu5h8UqT4LEgo1JUXEocIcov0SKunfe7
PAeGtPQJq3tE4YfxAv5sqtZeOWThYpLtvf7ZbnqAwEzCTMjo0xCp6aFgY+HP
QydTF7pAWxyaA+cXIM608JqHjbW2vE9jRo7rb5bTWnoPzaGz+TiKnpSWeYTQ
07M48z8ptWKMvR2cZmOUsYG1Mf33WO.YQsPQKIcqAVY6GvMeYoRPzC7b2s8Z
sScenoQWK2n86U7I8.UsPVPyIfM1aaqq.IseNmejAPgCxhGccK93kVRUbr00
EBF5yoUB5p17ZcXaIVInOaswcnz3AOGJVXdtVnwHiUOP+zTAInUHKjrN15Ez
ue7DnWH4Pw1l08j.KLNUR.PyOSEZjrLwn+IPnMx.zGXDm9UvODfc8T4C6T4g
eX7v3k1anDuJ+0CHN566xLL6n.WA1XjBXeAGqhPfM4zR.Lz76FbNtRo8gsQG
EhQx30LEHZx7hjXMRhCsdQhRV6EnfeErju8KMrDNnT+7PTRiW5R8ygCod0+v
OJyowTuq1uW8Kf9x2U7uqivc+IMNfhGDSHzGP3xAtIExHuwShPA+uvMPbh0s
WTiC.utAhfeNtAj8Vcu48PTpk+BQ1.fHuw+hBCFWWaNy+ln73xJYW93PIo5X
deNt1hsYCDk1bKxGZfVMst1pK2QDAmWrBMFoHhs5Bg14T5voJmHXpyBvdKwq
UMxSfv4lJCLovOxAD+FXyS3hdmED5V0e3x3KXAtrBtp9G8UkkbEN6Qf1D4g5
xJPDX8TiFqcN4gdC2i9xJZnl8V.ArgueujXxKZXNLh9PvAsxFl.t1q6S3xdB
NiAToi9KwGHuCqzkmvtohHt4zihbdORsDkTu0P.+DSiFynkKxEzmHSUzCmhG
CiWjLGrzqaCHzqayonjZS5f.WNgap3Em1ZNGIKSXSt.ez8XEmxTRy.nlyDi9
ujda6r9lcrfrbijlXtq8yTBmTGKFLZek7PMsUL64L04ib7OETy4YdMbn8D4h
u36tGm8pfzl1L3okYYz3Qgws1EbdoEBW4NB2jtQvke5LtLM7Ivuv4jTywwoc
GGBhmJav1hxbtkC8xoh9JOMGM9bPPyk2i0461.QtGjU2SLG8kiSqWaU0JiUZ
FufKbttun3+IFHQgqc3o2pb+xh5ejKJxagOMYSR15foP2OKz4C3RX1.Z1xJz
p4.O.snOD0bEDG0vBulKsONy42AY14Infxd66GytVLs+ZbTxqEYsSaiP0Hw4
WCCQpwC7KmRdaYdyaOP72sU0WAYNqzWNz8qHIC8Ump1.kTODT5qFCWjqqrxv
++TDL78BFN4BNNraUNbPRNNsGhtcqeXRpWJY5XHo9vTMuMjI.N6inMkXN9bn
2vOthm.zHfuQ8RIWMBRpcR7yYFC1IJ4F13Fljh5ijFi8IDpudrGrj5gfVNE1
a8RCuhjGl41alUODlgBtVIA6CgYnTSXeLBfig4FD96QMcoKgqpdhHjMi1JDc
BlOvsaqKmaejxbOZysKPPdh1Nd6+AMAXgNUOkNOOS8.lL3ddk68eZK9SvzEN
53HZI+y69O3HZ.tG
-----------end_max5_patcher-----------
</code></pre>





<pre><code>
----------begin_max5_patcher----------
625.3ocuVssiaBCD8YxWgkeszTaGtk9qTsZkShyVmB1HvrMqVs4au9Bzssh.
dKw4EPdv3iOyblKutJBtSdl0BAeE7MPTzqqhhrlLFh5WGAqnm2WRasaCJX+T
t6DL18IE6rxZ9SW.3AispWJYVqCVDcUbQISYOBx6FkcpAq3dq0T09uyEO8XC
auxcwRyVihAjBj40lT6BxZD3g9egevBl9Z8YBZ.R2Qqdol4NDXK+IAsDBdv7
82Vsx7HdYj9DWstV9C1E.sptEfAjP4BvaSLzNAYcEXTwTNA71q5DBK8ClB.m
lai8YHenew8g9e4RvnKwRWbtimaWmNAayumJ9vw4hO.kytqTNbY0Drkza23A
mSuJm4B0+Ig6p1wZf+990PqXJVyiLAcmimH+HNdThSlf3tDYGwSQSlMmb8r4
X.bGU7zMMbe7nhKNJ8UmON0Slk5Ia6o9Twbxjw73wdEz56nfUvCiWmq8DaPD
OJvmeepuum1nj0xxkUxahrfB2bMXqTHaxrfYq3EGjZe1jA+mra7jgMy1nK2k
RjOkGXi+dfq3KrmJrjK92AdsWLi8+1A0J6Z1O.ReP.79c6.qU6bnJtT7G6wL
.pYSiFC7EHMWAjY.RmDnyHG1ir4ftRtQtEbfMSWNFxnvibw3HiWLx34PN+FE
UmS9fytA.MnAmU+ft05GuP1JfP2ZAjePWLNzKSAgy8.5LmLaY.k4IPKUAYl1
bdmYdHJD3GzYgnRfYdq4KEjFDIjWPmLehiqIGst9YVSa+uaQU2u+jzt87X6R
tvsz1dF1vdlOr+TqEZitasR2ptqw0x+bgaJDXkTCrni2SaMxus5WDzjIpA
-----------end_max5_patcher-----------
</code></pre>

Assignment 4: Messing with Recordings

For this project, I didn’t really know where to begin, so I went back to the tutorial on the pfft~ object to see if I’d be inspired. Thankfully, I was! That tutorial includes a patch for capturing a recording and processing it with some fancy math, so I tweaked it to create recordings with a vocoder-like effect and used it as my subpatch for the pfft~. For reference, I included a link to the tutorial at the end of the post.

Here are both the subpatch (framerecord) and the main patch (frame-player).

Where the fun really began, though, was figuring out what I could do with that recording. Because I used the two inlets of my pfft~ separately, I dedicated one to recording and the other to the playback effects controlled from the main patch. This works because my fft treats each frame of audio separately and stores it in a buffer~ to be used whenever I desire. So, if I want to listen to just one frame for a while, I can stop the recording on that frame and hear that particular frame for as long as I want. I added this capability to the playback effects in the main patch.

As for the other playback effects, which were all facilitated through a counter, I added the ability to speed up playback, slow it down, play forwards, play in reverse, and play the whole recording forwards then all backwards. Lastly, for convenience, I added the ability to change the maximum value of the counter so short recordings could loop back through without having a long gap of silence while waiting out the space in the buffer~ from the subpatch that wasn’t used in the recording.
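The counter logic described above can be sketched in Python (the mode names, wrap behaviour, and defaults here are my own assumptions, not the Max patch itself):

```python
class FramePlayer:
    """Counter-style frame index generator over frames 0..max_frame,
    advancing by `speed` per tick: forward, reverse, or ping-pong."""
    def __init__(self, max_frame, speed=1, mode="forward"):
        self.max_frame = max_frame
        self.speed = speed
        self.mode = mode
        self.pos = max_frame if mode == "reverse" else 0
        self.direction = -1 if mode == "reverse" else 1

    def tick(self):
        frame = self.pos
        self.pos += self.direction * self.speed
        if self.mode == "pingpong":
            # bounce at either end instead of wrapping
            if self.pos >= self.max_frame or self.pos <= 0:
                self.direction *= -1
                self.pos = max(0, min(self.pos, self.max_frame))
        else:
            # wrap for looping; lowering max_frame shortens the loop,
            # like lowering the counter maximum for short recordings
            self.pos %= self.max_frame + 1
        return frame
```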

I did a demo of all of these effects, which can be found here:

And of course, here is the code for my main patch:

And my subpatch:

Pfft~ tutorial: https://docs.cycling74.com/max7/tutorials/14_analysischapter04

Assignment 4 – color + mousestate

For this assignment I decided to experiment with the mousestate project. Based on a meshy patch we made in class and the freeze frame patch from Jean-Francois Charles’ tutorial, I embedded two pfft~ subpatches.

The first one, called “hw4-pfft”, is in charge of creating the input matrix for the mesh object and a single float value, taken from an amplitude bin, for color shifting. As shown below, the value is scaled to 0.0 – 1.0 before entering jit.gl.gridshape, and the exact range can be changed depending on the amplitude of different audio inputs. Note that the two other float values (saturation and grayscale) in the message box could also be driven by changing values like the first one, but since rapid changes in saturation or grayscale don’t really create drastic effects, I decided to hardcode them after experimenting with various values.

I also incorporated the mousestate object to track the speed of mouse movement, which decides how often the audio “freezes”. The last two outlets of mousestate indicate the mouse’s horizontal and vertical movement during the 50 ms interval. The faster it moves, the more dramatic the freezing effect.
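The speed reading from those two outlets is just a Euclidean magnitude per polling interval. A small sketch of that calculation (the freeze-interval mapping and its constants are hypothetical, not taken from the patch):

```python
import math

def mouse_speed(dx, dy, interval_ms=50):
    """Speed in pixels/second from the per-interval deltas reported by
    the last two mousestate outlets, polled every `interval_ms` ms."""
    return math.hypot(dx, dy) * (1000 / interval_ms)

def freeze_interval_ms(dx, dy, base_ms=2000, k=2.0):
    """Faster mouse movement -> shorter time between audio freezes
    (an assumed mapping for illustration; constants are made up)."""
    return base_ms / (1 + k * mouse_speed(dx, dy) / 1000)
```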

This is the first audio input, Sapokanikan by Joanna Newsom:

 

Output: (Mouse movements are easier to see in full-screen mode)

 

The second input audio is a woman whispering in Swedish saying “never tell this secret to anyone else, not your best friend, not your parents” that I found on freesound.org:

Since the amplitude is relatively constant, the color of the mesh object does not change much, but the freezing effect is especially interesting with (creepy) speeches:

 

My main patch:

Freeze subpatch:

fft subpatch:

Assignment 4: pfft~ applications

I decided to experiment a bit with pfft~ and see what kind of output I could get by combining some techniques that were covered. I decided to take in 3 input sounds, with 2 of the inputs being cross-synthesized, and the third being convolved with the output. I used the examples in the pfft~ reference for help with this. The sound ended up being pretty cool. I then took the peak amplitude of the output, scaled it, and used it to control the saturation of input video from the camera. Overall, the output seemed pretty neat. I’ve posted the gists for the patches and a video of the output below:
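Per bin, the two techniques are nearly one-liners: cross-synthesis combines the magnitudes of one spectrum with the phases of another, and frequency-domain convolution is a per-bin complex multiply. A numpy sketch of the idea (an illustration, not the pfft~ reference code itself):

```python
import numpy as np

def cross_synthesize(a, b):
    """Classic cross-synthesis: magnitudes of `a` with phases of `b`."""
    return np.abs(a) * np.exp(1j * np.angle(b))

def convolve_spectra(x, y):
    """Time-domain convolution is a per-bin complex multiply in the
    frequency domain."""
    return x * y
```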

Main patch:

fft~ subpatch: