Assignment 4 — Jonathan Cavell

This assignment had us look at signal processing utilizing the pfft~ object.

Personally, I wanted to do something we had not yet tried with the amplitude and phase data that the pfft~ object provides. The end result was a fairly straightforward use of signal information to create a reactive video — similar to what we achieved in class — in which the amplitude and phase were used as an external effect on a noise matrix.

I chose to use noise because I found it easier to produce a particle effect using the points draw mode of the jit.gl.mesh object.

The patch takes the amplitude and phase data from the incoming audio — in this case, from a microphone — and captures each as a number value (using snapshot~ rather than poltocar~ to convert the signal information after some minor processing). These values are then used as a set of parameters defining the location of an attracting force on the particles, causing them to move around the screen.
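Outside of Max, the same mapping can be sketched in a few lines of Python. This is only an illustration of the idea, not the patch itself: the frame size, window dimensions, and the choice of sending amplitude to x and phase to y are all assumptions.

<pre><code>
import numpy as np

def frame_to_attractor(frame, width=640, height=480):
    """Map one audio frame to an (x, y) attractor position.

    Amplitude and phase are read from the strongest FFT bin,
    standing in for the amplitude/phase pair the patch samples
    with snapshot~. width/height are assumed window dimensions.
    """
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    loudest = np.argmax(np.abs(spectrum))
    amplitude = np.abs(spectrum[loudest]) / len(frame)
    phase = np.angle(spectrum[loudest])        # -pi .. pi

    # Arbitrary mapping choice: amplitude drives x, phase drives y.
    x = np.clip(amplitude, 0.0, 1.0) * width
    y = (phase + np.pi) / (2 * np.pi) * height
    return x, y

# Example: a 440 Hz test tone in a 1024-sample frame at 44.1 kHz.
t = np.arange(1024) / 44100.0
print(frame_to_attractor(np.sin(2 * np.pi * 440 * t)))
</code></pre>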

I also added a more extreme set of attraction forces which use the amplitude and phase information to govern how strongly the particles are attracted to or repelled from the center of the video window. When turned on, the particles become more erratic due to the constantly changing values, which limits its applications — but I like it as an effect that instantly intensifies the drama of the visual.

I am interested in developing this patch further with a set of filters and gates to create a combination audio/visual instrument. I would also like to refine the way in which the particles are acted on by different forces to create a more fine-tuned reactive effect.

 

Assignment 4 – Tanushree Mediratta

This assignment required us to use fft~, an object that separates out the different frequencies a signal is composed of. I used Alvin Lucier’s “I am sitting in a room” audio signal and got rid of frequencies above a certain threshold. I then delayed this signal and added feedback, which gave Alvin’s voice a certain villainous tone. I also modified the patch we made in class to create an audio visualizer that displays the audio signal before and after the fft~ manipulation.
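For anyone who wants the idea outside of Max, here is a rough Python sketch of both steps: a spectral low-pass followed by a feedback delay. The cutoff, delay time, and feedback amount are invented values, and the actual patch does the spectral step frame by frame inside pfft~.

<pre><code>
import numpy as np

def spectral_lowpass(signal, sr, cutoff_hz=1000.0):
    """Remove frequencies above cutoff_hz by zeroing FFT bins."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

def feedback_delay(signal, sr, delay_s=0.25, feedback=0.6):
    """Mix delayed copies of the signal back into itself."""
    out = np.copy(signal)
    d = int(delay_s * sr)
    for i in range(d, len(out)):
        out[i] += feedback * out[i - d]   # recursive feedback tap
    return out

# e.g. villainous = feedback_delay(spectral_lowpass(audio, 44100), 44100)
</code></pre>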

The code for the main pfft~ can be found here:

The code for the sub-patch is here:

Assignment 4 – Multislider EQ – Will Walters

A video of the patch in action (sorry about the clipping):

In this project, I connect a multislider object to a pfft~ instance to allow for real-time volume adjustment across the frequency spectrum. The multislider updates a matrix, which is read via the jit.peek object inside the pfft~ subpatch. The subpatch reads the appropriate value of this matrix for the current bucket and adjusts the amplitude accordingly. This amplitude is written into a separate matrix, which is itself read by the main patch to create a visualization of the amplitude across the frequency spectrum.
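Reduced to Python, the core of the subpatch is a per-bin gain lookup, something like the sketch below. The bin count, slider gains, and bin-to-slider table are invented for illustration; the patch itself reads them from a Jitter matrix with jit.peek.

<pre><code>
import numpy as np

def eq_frame(spectrum, gains, bin_to_slider):
    """Scale each FFT bin by the gain of the slider assigned to it."""
    return spectrum * gains[bin_to_slider]

# Toy example: 8 bins, 2 sliders; halve the low half of the spectrum.
spec = np.fft.rfft(np.random.randn(14))       # 8 bins
gains = np.array([0.5, 1.0])                  # slider values
table = np.array([0, 0, 0, 0, 1, 1, 1, 1])    # bin -> slider index
quieter_lows = eq_frame(spec, gains, table)
</code></pre>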

At first, the multislider had as many sliders as buckets. However, this was too cumbersome to manipulate easily, so I reduced the number of sliders, having one slider control the equalization for multiple buckets. At first I divided these equally, but this led to the first few sliders controlling the majority of the sound and the last few controlling almost none. This stems from the fact that humans are better at distinguishing low frequencies from each other. Approximating the psychoacoustic curve from class as logarithmic, I assigned volume sliders to buckets based on the logarithm of the bucket index. After doing this, I was happy with the portion of the frequency spectrum controlled by each slider.
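Here is a minimal sketch of that logarithmic assignment; the bucket and slider counts are assumptions rather than the patch’s actual numbers.

<pre><code>
import numpy as np

N_BUCKETS = 512   # assumed FFT buckets per frame
N_SLIDERS = 16    # assumed multislider size

# Assign bucket i to a slider by log(i), so the low buckets (where
# hearing discriminates best) get finer control than an equal split.
buckets = np.arange(1, N_BUCKETS + 1)
slider_of = np.floor(
    np.log(buckets) / np.log(N_BUCKETS) * N_SLIDERS
).astype(int).clip(0, N_SLIDERS - 1)

print(np.bincount(slider_of))   # buckets handled by each slider
</code></pre>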

(Also interesting: to visualize the spectrum, I took the logarithm of the absolute value of the amplitude. In the graph, the points you see are actually mostly negative – this means the original amplitudes were less than one. I took the logarithm to take away peaks – the lowest frequencies always registered as way louder and so kinda wrecked the visualization.)
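That display transform reduces to one line; the stand-in frame below is random noise, purely for illustration.

<pre><code>
import numpy as np

spectrum = np.fft.rfft(np.random.randn(1024))   # stand-in analysis frame
# Log-magnitude display: negative wherever the raw amplitude is
# below one, which is what flattens the low-frequency peaks.
display = np.log(np.abs(spectrum) + 1e-12)      # epsilon avoids log(0)
</code></pre>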

This is the code for the subpatch running inside pfft.

And this is the code for the main patch.

Assignment 4 – Willow Hong

For this assignment I used two Pokemon models to represent the frequency spectrum. Larvitar is displayed when the sound frequency is lower, and Pikachu is shown when the frequency is higher. The scale of each model varies with the amplitude at its frequency. In the video, the audio frequency sweeps from 1000 Hz to 3000 Hz; we can clearly see a greater number of Larvitars at the beginning, with Pikachus gradually taking over the space toward the end.
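Hypothetically, the per-bin decision reduces to something like this Python sketch; the split frequency, frame size, and sample rate are assumptions, and the actual patch does this with Jitter objects.

<pre><code>
import numpy as np

SR = 44100
FRAME = 1024
SPLIT_HZ = 2000.0   # assumed boundary between the two models

def models_for_frame(frame):
    """Return a (model, scale) pair for every FFT bin in one frame."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(FRAME, d=1.0 / SR)
    amps = np.abs(spectrum) / FRAME    # per-bin scale factor
    return [("larvitar" if f < SPLIT_HZ else "pikachu", a)
            for f, a in zip(freqs, amps)]
</code></pre>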

Assignment 4 – Kevin Darr

For this assignment, I created my take on a spectrum visualizer using jit.poke~. The visualizer performs spectral analysis on a number of different audio samples and writes the data into the red, green, or blue channel of a matrix, indexed by frequency bin. The result is a visualization of three audio signals simultaneously. I added a rudimentary spectral degrader to demonstrate how different effects appear on the visualizer. The audio clips used in the examples are short loops that I wrote. (My favorite is the 8-bit lead synth on the blue channel, due to the pitch bending.)
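A minimal Python analogue of the write step, with a NumPy array standing in for the Jitter matrix that jit.poke~ writes into; the frame size is an assumption.

<pre><code>
import numpy as np

FRAME = 512
image = np.zeros((FRAME // 2 + 1, 3))   # one row per bin, RGB columns

def write_channel(image, frame, channel):
    """Write one signal's magnitude spectrum into an RGB channel.

    channel: 0 = red, 1 = green, 2 = blue.
    """
    image[:, channel] = np.abs(np.fft.rfft(frame)) / FRAME
    return image

# e.g. write_channel(image, drum_frame, 0); write_channel(image, synth_frame, 2)
</code></pre>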

 


Assignment 4 – Vocoder using fft

I used fft~ to build a vocoder controlled by three keysliders. Here’s a quick demo of how it works! In the demo, I played a clip from “I’m sitting in a room” at different speeds and in both directions.
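For anyone reconstructing the idea, here is a rough Python sketch of one frame of a channel vocoder: the modulator’s band energies reshape the carrier’s spectrum. The band count and normalization are assumptions, and the actual patch does this per frame inside pfft~.

<pre><code>
import numpy as np

def vocode_frame(modulator, carrier, n_bands=16):
    """Impose the modulator's band envelope on the carrier's spectrum."""
    m = np.abs(np.fft.rfft(modulator)) / len(modulator)
    c = np.fft.rfft(carrier)
    out = np.copy(c)
    for idx in np.array_split(np.arange(len(m)), n_bands):
        out[idx] = c[idx] * m[idx].mean()   # scale band by its envelope
    return np.fft.irfft(out, n=len(carrier))
</code></pre>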


----------begin_max5_patcher----------
2170.3oc6aszaiiiD9ryuBAg8zBmX9RulaKv.rKVf9zt6gEMVzfVl1gSKI5U
RtSmYv3e6CeIYIGKElHEmd5YZf1wjhR0W8whUwpn7ubyB+0huxp789AuO5sX
wubyhE5tTcrv1dgeN8qoYzJ8v7qE61kw7WZtTU8ixFp9a5o3PNuHiUqGM7Tm
hC0m2qoq5G2yL.vmWT668+rWcOsN8ddwtOUxRqMCf..2AV5EjDdGP9uP3ROD
Q0i5y1ajuQCHw5e51H+1GVIMmUyJ+Dqft1.Yf5R+5M2n9XoiZeNqphtqU8qY
es1Hr8rBG4DzKiSFkPv33yIDbjlPPWlPh8eEJsDoqYkGeg5WtXCqa6shh5sz
TC02ouJ9Oq6Chuqo6J9N8nZ6nOWgFhqj2WAMyeom+1LAs1EpKhDXXPMqguLq
AA9c.bgzRReo+VIWJsWCgxdP9XehQT0VI7DkaN5gdarkr7iCzRXRxcgx+EGI
oH3XVTvWkIkb9Qh4WnWjArV1JJyoZ3G9xLWTFJqoE6FkPfxETARRv34ALlYB
JZDyjqjinp8L1FOn2eA9BMgFfbmCuTVJDhzrWTvXbHddWosOi9XFup93jBXE
77Nbtz2T+eCOslKJnkONNEAzrBLFo+iwNCCFXIWRC12PqoVs2p9K7Sy36ain
24BJNkmwZ4zOPSkwaEU268O9weX0+ohUVs5exjWe0eWHjg389wR9WXq92OHo
OoI0+RqVUq9.aCmtpRbnXS0pRgH+1pB996n7sVP0HnOyKLPldXCWn5oy.xDh
8c3aMtkS3rh5OUUSqYMJUW3uvXbaTsak7UbC4bxMPQCedh4LqO3xEL0kLIwa
FP+K2Dq5iJGBU7T+dW8+eflwqeb3ArmKetU2y2NfzEk7cbI6IY+c0M.PMCKw
U5mq5+vpxNTpfa6v5cUszqOTvK1Y2SDA7TrHihnrsjFdWRa6Cm7pKKolQUyx
2Kr5E57AoVecIQXmLFGGaEYYhG1kIV2SJxA0LF8pcyWrcZ5Ydh2t5nG4cKPK
LhnbJF.0af0D0cnXrjj4bWFozL1QuagxnDf6RhUVqtteiv4e+FAAFWcgZGep
vBivCQyHOj9XphGdG2pkQ0Ql8WYaLjlGNiZ9e8nG7cSqQIZScLQGoCiFUqCl
S6d9tiuMos9BzZiYNdbqb7Lp040hsuMZ8n5afQQQAXyj8n5K58ICBzUJChXi
5iQlLMGctGGb8Rg32OLXTXr6LX3exfWfAAX2Yvn+jAeJCFZJrlaLX70iAelc
2b2TJC.5kVF.GhDFDg5xiIiwiIyaw.RE44xLLeBWsUlUoz34AZ4lUpuull94
6lGax97F34cwAfl8Aaph6.E.H75vK6n7hqNOXcTY3AbvXzPvUopr6umVIJOp
Rz9auES1sYEqWFEO9louNEwtlwpu+3KLexqEQgCMEViL5gjPfyKS84pL9FV4
jR4po28krJ4ZUpplJ87pZV1XV0fCahQ0pdW7r1V97G4l87.f1zzuvStaAvIW
ona62ts9n2WDoBIuda0g0ueIwizTNAqWABAitu.zrlOK8gipSD8cOS9H8hov
Qc9.I+9OmVq9ZqWy34zBC9F0QANxTnMyIo+V3oX3mbW9I7ObdJP5SUyQOEvn
um7T.M6OwMOEeGT8qF80r80mwSA5aTOE5kwuUdIHtrcBHbtcRv94MzziSh0F
I2EXndMcj4PLLpn5yKtGyWyrtJgroUCaGd0YdtYvlCtx9tPXNvZx.ong9Cma
d6I46na9tVA5A3mwKN+ExSKEU+84kJwgxzFjYKWxoSX0eCqplWPOc3qs413A
aGTNeydgbF2JwfXnZtEGfzSwxVfjjjnPaWlyf8IyXthuXGvGzNnIInnqkfHt
P4n4PPtnRjfwlawg50sXigIFAZaMUrE5B1hlCVHvkIVxbHIrq78jkDxEIAlC
IAcwXcTan.XfxpgfzwsIPRaqohMfq9tlpjvwtJIzPrPHIriWRUqH...QyiWR
bjq.DOD.ifnN.T0BIciGGOS.LzU.RFDfQQcAnrUhjAQjYBfAtBvfg.XLA2Af
caMUr4hGrjYvNG4hYDdNDjSrc+.kmGZBqCFQhAlWBRcBS5VSEaDWvFZTrQzl
pHhdibVGf5VSEat.swPlkmBSh0ucsQFuQpVmxRqTkjnbz3qBTgNBUyeF.pn2
bnB6GBnUxvqijgOUxfIIYXhqA1G1VpITtMubPaq2EvBibGrHDZDvBuBf0cr1
s0ba4CcJ8p4XijvnI6wuYRa183CcYyIv.m73adgO.mZMUr4Tjx4HlLj35hrg
SWzn2Dn1Ws5EWro0TwF1w0TCGMwD+vBstslJzbxG9rLA4RtXPj6loVVXVLSm
7p6F2bOypaSAun62+EVYk84qQkeN8mDZWhQK0M4EllXcyR1W3Mimn6gVldOu
lkVenzTruuFaNfI8urixhCbar.IebisnfmUjs1i4O+QSMC0plT82ROjU2mxT
kcLUjIJ+DuPgc14UL2n2c7wud2VdVl9dN+2qSS4E82UR2vO8d3n9EwXGt5Qi
R.vjP0iFCRhADy2jcA6II6cAats3.HHQM1XBBQLeCC.DzktKT6Oekk8+3z3n
E6L0SEE09yyTcDBh8hxFaDIvRZG+gZQqh0r4AyzfgW5njPC0knJ4Ho4aI80u
duRFRiwOHJnoB+F6pkVK8Ro3Nuzu94xIRdCH5txnct2Tc4+KS86R4V3Xl.ce
iU9n9cVIPlBa6uHIeZZJqnt2LXLJ.FoTqnXXLTajDjDPBr52rn.OnVF3kKJD
U6U+Va+dvH1Eyh9FvRtFhi.ZNNFEEh02lj0iQIus1y8zVo7hQJIihigVkM.h
wvyo0d2UhDjDrVOihCBhLeCgfH2VHLhAnS9B7YYr7yu0nXPBzvh..BFadJXX
X7q15UdO+5M+FlmFofC
-----------end_max5_patcher-----------

Assignment No. 4 – Jacob Randall Holmes

For this assignment I created a spectral system to be used in live performance, based on a tutorial video I found on YouTube. The patch consists of a random pitch generator and a noise reduction patch, linked through pfft~.
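A per-bin spectral gate is one common way to build the noise-reduction half; the sketch below illustrates that idea in Python with an invented threshold, and the actual patch may differ.

<pre><code>
import numpy as np

def spectral_gate(frame, threshold=0.01):
    """Zero every FFT bin whose normalized magnitude is below threshold."""
    spectrum = np.fft.rfft(frame)
    mags = np.abs(spectrum) / len(frame)
    spectrum[mags < threshold] = 0.0       # gate the quiet bins
    return np.fft.irfft(spectrum, n=len(frame))
</code></pre>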

 

<pre><code>
----------begin_max5_patcher----------
1714.3oc0akzjiZCE9r8uBJW4Th6NHP.lbJyobO4PNLUJWxFY2ZBVxADd5Yl
Z7u8nE.ClM4tA6jKfaIgdu22aUK82lOawF1q3zEV+h0GslM6aymMS0jrgY4+
8rEGPutMFkpF1hsrCGvT9hk5933W4p1+MlEglRhvVblEZKmbBwwVGQ7suTLV
Z1AVFOFyUyjcdqo7uDiUSQw3TeDgtecBdKWyaAvm8VZ4F9r8RKfms7ki3o0e
k+IjH0Lv17om7gUnGgVPNfrsuOet7wRCkU7WiPaOOBruqshyc7fxWPuhmsw+
vfV4em2.+SweVLiMTUG2sie1R73I7+jghIeEmX4AbZWNAcJm5Aw+xQrVHWjR
1SQwKJEptABfSfTeBbbUvhSGHg+noI6.InLRJ97cTvWoz798J0dimTmcXCNo
cwywbwawRqEaPz88Jffv.oT4qDybOztjvN7PKm7DzALGmrFSQazLm8aP52Ey
Dy+8Q587j1yddNFH8N2tzm22NVxAjhh9uA73.NMEsG2vI3mkN+O+tcBLAerU
V9PPe3i6pQK928zBXkqTBg9JID35zqH5+fLA1jw4L56VUOHX.UQAb0tCNvhm
shENScP9nziobQwHcDmGZtbSj08nLqPpej+2xWCiFPfJrfeuo7bsmZzPm7Wk
36oDbTlnRMF8dk9GB0Y+0uf8BENgSdcPVe.Z8mjs+MNx5OHGxhEFIQV+Nd66
NhQNTr7F.kU5RaC6ERZOznc04VjuWCD4HgHjAIFeBmjJzyUznyVfNdrRyyp7
IR36SL0DErrrIBU2jaYSI3SjhuGV1JJQHgbg3kkngoWWUDuSNMrHbBMiTZVq
TjyKlvBPw2UUNAvVEBYktPwfvJ07KT56iYRkWELPnKNhoD5wDbpXUJHdN2U1
cDdGJKludGixSEU9pXCIMZo+cns3N+XpHDsR59PBQppyGx9DRDiJYhZXsr4B
xIpUxSqrqJLpQPQGa4iEpdAtzQmxPaYoaPIRUQdxBmhN4LVb8tJ+tX7Ndd2G
IT5UnHmcr6NSH6eomucCSz4g9laUOoqyn5dWKbT4qSQmpi1bTbbtma8o+UDk
HRBh4DsJvwtrScByWR2lvhiqIu5dN0ROQBq3s3OSh3unHTUiAwvIGKLhVTpk
iH6wo75swQ6Sq2Ri.Ehlx1j6ktliObTFwo9.psd7ptjUiwUq89p4o6nXcvfl
U+SWQxbzY5bB5nL3qimAbuhOaTKzP0C0VMQyJRPjGa4MBj0V7zCBHcBLCHCd
633HhXwjS3m2iHzyUYmSnjxXksOh53pWeHXiDqK0nZYIYwjT9fvafpHcOnd0
35Lut98XmF1I95L.9VE+kQ3hVi37DhnPb7kekVAbqftRrKNCy1Us6p8WkjB2
988iy0Fd5KrD9ML9BcfcG8ePfIx9eJ3R7yliAoLi76bDDJgKylpzR10UH0FX
lXfkNcvKiozJt5OGifAUqfrdUj+TO1xfaOFwUEL1uILLTuBqP4ZO6MRqW2Vv
iHLUeKFpiS+.PjndTgpAAIPw1t5091Nz.kf2ETpGioXBEedTy4bINoQYdJfr
Pk8kq+fPF3QCYhJM28XLq.pEt4BGDir6N878AiRPzH1A41tOpHU0M9X.vBnq
ILvaHvJ7tXOUaivFGv3Vbu.5U21bmwtFMV8nsbDoaSXVPp0upNSOrE39CZg4
aEfpLX.XPSH+GcHoe7rn3kvGQYA41WtZuMnyPPk2iFpRQe97iDnbUVStgCAT
t+mvlB7HfJ8xOcCWYlIkyiFopcntOBjx1s8S4sQcA2kf6Zgoa3vdP3nykeDn
25ZW6hmcZS3azFTTb2RtlAHzH7q+u.W..vsALAiCvzlq0k8VVtphN1TQkLH6
ucDKkkkrsvKH2l0ptzDgS4DZ4Ns+wxn.xwYj95l4AaCYB49Kd0.YIQ5SHw9A
yZvtYMvzvZlp5bmRUGzT7ALkbgmobgnHNKvc0.5lXs6qATno999SotqoJoCt
HXB4BGS0RdSHS3ZHSLk7f2s.DfIhI7MjIVMg.wJC4gvIjGL0AskrTSbrCS4r
faKgcNNVbABxOgE84k2xwq7s2x45tOlsAEmepskmXReGx6kSBd9E98Qc0i0U
h53CF9lGC7G7F2n9r50SpklqMHxkolANq4mzUn6JNIMvMioTnATRJxWxlbkw
1cfz1MIM3cQZ4EHqh.0QEd9i.955XfP5ZOFTx2DJsZLnjIVmPmQfRPno9Af2
Kk7LQlFCKBnI5I4+dCiuGm4jd.ONc.tqtZbRpd0Uh6pqCWyqBW2WCtquBbBJ
+84+Ki.dzi.
-----------end_max5_patcher-----------
</code></pre>

Assignment 4 – Sarika Bajaj

The goal of this patch was to create a rendering that reacts to the amplitude of the microphone’s input. I started from a rendering tutorial I found online, located at https://www.youtube.com/watch?v=qf1OGUeIs1s. I began with the video’s original patch and removed all of the audio processing it was doing. After playing around with the rendering portion I had kept, I changed the rendering’s noise type, as well as its scale and appearance. During this process, I discovered a “distortion” input that the rendering originally had set to a fixed value, and decided this was the input I wanted to depend on the amplitude of the audio input (it was giving an interesting zoom effect). Thus, I wrote my pfft~ subpatch to filter the input and pass out only the amplitude, which is then scaled down to act as my distortion input.
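The amplitude-to-distortion step amounts to reducing each analysis frame to one number and squashing it into the small range the distortion input expects. Here is a Python sketch of that idea; the output range is an assumption, not the patch’s actual scaling.

<pre><code>
import numpy as np

def distortion_from_frame(frame, out_lo=0.0, out_hi=0.3):
    """Reduce one audio frame to a scaled distortion parameter."""
    amplitude = np.abs(np.fft.rfft(frame)).mean() / len(frame)
    return out_lo + np.clip(amplitude, 0.0, 1.0) * (out_hi - out_lo)
</code></pre>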

For this example video, I simply used ambient noise as a catalyst (people walking by and talking), as I’m interested in making renderings that use ambient noise/images from an environment in a way that is obvious yet still interesting. Unfortunately, the YouTube compression ruins the effect quite a bit, but the general visual is preserved. A Google Drive link to the video is located here: https://drive.google.com/open?id=0Byn46tolhCwUUlNzNDVObGppY1k

Github Gist Here: https://gist.github.com/anonymous/f69fd0c33650aeab618f81ad8d37ecfe
*** When I tested the compressed code to make sure my file was all right, the rendering stayed stationary for some reason, while my actual patch works fine. For this reason, I am also uploading a zip of my files, in case the Copy Compressed feature mangled something.
Zip of Files: Assignment 4 – Sarika Bajaj

Assignment 4 – Anish Krishnan

For this assignment, I used the pfft~ Fourier transform object to cut out certain frequencies in an audio file, with the cutoff controlled through a slider. I combined the output audio with a modified version of the sound visualizer that we developed in class. By moving the slider up and down, you will notice a change in the quality of the audio, which is also reflected in the characteristics of the moving shapes in the visualizer.
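The frequency gate boils down to mapping the slider onto a cutoff and zeroing the bins above it, roughly as in this Python sketch; the linear slider-to-cutoff mapping is an assumption, and the patch’s actual curve may differ.

<pre><code>
import numpy as np

def frequency_gate(frame, sr, slider):
    """Cut bins above a cutoff set by a 0..1 slider value."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    cutoff = slider * sr / 2.0           # slider mapped to 0..Nyquist
    spectrum[freqs > cutoff] = 0.0
    return np.fft.irfft(spectrum, n=len(frame))
</code></pre>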

Input Audio:

Output Audio:

 

Main Patch:

Frequency Gate Patch:

Sound Visualization Patch:

Assignment 4 – Adam J. Thompson

I’ve used this assignment as an opportunity to continue my explorations of the writings of Virginia Woolf as transformed by digital mediums, as well as to better understand how to use a spectral system to create audio-responsive visuals.

I began by reviewing Jesse’s original shapes/ducks/text patch and reconstructing the components of the pfft~ one by one in order to understand how they work together to write the analysis into a matrix. I subsequently created a system outside of the pfft~ subpatch which randomly pulls a series of lines from Woolf’s novels and renders them as 3D text to a jit.gl sequence.

The only extant recording of Woolf speaking about the identities of words activates the pfft~ process, and the resulting signals control the scale of the text. The movement of the text uses Jesse’s original Positions subpatch, filtered through a new matrix that controls the number of lines which appear at any given time.
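As a hypothetical sketch of that scale mapping in Python (the patch does it with pfft~ signals and Jitter, and the base and depth values here are invented):

<pre><code>
import numpy as np

def text_scale(frame, base=1.0, depth=4.0):
    """Map a frame's overall spectral energy to a 3D-text scale factor."""
    energy = np.abs(np.fft.rfft(frame)).mean() / len(frame)
    return base + depth * np.clip(energy, 0.0, 1.0)
</code></pre>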

At the top of her recording, Woolf says, “Words…are full of echoes, memories, associations…” and I aimed to create a visual experience which reflects this statement as an interplay between her own spoken and written words.

I spent some time altering various parameters – the size of the matrices, the size of the text, the amount of time allotted to the trail, swapping the placement and scaling matrices, etc. – in order to achieve different effects. Some examples of those experiments are below.

Here’s the recording of Virginia Woolf.

Here’s the gist.