A video of the patch in action (sorry about the clipping):
In this project, I connect a multislider object to a pfft instance to allow real-time volume adjustment across the frequency spectrum. The multislider updates a matrix that is read via the jit.peek object inside the pfft subpatch. For each bucket, the subpatch looks up the corresponding value in this matrix and scales the amplitude accordingly. The scaled amplitude is written into a separate matrix, which the main patch reads to visualize amplitude across the frequency spectrum.
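The per-frame scaling the subpatch performs can be sketched outside Max. This is a hypothetical Python rendering, not the actual patch: `real` and `imag` stand in for the FFT bin components pfft hands the subpatch, and `gains` stands in for the per-bucket values read from the multislider matrix.

```python
def equalize_frame(real, imag, gains):
    """Scale each FFT bin by its bucket's gain, mimicking the pfft subpatch.

    Returns the scaled real/imag parts plus the per-bucket magnitudes
    that would be written into the visualization matrix.
    """
    out_r, out_i, amps = [], [], []
    for re, im, g in zip(real, imag, gains):
        out_r.append(re * g)
        out_i.append(im * g)
        # magnitude of the scaled bin, for the spectrum display
        amps.append(((re * g) ** 2 + (im * g) ** 2) ** 0.5)
    return out_r, out_i, amps
```

In the patch itself this happens per-bin inside pfft rather than over whole lists, but the arithmetic is the same.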
At first, the multislider had as many sliders as buckets. However, this was too cumbersome to manipulate easily, so I reduced the number of sliders and had each slider control the equalization for multiple buckets. I initially divided the buckets equally among the sliders, but this led to the first few sliders controlling the majority of the sound and the last few controlling almost none. This stems from the fact that humans are better at distinguishing low frequencies from each other. Approximating the psychoacoustic curve from class as a logarithmic curve, I assigned sliders to buckets based on the logarithm of the bucket index. After doing this, I was happy with the portion of the frequency spectrum controlled by each slider.
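One way to express that logarithmic assignment is the sketch below. It is my reconstruction under assumptions, not the patch's exact formula: it maps a bucket index to a slider index by where the bucket's logarithm falls in the full log range, so low buckets get sliders nearly to themselves while high buckets share.

```python
import math

def slider_for_bucket(bucket, n_buckets, n_sliders):
    """Map an FFT bucket index to a slider index on a log scale.

    Low buckets (where hearing discriminates best) map to many distinct
    sliders; high buckets are lumped together onto the last few.
    """
    # Fraction of the way through the log-scaled bucket range (0..1).
    frac = math.log(bucket + 1) / math.log(n_buckets + 1)
    # Scale into slider indices, clamping the top bucket into range.
    return min(int(frac * n_sliders), n_sliders - 1)
```

With 512 buckets and 16 sliders, for example, the first slider covers only the very lowest buckets while the last slider covers well over a hundred, which is the uneven split the psychoacoustic curve calls for.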
(Also interesting: to visualize the spectrum, I took the logarithm of the absolute value of each amplitude. The points you see in the graph are actually mostly negative, which means the original amplitudes were less than one. I took the logarithm to tame the peaks: the lowest frequencies always registered as far louder than everything else and wrecked the visualization.)
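That display transform is tiny but worth pinning down. A minimal sketch, with one assumption the parenthetical doesn't mention: a small `floor` guards against taking the log of a zero amplitude.

```python
import math

def log_magnitude(amplitude, floor=1e-9):
    """Compress an amplitude for display via log of its absolute value.

    Amplitudes below 1 come out negative, which is why most plotted
    points sit below zero; `floor` is an assumed guard against log(0).
    """
    return math.log(max(abs(amplitude), floor))
```

An amplitude of exactly 1 plots at 0, and the dominant low-frequency peaks are squashed instead of dwarfing the rest of the graph.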
This is the code for the subpatch running inside pfft.
And this is the code for the main patch.