Assignment 4 – yay yay yay lights

For MANY A LONG YEAR I have wanted to build a frequency-to-color lighting control machine. So this is that, or at least a gesture in the direction of that.

This patch does an FFT of an incoming signal and breaks it up into a certain number of bands. Yes, I could have done this with filters, I guess, but I like that these are crossovers rather than filters, since they give slightly cleaner numbers for the colors. The ranges are based on an overtone series: each crossover is double the frequency of the one before it.

It takes the peak intensity of each tranche, scales the values to 0-100 (for eventually going to DMX-land), and returns them to the main patch. The FFT part of this is boring. Sorry. It took me a surprisingly long time to figure out how to make the conditionals work, though… Seriously…
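If it's easier to see as code than as a patch, here's a rough Python/numpy sketch of the same idea: octave-spaced band edges, peak magnitude per band, scaled to 0-100. This works on a single FFT frame rather than a running peakamp~, and the base frequency, band count, and scaling are stand-ins, not what the patch actually uses.

```python
import numpy as np

def band_peaks(signal, sr, base_hz=55.0, n_bands=7):
    """Rough stand-in for the pfft~ patch: octave-spaced bands,
    peak magnitude per band, scaled to 0-100 for DMX."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)

    # each crossover is double the frequency of the one before it
    edges = base_hz * 2.0 ** np.arange(n_bands + 1)

    peaks = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        peaks.append(band.max() if band.size else 0.0)

    # scale to 0-100 (here: relative to the loudest band; the real patch scales differently)
    top = max(peaks) or 1.0
    return [round(100.0 * p / top) for p in peaks]
```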

OK, so now we basically have the data we need to send color values to an ETC D60 VIVID+, which takes 7 values for color and 1 for intensity when in "direct" mode. It does this because it is very fancy. Next problem: the show I want to use this on has seven VIVIDs in it, and I would like each of them to get a 'rotated' color value (e.g. the red value gets mapped to green, and so on), with each light at a different offset. This might end up looking totally nuts, but it's a first pass at some kind of variability across the seven units.
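The rotation itself is just re-indexing the seven color values per fixture. A minimal sketch of that offset mapping (the parameter order on the real fixture may differ):

```python
def rotated_colors(colors, offset):
    """Rotate the 7 color values by `offset` positions, so e.g. with
    offset=1 the red value drives the next emitter over, and so on."""
    n = len(colors)
    return [colors[(i - offset) % n] for i in range(n)]

base = [100, 80, 0, 0, 20, 0, 60]                       # 7 band peaks -> 7 color params
fixtures = [rotated_colors(base, k) for k in range(7)]   # one offset per VIVID
```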

So I'm using poly~ to instantiate seven VIVIDs, each with 7.1 parameters. The DMX channel and intensity from the pfft~ outputs get packed together into an OSC message using sprintf, and that gets sent out to the main window to udpsend. Because the DMX channel comes first in the sprintf message, it needed to bang every time there was a new FFT value (which is often)… so I ended up doing something tricky with an uzi and a counter to continually regenerate those channel numbers. I'm ultimately 'sampling' at a set metro rate that globally drives all the peakamp~ objects and such, rather than using the FFT rate… but the FFT rate would be too fast for comfort anyway, so it's probably OK.
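Outside of Max, the sprintf-plus-udpsend step boils down to formatting a channel/value pair into an OSC message and firing it over UDP. Here's a sketch using the python-osc package; the /dmx address, host, port, and channel layout are stand-ins for whatever the actual DMX bridge expects, not what the patch sends.

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # stand-in host/port for the DMX bridge

def send_fixture(base_channel, color_values, intensity):
    """Send 7 color params + 1 intensity as channel/value pairs,
    roughly what each poly~ instance does via sprintf -> udpsend."""
    for i, value in enumerate(list(color_values) + [intensity]):
        client.send_message("/dmx", [base_channel + i, int(value)])
```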

Anyway, finally, the overall luminosity parameter on the lights is separate from the color intensity parameters (you color-mix, then turn up the Big Knob). So I'm taking a final peakamp~ of the original input and passing it into each poly~ instance as well.

 

ARE THERE PICTURES? No! Sorry! This patch will (maybe) be used this January in New York for the APAP showing of the latest ROKE piece, and all this other gear will be there. Imagine a giant circular screen, backlit with all these crazy colored lights, all of it responding to a modular setup somewhere back there sawing away. It will look vaguely like this:

Here’s the patch:
https://www.dropbox.com/sh/6n2uk8um97ev7sp/AABgysKzBmILQGxrnoie1n3ta?dl=0

Hoffer_Assignment2_HotTubTimeMachine

This is the patch
This is a video of the patch

At first I thought it would be cool to make a patch where pixelated versions of your face would infinitely blur and fall off when you got in front of the camera. That turned out to be well beyond my current skill level, because (1) facial recognition? and (2) how do you manage transparency? I tried to work on something where your face would constantly blur downward towards the bottom of the screen, but again I was having trouble getting the effect to seem… more or less relative to the face… and it didn't work. But this did get me started on delaying and feeding back moving versions of matrices.

From somewhere back in the memory banks, I remembered some JRPG for the SNES where there's a grid of identical sprites flying by in the background, left to right, while things are getting emotional for a character. Can't find it. But I decided to try to pick out an object in the center of the camera frame and have a bunch of replications of that thing fly by in the background.

 

Step 1: I kinda-sorta figured out how to use jit.repos. Which is to say, honestly, I copy-pasted the help file and messed with it until it worked. I still don't quite grasp how the guide matrix works. But at any rate, I got it continuously scrolling vertically. I delayed the vertical scroll and fed it back into itself, with a horizontal scroll inserted just before the feedback point. I left the rates of the horizontal and vertical scrolling as variables. This was starting to work pretty well!
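If it helps to see the structure outside of Jitter, here's a rough numpy sketch of the scroll-plus-feedback loop (jit.repos does a lot more than np.roll, so this is just the shape of it), assuming frames arrive as HxWx3 float arrays in 0-255:

```python
import numpy as np

def scroll_feedback(frame, prev_out, v_rate=4, h_rate=2, feedback=0.9):
    """One step of the scroll/delay/feedback loop.
    `prev_out` is the previous output frame (the one-frame delay line)."""
    scrolled = np.roll(frame, v_rate, axis=0)      # vertical scroll on the live image
    fed_back = np.roll(prev_out, h_rate, axis=1)   # horizontal scroll just before the feedback point
    return np.clip(scrolled + feedback * fed_back, 0, 255)
```

Call it once per frame, passing the last output back in as prev_out, and the copies march across the background.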

 

Next up: this looks stupid, because I am tracking the entire screen and all you really notice are the straight lines and corners. My usual analog approach here would be a luma key, but this is a computer. OK. So I messed around with jit.expr and found something that would make a reasonable vignette mask. That got me close; then I used a boolean expression to filter out all pixels beneath a certain brightness threshold as well. I did the expression on each of the color channels individually… I think I would fix this in later versions to favor less red light in low-light conditions.
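In numpy terms the masking step is a radial falloff multiplied by a per-channel brightness gate. This isn't the actual jit.expr, just the idea; the threshold and falloff values are made up:

```python
import numpy as np

def mask_frame(frame, threshold=0.25, vignette_power=2.0):
    """Vignette mask plus per-channel brightness gate.
    `frame` is an HxWx3 float array in 0-1."""
    h, w, _ = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # distance from center, normalized so the corners land around 1.0
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2)) / np.sqrt(2)
    vignette = np.clip(1.0 - r, 0, 1) ** vignette_power
    gated = np.where(frame > threshold, frame, 0.0)   # boolean gate on each channel
    return gated * vignette[..., None]
```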

Anyway, that got me pretty good pickup of single objects within the screen area. GREAT. Finally, I had a bunch of downsampling ideas lying around from the earlier version, and the camera had to be downsampled on input anyway if my computer wasn't going to slow to a crawl. So I put an additional downsampler on the delay line, so we could crunch up the flying images for fun.
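The crunch itself is just nearest-neighbor downsampling and blowing the result back up; with the same HxWx3 arrays as above, something like:

```python
def crunch(frame, factor=8):
    """Pixelate by keeping every Nth pixel, then repeating each one back out."""
    small = frame[::factor, ::factor]
    return small.repeat(factor, axis=0).repeat(factor, axis=1)
```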

And that’s about it. I went ahead and made a reasonable presentation-mode dashboard, so it would be kind of playable. Some of the more extreme settings are pretty cool, though I know I’m relying on my computer’s slowness to get some of these weird timings. The next step would probably be to hook the parameters up to audio inputs of various kinds, and wahoooo music video o’clock.

 

Hoffer_Assignment1_feedback-recursion

I AM SITTING AT MY COMPUTER. ALL DAY.

Similar to the YouTube codec edition of “I am sitting in a room,” I decided to focus on encoding and decoding within the computer world. No real things! Not allowed. Except the original sample loop, which is real:

This loop is a simple keyboard figure and some tin cans with piezo elements taped to them, looped and run through an eighth-note delay. It’s a little noisy, has a mix of a room mic and a direct line– something pleasantly human, ready to be turned into Robot Time.

TEST 1 – AUDIO TO MIDI

Ableton Live has a cool feature where you can take any audio source and turn it into MIDI notes. The software comes with settings for Melody, Harmony, and Drums– so I assume it works a little better when you have an audio source that is one of those things. Instead, we have this one loop. The Harmony setting seemed to result in the largest number of notes getting encoded from the loop, so I decided to start there.

So we get a scrambly pile of weird MIDI notes. Rather than play the MIDI back through an external instrument, I thought it would be more true to the project to somehow use the initial audio with the resulting MIDI to make the next generation of audio. My first idea was to use a vocoder on the original audio, using the MIDI pile as the carrier– that way I could use the full original audio, and avoid the intrusion of a software instrument. G1 sounded like this:

Cool! Sort of like a Harry Partch version. But I got sidetracked by using the original audio as a sample to be triggered by the MIDI. This would mean triggering pitched versions of the first few fractions of a second of the original audio… Maybe less faithful to Lucier’s idea of serially processing the entire original signal, but it sounded nuts so I couldn’t resist. Here’s the full output, from the original through seven regenerations:

So the natural state for this system is… silence. This is because of the MIDI encoder: it tends to leave areas silent if the audio is too fast, or too complex, or not percussive enough… I think. Anyway, it starts to insert rests after a few generations, including at the beginning of the loop. To keep things from falling silent too quickly, I allowed myself a cheat of setting the sampler to play back from the first recognizable audio, rather than from the now-silent front of the original loop. Also, oddly, the MIDI's pitch tended to come out a perfect fifth above the original file. Recursing on that feature also took us out of the piano range pretty quickly, so I corrected the resulting audio down 7 semitones (a fifth) with every generation as well.

TEST 2 – STRETCH CONTRACT

Over to Logic! I'm really into using the natural glitchiness of pitch-maintaining time-stretch algorithms when they are stretched over-long. I wondered what would happen if I used Logic's Stretch Time to stretch the loop, printed it, then used the same process to contract it. I'd then print the contracted version, stretch it again, etc., etc. The time stretcher maintains the sample rate, so when the loop is stretched to double its length, you get an equal number of original samples and algorithmically generated "guess" samples. On contraction, half of those samples have to be thrown away. Would the process selectively remove the guess samples, or would it take a mixture? Would I eventually get a loop that was mostly guesses? I decided to go with a 2:1 stretch.
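Logic's stretcher is its own animal, but the loop of the experiment looks roughly like this with an off-the-shelf phase-vocoder stretch (librosa here, with a made-up filename): rate=0.5 doubles the length, rate=2.0 squeezes it back.

```python
import librosa
import soundfile as sf

y, sr = librosa.load("original_loop.wav", sr=None)   # stand-in filename

for generation in range(8):
    stretched = librosa.effects.time_stretch(y, rate=0.5)   # out to 2x length
    y = librosa.effects.time_stretch(stretched, rate=2.0)   # back down to 1x
    sf.write(f"gen_{generation + 1}.wav", y, sr)             # "print" this generation
```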

After eight regenerations, the short answer is “no”. Hard attacks were blurred by the time stretch algorithm– no surprise to me– but a lot of the other primary sounds sounded pretty much the same after a bunch of iterations. Fascinatingly, the biggest change was a reduction in the noise floor: the final loops feel like a gated version of the original, but like a really transparent gate.

TEST 3 – MP3-O-RAMA

Again, inspired by the YouTube iteration file. In the misty past I've tried recursive mp3 encodings, and indeed, after a few hundred recursions you sound like a total monster. I thought it might be interesting to see what the difference was between an mp3 and an original, and somehow iterate that process.

I created an mp3 version of the loop, inverted it, and combined it with the original. Therefore, anything that made it into the mp3 that is identical to the original will be cancelled out, and we should be left with just the difference between the tracks. I'd then take that resulting 'difference track' and use it as the original audio for the same process, over and over. Through this process, we should eventually get something more and more like the character of the mp3 encoder itself. In the photo above, the top track is the generations of "original" audio, and the bottom is the mp3 versions. Here's what the top line sounds like, at 20 regenerations:

Spooky. But not exactly the distilled nature of mp3. That, it turns out, was what started to come out of the bottom line: mp3 re-encodings of the vague rhythmic noise heard above produced some pretty isolated examples of the encoder generating pitches seemingly at random:

Some technical notes: to maximize the amount of audio that would show up in the inversions, I used a very low-bitrate mp3 encoder. And after I was finished with the recursion, I normalized the resulting samples up to a more audible level, boosting the overall level while more or less maintaining dynamic range.
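Each generation, in other words, is a plain null test: encode, decode, flip the polarity, sum. Here's a sketch of one pass with pydub (which hands the mp3 work off to ffmpeg/lame); the filenames and bitrate are stand-ins, and real mp3 encoders pad the start of the file, so in practice the decoded copy may need nudging into alignment before the inversion cancels anything.

```python
from pydub import AudioSegment

def difference_generation(wav_path, gen):
    """One generation: encode to a low-bitrate mp3, decode it,
    invert it, and sum it with the source to keep only the difference."""
    source = AudioSegment.from_file(wav_path)
    mp3_path = f"gen_{gen}.mp3"
    source.export(mp3_path, format="mp3", bitrate="32k")

    decoded = AudioSegment.from_mp3(mp3_path)[:len(source)]   # trim to the source length (ms)
    diff = source.overlay(decoded.invert_phase())              # original + polarity-flipped mp3

    out_path = f"gen_{gen}_difference.wav"
    diff.export(out_path, format="wav")
    return out_path
```

Feeding the returned difference file back in as wav_path gives the next pass.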