Will Walters – 18-090 Twisted Signals
https://courses.ideate.cmu.edu/18-090/f2017

Small Production Environment – Will Walters Final Project
https://courses.ideate.cmu.edu/18-090/f2017/2017/12/04/small-production-environment-will-walters-final-project/

For my final project, I created what I’m calling a Small Production Environment, or SPE. Yes, it’s a bad name.

The SPE consists of three parts. The first is the subtractive synth from my last project, with some quality-of-life and functionality improvements. This serves as the lead of the SPE.

The second is a set of four probabilistic sequencers. These let the SPE play four separate samples, with a probability specified for each sixteenth note in a four-beat measure. This serves as the rhythm of the SPE.
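For reference, here is a minimal Python sketch of how one sequencer lane behaves. The actual patch is built in Max; the step count, lane layout, and names below are illustrative assumptions, not the patch's internals:

```python
import random

STEPS = 16  # sixteenth notes in a four-beat measure

def tick(lane_probabilities, step):
    """Roll once per step: the lane's sample fires if the roll
    lands under that step's probability."""
    return random.random() < lane_probabilities[step % STEPS]

# Hypothetical kick lane: always hits beats 1 and 3, with
# occasional sixteenth-note pickups.
kick = [1.0, 0, 0, 0.25, 0, 0, 0, 0,
        1.0, 0, 0, 0, 0, 0, 0.25, 0]

for step in range(STEPS):
    if tick(kick, step):
        print(f"step {step}: play kick sample")
```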

Finally, the third part is an automated bass line. It plays a sample at a regular, user-defined interval; it also detects the key the lead is playing in and shifts the sample accordingly to match.
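If the shift is done by changing playback rate (an assumption about the mechanism; the key detection itself comes from the Modal Analysis library credited below), the math reduces to a ratio of semitones. A hedged sketch, with hypothetical MIDI numbers:

```python
def playback_ratio(sample_root: int, detected_root: int) -> float:
    """Ratio to play a sample at so it lands on the detected root.
    Transposing by n semitones = playing back at 2**(n/12) speed."""
    semitones = detected_root - sample_root
    semitones = (semitones + 6) % 12 - 6  # wrap to the nearest octave
    return 2.0 ** (semitones / 12.0)

# e.g. a bass sample recorded on C (MIDI 48) against a lead detected
# in E (MIDI 52) plays back four semitones up, ratio ≈ 1.26.
print(playback_ratio(48, 52))
```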

The SPE also contains equalization for the bass and drums (jointly), as well as for the lead. In addition, many controls can be altered via MIDI keyboard knobs. A demonstration of the SPE is below.

The code for the main section of the patch can be found here. Its pfft~ subpatch is here.

The embedded sequencer can be found here.

The embedded synth can be found here. Its poly~ subpatch is here.

Thanks to V.J. Manzo for the Modal Analysis library, An0va for the bass guitar samples, Roland for making the 808 (and whoever first extracted the samples I downloaded), and Jesse for his help with the probabilistic sequencers.

 

Project 1 – 3x Oscillator – Will Walters
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/30/project-1-3x-oscillator-will-walters/

For this project, I created a synthesizer instrument called a 3x Oscillator. It does more or less what it says on the tin: the user controls three oscillators which can be played via MIDI input. When a note is played, the oscillators produce sound in tandem, creating a far fuller sound than a single tone. The oscillators can be tuned and equalized relative to each other, and the waveform of each can be selected: sinusoid, sawtooth, or square. Other options for customization include the total gain of the instrument; independent control of the attack, decay, sustain, and release; and a filter with both its type and parameters customizable.
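As a rough illustration of the idea (not the patch itself, which is built with poly~ in Max), here is a Python sketch of a three-oscillator voice with naive waveforms, per-oscillator detune and gain, and a linear ADSR. Every constant below is an assumed example value:

```python
import numpy as np

SR = 44100  # sample rate

def osc(shape, freq, t):
    """Naive (non-band-limited) sine/saw/square oscillator."""
    phase = (freq * t) % 1.0
    if shape == "sine":
        return np.sin(2 * np.pi * phase)
    if shape == "saw":
        return 2.0 * phase - 1.0
    return np.where(phase < 0.5, 1.0, -1.0)  # square

def adsr(n, attack, decay, sustain, release):
    """Linear ADSR envelope over n samples; times in seconds."""
    a, d, r = (int(x * SR) for x in (attack, decay, release))
    hold = max(n - a - d - r, 0)
    env = np.concatenate([
        np.linspace(0, 1, a, endpoint=False),
        np.linspace(1, sustain, d, endpoint=False),
        np.full(hold, sustain),
        np.linspace(sustain, 0, r),
    ])
    return env[:n]

def note(freq, dur=1.0,
         shapes=("saw", "saw", "square"),
         detune_cents=(0.0, -7.0, 7.0),
         gains=(0.5, 0.3, 0.2)):
    """Mix three detuned oscillators, then apply the envelope."""
    t = np.arange(int(dur * SR)) / SR
    mix = sum(g * osc(s, freq * 2 ** (c / 1200), t)
              for s, c, g in zip(shapes, detune_cents, gains))
    return mix * adsr(len(t), 0.01, 0.1, 0.7, 0.2)

audio = note(220.0)  # A3 with a ±7-cent detune spread
```

The detune is what produces the fuller sound: three near-identical frequencies beat against each other, thickening the tone.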

Here’s a video of me noodling around with it:

(sorry about the audio in some places, it’s my capture, not the patch itself)

The main patch can be found here.

The patch used inside poly~ can be found here.

Assignment 4 – Multislider EQ – Will Walters
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/16/assignment-4-multislider-eq-will-walters/

A video of the patch in action (sorry about the clipping):

In this project, I connect a multislider object to a pfft~ instance to allow for real-time volume adjustment across the frequency spectrum. The multislider updates a matrix which is read via the jit.peek object inside the pfft~ subpatch. The subpatch reads the appropriate value of this matrix for the current bucket and adjusts that bucket's amplitude accordingly. The resulting amplitude is written into a separate matrix, which is in turn read by the main patch to create a visualization of amplitude across the frequency spectrum.
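In spirit, the per-frame work of the subpatch looks like the following Python sketch. Max does this per spectral frame inside pfft~, reading the gains out of a Jitter matrix; the array sizes here are assumptions:

```python
import numpy as np

def eq_frame(spectrum: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Scale each complex FFT bucket by its gain from the table."""
    return spectrum * gains[: len(spectrum)]

frame = np.fft.rfft(np.random.randn(1024))  # stand-in for one analysis frame
gains = np.ones(len(frame))
gains[:32] = 0.25                           # e.g. duck the lowest buckets
shaped = eq_frame(frame, gains)             # resynthesized by the inverse FFT
```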

At first, the multislider had as many sliders as buckets. However, this was too cumbersome to manipulate easily, so I reduced the number of sliders, having one slider control the equalization for multiple buckets. At first I divided these equally, but this led to the problem of the first few sliders controlling the majority of the sound and the last few controlling almost none. This stems from the fact that humans are better at distinguishing low frequencies from each other than high ones. Approximating the psychoacoustic curve from class as logarithmic, I assigned sliders to buckets based on the logarithm of the bucket index. After doing this, I was happy with the portion of the frequency spectrum controlled by each slider.
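A sketch of that assignment, assuming (hypothetically) 512 buckets and 16 sliders:

```python
import numpy as np

N_BUCKETS, N_SLIDERS = 512, 16  # assumed sizes, for illustration

def slider_for_bucket(bucket: int) -> int:
    """Map bucket index -> slider index along a log2 axis, so low
    buckets get finer control than high ones."""
    frac = np.log2(bucket + 1) / np.log2(N_BUCKETS)  # 0..1, log-spaced
    return min(int(frac * N_SLIDERS), N_SLIDERS - 1)

# The first slider ends up owning just the lowest bucket, while the
# last slider covers roughly the top third of all buckets.
print(slider_for_bucket(1), slider_for_bucket(256), slider_for_bucket(511))
```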

(Also interesting: to visualize the spectrum, I plotted the logarithm of the absolute value of each bucket's amplitude. The points you see in the graph are actually mostly negative, which means the original amplitudes were less than one. I took the logarithm to tame the peaks; the lowest frequencies always registered as far louder than everything else and otherwise wrecked the visualization.)

This is the code for the subpatch running inside pfft~.

And this is the code for the main patch.

Project Proposal 1 – Will Walters
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/02/project-proposal-1-will-walters/

I’d like to make a playable 3x Oscillator in Max. The basic functionality will be three separate oscillators with switchable waveforms which can be (de)tuned and volume-adjusted separately. On top of hooking this up to a keyboard (and maybe adding functionality to read from USB MIDI input?), I could implement a number of user-customizable options like high/low-pass filters, panning, reverb, and EQ. I could also add some visualizations of the resulting waveform.

Assignment 3 – Will Walters
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/02/assignment-3-will-walters/

My original signal is a demo version of the song ‘You or Your Memory’ by The Mountain Goats.

The original:

 

My first two signals were created by popping a balloon on the other side of a door from the recorder, and by recording the sound through Snapchat and then playing it back.

Here’s the IR and the song from the other side of a door:

 

And here’s the IR and the song through Snapchat:

 

Next, I used as my IR the sound of me knocking on my desk with my recorder pressed against it. There was a plate on the desk, and the sound of a fork rattling against the plate created a pitch.

Here are the IR and the song convolved with this IR:

I was a bit disappointed that it sounded similar to the first two, but it is cool to note that the frequency from the fork and plate causes a resonance in the song.

 

Finally, I recorded a short clip of myself eating yogurt and used that as the IR. I’d like to thank my roommate for donating his yogurt for the sake of art. Here’s that IR and the resulting song:

 

Sorry that the IR for this one is so gross. But the distinct spikes in the yogurt IR do create a cool preverb effect in the song.
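For anyone who wants to reproduce the effect, the convolution step itself is compact. A hedged Python sketch (the filenames are hypothetical, and it assumes mono files at the same sample rate):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

sr_song, song = wavfile.read("you_or_your_memory.wav")  # hypothetical path
sr_ir, ir = wavfile.read("yogurt_ir.wav")               # hypothetical path
assert sr_song == sr_ir

# Convolution lays down one delayed copy of the song per IR sample,
# weighted by that sample; spikes early in the IR, ahead of its main
# peak, are what read as the preverb pre-echoes.
wet = fftconvolve(song.astype(np.float64), ir.astype(np.float64))
wet /= np.max(np.abs(wet))  # normalize to avoid clipping

wavfile.write("convolved.wav", sr_song, (wet * 32767).astype(np.int16))
```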

Assignment 2 – Will Walters
https://courses.ideate.cmu.edu/18-090/f2017/2017/09/18/assignment-2-will-walters/

For this assignment, my first attempt was to filter video feedback through a convolutional matrix which could be altered by the user, allowing variable effects, such as edge detection, blurring, and embossing, to be fed back on themselves. However, using common kernels for this system with the jit.convolve object yielded transformations too subtle to be fed back without being lost in noise. (The system I built for doing this is still in the patch, off to the right.)

My second attempt was to abandon user-defined transforms and instead use Max’s built-in implementation of the Sobel edge-detection kernel. However, applying the convolution to the feedback itself meant the edge detection was run on its own output, causing values in the video to explode. I solved this by applying the edge detection to the camera input only, and then adding the original footage back in before the final output, as sketched below. (It arguably looks cooler without the original image added, depending on the light, so I included both outputs in the final patch.)
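A Python sketch of that signal flow only, assuming grayscale frames in 0..1 (the patch itself works on Jitter matrices, and the decay constant is a stand-in):

```python
import numpy as np
from scipy import ndimage

def sobel_edges(frame: np.ndarray) -> np.ndarray:
    """Gradient magnitude via the Sobel kernel."""
    gx = ndimage.sobel(frame, axis=0)
    gy = ndimage.sobel(frame, axis=1)
    return np.clip(np.hypot(gx, gy), 0.0, 1.0)

def feedback_step(camera: np.ndarray, previous: np.ndarray, decay: float = 0.9):
    # Edge-detect the camera input only, never the fed-back frame,
    # so edge values can't compound and blow up.
    fed_back = np.clip(sobel_edges(camera) + decay * previous, 0.0, 1.0)
    display = np.clip(fed_back + camera, 0.0, 1.0)  # add the original footage
    return display, fed_back

state = np.zeros((240, 320))
for camera_frame in (np.random.rand(240, 320) for _ in range(3)):
    display, state = feedback_step(camera_frame, state)
```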

Assignment 1 – Will Walters
https://courses.ideate.cmu.edu/18-090/f2017/2017/09/04/assignment-1-will-walters/

 

The above video is the result of a beautifying filter from the photo-retouching app Meitu being applied to the same picture (of myself) about 120 times. The filter itself was the one the app calls ‘natural’, set to be as subtle as possible. (A slider in the app controls how drastic the changes are.)

To follow the themes presented in the in-class examples, this project was meant to demonstrate certain fundamental artifacts and assumptions in the process itself: in this case, what does the app consider beautiful? In the final result of the process we see those assumptions extracted and brought to their extremes.

The music on the video doesn’t have anything to do with feedback loops; I just thought having it in the background was funny.
