Category Archives: Assignments

Trippy Rave Visual Processor

I wanted to make an interesting convolution effect in which the kernel is applied at full strength at a single point and then decays exponentially toward the edge of the image, or a given radius. Unfortunately, I was doing a lot of pixel-by-pixel calculation and, as I found out much later, Jitter doesn’t really like that (apparently I would have had to write my own shader).
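
For what it’s worth, the decaying-kernel idea is easy to sketch outside of Jitter. In this Python/NumPy mock-up, a box blur stands in for an arbitrary kernel, and the convolved image is blended back into the original with a weight that decays exponentially with distance from a focal point (the blur choice and radius are illustrative, not from my patch):

```python
import numpy as np

def box_blur(img):
    # 3x3 box blur via padded neighborhood averaging
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += p[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
    return out / 9.0

def decaying_effect(img, center, radius):
    """Apply the kernel fully at `center`, fading its influence
    exponentially with distance (weight 1 at the point, -> 0 far away)."""
    blurred = box_blur(img)
    ys, xs = np.indices(img.shape)
    d = np.hypot(ys - center[0], xs - center[1])
    w = np.exp(-d / radius)
    return w * blurred + (1 - w) * img
```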

So I settled for this trippy video processor that takes video and audio, feeds the video through some delay lines, and then convolves them with randomly generated kernels weighted toward specific effects, based on thresholds in the audio signal. I figured it would be fine, since people at raves aren’t usually too picky.
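
The audio-driven kernel choice can be sketched like this (the threshold, weights, and two “effect families” are invented for illustration, not taken from the patch):

```python
import random

def pick_kernel(level, loud=0.5):
    """Generate a random 3x3 kernel, weighted toward an edge/sharpen
    look when the audio level is above the threshold and toward a
    blur look when it is below."""
    base = [[random.uniform(-0.1, 0.1) for _ in range(3)] for _ in range(3)]
    if level > loud:
        base[1][1] += 8.0      # big center spike: sharpen/edge-ish
        bias = -1.0            # negative surround
    else:
        bias = 1.0 / 9.0       # push all weights positive: blur-ish
    return [[w + bias for w in row] for row in base]
```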

For the video, I used one of my favorite songs, Figure 8 by FKA Twigs, paired with one of my favorite dance clips, an excerpt from the pas de deux of Matthew Bourne’s all-male Swan Lake. I hope you enjoy!

Halftone jitterator

This project takes an input image, reads through a Jitter matrix, and draws a circle based on the position and value of each cell. The method this software uses is very time-inefficient; at the last moment, I discovered a Jitter object that does this efficiently. Even so, this method would make a really interesting selfie photo-booth installation: viewers would take a selfie, then have to wait to see how it came out, lol.
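
The per-cell circle idea corresponds roughly to classic halftoning. A hypothetical Python version (the cell size and radius law are my choices, not the patch’s):

```python
import numpy as np

def halftone_circles(gray, cell=8, max_r=None):
    """Turn a grayscale image (0..255) into (x, y, radius) circles,
    one per cell; darker cells get bigger dots, with dot area
    proportional to darkness."""
    max_r = max_r or cell / 2
    h, w = gray.shape
    circles = []
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            block = gray[y:y + cell, x:x + cell]
            darkness = 1.0 - block.mean() / 255.0
            r = max_r * np.sqrt(darkness)   # area ~ darkness
            if r > 0.1:
                circles.append((x + cell / 2, y + cell / 2, r))
    return circles
```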

parent patch:

sub 2:

sub 3:

Digitally Prepared Piano V. 1

Can Max/MSP be used to “prepare” the keys of a digital piano in order to bring an additional level of expression to a composition?

The tradition of the prepared piano (credited largely to the twentieth-century composer John Cage) is to alter the inside of the instrument with elements that change the audio characteristics and response of specific keys, using physical objects that are attached to, or placed between, the instrument’s strings. Items like paperclips, erasers, springs, screws, bolts, tacks, cutlery, cotton, and clothespins were commonly used to modify the prepared piano’s sound in the related works of Cage and other experimental composers.

This project sets up a MIDI piano for a performance of the piece Bombarider in this digital setting. It designates specific keys to be altered (or prepared) using techniques including signal filtering, time shifting, and spectral processing.
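
A toy sketch of the key-separation idea in Python (the note numbers and effects are placeholders for illustration, not the piece’s actual preparations):

```python
# Route incoming MIDI notes to per-key "preparations", the way the
# patch separates specific keys for filtering or time shifting.

def ring_mod(freq):        # stand-in for a spectral preparation
    return f"ring-modulate at {freq} Hz"

def delay(ms):             # stand-in for a time-shifting preparation
    return f"delay by {ms} ms"

PREPARATIONS = {           # MIDI note -> effect (hypothetical mapping)
    60: ring_mod(220),     # middle C gets ring modulation
    64: delay(250),        # E4 is time-shifted
}

def handle_note(note, velocity):
    """Prepared keys go through their effect; all others play dry."""
    return PREPARATIONS.get(note, "dry piano voice")
```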

Live performance will be presented in class.

Secrets (project 1)

For this project I wanted to focus on the idea of being alone in a crowd. The idea is that when we are vulnerable in public spaces like parties, our deeply personal moments often become part of the cacophony around us, turning indecipherable.

This patch is designed to be set up as an art installation in a space like the Media Lab. The computer running the patch, a microphone, and a camera would be positioned in a closed space, like a photo booth, near the performance space. Spectators would be directed to first enter the booth and then the performance space. In the booth they would be asked to tell something personal to the microphone and press a button when done, then enter the space to observe the performance.

In the room, whatever the person said would be convolved and played from one section of the room, then convolved again and moved to a different section, and so on, four times. After that, their secret would find a resting place in one of the four corners of the room, where it would loop and slowly decay over time. On a screen in the room, a video feed of people saying new secrets would play, mixed with video feedback from previous people telling their secrets, so that they all blend into a kind of approximate anonymous person. After a few people have used the system, the overlapping sounds and convolution would make it impossible to make out what anyone said or who said it, creating a sonic soundscape built entirely out of personal moments.
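
The corner-to-corner hops amount to repeated convolution with the room’s impulse response plus attenuation. A minimal NumPy sketch of that idea, assuming a normalized IR and an illustrative per-hop decay factor:

```python
import numpy as np

def secret_hops(secret, ir, hops=4, decay=0.5):
    """Convolve a recorded secret with an impulse response once per
    hop, attenuating each time, as the sound moves corner to corner."""
    ir = np.asarray(ir, dtype=float)
    ir = ir / max(1e-12, np.abs(ir).sum())   # keep the IR from adding gain
    stages, x = [], np.asarray(secret, dtype=float)
    for _ in range(hops):
        x = decay * np.convolve(x, ir)       # one more trip through the room
        stages.append(x)
    return stages
```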

A few notes: to make this installation more flexible, the IR signal used is variable and must be set at program startup via a simple drag box. Additionally, there is one button to turn on the microphone and camera, and a second button to indicate that one person is done speaking and the next has begun, so that their voices get bounced around independently.

The current version of this relies on 8-channel output and the ability to send signal from the booth to the room. If given more time, I would love to find a way to transmit this data over the internet from one Max instance to another, allowing this installation to be set up with less equipment. Similarly, to make it more accessible, I would look into using 2-channel output with 3D sound techniques to create the illusion of directional, immersive sound.

A small sample of what this would sound like can be heard here, but it really only makes sense in the surround-sound live space, and this is a 2-channel recording.

The patch is hosted at https://github.com/dtpowers/Secrets

Or you can copy-paste it from below.

I also created a presentation mode for this patch, so I encourage you to check that out if you are running the patch yourself; it’s not pretty, but it simplifies the system to just the important parts.

Motion Controlled Feedback

https://www.youtube.com/watch?v=0qdhJajKoYs

My goal was to make an instrument out of only tuned feedback (with maybe an impulse or two to get it going) that I could control by waving my hand around over a sensor.
The bread and butter of the patch consists of eight delay lines, each tuned to a frequency.  Each delay line has two filters set to the same pitch as the line and a compressor on the feedback into the tapin~/tapout~ line.  To choose the harmonic content, I made a subpatch that propagates semi-random pitches to each delay line, or you can play notes one by one on the kslider.  Finally, there’s some reverb and delay in line before the whole mess feeds back into itself.
The other important component is a Leap Motion controller being fed into Wekinator to provide control data for this whole mess.
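
One of the tuned feedback delay lines can be approximated in Python along Karplus-Strong lines. In this sketch, the delay length is sr/freq samples and a one-pole lowpass stands in for the patch’s filters and compressor (the feedback amount is illustrative):

```python
import numpy as np

def tuned_feedback(freq, sr=44100, dur=1.0, feedback=0.98):
    """One tuned delay line: an impulse recirculates through a delay
    of sr/freq samples with a one-pole lowpass in the loop, so the
    line rings at roughly `freq` and slowly decays."""
    n = int(sr * dur)
    delay = max(1, int(round(sr / freq)))
    buf = np.zeros(n + delay)
    buf[0] = 1.0                      # impulse to get it ringing
    prev = 0.0
    for i in range(delay, n + delay):
        fb = buf[i - delay]
        prev = 0.5 * (fb + prev)      # one-pole lowpass smooths the loop
        buf[i] += feedback * prev
    return buf[:n]
```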

I mapped the resonators onto the horizontal plane so that I could choose which pitch to emphasize with the position of my hand.  Height selects the octave, with high being an octave up and low being the fundamental.  The openness of my hand controls the brightness of the resonators’ timbre.  The distance between my hands controls the speed of the metro triggering the random-note propagation device.

I still consider this a work in progress, but I’m happy turning it in and would have fun performing with it.

I tried to pass the same data to control the lighting rig in the lab, but the results were underwhelming…

Wekinator training files here: https://www.dropbox.com/s/4yijpatn9gsnsvv/feedback%20handcontrol%2020161106.zip?dl=0

Pt. 1

 

Pt. 2

Pt. 3

 

 

The Psychoacoustics of Sonic Space by Kayla Quinn

Can sound alone change the way a person perceives a space? When you walk into a building, are you ever surprised by what it sounds like? As an architect, I am most concerned with real-world spaces and how we can use digital tools to help expose and explain the wonders of acoustics and sound design. Sound is an undeveloped and unappreciated design element that largely goes unnoticed because it is invisible. My project is an acoustic installation along the CFA staircase that looks at how people notice and understand the changing acoustic environment around them. As people walk up and down the stairs of CFA, recorded ambient sounds will change in frequency (as you go up, the frequency gets lower; as you go down, it gets higher).  The installation will run in Max on a Mac computer, with an Octa-Capture interface and between 4 and 8 speakers, each with a filter covering a different range. There will be four different ambient tracks: prerecorded CFA noise, mall/city traffic with manipulated CFA sound mixed in, ambient nature noise along a beach, and classical music.
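
One way to sketch the per-speaker filter spacing in Python (the band edges and the logarithmic spacing are my assumptions, not the installation’s settings):

```python
import math

def speaker_cutoffs(n_speakers=8, low=200.0, high=3200.0):
    """Give each speaker along the staircase its own filter frequency,
    spaced logarithmically so pitch falls steadily as you climb:
    speaker 0 (top of the stairs) gets the lowest band."""
    ratio = (high / low) ** (1 / (n_speakers - 1))
    return [low * ratio ** i for i in range(n_speakers)]
```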

The installation is meant to be observed neutrally. Will people notice the change in acoustic atmosphere? Will they like it so much that they take the stairs instead of the elevator? Do they hate it, so they go out of their way to walk around it? Do they only notice it on the way down instead of the way up? Do they take their time to listen to it, to wander through the stairway? Do they point it out to someone else?

The installation is on schedule to be ready on November 28th and to stay up for a week. It will be between the mezzanine level and the 3rd floor of CFA. A video camera will be set up to record people’s responses to the installation, and a few interviews will take place and be recorded for the results.

Here are the max files:

screen-shot-2016-11-06-at-10-24-25-pm

screen-shot-2016-11-06-at-10-25-17-pm

and the ambient noise of CFA:

https://soundcloud.com/user-233892197/cfa-ambient-noisewav

<pre><code> ———-begin_max5_patcher———- 2173.3oc4bs0aiaiF84jeEF54TO7pDUepEXvh8gcPAJP6hcKJBjkncXGYIWI 4IS1hN+1WdQRQ1UNl1zCiaT.hsLEozgG9c99n3E8G2dSvhxOyqCl8sy9kY2b yeb6M2nSRkvMs+9lf0IeNMOoVmsfB9ikK9sf6Lmpg+4FcxYIoeoKwhsqEE47 FcAPsItIoI8AQwp6q3oMlaXbLbN3tYLHU8ENR8IBMGL6We9BUtso6JAZSUjo uiRT7MTR28rt4obtN8.UB+4s2p93NKqTaxSdJWT2zWEL21lm1vMPMnVrpHIO 3twOR8elHsQTVjT8TvvJvyLA7EXhPckOBDp9BRAZBAbPpfNBUfGgJZSIKoIo s12V8uIHMWrougevIjmZoHmWjr1bM9PRpnnor9gY+y2+su6mp4U0uaMOSjjm rXq7Wu68kOVjWljU+t+6O7Ce..fn6+W+37+82+ys28tq3GEEFzlrMSTpRYPF xKK2LnMVCvxhFdQy80MIM7NzODmxZ5FNOyTGfOyUl6XY05jhlwOYYkXkP1z0 vWuorMKHvAxjjKV07v5VtZ+bUmuspQrlO9YWjTKsJ1VHasMYfP1MCaDRagzx Jkofz3o8praVj1lik9xx77xGWkWtXmJxtYRAs5lJt7tL1o+8sI4hl1KuAsAi QiGAgpJQ8ChkGftWWlwO7sXWdtmGkJpFQ5Gkz9u1kWsn1bPahlTNCw9hjpCJ 0UhYgxz4EUwOHx3UOtb4QT1H.QKowZebvXi.O9fJazeUYigWFmbGvysR6JMR pmAI.vgIkd7tTpKqE+O8IfRu0mnqNDFGZ7toc9GiNhWe3ekPHQACvRumpuuR n7FeInpk4kRT7hVHKRTZ5QIE3bZLExh5OmTCoq8gcbeknNMwfOv73ykAwL0W D8OPvSwjhvNIFrGKUxb1vqtWFdXQK7ubFhptPLaGfcHF0ZlRI6nRJhhrhoFo GFj3u11ZGUVB7hrDgXNJKofosrriAOeYIEdEJK2rbYyWlsrh+6a4EoOk9fjg 4RqRDYF5XcWd7Pn3iqXwsAJCiOC6Pzap3kfHnqwKgSagYGC5P7Rz0Z7xnKY7 RYI.tFuDOMhWJ6IO1UYIYRKK6YPGjkzod7xNEqKwKIgukhWBiBoNJLwzosvr iAOegIN7ZMdY3EMdIMj3X7RbzDIdYHMxUYIaZKK6XPGjkwS93ksJVmhWBdSE ujB.NJLQ3osvriAOegIJ5ZMdI8hFuDEG6X7RDahDuDaThtDubZOdr8LHAc1w KwS93ksJVWhWhIukhWRicc3Xm1iFaG+4vn9bsFqDcIiURjjhagJoSiHkDlyB xocfxNB77UjvvodbxV0pKgIgQuodrRDwYYIZZ2+0NFzAc4UanRxE8wJADWm1 R3T4wJgXWm1Rzzt+q8LnCi1yjObYmh0k3kn32RwKiwNOJrSZYYG+4vjibsFq DeIiUFgbcDXCmFQJYHWG+0o8pTui+NeAIapGkrUq5zCU90cuiTuTsYZ9xruq eOcM68I+isEebdhX4ruSsqtlAsXO0sqYps6ePXTnhefshS3w3mwdBoWXWyUm 7Id18xbIug2mzHkCK11X1rjC2Hc6sGuHjt8v136PqAmcjsVU+Y61yTs6Xptj GayRg.Qz4grXBEOXSSseI5A.TxSwx+hh5xxHaBsdbLXGhsGTFaqkc9aJqqid ggAttIHPS6E0YGA5vCGQuV6GF7hNlERFxwYBGMUVo0tpIgwS7EZsyZRvje.K LxUW5J1QFtB8EJHWTr+qiAMZTouKEUWtsJsqZ0d8edeYGjIiMKC6+7V1t2mw fL8fHKiOLdePlnV0Tkc31JagCyB3n5c5LnWfSjEvg4MxQ8xZ3nvQ8xs3pCO9 o0hhr.OjX+wOPa3Gj+3GfM3A5M9gvrkeP9AO1H2I9Suq1hMGGOX+gGpM3Iza 
1yDhM3g5O9AYK+3I6Ya7+P7WvchM9evQ9CO1zYiHu0bgsouXZNzOxKrMtmw9 KbJNzV9wSsW13NbOP+UEO1XOi7W3KrMcOD6utGhswcHzegSwV4Nzi3w1m8xO xKjMtCg9ybFEYq4rm3GabGpIQ+HuP13NL1eziUdC8WmCQ1HuP9qyOHa7Fh7W zKDvRyGOYMaiX2ajCzpgoC3O7XiuP+osfg11Sd+X7.I15Z1O9BgXK0VuTy0Z Q1lRQQS6.MCIsu4c0CpMjXls.Fcvfa64J.EcZ0.DLrcI.D95AZR3IR6wQOut Ed0.M3DAcHTar.iPudfFSNQPiosV3vWOPiNUUIfXljIhd4FCAsuTnYjWu5.L 5zpCDjF6nH7qHlgmFliH.Csq+JBGaX8uRZTa5bBzeyCDzltR5udRBsomjPOh Ga7ndlnwL+uIa17IdUc6kTCjf0I+VYk5m5EHfzdsv7S8D0GTw+jnK+DcJIUo OHZ3oMaqLSy7mYl0rqdQkUUrUnmg5aUTvssSG8dy4b+T7u9IyrUqAurFuLYa dytrzhUok4FDoe0eemZIJnVeYjtihg2M7cBtdJ70k4dQgp1xG7R+94OFVhEq VJxy6uOCW9ccStevppjLAunoeApM.Vf4nX.LNTcowfXFfXNRlztXqsTvthwn PPrJuLBBQLGgA.BZrRgFvA67wy4KoXkYt+QQf48KxtMUkaJq5W6cyww84eaS YeEqyhwzvs+xwPZ+8gxhjzx12X5sYRsLLjEd+kmQvZYCon6RNTLz21aV2E+G tZQL1so0G2DHIMkWzrCeyPTXjhthXPFT2jRioD5tsqCWFL+hdwAQiA.1EpB7 nRFLacYQY8ljT9oVCrpoOfmyWueQiXfXn1DiA.HHybUvvP1tkcmBAYLFRkSD iAaMPoPLFtuTXmREyPwDrVoEwnzHyQxd5f9aplyFOH6p2jbKDGAzFYLTTH1v 7R9DE+2E4mrL+4s+enzZFWC ———–end_max5_patcher———– </code></pre>

Cosmic Revolver

For this project, I decided to create an audio visualizer. In the window, there is a background of particles rendered in a 3-dimensional space. These particles rotate around and are sensitive to a specified bass frequency, isolated with a bandpass filter. The particles shake and grow “brighter” the larger the amplitude at that bass frequency. Along with the background particles, there is a rotating object in the middle of the screen. Its colors can be customized as desired, and it rotates and changes size based on the amplitude of a given track.
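
A rough Python sketch of the bass-tracking step (an FFT-bin mask stands in for the patch’s bandpass filter; the center frequency and bandwidth here are illustrative):

```python
import numpy as np

def bass_level(block, sr=44100, center=80.0, width=40.0):
    """Estimate bass energy in one audio block: keep only spectrum
    bins near `center` and return their RMS, which can then drive
    particle brightness or size."""
    spec = np.fft.rfft(block * np.hanning(len(block)))
    freqs = np.fft.rfftfreq(len(block), 1 / sr)
    band = np.abs(freqs - center) <= width / 2
    return np.sqrt(np.mean(np.abs(spec[band]) ** 2)) / len(block)
```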

visualizer

Within the presentation mode, the main object in the middle can be customized with a minimum and maximum size, although the volume of the track itself, which can also be adjusted, also influences the size. There are additional options to loop a track and to change how sensitive the background particles are to the specified bass frequency.

code

Unfortunately, I was unable to figure out how to render the jit.window properly, as it would freeze my machine. Therefore, I decided to record the screen, which resulted in poor video quality.

Drum Machine Project 1 Steven Krenn

Hi there,

For my self-guided Project 1, I made a drum machine for Max for Live. It has a synthesized kick drum, snare drum, toms 1 through 3, open hi-hat, and closed hi-hat, as well as some master distortion effects.

Here is the plugin UI in Ableton Live:

screen-shot-2016-11-06-at-5-46-22-pm

The kick drum has several envelope shapers to achieve the 808 sound: from top left to bottom right, the ADSR of the pitched kick sound with its pitch envelope right underneath, then the same ADSR for the noise kick, also with a pitch envelope. The snare has just an ADSR. The toms each have their own pitch, as well as attack and decay parameters. The closed hat and the open hat share the same synthesis engine, but the closed hat has a fast decay to 0, while the open hat has a long decay. All of the instruments have their own independent volume sliders, as well as a master out slider.
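
The pitched half of the kick can be sketched as a sine with an exponential pitch envelope plus an amplitude decay; all constants below are illustrative, not the plugin’s settings:

```python
import numpy as np

def kick_808(sr=44100, dur=0.5, f_start=150.0, f_end=50.0, pitch_decay=0.05):
    """808-style pitched kick: a sine whose frequency sweeps
    exponentially from f_start down to f_end, with an exponential
    amplitude envelope on top."""
    t = np.arange(int(sr * dur)) / sr
    freq = f_end + (f_start - f_end) * np.exp(-t / pitch_decay)
    phase = 2 * np.pi * np.cumsum(freq) / sr   # integrate freq -> phase
    amp = np.exp(-t / 0.2)
    return amp * np.sin(phase)
```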

So what does it sound like?!

I made one drum beat with a Clean setting, an Overdrive setting, and a Bit Crushed setting.

 

Here is what the patch looks like in Max:

screen-shot-2016-11-06-at-6-11-21-pm-2

The Max for Live plugin also works as a normal Max patch if you plug in a MIDI controller. It expects these notes:
C-(MIDI NOTE: 36) – Kick

D-(MIDI NOTE: 38) – Snare

E-(MIDI NOTE: 40) – Tom 3

F-(MIDI NOTE: 41) – Tom 2

G-(MIDI NOTE: 43) – Tom 1

A-(MIDI NOTE: 45) – Closed Hat

B-(MIDI NOTE: 47) – Open Hat
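
In code, the mapping above is just a note-number lookup; a minimal Python sketch:

```python
# MIDI note number -> drum voice, matching the list above.
DRUM_MAP = {36: "kick", 38: "snare", 40: "tom3", 41: "tom2",
            43: "tom1", 45: "closed_hat", 47: "open_hat"}

def route(note):
    """Return the drum voice for a note, or None for unmapped notes."""
    return DRUM_MAP.get(note)
```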

Try it out on your machine and make a beat with it! I learned a lot about routing signals while doing the project, so if you want to make your own Max for Live plugin, feel free to check out my code to see how to grab certain notes from Live.

Have a good one,

Steven Krenn

And most importantly….The code!!

Project1_Mingyuan Yu

You can imagine this project acting as cell-phone noise cancellation. It has two inputs that simulate the two microphones in a cell phone: one mic picks up voice plus environmental noise, and the other picks up pure environmental noise. The app combines amplitude cancellation with phase cancellation. It also provides a gate so that the pure environmental-noise signal won’t be heard by the user. The following video demonstrates that the app can separate noise from the wanted signal pretty well in musical, speech, street-noise, and pure-white-noise environments, as long as certain conditions are met. If those conditions aren’t met (e.g., the signals aren’t playing simultaneously, or the phases don’t match), the app won’t perform very good noise cancellation.
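
The core of the two-mic idea is a subtraction plus a gate. A simplified NumPy sketch, assuming both mics capture identical noise (which real phones can’t quite guarantee) and an illustrative gate threshold:

```python
import numpy as np

def cancel_noise(voice_plus_noise, noise_ref, gate_db=-40.0):
    """Subtract the noise reference from the voice mic (phase/amplitude
    cancellation), then gate the residual so the pure-noise path
    stays silent."""
    residual = voice_plus_noise - noise_ref
    rms = np.sqrt(np.mean(residual ** 2))
    threshold = 10 ** (gate_db / 20)
    return residual if rms > threshold else np.zeros_like(residual)
```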

 

Spectrachoir

A very basic spectral convolution between a recording of a choir and an industrial drone produces an unexpected effect. It could turn out to be useful for sound design in an underwater environment.
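
Spectral convolution is multiplication of the two sounds’ spectra. A minimal NumPy sketch of the technique (not the actual patch):

```python
import numpy as np

def spectral_convolve(a, b):
    """Fast convolution via the FFT: multiply the spectra of two
    sounds so each one filters the other, which is how the choir
    takes on the drone's color. Output is peak-normalized."""
    n = len(a) + len(b) - 1
    A, B = np.fft.rfft(a, n), np.fft.rfft(b, n)
    y = np.fft.irfft(A * B, n)
    return y / max(1e-12, np.abs(y).max())
```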

The original recording is a composition I wrote for the Pittsburgh choir Yinzer Singers.

Original choir recording:

Post processing: