More Secrets (Project 2)
Mon, 05 Dec 2016
https://courses.ideate.cmu.edu/18-090/f2016/2016/12/05/more-secrets-project-2/

If you recall from last time, I wanted to build a Max patch for an interactive installation based around the idea of feeling vulnerable in a crowd. People who wanted to interact with the exhibit would enter a booth and tell a secret to a microphone. Their secret would then be convolved and bounced around a surround-sound setup. As more and more people entered the booth, the performance space would become chaotic, and it would become impossible to make out anything any one person said.

 

The primary issue with this is that it lacked the true anonymity required for the concept to work. There needed to be a larger disconnect between the people exposing themselves and those absorbing the work. What better way to accomplish that than the internet!

For project 2 I built a web app: https://gatsby-effect.herokuapp.com/

The app replaces the booth from the earlier iteration of this project. Users go to a webpage, record a secret, and are then redirected to a page that plays a recording generated with the Max patch I built earlier, recreating the experience. Here is the repo for the patch from before.

https://github.com/dtpowers/Secrets

 

The one issue with the current system is that it requires me to manually create new performances from the data collected via the app. I tried to set up my backend to automatically generate new performances daily, but I had issues interfacing with Max in any sort of hosted environment. If I had more time or a personal server, I would automate that process so the system is completely self-sustaining.
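For the curious, the shape of that automation would be something like the Python sketch below. The directory layout and the render_performance step are assumptions, and the render step is exactly the part that would have to drive Max:

```python
# Hypothetical daily job (not in the repo): gather the day's uploads and
# render a new performance. render_performance is a stand-in for the
# offline Max render, the part that didn't work in a hosted environment.
from datetime import date
from pathlib import Path

RECORDINGS = Path("recordings")       # assumed upload directory
PERFORMANCES = Path("performances")   # assumed output directory

def render_performance(clips, out_path):
    """Stand-in for the offline Max render step."""
    raise NotImplementedError("needs a machine that can run Max")

def daily_job():
    clips = sorted(RECORDINGS.glob("*.wav"))
    if clips:
        out = PERFORMANCES / (date.today().isoformat() + ".wav")
        render_performance(clips, out)

if __name__ == "__main__":
    daily_job()    # run once a day, e.g. from cron
```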

To use the web app, click the microphone once, say something, click it again when finished, and you will be redirected to a performance of the system. The repo for the app is https://github.com/dtpowers/GatsbyEffect

 

Secrets (project 1)
Mon, 07 Nov 2016
https://courses.ideate.cmu.edu/18-090/f2016/2016/11/07/secrets-project-1/

For this project I wanted to focus on the idea of being alone in a crowd. The idea is that often, when we are vulnerable in public spaces like parties, our deeply personal moments become part of the cacophony around us and become indecipherable.

This patch is designed to be set up as an art installation in a space like the media lab. The computer running the patch, a microphone, and a camera would be positioned in an enclosed space, like a photo booth, near the performance space, and spectators would be directed to enter the booth first and then the performance space. In the booth they would be asked to tell something personal to the microphone and press a button when done, then enter the space to observe the performance.

In the room, whatever the person said would be convolved and played from one section of the room, then convolved again and moved to a different section, and so on, four times in total. After that, their secret would find a resting place in one of the four corners of the room, where it would loop and slowly decay over time. On a screen in the room, a video feed of people saying new secrets would play, but mixed into that feed is video feedback from previous people telling their secrets, so that they all blend into a kind of anonymous composite person. After a few people have used the system, the overlapping sounds and convolution make it impossible to tell what anyone said or who said it, creating a soundscape built entirely out of personal moments.
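The actual patch is linked below; as a rough illustration of the signal flow only (with a placeholder IR and corner routing), the room behavior looks something like this in Python:

```python
# Sketch of the room behaviour: convolve the secret with an IR, hop it
# between the four outputs, then loop and decay in its final corner.
import numpy as np
from scipy.signal import fftconvolve

def bounce_and_settle(secret, ir, n_loops=8, decay=0.7):
    """secret, ir: mono float arrays. Returns a (samples, 4) array:
    four convolved passes, one per corner, then a decaying loop."""
    out = []
    sig = secret
    for corner in range(4):                 # four hops around the room
        sig = fftconvolve(sig, ir)
        sig /= np.max(np.abs(sig)) + 1e-12  # keep levels sane
        frame = np.zeros((len(sig), 4))
        frame[:, corner] = sig
        out.append(frame)
    for i in range(n_loops):                # settle and decay in corner 4
        frame = np.zeros((len(sig), 4))
        frame[:, 3] = sig * decay ** (i + 1)
        out.append(frame)
    return np.concatenate(out)
```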

A few notes: to make this installation more flexible, the IR used is variable and must be set at program startup; there is a simple drag box for this. Additionally, there is one button to turn on the microphone and camera, and a second button to indicate that one person is done speaking and the next has begun, so that their voices get bounced around independently.

The current version relies on 8-channel output and the ability to send signal from the booth to the room. Given more time, I would love to find a way to transmit this data over the internet from one Max instance to another, allowing the installation to be set up with less equipment. Similarly, to make it more accessible, I would look into using 2-channel output with 3D sound techniques to create the illusion of directional, immersive sound.
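As a starting point for that stereo version, simple equal-power panning gives a basic sense of direction; a real 3D illusion would add interaural delays and filtering on top. A minimal sketch:

```python
# Equal-power pan: place a mono signal between two speakers.
import numpy as np

def pan(mono, position):
    """position in [0, 1]: 0 is hard left, 1 is hard right."""
    theta = position * np.pi / 2
    return np.stack([mono * np.cos(theta),    # left gain
                     mono * np.sin(theta)],   # right gain
                    axis=1)
```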

A small sample of what this would sound like can be heard here, but it really only makes sense in the live surround-sound space, and this is a 2-channel recording.

The patch is hosted at https://github.com/dtpowers/Secrets

Or you can copy and paste it from below.

I actually created a presentation mode for this patch, so I encourage you to check that out if you are running it yourself; it's not pretty, but it simplifies the system to just the important parts.

Polyphonic vocoder with live MIDI
Mon, 17 Oct 2016
https://courses.ideate.cmu.edu/18-090/f2016/2016/10/17/polyphonic-vocoder-with-live-midi/

For this project I used the FFT to create a simple vocoder driven by an input vocal file and simple saw waves. To allow the user to play chords and such in real time, I also added a poly MIDI system that takes in multiple notes, routes them to voices, and kills them when the velocity reaches zero (note release).
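Conceptually, the FFT step imposes the vocal's magnitude spectrum on the saw-wave carrier frame by frame. Here is that idea as a Python sketch (not the Max patch itself; frame size and normalization are placeholders):

```python
# Sketch of the FFT vocoder: voice magnitudes, carrier fine structure.
import numpy as np
from scipy.signal import sawtooth, stft, istft

def vocode(voice, freqs, sr=44100, nperseg=1024):
    """voice: mono float array; freqs: saw frequencies (the 'chord')."""
    t = np.arange(len(voice)) / sr
    carrier = sum(sawtooth(2 * np.pi * f * t) for f in freqs)
    _, _, V = stft(voice, sr, nperseg=nperseg)
    _, _, C = stft(carrier, sr, nperseg=nperseg)
    # keep the carrier's phase, impose the voice's magnitude envelope
    _, y = istft(C * np.abs(V) / (np.abs(C) + 1e-12), sr, nperseg=nperseg)
    return y / (np.max(np.abs(y)) + 1e-12)
```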

 

To use this patch, simply load and play a file in the sfplay object, then connect a MIDI device and play notes. As you play, the file loaded into sfplay will be output at the frequencies being played on the MIDI controller.
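The note routing boils down to roughly the following (again a Python sketch rather than the Max patch; the voice count is arbitrary):

```python
# Poly MIDI routing: each note-on grabs a free voice; a velocity-0
# note (note release) frees its voice again.
class VoiceAllocator:
    def __init__(self, n_voices=8):
        self.free = list(range(n_voices))
        self.active = {}            # MIDI pitch -> voice index

    def note(self, pitch, velocity):
        if velocity == 0:           # note release: kill the voice
            voice = self.active.pop(pitch, None)
            if voice is not None:
                self.free.append(voice)
            return ("off", voice)
        if not self.free:           # all voices busy: drop the note
            return ("drop", None)
        voice = self.free.pop(0)
        self.active[pitch] = voice
        return ("on", voice)
```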

To demonstrate how this works in practice, I played my favorite jazz standard, "Misty", while running an a cappella of the Death Grips song "Death Heated" through the vocoder.

EXPLICIT CONTENT WARNING

It didn't occur to me until after I was finished that this content might be too offensive to post, so I apologize in advance. I thought it was funny at the time, and it didn't really occur to me that I was turning this in for a grade.

If I were given more time, I would add an ADSR envelope to the vocoder. Currently there is a lot of cutting out, because as soon as I lift my hand from one chord to play the next, the velocity drops to zero and the voices die. Adding a sustain on note release would make the audio less choppy and make the vocoder more playable in multi-voice situations.
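Concretely, the fix would replace the hard cut at note release with an envelope that ramps down instead, roughly like this (times are placeholders):

```python
# ADSR gain curve: attack/decay/sustain while the note is held, then a
# release tail at note-off instead of an instant cut.
import numpy as np

def adsr(n_held, sr=44100, a=0.01, d=0.05, s=0.8, r=0.3):
    """n_held: samples the note is held. Returns per-sample gain."""
    atk = np.linspace(0.0, 1.0, int(a * sr))
    dec = np.linspace(1.0, s, int(d * sr))
    hold = np.full(max(n_held - len(atk) - len(dec), 0), s)
    rel = np.linspace(s, 0.0, int(r * sr))   # the missing release stage
    return np.concatenate([atk, dec, hold, rel])
```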

MAIN PATCH

VOCODER PATCH

SYNTH VOICE PATCH

SOURCES CITED

I had a lot of issues with the polyphony aspect, and I got a lot of help from https://cycling74.com/2011/05/05/polyphonic-synthesizer-video-tutorial/

 

Convolution and Tom's Diner
Sun, 02 Oct 2016
https://courses.ideate.cmu.edu/18-090/f2016/2016/10/02/convolution-and-toms-diner/

For this assignment I chose the classic "Tom's Diner" as my original signal, given its long history of being used to test compression and to tune sound systems. This is not the exact recording, as I have no way of hosting the original MP3 without it being removed for copyright reasons.

 

For my traditional IR signals I wanted to find acoustically interesting spaces around campus. First I chose the stairwell in the CUC; I have always loved the echoes in that space. I also felt the new locker rooms would be an interesting space, though they turned out to be less reverberant than I expected.

For my third recording I thought ambiance might create a cool effect. I tried recording in Starbucks, but the background music ruined what I was going for, so I just sat at the black chairs in the CUC and recorded the space.

For my final recording I chose a flushing toilet; I thought the gurgling of the water, with its tons of little peaks and valleys, could create a cool effect.

Below is a playlist of all the recordings produced. For each, the IR is played first, followed by the original signal convolved with that IR. For time's sake, only the first verse was run through the system.
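The convolution step itself is simple; in Python it would look roughly like this (file names are placeholders, and the sketch assumes mono files at a shared sample rate; this is an illustration, not how the playlist was actually rendered):

```python
# Convolve the dry verse with a recorded impulse response.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

sr, ir = wavfile.read("cuc_stairwell_ir.wav")    # recorded IR
_, dry = wavfile.read("toms_diner_verse1.wav")   # first verse only

wet = fftconvolve(dry.astype(float), ir.astype(float))
wet = wet / np.max(np.abs(wet)) * 0.9            # normalize, avoid clipping
wavfile.write("toms_diner_stairwell.wav", sr, wet.astype(np.float32))
```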

 

Loop delay system Max patch
Mon, 19 Sep 2016
https://courses.ideate.cmu.edu/18-090/f2016/2016/09/18/loop-delay-system-max-patch/

Demo + explanation

 

To demonstrate how this patch could be used, I improvised a quick little vocal piece.
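In rough terms, a loop delay feeds its output back into the delay line at reduced gain, so every repeat comes back quieter. A minimal Python sketch of that idea (parameters are placeholders; the patch itself is in the demo above):

```python
# Bare-bones feedback delay using a circular buffer.
import numpy as np

def loop_delay(x, sr=44100, delay_s=0.5, feedback=0.6, mix=0.5):
    d = int(sr * delay_s)
    buf = np.zeros(d)                # holds the signal from d samples ago
    out = np.zeros(len(x))
    for i in range(len(x)):
        delayed = buf[i % d]
        out[i] = (1 - mix) * x[i] + mix * delayed
        buf[i % d] = x[i] + feedback * delayed   # feed the output back in
    return out
```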

 

I am sitting in a Deep Dream
Tue, 06 Sep 2016
https://courses.ideate.cmu.edu/18-090/f2016/2016/09/06/i-am-sitting-in-a-deep-dream/

 

For this assignment I chose the Google Deep Dream neural network as my system. I took a selfie and fed it through the system 90 times. The video above shows a compressed time lapse, accompanied by a version of "Changes" in which the pickup into the chorus was fed through a digital reverb and delay system in FL Studio, because I thought it was kind of funny. Also attached are the original image and the final product.
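The process is just the "I am sitting in a room" loop applied to an image: each pass's output becomes the next pass's input. Sketched in Python, with a stub deep_dream standing in for Google's actual pipeline:

```python
# Iterative feedback: output of pass i is the input to pass i + 1.
from PIL import Image

def deep_dream(img):
    """Stand-in for the actual Deep Dream network."""
    raise NotImplementedError

img = Image.open("selfie.jpg")       # the original selfie
for i in range(90):                  # 90 passes through the system
    img = deep_dream(img)
    img.save(f"pass_{i:02d}.jpg")    # frames for the time lapse
```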

