The primary issue with the earlier version was that it lacked the true anonymity required for the concept to work. There needed to be a larger disconnect between the people exposing themselves and those absorbing the work. What better way to accomplish that than the internet!
For project 2 I built a web app. https://gatsby-effect.herokuapp.com/
The app replaces the concept of the booth in my earlier iteration of this project. Users go to a webpage, record a secret, and are then redirected to a page that plays a recording generated with the Max patch I built earlier, recreating the experience. Here is the repo for the patch from before.
https://github.com/dtpowers/Secrets
The one issue with the current system is that it requires me to manually create new performances from the data collected via the app. I tried to set up my backend to automatically generate new performances daily, but I had issues interfacing with Max in any sort of hosted environment. If I had more time and a personal server I would automate that process so the system is completely self-sustaining.
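Since I couldn't script Max from a hosted environment, the automation would boil down to a scheduled job that mixes the day's recorded secrets into one performance file. Here is a minimal sketch of that mixdown step in Python, with numpy standing in for the Max patch; `mix_performance` and the random-offset placement are my own assumptions for illustration, not the app's actual backend:

```python
import numpy as np

def mix_performance(secrets, length):
    """Mix a list of mono float arrays into one buffer of `length`
    samples, placing each secret at a random offset (seeded here so
    the example is reproducible)."""
    rng = np.random.default_rng(0)
    out = np.zeros(length)
    for s in secrets:
        s = s[:length]  # clip anything longer than the performance
        start = rng.integers(0, max(1, length - len(s) + 1))
        out[start:start + len(s)] += s
    # normalize so overlapping secrets don't clip
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```

A daily cron job could call something like this over the newest uploads and write the result where the playback page expects it.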
To use the web app, click the microphone once, say something, click it again when finished, and you will be redirected to a performance of the system. The repo for the app is https://github.com/dtpowers/GatsbyEffect
This patch is designed to be set up as an art installation in a space like the media lab. The computer running the patch, a microphone, and a camera would be positioned in an enclosed space like a photo booth near the performance space, and spectators would be directed to first enter the booth and then the performance space. In the booth they would be asked to tell something personal to the microphone and press a button when done, then enter the space to observe the performance.
In the room, whatever the person said is convolved and played from one section of the room, then convolved again and moved to a different section, and so on four times. After that, their secret finds a resting place in one of the four corners of the room, where it loops and slowly decays over time. On a screen in the room, a video feed of people telling new secrets plays, blended with feedback of video from previous tellers so that they all merge into a kind of anonymous approximation of a person. After a few people have used the system, the overlapping sounds and convolution make it impossible to tell what anyone said or who said it, creating a soundscape built entirely out of personal moments.
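The bounce-and-rest behavior described above can be sketched outside of Max. Here is a hypothetical numpy version; the function name, hop count, and decay factor are illustrative choices of mine, not the patch's actual parameters:

```python
import numpy as np

def bounce_and_rest(secret, ir, hops=4, loops=8, decay=0.7):
    """Repeatedly convolve `secret` with the room IR (one pass per
    section of the room it visits), then loop the result with an
    exponential fade, like the resting phase in a corner."""
    sig = secret
    for _ in range(hops):
        sig = np.convolve(sig, ir)
        sig /= max(np.max(np.abs(sig)), 1e-12)  # keep levels sane
    # resting phase: the secret loops, quieter on every pass
    rest = np.concatenate([sig * decay**i for i in range(loops)])
    return sig, rest
```

Each convolution pass smears the voice further, which is what eventually makes the words unrecognizable.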
A few notes: to make this installation more flexible, the IR signal used is variable and must be set at program startup via a simple drag box. Additionally, there is one button to turn on the microphone and camera, and a second button to indicate that one person is done speaking and the next has begun, so that their voices get bounced around independently.
The current version of this relies on 8 channel output and the ability to send signal from the booth to the room. If given more time I would love to find a way to transmit this data over the internet from one max instance to another allowing this installation to be set up with less equipment. Similarly, to make it more accessible I would look into using 2 channel output, but 3d sound techniques to create the illusion of directional immersive sound.
A small sample of what this would sound like can be heard here, but it really only makes sense in the surround-sound live space, and this is a two-channel recording.
The patch is hosted at https://github.com/dtpowers/Secrets
Or you can copy and paste from below.
I actually created a presentation mode for this patch, so I encourage you to check that out if you are running the patch yourself. It's not pretty, but it simplifies the system down to just the important parts.
To use this patch, simply load and play a file in the sfplay object, then connect a MIDI device and play notes. As you play, the file loaded into sfplay will be output at the frequencies being played on the MIDI controller.
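Under the hood, a channel vocoder splits both signals into frequency bands and imposes the modulator's per-band envelope on the carrier. Here is a toy numpy sketch of that idea; the band count, averaging window, and FFT-mask filtering are simplifications of mine, not how the Max patch is actually built:

```python
import numpy as np

def channel_vocoder(modulator, carrier, n_bands=8, win=256):
    """Toy channel vocoder: isolate matching FFT bands of both
    signals, measure the modulator's envelope per band (rectify +
    moving average), and impose it on the carrier band."""
    n = min(len(modulator), len(carrier))
    M = np.fft.rfft(modulator[:n])
    C = np.fft.rfft(carrier[:n])
    edges = np.linspace(0, len(M), n_bands + 1, dtype=int)
    kernel = np.ones(win) / win  # moving-average smoother
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = np.zeros_like(M)
        mask[lo:hi] = 1
        mod_band = np.fft.irfft(M * mask, n)
        car_band = np.fft.irfft(C * mask, n)
        env = np.convolve(np.abs(mod_band), kernel, mode="same")
        out += car_band * env
    return out
```

With a voice as the modulator and a synth as the carrier, the synth ends up "speaking" the words, which is the effect in the demo below.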
To demonstrate how this works in practice, I played my favorite jazz standard, Misty, while running an a cappella of the Death Grips song “Death Heated” through the vocoder.
EXPLICIT CONTENT WARNING
It didn’t occur to me until after I was finished that this content might be too offensive to post, so I apologize in advance. I thought it was funny at the time, and it didn’t really occur to me that I was turning this in for a grade.
If I were given more time I would add an ADSR envelope to the vocoder. Currently there is a lot of cutting out, because as soon as I lift my hand from one chord to play the next, the velocity drops to zero and the voices die. Adding a sustain on note release would make the audio less choppy and make the vocoder more playable in multi-voice situations.
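For reference, the envelope I have in mind would look something like this hypothetical sketch: the note holds at the sustain level while a key is down, then ramps to zero over a release tail instead of cutting out instantly. The segment proportions are illustrative:

```python
import numpy as np

def adsr(n_on, n_release, attack=0.1, decay=0.1, sustain=0.7):
    """ADSR envelope sketch. While the note is held (n_on samples):
    ramp up (attack), fall to the sustain level (decay), then hold.
    On release: a linear fade over n_release samples rather than an
    immediate drop to zero."""
    a = int(n_on * attack)
    d = int(n_on * decay)
    held = np.concatenate([
        np.linspace(0, 1, a, endpoint=False),       # attack
        np.linspace(1, sustain, d, endpoint=False), # decay
        np.full(n_on - a - d, sustain),             # sustain
    ])
    release = np.linspace(sustain, 0, n_release)
    return np.concatenate([held, release])
```

Multiplying each voice by its own envelope like this would let chords ring through hand position changes instead of chopping off.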
MAIN PATCH
VOCODDER PATCH
SYNTH VOICE PATCH
SOURCES CITED
I had a lot of issues with the polyphony aspect, and I got a lot of help from https://cycling74.com/2011/05/05/polyphonic-synthesizer-video-tutorial/
For my traditional signals I wanted to find more acoustically interesting spaces around campus. First I chose the stairwell in the CUC; I have always loved the echoes in that space. I also felt the new locker rooms would be an interesting space, though they turned out to be less reverberant than I expected.
For my third recording I thought ambiance might create a cool effect. I tried recording in Starbucks, but the background music kind of ruined what I was going for, so I just sat at the black chairs in the CUC and recorded the space.
For my final recording I chose a flushing toilet; I thought the gurgling of the water, with its tons of little peaks and valleys, could create a cool effect.
Below is a playlist of all the recordings produced. First the IR is played, and then the original signal convolved with that IR. For time's sake, only the first verse was run through the system.
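The convolution step itself amounts to multiplying the spectra of the dry signal and the measured IR. A minimal FFT-convolution sketch (my own illustration of the technique, not the actual patch):

```python
import numpy as np

def fft_convolve(dry, ir):
    """Convolve a dry recording with a measured impulse response
    via the FFT, which is far faster than direct convolution for
    signals this long, then normalize the wet result."""
    n = len(dry) + len(ir) - 1  # full convolution length
    wet = np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)
    return wet / max(np.max(np.abs(wet)), 1e-12)
```

The result sounds as if the dry performance had happened in the recorded space, which is why the stairwell and toilet IRs give such different characters.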
To demonstrate how this patch could be used, I improvised a quick little vocal piece.
For this assignment I chose the Google Deep Dream neural network as my system. I took a selfie and fed it through the system 90 times; the above video shows a compressed time lapse, accompanied by a version of “Changes” where the pickup into the chorus was fed through a digital reverb and delay system in FL Studio, because I thought it was kind of funny. Also attached are the original image and the final product.
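The process is just a feedback loop: each Deep Dream output becomes the next input, and every intermediate frame goes into the time lapse. Sketched abstractly, with `process` as a stand-in for the actual network call:

```python
def iterate(process, image, steps=90):
    """Feedback loop: feed the output of one pass back in as the
    next input, `steps` times, keeping every frame for the time
    lapse. `process` is a placeholder for the Deep Dream pass."""
    frames = [image]
    for _ in range(steps):
        image = process(image)
        frames.append(image)
    return frames
```

Because artifacts compound on every pass, the later frames drift further and further from the original selfie.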