7.1 Surround Sound Piece

For my piece I wanted to try two things:

First, I wanted to mix in 7.1 and pay attention to how I could use the space. In the original 7.1 mix, the beginning is straightforward, with no panning between channels until after the Frankie Laine sample. After this I had a steady synth with an automated filter, keeping the space from becoming stagnant; this kept the ground-layer synth textural and interesting, even as a background. While the filter automation was moving, I also automated the sends to each channel to keep the synth moving throughout the room. Sometimes it moved slowly and smoothly, sometimes quickly, and sometimes I even glitched it out because I thought it sounded cool. Rinse, repeat, and //2 for the bass and sub-bass.
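To give a sense of what that send automation is doing, here is a rough Python sketch of per-channel send levels as a sound sweeps around the eight full-range speakers of a 7.1 ring, using a cosine falloff normalized for constant power. All the numbers are hypothetical; the real mix was automated by hand in the DAW, not generated.

    import numpy as np

    N_CHANNELS = 8
    # Speaker positions spaced evenly around a ring (radians).
    angles = np.linspace(0, 2 * np.pi, N_CHANNELS, endpoint=False)

    def send_levels(source_angle):
        """Per-speaker gain for a source at source_angle (radians)."""
        # Cosine falloff with angular distance, floored at zero so only
        # the speakers nearest the source get signal.
        g = np.maximum(np.cos(angles - source_angle), 0.0)
        # Normalize so total power stays constant as the source moves.
        return g / np.sqrt(np.sum(g ** 2))

    # A slow clockwise sweep in four steps.
    for step in range(4):
        print(np.round(send_levels(step * np.pi / 2), 2))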

Secondly, I was interested in creating a piece that had an uncomfortable, uneasy feel to it. I began by playing with rhythm, specifically variations you might not notice up front but that might make you uneasy subconsciously. I attempted this by pushing a few heavily weighted moments back or forward by just an eighth note or so, and by having the drums at the beginning seem almost completely derailed from the piano until they meet again. I also wanted to use the drums themselves to unsettle listeners. I thought a relentlessly predictable drum pattern might be cool, but soon found out that it was just really annoying... maybe mission accomplished? I still tried to use consistent drums to create an uncomfortable environment, just with slightly less (but still a ton of) predictability.

Here is a version mixed down to stereo, enjoy!

Battery-powered Raspberry Pi Zero effects system

I wanted to research a Raspberry Pi-powered effects system for a guitar or synthesizer, with the goal of creating cheap guitar effects pedals or an entire amp simulator. I originally built the system using regular Pure Data on my Raspberry Pi 3 and assumed it would be AC-powered. Once I got thinking about it, though, I realized the guitar DSP I built might be able to run on a Raspberry Pi Zero and be battery-powered. The main differences between the Pi 3 and the Pi Zero are price and processing power: the Pi 3 has a quad-core processor and costs $40, while the Pi Zero has a single-core processor and costs $5.

Once I brought the Pure Data code over to the Pi Zero, I had some initial issues. The first was latency, which led me to learn about running Pure Data on a headless system. Inside raspi-config you can tell the Pi not to load the GUI and just boot to a terminal, which saved a fair bit of CPU for the Pi Zero. I also ran Pure Data itself headless with the -nogui flag. To get my external USB audio card running and Pure Data launching on startup, I wrote a bash script and called it from /etc/profile. The script runs “pd -nogui /home/pi/Desktop/Guitar_Pi.pd &”, which launches Pure Data with no GUI and loads the patch from the desktop; the “&” lets the OS keep Pure Data running in the background. With this script in place, you can plug the Pi Zero into a USB battery and it will just start processing low-latency audio without any user interaction or SSH.
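For anyone who would rather not write a bash script, here is a minimal Python equivalent of that launcher, purely as an illustration. It assumes pd is on the PATH and uses the same patch path as the script above; the real system uses the bash script called from /etc/profile.

    import subprocess

    # Path to the patch, taken from the bash script above.
    PATCH = "/home/pi/Desktop/Guitar_Pi.pd"

    # -nogui launches Pure Data headless; Popen returns immediately
    # (like the trailing "&"), so the rest of startup can continue.
    subprocess.Popen(["pd", "-nogui", PATCH])
    print("Pure Data is running in the background.")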

In musical terms, I built an initial gain stage, a wah-wah effect, a fuzz/distortion effect, and a reverb based on rev3~, with a loadbang that auto-starts the DSP in Pure Data. I used mostly the FLOSS Manuals Pure Data book and the internet to help me build the effects ( https://booki.flossmanuals.net/_booki/pure-data/pure-data.pdf ). In the future I would like to build a small MIDI controller from a Teensy to provide knobs and buttons for changing the effects' parameters. The effects used here are all time-domain signal processing effects; I would be interested in doing some FFT processing on the audio and seeing if I can still keep the latency relatively low. I think it would be really interesting to build a master effects “backpack” for small battery-powered synthesizers like the Pocket Operator, or possibly a cheap open-source synthesizer using only the Pi Zero and a custom MIDI controller.
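To give a flavor of what the time-domain stages do, here is a hedged numpy sketch of a gain stage feeding a crude fuzz. This is not the actual Pure Data DSP, just the same idea in plain code with made-up parameter values.

    import numpy as np

    def gain(x, db):
        """Input gain stage: scale the signal by a decibel amount."""
        return x * 10 ** (db / 20)

    def fuzz(x, drive=20.0):
        """Crude fuzz/distortion: boost hard, then clip back to [-1, 1].
        The clipping adds the harmonics that give fuzz its buzzy sound."""
        return np.clip(x * drive, -1.0, 1.0)

    # Example: run one second of a 440 Hz test tone through the chain.
    sr = 44100
    t = np.arange(sr) / sr
    guitar = 0.3 * np.sin(2 * np.pi * 440 * t)
    out = fuzz(gain(guitar, 6.0))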

Here is a picture of the system while being hooked up to a Pocket Operator synthesizer:

Here is the Pure Data text code:

Phase Vocoder

This is the phase vocoder I built for my presentation. It slows things down pretty well. It uses an STFT (Short-Time Fourier Transform): it grabs a variable number of samples at a time, computes the STFT, and manipulates the phase matrix to change the perceived pitch and playback rate of the original audio. The coolest part is the Paul Stretch algorithm, which randomizes the phase going back into the inverse STFT, so you don't hear the LFO-like artifact you get when slowing things down with an STFT algorithm that accumulates phase. It sounds weird when you don't slow it down, but that is supposed to happen. I included the Max patch in this post; please let me know if you have any questions or want to mess with it at all.
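For anyone curious, here is a minimal offline Python sketch of the Paul Stretch idea. The real patch is in Max and runs in real time; this version assumes a stretch factor of at least 1 and uses made-up window sizes.

    import numpy as np

    def paulstretch(x, stretch=8.0, win_size=4096):
        win = np.hanning(win_size)       # analysis/synthesis window
        hop_out = win_size // 2          # output hop: 50% overlap
        hop_in = hop_out / stretch       # smaller input hop = slower playback
        out = np.zeros(int(len(x) * stretch) + win_size)
        pos, i = 0.0, 0
        while int(pos) + win_size <= len(x):
            frame = x[int(pos):int(pos) + win_size] * win
            mag = np.abs(np.fft.rfft(frame))
            # The Paul Stretch trick: throw away the original phases and
            # use random ones, which avoids the periodic "LFO" artifact
            # that phase-accumulating vocoders produce at big stretches.
            phase = np.random.uniform(0.0, 2.0 * np.pi, len(mag))
            resynth = np.fft.irfft(mag * np.exp(1j * phase)) * win
            out[i * hop_out : i * hop_out + win_size] += resynth
            pos += hop_in
            i += 1
        return out / (np.max(np.abs(out)) + 1e-12)

    # Example: stretch one second of a test tone to about eight seconds.
    sr = 44100
    tone = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
    stretched = paulstretch(tone, stretch=8.0)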

I've never seen this done in real time, which is my favorite part of this project. You can create variable slowdown speeds, and you can even freeze the playback, which acts like a permanent sustain.

-Garrett

PaulMax

Daniel Cohen Personal Research Project

Hey everyone! Since I never got to show you my personal research project in class, I would like to show you all now. I decided that I really wanted to make a game where the main mechanic was sound. My first idea was a version of Pong where the paddle was controlled by pitch, because it made the player make really funny noises. That turned out to be too difficult, and there was too much I didn't know, so I turned to something easier that I knew more about. I made a 2D side-scroller in Unity where you collect lightning bolts for points while jumping over gaping holes in the map (the score is in the top-left corner).

You play by screaming into your computer's mic to jump. That is all. The game runs on its own and picks up items as long as you pass through them. I wrote the music for this game as well, which I hope you can hear (be careful not to turn it up too loud, because the mic might pick up the game audio). Below is a link to the game on Drive. There are Mac and PC builds, so choose whichever one corresponds to your computer. https://drive.google.com/open?id=0B0CRVe4BR6hVN0ZZSTZCRlJQcDA
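The game itself is built in Unity with the mic-input scripts linked in the documentation below, but the core mechanic is easy to sketch. Here is a hypothetical Python version of the scream detector, using the third-party sounddevice library and a made-up threshold, just to illustrate the idea; it is not what the game actually runs.

    import numpy as np
    import sounddevice as sd  # assumption: pip install sounddevice

    THRESHOLD = 0.1  # hypothetical RMS level that counts as a "scream"

    def callback(indata, frames, time, status):
        # Root-mean-square amplitude of the current block of mic samples.
        rms = np.sqrt(np.mean(indata[:, 0] ** 2))
        if rms > THRESHOLD:
            print("JUMP!")  # in the game this would trigger the jump physics

    # Listen to the default mic for ten seconds.
    with sd.InputStream(channels=1, samplerate=44100, callback=callback):
        sd.sleep(10000)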

Please enjoy, and know that while this game is still buggy, it is playable. Tell me what you all think!

P.S. The game starts off really quickly, so get ready to scream right at the beginning. Also, there is no restart screen/menu, so if you get stuck on the side of the floor or fall to your death, you must force-quit the app and open it again. I am sorry for that inconvenience. I hope you still enjoy the game though. Thanks!!

The documentation for this project is below.

Documentation:

https://unity3d.com/learn/tutorials/topics/scripting/lets-make-game-infinite-runner

http://answers.unity3d.com/questions/394158/real-time-microphone-line-input-fft-analysis.html

http://wiki.unity3d.com/index.php/Mic_Input

Self-Directed: 2HandedGrainThang

My goal for this project was to use the Leap Motion controller to create a rhythmic composition. I'd tried a few different kinds of drum sequencers with varied and less-than-musical results, and was on to trying to ‘play drums in the air’ when I got the idea of using the grab-strength and pinch-strength measurements from the Leapmotion object to edit parameters.

I mapped the angular position of my hand to the playback position of a chunk of audio I'd loaded into Grainstretch~, and used grab strength to select a point to jump back to periodically. I could then pinch and drag vertically to lengthen or shorten that period.
I threw that on top of a rhythmic delay loop I'd built earlier in the semester and just jammed. The controls are still a little sketchier than I'd like. Future development will include different systems for monitoring pinch- and grab-strength data, and maybe more specific machine learning for gestural control. (Apologies for the glitchiness of the video; I couldn't hear the output, and my computer was running a little too hot.)

https://gist.github.com/anonymous/8bbfc42c4f1f2dc5ec2c6e0205e1258d
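The patch itself is in the gist above; as a plain-code illustration of the mappings (with made-up ranges and thresholds, not values pulled from the patch), the logic looks roughly like this in Python:

    def scale(v, in_lo, in_hi, out_lo, out_hi):
        """Linear range mapping, like Max's [scale] object."""
        t = (v - in_lo) / (in_hi - in_lo)
        return out_lo + t * (out_hi - out_lo)

    # One example frame of (made-up) Leap Motion data.
    hand_roll, grab, pinch, palm_y = 0.8, 0.95, 0.95, 180.0

    # Hand angle sweeps the playback position through the buffer.
    playback_pos = scale(hand_roll, -1.6, 1.6, 0.0, 1.0)
    # A full grab marks the point to jump back to.
    jump_point = playback_pos if grab > 0.9 else None
    # Pinching and dragging vertically sets the jump period.
    if pinch > 0.9:
        jump_period_ms = scale(palm_y, 50.0, 400.0, 100.0, 2000.0)

    print(playback_pos, jump_point, jump_period_ms)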

Go w/ Chocolate!

For our Go sonification, our group (Kayla, Joey, Dan J., and I) chose to incorporate a tasty element into the game. We used Hershey's chocolates and Oreo drops as the black and white Go pieces, and we mounted two DPA microphones on the board to capture the sound of the game. Once a player captured a piece, we sent them over to a microphone to eat the chocolate sensually, and both microphones were processed through Ableton Live.

*Josh lost the video because he’s a dumpster fire, but suffice it to say that it was a good time.

Screaming Go

Roles: Luigi Cannatti – Max Patch Programming and Audio Recording
Arnav Luthra – Documentation and Audio Recording

Max Patch Description: Our project used jit.cv to read a recorded game of Go and determine the number of pieces in each row. Using this data, we created a step sequencer in which the number of pieces determines the intensity of the player's “voice”. We used two different sample banks for the white and black pieces, respectively.
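In pseudocode terms (an illustrative Python sketch with made-up counts, not the actual Max logic), one pass of the sequencer looks like this:

    # The stone count in each row, read by the computer-vision analysis,
    # picks which intensity level of the recorded voice plays on that step.
    N_LEVELS = 11                    # 11 recorded intensity levels (0-10)
    row_counts = [0, 3, 7, 12, 2]    # made-up per-row stone counts

    for step, count in enumerate(row_counts):
        level = min(count, N_LEVELS - 1)  # clamp to the available samples
        print(f"step {step}: play black-piece sample at intensity {level}")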

Sound recordings: Luigi recorded himself grunting, moaning, and screaming at 11 different levels of intensity. I recorded a few clips of my little sister and then filled in the blanks with a few grunting and breathing samples of my own.

Final Impressions: Overall, we were both quite happy with the result. Our original intention was to riff on the stoic nature of Go, and I think we accomplished that well. To improve it further, we could get it working with a live game on an actual Go board.

Patch: https://pastebin.com/nyicKEH8