Big Glass Music
https://courses.ideate.cmu.edu/57-344/s2017/big-glass-music/ (Mon, 17 Apr 2017)

I noticed a lot of Alex's music has a big sound to it, so I tried to capture that in this demo of a Glass Animals remix I made during class: https://clyp.it/bqyc244c

Battery powered Raspberry Pi Zero effects system
https://courses.ideate.cmu.edu/57-344/s2017/battery-powered-raspberry-pi-zero-effects-system/ (Wed, 05 Apr 2017)

I wanted to research a Raspberry Pi-powered effects system for a guitar or synthesizer: cheap guitar effects pedals, or an entire amp simulator. I originally built the system using regular Pure Data on my Raspberry Pi 3, assuming it would be AC-powered. Once I got thinking about it, though, I suspected the guitar DSP I had built might run on a Raspberry Pi Zero and be battery-powered. The main differences between the Pi 3 and the Pi Zero are price and processing power: the Pi 3 has a quad-core processor and costs $40, while the Pi Zero has a single-core processor and costs $5.

Once I brought the Pure Data code over to the Pi Zero, I hit some initial issues. The first was latency, which led me to running Pure Data on a headless system. In raspi-config you can set the Pi to boot to a terminal instead of loading the GUI, which saves a fair bit of CPU on the Pi Zero. I also ran Pure Data itself headless, using the -nogui flag. To get my external USB audio card running and Pure Data launching on startup, I wrote a bash script and called it from /etc/profile. The script runs "pd -nogui /home/pi/Desktop/Guitar_Pi.pd &", which launches Pure Data without its GUI and loads the patch from the desktop; the trailing "&" lets the OS keep Pure Data running in the background. With this script in place, you can plug the Pi Zero into a USB battery and it starts processing low-latency audio without any user interaction or SSH.
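A minimal sketch of what that startup script might look like. Only the pd command itself is quoted from the post; the -alsa and -audiobuf flags are illustrative assumptions, and the exact audio-card setup will vary:

```bash
#!/bin/bash
# Called from /etc/profile so it runs automatically at boot/login.
# -nogui    : run Pure Data headless (no GUI resources)
# -alsa     : use the ALSA backend, typical for a USB card on the Pi (assumed)
# -audiobuf : small buffer in ms for low latency; raise it on dropouts (assumed)
# The trailing & keeps Pure Data running in the background.
pd -nogui -alsa -audiobuf 10 /home/pi/Desktop/Guitar_Pi.pd &
```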

In musical terms, I built an initial gain stage, a wah-wah effect, a fuzz/distortion effect, and reverb from rev3~. A loadbang then auto-starts the DSP in Pure Data. I used mostly the FLOSS manuals and the internet to help me build the effects (https://booki.flossmanuals.net/_booki/pure-data/pure-data.pdf). In the future I would like to build a small MIDI controller from a Teensy that would provide knobs and buttons for changing effect parameters. The effects used were all time-domain signal processing effects; I would be interested in doing some FFT processing on audio and seeing whether the latency can stay relatively low. I think it would be really interesting to build a master effects "backpack" for small battery-powered synthesizers like the Pocket Operator, or possibly a cheap open-source synthesizer using only the Pi Zero and a custom MIDI controller.
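To give a feel for the kind of time-domain processing in the chain, here is a rough Python sketch of the gain and fuzz stages. This is my own approximation of how such effects typically work, not a transcription of the Pd patch:

```python
import numpy as np

def gain(x, db):
    """Gain stage: scale the signal by a decibel amount."""
    return x * (10.0 ** (db / 20.0))

def fuzz(x, drive_db=20.0, threshold=0.3):
    """Fuzz/distortion: boost the signal, then hard-clip it.
    The clipping adds the harmonics that give fuzz its character."""
    boosted = gain(x, drive_db)
    return np.clip(boosted, -threshold, threshold) / threshold

# Example: one second of a 110 Hz tone standing in for a guitar string
sr = 44100
t = np.arange(sr) / sr
dry = 0.5 * np.sin(2 * np.pi * 110 * t)
wet = fuzz(dry)
```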

Here is a picture of the system hooked up to a Pocket Operator synthesizer:

Here is the Pure Data text code:

Sound Squad 5 Group Project
https://courses.ideate.cmu.edu/57-344/s2017/sound-squad-5-group-project/ (Mon, 13 Feb 2017)

The ambisonic Squad Five project focuses on letting a user control the sounds they experience. We accomplished this by linking TouchOSC to a Max patch running sounds each group member recorded. An iPhone running TouchOSC was passed around the class, allowing other students to control the location of the pre-recorded sounds. Each group member created sounds using a technique that interests them. We decided on a fundamental structure for all of us to follow: tempo, key, and meter.

Abby's section, violin loops:
For the violin section of the controller, I created a set of 7 loops. Using Logic and a Studio Projects microphone, I set the tempo at 120 BPM and played a repeating G octave rhythm. I then recorded melodic lines, percussive lines, and different chord progressions over an 8-bar period. I originally recorded 14 separate tracks, but we were only able to fit 7 on a page. I also recorded odd violin noises that I meant to edit together into a melodic line, but that proved tricky because of all the variables in our other loops. If we were to continue this project, I would think about the length of the loops, because playing them too many times gets a bit repetitive. While the diversity of available loops helps, there could be parameters so that after a certain number of loops are triggered you can't add more, or so that triggering certain loops locks out others. Designing loops around basic shared parameters makes working on our individual sections easier, but limits how well the piece works as a whole unit.

Ty’s section, percussion:
For the percussion page of the TouchOSC controller, I created a Max patch that reads two Photoshop files and uses the images to sequence sounds. The first file is an RGB image; each color value determines the volume level of a pre-recorded percussive sound. The second file is a black-and-white bitmap; black pixels trigger the sounds to play. Originally I used a drum synth rather than pre-recorded files, but found controlling the synths with RGB difficult because of my limited synth knowledge. As a next step, I would use my TouchOSC page to let the user build their own sequencer. I would also change the sounds I recorded: it would be interesting to find four sounds that fit well with the other two musicians' pages and let the user control the sequence and ambisonic location of those sounds. (A rough sketch of the image-reading idea follows the patch label below.)

image2beatSequencer:
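Here is a Python sketch of the image-to-sequencer idea described above. It is my own approximation of the Max patch's logic, using the Pillow library, with hypothetical file names:

```python
import numpy as np
from PIL import Image  # Pillow: pip install pillow

# Hypothetical file names standing in for the two Photoshop files.
rgb = np.asarray(Image.open("levels.png").convert("RGB"), dtype=float) / 255.0
bw = np.asarray(Image.open("triggers.png").convert("1"))  # bool: True = white

# Treat each pixel column as one step of the sequencer and each R/G/B
# channel as the volume of one pre-recorded percussive sound. A black
# pixel in the bitmap triggers playback on that step.
for step in range(rgb.shape[1]):
    for voice in range(rgb.shape[0]):
        if not bw[voice, step]:               # black pixel -> trigger
            r, g, b = rgb[voice, step]
            print(f"step {step}, voice {voice}: "
                  f"volumes r={r:.2f} g={g:.2f} b={b:.2f}")
```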

Steven's section, guitar and synth:

I made seven recordings of my own, following Abby's lead. We wanted these recordings to be harmonic with each other, so we all stayed in G minor, and to stay in time we all recorded at 120 BPM. For my recordings I played my electric guitar through my AC30 guitar amp. The electric guitar recordings were chords: the first was just Gm repeated, and the second alternated Gm and Dm. I also recorded a Gm arpeggio on the electric guitar, and one recording on acoustic guitar. The other three recordings were done on the Korg Monologue synthesizer: a Gm arpeggio, an analog drum patch I made, and a bass Gm patch.
On the Max side, I used the HOA library and did all of my panning in polar form, which made everything straightforward and easy to visualize. From the udpreceive object I routed the magnitude and phase information into the HOA map message. The magnitude was simple: a 0-to-1 float mapping directly into the HOA map message. The phase, however, had to be converted from a 0-to-1 float to that value times 2*pi, giving the HOA map an output of 0 to ~6.28 (a sketch of the math is below). I had to repeat these steps for all 21 recordings, so I created an encapsulation called "pie_n_pak" that does the magnitude and phase calculation and packs all of the proper outputs together. All of the recordings are triggered from one toggle box. The udpreceive object also gets the state of each recording's trigger from TouchOSC: the TouchOSC toggle sends a 1 when active and a 0 when not. A fader sub-patch fades each audio track in or out depending on the toggle. It is important to note that all of the audio files are always playing in the background; they are just faded in and out depending on the current state sent from TouchOSC.
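For reference, here is the conversion that "pie_n_pak" performs, restated in Python. This is just the math described above, not the Max encapsulation itself:

```python
import math

def pie_n_pak(magnitude, phase01):
    """Convert TouchOSC's 0-1 floats to the polar form the HOA map expects:
    magnitude passes through unchanged; phase is scaled to radians."""
    phase = phase01 * 2.0 * math.pi    # 0.0-1.0 -> 0.0-~6.28
    return magnitude, phase

# Example: fader at 0.75, encoder at half rotation
mag, phase = pie_n_pak(0.75, 0.5)
print(mag, phase)  # 0.75 3.141592653589793
```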
In TouchOSC, I made three pages, one for each of our sets of recordings; the gray bar at the top switches between them. The toggle buttons at the top fade each recording in or out. The long fader is the magnitude of the panning in the HOA panner, and the encoder underneath sends the phase information to the HOA panner.

Here is a link to the code:

Here is a link to all the supporting audio files:
https://drive.google.com/open?id=0B6W0i2iSS2nVZ1RobkRQUzA0RlU

Golan Levin Reflection
https://courses.ideate.cmu.edu/57-344/s2017/golan-levin-reflection/ (Mon, 13 Feb 2017)

Overall, I thought Golan was an incredible artist and musician. His ideas about linking sound and visual mediums together were fascinating, and he showed some very insightful videos that I had never seen before. One in particular stood out to me: the video of oscilloscope images and their corresponding sound. The first video he showed was the oscilloscope mushroom video.

He then showed the album made for the oscilloscope by Jerobeam Fenderson.
https://www.kickstarter.com/projects/1230242223/oscilloscope-music
The idea of turning these images into sounds is really interesting to me. Hiding messages in audio isn't new, however; Aphex Twin famously embedded a face in the spectrogram of his song "Equation".

However, making an actual moving image out of these sounds is really fascinating. Being able to tell a visual story from what sounds like unintelligible noise is incredible: the sounds alone seem completely unrelated, but together they create this visual masterpiece.
In the future, I could imagine a piece of music technology software that embeds its DRM or serial number in the spectrum at non-audible frequencies, or that shares some kind of codec metadata in the inaudible part of the spectrum. Imagine taking a recording of an audio clip and having it carry hidden text or data instructing another piece of technology how to do something, like an over-the-air instruction system. Or imagine encoding these unintelligible noises into moving images, making a sort of video synthesizer out of the crazy noises and transmitting video-like content over sound. Even newer technology from Teenage Engineering, the Pocket Operator PO-32, can transmit audio data over sound.
https://www.teenageengineering.com/products/po
I think that data-transfer system is very similar to the Aphex Twin face in the spectrum, just with encoded data instead of an image. Golan really got me thinking about the possibilities of sounds that aren't just sounds but images as well: even though they may sound unintelligible, they are perfectly intelligible in their own domain.
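As a toy illustration of the face-in-the-spectrogram idea (entirely my own sketch, not anything shown in the talk): each row of a small black-and-white image can drive the amplitude of one sine frequency over time, so the image reappears when you view the resulting audio in a spectrogram.

```python
import numpy as np

sr = 44100
cols, dur = 64, 2.0                    # spread 64 image columns over 2 seconds
t = np.arange(int(sr * dur)) / sr

# Hypothetical 8x64 "image": a simple diagonal line stands in for a picture.
img = np.zeros((8, cols))
img[np.arange(8), (np.arange(8) * cols) // 8] = 1.0

# One sine oscillator per image row; its amplitude over time follows the row.
audio = np.zeros_like(t)
for row, freq in enumerate(np.linspace(2000, 6000, img.shape[0])):
    envelope = np.repeat(img[row], len(t) // cols + 1)[: len(t)]
    audio += envelope * np.sin(2 * np.pi * freq * t)

audio /= np.max(np.abs(audio))   # normalize; view with any spectrogram tool
```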
