Expressive Guitar Controller Project 2 – Steven Krenn
https://courses.ideate.cmu.edu/18-090/f2016/2016/12/06/expressive-guitar-controller/ – Wed, 07 Dec 2016

Howdy,

I wanted to make an expressive guitar controller using Max and Max for Live. I used an old guitar I had lying around and wanted to create something fun and new with it. For the brains on the guitar I used a Bare Conductive Touch Board ( https://www.bareconductive.com/shop/touch-board/ ), along with an application called TouchOSC running on a mobile device.

Here is a picture of the guitar:

guitar

I used aluminum foil for the touch sensors, which are connected to the Bare Conductive board. For this demo, the touch sensors control my last project, the drum synthesizer. The sensors run from the top left: Kick, Snare, Tom 1, Tom 2, Tom 3, Closed Hat, Open Hat. The two touch sensors near the phono jack on the guitar are mapped to Stop and Record in Ableton Live, and there is also a standalone Play button on the top right of the guitar that is not visible in the picture. I plan on using conductive paint for the touch sensors in a future generation of this device.
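The real note assignment lives in the Touch Board's Arduino firmware, but just to illustrate the mapping in code, here is a tiny Python sketch (using the mido package) of an electrode-to-note table built from the drum notes in my Project 1. The electrode numbering is my assumption:

import mido

# Electrode-to-drum mapping as described above. The electrode numbers are an
# assumption; the real assignment is set in the Touch Board firmware.
ELECTRODE_TO_NOTE = {
    0: 36,  # Kick
    1: 38,  # Snare
    2: 43,  # Tom 1
    3: 41,  # Tom 2
    4: 40,  # Tom 3
    5: 45,  # Closed Hat
    6: 47,  # Open Hat
}

def touch_to_message(electrode, velocity=100):
    # Turn a touched electrode index into a note-on for the drum synthesizer.
    return mido.Message('note_on', note=ELECTRODE_TO_NOTE[electrode], velocity=velocity)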

I also had an incredibly hard time working with a Bluetooth module. The original idea was for this project to be completely wireless (other than the guitar jack, for which wireless systems already exist), with the Bare Conductive board running off of a LiPoly battery. Sadly, I couldn't get hold of the correct Bluetooth firmware for my HC-06 module to support HID interaction. Hopefully, in a future generation of this device, I can make it a completely wireless system with conductive paint. For this project, I wanted to focus on the Max and Arduino plumbing.

On the TouchOSC side, I created a patch that maps the incoming OSC data to parameter changes on my guitar effect patch running in Max for Live. The TouchOSC layout looks like this:

touch_osc

The multi-sliders control the Delay and Feedback lines I used from an existing M4L patch. The first red encoder controls the first gain stage of the guitar effect, and the second red encoder controls the second gain stage; together they create a distortion effect on the guitar. The red slider on the right sets the reverb time applied to the distorted guitar. The green encoder controls the delay time used by the effect. Lastly, the purple encoder sets the amount of feedback fed back into the effect.
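In my setup the OSC messages are handled directly in Max, but as a rough illustration of the idea, here is a small Python sketch (using the python-osc package) that listens for TouchOSC messages and maps them to parameter values. The OSC addresses and port are assumptions; yours will depend on how the TouchOSC layout is configured.

from pythonosc import dispatcher, osc_server

# Hypothetical parameter store standing in for the Max for Live guitar effect.
params = {'gain1': 0.0, 'gain2': 0.0, 'reverb_time': 0.0, 'delay_time': 0.0, 'feedback': 0.0}

def set_param(name):
    def handler(address, value):
        params[name] = value  # TouchOSC faders/encoders send floats in 0..1
        print(address, '->', name, '=', value)
    return handler

disp = dispatcher.Dispatcher()
disp.map('/1/encoder1', set_param('gain1'))        # first gain stage
disp.map('/1/encoder2', set_param('gain2'))        # second gain stage
disp.map('/1/fader1',   set_param('reverb_time'))  # reverb time slider
disp.map('/1/encoder3', set_param('delay_time'))   # delay time
disp.map('/1/encoder4', set_param('feedback'))     # feedback amount

# TouchOSC commonly sends on port 8000; adjust to match your layout settings.
server = osc_server.BlockingOSCUDPServer(('0.0.0.0', 8000), disp)
server.serve_forever()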

 

In Ableton Live the guitar effect has this UI:

guitar_m4l

The effect parameters can be adjusted here as well, along with the individual levels and a master out.

The drums are pretty much the same as my Project 1. Here is a link to my Project 1: https://courses.ideate.cmu.edu/18-090/f2016/2016/11/06/drum-machine-project-1-steven-krenn/

This is what it looks like in Ableton Live:

drums_m4l

Here is the code to the guitar effect:

Here is the drum synthesizer:

Here is the Bare Conductive board’s code:

 

Also, because this project has a lot of parts to it, I will upload a ZIP file to Google Drive that includes all of the files you would need to get it up and running on your machine.

Here is the link to the zip:

https://drive.google.com/drive/folders/0B6W0i2iSS2nVWDA4SW5HS1RCV3c?usp=sharing

 

For a future iteration of the device I could imagine Bluetooth (wireless), battery power, conductive paint on a 3D-printed overlay, and a gyroscope. I am excited to continue working on this next semester.

Have a good one,

Steven Krenn

 

Drum Machine Project 1 – Steven Krenn
https://courses.ideate.cmu.edu/18-090/f2016/2016/11/06/drum-machine-project-1-steven-krenn/ – Sun, 06 Nov 2016

Hi there,

For my self-guided Project 1, I made a drum machine for Max for Live. It has a synthesized Kick drum, Snare drum, Toms 1 through 3, Open Hi-Hat, and Closed Hi-Hat, as well as some master distortion effects.

Here is the plugin UI in Ableton Live:

screen-shot-2016-11-06-at-5-46-22-pm

The Kick drum has several envelope shapers to achieve the 808 sound: from top left to bottom right, the ADSR of the pitched Kick sound with its pitch envelope right underneath it, then the same ADSR for the noise part of the kick, also with its own pitch envelope. The Snare has just an ADSR filter. The Toms each have their own pitch, as well as attack and decay parameters. The Closed Hat and Open Hat share the same synthesis engine, except that the Closed Hat has a fast decay to 0 while the Open Hat has a long decay. All of the instruments have their own independent volume sliders, as well as a master out slider.
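The actual synthesis runs in Max, but as a rough sketch of the 808-style kick idea (a sine with a fast-falling pitch envelope, an amplitude envelope, and a little noise for the click), here is a small NumPy version; all of the constants are just illustrative:

import numpy as np
from scipy.io import wavfile

fs = 44100
dur = 0.6
t = np.arange(int(fs * dur)) / fs

# Pitch envelope: the sine drops quickly from ~150 Hz down toward ~45 Hz.
pitch = 45 + 105 * np.exp(-t * 30)
phase = 2 * np.pi * np.cumsum(pitch) / fs
body = np.sin(phase)

# Amplitude envelopes: a long decay on the body, a very short one on the noise click.
body *= np.exp(-t * 6)
click = np.random.randn(len(t)) * np.exp(-t * 200) * 0.3

kick = body + click
kick /= np.max(np.abs(kick))
wavfile.write('kick_sketch.wav', fs, (kick * 0.9).astype(np.float32))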

So what does it sound like?!

I made one drum beat with a Clean setting, an Overdrive setting, and a Bit Crushed setting.

 

Here is what the patch looks like in Max:

screen-shot-2016-11-06-at-6-11-21-pm-2

The Max for Live plugin also works as a normal Max patch if you plug in a MIDI controller. The expected notes are:

C-(MIDI NOTE: 36) – Kick

D-(MIDI NOTE: 38) – Snare

E-(MIDI NOTE: 40) – Tom 3

F-(MIDI NOTE: 41) – Tom 2

G-(MIDI NOTE: 43) – Tom 1

A-(MIDI NOTE: 45) – Closed Hat

B-(MIDI NOTE: 47) – Open Hat

Try it out on your machine and make a beat with it! I learned a lot about routing signals while doing this project, so if you want to make your own Max for Live plugin, feel free to check out my code to see how to grab specific notes from Live.
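If you would rather trigger it from code than from a controller, here is a minimal Python sketch (using the mido package) that sends the notes listed above. The output port name is machine-specific, so you will need to pick the one that Live (or the Max patch) is listening to:

import time
import mido

DRUMS = {'kick': 36, 'snare': 38, 'tom3': 40, 'tom2': 41,
         'tom1': 43, 'closed_hat': 45, 'open_hat': 47}

print(mido.get_output_names())           # list available MIDI ports
out = mido.open_output()                 # or open_output('your port name here')

# A simple one-bar beat at 120 BPM: kick on 1 and 3, snare on 2 and 4, hats on eighths.
pattern = [('kick',), ('closed_hat',), ('snare', 'closed_hat'), ('closed_hat',),
           ('kick', 'closed_hat'), ('closed_hat',), ('snare', 'closed_hat'), ('open_hat',)]

for step in pattern:
    for name in step:
        out.send(mido.Message('note_on', note=DRUMS[name], velocity=100))
    time.sleep(0.25)                     # eighth notes at 120 BPM
    for name in step:
        out.send(mido.Message('note_off', note=DRUMS[name]))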

Have a good one,

Steven Krenn

And most importantly….The code!!

Filter Freak
https://courses.ideate.cmu.edu/18-090/f2016/2016/10/16/filter-freak/ – Sun, 16 Oct 2016

Hi there,

I made my own version of a well-known plug-in called FilterFreak (by Soundtoys). What I tried to do was make a bandpass filter as well as a resonant filter and overlay them on top of each other, and then oscillate the resonant filter so it sweeps through the band-passed signal. (A rough offline sketch of the idea follows the parameter descriptions below.)

The Low knob is the low end of the bandpass.

The High knob is the high end of the bandpass.

The Freq knob is the starting frequency of the sweep.

The Degrade knob degrades the signal: 0 is more degradation, 1 is none.

Reso Mix is how much of the resonant sweep you want in the mix. (I usually keep it around 95.)

Speed is the speed of the oscillation.
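The real effect lives in the Max patch linked below, but here is a minimal offline Python sketch of the bandpass-plus-swept-resonance idea (the Degrade stage is left out). The file name and all of the constants are placeholders standing in for the knob positions:

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, lfilter, iirpeak

# Placeholder input file; any mono or stereo WAV will do.
fs, x = wavfile.read('input.wav')
x = x.astype(np.float64)
if x.ndim > 1:
    x = x.mean(axis=1)
x /= np.max(np.abs(x)) + 1e-12

low, high = 200.0, 4000.0                     # the Low and High knobs
b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype='band')
bandpassed = lfilter(b, a, x)

start = 400.0                                 # Freq: starting frequency of the sweep
speed = 0.5                                   # Speed: sweep rate in Hz
reso_mix = 0.95                               # Reso Mix (about 95, as above)
block = 1024
out = np.zeros_like(bandpassed)

# Sweep a resonant peak through the band-passed signal, block by block.
# (Recomputing the filter per block without carrying state is crude and can
# click slightly, but it keeps the sketch short.)
for n in range(0, len(bandpassed), block):
    lfo = 0.5 * (1 + np.sin(2 * np.pi * speed * n / fs))
    freq = start * (high / start) ** lfo      # exponential sweep from Freq up to High
    bp, ap = iirpeak(freq / (fs / 2), Q=8.0)
    seg = bandpassed[n:n + block]
    out[n:n + block] = (1 - reso_mix) * seg + reso_mix * lfilter(bp, ap, seg)

out /= np.max(np.abs(out)) + 1e-12
wavfile.write('filter_freak_sketch.wav', fs, (out * 0.9).astype(np.float32))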

 

Here is what the filter freak sounds like:

 

Here is a link to all the code:

https://drive.google.com/open?id=0B6W0i2iSS2nVNDVodTA0VndNeU0

Here is the main patch:

Here is the FFT_Resonator~ patch:

Have a good one,

 

Steven Krenn

Convolve it!
https://courses.ideate.cmu.edu/18-090/f2016/2016/10/02/convolve-it/ – Mon, 03 Oct 2016

Hi there,

I created a few impulse responses using Apple's Impulse Response Utility software, which is slightly different from the balloon trick used in class. The Impulse Response Utility sends a 20 Hz to 20 kHz sweep through the room, which is then recorded and deconvolved into an impulse response.
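The convolution itself was done in Max, but as a rough offline illustration, here is a minimal Python sketch that convolves a dry recording with one of these impulse responses using FFT convolution. The file names are placeholders, and both files are assumed to be at the same sample rate:

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Placeholder file names; substitute your own dry recording and impulse response.
fs, dry = wavfile.read('dry_recording.wav')
fs_ir, ir = wavfile.read('great_hall_ir.wav')
assert fs == fs_ir, 'resample one of the files so the sample rates match'

dry = dry.astype(np.float64)
ir = ir.astype(np.float64)
if dry.ndim > 1:
    dry = dry.mean(axis=1)   # fold to mono for simplicity
if ir.ndim > 1:
    ir = ir.mean(axis=1)

wet = fftconvolve(dry, ir)   # the reverb tail makes the output longer than the input
wet /= np.max(np.abs(wet)) + 1e-12
wavfile.write('convolved.wav', fs, wet.astype(np.float32))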

slide

The first IR is of the Great Hall in the CFA:

great-hall

The second IR is from my VOX AC30 amplifier's spring reverb. I also included another file where the excitation was a clap; the high transients really make the spring reverb sound interesting.

vox

For the experimental pieces, I chose a vinyl sound from the internet, which I thought would add that vinyl character to a modern recording. For my last convolution, I used a lion's roar, which created very interesting clouds of tones from the Cherokee talking clip. Here are all of the recordings:

 

Have a good one!

 

-Steven Krenn

Super Duduk Effect!
https://courses.ideate.cmu.edu/18-090/f2016/2016/09/18/super-duduk-effect/ – Mon, 19 Sep 2016

For Assignment 2, I made a delay/pitch shifter. The usage is pretty simple: a single slider controls all of the main parameters. The other faders set the level of the dry signal, the level of the delayed signal, and the level of the pitched signal. The main slider trades off how many bits the feedback is downsampled to against how much pitch shift is added to the feedback. It sounds especially good on reed instruments, so I used the Duduk.aif included in the demo content for Max. Check out the patch, it's pretty fun to play with!
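The real thing is a Max patch, but here is a rough offline Python sketch of one half of the idea: a feedback delay whose feedback path is bit-reduced. The pitch-shifting half is omitted, the file name is a placeholder (scipy cannot read the .aif directly, so a WAV export is assumed), and the constants stand in for the slider position:

import numpy as np
from scipy.io import wavfile

fs, x = wavfile.read('duduk.wav')        # placeholder: a WAV export of Duduk.aif
x = x.astype(np.float64)
if x.ndim > 1:
    x = x.mean(axis=1)
x /= np.max(np.abs(x)) + 1e-12

delay_seconds = 0.35
bits = 6                                 # fewer bits = more degradation in the feedback
feedback = 0.6
d = int(delay_seconds * fs)

# Let the tail ring out past the end of the input.
y = np.zeros(len(x) + 5 * d)
y[:len(x)] = x

levels = 2 ** bits
for n in range(d, len(y)):
    fb = y[n - d]
    fb = np.round(fb * levels) / levels  # crude bit reduction of the feedback signal
    y[n] += feedback * fb                # (pure-Python loop: slow, but fine for a sketch)

y /= np.max(np.abs(y)) + 1e-12
wavfile.write('duduk_delay_sketch.wav', fs, (y * 0.9).astype(np.float32))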

Here is an audio example:

Also, here is the code:

Enjoy!

-Steven Krenn

Autotune pedal feedback loop
https://courses.ideate.cmu.edu/18-090/f2016/2016/09/06/autotune-pedal-feedback-loop/ – Wed, 07 Sep 2016

Hi there,

After listening to Lucier's I Am Sitting in a Room, I wanted to do the same thing using an auto-tune pedal I have. I wanted to recursively play back and record a song through the pedal a few times to hear the auto-tune degradation, so I wrote a quick little Python script to control my audio interface and play and record a few times. It uses PortAudio (via the sounddevice module), SciPy, and NumPy. The song I chose was "Sam and Dave – Hold On, I'm Coming" at around 45 seconds.

Here is a link to the youtube video:

https://www.youtube.com/watch?v=AREppyQf5uw

The auto-tune pedal I used was a TC-Helicon VoiceTone Synth. They call the effect "Hard Tune" rather than auto-tune (I believe Antares holds the trademark on the name Auto-Tune). The "Uni" setting is the classic auto-tune sound, turned up to 10. None of the other filters are active.

Here is a picture of the pedal:

Voicetone Synth

The Python script then controls my audio interface, which plays the audio back and records it. The interface I am using is a FireStudio Project, but PortAudio works with any I/O device.

Here is a picture of the signal flow: (PS, don’t mind the dust, I haven’t been around all summer)

Voice Tone signal flow

The audio file I uploaded contains the first 7 iterations through the pedal, and the last clip is after 50 iterations. Also, I used a -1 dB limiter, but I'm not sure how SoundCloud normalizes the audio.

Here is the audio file:

https://soundcloud.com/user-333984151/feedback

So, as you can hear, the signal degrades really quickly. It gets crunchy and distorted much faster than I thought it would. I was expecting more of an auto-tune sound that slowly degrades into sine waves; I think the difference is because I used a full polyphonic mix and not just a vocal track. However, I think it would eventually collapse to sine waves given enough iterations.

Here is a picture of what the audio looked like in Pro Tools:

Pro Tools screen shot

You can see it gets exponentially louder with every iteration until it’s being limited.

Also, if anyone wants the source code, here it is:


import sounddevice as sd
import numpy as np
from scipy.io.wavfile import write

fs = 44100
sd.default.samplerate = fs
sd.default.channels = 2
sd.default.device = 2  # FireStudio

duration = 15  # seconds
myarray = sd.rec(duration * fs)  # first recording
sd.wait()

myrecording = sd.playrec(myarray)
sd.wait()

for i in range(7):
    myrecording = sd.playrec(myrecording)  # play and record at the same time
    sd.wait()  # block until done
    write('write' + str(i) + '.wav', fs, myrecording)  # write an audio file for every iteration

write('output.wav', fs, myrecording)


 

 

Thanks for reading!

 

-Steven Krenn
