A flanger receives values via OSC to control the delay output of each individual string.
Gravity Sound connects strings to planets in our solar system, where each string's tension equals the gravitational pull between the two objects it connects. Knowing the length and tension of a string, a frequency can then be found.
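To make the physics concrete, here is a small sketch (not the project's actual code) of how a frequency could be derived from gravitational pull, assuming Newton's law of gravitation for the tension and the ideal-string formula for the fundamental; the linear density is an arbitrary placeholder:

```python
# Sketch: frequency of a "gravity string" connecting two bodies.
# Assumes tension T = G*m1*m2 / r^2 and fundamental f = sqrt(T/mu) / (2*L),
# with the string length L equal to the separation r.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def string_frequency(m1, m2, r, mu=1.0):
    """m1, m2: masses (kg); r: separation / string length (m);
    mu: linear density of the imagined string (kg/m), an arbitrary choice."""
    tension = G * m1 * m2 / r ** 2            # gravitational pull as tension
    return math.sqrt(tension / mu) / (2 * r)  # fundamental frequency

# Example: a string from the Earth to the Moon
print(string_frequency(5.972e24, 7.348e22, 3.844e8))
```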
The strings in Gravity Sound now have delay capabilities. Select the delay button on the left menu and a parallel solar system will appear. If a string is selected (highlighted red), a user can then create a string in the delay solar system, which will modulate the selected string. Only one delay string can modulate a selected string at a time. The resulting values sent to Max/MSP over OSC are:
delayTime = abs(y coordinate of string midpoint)
delayRate = string length x 3 (units are AU, the distance between the Earth and the Sun)
delayDepth = 4 (changing this in real time didn't sound great; 4 sounded okay)
delayFeedback = 1 - abs(sin(angle between camera and string midpoint) x 2) (intended to stay close to 1)
delayWetness = 100 * abs(cos(angle between camera and string midpoint) / 2)
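For reference, here is a minimal sketch (not the project's actual code) of how these values could be computed and sent, using python-osc on the sending side; the OSC addresses, port, and input values are assumptions:

```python
# Sketch of the sender side; addresses, port, and inputs are assumptions,
# with Max/MSP presumably listening on a matching udpreceive object.
import math
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)  # hypothetical Max udpreceive port

def send_delay_params(midpoint_y, length_au, camera_angle):
    """camera_angle: angle between the camera and the string midpoint, radians."""
    client.send_message("/delayTime", abs(midpoint_y))
    client.send_message("/delayRate", length_au * 3)
    client.send_message("/delayDepth", 4)  # fixed; real-time changes didn't sound good
    client.send_message("/delayFeedback", 1 - abs(math.sin(camera_angle) * 2))
    client.send_message("/delayWetness", 100 * abs(math.cos(camera_angle) / 2))

send_delay_params(midpoint_y=0.8, length_au=1.2, camera_angle=0.35)
```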
Here’s the (WIP) patch:
Thanks!
In addition to Max, I needed to write an OpenGL shader to manipulate the model. I also decided to handle lighting and color in a shader as well. Here is that shader:
I wanted to make an expressive Guitar Controller using Max and Max 4 Live. I used an old guitar I had lying around, wanting to create something fun and new with it. I used a Bare Conductive Touch Board ( https://www.bareconductive.com/shop/touch-board/ ) for the brains on the guitar, and an application called Touch OSC running on a mobile device.
Here is a picture of the guitar:
I used aluminum foil for the touch sensors, which are connected to the Bare Conductive board. For this demo, the touch sensors control my last project, the drum synthesizer. From the top left, the sensors are: Kick, Snare, Tom 1, Tom 2, Tom 3, Closed Hat, and Open Hat. The two touch sensors near the phono jack on the guitar are mapped to Stop and Record in Ableton Live. There is also a standalone Play button on the top right of the guitar that is not visible in the picture. I plan on using conductive paint for the touch sensors in a future generation of this device.
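The pad layout is essentially a lookup table from touch sensor to action. As a rough illustration (the pad numbers here are assumptions, not the board's actual wiring):

```python
# Sketch: touch pad index -> drum / transport action.
# Pad numbers are hypothetical; the real wiring may differ.
PAD_MAP = {
    0: "kick", 1: "snare", 2: "tom1", 3: "tom2", 4: "tom3",
    5: "closed_hat", 6: "open_hat",   # foil pads on the guitar body
    7: "stop", 8: "record",           # pads near the phono jack
    9: "play",                        # standalone button, top right
}

def on_touch(pad):
    action = PAD_MAP.get(pad)
    if action is not None:
        print(f"trigger: {action}")  # in the real system, a note/message into Max
```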
I also had an incredibly hard time working with a Bluetooth module. The original idea for this project was to be completely wireless (other than the guitar jack, for which wireless systems already exist), with the Bare Conductive board running off of a LiPoly battery. Sadly, I couldn't get ahold of the correct Bluetooth firmware for my HC-06 module's chipset to support HID interaction. Hopefully in a future generation of this device I can make it a completely wireless system with conductive paint. For this project, I wanted to focus on the Max and Arduino plumbing.
On the Touch OSC side, I created a patch that interprets the OSC data to change the parameters on my guitar effect patch running in Max 4 Live. The Touch OSC patch looks like this:
The multi-sliders control the Delay and Feedback lines I used from an existing M4L patch. The first red encoder controls the first gain stage of the guitar effect, and the second red encoder controls the second gain stage; together they make a distortion effect on the guitar. The red slider on the right is the amount of reverb time that the distorted guitar receives. The green encoder controls the delay time used in the effect. Lastly, the purple encoder is the amount of feedback fed into the effect.
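Touch OSC sends each control as a float in 0.0-1.0, so the patch's job is mostly scaling those values into parameter ranges. A minimal sketch of that idea (the addresses, port, and ranges are assumptions, not the actual patch's):

```python
# Sketch of receiving Touch OSC messages and scaling them to effect ranges.
# Addresses, port, and parameter ranges are assumptions.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def scale(value, lo, hi):
    return lo + value * (hi - lo)  # Touch OSC controls send 0.0-1.0 by default

def on_gain1(address, value):   # first red encoder: first gain stage
    print("gain stage 1 ->", scale(value, 0.0, 24.0), "dB")

def on_reverb(address, value):  # red slider: reverb time
    print("reverb time ->", scale(value, 0.1, 10.0), "s")

dispatcher = Dispatcher()
dispatcher.map("/1/rotary1", on_gain1)
dispatcher.map("/1/fader1", on_reverb)
BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()
```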
In Ableton Live the guitar effect has this UI:
The effect parameters can be adjusted here as well, along with levels and a master out.
The drums are pretty much the same as my Project 1. Here is a link to my Project 1: https://courses.ideate.cmu.edu/18-090/f2016/2016/11/06/drum-machine-project-1-steven-krenn/
This is what it looks like in Ableton Live:
Here is the code to the guitar effect:
Here is the drum synthesizer:
Here is the Bare Conductive board’s code:
Also, because this project has a lot of parts to it, I will upload a Zip file to Google Drive that includes all of the files you would need to get it up and running on your machine.
Here is the link to the zip:
https://drive.google.com/drive/folders/0B6W0i2iSS2nVWDA4SW5HS1RCV3c?usp=sharing
For a future iteration of the device, I imagine Bluetooth (wireless), battery power, conductive paint on a 3D-printed overlay, and a gyroscope. I am excited to continue working on this next semester.
Have a good one,
Steven Krenn
This patch works well with images generated in Photoshop to produce interesting patterns. Conceptually, I am interested in creating patterns with tons of human artifacts (especially digital ones), but are they really artificial, since people are a product of nature?
In this project, I utilized the SearchTweet Max Java external to search the Twitter API.
I began with a simple song construction:
While Max is playing the sound file, it uses SearchTweet to find the 10 most recent tweets matching the search term and stores them in a coll object.
Tweets
A number of operations (word count, letter count, number of uppercase and lowercase letters, the use of exclamation points, etc.) are performed on the text received from the tweets. Based on the results of those operations, values are sent to the grainstretch~ object to manipulate the track.
The operation values also affect the voice (using aka.speech) of the synthesis engine that speaks the tweets over the affected music.
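As a rough sketch of those text operations (the exact scalings the patch uses are unknown, so the mapping into grainstretch~ parameters below is a made-up example):

```python
# Sketch: tweet text -> numeric features -> hypothetical synth parameters.
def tweet_features(text):
    return {
        "word_count": len(text.split()),
        "letter_count": sum(c.isalpha() for c in text),
        "uppercase": sum(c.isupper() for c in text),
        "lowercase": sum(c.islower() for c in text),
        "exclamations": text.count("!"),
    }

def to_grainstretch_params(f):
    # Hypothetical scalings; the real patch's ranges are unknown.
    return {
        "stretch": 1.0 + f["word_count"] / 20.0,
        "pitch": 1.0 + (f["uppercase"] - f["lowercase"]) / 100.0,
        "grain_ms": 20 + 10 * f["exclamations"],
    }

print(to_grainstretch_params(tweet_features("WOW!!! This is something else")))
```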
Main Patcher
Audio of the affected sound file. Since the variables change with each refresh of the Twitter content, this is just one example of the types of transformations that occur.
(Max file was missing in initial upload)
The primary issue with this was that it lacked the true anonymity required for the concept to work. There needed to be a larger disconnect between the people exposing themselves and those absorbing the work. What better way to accomplish that than the internet!
For project 2 I built a web app. https://gatsby-effect.herokuapp.com/
The app replaces the concept of the booth in my earlier iteration of this project. Users go to a webpage, record a secret, and are then redirected to a page that plays a recording generated with the Max patch I built earlier, recreating the experience. Here is the repo for the patch from before:
https://github.com/dtpowers/Secrets
The one issue with the current system is that it requires me to manually create new performances from the data collected via the app. I tried to set up my backend to automatically generate new performances daily, but I had issues interfacing with Max in any sort of hosted environment. If I had more time or a personal server, I would automate that process so the system is completely self-sustaining.
To use the web app, click the microphone once, say something, click it again when finished, and you will then be redirected to a performance of the system. The repo for the app is https://github.com/dtpowers/GatsbyEffect
I added an object that transforms the colors of the video by adding "gain". I utilized chromakey to overlay another clip wherever the video was white. The effect I achieved was interesting: since I set the chromakey to white, the overlay wouldn't appear until the background "gained" up to a white color, and the colors in the clip that weren't white were dramatically transformed. The intensity of this effect was linked to a bass sound through a bandpass filter. On top of this, I added a delay system that was also linked to the bandpass filter. The following video is the result of all this.
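The original is a Max/Jitter patch, but the core compositing idea can be sketched in a few lines of NumPy: raise the gain on a frame, then substitute the overlay wherever the gained frame has reached near-white (the threshold and gain values here are assumptions):

```python
# Sketch of gain + white-key compositing (the original is a Jitter patch).
import numpy as np

def white_key_composite(frame, overlay, gain, threshold=230):
    """frame, overlay: uint8 HxWx3 arrays; gain: multiplier driven by bass energy."""
    gained = np.clip(frame.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    mask = np.all(gained >= threshold, axis=2)  # pixels that gained up to near-white
    out = gained.copy()
    out[mask] = overlay[mask]                   # overlay shows through the "white" key
    return out
```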
Monday- Stravinsky & Strauss (in reverse)
Tuesday- Construction ambience
Wednesday- Ocean ambience/bird calls
Thursday- CFA recordings
Friday- A Pulse
Along with the daily changing sounds, I wanted movement to be a large aspect of the piece, so I filtered the sound to each speaker differently. The speakers lower on the stairwell played high frequencies, and as you climbed the stairwell the frequencies slowly became lower and lower. People had the power to change what they were hearing by the way they moved throughout the space.
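One way to realize that mapping is to give each speaker its own bandpass filter, with the band sliding downward as the speaker's position rises. A sketch of that idea (the band edges and filter order are assumptions, not the piece's actual settings):

```python
# Sketch: height-dependent bandpass filtering, one band per speaker.
# Speaker 0 is lowest on the stairwell and gets the highest band.
import numpy as np
from scipy.signal import butter, sosfilt

def band_for_speaker(i, n, f_low=60.0, f_high=8000.0):
    edges = np.geomspace(f_low, f_high, n + 1)   # log-spaced band edges
    return edges[n - 1 - i], edges[n - i]        # low position -> high band

def filter_for_speaker(audio, i, n, sr=44100):
    lo, hi = band_for_speaker(i, n)
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, audio)
```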
The feedback from the piece was subjective and hard to document. For the most part, even though there was a clear view of all the speakers and cables, most people didn't associate the sound with them. They thought the wood shop was being extra noisy, or that the School of Music was doing "something weird". It wasn't until people started moving throughout the space that they understood what was making the noise and how it was changing. A few different things contributed to this.
Whether or not people realized the sound was coming from the speakers, they all reacted to it. I recorded someone walking up the stairs, and when the recording was played, everyone on the stairwell moved to let them pass before realizing no one was there. Understanding that sound has an effect on what we do, where we are, and how we interact with a space is important.