Category Archives: Assignments

gravity flanger

Flanger is receiving values via OSC to control delay output of each individual string

Gravity Sound connects planets in our solar system with strings, where each string’s tension is equal to the gravitational pull between the two bodies it connects. Knowing the length and tension of a string, a frequency can then be found.
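
As a rough illustration of that relationship (not code from the project), here is a minimal Python sketch that treats the distance between the two bodies as the string length and assumes an arbitrary linear density:

import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def string_frequency(m1, m2, distance, mu=1.0):
    """Fundamental frequency of an ideal string whose tension equals the
    gravitational pull between two bodies. mu is an assumed linear density;
    the string length is taken to be the distance between the bodies."""
    tension = G * m1 * m2 / distance ** 2              # Newton's law of gravitation
    return math.sqrt(tension / mu) / (2 * distance)    # f = (1 / 2L) * sqrt(T / mu)

# e.g. an Earth-Sun string (masses in kg, distance in metres)
print(string_frequency(5.97e24, 1.99e30, 1.496e11))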

The strings in Gravity Sound now have delay capabilities. Select the delay button in the left menu and a parallel solar system will appear. If a string is selected (it is highlighted red), the user can then create a string in the delay solar system that will modulate the selected string. Only one delay string can modulate a selected string at a time. The resulting values sent to Max/MSP over OSC are:

delayTime = abs(y coordinate of string midpoint)    (absolute value)

delayRate = string length x 3    (units are AU, the distance between the Earth and the Sun)

delayDepth = 4    (changing this in real time didn’t sound so great; 4 sounded okay)

delayFeedback = 1 - abs(sin(angle between camera and string midpoint) x 2)    (wanted it to be close to 1)

delayWetness = 100 * abs(cos(angle between camera and string midpoint) / 2)
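
For reference, here is a minimal Python sketch of how values like these could be computed and sent over OSC. The real project computes them inside the visualization and Max receives them, so the OSC addresses, port, and helper arguments here are assumptions for illustration:

import math
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

def send_delay_params(client, midpoint_y, string_length, camera_angle):
    delay_time = abs(midpoint_y)
    delay_rate = string_length * 3            # string length measured in AU
    delay_depth = 4                           # fixed; real-time changes didn't sound good
    delay_feedback = 1 - abs(math.sin(camera_angle) * 2)
    delay_wetness = 100 * abs(math.cos(camera_angle) / 2)

    client.send_message("/delay/time", delay_time)
    client.send_message("/delay/rate", delay_rate)
    client.send_message("/delay/depth", delay_depth)
    client.send_message("/delay/feedback", delay_feedback)
    client.send_message("/delay/wetness", delay_wetness)

client = SimpleUDPClient("127.0.0.1", 8000)   # Max would listen with a matching udpreceive
send_delay_params(client, midpoint_y=0.4, string_length=1.2, camera_angle=0.8)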

 

One-handed DAW

For this project I wanted to build a digital audio workstation that one could control entirely with one hand, with as little keyboard/mouse interaction as possible. The Leap Motion controller seemed like the easiest way to do this, but I was stuck for a bit trying to figure out how to control so many different parameters with just one hand. The gate object was the key here, in conjunction with the machine learning patch we looked at in class. Upon recognizing a certain hand position, the gate object would cycle through its outlets and activate different subpatches that recognized different sets of hand gestures. For example, the hand gestures in mode 1 would control audio playback: pause, play, forward/reverse, etc. The same hand gestures in mode 2 would control a spectral filter, in mode 3 they might control a MIDI drum sequencer, and so on for various modes. At least that was the idea in theory…
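
As a loose illustration of that mode-switching idea (not the actual Max patch), here is a short Python sketch; the gesture names and handlers are made up:

# One recognized "mode" gesture routes all later gestures to a different
# handler, much like the gate object routing messages to different subpatches.
MODES = ["transport", "spectral_filter", "drum_sequencer"]

class OneHandedDAW:
    def __init__(self):
        self.mode = 0

    def on_gesture(self, gesture):
        if gesture == "mode_switch":          # e.g. a fist, as classified by the ML patch
            self.mode = (self.mode + 1) % len(MODES)
            return f"switched to {MODES[self.mode]}"
        handler = getattr(self, f"handle_{MODES[self.mode]}")
        return handler(gesture)

    def handle_transport(self, gesture):
        return {"swipe_left": "reverse", "swipe_right": "forward",
                "pinch": "play/pause"}.get(gesture, "ignored")

    def handle_spectral_filter(self, gesture):
        return f"filter control: {gesture}"

    def handle_drum_sequencer(self, gesture):
        return f"sequencer control: {gesture}"

daw = OneHandedDAW()
print(daw.on_gesture("pinch"))        # play/pause while in transport mode
print(daw.on_gesture("mode_switch"))  # now the same gestures drive the filter
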
I definitely bit off more than I could chew with this undertaking. There was way too much I wanted to do, and only a couple of modes worked semi-reliably. I intend to smooth out the process and eventually make the DAW as fluid and intuitive as possible. One day a disabled music producer might be able to perform an entire set with one hand and not a single click.

Here’s the (WIP) patch:

Thanks!

3D audio visualization with OpenGL

For this project I made an audio visualizer that manipulates 3D models. In this case, when the signal goes above a certain threshold, the model is stretched in a new random direction. Additionally, using an FFT, the signal produces a color based on the loudest detected amplitude, with red mapped to low frequencies, blue to mids, and green to highs. Here is the Max patch that accomplishes this:
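
The analysis itself happens in the Max patch, but here is a rough Python sketch of the logic described above; the band edges and threshold are assumptions:

import numpy as np

def analyze_block(samples, sample_rate, threshold=0.3):
    # dominant FFT bin picks the color
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / sample_rate)
    peak_freq = freqs[np.argmax(spectrum)]
    if peak_freq < 300:           # rough, assumed band edges
        color = (1.0, 0.0, 0.0)   # red for lows
    elif peak_freq < 3000:
        color = (0.0, 0.0, 1.0)   # blue for mids
    else:
        color = (0.0, 1.0, 0.0)   # green for highs

    # loudness gate triggers a stretch in a random direction
    stretch = None
    if np.sqrt(np.mean(samples ** 2)) > threshold:
        stretch = np.random.uniform(-1, 1, size=3)
    return color, stretch

color, stretch = analyze_block(np.random.randn(1024) * 0.5, 44100)  # white-noise test block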

But in addition to Max, I needed to write an OpenGL shader to manipulate the model. I also decided to handle lighting and color in the shader as well. Here is that shader:

Expressive Guitar Controller, Project 2, Steven Krenn

Howdy,

I wanted to make an expressive guitar controller using Max and Max for Live. I used an old guitar I had lying around and wanted to create something fun and new with it. I used a Bare Conductive Touch Board ( https://www.bareconductive.com/shop/touch-board/ ) as the brains on the guitar, and an application called TouchOSC running on a mobile device.

Here is a picture of the guitar:

I used aluminum foil for the touch sensors, which are connected to the Bare Conductive board. For this demo, the touch sensors are controlling my last project, the drum synthesizer. The sensors go, from the top left: Kick, Snare, Tom 1, Tom 2, Tom 3, Closed Hat, Open Hat. The two touch sensors near the phono jack on the guitar are mapped to stop and record in Ableton Live. There is also a standalone play button on the top right of the guitar that is not visible in the picture. I plan on using conductive paint for the touch sensors in a future generation of this device.
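
As an illustration of the mapping described above (the real mapping lives in the Touch Board's Arduino firmware and the Max patch), here is a hypothetical Python sketch; the pad order matches the description, but the MIDI note numbers are assumptions:

SENSOR_MAP = {
    0: ("Kick", 36),
    1: ("Snare", 38),
    2: ("Tom 1", 41),
    3: ("Tom 2", 43),
    4: ("Tom 3", 45),
    5: ("Closed Hat", 42),
    6: ("Open Hat", 46),
    7: ("Stop", None),      # transport controls rather than notes
    8: ("Record", None),
    9: ("Play", None),
}

def on_touch(sensor_index):
    name, note = SENSOR_MAP.get(sensor_index, ("unknown", None))
    if note is not None:
        return f"trigger {name}: MIDI note {note}"
    return f"transport: {name}"

print(on_touch(1))  # "trigger Snare: MIDI note 38"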

I also had an incredibly hard time working with a Bluetooth module. The original idea for this project was to be completely wireless (other than the guitar jack, for which wireless systems already exist), with the Bare Conductive board running off a LiPo battery. Sadly, I couldn’t get hold of the correct Bluetooth firmware for my HC-06 module’s chipset to support HID interaction. Hopefully in a future generation of this device I can make it a completely wireless system with conductive paint. For this project I wanted to focus on the Max and Arduino plumbing.

On the TouchOSC side, I created a patch that interprets the OSC data to change the parameters of my guitar effect patch running in Max for Live. The TouchOSC layout looks like this:

The multi-sliders control the delay and feedback lines I used from an existing M4L patch. The first red encoder controls the first gain stage of the guitar effect, and the second red encoder controls the second gain stage; together they make a distortion effect on the guitar. The red slider on the right is the amount of reverb time that the distorted guitar receives. The green encoder controls the amount of delay time used in the effect. Lastly, the purple encoder controls the amount of feedback fed into the effect.
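
Conceptually, each control just scales a 0-to-1 value from TouchOSC into a parameter range inside the M4L device. Here is a hypothetical sketch of that scaling; the OSC addresses and ranges are assumptions:

PARAM_RANGES = {
    "/gain1":    (0.0, 10.0),    # first distortion gain stage
    "/gain2":    (0.0, 10.0),    # second distortion gain stage
    "/reverb":   (0.0, 5.0),     # reverb time in seconds
    "/delay":    (0.0, 1000.0),  # delay time in ms
    "/feedback": (0.0, 0.95),    # keep feedback below 1 to avoid runaway
}

def scale_osc(address, value):
    lo, hi = PARAM_RANGES[address]
    return lo + value * (hi - lo)

print(scale_osc("/delay", 0.25))  # 250.0 ms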

 

In Ableton Live the guitar effect has this UI:

The effect parameters can be adjusted here as well, along with levels and a master out.

The drums are pretty much the same as my Project 1. Here is a link to my Project 1: https://courses.ideate.cmu.edu/18-090/f2016/2016/11/06/drum-machine-project-1-steven-krenn/

This is what it looks like in Ableton Live:

Here is the code to the guitar effect:

Here is the drum synthesizer:

Here is the Bare Conductive board’s code:

 

Also, because this project has a lot of parts to it, I will upload a Zip file to Google Drive that includes all of the files you would need to get it up and running on your machine.

Here is the link to the zip:

https://drive.google.com/drive/folders/0B6W0i2iSS2nVWDA4SW5HS1RCV3c?usp=sharing

 

For a future iteration of the device I could imagine Bluetooth (wireless), battery power, conductive paint on a 3D-printed overlay, and a gyroscope. I am excited to continue working on this next semester.

Have a good one,

Steven Krenn

 

Project 2 - Mingyuan Yu

In the movie The Hobbit, there is a scene where the hero faces a dragon. In my opinion, the dragon’s voice did not sound right because it had very little reverb, which does not make sense: the scene happens inside a huge enclosed space in a city, so it should have lots of reverb. I also wanted to add a spatial effect to the voice, so the sound and video could be more interactive. In this project, I extracted the voice track from The Hobbit and split it into two parts, the main voice and everything else. I then used a Leap Motion controller to control the sound image position of the dragon’s voice. Finally, I added reverb to the dragon’s voice.
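
As a rough sketch of the spatialization idea (not the actual Max patch), a normalized Leap Motion hand position can be turned into constant-power pan gains like this; the value range is an assumption:

import math

def pan_gains(hand_x):
    """hand_x: normalized hand x-position, -1.0 = far left, 1.0 = far right."""
    hand_x = max(-1.0, min(1.0, hand_x))
    angle = (hand_x + 1.0) * math.pi / 4.0    # 0..pi/2
    return math.cos(angle), math.sin(angle)   # (left gain, right gain)

left, right = pan_gains(0.5)
print(round(left, 3), round(right, 3))  # hand to the right: quieter left channel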

 

 

 

 

Final Project: Halftone It!

This final project is a continuation of the first project. The Max patch has the same conceptual goal as the first: to create an image renderer that turns an input into patterns of circles. I wanted to have lots of control over the dots, so I put in variables to scale the dots in various directions! Currently, I am confused about why the image renders rotated 90 degrees from the original input; most likely the matrix’s rows and columns are being read as x and y somewhere, which transposes the image.
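
The renderer itself is a Max/Jitter patch, but the underlying idea can be sketched in a few lines of Python; the cell size and file names here are assumptions:

import numpy as np
from PIL import Image, ImageDraw  # pip install pillow

def halftone(path, cell=8, out_path="halftone.png"):
    # darker cells of the input become larger circles
    gray = np.asarray(Image.open(path).convert("L"), dtype=float) / 255.0
    h, w = gray.shape
    canvas = Image.new("L", (w, h), 255)
    draw = ImageDraw.Draw(canvas)
    for row in range(0, h, cell):
        for col in range(0, w, cell):
            darkness = 1.0 - gray[row:row + cell, col:col + cell].mean()
            r = darkness * cell / 2
            if r < 0.5:
                continue
            # PIL coordinates are (x, y) = (col, row); swapping them here
            # would reproduce the 90-degree confusion described above
            cx, cy = col + cell / 2, row + cell / 2
            draw.ellipse([cx - r, cy - r, cx + r, cy + r], fill=0)
    canvas.save(out_path)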

This patch works well with images generated in Photoshop to produce interesting patterns. Conceptually, I am interested in creating patterns full of human artifacts (especially digital ones), but are they really artificial if people are a product of nature?

 

Project 2: Tweet Generating Sound Art

Can I use Max/MSP to integrate data collected from the web to affect the sound of a musical composition?

In this project, I utilized the SearchTweet Max Java external to search the Twitter API.

I began with a simple song construction:

While Max is playing the sound file, it uses SearchTweet to identify the 10 most recent tweets matching the search parameter and stores them in a coll object.

Tweets

A number of operations (word count, letter count, number of uppercase and lowercase letters, the use of exclamation points, etc.) are performed on the text received from the tweets. Based on the results of those operations, values are sent to the grainstretch~ object to manipulate the track.

The operation values also affect the voice (using aka.speech) of the speech synthesis engine reading the tweets over the affected music.
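
Here is a hypothetical Python sketch of that kind of text analysis and scaling; the actual operations are done with Max objects on the coll contents, and the parameter names and ranges are assumptions:

def tweet_features(text):
    return {
        "words": len(text.split()),
        "letters": sum(c.isalpha() for c in text),
        "uppercase": sum(c.isupper() for c in text),
        "lowercase": sum(c.islower() for c in text),
        "exclamations": text.count("!"),
    }

def to_grain_params(features):
    # scale counts into plausible granular-stretch parameters
    return {
        "stretch": 1.0 + features["words"] / 20.0,           # longer tweets stretch more
        "grain_ms": 20 + features["letters"] % 200,           # grain size in milliseconds
        "pitch_shift": (features["uppercase"] - features["lowercase"]) / 100.0,
        "scrub_jitter": min(1.0, features["exclamations"] / 5.0),
    }

print(to_grain_params(tweet_features("This is AMAZING!!!")))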

Main Patcher

Below is the sound of the affected sound file. As the variables change with each refresh of the Twitter content, this is one example of the types of transformations that occur.


(Max file was missing in initial upload)

More Secrets (Project 2)

If you recall from last time, I wanted to build a Max patch to create an interactive installation based around the idea of feeling vulnerable in a crowd. People who wanted to interact with the exhibit would enter a booth where they would tell a secret to a microphone. Their secret would then be convolved and bounced around a surround-sound setup. As more and more people entered the booth, the performance space would become chaotic and it would be impossible to make out anything any one person said.

 

The primary issue with this is that it lacked the true anonymity required for the concept to work. There needed to be a larger disconnect between the people exposing themselves and those absorbing the work. What better way to accomplish that than the internet!

For project 2 I built a web app. https://gatsby-effect.herokuapp.com/

The app replaces the concept of the booth in my earlier iteration of this project. Users go to a webpage, record a secret, and are then redirected to a page that plays a recording generated with the Max patch I built earlier to recreate the experience. Here is the repo for the patch from before:

https://github.com/dtpowers/Secrets

 

The one issue with the current system is that it requires me to manually create new performances from the data collected via the app. I tried to set up my backend to automatically generate new performances daily, but I had issues interfacing with Max in any sort of hosted environment. If I had more time or personal server space, I would automate that process so the system is completely self-sustaining.

To use the web app, click the microphone once, say something, click it again when finished, and you will then be redirected to a performance of the system. The repo for the app is https://github.com/dtpowers/GatsbyEffect

 

Trippy Video Edit

For my second project, I decided to create a system that alters a pre-existing video using an audio input, a concept that deviates from my first project, which created visuals from scratch. My patch had multiple parts. One portion essentially created a system that served as a sort of green-screen effect. The original video I found was a music video with a completely white set, which served well for the purposes of demonstrating my project.

I added an object that transforms the colors of the video by adding “gain”. I used chromakey to overlay another clip wherever the white portions of the video would be. The effect I achieved was interesting: since I set the chromakey to white, the overlay would not appear until the background “gained” up to a white color. The colors in the clip that weren’t white were dramatically transformed. The intensity of this effect was linked to a bass sound through the use of a bandpass filter. On top of this, I added a delay system that was also linked to the bandpass filter. The following video is the result of all this.
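
The patch does this with Max/Jitter objects, but the audio-to-control-signal idea can be sketched in Python like this; the filter bands and the test tone are assumptions:

import numpy as np
from scipy.signal import butter, sosfilt

def bass_envelope(audio, sr, low=40.0, high=150.0):
    # isolate the bass with a bandpass filter, then follow its envelope;
    # the envelope would drive the "gain"/chromakey intensity
    sos = butter(4, [low, high], btype="bandpass", fs=sr, output="sos")
    bass = sosfilt(sos, audio)
    smooth = butter(2, 10.0, btype="lowpass", fs=sr, output="sos")
    return sosfilt(smooth, np.abs(bass))  # crude envelope follower

sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
audio = np.sin(2 * np.pi * 60 * t)   # a 60 Hz test tone
gain = bass_envelope(audio, sr)      # per-sample control signal for the visual effect
print(gain.max())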

Sonic Spaces

My second project was a continuation of my first project. Looking at space as a flexible, changing thing that sound can reinforce and manipulate, I created an installation in the CFA stairwell. Six speakers were placed at varying levels along the stairwell, and throughout the week I played a variety of different sounds to see how the acoustic quality would change how people interact with the space.

Monday: Stravinsky & Strauss (in reverse)

Tuesday: Construction ambience

Wednesday: Ocean ambience/bird calls

Thursday: CFA recordings

Friday: A pulse

Along with the daily changing sounds, I wanted movement to be a large aspect of the piece, so I filtered the sound to each speaker differently. The speakers lower on the stairwell played higher frequencies, and as you climbed the stairwell the frequencies slowly became lower and lower. People had the power to change what they were hearing by the way they moved through the space.
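
As a simple sketch of that mapping (the actual filtering was done on the audio sent to each speaker), here is an illustrative Python snippet; the band edges are assumptions:

SPEAKER_BANDS_HZ = {
    1: (4000, 16000),  # bottom of the stairwell: highs
    2: (2000, 8000),
    3: (1000, 4000),
    4: (500, 2000),
    5: (250, 1000),
    6: (60, 500),      # top of the stairwell: lows
}

def band_for_speaker(index):
    low, high = SPEAKER_BANDS_HZ[index]
    return f"speaker {index}: bandpass {low}-{high} Hz"

for i in range(1, 7):
    print(band_for_speaker(i))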

The feedback from the piece was subjective and hard to document. For the most part, even though there was a clear view of all the speakers and cables, most people didn’t associate the sound with them. They thought the wood shop was being extra noisy, or that the School of Music was doing ‘something weird’. It wasn’t until people started moving through the space that they understood what was making the noise and how it was changing. This was caused by a few different things:

  1. CFA has a lot of reverb and it is hard to source a sound to a specific speaker unless you are very close to it.
  2. CFA has a lot of character to it. There are lots of different noises going on throughout the building at all times of day, from voice majors singing and architecture students building to art students creating art. It’s just accepted that something is constantly going on in the halls that you might not be aware of or involved with.
  3. People have a hard time connecting the visual and aural without the involvement of movement. When you were able to walk up and down the stairs, it was much easier to understand what was going on versus just hearing it from down the hall.

Whether or not people realized the sound was coming from the speakers, they all reacted to it. I recorded someone walking up the stairs, and when it was played back, everyone on the stairwell moved to let them pass before realizing no one was there. Understanding that sound has an effect on what we do, where we are, and how we interact with a space is important.