mingyuax@andrew.cmu.edu – 18-090 Twisted Signals
https://courses.ideate.cmu.edu/18-090/f2017

Final Project – Generative Music Soundscape – Matthew Xie
https://courses.ideate.cmu.edu/18-090/f2017/2017/12/04/final-project-ambient-music-generator-matthew-xie/ (Mon, 04 Dec 2017)

For the final project, I decided to further explore self-generating music in Max/MSP, a step beyond what I created for Project 1. The patch uses eight designed sounds: five main sounds and three special-effect (FX) sounds. It essentially acts as a sequencer, with inputs for tempo and beats per bar. Each bar, a new sound is triggered at random. However, both the frequency and the volume of each sound come from analyzing the user's input on the piano keyboard and other settings. The user can also reshape the sound design of five of the eight sounds through graphs. The piano keyboard doubles as a slider: both frequency and volume are set by where the user clicks on it. Additional sliders for seven of the sounds set the range of possible octaves. From there, the five main sounds are selected at random; the three FX sounds are also triggered by chance, but that probability is handled inside their own subpatch. The sounds are processed through reverb and delay effects. A stutter effect is also available, which splits each bar into 16 distinct 'steps' (inspiration from Autechre).
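The scheduling idea can be sketched outside Max as a minimal Python example: one main sound chosen at random per bar, FX sounds gated by a separate chance, and a stutter mode that subdivides the bar into 16 steps. The sound names, tempo, and FX probability below are illustrative placeholders, not values from the actual patch.

import random

TEMPO_BPM = 90           # assumed tempo input
BEATS_PER_BAR = 4        # assumed "beats per bar" input
MAIN_SOUNDS = ["pad", "bell", "pluck", "sub", "shimmer"]   # placeholder names for the 5 main sounds
FX_SOUNDS = ["riser", "glitch", "drop"]                    # placeholder names for the 3 FX sounds

def bar_events(stutter=False, fx_chance=0.3):
    """Return (offset_in_seconds, sound_name) events for one bar."""
    bar_len = 60.0 / TEMPO_BPM * BEATS_PER_BAR
    sound = random.choice(MAIN_SOUNDS)            # one main sound per bar, chosen at random
    events = [(0.0, sound)]
    if random.random() < fx_chance:               # FX sounds fire by a separate chance
        events.append((0.0, random.choice(FX_SOUNDS)))
    if stutter:                                   # stutter mode: retrigger on each of 16 steps
        step = bar_len / 16
        events = [(round(i * step, 3), sound) for i in range(16)]
    return events

for bar in range(4):
    print("bar", bar, bar_events(stutter=(bar == 3)))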

I originally wanted to do a generative music project based on probability and input from the microphone, but after researching online, and especially after discovering the music group Autechre, I changed my mind. My main inspiration came from their patches. The sound designs were learned from the YouTube channel DeliciousMaxTutorials and http://sounddesignwithmax.blogspot.com/. The reverb subpatch is adapted from https://cycling74.com/forums/reverb-in-max-msp.

Here is a recording sample of the piece being played:

 

Code as follows:

Project 1 – Matthew Xie
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/30/project-1-matthew-xie/ (Mon, 30 Oct 2017)

For Project 1, I created a self-generating melody & drone patch.

First, a WAV file of single piano notes played consecutively is analyzed. Max randomly selects portions of the file to play back as snippets, and the frequency of whatever is playing is analyzed to trigger the first, higher-pitched drone at intervals. Meanwhile, the second drone can be triggered by using the computer keyboard as a MIDI keyboard.
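As a rough illustration of that snippet-analysis step (not the actual Max patch), here is a Python sketch that picks a random snippet of a signal, estimates its pitch by autocorrelation in place of the analyzer~ object, and derives a drone frequency from it. The synthetic "piano" signal and the octave-up mapping are stand-ins I chose for the example.

import numpy as np

SR = 44100

# Stand-in for the piano WAV: a few consecutive sine "notes" (the real patch loads a recording).
note_freqs = [261.6, 329.6, 392.0, 523.3]
audio = np.concatenate([np.sin(2 * np.pi * f * np.arange(SR) / SR) for f in note_freqs])

def estimate_pitch(snippet, sr=SR):
    """Rough autocorrelation pitch estimate, standing in for analyzer~."""
    snippet = snippet - snippet.mean()
    corr = np.correlate(snippet, snippet, mode="full")[len(snippet) - 1:]
    min_lag = int(sr * 0.002)                  # ignore lags under ~2 ms
    lag = np.argmax(corr[min_lag:]) + min_lag
    return sr / lag

start = np.random.randint(0, len(audio) - SR // 2)   # random snippet, as the patch does
snippet = audio[start:start + SR // 2]
base = estimate_pitch(snippet)
print(f"snippet pitch ~{base:.1f} Hz -> drone trigger at {base * 2:.1f} Hz (one octave up)")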

The drone is achieved via subtractive synthesis: a pink-noise generator is sent through filters that only let certain frequency bands pass, the filtering being done with a handful of inline reson~ objects.
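Here is a minimal Python/SciPy sketch of the same subtractive idea, with noise fed into resonant band-pass filters. The band frequencies and Q are illustrative, and iirpeak stands in for reson~.

import numpy as np
from scipy import signal

SR = 44100
DUR = 2.0

# White noise tilted toward pink with a gentle one-pole lowpass (a crude stand-in for pink noise).
white = np.random.randn(int(SR * DUR))
b, a = signal.butter(1, 800 / (SR / 2), btype="low")
noise = signal.lfilter(b, a, white)

def reson(x, freq, q=30.0, sr=SR):
    """Resonant band-pass, loosely mimicking a reson~ object tuned to freq."""
    b, a = signal.iirpeak(freq / (sr / 2), q)
    return signal.lfilter(b, a, x)

# Sum a handful of narrow bands to carve the drone out of the noise (frequencies are illustrative).
drone = sum(reson(noise, f) for f in [110.0, 165.0, 220.0, 330.0])
drone = drone / np.max(np.abs(drone))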

The ‘analyzer~’ object is referenced from the maxobject.com website.

Delay is added to all of the sounds. The piano melody can also be run through a noise gate at will. The speed of the piano sampling can be manipulated as well, which immediately affects the rate of the self-generated, higher-pitched drones.
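For illustration only, here are NumPy sketches of the two effects mentioned: a feedback delay line and a windowed noise gate. The delay time, feedback amount, and threshold are placeholder values, not the patch's settings.

import numpy as np

SR = 44100

def feedback_delay(x, delay_s=0.35, feedback=0.4, mix=0.5, sr=SR):
    """Feedback delay line (comb filter), similar in spirit to a tapin~/tapout~ pair."""
    d = int(delay_s * sr)
    wet = np.zeros(len(x))
    for n in range(d, len(x)):
        wet[n] = x[n - d] + feedback * wet[n - d]
    return (1 - mix) * x + mix * wet

def noise_gate(x, threshold=0.05, window=1024):
    """Zero out any window whose RMS falls below the threshold."""
    y = x.astype(float).copy()
    for start in range(0, len(y), window):
        seg = y[start:start + window]
        if np.sqrt(np.mean(seg ** 2)) < threshold:
            y[start:start + window] = 0.0
    return y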

Here is an example of the music being played:

Code is here:

Assignment 4 – Matthew Xie
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/16/assignment-4-matthew-xie/ (Mon, 16 Oct 2017)

For Assignment 4, I decided to use FFT processing to create an audio-effect patch that mimics the sound of a recent favorite genre of mine, vaporwave.

I created a noise-reduction subpatch. A degrading FFT subpatch is also used and routed to the output, playing along with the other signal. After these two subpatches, the audio signal runs through the original patch, where it is stretched out in real time (slowed down); this is done with a delay effect.
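Here is a small NumPy sketch of the spectral idea behind those two subpatches: gating quiet FFT bins for the noise reduction, and thinning the bins for the degraded copy. It works frame by frame with no overlap, purely to show the concept, and the frame size, threshold, and keep_every values are illustrative.

import numpy as np

def spectral_gate(x, frame=1024, threshold=0.02):
    """Zero FFT bins quieter than a fraction of the loudest bin in each frame."""
    out = np.zeros(len(x))
    for start in range(0, len(x) - frame + 1, frame):
        spec = np.fft.rfft(x[start:start + frame])
        spec[np.abs(spec) < threshold * np.abs(spec).max()] = 0.0
        out[start:start + frame] = np.fft.irfft(spec, n=frame)
    return out

def spectral_degrade(x, frame=1024, keep_every=4):
    """Keep only every Nth FFT bin per frame, discarding the rest."""
    out = np.zeros(len(x))
    for start in range(0, len(x) - frame + 1, frame):
        spec = np.fft.rfft(x[start:start + frame])
        mask = np.zeros_like(spec)
        mask[::keep_every] = 1.0
        out[start:start + frame] = np.fft.irfft(spec * mask, n=frame)
    return out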

I also added a simple visual presentation, very similar to the one we made in class. A Japanese city-pop song was used for the demonstration, to achieve that 'vaporwave aesthetic'.

Another demo: https://soundcloud.com/thewx/assignment4-demo/s-K8iMc

 

Top level patch:

Noise Reduction Patch:

Degrading Patch:

Visual Patch:

Project 1 Proposal – Matthew Xie
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/02/project-1-proposal-matthew-xie/ (Mon, 02 Oct 2017)

For my Project 1, I am thinking of creating a patch that detects different audio characteristics in an audio file (reverb, delay, high/low frequency content, etc.) and triggers a proportional amount of visual effects, manipulating a video input. If I am able, I would also like to include other features, such as generating patterns across the video that likewise depend on the audio changes.
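As a starting point for the analysis side, here is a small NumPy sketch of how per-frame band energies could be extracted and normalized to drive video-effect amounts. The band edges and the mapping to specific effects are only illustrative assumptions, not part of the proposal itself.

import numpy as np

def band_weights(frame, sr=44100):
    """Low / mid / high energy of one audio frame, normalized to sum to 1.
    The weights could drive, e.g., blur, saturation, and displacement amounts."""
    spec = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1 / sr)
    low = spec[freqs < 250].sum()
    mid = spec[(freqs >= 250) & (freqs < 2000)].sum()
    high = spec[freqs >= 2000].sum()
    total = low + mid + high + 1e-9
    return low / total, mid / total, high / total

# Example: a 1 kHz test tone should weight mostly toward the mid band.
t = np.arange(2048) / 44100
print(band_weights(np.sin(2 * np.pi * 1000 * t)))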

An inspiration source for this project is as follows:

This is a lot more advanced than what I’m hoping to achieve, but definitely includes certain artistic styles that I’d like to imitate in my project as well.

Assignment 3 – Matthew Xie (Kalimba Project)
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/02/assignment-3-matthew-xie-kalimba-project/ (Mon, 02 Oct 2017)

For this project, I used a recording of a kalimba riff from a friend.

  1. Recorded the impulse responses: IR1 (Maggie Mo hallway), IR2 (West Wing staircase, a balloon pop at the top recorded from the bottom), IR3 (myself saying "kalimba"), and IR4 (a sampled water-drip sound effect).
  2. Ran the kalimba riff in Max through all four convolution effects and recorded the results with sfrecord~ (a minimal sketch of the convolution step follows this list). Code: https://gist.github.com/anonymous/978888ce9c3f36fe2fdfb96b659bee4f
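The convolution step itself can be sketched in Python as a simple offline equivalent of what the Max patch does, assuming mono WAV files with the hypothetical names below.

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical file names; the real session used the recorded IRs listed above.
sr, dry = wavfile.read("kalimba_riff.wav")
_, ir = wavfile.read("ir1_hallway.wav")

dry = dry.astype(float) / np.max(np.abs(dry))   # assumes mono files
ir = ir.astype(float) / np.max(np.abs(ir))

wet = fftconvolve(dry, ir)                       # convolution reverb: riff convolved with the IR
wet = wet / np.max(np.abs(wet))

wavfile.write("kalimba_ir1.wav", sr, (wet * 32767).astype(np.int16))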

Here are the results:

[Audio clips: Kalimba Original; IR1, IR2, IR3, IR4; Kalimba IR1, Kalimba IR2, Kalimba IR3, Kalimba IR4]

I really liked the sound of this and decided to make a track in Ableton Live.

Track explanation as follows:

  1. Percussion (kick, clap, claves, wooden ruffle, drip, underwater twirl, chimes): except for the kick and the chimes, every element's reverb was created by running the original sample through the Max patch with IR1 as the convolution impulse response.

Kalimbas:

  1. The track starts with the original riff, then adds the IR1, IR2, IR3, and IR4 riffs, one per loop. IR3 is pitched down two octaves to provide a bassier feel.
  2. Once all are in, the original riff and the IR1, IR2, and IR4 riffs fade away in order, one per loop. IR3 is kept to the very end (I wanted to close the track with the distant "kalimba" voice inside the kalimba reverb).

Apart from a bit of compression (Ableton's built-in) and a limiter (George Yohng's W1), not much other mixing was done (my apologies).

Assignment 2 – Matthew Xie
https://courses.ideate.cmu.edu/18-090/f2017/2017/09/18/assignment-2-matthew-xie/ (Mon, 18 Sep 2017)

An audio-processing patch that lets the user control a time-shifted delay effect on both the high end and the low end of an audio track through filters. The high and low ends are separated so different amounts of time-shifting can be applied to each, and the two are then played back together through the main audio channel.
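A minimal SciPy sketch of the same idea: split the signal at a crossover, delay each band by a different amount, and sum the bands back together. The crossover frequency and delay times are illustrative, not the patch's values.

import numpy as np
from scipy import signal

SR = 44100

def bandsplit_delay(x, crossover=500.0, low_delay_s=0.25, high_delay_s=0.08, sr=SR):
    """Split into low/high bands, delay each differently, then recombine."""
    b_lo, a_lo = signal.butter(4, crossover / (sr / 2), btype="low")
    b_hi, a_hi = signal.butter(4, crossover / (sr / 2), btype="high")
    low = signal.lfilter(b_lo, a_lo, x)
    high = signal.lfilter(b_hi, a_hi, x)

    def shift(sig, seconds):
        d = int(seconds * sr)
        return np.concatenate([np.zeros(d), sig])[:len(sig)]

    return shift(low, low_delay_s) + shift(high, high_delay_s)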

 

Assignment 1 – Matthew Xie
https://courses.ideate.cmu.edu/18-090/f2017/2017/09/06/assignment-1-matthew-xie/ (Wed, 06 Sep 2017)

The system I processed my signal through was a pixel-sorting algorithm for the Processing environment. Pixel sorting isolates a horizontal or vertical line of pixels in an image and reorders them according to some criterion; in my case, I sorted the pixels by brightness. In more detail, the script begins sorting when it finds a bright pixel in a column or row and stops sorting when it finds a dark pixel. It identifies dark pixels by comparing each pixel's brightness value to a threshold: if the value is lower than the threshold, the pixel is deemed dark; if it is higher, it is deemed bright.
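The original script was a Processing sketch; as an illustration of the same logic, here is a small Python version that sorts runs of bright pixels in each column by brightness. The threshold value and the file names are placeholders.

import numpy as np
from PIL import Image

THRESHOLD = 100  # brightness cutoff (0-255); placeholder value

img = np.array(Image.open("island.png").convert("RGB"))  # hypothetical input file
gray = img.mean(axis=2)

# Within each column, sort every contiguous run of "bright" pixels by brightness.
for x in range(img.shape[1]):
    bright = gray[:, x] >= THRESHOLD
    y = 0
    while y < img.shape[0]:
        if bright[y]:
            end = y
            while end < img.shape[0] and bright[end]:
                end += 1
            order = np.argsort(gray[y:end, x])
            img[y:end, x] = img[y:end, x][order]
            y = end
        else:
            y += 1

Image.fromarray(img).save("island_sorted.png")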

I started with a simple bird's-eye-view image of an island and ran it through the system 630 times, to be exact. The changes are quite noticeable at the beginning but slowly become harder to spot as the signal gets twisted further. I also noticed that the algorithm has a few quirks, leaving some blocks of pixels entirely unsorted. Especially in the middle of the islands, where brightness levels are even and uniformly high, the effect never appears no matter how many times the process is run.

For more info about pixel sorting, please see: http://datamoshing.com/2016/06/16/how-to-glitch-images-using-pixel-sorting/
