Project 2: Max Hexagon

For my second project, I created a game heavily inspired by Super Hexagon. This game, which I call “Max Hexagon”, bases all of its randomization on aspects of a given sound file.

In the game, the player is a cursor on one side of a spinning hexagon. As the board spins, so does the player. The player can move left and right around the hexagon, and must dodge the incoming formations for as long as possible. By default, the entire board spins at 33⅓ RPM, the angular speed of a 12″ record. As the player moves, the song plays faster or slower, based entirely on the player's angular speed in proportion to the speed of a record.
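The speed mapping above is simple proportionality. A minimal sketch (the constant and function names are mine, not from the project source):

```python
# Sketch of the playback-speed mapping described above; the project
# computes this inside its game loop, so names here are illustrative.
RECORD_RPM = 100.0 / 3.0  # 33 1/3 RPM, the speed of a 12-inch record

def playback_rate(player_rpm: float) -> float:
    """Song playback multiplier: the player's angular speed relative
    to the angular speed of a record."""
    return player_rpm / RECORD_RPM
```

At the default spin the multiplier is 1.0, so the song plays at normal speed; spinning twice as fast doubles it.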

The stage itself is generated in a number of ways. Aspects of the song's FFT are used to create pseudo-random shapes and rotations, while the note rate of the song determines game speed. In addition, visual effects are created from the music: the maximum value of the FFT sets the color, the integral of a frame of the song determines how large the center hexagon is, and the beat of the song changes the background pattern. Beat detection is both inaccurate and computationally intensive, which is why it does not play a larger role in the game.
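The audio-to-visual mappings can be sketched like this. This is a hedged illustration of the mappings named above, not the project's code; in particular, I approximate the “integral of a frame” as the sum of absolute sample values:

```python
def frame_visuals(fft_mags, frame_samples, base_radius=20.0):
    """Illustrative mapping from one analysis frame to visual parameters.

    hue    -- driven by the maximum FFT magnitude, folded into [0, 1)
    radius -- base hexagon size plus the frame's "integral", approximated
              here as the sum of absolute sample values
    """
    hue = max(fft_mags) % 1.0
    radius = base_radius + sum(abs(s) for s in frame_samples)
    return hue, radius
```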

The game itself was created using Python and Tkinter. The script that runs the game is multi-threaded, allowing Tkinter and an OSC server to run in parallel. The OSC server changes specific variables so that Python and Max can communicate. In general, either Python sends a message to Max, which is acted on immediately, or Python requests new data from Max, which is promptly sent back over OSC.
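The threading arrangement can be sketched with the standard library alone. Here a plain worker thread stands in for the OSC server thread (in the project that thread runs python-osc's server), and a lock guards the variables both sides touch:

```python
import threading

# Sketch of the two-thread layout: the game loop (Tkinter in the real
# project) reads shared state that the server thread mutates. A plain
# thread stands in for the OSC server so this example is self-contained.
state = {"game_speed": 1.0}
state_lock = threading.Lock()

def handle_speed(new_speed: float) -> None:
    """What an OSC message handler would do: update a shared variable
    that the game loop later reads."""
    with state_lock:
        state["game_speed"] = new_speed

# The real project would keep its server thread alive with the game;
# here we run one handler call on a worker thread and wait for it.
t = threading.Thread(target=handle_speed, args=(2.5,), daemon=True)
t.start()
t.join()
```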

The game itself is extremely computationally intensive, and must be run at 1920×1080 resolution. It is, unfortunately, difficult for the game to keep up with Max while other tasks are running on the hardware. If the game crashes due to insufficient hardware, the framerate can be changed in the Python script and the tickrate (which is the framerate in milliseconds) can be changed in Max.
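Assuming the tickrate is the per-frame interval Max waits between updates (my reading of “the framerate in milliseconds”), the two settings are related by:

```python
def tickrate_ms(fps: float) -> float:
    """Frame interval in milliseconds for a given framerate. This is my
    assumption about how the two settings relate; the project may differ."""
    return 1000.0 / fps
```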

The game itself requires a few external modules:

Python OSC:


Beat~ is a soft requirement; it can be removed if necessary and most of the game will still function, barring a visual effect. Beat~ requires 32-bit Max, and will thus not run in Max 8.

My project can be downloaded here:

Python spews a number of errors on closing the program. This is normal behavior, caused by there being no way to cleanly shut down the OSC server along with the rest of the game.

Project 1: These sounds look nice.

For my first project, I created a Max patch which takes an input video and synthesizes a corresponding audio file whose waveform representation looks like the video.

To convert a video to a waveform representation, a few things must happen. First, the video needs to be simplified; I use edge detection for that. The edge-detected video then needs to be converted to matrices of sine values, which a scope will plot as x and y coordinates in a signal. Since our video is edge detected, we need only look for “visible” points and then determine what their sine values are. These values correspond to their positions in the matrix: the top-right of the screen is x=1, y=1 and the bottom-left is x=-1, y=-1. Once we have our list of values, we need to align them so that the output looks “correct”. This is done in Python, as it allows for more complex manipulations. The aligned matrix is then written to a .jxf file for later use in Max. That matrix represents one frame of our video, as audio.
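The pixel-to-coordinate convention above can be sketched directly. This is an illustrative helper written to match the stated convention, not the project's actual script:

```python
def pixel_to_scope(col, row, width, height):
    """Map a visible edge pixel to scope coordinates, per the convention
    above: the top-right of the screen is (1, 1) and the bottom-left is
    (-1, -1). Column 0 is the left edge of the frame, row 0 is the top.
    """
    x = 2.0 * col / (width - 1) - 1.0
    y = 1.0 - 2.0 * row / (height - 1)
    return x, y
```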

Interfacing Max with Python required a bit of creativity. I wound up using OSC to send messages between Python and Max, with most real data being sent in the form of matrices saved to .jxf files. The exception to this is the patch's playback function, which is Python sending many read values to Max under very strict timing.

Rendering the video to audio takes a long time: around one second of 24 fps video takes a minute to render. I’ve included a short video and its corresponding audio and representation. My project zip also includes a scope patch to allow for dependency-free viewing on any computer with Max.

The patch itself has several requirements, the primary ones being Python-OSC, Python 3, and xray.jit.sift. These are either included with my project or explained in its README.



My project can be downloaded here:


Assignment 4: Some colorful music.

For this assignment, I created a patch which takes in an arbitrary sound file and uses it to convolve a video source; in my case, I used my webcam. The video is convolved at the color level: the first bin of the sound's FFT becomes the first item in the red matrix, the second goes to green, the third to blue, and the pattern repeats until there is a 5×5 matrix for each color of the video to be convolved with. The video is rendered at about 14 fps to keep the render from lagging, although this can probably be increased. The scale for the FFT can also be increased: each bin is normalized to cap at 1, but the scale multiplies this by some arbitrary factor. I’ve found that three works best in my dorm.
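The round-robin distribution of bins into the three kernels can be sketched as follows. This is an illustrative reimplementation of the scheme described above; the patch itself builds these matrices inside Max:

```python
def fft_to_kernels(bins, size=5, scale=3.0):
    """Round-robin FFT bins into red, green, and blue size x size
    convolution kernels: bin 0 -> red, bin 1 -> green, bin 2 -> blue,
    and so on. Bins are assumed pre-normalized to cap at 1 and are then
    multiplied by the arbitrary scale factor (three worked well for me).
    """
    n = size * size
    flat = {"r": [], "g": [], "b": []}
    for i in range(3 * n):
        flat["rgb"[i % 3]].append(min(bins[i], 1.0) * scale)
    # Reshape each flat channel into `size` rows of `size` values.
    return {c: [vals[j * size:(j + 1) * size] for j in range(size)]
            for c, vals in flat.items()}
```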

Video of patch in use:

Top level patch:

fft patch:

Assignment 3 – Convolution

For my convolution, I recorded a brief statement using my microphone in my dorm.

I then created 4 impulse response recordings.

This impulse response is the sound effect for collecting a ring in the game “Sonic the Hedgehog.” It was obtained by extracting the sound effect from a YouTube video using a conversion website, then normalized in Audacity.

This impulse response is the first three notes of the Song of Storms from the Legend of Zelda series. It was recorded via an EasyCap USB capture card connected to a Nintendo Wii; the song was played in-game in The Legend of Zelda: Majora’s Mask, and the recording was normalized in Audacity.

This impulse response was recorded under the UC overhang across from the entrance to Entropy. To record it, I determined which spot seemed to have the most interesting acoustics, and placed a Zoom recorder in said spot. I then popped a balloon a few yards away, still under the UC, and normalized the result in Audacity.

This impulse response was recorded in Doherty Hall, outside the first-floor elevator. To record it, I set a Zoom H4n Pro to record and placed it on a desk in the corner of the room. I then popped a balloon in the center of the room and normalized the recording in Audacity.

Using these impulse response recordings, I convolved my original signal.
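Convolution itself amounts to every impulse-response sample producing a scaled, delayed copy of the dry recording. A direct-form sketch (real tools use FFT-based convolution for speed, but the direct form shows the idea):

```python
def convolve(dry, impulse):
    """Direct-form convolution: each sample of the impulse response adds
    a scaled, delayed copy of the dry signal into the output."""
    out = [0.0] * (len(dry) + len(impulse) - 1)
    for i, d in enumerate(dry):
        for j, h in enumerate(impulse):
            out[i + j] += d * h
    return out
```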

Assignment 2 – A delayed change in pitch.

For my assignment, I began experimenting with tapin~/tapout~ and line~. Through experimentation I learned that, when you change the parameters of line~, there is a pitch shift during the transition to the new delay. I used a random number generator and zl reg to set up a patch where, whenever line~ reached its target delay, its goal would be changed again and its start would be set to the previous end, resulting in a delayed pitch shift at the output.
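The pitch shift falls out of the delay ramp itself: while line~ moves the tap time, the read point travels through the delay buffer faster or slower than normal. A back-of-the-envelope sketch of that relationship (a standard variable-delay identity, not code from the patch):

```python
def pitch_ratio(delay_start_ms, delay_end_ms, ramp_ms):
    """Playback-rate multiplier while the tap delay ramps linearly:
    ratio = 1 - d(delay)/dt. Lengthening the delay (positive slope)
    slows playback and lowers pitch; shortening it raises pitch."""
    return 1.0 - (delay_end_ms - delay_start_ms) / ramp_ms
```

For example, ramping the delay from 0 ms to 50 ms over 100 ms plays the delayed signal back at half speed, an octave down.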

Next, I incorporated feedback using the same method with a shorter delay and an input that gets progressively quieter (so as to not trample my original sound in an explosive manner). The output of my patch is a version of the audio with a distinct echo and pitch shift; below are the results for the song I used.

Assignment 1 – Repeated capturing of gameplay footage.

For my assignment, I first used my capture card to record part of the opening of The Legend of Zelda: Majora’s Mask. I then transferred that footage to my laptop, where it was output and recorded back to my main computer, and repeated this rerecording from my laptop many times. To keep the video from being destroyed by darkening alone, I added a bit of brightness after each iteration. Additionally, each recording was converted from AVI to MP4 before being iterated on. I successfully recorded a total of 16 iterations, at which point the video component was destroyed.

Since the file is too large to upload here, I have uploaded it to YouTube: