Complex Harmonic Motion

Viewing Golan Levin’s presentation was refreshing. It is always nice to take a step back from working in a very specific field or genre and take in different flavors of the art. I had never given the visual aspects of music much serious thought, and Golan’s collection of personal experiences and YouTube videos gave me real perspective on how he thinks about the interaction between sound and visuals.

The historical aspect was particularly interesting to me; I did not know that waveforms could be printed on and read from film without a digital or magnetic interface. It makes me appreciate a little more the convenience of being able to shape waveforms however I choose on the computer with fancy programs.

Lissajous patterns on oscilloscopes are incredibly fun to watch. They are made twice as interesting by the nutty amount of testing and detail that goes into creating certain aesthetics. I bet some of the formulas for tracing the shape of a face or a car take up dozens of exponents, coefficients, and sinusoidal functions. It makes you think: maybe these images can be mathematically derived with a program? Is it possible that some of these artists created these patterns as images first?

After some research, I found out that the artist behind the “Oscilloscope Music” videos does a lot of his work in Pure Data, at least for the simpler shapes. I couldn’t confirm whether he uses software to convert images into sound, but he does reference a tool, currently in development, for turning 3D models into sound.
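Out of curiosity, here is a minimal Python sketch of the basic X-Y technique these pieces rely on: the left channel drives the oscilloscope’s X input and the right channel drives Y, so two sine waves in a simple frequency ratio trace a Lissajous figure. The frequencies, phase, and duration below are just illustrative guesses, not anything from Fenderson’s actual workflow.

```python
import numpy as np
import wave

# Illustrative parameters only -- a sketch of the X-Y oscilloscope technique,
# not any artist's actual process.
SAMPLE_RATE = 44100
DURATION = 5.0                 # seconds
FREQ_X, FREQ_Y = 440, 660      # a 2:3 ratio traces a stable Lissajous figure

t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
x = np.sin(2 * np.pi * FREQ_X * t)              # left channel -> scope X input
y = np.sin(2 * np.pi * FREQ_Y * t + np.pi / 2)  # right channel -> scope Y input

# Interleave into 16-bit stereo and write a WAV that can be fed to an
# oscilloscope in X-Y mode (or a software scope).
stereo = (np.column_stack([x, y]) * 32767).astype(np.int16)
with wave.open("lissajous.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes(stereo.tobytes())
```

Feeding the resulting file to a scope in X-Y mode draws the figure; more elaborate shapes just mean more elaborate left/right waveforms.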

This is such a crazy way of creating music, but very cool.

Golan Levin Reflection

My main takeaway from Golan Levin’s really awesome presentation last week was the work of Norman McLaren. The audio-visual pieces he showed us really caught my attention. Upon further research, I learned that McLaren’s main years of activity ran from the 50s to the 80s, which made me appreciate his work even more. The kind of audio-visual art he was doing was extremely groundbreaking for his time, not to mention extremely difficult and time-consuming to create with the means available to him. With modern-day computing, generating simple geometric visuals in time with sounds or music is quite simple (I’ve even made simple geometric music visualizations using Max). McLaren’s “A Phantasy in Colors” was animated entirely in pastel, and then the film was manually synced up to the audio. That level of effort and coordination is absolutely mind-boggling.

Anyways, I leave you all with a really neat short film of his that I found (apparently an Oscar-winning film) with some really interesting sound work. Even though it was released in the 1950s, all the sounds used in the film were synthesized.

Golan Levin Reflection: Visualizing Music Performances

What stuck out to me the most from Golan Levin’s lecture was the visualization of an orchestra that consisted of projecting close-ups of people’s faces as they listened to the orchestra play a piece. When I think of visualization of music, I typically think of an animation that syncs up with the music and is carefully designed to extract a particular emotional response from the listener. This was very different, and the brilliance of the piece was how organic it was. Audience members listening to the orchestra could see in detail how the music affects others by looking at the projection, allowing them to compare their own experience of the music to that of others. They weren’t led into a specific emotional response by the visuals; rather, their listening experience was enhanced by visuals of natural human responses to what they were listening to. By projecting several faces behind the orchestra, audience members can clearly see that everyone experiences the same piece of music differently, and need not feel that there is a “right” way to experience an orchestral performance.

Nowadays, many orchestras are struggling with ticket sales and are trying to remedy the situation by adding modern twists to concerts, including playing popular music (for instance, the Pittsburgh Symphony’s FUSE concert series) and incorporating visual art into performances. While it’s great that orchestras are making an effort to find a place for themselves in the modern world, focusing too much on adding flashy visuals to a performance can take away from the music. Classical music is beautiful in its own right, and the example Levin presented in class is a great example of how technology can be used to enhance the experience of classical music instead of overshadowing it.

Also, as an example of how the classical music concert can be modernized without ruining the musical experience, here is how the Toronto Symphony cleverly uses graphic design to help audience members understand the structure of the pieces they are listening to.

Golan Levin Reflection:

Overall, I thought Golan was an incredible artist and musician. His ideas about linking sound and visual mediums together were fascinating. He showed some very insightful videos that I had never seen before. One in particular stood out to me: the video of oscilloscope images and their corresponding sound. The video he showed first was the oscilloscope mushroom video.

He then showed the album made for oscilloscope by Jerobeam Fenderson.
https://www.kickstarter.com/projects/1230242223/oscilloscope-music
The idea of encoding these images into sounds is really interesting to me. Hiding messages in audio isn’t something new, however. Aphex Twin famously did it with the face hidden in the spectrum of his song “Equation”.
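To get a feel for how an image can end up in a spectrogram, here is a rough Python sketch of the general trick (my own illustration, not Aphex Twin’s actual process): each pixel column becomes a short slice of time, and each pixel row controls the amplitude of a sine at a fixed frequency, so a spectrogram of the result reveals the picture. The file name, frequency band, and timing below are made up.

```python
import numpy as np
from PIL import Image

# Sketch of the "image in the spectrogram" trick. "face.png" is a hypothetical
# grayscale image; band and slice length are arbitrary choices.
SAMPLE_RATE = 44100
SLICE_SEC = 0.05                    # duration of one image column
F_LOW, F_HIGH = 2000, 8000          # frequency band the picture occupies

img = np.asarray(Image.open("face.png").convert("L"), dtype=float) / 255.0
img = img[::-1]                     # flip so row 0 maps to the lowest frequency
rows, cols = img.shape
freqs = np.linspace(F_LOW, F_HIGH, rows)

t = np.arange(int(SLICE_SEC * SAMPLE_RATE)) / SAMPLE_RATE
audio = []
for c in range(cols):
    # Sum one sine per pixel row, weighted by that pixel's brightness.
    column = img[:, c][:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
    audio.append(column.sum(axis=0))
audio = np.concatenate(audio)
audio /= np.abs(audio).max() + 1e-9  # normalize; write out as mono WAV if desired
```

Run the output through any spectrogram viewer and the picture reappears, with brighter pixels showing up as stronger partials.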

However, making an actual moving image out of these sounds is really fascinating. Being able to tell a visual story from what sounds like unintelligible noise is incredible. The sounds seem completely unrelated on their own, but together they create this visual masterpiece.
In the future, I could imagine a piece of music technology software that hides its DRM or serial number in the spectrum at non-audible frequencies, or shares some kind of codec metadata in that inaudible range. Imagine being able to take a recording of an audio clip and have it carry hidden text or data instructing another piece of technology on how to do something, like an over-the-air instruction system. Or imagine encoding these unintelligible noises into moving images and making a sort of video synthesizer from these crazy sounds, transmitting video-like content over sound. Even newer technology from Teenage Engineering, the Pocket Operator PO-32, can transmit its sound data over audio.
https://www.teenageengineering.com/products/po
I think the data transfer system is very similar to the Aphex Twin face in the spectrum, just with encoded data instead of an image. Golan really got me thinking about the possibilities of sounds that aren’t just sounds, but also images. Even though they may sound unintelligible, they are perfectly intelligible in their own domain.

Group 7

Abstract:

The goal of this ambisonic soundscape was to explore the perception of time passing as one ages. This was accomplished through compositional layering of ambient environmental samples built around an underlying spin concept, with the foundational pulse grounded in the progressive acceleration of a ticking clock. The piece crescendos to its climax at the clock’s highest RPM, as the interplay of sounds weaves a chaotic background symbolic of the whirlwind of life itself. Ultimately the tension resolves with the removal of the chaotic elements, representing a self-discovered inner peace and harmony found at the end of the journey.

Ideas:

The idea behind this work was to create a soundscape built from many rotating sources and to convolve them in an interesting way. Multiple recordings of different rotating objects were used, such as clocks, fingers run around a water-filled wine glass, and diesel engines. The work begins with the sound of a clock ticking around the listeners at one rotation per minute, the standard angular velocity of a second hand. As time progresses, the work layers in more samples at varying rotation velocities, along with stationary recordings, while the second hand ticks at an increasing rate. This acceleration of time brings in more and more sound recordings until an appropriate balance of sound and space is found; then the recordings disperse.

A draft of our compositional outline can be found here: https://docs.google.com/a/andrew.cmu.edu/document/d/13lr_reW6H9SErvx6deQELith79aUhBvXnYFV_HIASJk/edit?usp=sharing

Max Patch:

The programming in Max/MSP consists of two main patches: a rotational patch, which handles sources rotating at a specified angular velocity, and an oscillating patch, which moves sources along pre-defined paths. An additional patch lets a Novation Launchkey 49 act as a MIDI controller for the levels of the rotational sound sources.

The rotational patch was built around the hoa.2d.map patch from the HOA library. Cartesian coordinates are calculated for eight separate sources, along with a function controlling their magnitude (distance). All sources spin at a pre-defined angular velocity, which increases linearly over the course of the performance. Multiple rotational patches were used during the piece.
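For readers who don’t use Max, here is a rough Python sketch of the math the rotational patch implements: the eight sources sit evenly spaced on a circle, and their shared rotation angle accelerates linearly over the piece. The velocities, radius, and piece length here are placeholders, not the values from our patch.

```python
import numpy as np

# Sketch of the rotational patch's math (placeholder values, Python for clarity).
N_SOURCES = 8
RADIUS = 1.0                     # distance from the listener (the "magnitude")
OMEGA_START = 2 * np.pi / 60.0   # one rotation per minute, like a second hand
OMEGA_END = 2 * np.pi / 2.0      # hypothetical peak: one rotation every 2 s
PIECE_LENGTH = 300.0             # seconds

def source_positions(t):
    """Return (x, y) arrays for all sources at time t (seconds into the piece)."""
    # Angular velocity ramps linearly, so the accumulated angle is its integral:
    # theta(t) = w0*t + 0.5 * ((w1 - w0) / T) * t^2
    accel = (OMEGA_END - OMEGA_START) / PIECE_LENGTH
    theta = OMEGA_START * t + 0.5 * accel * t * t
    offsets = 2 * np.pi * np.arange(N_SOURCES) / N_SOURCES
    x = RADIUS * np.cos(theta + offsets)
    y = RADIUS * np.sin(theta + offsets)
    return x, y

# Example: source positions one minute in.
print(source_positions(60.0))
```

Roughly speaking, these (x, y) pairs are what the patch feeds to hoa.2d.map to position each source.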

The oscillating patch also used the hoa.2d.map patch, modified with sinusoidal oscillators driving the x and y positions of the sound sources. This moved the left and right channels of the stereo audio as point sources tracing Lissajous patterns around the central space.
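Again as a non-Max sketch, the motion in the oscillating patch boils down to two slow sine oscillators at different rates driving x and y; the rates and radius below are made up for illustration.

```python
import numpy as np

# Sketch of one source's Lissajous path in the oscillating patch (toy values).
def lissajous_position(t, fx=0.20, fy=0.30, phase=np.pi / 2, radius=1.0):
    """Position of a source at time t; a 2:3 rate ratio gives a stable figure."""
    x = radius * np.sin(2 * np.pi * fx * t + phase)
    y = radius * np.sin(2 * np.pi * fy * t)
    return x, y

# Left and right channels offset in phase so they chase each other around the space.
print(lissajous_position(10.0))             # "left" source
print(lissajous_position(10.0, phase=0.0))  # "right" source
```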

The max patch can be downloaded here: https://cmu.box.com/s/kzeykyxpw39smgzolv2gdpuwt8soag5n

Ableton Live Samples:

A solid foundation of sounds was chosen as the base of the composition. The higher-frequency layers consisted of wind chimes with a little EQ and reverb, plus a combination of crystal glasses recorded at different water levels. A recording of a dolly cart rolling on its wheels was low-pass filtered and loaded into the Push as a polyphonic instrument, lending a sense of tension with its abrupt, uneasy sounds. Lastly, a recorded segment of guitar and flute added to the tranquility of the wind chime and glass sounds and helped establish the tone and resolution of the environment.

Mixing:

A preset was made on the uTRACKX32 that set the first 24 inputs to take audio from the computer, or “card” as it is listed on the mixer. Inputs 1–8 were assigned to one DCA group and the rest to a second group: DCA group 1 was all of Max and DCA group 2 was Ableton. This made it easier to keep the balance and bring multiple elements in and out simultaneously.

The recording can be found here: https://cmu.box.com/s/jhiodbx2os4la7ao0iacb6c00kvt1ye6

Team Functionality:

Gladstone Butler: Live mixing
Luigi Cannatti: Oscillating Max patch / Max MIDI mapping
Nick Pourazima: Ableton Live samples / Push performance
Garrett Osborne: Rotational Max patch

Reflections on Music for Oscilloscopes

Visualizing sound appears to be quite the open-ended world for experimentation, if our lecture with Golan Levin was any indicator. While a straightforward A-to-B sound-to-visual mapping certainly has a lot to explore on its own, what caught my eye the most was the way some of the artists we looked at had the sights dictating the sound as much as the other way around. This was very evident in the Norman McLaren piece Synchromy, where the sonic and visual elements were generated at the same time from the same medium. The piece that interested me the most, however (and has led to lots of fun listening time since then), is the Music for Oscilloscopes project by Jerobeam Fenderson.

The Kickstarter video by itself is a tour de force of weird sounds colliding with themselves in green shapes you might never expect sound waves to make. Any sound in this video would be sufficient fodder to build entire songs around in the psy-trance electronic area (think this type of stuff), yet the visual element remains surprisingly cohesive. It certainly makes me wonder how various processing effects might change the outcome of these green crossing points, or what the visual outcome of changing one axis of the oscilloscope’s color might be. More to the point, this piece does seem to hit a perfect middle point for audiovisual work, where the audio would mean much less without the visual context, just as the visuals would with no audio. Hopefully this sort of synergetic behavior is something we can work toward in class on one project or another.

Golan Levin Response: Pyrotechnics

Levin introduced us to a lot of interesting projects and possibilities, but the ones that caught my attention most immediately were those which dealt with sonic visualization. Sure, we have waveforms and oscilloscopes and whatnot, but I’m talking low-tech, real-world physical objects here. I was very interested in the Rubens’ tube, being a vaguely fire-obsessed chap myself.

So I did a little digging and found this dude: https://www.youtube.com/watch?v=2awbKQ2DLRE
There’s a little group o’ folks in Denmark who’re teaching children about physics through cute little visualizations, and this one caught my eye. It’s a Rubens’ plane, I guess; it’s interesting in that the flames make a sort of pixel-art pattern, almost akin to the cymatic plate experiments. I love the idea of physical representations of sound and the ways in which sound can influence the physical environment. I like the concept of making the invisible visible through its interactions with the visible world. I’ve also recently become very interested in the interdisciplinarity of music and sound art. There are so many out-there projects one can produce if one just knows a little about physics or…fire. Music is a lovely medium because anything can become a piece of the pie: field recordings, fire, physics, fire, biology, or even fire!

Golan Levin Response

During Golan’s lecture, he showed us a video depicting a “talking piano”. I’m sure a lot of technical effort went into both constructing the mechanical piano-playing robot and converting speech into notes that could be played back recognizably. However, it reminded me of a certain horrifying video.

They both rely on the same concept of layering sound waves, including the overtones and undertones, to produce something that sounds like speech. I think it is pretty comedic that one could be praised as a technological advancement while the other has become a meme.
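As a toy illustration of that layering idea (my own sketch, not how either video was actually made), you can get a surprisingly vowel-like tone just by stacking harmonics of a low fundamental and weighting each one by how close it falls to a vowel’s formant frequencies; the formant values below are only rough figures for an “ah” sound.

```python
import numpy as np

# Toy additive-synthesis sketch of a vowel-like tone (rough, illustrative values).
SAMPLE_RATE = 44100
F0 = 110.0                                        # fundamental pitch of the "voice"
FORMANTS = [(800, 80), (1150, 90), (2900, 120)]   # approximate /a/ formants (Hz, bandwidth)

t = np.arange(int(SAMPLE_RATE * 1.0)) / SAMPLE_RATE
signal = np.zeros_like(t)
for k in range(1, 40):                            # sum the first 40 harmonics
    f = k * F0
    # Each harmonic's amplitude is a sum of bell curves centered on the formants.
    amp = sum(np.exp(-((f - fc) / bw) ** 2) for fc, bw in FORMANTS)
    signal += amp * np.sin(2 * np.pi * f * t)
signal /= np.abs(signal).max()                    # normalize before playback
```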

I also think it’s interesting that a MIDI converter, in probably its simplest form, is able to extract the notes needed to recreate these vowel sounds. I think another part of recognizing the speech in this song is simply how well the song is known by listeners. Considering that it is now one of the most popular Christmas songs, it is no surprise that our brains are able to make out the original lyrics despite the strange presentation.

Golan Levin Response

Golan threw a ton of material at us during his guest appearance in class on Wednesday, and to be quite honest, I was extremely fascinated by the Cymatics video from Nigel Stanford.

Although the video linked below is immeasurably cheesy, and greysweatshirts.com is now out of stock, I took away a lot from the oversold display of cymatics. (It gets exponentially cheesier as time goes on after 4:00. Nigel has really outdone himself.)

Watching the video, seeing such a strong visual connection to what I was hearing felt somewhat new. I watch videos every day that are audio/visually correct, but for some reason seeing the science behind what I heard was really cool to me. If used appropriately, cymatics could really add to some audio/visual pieces, as well as to architecture.

Norman McLaren and Synesthesia

Golan Levin’s guest spot on visualizing sound reiterated for me how much of my interest in sound is guided by synesthesia.

It honestly makes the whole topic of visualizing sound somewhat frustrating, because I have a very strong connection between sonic and tactile texture. It has influenced how I understand sound and what I tend to gravitate toward aesthetically. For instance, I get excited by things that have a slightly rough or grainy sound, while on the other hand, anything that’s been passed through an FFT (with audible artifacts) tastes/feels like the hand soap from a public bathroom. Long story short, I have predetermined expectations of what a sound looks and feels like, and it’s profoundly distracting when what I see doesn’t match what I hear.

However, it is with some surprise that I have to say I’ve learned a lot about what works for me and what doesn’t from delving into Norman McLaren’s work.

The first thing that jumps out at me is that while his most exciting work doesn’t follow the what-you-see-is-what-you-hear approach that a lot of artists take, it still creates an extremely satisfying connection between sound, image, and texture. Norman McLaren is a master of synchresis, a term coined by Michel Chion to describe the link forged in the audience’s mind between sound and image when both happen in the same instant. In ‘Dots’ and ‘Phantasy in Colors’, McLaren’s tight link between the sounds and the visuals creates the impression that one is generating the other. McLaren’s use of synchresis is actually clearer in ‘Loops’ than in ‘Dots’ because it isn’t as seamlessly integrated: unlike in ‘Dots’, not every movement in ‘Loops’ is accompanied by a sound, and the unaccompanied ones leave conspicuous voids. Thematically, McLaren’s visuals for ‘Phantasy in Colors’ are much more loosely derived from the music, but they nonetheless create a very satisfying relationship. The jerky, hand-made style of the visuals in both pieces matches the squelchy character of the audio. Even McLaren’s drawn-sound score for ‘Neighbors’ couldn’t really be replaced by a synthesized or orchestral arrangement of the same notes, because something about the quality of the sound suggests an earthiness that connects it to the flower.