Created by Amanda Ghassaei, the Sugarcube is a portable MIDI controller capable of connecting with up to 16 apps. It implements both buttons and “shake and tilt” features, allowing the user to manipulate sounds by tilting the Sugarcube one way or another. Ghassaei wrote Arduino code so that the device does all of its app processing onboard rather than on a separate computer. The device can also store sounds to correspond with different buttons. The creator, a grad student at the MIT Media Lab, used their knowledge of interactivity and media to create a device that is both user-friendly and fun.
I admire the simple, clear aesthetic of the Sugarcube: it is easy to use without sacrificing beauty. The back-lit buttons create a beautiful visual while also producing sounds and patterns, so you can literally see the music you’re making. It looks so simple, yet the code behind it is fairly long and complicated.
Georgia Tech’s Shimon robot writes and plays its own music using machine learning.
The Shimon robot was trained on 5,000 songs and two million motifs, riffs, and licks. The algorithm behind it uses a neural network that simulates the brain’s bottom-up cognition process. The result sounds very soothing; according to the article, the song it wrote is a blend of jazz and classical music. What I admire most about this project is that both the music and the performance are totally generated, and yet it still sounds human rather than robotic. The robot is arguably making “creative artistic decisions” by synthesizing novel music from pre-existing pieces. I also admire the performance. Instead of pre-defining each key’s location on the keyboard with a position variable, the robot uses computer vision through a camera on its robot head, which actively rotates, pans, and scans its field of vision the same way an actual musician does while playing the keyboard. If I closed my eyes, I could be fooled into thinking this is a human.
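Shimon’s actual model is a deep neural network trained on that huge corpus, which is far beyond a blog sketch. But the core idea of generating a novel melody from statistics learned over existing music can be illustrated with a much simpler stand-in, a first-order Markov chain over pitches (the toy corpus and all names below are hypothetical, not Georgia Tech’s code):

```python
import random

def train_transitions(melodies):
    """Count pitch-to-pitch transitions across a corpus of melodies."""
    table = {}
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Sample a new melody by walking the learned transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = table.get(melody[-1])
        if not choices:
            break
        melody.append(rng.choice(choices))
    return melody

# Toy corpus of MIDI pitch sequences (hypothetical training data).
corpus = [[60, 62, 64, 62, 60], [60, 64, 67, 64, 60]]
table = train_transitions(corpus)
print(generate(table, start=60, length=8))
```

The generated line is new, yet every step in it is a transition that occurred somewhere in the training data, which is one crude way to get output that “sounds human” without copying any one song.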
A promotional video for On Your Wavelength & Merge Festival 2015
Created by Marcus Lyall, Robert Thomas, and Alex Anpilogov, On Your Wavelength is an interactive installation that generates music and a laser-show as it analyzes the user’s brainwaves in real-time.
In the installation, the user is fitted with an EEG headset; the brainwave data is then analyzed and turned into media using Processing, with Pure Data handling the audio generation. The analysis builds a profile of the user focused on three possible emotions – joy, detachment, and tension – along with several possible instruments and pitches, in order to generate musical compositions specific to the current user. While the generation was up to the program’s analysis, the color choices and compositions, as well as the distinctive set of emotions the artists chose to work with, show their distinct mark on the piece.
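The artists built this pipeline in Processing and Pure Data, and their actual analysis isn’t public; still, the basic shape of the idea, scoring brainwave activity against a small set of emotions and then picking musical parameters from the winner, can be sketched like this (the EEG bands, thresholds, and palette values are all illustrative guesses, not the artists’ model):

```python
def emotion_profile(alpha, beta, theta):
    """Map relative EEG band power to the piece's three emotions.
    The band-to-emotion pairing here is illustrative only."""
    total = alpha + beta + theta
    scores = {
        "joy": alpha / total,         # relaxed, positive engagement
        "tension": beta / total,      # high-frequency arousal
        "detachment": theta / total,  # drowsy, inward states
    }
    return max(scores, key=scores.get)

# Hypothetical mapping from detected emotion to musical parameters.
PALETTE = {
    "joy":        {"scale": "major",      "tempo": 120, "instrument": "bells"},
    "tension":    {"scale": "phrygian",   "tempo": 140, "instrument": "strings"},
    "detachment": {"scale": "whole-tone", "tempo": 70,  "instrument": "pads"},
}

print(PALETTE[emotion_profile(alpha=0.6, beta=0.2, theta=0.2)])
```

The key design point survives even in a sketch this small: the program does not compose from nothing, it selects among possibilities the artists authored, which is why their mark stays visible in every generated result.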
Behind the scenes of On Your Wavelength
Large-scale, immersive experiences like this one have always fascinated me. In this case it’s not just technology taking artistic control, but a symbiotic relationship between user and program: awe-inspiring to look at, and even more so to stand in the user’s place and watch the program react.
On Your Wavelength was first shown during Merge Festival 2015 in London and later in a modified format in Winter Lights 2017 in London.
Additional performances, such as this one, can be viewed on YouTube.
John Karborn, a new-media audio-visual video artist, developed The Wondrous Wobbulator Machine for Young and Old Alike by feeding geometric still frames into a custom-built wobbulator, a device that visualizes frequencies and wavelengths of a given sound. To record these, he uses analog video sequences (VHS, for example) while a given frequency is being passed through the wobbulator — what results is the geometric animation to the right. Algorithmically, the wobbulator utilizes a combination of manual control feedback and an oscillator that produces a visual representation of the image being manipulated by the given sound waves.
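K A R B O R N’s wobbulator is analog hardware, but a rough software analogue of what the oscillator does to the picture can be sketched: shift each scanline of a still frame sideways by a sine wave, so the image visibly bends with the frequency driving it (the image, frequency, and depth values below are made up for illustration):

```python
import math

def wobbulate(image, freq, depth):
    """Shift each scanline of a grayscale image (a list of pixel rows)
    sideways by a sine oscillator: a crude software analogue of a video
    wobbulator bending a picture with a sound frequency."""
    height = len(image)
    out = []
    for y, row in enumerate(image):
        shift = int(round(depth * math.sin(2 * math.pi * freq * y / height)))
        # Rotate the row by `shift` pixels (wrapping at the edges).
        k = -shift % len(row)
        out.append(row[k:] + row[:k])
    return out

# A simple geometric still: a vertical black/white edge, 8 rows tall.
frame = [[0] * 4 + [255] * 4 for _ in range(8)]
print(wobbulate(frame, freq=2, depth=3))
```

A higher `freq` puts more ripples down the frame, and `depth` controls how far the lines swing, which loosely matches how the hardware responds to pitch and amplitude.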
My current work is in the field of audio-visual performance art, which makes K A R B O R N’s methodologies valuable to my practice. As a whole, K A R B O R N’s work follows similar themes, combining narrative, acting, sound, and still frames to produce video works and documentation that are altered by time and noise.
The Classyfier, by Benedict Hubener, Stephanie Lee, and Kelvyn Marte at the Copenhagen Institute of Interaction Design (CIID), is a table that uses AI to detect the social scenario – through the beverages being consumed – and responds with appropriate music. This project intrigued me; I was mesmerized by how the technology can clearly differentiate between the “clanking” sounds of various beverages. I could see this experimental project being applied to enhance voice-recognition technologies such as the Amazon Echo and Siri; currently they can only do what they are told, but perhaps in the near future they will be able to read different situations through AI and act accordingly. The project brief indicates that the table contains a built-in microphone that catches characteristic sounds and compares them against a predetermined catalogue. This catalogue contains three classes – hot beverages, wine, and beer – with each class having its own associated playlist that one can navigate by knocking on the table. Other algorithmic components include machine learning via the Wekinator, Processing, and the OFX collection. The creators’ artistic sensibilities manifest not in a tangible or visual manner but rather musically.
“Our Time” is a piece commissioned by MONA (the Museum of Old and New Art) intended to take you on an ethereal sensory journey, warping the way you view and think of time, using sound, light, and motion to convey its passing. Twenty-one large pendulums swing in midair in different directions as lights brighten and dim at their own pace. Each pendulum arm carries a speaker that emits a barely audible echo, creating an eerily unfamiliar sensation that speaks to the passage of time. The pendulums swing without seeming to adhere to any laws of nature, yet somehow make the passage of time more palpable. Time exists at many frequencies in this room, and when you’re in there experiencing it, that becomes obvious.
The amount of effort put into immersion in this is remarkable, and the piece utilizes our most basic senses to warp our perceptions of human constructs.
Below is a video of the hauntingly beautiful piece.
LINES is an interactive instrument that is electronically programmed to trigger sounds. It uses the program Max/MSP to convert physical signals into multiple sound outputs that are then combined to compose a song. Specifically, when a signal is detected, the distance sensors first send the stimulus to the Arduino board, which passes it on to Max/MSP to produce the actual sounds.
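The actual synthesis lives in Max/MSP, but the kind of conversion happening downstream of the Arduino, turning a distance reading into a pitch, can be sketched in a few lines (the note range, sensor range, and function name are illustrative assumptions, not the artists’ patch):

```python
def distance_to_note(distance_cm, low=48, high=84, max_range=60.0):
    """Map a distance-sensor reading (cm) to a MIDI note number.
    Closer objects play higher notes; parameters are illustrative."""
    clamped = max(0.0, min(distance_cm, max_range))
    return low + round((high - low) * (1 - clamped / max_range))

# A hand close to the line plays high notes; far away plays low ones.
print(distance_to_note(5))    # near -> high pitch
print(distance_to_note(55))   # far  -> low pitch
```

Because the mapping is continuous, resting a marker anywhere along a line holds a note, which matches how placing the metal stimulator on the lines sustains chords without constant touching.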
I found this project especially fascinating because it allows a single participant to play LINES as if multiple people were cooperating. By simply placing the metal stimulator on the lines, the participant can easily make chords without having to touch the lines constantly. Even without prior knowledge of the instrument, one can play around and quickly learn to use it. The sounds that LINES produces are also unique and interesting.
Chelpa Ferro is a multimedia art group whose work primarily revolves around sound in one form or another. One of their installations, Acusma (2008), combines recorded sound with traditional Brazilian pottery spread throughout a space. Though I wasn’t able to find much information on the actual process behind the work, the sounds appear to have been originally recorded, then manipulated by an algorithm to play back with various modifications and repetitions. The result is a chorus of voice-like sounds, just removed enough not to register as simply a group of human voices.
I really appreciate the juxtaposition between this process and the other main part of the work, the traditionally made pottery. This combination of new and old, along with sounds coming from such unexpected places, seems typical of how the artists explore sound: making both the sources of the sounds and the sounds themselves unfamiliar to viewers and listeners.
Artist Simon Russell used Houdini to create generative art set to Beethoven’s ballet, combining visuals and audio in one project. Each visual takes the music’s pitch and speed into consideration; these characteristics determine the shapes and their amplitude. Rather than animating every little piece individually, the piece depends on generative factors.
From the clapping at the beginning, to the maestro, to the composition and organization of the generative art, Russell sets the scene of an orchestra. The piece seems to display the different instruments and components of the orchestra, and Russell explores the colors that might represent each one. Rather than depicting instruments or musical forms literally, he successfully uses abstract shapes that move with the music, drawing on sound waves as well as music notes to make everything look cohesive.
This Google Doodle, released in Japan, presents a grid of rhombuses the user can toggle on or off. A vertical bar travels across the columns, “playing” the “notes” of the rhombuses the user has selected. I’m fascinated by how such a simple interaction and interface transcends language (I have no idea what the context of this piece is) and is accessible to anyone. A casual user can press random keys and enjoy various tunes; someone with a trained ear can figure out the key the notes are in, work out which buttons correspond to which notes, and try to compose a piece. The geometric, abstract visuals are reminiscent of fireworks, not unlike the magical shows one might see at Disney!
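The interaction described above is a classic step sequencer, and its playback logic is simple enough to sketch directly: scan the toggle grid column by column, the way the doodle’s vertical bar does, and sound every switched-on cell in that column together (the four-note scale and grid below are my own toy example, not the doodle’s data):

```python
def play_grid(grid, scale):
    """Scan a toggle grid column by column, collecting the notes of
    every switched-on cell; each column's notes sound together."""
    columns = len(grid[0])
    timeline = []
    for col in range(columns):
        notes = [scale[row] for row in range(len(grid)) if grid[row][col]]
        timeline.append(notes)
    return timeline

# A 4-row toy grid; 1 = a clicked rhombus.
scale = ["C", "D", "E", "G"]
grid = [
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 1],
]
print(play_grid(grid, scale))   # [['C'], ['D'], ['E', 'G'], ['C', 'G']]
```

Part of why the interface is so forgiving is visible here: because every row is pre-assigned a note from one scale, any random combination of clicks still lands on pitches that sound good together.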