Google has created a virtual reality paintbrush called Tilt Brush, which can now react to sound. This means artists can create works that respond to different sounds, literally producing musical paintings, and can layer sound-reactive elements within a 360-degree VR experience. I find this project super interesting and inspiring because it is something I never imagined would exist. I was not able to find the algorithm that creates these works, but from a bit of research I would assume it involves code-driven visuals that adapt automatically to a live audio input.
The Wrong Way to Draw a Pirate’s Soundtrack is a graphical representation of the musical composition “He’s a Pirate” by artist Juanky Soriano. The piece is part of a larger series, “The Wrong Way to Draw the Music”, which explores his experimentation with a generative drawing system driven by musical performance. According to his website, his experiment converts MIDI files into MusicString files using a “midi2mstring” library he developed. Within the library, every token is tied to a musical event. He then crafted a sketch in Processing that assigns each token to a drawing event.
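The token-to-drawing mapping he describes can be sketched as a simple dispatch table. To be clear, the token shapes and handler names below are illustrative assumptions, not Soriano's actual midi2mstring vocabulary:

```python
# Hypothetical sketch of tying MusicString-style tokens to drawing events.
# Token spellings (e.g. "C5q", "Rq") and the handlers are assumptions,
# not the real midi2mstring format.

def draw_note(token):
    # A note token might place a stroke whose position comes from pitch
    # and whose length comes from duration.
    return f"stroke for note {token}"

def draw_rest(token):
    # A rest token might lift the pen, leaving a gap in the drawing.
    return f"gap for rest {token}"

# Dispatch table: each token class is tied to exactly one drawing event.
HANDLERS = {"note": draw_note, "rest": draw_rest}

def classify(token):
    # Assume rest tokens start with 'R'; everything else is a note.
    return "rest" if token.startswith("R") else "note"

def render(music_string):
    # Walk the token stream, firing the matching drawing event per token.
    return [HANDLERS[classify(tok)](tok) for tok in music_string.split()]

print(render("C5q Rq E5q"))
```

A real Processing sketch would draw to the canvas instead of returning strings, but the structure (one handler per musical event type) would be the same.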
I think the project could have been improved by allowing prior tokens to disperse after they are drawn; the layering on top of one another gave the piece a sloppy look. From my interpretation, Soriano is inspired by computational models based on living cells, which explains his frequent use of P systems.
Martin Klimas plays songs through a frequency generator with pigmented water on top, so that the sound waves cause the water to form standing waves. This way, we are able to visualize the tones themselves, three-dimensionally, as they appear in the water. I admire the fact that you can actually see the frequencies as beautiful, dynamic forms.
This project is by Swiss artist Zimoun, and is usually referred to by its materials: 121 prepared dc-motors, filler wire 1.0 mm. It’s admirable because even though it appears ludicrously simple at first glance, it springs to life when turned on and almost appears organic. It’s also a fantastic exploration of sound that is interesting but isn’t necessarily music. The algorithm that controls the spinning wires seems simple at first, but upon closer inspection it becomes clear that each wire spins from a different starting point in a random direction. Zimoun is known for sculptures so simple they are almost abstract, a nod to minimalism, and that aesthetic shows in this piece.
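That observation about random starting points and directions can be sketched in a few lines. This is only a guess at the control logic, assuming each motor independently receives a random spin direction and starting angle, which is enough to make identical wires fall out of sync and read as organic:

```python
import random

# Hedged sketch of Zimoun-style motor setup: each of the 121 motors gets
# an independent random spin direction and starting phase. These parameters
# are assumptions, not Zimoun's actual electronics.

def init_motors(n=121, seed=0):
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    return [{"direction": rng.choice([-1, 1]),  # clockwise or counter-clockwise
             "phase": rng.uniform(0, 360)}      # random starting angle (degrees)
            for _ in range(n)]

motors = init_motors()
print(len(motors))  # 121 independently randomized motors
```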
This project is a portable, self-powered MIDI controller that boots into a variety of apps to tie events to sounds. It indicates the sound and its waves through lights under the buttons, an idea inspired by both the monome and the Tenori-on. By using loops in the apps’ code, the inventor was able to visualize the sound played through MIDI and control the device’s output with simple movement variables. The idea inspires me because it visualizes the acoustic field, which many people overlook, and makes interaction with invisible sound waves possible through a cheaper, more accessible device.
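The light-under-button feedback might work something like the loop below. This is a speculative sketch: the 8x8 grid size, the base MIDI note, and the note-to-button layout are all assumptions in the spirit of the monome and Tenori-on, not details of this device:

```python
# Hypothetical sketch of monome/Tenori-on-style light feedback: a loop maps
# the set of currently held MIDI notes onto a grid of button LEDs.
# Grid size and note layout are assumptions, not the actual device's.

GRID = 8  # assume an 8x8 button grid

def led_grid(active_notes, base_note=36):
    # One LED per button; a button lights up if its note is sounding.
    leds = [[False] * GRID for _ in range(GRID)]
    for note in active_notes:
        idx = note - base_note
        if 0 <= idx < GRID * GRID:
            leds[idx // GRID][idx % GRID] = True
    return leds

grid = led_grid({36, 45})       # two held notes
print(grid[0][0], grid[1][1])   # their two buttons are lit
```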
Spicule is an album by Yaxu, also known as Alex McLean, that can be continually edited using a Raspberry Pi Zero and the TidalCycles live coding environment. This is not his first time working with sound and code: he has performed at festivals throughout Europe with the acts Slub and Canute, and has helped pioneer the live coding movement.
Yaxu’s sound is entirely electronic, a very popular genre among today’s youth. His sound and his method of creation align with each other and with the audience he targets at music festivals, another cultural craze among today’s youth.
I highly admire Yaxu for using code to produce his sound. It seems that these days anyone can create electronic music, which suggests the genre can lack a certain depth, meaning, and intention. By manually controlling every aspect of the final sound, however, Yaxu demonstrates how much work can go into producing this very popular sound.
The Global Synthesizer Project compiles environmental sounds from around the globe into an interactive, sound-based installation. The installation exhibits various modules that represent different parts of the globe. The user may interact with the piece through cables that let them compose a mini symphony.
Yuri Suzuki teamed up with Moog Music to pair samples with modular synthesis and to program a reverb algorithm that allows users to “time stretch”. This lets the user edit the start and stop times of a sample and gives some control over pitch. The reverb is also a convolution reverb, which takes responses recorded in real spaces and applies them to an incoming signal.
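Convolution reverb is a standard technique, so the core idea can be shown in a minimal sketch: the dry signal is convolved with an impulse response captured in a real room, imprinting the room's echoes onto the incoming sound. The toy numbers below stand in for real audio buffers; this is not Suzuki's actual implementation:

```python
# Minimal sketch of convolution reverb: convolve the dry signal with an
# impulse response (IR) recorded in a real space. Toy values, not audio.

def convolve(signal, impulse_response):
    # Full discrete convolution: output length is len(signal) + len(IR) - 1.
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

dry = [1.0, 0.0, 0.0, 0.0]  # a single click
ir = [1.0, 0.5, 0.25]       # a decaying "room" response
print(convolve(dry, ir))    # → [1.0, 0.5, 0.25, 0.0, 0.0, 0.0]
```

The click now tails off the way the room would make it: real systems do the same thing with FFT-based convolution for speed, but the math is identical.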
I admire Suzuki’s ability to invite interaction in a gallery setting, where it is uncommon and therefore hard to encourage. Suzuki has said that he had long wanted to use sounds from around the world in an installation. While this project accomplishes that, there are still boundaries to push, both in the level of interaction and in concept. I personally hope he continues to develop this work.
The work I looked at this week is called Six Drawings, a collaborative effort between Maotik (generative visuals artist), Diego Espinosa (performer/composer), and David Adamcyk (composer). I was especially fascinated by this work because it creates an environment where it feels like one is inside a balloon. It was visually pleasing and a totally immersive experience. The audiovisual nature of this work made it especially powerful, and I would have liked to experience it in person.
It was presented as an instrumental audiovisual performance in May 2014 at Société des Arts Technologiques in Montreal during the IX Symposium. The project works by having Diego Espinosa, the performer, control a custom-made rubber orb (which looks like a balloon) that acts as an all-powerful controller for the installation. The rubber orb was connected to microphones, sensors, and a computer running the Max multimedia software. The orb also controlled the lights and the aural vibrations coming from the speakers. Using a program called TouchDesigner, Maotik was able to turn the audio data into generated 3D visuals.
While I was able to gather information on the kinds of software that were used, there was little information available about what the actual creative process entailed.
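Lacking details of the actual patch, one can still sketch the general shape of an audio-to-visual mapping like the one TouchDesigner would perform: an analysis value (here, amplitude) drives a geometric parameter (here, a radius). The smoothing factor and mapping below are my assumptions, not Maotik's process:

```python
# Speculative sketch of audio-driven visuals: amplitude drives a sphere
# radius, smoothed so the visual doesn't jitter on every sample.
# Gain and smoothing values are illustrative assumptions.

def smooth_map(amplitudes, base_radius=1.0, gain=2.0, alpha=0.3):
    # Exponential smoothing: each frame moves partway toward the target.
    radius, out = base_radius, []
    for a in amplitudes:
        target = base_radius + gain * a          # louder audio -> larger form
        radius = (1 - alpha) * radius + alpha * target
        out.append(round(radius, 3))
    return out

# A burst of sound swells the form, which then relaxes back down.
print(smooth_map([0.0, 1.0, 1.0, 0.0]))
```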
This project causes speakers to play various melodic and percussive sounds when sensors in the ground detect shadows over them. The sounds vary depending on the length of the shadows covering the sensors. For example, morning would cast longer shadows, resulting in slower, more ethereal music, while midday, with its shorter shadows, would result in more percussive and dynamic sounds.
From a computational standpoint, it seems that the system is built to respond to the shadows themselves rather than the time of day. It is mentioned that each sensor unit has a custom PCB with a light sensor on top and an LED on the bottom for nighttime illumination. I believe the sounds could be organized in a library and chosen depending on the intensity of light reaching the sensors and how long the shadow lingers (assuming people keep their shadows in constant motion).
This assumption makes sense for a variety of reasons. Weather could affect the Sun’s visibility and the shadows being cast, which might break the immersion if the sounds were keyed to the time of day instead. And having the system respond to fluctuations in the intensity of light reaching the sensors means that people could manipulate their shadows to tailor the sounds to their liking.
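That guessed-at selection logic can be sketched directly. Everything here is an assumption consistent with the reasoning above, not the installation's actual code: the darkness threshold, the duration cutoff, and the sound names are all hypothetical:

```python
# Hypothetical sketch of the shadow-to-sound selection reasoned about above:
# a sensor reports a light intensity, and a sustained drop (a shadow) picks
# a sound based on how dark it is and how long it has lasted.
# Thresholds and sound names are illustrative assumptions.

def pick_sound(intensity, shadow_seconds, ambient=1.0):
    darkness = 1.0 - intensity / ambient  # 0 = full light, 1 = full shadow
    if darkness < 0.3:
        return None                       # no real shadow: stay silent
    if shadow_seconds > 5:
        return "slow ethereal pad"        # long, held shadows -> slower sounds
    return "percussive hit"               # brief shadows -> dynamic sounds

print(pick_sound(0.2, 8))   # dark, lingering shadow
print(pick_sound(0.5, 1))   # partial, brief shadow
```

Note how this design responds only to the light readings, so a cloudy day or a playful visitor changes the music naturally, which is exactly why the time-of-day approach seems less likely.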
As a circus performer, one of the many challenges of choreographing is finding a balance between the visual motions and the music. In SWEATSHOPPE’s video, instead of balancing music with dance, they are balancing music with creating art. A choreographer’s biggest job is making sure that the music meets the movements: a moment of silence in the music should be a moment of silence in the dance, usually following an explosive movement. SWEATSHOPPE has had to combine finding that balance with coding – something I admire incredibly.
I know that they have to create their own software to track the motion of the paintbrush to generate the images on the wall, but then they have to match up the music to the motion of the brush and the manifestation of the art on the wall. I suppose that part of this is human choice and part of it relies on adjusting the timing of the art’s generation to the moments in the music.