NSynth Super is part of an ongoing Google project called Magenta, which explores how machine learning can become a new tool for musicians. The NSynth, or neural synthesizer, uses machine learning algorithms to learn the characteristics of existing sounds and then generate new sounds based on those inputs. The results are completely original sounds that can blend the characteristics of more than one instrument. The NSynth not only generates these unique sounds but also gives artists control over their dynamics through a custom interface and well-designed hardware. All of the code for the project is open source; like all other Magenta projects, it is meant to be freely accessible to anyone.
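To make the idea concrete, here is a minimal sketch of the kind of latent-space blending the NSynth is described as doing. This is not Magenta's actual API: `encode` and `decode` are hypothetical stand-ins for a trained neural model, and the linear mix is an assumption about how two instruments could be combined.

```python
import numpy as np

def interpolate_sounds(encode, decode, sound_a, sound_b, mix=0.5):
    """Blend two sounds in latent space; mix=0.0 is pure A, mix=1.0 is pure B."""
    z_a = encode(sound_a)                    # latent embedding of sound A
    z_b = encode(sound_b)                    # latent embedding of sound B
    z_mix = (1.0 - mix) * z_a + mix * z_b    # linear interpolation between them
    return decode(z_mix)                     # a new sound that is neither A nor B
```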
What I found most interesting about this project was its potential for generating brand-new sounds unrestricted by existing tools or instruments. In addition, the ease of use and accessibility of both its hardware and software make it a project that anyone who is interested could try out.
When I read the prompt for this week’s Looking Outwards, I immediately thought of video game music. I’ve always been a fan of video games—especially Nintendo franchises such as Pokémon, Mario, and more—and their soundtracks have long been considered the gold standard of technical music. One of the most prominent composers in video game history has been Junichi Masuda, the mastermind behind most of the soundtracks in the Pokémon series. His works have ranged from techno-like in nature to beautifully symphonic in his newer games. But the commonality among all of his works is that each was computationally created.
I first listened to some of Masuda’s soundtracks from his earlier games, like Pokémon Red and Blue (1998). I loved the techno-funk feeling conveyed by the music, and after reading more about Masuda’s process, I learned that this was partly a byproduct of the technical limitations of that era, but also due to Masuda’s self-proclaimed affinity for techno music at the time. Pokémon Red and Blue were developed on UNIX workstations (Sun SPARCstation 1s) whose programming files were susceptible to crashing. These constraints likely limited the quality of the sound files and sound effects.
Next, for the sake of comparison, I listened to music from Pokémon Black and White, games from 2011. I was blown away by the difference: the soundtracks from the newer games were not only crisper, smoother, and more cleanly rendered, but they legitimately sounded like orchestral movements. It was incredible to me how much Masuda’s work had evolved, and after reading more about his inspirations, I learned that he is a big fan of the composers Igor Stravinsky and Dmitri Shostakovich. This was evident in the elegance of his compositions, and it blew my mind to learn that he programmed these tunes just as he did the techno-style music of 1998. It’s a testament to Masuda’s talent and his understanding of the interplay between technology, computation, and music.
The Computer Orchestra is an interactive installation consisting of multiple computers. It was created by Laura Perrenoud, Simon De Diesbach, and Jonas LaCôte in 2013, and its setup closely resembles that of a classical orchestra. The orchestra allows the user to conduct with the movements of their hands. These movements are recognized by a Kinect motion controller connected to a central computer, which then gives instructions to a multitude of screens. The screens send back both sounds and visual representations of those sounds. Entire music sets have since been created with the Computer Orchestra.
I admire how this project retains many of the qualities of a classical orchestra: the “conductor’s” movements are like those of a real conductor, and the arrangement of the screens resembles that of a real orchestra. There is not much information about the algorithms that generate the work, but the software used includes SimpleOpenNI and Ableton Live.
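Based only on the description above, here is an illustrative sketch of the conductor-to-screens architecture (not fragment.in’s actual code): a central computer broadcasts the conductor’s hand height, read from a motion sensor, to each screen, which could map it to volume. The screen addresses and message format are invented for the example.

```python
import json
import socket

SCREENS = [("192.168.1.10", 9000), ("192.168.1.11", 9000)]  # hypothetical screen addresses

def broadcast_gesture(hand_y_normalized):
    """Send a volume instruction derived from hand height (0.0 = low, 1.0 = high)."""
    message = json.dumps({"volume": hand_y_normalized}).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for address in SCREENS:
        sock.sendto(message, address)   # each screen interprets the instruction
    sock.close()
```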
The Computer Orchestra is an interactive orchestra consisting of multiple computers. It was created by the studio fragment.in with the goal of letting users conduct their own orchestra and music. The conductor’s hand movements are recognized by an Xbox Kinect motion controller connected to a central computer, which sends instructions to a multitude of musician screens. The screen-musicians then play sound back to the conductor and produce visual feedback.
What I love most about the Computer Orchestra is that it crowdsources its sounds: people can upload samples, which the musician can then access and play. It’s incredible that one person can control the music through simple hand motions and gestures. The simple interface of the central computer also makes it extremely easy for the conductor to place vocals, violins, and other parts wherever they want.
To learn more about the Computer Orchestra, click the link below:
For this Looking Outwards, I wanted to focus on a new computational instrument. The Midi Fighter 64 is a ‘finger drum’ instrument: the user programs a sound into each button and plays the instrument by pressing them. The boards come in a range of sizes, from 4 x 4 (16 buttons) to 8 x 8 (64 buttons). Artists who use these instruments are called controllerism artists because the boards closely resemble video game controllers (though Midi Fighters are only used for music). Another notable similarity between these instruments and video games is that the buttons on the Midi Fighter are the same as retro Japanese arcade buttons.
The Midi Fighter’s sounds are programmed into the board using Ableton Live, a DAW (Digital Audio Workstation). The Midi Fighter was originally created by Ean Golden, who has been interested in controllerism music since the early 2000s and published an article on the topic in 2007 called ‘Music Maneuvers’: https://archive.moldover.com/press/Moldover_Remix_Oct-2007_w.jpg . The instrument has since been popularized by artists such as Shawn Wasabi, a DJ who has pushed it to its limits and played a role in developing it into a marketable product.
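As a rough sketch of how a button grid like this maps presses to sounds, the snippet below listens for MIDI note messages and converts them to button indices. It assumes the `mido` MIDI library and a hypothetical `play_sample(index)` function; the note offset is a guess, not the device’s documented mapping.

```python
import mido

NOTE_OFFSET = 36  # hypothetical MIDI note number of the lowest button

def run(play_sample):
    with mido.open_input() as port:      # first available MIDI input device
        for msg in port:
            if msg.type == "note_on" and msg.velocity > 0:
                button = msg.note - NOTE_OFFSET   # 0..63 on an 8 x 8 grid
                play_sample(button)               # trigger that button's programmed sound
```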
Kraftwerk, an electronic band, performed “The Robots,” an electronic music performance, in 2009. Kraftwerk was founded by classically trained musicians who wanted to mix sound, feedback, and rhythm to create music.
The video depicts electronic music with robots on stage moving in set patterns to the music. I admire that it has a “concert” feel despite not having a singer: the performance includes music, lights, a stage, and people. I do wish, though, that the robots moved to the beat of the music, or at least at a faster pace; their slow movements don’t match the upbeat, fast-paced music.
I don’t know anything about the algorithms used to generate the work, and I don’t want to suppose anything, because it would be wrong to generalize and guess with no knowledge.
Laetitia Sonami is a sound artist and performer based in San Francisco. The work I will be discussing is called “Lady’s Glove,” an instrument that makes and manipulates sound in live performance. The sensors within the glove measure motion, speed, and proximity, sending the data to Sonami’s computer, which turns it into music. The glove will never make the same sound twice unless a motion is replicated exactly, meaning even Sonami might not know what the music will sound like until it actually happens. In that sense, I admire her artistic sensibility and her knowledge of how to make the sound pleasing, especially in live performance settings. I am unfortunately unsure what algorithm she uses to turn her motion into music. But I admire this project because it questions the definition of music and takes the concept of computational music to another level.
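Since Sonami’s actual software is not documented here, the following is pure speculation about the kind of sensor-to-sound mapping such a glove might use: each sensor stream nudges a different synthesis parameter, which is why identical sounds would require identical motions.

```python
def glove_to_synth_params(motion, speed, proximity):
    """Map normalized glove sensor readings (0.0-1.0) to synthesis parameters."""
    return {
        "pitch_hz": 110.0 + motion * 880.0,              # hand motion bends pitch
        "amplitude": speed,                               # faster gestures play louder
        "filter_cutoff_hz": 200.0 + proximity * 5000.0,  # closeness opens the filter
    }
```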
A computational music project that I found inspiring was the “Weather Thingy” by Adrien Kaeser. The project is composed of two main parts: a weather station and a controller. Its basic mechanics are that the station gauges wind and rain levels with its sensors; the controller then interprets that weather data and translates it into audio effects applied to built-in instruments. The controller also has screens on which the artist can amplify or constrain the sounds.
This project was inspiring in that it used data from nature to create music. Ironically, Kaeser uses computer software to interpret weather such as rain, wind, and thunder. The project is incredible in that it gives musical artists various novel sound effects to work with. Kaeser also gave the machine the ability to save certain settings to inspire musicians later.
The “Weather Thingy” is built with C++ on Arduino hardware and communicates with instruments over the MIDI protocol.
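Here is a hedged sketch of the core idea: sensor readings become MIDI control-change messages that an instrument can interpret as effect levels. The sensor ranges, controller numbers, and use of the `mido` library are my assumptions, not details from the actual firmware (which is C++ on Arduino).

```python
import mido

WIND_CC, RAIN_CC = 20, 21   # hypothetical MIDI controller numbers

def send_weather(port, wind_kmh, rain_mm_per_h):
    """Scale weather readings into 0-127 MIDI control-change values."""
    wind = min(127, int(wind_kmh / 100.0 * 127))        # assume 100 km/h maxes the effect
    rain = min(127, int(rain_mm_per_h / 50.0 * 127))    # assume 50 mm/h maxes the effect
    port.send(mido.Message("control_change", control=WIND_CC, value=wind))
    port.send(mido.Message("control_change", control=RAIN_CC, value=rain))
```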
Sonic Arcade: Shaping Space with Design is a multi-component exhibition featuring interactive installations that experiment with computational sound design. The exhibition showcases several solo and collaborative works that, in one way or another, help the audience feel more integrated into the spatial environment. The works utilize electronic circuits, signals, radio waves, and resonant bodies to create these immersive experiences.
Though the pieces are drastically different from one another, each utilizes sound as its primary medium. In the exhibition above, Studio PSK uses body-activated MIDI sensors to detect when sounds should be triggered or altered. With these sensors installed throughout the structures, the entire exhibition becomes a musical instrument itself, ultimately allowing viewers to both watch and participate in the art.
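As an illustrative guess at the trigger logic behind body-activated sensors like these (Studio PSK’s actual implementation isn’t documented here): fire a note when a sensor reading crosses a threshold, then re-arm once the reading falls back, so sustained contact doesn’t retrigger. The threshold and note values are invented for the example.

```python
THRESHOLD = 0.6   # hypothetical normalized trigger level

def make_trigger(send_note, note=60):
    armed = True
    def on_reading(value):          # value: normalized sensor reading, 0.0-1.0
        nonlocal armed
        if armed and value > THRESHOLD:
            send_note(note)         # body contact triggers the sound
            armed = False
        elif value < THRESHOLD * 0.5:
            armed = True            # hysteresis: re-arm well below the threshold
    return on_reading
```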
Ge Wang is a professor at Stanford’s Center for Computer Research in Music and Acoustics. His research focuses on interactive design in music in combination with programming languages, and he co-founded Smule and created Magic Piano for the iPhone. I was interested in this piece, Twilight, because it is both a musical performance and an art performance, and it integrates Ge Wang’s focus, interactive design in music. Using their laptops, the orchestra translates the performers’ body movements into sound and pitch, which creates a visual performance as well. Interestingly, the way they use the laptops seems biased toward gradual changes in sound and a slow build-up of music. The algorithm that creates the music and selects the pitch appears to rely on the length of the string attached to each performer’s wrist.
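The sketch below is speculation about that string-length-to-pitch idea, not Ge Wang’s actual Twilight code: longer string pulls map to lower pitches, quantized to a pentatonic scale so gradual motion yields gradual, consonant changes. The range constants and scale choice are assumptions.

```python
PENTATONIC = [0, 2, 4, 7, 9]   # scale degrees, in semitones above the octave root

def string_length_to_midi_note(length, max_length=2.0, low_note=36, span=36):
    """Map a string length in meters to a quantized MIDI note number."""
    t = 1.0 - min(length, max_length) / max_length   # longer string -> lower note
    semitone = int(t * span)                         # raw position within the range
    octave, degree = divmod(semitone, 12)
    nearest = min(PENTATONIC, key=lambda d: abs(d - degree))  # snap to the scale
    return low_note + octave * 12 + nearest
```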