Angela Lee – Looking Outwards – 10

A video introducing the installation “Apparatum” by panGenerator.
A user situated within the installation space.

“Apparatum” is a sound installation whose audio is inspired by Bogusław Schaeffer and whose aesthetics are inspired by Oskar Hansen; the installation as a whole also draws on the heritage of the Polish Radio Experimental Studio. The project consists of analog sound generators controlled through a digitized sheet-music touch pad. I admire the speculative nature of the piece. Because it is not commercial music that has to appeal to a wide audience, it feels much more thoughtful and edgy, and I am drawn to the process of creating it. I think the artists’ sensibilities manifest in the visual design: with its minimal grayscale palette, limited use of texture, and consistent forms, the aesthetic complements the sound art without overpowering it. The textures of the sound are also quite interesting, challenging you to think of new ways to weave sounds and tones together.
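
To make the score-to-generator idea concrete, here is a minimal sketch, assuming the touch pad is read as a list of timed symbols; the generator names and the send_trigger() stub are hypothetical, not panGenerator’s actual control protocol.

import time

# (onset in seconds, generator id, duration in seconds) - a toy graphic score
score = [
    (0.0, "tape_loop_1", 2.0),
    (1.5, "optical_gen", 0.5),
    (3.0, "tape_loop_2", 4.0),
]

def send_trigger(generator, duration):
    # In the real installation this would command an electromechanical unit.
    print(f"trigger {generator} for {duration}s")

start = time.time()
for onset, generator, duration in sorted(score):
    # Wait until each symbol's position on the timeline comes due.
    time.sleep(max(0.0, onset - (time.time() - start)))
    send_trigger(generator, duration)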

Ankitha Vasudev – Looking Outwards – 10

Orchestrion is a computerized band that was programmed and created by Eric Singer in 2010. Singer is a Brooklyn-based musician and technologist who founded SingerBots and LEMUR – a group of artists and technologists who create robotic musical instruments. Orchestrion consists of a set of automated musical instruments that are mechanically equipped to perform a composition.

Lido Orchestrion, 2010

I find this project interesting because the instruments in the orchestrion can play anything that is composed for them. A musician composes a song in standard production software, but instead of playing the notes back through speakers, the program triggers the physical playing actions on the orchestrion. The video below shows the Lido Orchestrion, which was built for a nightclub in Paris and consists of 45 automated instruments.

Singer attended Carnegie Mellon as an undergraduate, and in 2009 he founded SingerBots, a company fully dedicated to building robotic musical instruments. Singer believes that musicality and infallibility are the two priorities for an orchestrion: it should sound good and should not make mistakes. I agree with his belief that robotic infallibility can create a lively performance, in contrast to those who believe otherwise.
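
The composition-to-machine idea described above can be sketched in a few lines, assuming a hypothetical note-to-actuator map and a strike() stub; Singer’s actual hardware interface is not public, so every name here is illustrative.

# Route composed note events to physical actuators instead of a software synth.
NOTE_TO_ACTUATOR = {
    36: "kick_beater",        # MIDI note 36 = bass drum
    38: "snare_arm",          # MIDI note 38 = snare
    60: "marimba_mallet_C4",  # middle C on a mallet instrument
}

def strike(actuator, velocity):
    # A real orchestrion would pulse a solenoid or motor here.
    print(f"fire {actuator} at velocity {velocity}")

# (time in seconds, MIDI note, velocity); scheduling is omitted for brevity.
composition = [(0.0, 36, 100), (0.5, 38, 90), (1.0, 60, 80)]
for t, note, vel in composition:
    if note in NOTE_TO_ACTUATOR:
        strike(NOTE_TO_ACTUATOR[note], vel)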

A video describing Singer’s Orchestrions

Kimberlyn Cho – Looking Outwards 10

“Rhapsody in Grey” by Brian Foo (2015)

Brian Foo draws on his fascination with brain activity in “Rhapsody in Grey” by translating brainwave data into music. He specifically studied the brainwaves of a female pediatric patient with epilepsy so that listeners can empathize with, or briefly experience, what may be going on during a seizure. I was intrigued by the distinctiveness of this project, as well as by his creative approach to a scientific topic like brainwaves. I admire how Foo uses his fluency in programming to portray a personal interest (brain activity) in his own creative way (music). I found this project inspiring for the endless possibilities it suggests in programming and computational art.

Foo uses different variables from EEG brainwave data to computationally generate the intensity and fluidity of the rhapsody. He used Python to extract an excerpt from the EEG data, from which he calculated the average amplitude, frequency, and synchrony. He then assigned instrumental and vocal samples according to those calculations. The sounds were synthesized into a rhapsody using ChucK, a music programming language. Lastly, he used Processing, a visual programming language, to generate the visual waves that play along with his music in the video above.
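
As a rough illustration of that feature extraction (not Foo’s actual code), here is a minimal Python sketch, assuming the excerpt arrives as a channels-by-samples NumPy array:

import numpy as np

def eeg_features(eeg, sample_rate):
    # Average amplitude: mean absolute signal level across all channels.
    amplitude = np.mean(np.abs(eeg))
    # Dominant frequency: peak of the channel-averaged magnitude spectrum.
    spectrum = np.abs(np.fft.rfft(eeg, axis=1)).mean(axis=0)
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / sample_rate)
    frequency = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    # Synchrony: average pairwise correlation between channels.
    corr = np.corrcoef(eeg)
    synchrony = corr[np.triu_indices_from(corr, k=1)].mean()
    return amplitude, frequency, synchrony

eeg = np.random.randn(8, 1024)  # stand-in for one excerpt of 8-channel EEG
print(eeg_features(eeg, sample_rate=256))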

Foo’s artistic sensibilities are visible in the final work through the sound samples he chose and the connections he made to the EEG data. For example, he raised the pitch of the string instruments during periods of higher frequency and made the samples louder at higher amplitudes. The connections he draws between his calculated features and the sound samples reflect his interest in the human subject, as well as his artistic priorities and decisions.
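
And here is a hedged sketch of that kind of feature-to-music mapping, with assumed value ranges, where higher amplitude raises the gain and higher frequency raises the pitch:

def map_to_music(amplitude, frequency, amp_range=(0.0, 3.0), freq_range=(1.0, 40.0)):
    # Normalize each feature to 0..1 within an assumed range.
    loud = min(max((amplitude - amp_range[0]) / (amp_range[1] - amp_range[0]), 0), 1)
    bright = min(max((frequency - freq_range[0]) / (freq_range[1] - freq_range[0]), 0), 1)
    gain_db = -24 + 24 * loud         # louder samples for higher amplitude
    pitch_shift = round(12 * bright)  # raise string pitch for higher frequency
    return gain_db, pitch_shift

print(map_to_music(amplitude=1.2, frequency=18.0))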


Taisei Manheim – Looking Outwards – 10


The project I chose is called Amper Score, by Amper, a company consisting of a team of engineers, musicians, and composers who have worked on many movies and television shows. The idea for the project arose from the frustration many people in film and other industries feel with the process of obtaining stock or precomposed music: finding the music, getting over the legal and financial hurdles, and the lack of customization. Amper’s solution is an AI composer that writes and composes unique, royalty-free music.

There are two workflows: one that allows you to work quickly and another that allows deep customization of the music. The simple workflow asks the user for the style of music, the mood, and the length of the song, and the AI generates a song in seconds; the advanced workflow also allows changes in tempo, instruments, and more. This lets businesses, films, and others create originally composed music that is unique to them. Amper’s AI is built from the ground up and uses a massive dataset created by an internal team of composer-developers. Rather than employing an approach that limits contextual awareness and Score’s ability to collaborate with its users, Amper’s data team describes every facet of music, including genre, mood, and instrumentation.

What I admire about it is that anybody could use it: the luxury of original music for any business or film suddenly becomes much more accessible to those who would not have had the money or resources before this tool.
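
The two workflows can be pictured as two parameter sets. The sketch below is purely hypothetical: Amper’s actual interface is not a public API, and every field and function name here is invented for illustration.

from dataclasses import dataclass, field

@dataclass
class SimpleRequest:  # quick workflow: style, mood, and length only
    style: str
    mood: str
    length_seconds: int

@dataclass
class AdvancedRequest(SimpleRequest):  # deep-customization workflow
    tempo_bpm: int = 120
    instruments: list = field(default_factory=list)

def compose(request):
    # Stand-in for the AI composer: returns a description of the track.
    return f"{request.length_seconds}s {request.mood} {request.style} track"

print(compose(SimpleRequest("cinematic", "uplifting", 90)))
print(compose(AdvancedRequest("cinematic", "tense", 60, tempo_bpm=140,
                              instruments=["strings", "percussion"])))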

Looking Outwards 10 Ellan Suder

Generative Music by batuhan

Software used: Processing connected to SuperCollider via osc_p5 library and the great p5_sc library, and I used the Atari2600 UGen by Fredrik Olofsson, all custom software.

“Generative Music – Cellular automata and blip blops” (2008) by batuhan is a “somewhat intelligent cellular automata system” developed with Atari 2600-style sonification. The music is generated by a cellular life system: the artist does not touch the system once it is started, and it eventually dies by itself, ending the music as well. The thing I really liked about the project was the idea of creating a random, computer-generated piece of music tied to the lifespan of a system. The song is the representation of a world from its beginning to its end.

The visuals and the audio (the harsh ‘blip blops’) seem to be inspired by old classic arcade style video games.
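
Here is a minimal sketch of the underlying idea, assuming a Conway-style life system: the grid evolves untouched, live cells are mapped to chip-tune-like pitches, and the piece ends when the population dies out. This illustrates the concept, not batuhan’s actual Processing/SuperCollider setup.

import numpy as np

rng = np.random.default_rng(1)
grid = (rng.random((16, 16)) < 0.3).astype(int)  # random initial world

def step(g):
    # Count the eight neighbors of every cell, with wraparound edges.
    n = sum(np.roll(np.roll(g, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    # Conway's rules: birth on 3 neighbors, survival on 2 or 3.
    return ((n == 3) | ((g == 1) & (n == 2))).astype(int)

generation = 0
while grid.any() and generation < 200:
    # Sonify: each live cell's column picks a pitch from a pentatonic scale.
    scale = [0, 2, 4, 7, 9]
    pitches = [48 + scale[x % 5] + 12 * (x // 5)
               for _, x in zip(*np.nonzero(grid))]
    print(f"gen {generation}: play MIDI notes {sorted(set(pitches))[:8]}")
    grid = step(grid)
    generation += 1
print("stopped at the generation cap" if grid.any()
      else "the world has died out; the piece is over")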

Lanna Lang – Looking Outwards – 10

Google Magenta // NSynth and NSynth Super // 2018

The goal Google Magenta had with NSynth and NSynth Super was to build a machine-learning tool that gives musicians new ways to express themselves. NSynth (Neural Synthesizer) is a new approach to audio synthesis that uses neural networks to model the sound of the actual instrument being played, not just the note being played. Magenta wanted the algorithm to be more accessible to musicians, so they created interfaces such as the Sound Maker and the Ableton Live plugin, and they encourage creative uses of the algorithm, from dubstep to scenic atmospherics. NSynth is Google’s neural network, while NSynth Super is the tool and musical instrument that brings NSynth to life.

What I love about this piece is the infinite possibilities it brings to artists and anyone, anywhere. In the video, they show how, using NSynth and NSynth Super, you can combine a flute and a snare to create a whole new instrument (i.e., a “Fnure”). NSynth Super isn’t just layering sounds on top of each other; instead, it synthesizes an entirely new sound based on the acoustics of the individual instruments. This technology isn’t making the work of a musician easier; it’s enhancing it, opening up more possibilities and artistic directions that can grow from it. Although NSynth Super isn’t available for purchase, Google has published instructions for building one from scratch with a Raspberry Pi, so artists can make their own and explore.
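
Conceptually, the difference can be sketched like this: instead of mixing waveforms, the sounds are encoded into embeddings, the embeddings are interpolated, and the result is decoded back into audio. The encode()/decode() functions below are hypothetical stand-ins for the trained NSynth networks.

import numpy as np

def encode(audio):
    # Stand-in: the real NSynth encoder maps audio to a temporal embedding.
    return np.tanh(np.convolve(audio, np.ones(64) / 64, mode="same"))

def decode(embedding):
    # Stand-in for the real WaveNet-style decoder.
    return embedding + 0.01 * np.random.randn(len(embedding))

flute, snare = np.random.randn(16000), np.random.randn(16000)  # placeholder audio
# Naive layering would be 0.5 * (flute + snare); here we mix in embedding space.
z = 0.5 * encode(flute) + 0.5 * encode(snare)
fnure = decode(z)  # a new instrument, not a superposition of the old ones
print(fnure.shape)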

The background behind creating NSynth and NSynth Super
An example of how someone can make music using NSynth Super

Fanjie Jin – Looking Outwards – 10

Artificial intelligence researchers have made huge gains in computational creativity, and a number of artists have employed computational algorithms to produce albums in multiple genres and to score films, games, and smartphone apps.

Bach-style Prelude 29, Experiments in Musical Intelligence

David Cope, a professor at the University of California, Santa Cruz, has been exploring the intersection of algorithms and creativity for decades. He specializes in what he terms algorithmic composition, which is essentially computer-authored music. He writes sets of instructions that enable computers to automatically generate complete orchestral compositions. His algorithms have produced classical music ranging from arrangements for single instruments all the way up to full orchestral works, and it is really hard to believe that the music was composed by a computer.

I really admire the project “Bach-style Prelude 29,” for which he let his program Emmy (Experiments in Musical Intelligence) study the compositional style of Bach. As you can hear, the melody the algorithm generates is an accurate representation of Bach’s style, and some parts of the AI-generated music contain unexpectedly beautiful melodies that are nonetheless entirely based on Bach’s compositional techniques. Perhaps the biggest advantage of algorithmic composition is, as Cope says, that “algorithms that produce creative work have a significant benefit, then, in terms of time, energy, and money, as they reduce the wasted effort on failed ideas.”
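
Cope’s Emmy works by recombinant analysis of a real corpus; as a far simpler taste of computer-authored composition in a given style, here is a first-order Markov-chain sketch that learns note transitions from a toy melody and samples a new one. This is an illustrative substitute, not Cope’s method.

import random
from collections import defaultdict

corpus = ["C", "E", "G", "E", "C", "D", "E", "F", "E", "D", "C"]  # toy melody

# Learn first-order transition counts: which note tends to follow which.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

random.seed(29)
note, melody = "C", ["C"]
for _ in range(15):
    note = random.choice(transitions[note])  # sample the next note
    melody.append(note)
print(" ".join(melody))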

Siwei Xie – Looking Outwards – 10

Microscale is a generative, web-based album. I admire it because, although its creator has written generative/algorithmic music before and almost all of his previous work contains procedurally generated material, microscale is his first fully generative album, one built from a “generative” idea from the start. The creator’s artistic sensibilities manifest in that the album was created not so much by thinking as by emotion, so it is not purely artificial intelligence or computer music.

The music on microscale is generated in real-time from random Wikipedia articles. Each article becomes a step sequencer, where the letters are the sequencer steps and the track titles are regular expressions that switch the steps of the sequencers on and off.
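
That mechanism is simple enough to sketch directly. Below, the article text is a stand-in rather than a live Wikipedia fetch, and a vowel-matching regex plays the role of a track title:

import re

article = "the microscale album turns encyclopedic text into rhythm"
title_regex = r"[aeiou]"  # a track title acting as the on/off pattern

# A step is "on" wherever the regex matches that letter.
steps = [bool(re.match(title_regex, ch)) for ch in article]

for beat, (ch, on) in enumerate(zip(article, steps)):
    if on:
        print(f"beat {beat:2d}: trigger sample for '{ch}'")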

The concept of the album is to show that by transforming one medium (text) into another (music), the meaning can be transformed: the article has its own meaning, but the music has a completely different one. And it is not just a one-to-one transformation; there are six articles (i.e., six meanings) which, although unrelated to each other, create a whole piece of music with one singular meaning.

Ales Tsurko, Microscale, 2017

Link to original source.

Emma NM-LO-10

Sonic Playground in Atlanta

Sonic Playground (2018) – Yuri Suzuki Design

Sonic Playground was an outdoor sound installation in Atlanta, Georgia featuring colorful sculptures that modify and transmit sound in an unusual but playful way. I admire how the installation engages the community in an art experience and gives people the opportunity to explore how sound is constructed, altered, and experienced. I like that it is for all people, regardless of age; anyone can enjoy it. The installation itself is not computational, but the team used Rhinoceros 3D to create a raytracing tool that lets the user choose certain aspects of the sound’s path. Users could “select a sound source and send sound in a certain direction or towards a certain geometry, in this case the shape of the acoustic mirrors or the bells at the start and end of the pipes to see how the sound is reflected and what is the interaction with the object.”
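
The Rhino tool itself is not public, but the core of any such acoustic raytracer is the specular reflection of a ray off a surface, r = d - 2(d·n)n. A minimal 2D sketch with an assumed mirror normal:

import numpy as np

def reflect(direction, normal):
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    return d - 2 * np.dot(d, n) * n  # mirror the ray about the surface

incoming = np.array([1.0, -1.0])      # sound ray heading down and to the right
mirror_normal = np.array([0.0, 1.0])  # a horizontal acoustic mirror
print(reflect(incoming, mirror_normal))  # -> [0.707..., 0.707...]: bounced upward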

The artist’s creativity comes out in the paths and shapes he chose for the final sculptures, which in turn shape the sound that comes out. He decided which sounds were most interesting and what path the sound should take to produce them.

Sonic Playground Installation
Raytracing using Rhinoceros 3D

Ammar Hassonjee – Looking Outwards 10

An image showing how Tune Table works.

The project related to computer music I chose to focus on is called Tune Table, produced by researchers Anna Xambo and Brigid Drozda. Tune Table is a tabletop, game-like interface meant to teach users computer-science topics by letting them program their own musical compositions. Using blocks of code that embody computer-science elements like loops, users combine the blocks to make unique songs; when each block is placed on the table, cameras under the table read the imprint on its underside and output auditory and visual feedback. I like this project’s goal of using music to teach computer science, because it is a fun way to learn something very mathematics-based, and I think the creators’ original goal of finding a link between computer science and musical output was achieved. The link to the paper describing the work can be found here.
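
As a hedged sketch of the programming model described above (the block vocabulary is invented for illustration, not the actual Tune Table block set), a loop block can simply repeat the blocks it encloses:

def expand(blocks):
    # Flatten a block program into the note sequence it plays.
    notes = []
    for block in blocks:
        if block[0] == "note":    # ("note", pitch)
            notes.append(block[1])
        elif block[0] == "loop":  # ("loop", times, [inner blocks])
            notes.extend(expand(block[2]) * block[1])
    return notes

program = [("note", "C4"),
           ("loop", 3, [("note", "E4"), ("note", "G4")]),
           ("note", "C5")]
print(expand(program))  # C4, then E4 G4 repeated 3 times, then C5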

Video showing how Tune Table works.