LO-10

For this week’s LO, I decided to look into Imogen Heap’s “MiMu Gloves,” an idea she originally conceived in 2010 but was only recently able to fabricate and prototype for public consumption. Heap, originally only a musician, wanted to create the product to bring electronic music to another level, allowing artists to step away from their immobile instruments and connect with the audience. The gloves themselves contain an array of flex sensors, buttons, and vibration motors that come together to simulate the feeling and sounds of real instruments. The gloves communicate over Wi-Fi with dedicated software whose algorithm reads the wearer’s movements and translates them into sounds, depending on what the user records as inputs.
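The exact mapping the MiMu software uses isn’t public in this post, but the basic idea of turning a sensor reading into a note can be sketched very simply. Everything here is invented for illustration: the function name, the 0.0–1.0 sensor range, and the choice of a C-major scale.

```python
# Hypothetical sketch of a gesture-to-note mapping in the spirit of the
# MiMu gloves; sensor range and scale are illustrative assumptions only.

def flex_to_note(flex, scale=(60, 62, 64, 65, 67, 69, 71, 72)):
    """Map a normalized flex-sensor reading (0.0 = open hand,
    1.0 = closed fist) onto a MIDI pitch from a C-major scale."""
    flex = min(max(flex, 0.0), 1.0)          # clamp noisy readings
    index = round(flex * (len(scale) - 1))   # pick a scale degree
    return scale[index]

print(flex_to_note(0.0))  # 60 (middle C, open hand)
print(flex_to_note(1.0))  # 72 (C an octave up, closed fist)
```

The real system layers many such mappings (per finger, per gesture, per recorded input set), but each one is ultimately a sensor-value-to-sound translation like this.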

It was interesting to see that technology has come far enough to allow a musician to play instruments without having a physical instrument in front of them. I admire Heap’s ability to bring her dream, first imagined in 2010, to reality despite the barriers that existed, and the flexibility the gloves can offer even to people whose physical disabilities prevent them from playing traditional instruments. This is demonstrated by Kris Halpin, whose life as a musician has been dramatically changed by the MiMu gloves.

LO-10: Computational Music

Author: Christine McLeavey Payne (with a larger team behind the finished product)
Date: April 25, 2019
Project: MuseNet

MuseNet is a neural network that can generate music of its own. It creates short compositions in the style of different artists and composers, with instruments of the user’s choice. This is a very creative idea because it allows people to compose music in the style of someone like Beethoven, but with guitar instead. I admire the creativity of the project, and also its combination of so many different styles and types of music. Much of what MuseNet produces consists of compositions that would never be possible without the computational work done by the team. The compositions come from a neural network, essentially a model that learns from the inputs it is given. In doing so, the computer learned patterns directly from music, rather than people having to analyze every type of music or song individually. The team’s artistic sensibilities show in the creative aspect of MuseNet: clearly, they cared about music and wanted to create a unique twist, so they used their computer science knowledge to do so.
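MuseNet itself is a large transformer network, which is far beyond a blog sketch, but the core idea of “learning patterns from music instead of analyzing it by hand” can be shown with a toy stand-in: a first-order Markov model that counts which note tends to follow which, then continues a melody from those counts. This is purely illustrative and is not MuseNet’s actual method.

```python
# Toy stand-in for learning musical patterns from data (NOT MuseNet's
# actual transformer): a first-order Markov chain over MIDI notes.

from collections import Counter, defaultdict

def learn_transitions(melody):
    """Count which note follows which in the training melody."""
    table = defaultdict(Counter)
    for a, b in zip(melody, melody[1:]):
        table[a][b] += 1
    return table

def continue_melody(table, start, length):
    """Generate notes by repeatedly taking the most frequent successor."""
    notes = [start]
    for _ in range(length - 1):
        successors = table.get(notes[-1])
        if not successors:
            break
        notes.append(successors.most_common(1)[0][0])
    return notes

ode = [64, 64, 65, 67, 67, 65, 64, 62, 60, 60, 62, 64]  # "Ode to Joy" opening
model = learn_transitions(ode)
print(continue_melody(model, 65, 4))  # [65, 67, 67, 67]
```

The same principle, scaled up from note pairs to long-range context across hundreds of thousands of pieces, is what lets a model blend, say, Beethoven’s phrasing with a guitar timbre.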

Link to Blog: https://openai.com/blog/musenet/

Link to Example: https://soundcloud.com/openai_audio/pop-nocturne

LO-10: Computer Music

The project I chose is The Welcome Chorus by Yuri Suzuki, an interactive installation in the county of Kent, commissioned by Turner Contemporary, consisting of twelve horns that continuously sing AI-generated lyrics. What I admire most about this project, as with most of Suzuki’s other works, is its ability to blend sound and technology to produce music in unconventional ways that are easily accessible to anyone and everyone. The lyrics, reflecting people’s experiences of living in Kent and of Kent as a whole, were gathered from workshops and gatherings and put into a data bank, from which the AI algorithms learned in order to produce the lyrics that are sung. Another AI system was integrated to produce folk-song melodies, so the installation can produce songs with both lyrics and melodies. Suzuki’s artistic sensibilities are manifested in the final form: as in his other works, he turns a complex project into an interactive sculpture that any visitor can contribute to (through an ongoing machine-learning “conductor”) and learn from, without the extra burden of trying to understand complex ideas.

Yuri Suzuki’s The Welcome Chorus (2020)

LookingOutwards-10

There isn’t a specific computational music piece behind this launchpad video, but I chose it because it shows a method that was new to me for creating music with technology. People can compose and edit music using a launchpad whose pads each have their own volume and pitch. Depending on how the user sets up the modes, a pad can be used for the bass or the main melody. Even though this video does not create original music, performers can edit and build a chorus corresponding to the music. I also find it fascinating how a piece of music can be mapped onto those pads and then be edited or recreated on the spot by the performer.

Looking Outwards 10: Computer music

Computer-generated music is neither revolutionary nor particularly well known. Music is generally regarded as an art form, something people make to express themselves – something unique to human nature.

However, computer-generated music, and more specifically procedural music, is widely used in today’s world, mostly to improve adaptive or dynamic soundtracks in video games. In many games released in the past two years, procedural music technology has been a staple of the triple-A industry. No one notices the procedural music because it does its work so seamlessly and subtly: when a peaceful, ambient background soundtrack switches to an intense, high-tempo, adrenaline-inducing beat, the software automatically generates extra notes, extra percussion beats, and even background vocals, making the transition between the two tracks so smooth that almost no one hears the seam.

The technology and intelligence required for a computer to interpolate one piece of music into another on the fly, depending on the events taking place on screen, is actually quite impressive. Machine learning is used to train the AI by having it analyze many different soundtracks. The result is an AI that can compose only a very limited amount of music, but just enough to make the transition between one soundtrack and another sound nonexistent.
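One common, simple form of adaptive music is vertical layering: the same piece is authored as stems, and a single “intensity” parameter crossfades them so the track morphs rather than cuts. The stem names and fade curves below are invented for illustration, not taken from any particular game engine.

```python
# A minimal sketch of vertically layered adaptive music: one game
# "intensity" value fades stem volumes so tracks morph, never cut.
# Stem names and thresholds are illustrative assumptions.

def layer_gains(intensity):
    """Return per-stem gains (0.0-1.0) for a game intensity in 0.0-1.0.

    Ambient pads dominate when calm; percussion and lead fade in
    as the action ramps up, so there is never a hard transition.
    """
    intensity = min(max(intensity, 0.0), 1.0)
    return {
        "ambient_pad": 1.0 - 0.7 * intensity,       # never fully silent
        "percussion":  min(1.0, 2.0 * intensity),   # in by half intensity
        "lead_synth":  max(0.0, 2.0 * intensity - 1.0),  # last to enter
    }

print(layer_gains(0.0))  # {'ambient_pad': 1.0, 'percussion': 0.0, 'lead_synth': 0.0}
```

Real middleware adds tempo-synced transition points and generated fill notes on top of this, but the continuous gain curves are what make the switch feel invisible.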

https://en.wikipedia.org/wiki/Adaptive_music

LO-10

For this week’s Looking Outwards, I took a look at KNBC by Casey Reas. It takes news broadcast footage and translates it into a collage of pixels, which is then projected onto a wall. I found it a really interesting and meta look at the way we consume information so quickly and abundantly in our day-to-day lives. The work was done in Processing, of which Casey Reas is a co-founder.

Interestingly, there is still a clear narrative being presented in the work: even when the information has been distorted beyond what is cognitively recognisable, we can still see one story begin, end, and transition into another altogether.

Overall, I really enjoyed the visual aesthetics of the piece, and how sound plays a large role both in its presentation and in how we come to interpret and understand information, which is a core part of the artwork’s intent.

LO 10

I did some research into the Canadian Electronic Ensemble’s newest release, “1,” from February 15th. The ensemble is based out of Canada (of course), where they compose and perform using computer and modular synthesis instruments to create unique sounds and produce long concerts and improvisations. They reassemble their instruments during the performance and also use Ableton to create new effects, rebuilding their instruments to make new sounds mid-performance. Their performances are based less in traditional computer code and more in Max, through the Max for Live plugin, which lets users create their own effects inside the DAW Ableton Live. Their performances are usually long and complex, without a central theme, and feature solo sections for each member of the ensemble as they join in and modify their elaborate analog and digital synths. They also play without a keyboard or traditional control of their instruments, relying mostly on the step-based sequencing and MIDI control built into their software to assemble songs.
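Step-based sequencing of the kind mentioned above is easy to sketch: each track is an on/off pattern that a clock scans, firing a note wherever the pattern has a 1. The note numbers and patterns below are made up for illustration; the ensemble’s actual Max for Live patches are of course far more elaborate.

```python
# A toy step sequencer in the spirit of the ensemble's step-based
# workflow: each track is a repeating on/off pattern scanned by a
# clock. MIDI note numbers and patterns are illustrative.

def sequence_events(patterns, steps=16):
    """Yield (step, note) pairs for every active step in every track."""
    for step in range(steps):
        for note, pattern in patterns.items():
            if pattern[step % len(pattern)]:
                yield step, note

kick = [1, 0, 0, 0] * 4   # four-on-the-floor kick (MIDI note 36)
hat  = [0, 0, 1, 0] * 4   # off-beat hi-hat (MIDI note 42)
events = list(sequence_events({36: kick, 42: hat}))
print(events[:3])  # [(0, 36), (2, 42), (4, 36)]
```

In a live set, swapping a pattern list mid-performance is the software equivalent of “rebuilding the instrument” while it keeps playing.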

Looking Outward-10

https://reas.com/knbc/

For this LO, I chose to look at the artist Casey Reas. Casey is a co-founder of Processing, the grandfather of p5.js. I was really interested in seeing how the person who created Processing makes art using a tool he imagined and built himself. The sound-art piece I looked at is called KNBC. It was created in December 2015 using footage from that year’s news broadcasts. This footage was manipulated in Processing to create a collage of pixels. What I love about this piece is how a video broadcast that was meant to tell a specific story was broken down by Casey into these collages and then combined with music to tell a whole new story. I also love that even though there is no clear picture and there are no words, I still feel like I can understand the story that each scene is telling. Throughout the video of the piece, the transitions in music and color show you different parts of the story, and you can tell when one scene has ended and another has begun.
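Reas hasn’t published KNBC’s exact pipeline here, but one classic Processing-style step for turning footage into a pixel collage is block averaging: collapsing each frame into coarse tiles of averaged color. The grayscale grid and block size below are a guess at the flavor of the technique, not the artwork’s actual code.

```python
# Illustrative guess at one pixel-collage step (NOT KNBC's actual code):
# average a 2-D grayscale frame into coarse block x block tiles.

def pixelate(frame, block):
    """Average a 2-D grid of grayscale values into block x block tiles."""
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(0, h, block):
        row = []
        for x in range(0, w, block):
            tile = [frame[j][i]
                    for j in range(y, min(y + block, h))
                    for i in range(x, min(x + block, w))]
            row.append(sum(tile) // len(tile))  # integer mean brightness
        out.append(row)
    return out

frame = [[0, 0, 100, 100],
         [0, 0, 100, 100],
         [50, 50, 200, 200],
         [50, 50, 200, 200]]
print(pixelate(frame, 2))  # [[0, 100], [50, 200]]
```

Applied per channel to every video frame, this kind of reduction is what strips away the legible picture while leaving the broadcast’s color and motion intact.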

shot of the piece during an installation
another shot of the piece during an installation

Looking Outwards 10: Computer Music

House Party was created by Neil Mendoza as a personal project. It is a musical installation with all of its materials, from furniture to computers, scavenged from the trash; even the Arduino Zero that controls the installation’s actuators was found in the trash. Each screen is connected to a computer running software written in openFrameworks, and the MIDI composition data was sent to the Arduino and to an openFrameworks control program. The control program then forwarded the data to the other computers over Ethernet as OSC; in this way it read the data and triggered both the screens and the Arduino. When I first saw this project, I was surprised that it didn’t use any green-screen effects and that all the materials were physically present and working. It was even more shocking that all of them were previously trash. I admire that Neil used his unique artistic sense to create a musical installation that performs in its “natural” habitat.
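The “over Ethernet as OSC” step is a standard trick: OSC messages are just small binary packets (a 4-byte-padded address string, a type-tag string, then big-endian arguments) sent over UDP. Here is a minimal encoder for integer-only messages; the address “/screen/trigger” and the note/velocity pair are invented for illustration, not taken from Mendoza’s project.

```python
# Minimal OSC message encoder (integer arguments only), sketching how
# MIDI-style triggers travel between machines as OSC over Ethernet.
# The address and argument meanings are illustrative assumptions.

import struct

def osc_pad(data):
    """Null-terminate a byte string and pad it to a multiple of 4 bytes."""
    data += b"\x00"
    return data + b"\x00" * (-len(data) % 4)

def osc_message(address, *ints):
    """Encode an OSC message whose arguments are all int32s."""
    msg = osc_pad(address.encode())                    # address pattern
    msg += osc_pad(("," + "i" * len(ints)).encode())   # type tag string
    for value in ints:
        msg += struct.pack(">i", value)                # big-endian int32
    return msg

packet = osc_message("/screen/trigger", 60, 127)  # hypothetical note, velocity
print(len(packet))  # 28 bytes, ready for socket.sendto(...) over UDP
```

A control program only needs a UDP socket and packets like this to trigger every screen and the Arduino from one composition timeline.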

LO-10-Computer Music

Laurie Spiegel, Unseen Worlds, 1991. 

Laurie Spiegel with her analog synthesizer.

Laurie Spiegel is one of the pioneers of electronic music, and one of her compositions was even sent into space on the Voyager spacecraft, for any aliens who might come in contact with it. Spiegel also created her own algorithmic composition software, called Music Mouse. Her album Unseen Worlds uses computational electronic music to experiment with and convey unfamiliar yet comfortingly ambient music. Even the album’s cover art seems to reflect this blend of computational generation and abstracted geometry, evoking the mystery and unfamiliarity of the music itself.

Her program Music Mouse, created on a Macintosh 512K, generated tones based on the movement of the mouse. Spiegel’s album is inspiring in its pursuit of electronic music and artificial sounds to create foreign ambiences that still sound futuristic and distant, even thirty years after they were made. Spiegel created the album’s other-worldly compositions with Music Mouse, which offered an accessible way to compose electronic music quickly. Perhaps most importantly, Spiegel’s early explorations of synths and electronic music were inspiring for women and for people who wanted to make music without the harsh borders of classical training. In an interview, Spiegel says, “electronic instruments were a great democratizing force.” It becomes clear how her albums and her work with electronic music expressed the transition to a more accessible musical form that invites the unexpected and unusual.
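Music Mouse’s central idea was that the mouse’s two axes each drive a voice while the software keeps everything inside a harmony, so any gesture sounds “right.” The sketch below is a rough, invented approximation of that idea: coordinates are snapped to a C-major scale, with the exact scale, note ranges, and function names being my own assumptions rather than Spiegel’s code.

```python
# Rough sketch of Music Mouse's core idea (not Spiegel's actual code):
# each mouse axis drives a voice, and both are snapped to one scale
# so any gesture stays harmonically "inside". Ranges are assumptions.

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets within one octave

def snap_to_scale(position, low=48, span=24):
    """Map a 0.0-1.0 mouse coordinate to the nearest scale tone
    in a span-semitone range starting at MIDI note `low`."""
    semitone = low + round(min(max(position, 0.0), 1.0) * span)
    octave, step = divmod(semitone, 12)
    nearest = min(C_MAJOR, key=lambda s: abs(s - step))
    return octave * 12 + nearest

def music_mouse(x, y):
    """Return (melody, bass) MIDI pitches for a mouse position."""
    return snap_to_scale(x, low=60), snap_to_scale(y, low=36)

print(music_mouse(0.0, 0.0))  # (60, 36): middle C over a low C
```

The snapping is what made the program democratizing: harmonic correctness lives in the software, so the player only supplies gesture.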

Laurie Spiegel’s album, Unseen Worlds, 1991.