ssharada-lookingoutwards11

Cope’s Emmy (EMI) Bach-style prelude.

David Cope is an American author, composer, scientist, and former professor of music at the University of California, Santa Cruz. His primary area of research involves artificial intelligence and music; he writes programs and algorithms that can analyse existing music and create new compositions in the style of the original input. His EMI (Experiments in Musical Intelligence) software has produced works in the style of various composers, some of which have been commercially recorded (Cockrell 2001), ranging from short pieces to full-length operas. As a composer, Cope’s own work has encompassed a variety of styles, from the traditional to the avant-garde, and techniques such as unconventional manners of playing, experimental musical instruments, and microtonal scales, including a 33-note system of just intonation he developed himself (Cockrell 2001). Most recently, all of his original compositions have been written in collaboration with the computer, based on an input of his earlier works. He seeks a synergy between composer creativity and computer algorithm as his principal creative direction.
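To get a feel for what “composing in the style of an input” can mean at its very simplest, here is a toy sketch (mine, not Cope’s): a first-order Markov chain that learns which note tends to follow which, then random-walks its way to a new melody. EMI’s recombinant analysis is vastly more sophisticated, and the input melody below is invented.

```python
import random

# Toy style imitation: a first-order Markov chain over notes. This only
# illustrates the basic idea of "analyze input music, then generate new
# material with similar local patterns" -- not EMI's actual method.

def build_transitions(notes):
    """Map each note to the list of notes that follow it in the input."""
    transitions = {}
    for current, nxt in zip(notes, notes[1:]):
        transitions.setdefault(current, []).append(nxt)
    return transitions

def generate(transitions, start, length=16):
    """Random-walk the transition table to produce a new melody."""
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:                  # dead end: restart anywhere
            options = list(transitions)
        melody.append(random.choice(options))
    return melody

# A hypothetical input melody (note names stand in for a real score).
bach_like = ["C4", "D4", "E4", "C4", "E4", "F4", "G4", "E4", "D4", "C4"]
print(generate(build_transitions(bach_like), start="C4"))
```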

heeseoc-LookingOutwards-11

I happened to stumble upon this piece, named Automatic Orchestra, while browsing the internet for algorithmic and computational compositions. It is an audio installation orchestrated by networked machines and people. Students from the University of the Arts Bremen and the Copenhagen Institute of Interaction Design created the setup, which consists of twelve pods, each with a controller attached to speakers. All pods are wired together to form a network transmitting musical data, so the data travels through each unit before being passed on to the next. An interesting part of this project is that each pod interprets and alters the data it receives according to its own algorithmic rule set, as if each instrument had a personality of its own; the music takes on different qualities depending on the artist who programmed it. Contrary to the immediate impression of “computational music” as something robotic and cyber, this piece has humanistic attributes that add an extra point of interest.

http://resonate.io/2015/projects/automatic-orchestra/
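As a rough illustration of the architecture described above, here is a sketch of a ring of pods, each applying its own rule to a phrase before handing it on; the rules and note data are invented, not taken from the actual installation.

```python
# A minimal sketch of the Automatic Orchestra idea: musical data travels
# around a ring of "pods," and each pod applies its own rule to the data
# before passing it on. Pod rules here are made up for illustration.

def transpose(notes):        # shift every pitch up two semitones
    return [n + 2 for n in notes]

def reverse(notes):          # play the phrase backwards
    return list(reversed(notes))

def thin_out(notes):         # drop every other note
    return notes[::2]

pods = [transpose, reverse, thin_out]   # each pod has its own "personality"

phrase = [60, 62, 64, 65, 67]           # MIDI note numbers
for lap in range(3):                     # the data keeps circulating the ring
    for pod in pods:
        phrase = pod(phrase)
        print(pod.__name__, phrase)
```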

abradbur – Looking Outwards – 11

For this Looking Outwards I wanted to find musical pieces that couldn’t be reproduced with physical instruments and would instead need to be played entirely electronically. I started out looking for artists who had perhaps built a specialized synthesizer, when I came across an artist who, although using a program they hadn’t created themselves, had instead created a new genre of music. “Black MIDI” is a genre defined by the idea that if you were to write out the sheet music for one of its songs, you would end up with a black piece of paper, because so many notes are played. It’s not just the magnitude of the music that makes Black MIDI so interesting, but also its visual aspect: the notes are rendered in such a way that, in video form, the music is mesmerizing to look at. “Pi, The Song With 3.1415 Million Notes” by TheSuperMarioBros2 on YouTube is a particularly mesmerizing piece. Uploaded on March 14, 2015, its visuals include the symbol of Pi itself, Morse code, and swirling shapes composed of the notes in Synthesia. It’s truly impressive, and while the music may not be everyone’s cup of tea, I find it enjoyable. It’s like video game music.

(The song duration is also 3:14)

dnoh-sectionD-lookingoutwards11

Project: Generating Music with RNN

This project is similar to what I wrote about in Looking Outwards 4. Using the same programming logic as the one I previously wrote about (recurrent neural networks), a program was able to come up with the final “composed piece” at the end of the video. The program first took many samples of Bach’s pieces. Then it analyzed them and created a random “piece” as its first iteration. Guided by the samples it was provided, it slowly “improved,” matching melodies and chords ever more closely to those of the original samples, to create a whole new piece in the style of Bach.
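Below is a minimal sketch of that train-then-sample loop, assuming PyTorch and a toy stand-in phrase for the Bach corpus; the actual project’s model and data are far larger.

```python
import torch
import torch.nn as nn

# A tiny next-note LSTM: fit it to a phrase, then sample from it to
# "compose." The phrase below is a made-up stand-in for a Bach corpus.

NOTES = 128                                 # MIDI pitch vocabulary

class NoteRNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(NOTES, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.out = nn.Linear(hidden, NOTES)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

phrase = torch.tensor([[60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72]])
model = NoteRNN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):                     # train next-note prediction
    logits, _ = model(phrase[:, :-1])
    loss = loss_fn(logits.reshape(-1, NOTES), phrase[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample a new "piece" note by note, feeding each prediction back in.
note, state, piece = phrase[:, :1], None, []
for _ in range(16):
    logits, state = model(note, state)
    note = torch.multinomial(logits[:, -1].softmax(-1), 1)
    piece.append(note.item())
print(piece)
```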

Honestly, I couldn’t really find an interesting computational music project that was not already written about, so I went with this one. There seems to be no artistic mind behind this because it was literally all computer generated without any specific parameters.

However, before I looked into this project, I remembered a YouTuber called Andrew Huang. He creates a lot of music through many audio samples he records and edits/organizes each file into different harmonies and sounds. I found this to be fascinating, however not very computational, as it is basically creating music manually.

ifv-LookingOutwards-11

Ge Wang makes computer music, aiming to use computers and phones to make new kinds of instruments. He created a computer music programming language called ChucK, and programs written in this language can be run with various interfaces. In the embedded TED Talk he showcases several instruments he has made. One example is a repurposed game controller that can be set to create a variety of noises (mostly futuristic or science-fiction sounding); another is a ‘wind’ instrument, played by blowing into the phone’s microphone and altered by holding various buttons on the screen that mimic the holes and keys of a traditional instrument.
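ChucK is its own language, so as a stand-in here is a small Python sketch of the ‘wind instrument’ idea: simulated breath strength sets the loudness and simulated button states pick the pitch, rendered offline to a WAV file. All of the values are invented.

```python
import math, struct, wave

# A rough Python stand-in for the "wind instrument" idea (real ChucK code
# would run live): breath strength controls loudness while button states
# pick the pitch. The event list below is made up.

RATE = 44100
events = [            # (seconds, breath 0..1, MIDI note from "buttons held")
    (0.5, 0.8, 60),
    (0.5, 0.4, 62),
    (1.0, 1.0, 67),
]

samples = []
for dur, breath, note in events:
    freq = 440.0 * 2 ** ((note - 69) / 12)       # MIDI note -> Hz
    for i in range(int(RATE * dur)):
        samples.append(breath * math.sin(2 * math.pi * freq * i / RATE))

with wave.open("wind.wav", "w") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767 * 0.8))
                           for s in samples))
```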

keuchuka – looking outwards – 11


Circuli in action

Circuli is a generative musical instrument conceptualized and developed by Batuhan Bozkurt in 2012, and is now an iOS app. The circles on the interface grow at a constant rate and do not overlap; bigger circles push and shrink smaller circles when in contact. A circle pops and makes a sound when its boundary intersects the center of another circle. The pitch is decided by the position of the circle on the background, and the bigger the circle is, the higher the pitch. The envelope of the produced sound seems to be determined by a number of parameters, including the final circle size and the duration of contact between the two circles involved. To me the project makes sense visually and is coherent with its audio component; it creates a multi-sense concert of sorts.

Source: https://earslap.com/page/circuli.html
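The rules above are concrete enough to simulate. Here is a toy, text-only sketch of them, with an invented position-to-pitch mapping; it is only a guess at the behavior, not Bozkurt’s implementation.

```python
import math, random

# Toy simulation of the Circuli rules: circles grow at a constant rate,
# bigger circles shrink smaller ones on contact, and a circle "pops"
# (sounding a note) when a bigger circle's boundary crosses its center.

class Circle:
    def __init__(self):
        self.x, self.y, self.r = random.random(), random.random(), 0.01

def dist(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

circles = [Circle() for _ in range(6)]
for step in range(200):
    for c in circles:
        c.r += 0.002                                  # constant growth
    for a in circles:
        for b in circles:
            if a is b or a.r <= b.r:
                continue
            overlap = a.r + b.r - dist(a, b)
            if overlap > 0:                           # bigger shrinks smaller
                b.r = max(0.01, b.r - overlap)
            if dist(a, b) < a.r:                      # boundary passed center
                pitch = 40 + int(b.y * 40)            # position -> pitch
                print(f"step {step}: pop! MIDI note {pitch}")
                b.__init__()                          # respawn popped circle
```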

Ziningy1-section c-Looking Outwards 11

 

SugarCube, developed by researcher Amanda Ghassaei at the MIT Media Lab, is an open-source, grid-based, standalone MIDI instrument that can be booted into various musical applications. With its built-in accelerometer and gyroscope, SugarCube integrates physical interactions such as tilting, pushing, and shaking with digital MIDI notes (shown in the video), which immediately gives the computational sound a very tangible quality. SugarCube also affords users a variety of ways to compose music through interaction. In the Boiling app, for example, users can experiment and create interesting polyrhythms with visual bouncing lights: as users push the cube’s buttons to add notes to the rhythm, the bounce direction is based on y-tilt, while speed and MIDI velocity (loudness) are controlled by pots. Perhaps because the creators were concerned that the variety of interactions might become too complex, a very user-friendly shake-to-erase function was added to the product. Overall, I am very impressed by how SugarCube links digital music generation with lighting visuals and analog interaction in a very intuitive way.
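Here is a schematic sketch of that bouncing-lights idea, with invented parameters (the real device reads its speeds from pots and its bounce axis from the tilt sensor):

```python
# Sketch of a bouncing-light step sequencer: each button press drops a
# light onto a row of a 4x4 grid, the light bounces at its own speed, and
# a MIDI-style note fires when it hits an edge. Values are made up.

GRID = 4
lights = [                      # (row, position, velocity) per pressed pad
    [0, 0, 1],
    [1, 0, 2],                  # a faster light -> polyrhythm against row 0
    [2, 3, -1],
]

for tick in range(12):
    for light in lights:
        row, pos, vel = light
        pos += vel
        if pos <= 0 or pos >= GRID - 1:     # hit an edge: bounce and sound
            vel = -vel
            pos = max(0, min(GRID - 1, pos))
            print(f"tick {tick}: row {row} plays note {36 + row}")
        light[1], light[2] = pos, vel
```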

 

 

juyeonk-LookingOutwards-11

Title: The Classyfier

Artist: Benedict Hubener, Stephanie Lee, Kelvyn Marte, with the help of Andreas Refsgaard

Year of Creation: 2017

Link to the Article: http://www.creativeapplications.net/processing/the-classyfier-ai-detects-situation-and-appropriates-music/

The Classyfier is a machine that chooses an appropriate song to suit your mood. It does so by detecting the kind of beverage people are consuming: a built-in microphone catches characteristic sounds and compares them to pre-trained examples (e.g., the clinking of wine glasses or the opening of a can of beer). The main programs used to create the algorithm were Wekinator, Processing, and the OFX collection. The Classyfier then “classifies” the drinks into one of three categories: hot beverages, wine, or beer, and starts playing music from the playlist designated for that category. The user can also knock on the table to navigate through the playlist.
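The classification step can be sketched very simply. The real project trains Wekinator on audio features; as a stand-in, the nearest-neighbor toy below uses invented three-number “feature vectors” for each beverage class.

```python
import math

# Bare-bones sketch of the classification step: compare an incoming
# sound's features to pre-trained examples and pick the nearest class.
# Feature vectors and playlist names are invented for illustration.

examples = {                      # (loudness, brightness, decay) per class
    "hot beverages": [(0.2, 0.3, 0.8), (0.25, 0.35, 0.7)],
    "wine":          [(0.6, 0.9, 0.2), (0.55, 0.85, 0.25)],
    "beer":          [(0.8, 0.5, 0.4), (0.75, 0.45, 0.5)],
}

playlists = {
    "hot beverages": "quiet morning acoustics",
    "wine":          "smooth jazz",
    "beer":          "upbeat rock",
}

def classify(sound):
    """Return the class of the nearest training example (1-NN)."""
    best, best_dist = None, float("inf")
    for label, feats in examples.items():
        for f in feats:
            d = math.dist(sound, f)
            if d < best_dist:
                best, best_dist = label, d
    return best

heard = (0.58, 0.88, 0.22)        # features of a clinking-glass sound
label = classify(heard)
print(f"Detected {label}; now playing: {playlists[label]}")
```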

It is interesting to me how machines could help us in social situations and help everything go naturally with the flow, when machines are generally expected to be a hindrance to interactions between people. It’s exciting to see how technology is getting more and more seamlessly integrated into our lives.

 

aerubin-LookingOutwards-11-Section-C

A very interesting new instrument whose sound is generated computationally is the so-called dub-step board: a board of light-up buttons arranged in rows and columns. Each button is linked to a different sound, song, or beat that is pre-programmed with computer software. Because of this pre-programmed nature, it is an instrument of infinite possibilities that lends itself to live, unique performances. Many of the videos and performances made with this board mix different songs and beats together into a single piece of music.

I really admire the versatility of the board and how user-friendly it is. Many of the antiquated instruments in the standard orchestra have only the specific pitches of their strings and can only produce a particular group of sounds (such as the sound of a piano). The dub-step board allows the performer to play different sounds from diverse instruments, and even lay down a drum beat at the same time, all within reach of one’s two hands. As it is personalizable, one can program a certain group of pitches within reach of one’s own hand, since each individual has different-sized hands. One of the many challenges standard orchestral instrumentalists face is reaching the notes with their own hands, especially for those with smaller hands who play larger instruments (such as the double bass or piano). Although the dub-step board will probably never solo with the Berlin Philharmonic, it is definitely an innovative device that combines music and technology into a user-friendly instrument.

Click to learn more about Dubstep
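In software terms, the core of such a board is just a mapping from grid positions to pre-programmed clips. A minimal sketch, with invented clip names:

```python
# Minimal sketch of a grid controller: each (row, column) button maps to
# a pre-programmed clip, and pressing a button lights it and triggers
# playback. Clip names are made up.

pads = {
    (0, 0): "kick_loop.wav",
    (0, 1): "snare_fill.wav",
    (1, 0): "bass_wobble.wav",
    (1, 1): "vocal_chop.wav",
}

def press(row, col):
    clip = pads.get((row, col))
    if clip:
        print(f"lighting pad ({row}, {col}) and playing {clip}")

# A short "performance": one hand can reach several pads at once.
for button in [(0, 0), (1, 0), (0, 1)]:
    press(*button)
```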

rsp1-LookingOutwards

NSynth: Neural Audio Synthesis

NSynth is the latest project from Google Magenta, a small team of Google AI researchers. Their main pitch is that this new system will provide musicians with an entirely new range of tools for making music. It takes different sounds from different instruments and blends them together, creating an entirely new sound; the creator can also alter how much of each sound is used.

According to an article in the New York Times, “The project is part of a growing effort to generate art through a set of A.I. techniques that have only recently come of age. Called deep neural networks, these complex mathematical systems allow machines to learn specific behavior by analyzing vast amounts of data.” (https://www.nytimes.com/2017/08/14/arts/design/google-how-ai-creates-new-music-and-new-artists-project-magenta.html)
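NSynth does its blending by interpolating learned embeddings of each sound and decoding the result. As a crude stand-in with no neural network, the sketch below interpolates two synthetic sounds’ magnitude spectra instead, which already differs from simply cross-fading the raw audio.

```python
import numpy as np

# Toy analogue of NSynth-style blending: instead of interpolating learned
# embeddings, interpolate the two sounds' magnitude spectra. The "flute"
# and "brass" signals below are synthetic stand-ins.

RATE = 16000
t = np.arange(RATE) / RATE
flute_like = np.sin(2 * np.pi * 440 * t)                  # pure tone
brass_like = sum(np.sin(2 * np.pi * 440 * k * t) / k      # rich harmonics
                 for k in range(1, 6))

def blend(a, b, mix):
    """Interpolate magnitude spectra, keeping the first sound's phase."""
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    mag = (1 - mix) * np.abs(A) + mix * np.abs(B)
    return np.fft.irfft(mag * np.exp(1j * np.angle(A)))

hybrid = blend(flute_like, brass_like, mix=0.5)   # halfway between timbres
print(hybrid[:5])
```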

[Image: soundwaves from the original file to the altered file]

 

The link below contains samples of the types of sounds that NSynth can generate:

https://magenta.tensorflow.org/nsynth

Below is an interactive page where you can mix and match your own sounds:

https://experiments.withgoogle.com/ai/sound-maker/view/