adev_LookingOutwards11_SoundArt

Ryoji Ikeda, Supercodex

I chose to write about Ryoji Ikeda’s live set performance, Supercodex. I had the opportunity to attend this performance in Pittsburgh about a year ago, and it is still being performed internationally today. Witnessing it was absolutely incredible. The entire live experience takes place in a large, black, cube-shaped room, and it is almost hypnotic.

Ryoji Ikeda is an electronic and visual artist who uses physical environments and mathematical concepts to create highly immersive live performances of his work.

nahyunk1 – Looking Outwards 11

https://www.youtube.com/watch?v=S-T8kcSRLL0&feature=share

Sound art incorporates sound into an existing visual piece, generating new meaning through the convergence of the two media. Music by itself is a singular form of art that can exist on its own and convey a meaning or thought. The project I found for this week’s Looking Outwards was Ge Wang’s computational musical instruments at Stanford. He laid out a series of machines on which he could install an app or program code that loops into a playable instrument. Ge applied his computer engineering skills to create sound machines that people can interact with and use to create music. Throughout the talk, I realized that his project relates closely to what I am learning in another class, which also combines computer language with the musical form of art.

akluk – Section A – Looking outwards-11

For week 11’s Looking Outwards, I have decided to write about “If VI Was IX” by Trimpin (Gerhard Trimpin).

I have written about a project of his in a previous Looking Outwards. In this project, he created a sound sculpture as the centerpiece of Seattle’s Experience Music Project. It is a roughly 50 ft sculpture made of seven hundred acoustic and electric guitars. He programmed the guitars to play music from all kinds of genres, from Scottish ballads to punk rock, and he also wrote the software so that each guitar tunes itself, keeping the whole sculpture in tune and in sync. It is also called the guitar tornado because it resembles the shape of a tornado. The documentation doesn’t specify which algorithms are used in the program that plays the different genres of music. What Trimpin always seems to do with his work is not only involve music or sound, but also incorporate a very unique visual aesthetic. Attached below is the link to the piece of work.

Link

enwandu-Looking Outwards 11

“To me computer music isn’t really about computers, it’s about people. It’s about how we use technology to change the way we think, do, and make music; maybe even add to how we can connect to each other through music.” – Ge Wang

Ge Wang is a Chinese American musician and programmer responsible for the creation of the ChucK programming language, as well as the founding of the Stanford Laptop Orchestra and the Stanford Mobile Phone Orchestra. He is also the co-founder of the mobile music company Smule, and the designer of the iPhone apps Ocarina and Magic Piano. Wang received his B.S. in Computer Science from Duke University, and then went on to earn his Ph.D. in Computer Science from Princeton University. He is now an associate professor at Stanford University in the Center for Computer Research in Music and Acoustics. He operates at the intersection of computer science, design, and music.

I found the idea of the laptop orchestra to be a weird and intriguing concept, and the process of bringing such a unique experience to life for everyone involved was quite fascinating. Using IKEA salad bowls, car speakers, and amplifier kits, they created hemispherical speaker domes that project the sound of each instrument from the performer’s location. This gives each performer a sense of autonomy and mimics the way sound is produced in a traditional orchestra, rather than blasting everything through a PA system. Setting up the laptop orchestra also involved the creation of an instrument called ‘Twilight’, which uses the motion of the performer’s hand to generate sound. I really admire many of his projects, particularly the laptop orchestra, because it blurs the lines between disciplines while opening the audience’s minds to interdisciplinary possibilities.
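Out of curiosity, here is a minimal p5.js sketch of the same idea: gesture controls sound. The mouse stands in for the performer’s hand, and all of the mappings are my own arbitrary choices, not Wang’s actual Twilight code.

```javascript
// Minimal gesture-to-sound mapping, in the spirit of 'Twilight':
// the mouse position stands in for the performer's hand.
let osc;

function setup() {
  createCanvas(400, 400);
  osc = new p5.Oscillator('sine'); // requires the p5.sound library
  osc.start();
  osc.amp(0); // silent until the "hand" moves
}

function draw() {
  background(220);
  // Horizontal motion controls pitch, vertical motion controls loudness.
  const freq = map(mouseX, 0, width, 220, 880);
  const vol = map(mouseY, height, 0, 0, 0.5);
  osc.freq(freq, 0.1); // ramp over 0.1 s to avoid clicks
  osc.amp(vol, 0.1);
  ellipse(mouseX, mouseY, 30, 30);
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture before audio starts
}
```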

danakim-LookingOutwards-11

“The Classyfier” is a table that chooses music to fit the situation happening around it, based on the beverages that the people at the table are consuming. It picks a playlist by comparing the characteristic sounds to a catalogue of pre-trained examples. The three classes the table can detect are hot beverages, wine, and beer. I thought this project was pretty interesting because it is a sort of introduction to smart objects and machine learning.

This project was created by Benedict Hubener, Stephanie Lee, and Kelvyn Marte at CIID, alongside Andreas Refsgaard and Gene Kogan. They used the openFrameworks (OFX) collection, Wekinator, and Processing to bring the project together.
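As a rough sketch of what such classification might look like (the real project trained its models in Wekinator; the class feature values below are invented for illustration), a nearest-neighbor comparison of live audio features in p5.js could work like this:

```javascript
// Nearest-centroid sketch of the Classyfier's idea: compare live audio
// features to stored per-class examples. The feature values here are
// invented for illustration, not taken from the actual project.
let mic, fft;

const classes = [
  { name: 'hot beverage', features: [1200, 0.05] }, // [spectral centroid Hz, level]
  { name: 'wine',         features: [2500, 0.15] },
  { name: 'beer',         features: [1800, 0.25] },
];

function setup() {
  createCanvas(400, 100);
  mic = new p5.AudioIn(); // requires the p5.sound library
  mic.start();
  fft = new p5.FFT();
  fft.setInput(mic);
}

function draw() {
  background(255);
  fft.analyze(); // must run before getCentroid()
  const live = [fft.getCentroid(), mic.getLevel()];
  // Pick the class whose stored features are nearest to the live ones.
  let best = classes[0];
  let bestDist = Infinity;
  for (const c of classes) {
    // Scale level up so both features contribute comparably to the distance.
    const d = dist(live[0], live[1] * 5000, c.features[0], c.features[1] * 5000);
    if (d < bestDist) { bestDist = d; best = c; }
  }
  fill(0);
  text('Detected: ' + best.name, 10, 50);
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture to enable the mic
}
```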

Hubener, Lee, Marte; The Classyfier; 2017


rkondrup-Looking-Outwards-11

Atlås is an iOS application that generates music in a conversational, philosophical digital environment. The program generates tasks that are solved by machine intelligence, with accompanying music choreographed to the solutions the machine develops. Minimalist visuals and audio combine to form a virtual space that invites calm, introspective thought. The app was coded using p5.js embedded in a regular Swift iOS application. In addition, randomly chosen philosophical questions from the 20th-century composer John Cage are posed on screen as a supplement to the program’s zen-like aesthetic.
This program very much interests me because it was coded using p5.js, meaning I could begin to code iOS apps myself using what I have learned this year in 15-104. I am also very interested in machine intelligence and the artistic works that machines can algorithmically produce. These ideas inspire me to develop ideas for apps that could further my coding abilities.
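For a sense of what generating music in p5.js can mean, here is a tiny sketch; the scale, rhythm, and visuals are my own arbitrary choices, not Atlås’s actual rules:

```javascript
// Tiny generative-music sketch in p5.js: notes are chosen algorithmically
// rather than played back from a recording.
let osc;
const scale = [220, 261.6, 293.7, 329.6, 392]; // approx. A minor pentatonic

function setup() {
  createCanvas(400, 400);
  background(250);
  osc = new p5.Oscillator('triangle'); // requires the p5.sound library
  osc.amp(0);
}

function draw() {
  // Every 30 frames (~0.5 s at 60 fps), play one randomly chosen note.
  if (frameCount % 30 === 0) {
    osc.freq(random(scale));
    osc.amp(0.3, 0.05);   // quick attack
    osc.amp(0, 0.4, 0.1); // then fade out
    noStroke();
    fill(0, 40);
    ellipse(random(width), random(height), 20, 20); // mark each note visually
  }
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture before audio starts
  osc.start();
}
```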

daphnel-Looking Outwards-11

In 2010, Tristan Perich created a full-length album called “1-Bit Symphony” on a single small microchip encased in a CD jewel case. Perich has always had an interest in music and began working with microchips to create music and art in his college days. For Perich, a microchip is just a smaller version of a computer, but one you are more in touch with and can understand better. I love how he combined his love of composing with his interest in microchips to create something new, very different from the work of other composers, and musically interesting.
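As a loose illustration of what “1-bit” means (each output sample is simply on or off, so all timbre comes from switching patterns), a square-wave tone in p5.js is the simplest analogy; this is not Perich’s microchip code:

```javascript
// In 1-bit audio every sample is just on or off, so all timbre comes
// from switching patterns. A square wave is the simplest such pattern.
let osc;
let playing = false;

function setup() {
  createCanvas(400, 100);
  osc = new p5.Oscillator('square'); // on/off waveform, like a 1-bit output pin
  osc.freq(440);
  osc.amp(0);
  text('click to toggle the 1-bit tone', 10, 50);
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture before audio starts
  osc.start();
  playing = !playing;
  osc.amp(playing ? 0.2 : 0, 0.05); // toggle the tone on or off
}
```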

jknip-SectionA-LookingOutwards-11

Atlås app’s visual style

Atlås by Agoston Nagy (2017)

Atlås is an “anti game [app] environment” where music is generated within a conversational cognitive space: users answer questions about presence, human cognition, and imagination, while also playing with simple game mechanics that create sound. I really admire the visual aesthetic of this app and its minimal interactions, which make users feel that generating music is a simple task anyone can do. The app is developed using open-source tools, specifically p5.js/JavaScript. The artist uses the experience to investigate the autonomy of algorithms and machine learning. Nagy showcases his artistic sensibilities through interactivity and machine learning; he began developing the app as part of his PhD thesis, and was especially interested in combining sound, visual, and literary analogies sensitively and in a visually pleasing manner.

http://www.creativeapplications.net/processing/atlas-guided-generative-and-conversational-music-experience-for-ios/

http://www.binaura.net/atlas/

Sheenu-Looking Outwards-11

This is a segment from Animusic, a series of computer-animated music videos. This particular one is named “Pipe Dream” and features numerous balls shooting out of pipes to strike guitar strings, bells, xylophones, drums, and cymbals. Each segment in the series follows a particular artistic theme, genre of music, and type of orchestra. An electronic piece has a sci-fi theme with an orchestra consisting mainly of synthesizers and electronic drums, while a classical piece has an orchestra of violins, brass, and woodwinds.

As a child, I was always fascinated by the variety, creativity, and autonomy displayed in the Animusic series. At the time, the idea of robots playing music was a fascinating subject to me, and it still is today. What I didn’t realize, however, was that the animation itself was already, in a way, a robot playing music.

The Animusic animations are not animated by hand; they are animated and controlled by the computer, which in effect listens to the music. The software used to bring the whole animation to life is a custom-made engine named “MIDImotion”. Because the songs are stored in MIDI format, the program reads the event data in the song file and translates it into animation for the corresponding instruments. This is how the animation can show so many things happening at once; animating all of it by hand would be extremely difficult.
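MIDImotion itself is proprietary, but the general idea of driving animation from timed note events can be sketched in p5.js; the hardcoded event list below stands in for data parsed from a MIDI file:

```javascript
// Driving animation from timed note events, the general idea behind
// MIDI-synchronized animation (not Animusic's actual MIDImotion engine).
const events = [ // {t: time in seconds, pitch: MIDI note number}
  { t: 0.5, pitch: 60 }, { t: 1.0, pitch: 64 },
  { t: 1.5, pitch: 67 }, { t: 2.0, pitch: 72 },
];
let next = 0;    // index of the next untriggered event
let flashes = []; // active visual hits

function setup() {
  createCanvas(400, 200);
  noStroke();
}

function draw() {
  background(0);
  const now = millis() / 1000;
  // Trigger every event whose timestamp has passed.
  while (next < events.length && events[next].t <= now) {
    const e = events[next++];
    flashes.push({ x: map(e.pitch, 60, 72, 50, 350), life: 30 });
  }
  // Draw and age each flash, like a ball striking an instrument.
  for (const f of flashes) {
    fill(255, f.life * 8);
    ellipse(f.x, height / 2, 40, 40);
    f.life--;
  }
  flashes = flashes.filter(f => f.life > 0);
}
```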

I recommend looking up and viewing all the other Animusic segments on YouTube. There are many other fascinating segments out there that are just as good as the one I’ve shown above.

jiaxinw-Looking Outwards 11- Computer Music

A.I. Duet by Yotam Mann

Someone is trying AI Duet with the keyboard

Yotam Mann created this experiment to let people play a duet with a computer. When the user presses keys on the keyboard, the computer responds to their melody. I like how this experiment shows the potential of humans interacting with computers to create artistic works. One thing that surprised Yotam Mann was that some people didn’t wait for the response but played at the same time as the computer, which was really like a real-time duet with another person.

In this project, Yotam Mann used machine learning to let the computer “learn” how to compose. He used neural networks and gave the computer tons of example melodies. The computer analyzed the notes and timings and gradually built a map of the relationships between them, so that when a melody is played to it, it can respond based on that map.
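The real system uses neural networks, but a toy version of such a “map” can be built with a simple Markov chain in JavaScript, counting which note tends to follow which and sampling a reply from those counts:

```javascript
// Toy version of "building a map of note relationships." A.I. Duet uses
// neural networks; this Markov chain is only a miniature of the idea.
const training = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]; // example melody

// Build the map: transitions[a] lists every note observed right after note a.
const transitions = {};
for (let i = 0; i < training.length - 1; i++) {
  const a = training[i], b = training[i + 1];
  (transitions[a] = transitions[a] || []).push(b);
}

// Respond to a player's note by sampling a likely continuation.
function respond(note) {
  const options = transitions[note];
  if (!options) return note; // unseen note: just echo it back
  return options[Math.floor(Math.random() * options.length)];
}

// Generate a short reply phrase from the last note the player played.
let current = 60;
const reply = [];
for (let i = 0; i < 5; i++) {
  current = respond(current);
  reply.push(current);
}
console.log(reply); // e.g. [62, 64, 62, 60, 64]
```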

Here is the video of A.I. Duet

If you want to know more, please go to: https://experiments.withgoogle.com/ai/ai-duet