Looking Outwards 11

I saw Ryoji Ikeda perform Supercodex last year, and it was an intense experience. Although his piece was a performance, it was not a conventional musical composition: he used data sets to inform the music and visuals that made up the piece. The sound was built entirely from clicks, and each click was distinguishable at the beginning of the piece. As the piece progressed, the click frequency increased until the clicks became pitched (and unpitched) tones. Ikeda does not reveal much of his process, but it was clear he was manipulating and abstracting the input data to control and sway the crowd. One part I found neat was hearing the transformation from single clicks into “continuous” square waves. That transformation was a good primer for thinking about how a computer breaks songs down into samples. Ryoji Ikeda is intense and loud.
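
That click-to-tone transformation is easy to demo yourself. Here is a minimal sketch, assuming the p5.sound library, that ramps a square wave from a sub-audio click rate up into the audible range; the 2 Hz starting rate and 20-second ramp are my own arbitrary choices, not anything from Ikeda's actual process.

```javascript
// A square wave ramped from clicks to a tone (p5.js + p5.sound).
// At 2 Hz each cycle is heard as a separate click; as the frequency
// ramps into the audible range, the clicks fuse into a pitched tone.
let osc;

function setup() {
  createCanvas(400, 100);
  osc = new p5.Oscillator('square');
  osc.freq(2);   // start as discrete clicks
  osc.amp(0.2);
}

function mousePressed() {
  userStartAudio();   // browsers require a gesture before audio plays
  osc.start();
  osc.freq(220, 20);  // ramp to 220 Hz over 20 seconds: a steady tone
}
```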

mjeong1-Looking Outwards-11-Section A

“Bicycle for Two” by Max Mathews (1962)

Max Mathews was one of the early pioneers of computer music. He studied electrical engineering at the California Institute of Technology and MIT. He developed Graphic 1, an interactive graphic sound system on which one could draw figures with a light pen that were then converted into sound, which simplified the process of composing computer-generated music. He also developed GROOVE, the first fully developed music synthesis system for interactive composition and real-time performance, which ran on 3C/Honeywell minicomputers. Through his inventions, he showed his admiration for live performance and his interest in what a computer could do to aid a performer. He also made a controller, the Radio-Baton, to provide new ways of interpreting and performing traditional scores.

“Daisy Bell” is the most famous piece of music associated with Mathews. In 1961, he arranged the song’s accompaniment on a computer, paired with a computer-synthesized human voice created with technology developed by John Kelly. My favorite piece by Mathews is “Bicycle for Two,” because it has a simple melody and rhythm that I enjoyed, and I admire how he used computer software to generate music based on a compositional algorithm.


Link to Max Mathews


dayoungl Looking Outwards-11

For this week’s Looking Outwards, I looked into the concept of a computer orchestra. Computer Orchestra was created in 2013 by a group of students from ECAL, the University of Art and Design of Lausanne, Switzerland. What is interesting about their project is that the orchestra is not based on computer-generated sound but on sounds that users upload to a server, which are later integrated into the form of an orchestra. Just like any human orchestra, there is a conductor (a person) and musicians (computers). Rather than describing it in words, it’s easier to understand the concept by watching the attached video. The sounds uploaded to the server are played according to the conductor’s gestures; the gestures are programmed accordingly, and the conductor can control many aspects of the orchestra just by using his or her body. The students programmed Computer Orchestra using Processing, SimpleOpenNI, and Ableton Live. On their website, they also specified that they used 10-12 computers, tripods, and a Kinect.

Computer Orchestra is an interactive installation consisting of multiple computers. Close to the configuration of a classical orchestra, it proposes a new approach to music by allowing users to conduct their own orchestra.

The movements of the conductor’s hands are recognized by a Kinect motion controller connected to a central computer, which then gives instructions to a multitude of musician screens. The screen-musicians not only send sounds back to the conductor but also produce visual feedback.
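
Since the group hasn’t published their code, the following is only a hypothetical sketch of the conductor-to-musician mapping, written in p5.js rather than the Processing + SimpleOpenNI stack they actually used. The mouse stands in for the Kinect-tracked hand, and 'crowd_voice.wav' is a placeholder for one of the uploaded sounds.

```javascript
// Hypothetical gesture mapping: hand height -> volume, horizontal
// position -> playback rate. Mouse position stands in for Kinect data.
let sample;

function preload() {
  sample = loadSound('crowd_voice.wav'); // placeholder uploaded sound
}

function setup() {
  createCanvas(400, 400);
}

function mousePressed() {
  userStartAudio();
  if (!sample.isPlaying()) sample.loop();
}

function draw() {
  background(30);
  if (sample.isPlaying()) {
    sample.setVolume(map(mouseY, height, 0, 0, 1)); // raise hand = louder
    sample.rate(map(mouseX, 0, width, 0.5, 2));     // left/right = slower/faster
  }
}
```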

Looking-Outwards 11 Thomas Wrabetz

I’m doing sound art now because I did Ronald Jenkees during the sound art week.

“Journey into Hyperland” is a sound art experience that brings a crucial element of the musician’s world, the crazy acid trip, directly to your domestic screen. It’s parametrically generated, which means it draws upon variables such as the user’s location and time of day. If you let it use your mic, it can generate even more, which is kind of interesting, although I didn’t see any tangible reaction to my sounds. In any case, the title does not lie, as this work is hype to the max. There was also an elephant, so clearly the author’s animal-selection abilities are as powerful as his programming skills. It’s like browsing around Google Earth when you’re bored, except instead of the Earth it’s a psychedelic EDM swamp with elephants and flying submarines.
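
As a guess at what “parametrically generated” means in practice, a p5.js sketch along these lines (assuming the p5.sound library for microphone input) could derive its look from the clock and the mic. This is my own illustration, not the piece’s actual code.

```javascript
// Illustrative only: time of day picks the palette, mic level drives
// the motion, so the output differs per visit.
let mic;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start(); // browser will ask for microphone permission
}

function draw() {
  // hour() is 0-23: midday skews orange, midnight skews blue.
  const nightness = abs(hour() - 12) / 12;
  background(lerpColor(color(255, 140, 0), color(10, 20, 90), nightness));

  // Louder input = a bigger, faster-wobbling blob.
  const level = mic.getLevel(); // 0.0 to 1.0
  const r = 50 + level * 300 + 20 * sin(frameCount * (0.05 + level));
  ellipse(width / 2, height / 2, r, r);
}
```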

HaeWanPark-LookingOutwards-11

Iamus by Iamus

Iamus is a computer at the University of Málaga that composes contemporary classical music without any human help. Its first studio album, Iamus, was released in 2012; the album was composed entirely by the computer and recorded and performed by the London Symphony Orchestra. The compositions were produced using Melomics, a computational system based on bio-inspired algorithms that generates musical compositions without human input. Iamus also evolves its compositions on its own.
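
Melomics itself is proprietary, but the bio-inspired idea can be caricatured in a few lines: keep a population of random melodies, score them with a fitness function, and let the fitter ones survive and mutate. Everything below (the C-major scale, the stepwise-motion fitness, the population sizes) is my own toy choice, not how Iamus actually works.

```javascript
// Toy evolutionary melody generator, illustrating the general idea only.
const SCALE = [60, 62, 64, 65, 67, 69, 71, 72]; // C major, as MIDI notes

function randomMelody(len) {
  return Array.from({ length: len },
      () => SCALE[Math.floor(Math.random() * SCALE.length)]);
}

// Fitness: reward stepwise motion (penalize large leaps between notes).
function fitness(m) {
  let score = 0;
  for (let i = 1; i < m.length; i++) score -= Math.abs(m[i] - m[i - 1]);
  return score;
}

// Mutation: replace one random note with another note from the scale.
function mutate(m) {
  const copy = m.slice();
  copy[Math.floor(Math.random() * copy.length)] =
      SCALE[Math.floor(Math.random() * SCALE.length)];
  return copy;
}

// Evolve: keep the fitter half each generation, refill with mutants.
let population = Array.from({ length: 20 }, () => randomMelody(8));
for (let gen = 0; gen < 200; gen++) {
  population.sort((a, b) => fitness(b) - fitness(a));
  const parents = population.slice(0, 10);
  population = parents.concat(parents.map(mutate));
}
console.log('best melody:', population[0]);
```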

I was surprised that this computer composes full pieces of complex classical music, not just fragments of melody. Because it is so good at composing music without human input, people might easily think that computers will replace human musicians in the future. But it might also be a really nice tool for human musicians to develop new musical inventions.


Iamus

monicah1-lookingoutward-11-SectionA

While Sleeping by Ikue Mori (2013)

Ikue Mori is a Japanese artist who moved from Tokyo to New York in 1977. She started out playing drums, then formed a band creating radical rhythms and dissonant sounds. Later she began using a drum machine to improvise music.

I was intrigued by her perception of music. This piece, While Sleeping, made me think about what I hear when I’m asleep or in peaceful, calm states, and about how she relates the sounds of sleep to the sounds she created. It is interesting that she used many mediums, some natural and some fabricated, to create the subtle music in While Sleeping.

mmiller5-Looking Outwards-11


Computer-generated chorale based on Bach’s compositions

This week, I’ll be looking at a Bach-style chorale created by David Cope and his program EMI (or Emmy, I dunno).  EMI, or Experiments in Musical Intelligence, is a program made by David Cope in 1981 that composes music in the styles of various composers, essentially making pieces that sound as though they were written by those composers.  Abstractly, the program works in three steps: deconstruction (analyzing the selected works), signatures (identifying aspects common across the works), and compatibility (recombining those parts into new pieces).  I find this very inspiring because its goal is to computationally analyze musical structure and then produce something from it, potentially creating limitless numbers of works that follow existing styles.  This method could also be used by composers to analyze their own works to aid their composing process, allowing man and machine to work together to make works of art.
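
Cope’s actual system is far deeper than this, but the deconstruction/signatures/compatibility loop can be caricatured with a note-transition table: split source melodies apart, count which transitions the style favors, then chain them back together. The two source melodies here are made up for illustration.

```javascript
// Toy caricature of EMI's three steps (not Cope's real algorithm).
const sources = [
  ['C', 'D', 'E', 'F', 'G', 'F', 'E', 'D', 'C'],
  ['E', 'F', 'G', 'A', 'G', 'F', 'E'],
];

// Deconstruction + signatures: tabulate note-to-note transitions.
const transitions = {};
for (const piece of sources) {
  for (let i = 1; i < piece.length; i++) {
    (transitions[piece[i - 1]] ??= []).push(piece[i]);
  }
}

// Compatibility: walk the table to recombine a "new" melody.
function compose(start, len) {
  const out = [start];
  while (out.length < len) {
    const options = transitions[out[out.length - 1]];
    if (!options) break; // dead end: no known continuation
    out.push(options[Math.floor(Math.random() * options.length)]);
  }
  return out;
}

console.log(compose('C', 12)); // e.g. ["C","D","E","F","E","D","C",...]
```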

hyt-Looking-Outward-11: Sound Art

Live performance documentation of Samson Young’s Nocturne (2015)

For this week’s Looking Outwards post, I found inspiration in Samson Young’s work. He is described as a rising sound artist based in Hong Kong with a strong background in music composition. While he concentrates on expression through sound, he also incorporates it into live performances, visual drawings, and films. The particular project that interested me was Nocturne (2015), a live performance that he conducted at Team Gallery in NYC. He sits surrounded by a drum set, various instruments, found objects, and amplifiers attached to the objects. An old television screen is also placed in front of him, playing muted footage of night bombings found on the Internet, much of it US attacks in the Middle East, ISIS, the Gulf War, and so on. He improvises the explosion sounds using foley techniques and broadcasts them on a pirate radio frequency so that the audience can experience the performance through portable radios. Since the found footage is political and points to specific war-torn countries, the performance seems to be a playful yet sarcastic commentary on authority.

Even though the sound is created mainly by the artist, the computational element lies in the video footage: it shows the frequency of the bombings, and the artist aligns his actions with that frequency. The imitation of everyday, commonly heard sounds is definitely something I am interested in, and I wonder if it would be possible to recreate them by playing with the p5.sound library in future projects.
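
As a first stab at that idea, a cheap foley “boom” in p5.sound is just a burst of brown noise shaped by a fast-attack, slow-release envelope and muffled with a low-pass filter. This is my own sketch of the standard trick, not anything from Young’s setup.

```javascript
// A cheap synthesized explosion: filtered brown noise + an envelope.
let noise, env, filter;

function setup() {
  createCanvas(200, 200);
  filter = new p5.LowPass();
  filter.freq(400);                 // muffle the noise toward a rumble
  noise = new p5.Noise('brown');
  noise.disconnect();
  noise.connect(filter);
  noise.amp(0);                     // silent until the envelope fires
  noise.start();
  env = new p5.Envelope();
  env.setADSR(0.01, 0.3, 0.1, 1.5); // fast attack, long release
  env.setRange(1, 0);
}

function mousePressed() {
  userStartAudio();
  env.play(noise);                  // one "boom" per click
}
```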


hschung-LookingOutwards-11

My LookingOutwards-04 was more about uniquely produced music, so I am taking the opportunity to explore sound art.

As I was searching for computational sound art, I came across the website for EarSketch, a program used to teach students computer science through coding music. Students are taught to code in Python or JavaScript, and learn how to use loops, compose beats, and add effects to make music.

EarSketch hosts a national competition every year to encourage students to code music creatively. I liked a winning submission from the 2017 competition, titled Mid Day Parade Competition Song, created by student Robert Marcez. I thoroughly enjoy his song because it actually sounds good, like a real song. It’s full of complexities, crescendos, and even a beat drop of sorts that make it feel like a full song. More than that, I’m impressed that this high school student made a song from scratch via the medium of code, which is something I hadn’t thought much about before. He was able to manipulate different components of the song, write his own functions to manipulate them easily, and creatively exercise his knowledge of code. It reminds me that there are many ways to blend the fields of science and art.
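
For a sense of what these scripts look like, here is a short EarSketch-style example in its JavaScript mode, based on the documented fitMedia/makeBeat/setEffect calls. The sound-constant names are examples from the EarSketch library and should be treated as placeholders; this is not Robert’s actual code.

```javascript
// EarSketch-style script (JavaScript mode, EarSketch 2 era).
init();
setTempo(120);

// Layer a drum loop and a bass line on tracks 1 and 2, measures 1-9.
fitMedia(ELECTRO_DRUM_MAIN_BEAT_008, 1, 1, 9);
fitMedia(DUBSTEP_BASS_WOBBLE_002, 2, 1, 9);

// A custom beat on track 3: "0" plays the clip, "-" is a rest.
makeBeat(OS_SNARE03, 3, 1, "0---0---0-0-0---");

// Fade track 2 in from -20 dB to 0 dB over measures 1-5.
setEffect(2, VOLUME, GAIN, -20, 1, 0, 5);

finish();
```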

Robert’s coded song; press the play button to listen! It’s quite good.
https://earsketch.gatech.edu/earsketch2/#?sharing=2K1nSohUQd3YaLpD4Zvohw

A list of the winners from 2017. Robert’s song is posted first, and there’s a blurb from him about what he was thinking as he created his song.
https://earsketch.gatech.edu/landing/#

yoonyouk-LookingOutwards-11


The Classifyer is a gadget that can detect the mood of a social setting and then play appropriate music. The table can detect different drinks (beer, wine, hot beverages) and even pick up sounds from the environment, whether conversations or background noise. The creators, Benedict Huebermen, Stephanie Lee, and Kelvyn Marte, wanted the gadget to enhance the ambience of each environment. The algorithm detects the different drinks and then selects the sounds to play.
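
From the description, the decision logic reduces to a classifier over sound features feeding a lookup table. The sketch below is purely my own illustration; the feature names and playlists are made up, and the real project presumably uses trained audio classification rather than hand-written thresholds.

```javascript
// Illustrative drink-to-music mapping (all names hypothetical).
const PLAYLISTS = {
  beer: 'upbeat-party-mix',
  wine: 'mellow-jazz',
  hot_beverage: 'quiet-morning-acoustic',
};

// Stand-in for the real audio classifier: maps whatever features the
// table extracts from its microphone to a drink label.
function classifyDrink(features) {
  if (features.fizzing) return 'beer';
  if (features.clinkPitch > 2000) return 'wine'; // high glassy clink
  return 'hot_beverage';
}

function pickMusic(features) {
  return PLAYLISTS[classifyDrink(features)];
}

// Example: a high-pitched clink with no fizzing reads as wine.
console.log(pickMusic({ clinkPitch: 2400, fizzing: false })); // "mellow-jazz"
```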

I thought it was cool that the gadget could play sounds that correlated with different drinks. I also found it particularly interesting that the students behind this project decided that drinks were the indicators of the mood of a setting. Perhaps this product would be most appropriate in kitchen, bar, or restaurant settings.

Learn more through this link.