Looking Outwards 11

Architecture and sequence

A Long Distance Relationship is a project by Cevdet Erek, one of the founding members of the experimental band Nekropsi. The project focuses on the relationship between two points in space or time. It continues his ongoing project Rulers and Rhythms, and sits between two rhythm-centered pieces conceived for MUAC: Measures Taken and Close FarClose. The project is not only music itself but also combines computational music and architecture. I think it is very cool to vary the media and add elements to other projects.

A long distance relationship

Jaclyn Saik – Looking Outwards 11

Changxi Zheng is a professor at Columbia University who leads a team of researchers looking at ways to use computers to modify the sound of existing acoustic instruments.
One project that really caught my eye is called “zoolophone.” Zheng and his team studied the way professional tuners adjust glockenspiel keys: by digging into the material and making divots that allow each key to vibrate at the exact desired frequency. Zheng looked into ways that computers could make this process easier, and discovered that by modeling this same interaction on a computer, they could calculate the exact vibration that a key would make based on its shape. In this way, he was able to manipulate the shape of the keys, something that usually isn’t done with traditionally made instruments, since it’s hard enough to tune rectangular shapes.

(video caption): The metallophone contact sounds project, in which the team manipulated different shapes within set algorithms in order to maintain a certain tone.

The zoolophones on display. Each shape plays a different specific note.

This allowed his team to have more fun with the shapes, too. He wrote a program that asks the computer to start with a certain shape, such as a T. rex, then tests vibrations against it and manipulates its form slightly until it produces the particular sound they were looking for.
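That loop can be imagined as a simple optimization: perturb the shape, re-simulate the vibration, and keep changes that move the pitch toward the target. The sketch below is a toy stand-in, not Zheng’s actual solver: `fundamental_freq` is an invented one-parameter model of a metal bar, and the hill-climbing loop plays the role of his shape optimization.

```python
import random

# Invented one-parameter model: a bar's fundamental frequency rises with
# thickness and falls with the square of its length (k is a made-up constant).
def fundamental_freq(thickness_m, length_m, k=2500.0):
    return k * thickness_m / length_m ** 2

def tune_bar(target_hz, length_m, thickness_m=0.01, steps=2000):
    """Hill-climb the thickness until the modeled pitch matches target_hz."""
    best = thickness_m
    best_err = abs(fundamental_freq(best, length_m) - target_hz)
    for _ in range(steps):
        candidate = best + random.uniform(-1e-4, 1e-4)  # small shape tweak
        if candidate <= 0:
            continue
        err = abs(fundamental_freq(candidate, length_m) - target_hz)
        if err < best_err:          # keep tweaks that move pitch toward target
            best, best_err = candidate, err
    return best, best_err

thickness, err = tune_bar(target_hz=440.0, length_m=0.2)
```

A real solver would replace `fundamental_freq` with a physical modal analysis of the full 3D key shape, but the accept/reject structure of the search is the same.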
I found this very interesting because I am always interested in tools that designers make in order to educate children, and this seems like a useful tool for teaching about different notes.

Kevin Riordan Looking Outwards-11-Section C

For this week’s Looking Outwards, the algorithmic music project that I chose is Andrius Šarapovas’ Kinetic Generative Music Installation. Through transposition and conversion, data from half a million people is turned into energy impulses that produce a complex acoustic variation. I admire his creativity in turning statistics into a musical composition. I resonate with his belief that music is not merely the songs we usually listen to, but also the rhythmic patterns, space, melody, and harmony that we encounter in daily life. I think that turning different people’s consciousness into music is meaningful because each individual’s presence is represented in a very unique way.

The installation consists of metal, wood, plastic, and glass, which are the sources of the sounds; aluminium bars are used to create a lasting and rich harmony. The segments of the installation are hung on wires through which the impulses are transferred to a sound activator and a damper. The algorithm of transposition and conversion turns statistics into impulses, which are distributed to the segments to make sounds. The resulting sounds are rather relaxing, which the artist achieved through his work on the algorithm and the mechatronics. This shows that the artist sends a message to his audience: technology can not only be used to create solitude for people but also make that solitude comforting.
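As a rough picture of what such a conversion step might look like (this is purely my own guess, not Šarapovas’ algorithm), one could normalize each data value into an impulse strength and deal the impulses out across the installation’s segments:

```python
# Guessed-at sketch of a "transposition and conversion" step: normalize raw
# statistics into impulse strengths (0..1) and distribute them round-robin
# across the installation's segments (77 segments per the video caption).
NUM_SEGMENTS = 77

def to_impulses(values, v_max=100.0):
    segments = [[] for _ in range(NUM_SEGMENTS)]
    for i, v in enumerate(values):
        strength = max(0.0, min(v / v_max, 1.0))   # clamp into 0..1
        segments[i % NUM_SEGMENTS].append(strength)
    return segments

segments = to_impulses([12.0, 88.0, 150.0])
# segment 0 receives 0.12, segment 1 receives 0.88, segment 2 is clamped to 1.0
```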

Source: https://metalmagazine.eu/en/post/interview/andrius-sarapovas-transforming-data-into-music

installation sound – 26 / 77 segments recorded – 17.06.25 from Andrius Šarapovas on Vimeo.

Anthony Ra – Looking Outwards 11

Thinking of computer music, I immediately thought of a musical trend that has become very popular with the younger demographic this decade: electronic dance music (EDM). EDM is essentially a collection of tracks and synths patched together using music software on a computer. My favorite EDM artist is Alan Walker, and that is solely a musical opinion rather than a computational one.

Alan Walker uses FL Studio for a lot of his tracks

Alan Walker uses a heavy dose of synth chords with a slow release on his melodies to create a more soothing product. With any instrument played on the computer, and with any synth, he is able to alter settings and functions to create the right atmosphere for a given song.
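The “slow release” he relies on is the R in a standard ADSR amplitude envelope. The sketch below is a generic ADSR curve, not Walker’s actual patch; the parameter values are arbitrary, with a deliberately long release so each chord fades out slowly.

```python
def adsr_envelope(t, note_off, attack=0.05, decay=0.1, sustain=0.7, release=2.0):
    """Amplitude (0..1) at time t seconds for a note released at note_off."""
    if t < attack:
        return t / attack                        # ramp up to full volume
    if t < attack + decay:
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain)      # fall to the sustain level
    if t < note_off:
        return sustain                           # hold while the key is down
    fade = 1.0 - (t - note_off) / release        # long, slow fade-out
    return max(0.0, sustain * fade)

# With release=2.0, a chord released at t=1.0 is still audible at t=2.0 ...
print(adsr_envelope(2.0, note_off=1.0))  # 0.35
# ... and reaches silence only at t=3.0.
print(adsr_envelope(3.0, note_off=1.0))  # 0.0
```

Shortening `release` to a fraction of a second would give the chords a clipped, percussive character instead of the washed-out quality described above.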

Depending on who he is collaborating with, he also uses Cubase & Logic.

The video above shows his step-by-step process of patching different computational instruments together, and shows us how the electronics and synths work alongside typical instruments. A lot of the sounds he makes for his music come from plug-ins such as Nexus. Alan Walker is able to achieve his dream of being a musician without some of the fundamental qualities one would traditionally need: the ability to sing or play a musical instrument.

Sharon Yang Looking Outwards 11 Computational Music Composition

The algorithmic musical project that I would like to discuss is Apparatum, created by the collective panGenerator for a solo exhibition. Although it is operated through a digital interface, the device emits purely analogue sounds. The project was inspired musically and graphically by the “Symphony – electronic music” of Bogusław Schaeffer. Just as Schaeffer did, the apparatus makes use of a visual language of symbols that convey the audio cues. The electroacoustic generators and filters are arranged inside two steel frames, and two types of magnetic tape samplers are used to create the basic tones. The apparatus relies on an analogue way of generating sound: spinning discs with graphical patterns on them. I admire this project because it makes analogue sounds from a digitally controlled means of creating sound. In an era when we are constantly exposed to electronic sounds, it shows that an apparatus can be used to bring back analogue sounds. I also admire that a relatively simple algorithm drives such an ingenious project.

APPARATUM – the installation inspired by the Polish Radio Experimental Studio from ◥ panGenerator on Vimeo.

Video Clip of the operation of Apparatum

Source: https://www.creativeapplications.net/sound/apparatum-installation-inspired-by-the-polish-radio-experimental-studio/

Beyond the Fence

Beyond the Fence is the world’s first computer generated musical, created by Benjamin Till and a team of researchers.

Not only is 25% of the music and lyrics computer generated, but aspects of the production, such as the setting and the size of the cast, were based on algorithms as well. Using an algorithm that sorts through over 2,000 musicals, the team determined the characteristics of the best musicals and applied those to the premise of their own. After computing these aspects of the plot, the team “filled in the dots.”

There’s a hint of Romeo and Juliet (or should that be West Side Story?) to the burgeoning relationship between Mary and Jim, a touch of Miracle on 34th Street to the George subplot, and the idea of a band of women taking a stand against oppressors has been rehearsed in the West End as recently as last year’s short-lived musical staging of Made in Dagenham.

This project is extremely interesting because most modern arguments suggest the computer will never be as creative as the human, especially concerning emotional factors, yet this project applied computational creativity in a useful way. With most computer-generated work there is a layer of human decision making (going back even to the creation of the algorithm), so things such as the dialogue, and the blending of lyrics from the poetry generator, were created at the hands of the team, who somewhat relied on the machine to inspire their own creativity. With the advantageous use of these algorithms, the reliance on the “What-If Machine” allowed the team to apply their talents elsewhere and embrace the surprising output of the machine.

The music was created with the algorithmic composing software Android Lloyd Webber, the plot-generating script PropperWryter, and the lyric-generating Clarissa the Cloud Lyricist, alongside the premise-generating “What-If Machine.” It was a collective effort of many collaborators and creators to fully realize this computational musical.

As a reference for sound art, this project is quite inspiring: it goes past the so-called restrictions of machine learning and computer-generated software to create something brand new, yet realistic and familiar. It would be really interesting to see what would happen if these algorithms ran throughout the musical, constantly changing its plot and premise, almost like improv.

Read more about the project here & here!

Yoo Jin Shin-LookingOutwards-11

Sonic Playground

Sonic Playground (2018) at the High Museum of Art, Atlanta, GA

Sonic Playground by Yuri Suzuki is “an outdoor sound installation that features ingenious, colourful sculptures that modify and transmit sound in unusual, engaging and playful ways.” The colors used in the installation are so vibrant and really catch the eyes of those passing by.

Rhinoceros 3D / Grasshopper Pipes

The software behind this installation was designed by Luca Dellatorre using Grasshopper, a parametric design plug-in for Rhinoceros. “The plug in he wrote is a 3D raytracing tool that allows the user to select a sound source and send sound in a certain direction or towards a certain geometry, in this case the shape of the acoustic mirrors or the bells at the start and end of the pipes to see how the sound is reflected and what is the interaction with the object.” It’s cool to see how such a seemingly simple installation can have such complex architecture on the back-end.
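The core of any such raytracer is the mirror-reflection formula r = d - 2(d·n)n. As a minimal illustration (my own 2D toy, not Dellatorre’s Grasshopper plug-in), reflecting a ray direction off a surface normal looks like this:

```python
# 2D mirror reflection: r = d - 2 * (d . n) * n, where n is a unit normal.
def reflect(direction, normal):
    dx, dy = direction
    nx, ny = normal
    dot = dx * nx + dy * ny
    return (dx - 2 * dot * nx, dy - 2 * dot * ny)

# A sound ray heading straight down bounces off a horizontal surface:
print(reflect((0.0, -1.0), (0.0, 1.0)))  # (0.0, 1.0)
```

A full tool like Dellatorre’s repeats this bounce step through 3D geometry to trace where sound from a source ends up after hitting the mirrors and pipe bells.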

Jenny Hu — Looking Outwards 11: Computer Music

For this week’s Looking Outwards post, I will discuss the artist Mileece, a sound artist and environmental designer known for her methods of making music with plants.

Mileece creates generative music, one track of which can be listened to above and at this link. She generates this music by reading the electrical currents given off by plants via electrodes and processing the current into binary code, which is then processed and animated into sound. Often, this creates sound that feels like music.
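A crude way to picture the current-to-sound step (purely my own sketch; Mileece’s actual pipeline is more sophisticated) is to quantize each electrode reading onto a musical scale:

```python
# Speculative sketch: map plant-electrode voltages onto a pentatonic scale
# of MIDI note numbers. The scale choice and voltage range are invented here.
PENTATONIC = [60, 62, 64, 67, 69]  # C major pentatonic, one octave

def voltage_to_note(v, v_min=0.0, v_max=5.0):
    """Clamp a voltage into [v_min, v_max] and pick the nearest scale degree."""
    frac = min(max((v - v_min) / (v_max - v_min), 0.0), 1.0)
    index = min(int(frac * len(PENTATONIC)), len(PENTATONIC) - 1)
    return PENTATONIC[index]

readings = [0.2, 1.4, 2.6, 3.1, 4.9]               # simulated samples
print([voltage_to_note(v) for v in readings])      # [60, 62, 64, 67, 69]
```

Restricting the output to a pentatonic scale is one reason such mappings tend to sound musical rather than random: any sequence of notes from the scale is consonant.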

Vice created great documentation of her process at the following video:

What I love about Mileece’s music is the duality of the generative work. In some ways the music is generated purely from the plant, which is a radical change in our perception of what a plant is and is capable of; yet it is also reprocessed by the computer, and the computer is a key part of our ability to interpret the plant as music in the first place. Because of this, it feels impossible to define her music without discussing computation.

Will we start to breed plants to generate totally new sounds?

Jisoo Geum – Looking Outwards 11

Fedde ten Berge – Of Nature and Things (het Ei / The Egg) – 2017

Fedde ten Berge is a Dutch sound artist who studies sound in different types of material. The installation above is part of a larger project called “Of Nature and Things”, a collection of artificial objects mimicking forms in nature. Through “Of Nature and Things”, the audience is able to investigate the different ambient sounds generated by the unique surfaces and shapes of each sculpture. I particularly liked “The Egg” because of the way it creates a subtle yet powerful sound through indirect contact. The combination of sound and the movement of a person’s hand looked interesting, even magical. To my knowledge, “The Egg” is played by sensing the proximity between the audience and the object, using a low-latency audio board for embedded applications. I think Berge’s artistic sensibility is best represented by the alien sounds contrasted with familiar natural shapes.

Eliza Pratt – Looking Outwards 11

 

The ARC creates generative music by responding to touch sensors in different objects

At the South by Southwest (SXSW) festival in 2016, Deloitte Digital created an interactive space where people could use touch sensors to generate electronic music. Known as the ARC (Audience Reactive Composition), the space holds programmable objects whose sensors adjust the amplitude, pitch, and other elements of exclusive tracks. According to DJ Baio, an electronic artist involved with the project, the installation is controlled using Ableton software. Ableton, which is commonly used in live electronic concerts, can adjust notes, frequencies, and audio samples to create new versions of tracks, so every performance is unique. I admire this project because it allows people with no experience in computational sound to create their own generative music. Moreover, adopting responsive technology to make music opens endless creative opportunities that would otherwise be impossible.
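Conceptually, each touch object just maps a sensor reading onto a track parameter. This tiny sketch (my own illustration, not the ARC’s Ableton setup; the 10-bit sensor range is an assumption) scales an audio buffer’s amplitude by how firmly an object is touched:

```python
# Scale an audio buffer's amplitude by a normalized touch-sensor reading.
# The 0..1023 range assumes a typical 10-bit analog sensor.
def apply_touch_gain(samples, sensor_value, sensor_max=1023):
    gain = min(max(sensor_value / sensor_max, 0.0), 1.0)
    return [s * gain for s in samples]

full = apply_touch_gain([0.5, -0.25, 1.0], sensor_value=1023)  # full volume
silent = apply_touch_gain([0.5, -0.25, 1.0], sensor_value=0)   # silence
```

The same pattern, with a different output parameter, covers the pitch and other per-track controls the installation exposes.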