Looking outwards 11

I have chosen Google’s Magenta project as my example of computer music. I am very interested in machines’ capability to learn in realms that are considered human. The Magenta research project aims to push the limits of what AI can do in the arts. To achieve that kind of intelligence, the team has tried many different machine-learning techniques, such as recurrent neural networks, convolutional neural networks, variational methods, adversarial training methods, and reinforcement learning.

They applied the NSynth algorithm, which uses neural networks to synthesize new sounds, to notes generated by different instruments.
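The core idea behind NSynth is to encode notes into a learned latent space, blend them there, and decode the mixture into a genuinely new sound. As a rough, runnable analogue (using an FFT spectrum as a crude stand-in for NSynth’s learned WaveNet embedding, not Magenta’s actual API), the idea looks like this in Python:

```python
import numpy as np

# Crude analogue of NSynth's idea: represent each note in a "latent"
# space (here just its FFT spectrum, a stand-in for NSynth's learned
# WaveNet embedding), interpolate there, and transform back.

def encode(audio):
    return np.fft.rfft(audio)          # stand-in for the learned encoder

def decode(z, length):
    return np.fft.irfft(z, n=length)   # stand-in for the WaveNet decoder

def blend(note_a, note_b, mix=0.5):
    """Interpolate two equal-length notes in the stand-in latent space.
    mix=0 sounds like note_a, mix=1 like note_b; values in between
    yield hybrid timbres."""
    z = (1 - mix) * encode(note_a) + mix * encode(note_b)
    return decode(z, len(note_a))

# Example: morph between a 220 Hz and a 330 Hz sine "note".
sr = 16000
t = np.arange(sr) / sr
hybrid = blend(np.sin(2 * np.pi * 220 * t), np.sin(2 * np.pi * 330 * t))
```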

This is an inspiring, thought-provoking project that raises further questions about the limits and possibilities of AI.

kyungak-lookingoutwards-11

(Simon de Diesbach, Jonas Lacôte, Laura Perrenoud, “The Computer Orchestra”, 2013)

The creators of “The Computer Orchestra” first met at the University of Art and Design Lausanne in Switzerland during their Bachelor’s degrees in media and interaction design. What began as a small project later grew into an impressive piece that has been exhibited in various parts of the world.

“The Computer Orchestra” is an interactive installation that lets viewers conduct their own orchestra. Users can upload or download music samples and combine them to their own taste using a simple interface on the computer. The sound also changes according to the user’s position in the space: a Kinect mounted at the center of the installation detects the position and motion of the user’s hands, and the orchestra responds accordingly.
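As a toy sketch of the kind of gesture-to-sound mapping such an installation might use (the coordinate ranges, section names, and mappings below are invented for illustration, not the artists’ actual code), hand position could select a section of the orchestra and control its volume:

```python
# Toy sketch of Kinect-style hand tracking driving an orchestra of
# samples. All ranges and mappings are invented for illustration;
# the installation's real logic is not public.

def hand_to_controls(hand_x, hand_y):
    """Map a normalized hand position (0..1 on both axes) to a
    section of the 'orchestra' and a playback volume."""
    sections = ["strings", "brass", "percussion", "choir"]
    section = sections[min(int(hand_x * len(sections)), len(sections) - 1)]
    volume = max(0.0, min(1.0, 1.0 - hand_y))  # raising the hand gets louder
    return section, volume

print(hand_to_controls(0.7, 0.2))  # e.g. ('percussion', 0.8)
```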

I personally admire the interactivity of this artwork. Being able to conduct an orchestra, something that usually requires a large group of people, with nothing but computers is amazing. The original concept, combined with the interactivity, makes the artwork accessible and fun. I also feel the piece is educational, which makes it all the more worth exhibiting.

karinac-LookingOutwards-11

‘Mothership’ by Mason Bates

 

This is absolutely one of my favorite pieces of computer music. Mason Bates, the composer of ‘Mothership’, is a famous artist who integrates technology with music. This particular work uses synthesized sounds of multiple aircraft to create the alien sound of a mothership.

Mason Bates had to record those sounds and transfer the recordings into sound files that could be loaded onto his laptop. Though the mechanical sounds were not computer-generated, he still had to edit many of them heavily to achieve the sound he wanted.

I was inspired by this piece and by Mason Bates because it perfectly balances the computer-generated sounds with the melodies and harmonies of the orchestra. Computer-generated music is often too heavy on either the tech side or the orchestral side, with the two clashing with each other. I think this piece makes great use of both.

katieche-looking outwards 11

Carsten Nicolai

Carsten Nicolai is a Berlin-based German artist and musician who works in a transitional field between music, art, and science. He seeks to override individualized sensory perception by creating works that appeal to multiple senses at once (e.g., using sound and light frequencies to engage the ears and eyes simultaneously). In terms of computation, he often works with mathematical patterns such as randomness, grids, error, and consistency.

In his 2016 installation, Reflektor Distortion, viewers can both hear the sounds being played and see light bars seemingly move in a bowl of water. The piece works by playing sound waves through a speaker, which shifts the water in the bowl accordingly and creates the illusion that the series of light bars reflected in the water is also moving. The result is a mesmerizing play of lights that coincides with whatever sound is being played. I think the light bars are a very effective and beautiful way to display sound waves, in contrast to the stereotypical up-and-down sine-wave visualizations we are all used to seeing.

creyes1-LookingOutwards-11


Brian Foo’s “Two Trains,” in which the song varies depending on the median household income of each neighborhood the 2 Train stops in

Created in 2016, Data-Driven DJ is a project by Brian Foo that creates musical works through computation and the interpretation of real-world data in new and experimental ways. The project is a collection of ten tracks, each sampling data from a broad swath of topics, from the social to the cultural to the environmental. Using a blend of tools such as ChucK (a programming language for real-time sound synthesis and music creation), Audacity, Python, Hydrogen (a drum machine application), and Processing, Foo makes each track compelling and unique, and the work makes powerful statements when the data is not only visualized in his videos but synthesized into song. What I really like about Foo’s work is that while the tracks may not be catchy, fleshed-out songs, each one is fascinating and communicates powerfully about its subject. Even though the songs are essentially controlled by algorithms, Foo’s artistic touch is still incredibly evident, and the entire body of work shines for that reason.
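As a rough illustration of the kind of data-to-music mapping “Two Trains” is built on (the station names, income figures, and ranges below are placeholders, not Foo’s actual data or code), a minimal Python sketch might map median household income to pitch:

```python
# Minimal data-sonification sketch (not Foo's actual pipeline).
# Maps a neighborhood's median household income onto a MIDI pitch;
# the stop names and income figures below are made up.

stops = [
    ("Stop A", 45000),   # hypothetical stop/income pairs
    ("Stop B", 38000),
    ("Stop C", 105000),
]

INCOME_MIN, INCOME_MAX = 30000, 120000
PITCH_MIN, PITCH_MAX = 48, 84   # MIDI note range (C3..C6)

def income_to_pitch(income):
    """Linearly map income onto the MIDI pitch range, clamped to [0, 1]."""
    t = (income - INCOME_MIN) / (INCOME_MAX - INCOME_MIN)
    t = max(0.0, min(1.0, t))
    return round(PITCH_MIN + t * (PITCH_MAX - PITCH_MIN))

for name, income in stops:
    print(f"{name}: income ${income} -> MIDI pitch {income_to_pitch(income)}")
```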


Brian Foo’s “Rhapsody in Grey,” which uses brain-wave data recorded during a seizure to inform the song’s composition

Foo has made his process for creating his tracks available as open source on GitHub, and you can find more of his work on Twitter, Facebook, SoundCloud, and Vimeo.

jwchou-LookingOutwards-10

EarSketch

A promotional image for EarSketch.

EarSketch is an online web application that teaches students how to code by making music. Users can code in either Python or JavaScript. It was created on the premise that the best way to encourage people to learn to code, and to increase coding’s appeal, is to use a universally understood and loved language: music. I really admire the interface, because it’s modern and fairly simple. If it looked unfinished or overly technical, I imagine that would turn people off from using the app. The app also features a huge range of samples, containing everything from dubstep to eight-bit to West Coast hip-hop.

EarSketch was designed and created by a team primarily based at Georgia Tech.

While the samples themselves might not be computational, users can change the music by setting their own tempo in the code, which is computational. There are also many functions that users can call to analyze tracks for their different properties, as in the sketch below.
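A minimal EarSketch-style script in Python might look like the following; this follows the script skeleton in EarSketch’s documentation (init, setTempo, fitMedia, analyzeTrack, finish), but the sample constant is a placeholder in the style of the sound browser’s names and may not match an actual clip:

```python
# Minimal EarSketch-style script (Python mode). The sample constant
# DUBSTEP_BASS_WOBBLE_002 is illustrative; real names come from the
# EarSketch sound browser.
from earsketch import *

init()
setTempo(120)  # tempo is set in code, which shapes the whole piece

# Place a sample on track 1 from measure 1 to measure 5.
fitMedia(DUBSTEP_BASS_WOBBLE_002, 1, 1, 5)

# Analyze a property of track 1, e.g. its overall loudness.
loudness = analyzeTrack(1, RMS_AMPLITUDE)
print(loudness)

finish()
```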

Because the software is used by others to create music, the creators did not insert their own artistic sensibilities into the project. Rather, they designed the software to let others bring their own creative sensibilities to it.

A video that introduces EarSketch:

ashleyc1-LookingOutwards-11

WE is an interactive, immersive sound installation by the studio Let’s, designed and programmed by Andy Arkley, Peter Lynch, and Courtney Barnebey. First exhibited at Seattle’s MadArt event, WE grew out of its creators’ desire to create a place where a shared community and experience could be built, moving beyond people interacting with the piece to people interacting with each other. The project allows up to 12 participants to interact with controllers and create musical sequences. The installation uses light bulbs, music, and mapped video projections that synchronize with the music to create an overwhelming sensory experience. What makes this interactive sound installation unique is that the machine doesn’t stay silent when there aren’t any participants, which makes it more inviting at first approach. Anyone can also start playing with the controllers without feeling like they’re ruining any previously established sound, because the musical sequences assigned to the controllers are programmed to synchronize with one another. So even though participants choose what to play, all of the sounds still work well together. I think this feature is what makes the piece particularly successful.

Sources:

https://creators.vice.com/en_us/article/vvaeja/you-can-play-these-giant-sculptures-like-musical-instruments

http://letspresents.com/wewe/

Bettina-LookingOutwards11-SectionC

Above: screenshot of Carsten Nicolai’s work from Vimeo. (Click image for video.)

The only sound artist I knew of prior to this assignment was Ryoji Ikeda, so by searching “artists like Ryoji Ikeda” I found the following article. Carsten Nicolai piqued my interest because of his work in “altering audiovisual perception.” I think it’s interesting how we often blend our five senses; perhaps we visually represent sound or taste, or aurally represent touch. I can see pre-determined constraints mapping one variable to another, so that one sensory input informs the other.

It is nevertheless interesting that despite his work in representing sound visually, the visuals he makes feel very sterile and technical. He seems to emphasize texture and lines, but I’m interested to see what computation could do with color as well.

Above: screenshot of Nicolai’s work from Google Images

 

Jiheek1 (Section D)– LookingOutwards-11

Eunoia from Lisa Park on Vimeo.

Eunoia by artist Lisa Park is a performative piece that produces music from data. During the performance, the artist wears a NeuroSky EEG headset that measures her brain activity and converts the data into sound waves, which vibrate dishes of water set on top of speakers.

the artist herself and the performative elements of the project

The data collected includes the Alpha, Beta, Delta, Gamma, and Theta frequency bands, as well as the emotional/mental states “Attention” and “Meditation.” Depending on their wavelength and amplitude, the resulting sound waves create different sounds and visuals (the water particles glisten as the dishes bounce up and down with the vibration of the speakers beneath them).
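As a rough sketch of this kind of brainwave-to-sound mapping (the band powers, base frequencies, and scaling below are invented for illustration; this is not Park’s actual pipeline), one might turn EEG band amplitudes into a mixture of tones:

```python
import numpy as np

# Hypothetical EEG band powers (normalized 0..1); not real NeuroSky data.
bands = {"alpha": 0.6, "beta": 0.3, "delta": 0.8, "gamma": 0.1, "theta": 0.5}

SAMPLE_RATE = 44100
DURATION = 2.0  # seconds

# Each band drives a sine tone: band power controls amplitude, and each
# band gets its own (arbitrarily chosen) base frequency.
base_freqs = {"delta": 55.0, "theta": 110.0, "alpha": 220.0,
              "beta": 440.0, "gamma": 880.0}

t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
signal = sum(power * np.sin(2 * np.pi * base_freqs[band] * t)
             for band, power in bands.items())
signal /= np.max(np.abs(signal))  # normalize to avoid clipping

# `signal` could now be written to a WAV file or sent to a speaker
# driving a dish of water, as in Park's installation.
```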

data that is translated into sound

I find this project especially interesting because of the way Park translates her emotional and mental state both visually and audibly. The result is a beautiful, serene piece of computational music that, in a sense, allows you to connect with the artist. I also admire the randomness that derives from the “instruments”: the way the water droplets move in reaction to the brainwaves, and even the data collected (her brain activity), are rather unpredictable.

from brainwaves to sound waves to speakers to water plate

Park’s focus on people and their activities, along with her interest in quantifying and visualizing such data, is successfully manifested in this project. After its creation in 2013, the project was followed by a sequel, “Eunoia II,” in 2014.

More information on the project can be found on the Project Page.

sntong-LookingOutwards-11- Computer Music

Chris Carlson is a sound artist and software developer. He holds an M.A. from Stanford University’s Center for Computer Research in Music and Acoustics and a B.S. in physics from James Madison University. His award-winning application, Borderlands Granular, gives people a new, visually and tactilely interactive way to make music using “grains.” The video below is a short demo showing how users can manipulate the music through visual icons tied to the sound being emitted. In one of his recent performances, Body Drift, Carlson partnered with Jakob Marsico to create an immersive audiovisual performance.
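For readers unfamiliar with granular synthesis, the technique Borderlands is built around, here is a toy sketch (assuming NumPy and a mono source buffer; all parameters are arbitrary, and this is a drastic simplification of what Borderlands actually does):

```python
import numpy as np

def granular(source, sr=44100, grain_ms=80, n_grains=200, spread=0.5):
    """Tiny granular-synthesis sketch: scatter short, windowed 'grains'
    taken from random positions in `source` across an output buffer.
    A toy illustration, not Borderlands' actual engine."""
    grain_len = int(sr * grain_ms / 1000)
    out_len = len(source) + int(sr * spread)
    out = np.zeros(out_len)
    window = np.hanning(grain_len)  # fade each grain in and out
    rng = np.random.default_rng(0)
    for _ in range(n_grains):
        src_pos = rng.integers(0, len(source) - grain_len)
        dst_pos = rng.integers(0, out_len - grain_len)
        out[dst_pos:dst_pos + grain_len] += source[src_pos:src_pos + grain_len] * window
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out

# Example: granulate two seconds of a 220 Hz sine "source recording".
sr = 44100
t = np.arange(2 * sr) / sr
texture = granular(np.sin(2 * np.pi * 220 * t), sr=sr)
```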