Looking Outwards 04: Sound Art

The project I chose is Experiments in Musical Intelligence (EMMY), designed by David Cope. Cope works at the intersection of sound and computation, and this project composes complete works in the styles of multiple classical composers. For example, in Mozart-style Sonata 2-1, the algorithm composed the piece in Mozart's style based on an analysis of pieces Mozart actually wrote. What I admire most about the project is its potential to extend the lives of great musicians: by using algorithms, we can still hear new pieces from composers who are long gone. This is extremely exciting because it is a way for human beings to achieve a kind of eternity.
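Cope's actual system relies on detailed analysis and recombination of a composer's existing works, and its internals are far more elaborate than anything shown here. As a purely illustrative sketch of the underlying idea, learning how a composer tends to move from note to note and then generating a new sequence from those statistics might look like this (a toy Markov-chain model, not Cope's algorithm; the "corpus" is made up):

```python
import random

def build_transitions(pieces):
    """Count which notes tend to follow which across the source pieces."""
    transitions = {}
    for piece in pieces:
        for current, nxt in zip(piece, piece[1:]):
            transitions.setdefault(current, []).append(nxt)
    return transitions

def generate(transitions, start, length=16):
    """Walk the transition table to produce a new melody in a similar 'style'."""
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:          # dead end: fall back to the opening note
            options = [start]
        melody.append(random.choice(options))
    return melody

# Toy "corpus": two melodic fragments written as MIDI note numbers.
corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
          [60, 64, 67, 72, 67, 64, 60]]

table = build_transitions(corpus)
print(generate(table, start=60))
```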

Reference
David Cope, Mozart-style Sonata 2-1, 2013.

lo: sound art

Rather than one specific project, I'd like to talk about a genre that depends heavily on digital production to experiment and push the boundaries of music. Hyperpop is a fairly new genre in mainstream culture, defined by its exaggerated and dramatic character compared to contemporary pop. Many of my personal favorite songs include larger-than-life basslines, quirky sound effects, and almost nonsensical lyrics, if any. Much of this experimentation is possible because of digital audio workstations and tools, which make and mix sounds that would never occur in traditional music. Certain artists, like SOPHIE, also viewed music-making as an entirely different process than people of the past would have. She described her process as something like building a sculpture, where each sound mimicked a certain material and sensation. While there isn't one specific way these artists generate music, I think this subculture encapsulates how the advent of technology in the musical space has broadened artists' possibilities.

24HRS by ITZY – produced by hyperpop artist SOPHIE

Looking Outwards 04: Sound Art

This project is an interactive, responsive play-space piece that makes "musical rhythms, harmonies and luminous patterns" based on people's movements, both active and passive. It is designed to respond to people's activity underneath it with generated surround sound and light (an umbrella-like sonic shade woven from electroluminescent fibers). It also runs custom-designed software that collects online weather data and translates it into the light patterns of the umbrella columns. I admire how aesthetically pleasing this is, and how easily the audience can interact with it.
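Loop.pH has not published how that weather software works, so the following is only a rough sketch of the general idea of translating one weather variable into a lighting pattern; the wind-speed mapping, brightness range, and fiber count are all hypothetical:

```python
def wind_to_brightness(wind_speed_mps, max_wind=20.0):
    """Map a wind-speed reading (m/s) to a 0-255 brightness level,
    clamped so calm air still produces a faint glow."""
    fraction = min(wind_speed_mps / max_wind, 1.0)
    return int(30 + fraction * (255 - 30))

def frame_for_fibers(wind_speed_mps, num_fibers=12):
    """Build one lighting frame: brightness falls off along the canopy
    so the pattern reads like a gust sweeping across the fibers."""
    base = wind_to_brightness(wind_speed_mps)
    return [max(0, base - i * 10) for i in range(num_fibers)]

# Example: a 7 m/s breeze reported by an online weather feed.
print(frame_for_fibers(7.0))
```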

As an artist, I am drawn to more ethereal and subversive pieces, and I love how this project transports the audience into a new world and space that directly responds to their movements (like a world that grows and is almost alive around them). I also love how each person becomes a sort of unique instrument in the composition of light, sound, and space.

The artists (Loop.pH) wanted to combine design, architecture, and science to break the boundaries of specialized thinking and spark innovative creativity, which they manifested in an installation with a dream-like, delicate structure and ethereal light technology.

link to video of the work

Sonumbra de Vincy, Responsive Light Emitting Environment (2008) – Loop.pH

Blog – 04

I am most inspired by the work ‘FORMS – String Quartet’, a live multimedia performance for string quartet, electronic music, and panoramic visuals. The project resides in the field of visual sonification, meaning that sound is made from visual elements. I admire the work because it brings together not only shapes, colors, and movement but also sound; when all of those elements come together, they create a completely immersive and engaging experience. The project uses a real-time visual music-score generator designed with a set of graphic-generation rules driven by randomness and probability. On their website, the creators say they use spectral-synthesis algorithms to transform the graphics into sound. The graphics are very interesting because they change shape, size, and color, and they differ depending on whether the notes are bass or treble. For instance, if the notes are bass the graphic prints at the bottom of the screen, whereas if the notes are treble the graphic is printed toward the top of the screen.
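Playmodes has not released the score generator's code, but the bass-at-the-bottom, treble-at-the-top behavior described above amounts to a simple mapping from pitch to screen position. A minimal sketch of that mapping (the note range, canvas size, and size rule are assumptions, not the project's actual values) could look like this:

```python
def note_to_y(midi_note, screen_height=720, low=36, high=96):
    """Place low (bass) notes near the bottom of the screen and
    high (treble) notes near the top, as described in the post."""
    midi_note = max(low, min(high, midi_note))
    fraction = (midi_note - low) / (high - low)   # 0 = lowest, 1 = highest
    return int(screen_height * (1 - fraction))    # y grows downward on screens

def note_to_size(velocity, min_px=4, max_px=40):
    """Louder notes draw larger shapes (velocity is MIDI 0-127)."""
    return int(min_px + (velocity / 127) * (max_px - min_px))

# A bass note lands low on the canvas, a treble note lands high.
print(note_to_y(40), note_to_y(90))
```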

https://www.creativeapplications.net/maxmsp/forms-string-quartet/ (2021)

By: Katie Makarska

LO 04: Sound Art

The project ‘FORMS – String Quartet’ by Playmodes is a live performance for string quartet and electronic music, accompanied by a projected visual sonification of the music. It transcribes the music in real time into a series of graphic notations in the form of lines, circles, curves, and other shapes. I think visualizing music is intriguing, and the patterns of the music generate different forms of lines and representations that are beautiful in their own right. This could really help people who are hard of hearing, or who simply prefer visual experiences, to in a sense hear what they see. The graphics the music produces also seem to tell a story about what the music is like, and the system is able to isolate each instrument used in the piece. This reminds me of the way EDM concerts have light shows that follow the patterns and beats in the music to create a more immersive experience. The visuals can also be transformed into music: the software that makes the graphics was coded in Processing, and the image sonification was done in Max/MSP. The algorithm can isolate each instrument or rhythm and transform it into visuals, and the other way around, so whatever is on the screen can be translated into music.

Website Link

LO: Sound Art

I wanted to talk about Trope, the interactive generative sound-art app by Brian Eno (yes, I know he's like the generative-sound guy, but I like his work!) and Peter Chilvers. As one traces abstract forms onto the screen, the program varies the tone. There are also additional instruments, so one can vary the sounds and create a whole symphony of interesting sounds whose existence emerges from human interaction with the program. I personally admire its interactive nature. I love randomness, but when you give people a way to experience it, to watch it unfold under their fingertips, that is something I find really satisfying. As the tone shifts randomly (in accordance with the principles of generative music, randomness being key to making it "last forever"), you create songs that are entirely unique to your own experience at that moment and can't easily be recreated by someone else. Tone and pitch are likely related to things like the x and y positions on the screen, and potentially to pressure and tilt, since the program is made for smartphones, tablets, and the like. This is another of Brian Eno's apps, of which I think he's made a few, and it seems to be a spiritual successor to his previous work, Bloom.
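Eno and Chilvers haven't documented Trope's internals, so the mapping below is purely speculative, building on the guess above: touch position picks a pitch from a fixed scale (a common trick in generative-music apps so every gesture sounds consonant), and a little randomness keeps any two performances from matching. The scale, screen size, and velocity rule are all assumptions:

```python
import random

# A pentatonic-flavored scale as MIDI note numbers; constraining output to a
# scale is a common way generative apps make any gesture sound pleasant.
SCALE = [60, 62, 65, 67, 70, 72, 74, 77]

def touch_to_note(x, y, width=1080, height=1920):
    """Map horizontal touch position to a pitch from the scale and vertical
    position to loudness (both assumptions, not Eno's actual design)."""
    index = int(x / width * (len(SCALE) - 1))
    velocity = int(40 + (1 - y / height) * 80)    # higher on screen = louder
    return SCALE[index], velocity

def drift(note, velocity):
    """A small dose of randomness so repeated gestures never sound identical."""
    return note, max(20, min(127, velocity + random.randint(-10, 10)))

print(drift(*touch_to_note(x=540, y=600)))
```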

LookingOutwards-04 (Section A)

I was interested in "Mozart style Sonata 3-3" by David Cope. I find it interesting how a computer can create music that actually sounds like music to people, as opposed to just playing random notes. Originally, I thought Cope had generated all of the music purely through code, and I was really impressed. Cope's artistic sensibilities are revealed because he would have to understand music theory to be able to produce music. At the same time, he would also have to understand Mozart's style very well to be able to write code that produced similar music.

However, as I was looking into how it was made, I read comments saying that it seemed like just a mash-up of many of Mozart's sonatas. In that case, his code may just be taking parts of Mozart's sonatas and having the computer put them together. If so, this is not quite as advanced as I thought, but I still think the music is pretty cool!

I'm not sure which one it is. If Cope was really able to code all of the music to sound like Mozart and the computer generated all of it, that is really impressive; but either way, it is interesting how computers can create music, which is usually thought of as something very human.

Here is the link: David Cope, "Mozart style Sonata 3-3," 2013.

Looking Outwards 04: Sound Art

Sugarcube is a generative art project by Amanda Ghassaei that uses MIDI and Max/MSP. The project takes real-life movement and generates sounds from it. I suppose that the project uses hardware to track the movement of the MIDI controller and generates a certain pattern of sound using an algorithm. The creator combines her artistic sensibility with the existing generative nature of a MIDI pad to create a great generative piece. The project also uses the same hardware as normal MIDI pads, which makes it very practical. It is also impressive that the MIDI pad connects to Max/MSP, the program that lets you control sound on your computer.
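Ghassaei's actual patch runs in Max/MSP and isn't reproduced here; the sketch below is only a guess at what "movement generates sound" might mean in code, mapping a hypothetical tilt reading from the pad to MIDI note numbers (the axes, ranges, and grid offsets are invented for illustration):

```python
def tilt_to_note(tilt_x, tilt_y, base_note=48):
    """Map accelerometer tilt (roughly -1.0 to 1.0 on each axis) to a MIDI note:
    tilting right raises the pitch in semitones, tilting forward jumps an octave."""
    step = int((tilt_x + 1.0) / 2.0 * 12)      # 0-12 semitones from left to right
    octave = 12 if tilt_y > 0.3 else 0         # a forward tilt pops up an octave
    return base_note + step + octave

def pad_press(row, col, tilt_x, tilt_y):
    """Each button on a 4x4 grid offsets the tilt-derived note, so the pad's
    layout and the player's motion shape the pattern together."""
    return tilt_to_note(tilt_x, tilt_y) + row * 4 + col

print(pad_press(row=1, col=2, tilt_x=0.5, tilt_y=0.0))   # one generated note number
```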

https://vimeo.com/91259876


Looking Outwards – 04

I researched Supersynthesis, a 2022 project by Amay Kataria that was presented at Mu Gallery in Chicago. I like how the project emulates a wave of light and sound, and how it allows viewers to become users by digitally controlling the exhibit. Users can control how the exhibit displays light and sound in real time. This allows for community building, and potentially for performances by artists and/or musicians controlling the exhibit. According to the artist, the project is reminiscent of Olafur Eliasson's The Weather Project (2003), which is also about interacting with the community. The shape of the structure and the interactivity of the project are manifestations of the artist's interests. However, I wish the project had physical changes that respond to the user beyond light movement and sound change; perhaps the bars could raise and lower as they light up and emit sound, or perhaps they could change colors.

Light & Sound Synthesis: In Conversation with Amay Kataria – CreativeApplications.Net

Blog 04: “Sound Machines”

By Ilia Urgen
Section B

"A visual instrument to compose and control electronic music in a comprehensible and responsive way." – MediaArtTube, January 28, 2012. I love how this modern audiovisual concept is based on a timeless design used throughout the greater portion of the 20th century: the record player.

I am truly fascinated and intrigued by this stunning piece of technology. As described by the creator, Sound Machines consists of three units, each resembling a vinyl record player. Each unit has the capacity to hold three tracks, just like traditional record players.

MediaArtTube, however, gives this classic design a 21st-century makeover. In a Sound Machine there is no direct contact between the needle and the groove of the disc. Instead, signals received from the laser-light "needle" are synced to a sequencer, producing the sound output.
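The exact signal chain isn't spelled out in the video, but the behavior described, a light sensor reading colored segments off a spinning disc and driving a sequencer, can be sketched roughly like this (the colors, sample names, and disc layout are invented for illustration):

```python
# A toy step sequencer in the spirit of the description above: each disc is a
# ring of colored segments, and the color the laser "needle" sees on each step
# decides which sample to trigger.
DISC = ["red", "none", "blue", "none", "red", "red", "none", "blue"]
SAMPLES = {"red": "kick", "blue": "hat"}

def play(disc, revolutions=2):
    """Return (step, sample) pairs triggered while the disc spins."""
    triggered = []
    for step in range(len(disc) * revolutions):
        color = disc[step % len(disc)]            # the segment under the laser
        if color in SAMPLES:
            triggered.append((step, SAMPLES[color]))
    return triggered

print(play(DISC))
```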

Sound Machines are definitely a cool way to mix various digitally transmitted tracks together, and I hope we continue to see greater implementation of this technology in everyday life.

(YouTube link: https://www.youtube.com/watch?v=_gk9n-2lBb8)

One of the three disc units of a Sound Machine. Each color represents a different sound file electronically stored on the disc.