Rjpark – Looking Outwards 04

Photo of Šarapovas’ Musical Sculpture

Artist Andrius Šarapovas’ room-sized, interactive, musical sculpture features 77 individual segments, each comprising a metal bar, sound activator, sound damper, resonator, and mechatronics, that together make generative music. The piece uses 16 notes (C, D, F, and G spread across four octaves), which are distributed throughout the room so that one hears different compositions at different points in the space. Notes are triggered by the project’s algorithm, which is driven by activity from apps on Tele2’s 4G network: one second of 4G statistical data generates one second of music. With many phones and many segments, this activity creates rhythm, musical pattern, and volume, while pitch is determined by the amount of data being downloaded on the phones. The algorithm seems so simple, yet its outcome is amazing. “The probability of the repetition of an identical four-note combination is about once in every 98,304 activations”, Šarapovas explains. What’s so admirable about this project is that the algorithm is very straightforward but the result is complex and combinatory; you can see the artist’s sensibility through this fact. As the article states, “Early on, Šarapovas settled on a minimalist spatial object that would consist of a number of identical segments… to balance the sound, control noises, dampen different notes, and activate sounds”.
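As a rough illustration of the kind of data-to-pitch mapping described above, here is a small Python sketch. The function name, traffic scale, and note layout are my own assumptions for illustration, not the artist’s actual algorithm:

```python
NOTES = ["C", "D", "F", "G"]   # the four pitch classes used
OCTAVES = [3, 4, 5, 6]         # spread across four octaves -> 16 pitches

def note_from_traffic(kilobytes_downloaded, max_kb=1000):
    """Map one second of download volume to one of the 16 pitches;
    heavier traffic yields a higher pitch. Scaling is invented."""
    level = min(kilobytes_downloaded, max_kb) / max_kb  # normalize to 0..1
    index = int(level * 15)                             # 0..15 over 16 pitches
    return f"{NOTES[index % 4]}{OCTAVES[index // 4]}"

print(note_from_traffic(0))     # quiet network -> "C3"
print(note_from_traffic(1000))  # saturated network -> "G6"
```

Even a mapping this simple, run once per second across dozens of segments, yields the combinatorial variety the artist describes.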

Šarapovas’ Massive Robotic Instrument

Eliza Pratt – Looking Outwards 04 – Section E

Robert Henke’s audiovisual installation, Spline, uses lasers and computer generated sound to evoke a dreamlike experience. 

Robert Henke is an engineer and audiovisual artist from Germany. Using a complex system of physics, lights, and synthesizers, Henke creates surreal installations that feel both futuristic and otherworldly. In his 2017 installation, Spline, he couples lasers with computer-generated sounds to emulate patterns reminiscent of astronomical phenomena. Using the Max programming language, Henke has developed his own synthesizers, such as the Granulator II. Operating through “quasi-synchronous granular synthesis,” these instruments allow the user to modify the pitch and volume of grains to create unique sounds. I admire Henke for his ability to transform light, sound, and space using both scientific and artistic methods. The sounds he incorporates into his installations are in perfect harmony with the mesmerizing visuals, working together to create an ethereal, omnipresent experience.
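The core idea of granular synthesis is easy to sketch: a source sound is chopped into tiny “grains,” each shaped by an envelope so it fades in and out, and many grains are overlapped. This Python toy is my own simplification of the general technique, not Granulator II’s actual design:

```python
import math

def grain(source, start, length):
    """Extract one grain from a list of samples and shape it with a
    Hann (raised-cosine) envelope so it fades in and out without clicks."""
    out = []
    for i in range(length):
        env = 0.5 * (1 - math.cos(2 * math.pi * i / (length - 1)))
        out.append(source[(start + i) % len(source)] * env)
    return out

# Overlapping many such grains, each with its own pitch and volume,
# produces the dense, shimmering textures granular synthesis is known for.
```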

Looking Outwards – 04 Min Lee

LOOPLEX is an interactive user interface that manipulates sound and color through hexagonal markers placed by the user on the device. As the user plays with the markers, a DV cam inside the device tracks the variations in the markers’ angles to trigger sound loops and changing color effects.
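One way a system like this could turn a marker’s orientation into a loop choice is by quantizing the angle into slots. This Python sketch is a guess at the general approach (the function name and the number of loops are my assumptions, not details from LOOPLEX):

```python
def angle_to_loop(angle_deg, n_loops=6):
    """Quantize a marker's rotation (degrees) into one of n loop slots,
    the way a camera reading marker orientation might select a sample."""
    return int((angle_deg % 360) // (360 / n_loops))

# Rotating a hexagonal marker through a full turn would step
# through all six loop slots, one per 60-degree sector.
```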

This project intrigued me because of its resemblance to a modern deejay’s turntable. But LOOPLEX’s rendition of the turntable incorporates both changing audio and colors in a more futuristic design. Instead of reacting to the user’s physical touch through a button or dial, LOOPLEX uses a camera to sense the slightest changes in the markers’ motion or position, meaning it could become an instrument for highly skilled deejays and musicians in the future. For now it is still a simple prototype that delivers simple functions, but in my opinion it is a project that could be developed for practical use in the real world.

Source:  https://vimeo.com/3546180

Looking Outwards 04

https://www.youtube.com/watch?v=wsITBynbeTU

This is a video and a link to the piece.

This project really interested me because of the interaction the viewer can have and how much agency is actually given to the viewer. In my opinion, the art that interests me most is usually the kind I can interact with. This audio-reactive installation was made by three artists in the artist collective The Creators. The algorithms that generated the work probably depended on the viewer’s distance from the installation; similarly to mouseX, they might have used a camera that based the sound and video movement on a person’s proximity to it. The artists’ concept is implemented in their execution: the viewer’s participation, or lack thereof, shows the concepts of causality, responsibility, and choice. In this piece, the viewer becomes the creator, and the sound and visuals react to the viewer’s choices.
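The mouseX analogy can be made concrete: just as mouseX maps a cursor position to a parameter, a camera-based installation could map a viewer’s distance to volume. This Python sketch is purely illustrative; the names and ranges are assumptions, not the collective’s actual code:

```python
def proximity_to_volume(distance_cm, max_cm=300):
    """Map a viewer's distance to a 0..1 volume: louder as they approach,
    silent beyond max_cm. Analogous to scaling mouseX across the canvas."""
    d = max(0, min(distance_cm, max_cm))  # clamp to the sensed range
    return round(1 - d / max_cm, 2)       # 1.0 up close, 0.0 far away
```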

Justin Yook – Looking Outwards 04

The sculptures have distinct shapes and distances from each other

Yuri Suzuki Design collaborated with the High Museum of Art in Atlanta to create Sonic Playground, an outdoor display of sound-interactive sculptures. There are six steel sculptures in total, and they manipulate sound depending on where visitors stand, listen, and talk. The calculations for optimizing each sculpture were done with two pieces of software, Grasshopper and Rhinoceros 3D; Grasshopper is a tool for writing programs visually, and Rhinoceros 3D is for making 3D models. Combining the two allowed the creators to control the sound source, the direction of the sound, and how sound reflects off the various sculpture pieces. The creator’s artistic sense shows through the distinct shapes of the sculptures and the bright colors added to make them seem more fun. I think this project is interesting because of how interactive it can be; letting people interact by moving around the sculptures adds a new dimension to sound art.

Source: http://www.creativeapplications.net/sound/sonic-playground-playful-acoustics-by-yuri-suzuki-design/

Yoo Jin Shin-LookingOutwards-04

Multiverse

The video above captures “Multiverse,” an audio-visual installation by fuse* that attempts to create the “eternal birth and death of infinite parallel universes.” This project is inspired by Lee Smolin’s multiverse theory. “According to this theory, our universe is only one in a much larger cosmos (the Multiverse), a member of a growing community of universes, each one being born from the collapse following the formation of a black hole.”

The various, unique scenes are generated by an application developed in openFrameworks, while the sounds are produced using a generative sound system in Ableton Live and Max/MSP. I think the eerie sounds fading in and out match really well with the visuals and play a significant part in creating this space-like experience.

Overall, I think the visuals and sounds in this installation are truly mesmerizing. Simply looking at it through a screen has a powerful effect, so I wonder what it would feel like to be the person in the video, immersed in the Multiverse. I think it’s interesting that the creators took a theory proposed in the science community and physically conceptualized it through this audio-visual installation.

Yingying Yan LookingOutwards-04

An example of how the data is sorted visually

Sorting is a visualization and sonification project created by Ren Yuan, based on 31 algorithms and made using Processing. It lets the viewer see, and hear, data being sorted by seven sorting algorithms: insertion sort, shell sort, bubble sort, quick sort, selection sort, heap sort, and merge sort. This project is very interesting and visually compelling. I love how Ren uses pattern and algorithm to turn sorting a list of data on a black-and-white screen into a piece of digital art that really catches people’s attention. I may not know this project’s performance criteria, but as an art piece, it is successful.

One of the sorting types
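Sonifying a sort comes down to recording the events the algorithm produces and mapping each one to a sound. This Python sketch shows the general technique for bubble sort; it is my own illustration, not Ren Yuan’s actual code:

```python
def bubble_sort_events(data):
    """Run bubble sort and record each value as it is swapped.
    In a sonification, each recorded value could be mapped to a pitch
    and each step of the loop to a tick of the clock."""
    arr = list(data)
    events = []
    for i in range(len(arr)):
        for j in range(len(arr) - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                events.append(arr[j])  # the value that moved left = a "note"
    return arr, events
```

Different algorithms produce very different event streams from the same input, which is exactly what makes each of the seven sorts look and sound distinct.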

Kyle Leve-LO-04-Section A

A project that I discovered is FlowComposer, music-generating software created by Sony CSL as a part of their Flow Machines program. The software generates backing tracks of jazz standards that emulate the style and groove of specific big-name jazz artists. The project seems to have started in 2014 and has continued to grow since. What I admire about this software is that it tackles one of the main problems of robotic emulation: lack of expression. Even though there is no way for a machine to fully capture a person’s emotions, it is a start toward getting a machine to copy the style of musicians. The software is also able to re-harmonize melodies, create variations, and render the audio with different instruments. It gives jazz musicians a way to play with a rhythm section when they do not have access to a live one.

For more info:

FlowComposer: composing with AI

Victoria Reiter – Looking Outwards – 04

Vapor-Space creates an interactive Soundscape that reacts to data collected from the human body

People place their fingers on sensors to collect biological data from their skin

 

Vapor-Space is an interactive soundscape installation whose name reflects how the data that feeds the soundscape is collected. The installation gathers data on the humidity of the room and the temperature of the participant’s skin touching the sensor, then calculates a vaporization rate that is translated into sound.
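A toy version of that pipeline might look like the Python below. The formula is invented purely for illustration (drier air and warmer skin mean faster evaporation); the installation’s actual model is not described in detail:

```python
def vapor_rate(humidity_pct, skin_temp_c):
    """Invented stand-in for the installation's idea: combine room
    humidity and skin temperature into a 0..1 evaporation rate."""
    dryness = (100 - humidity_pct) / 100          # drier air -> faster
    warmth = max(0, skin_temp_c - 20) / 20        # warmer skin -> faster
    return round(dryness * warmth, 3)

def rate_to_frequency(rate, base_hz=110, span_hz=770):
    """Map the 0..1 rate onto an audible frequency band."""
    return base_hz + rate * span_hz
```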

The creators at Vapor-Space highlight that the micro-workings of the body are themselves “an orchestra,” so in a way their soundscape seeks to amplify this symphony to a degree audible to the human ear.

What interests me about this project is how it brings together two seemingly opposing elements, electronics and water (hahaha, if you’ve ever dropped your phone in the toilet or a hairdryer in the bathtub while you’re in it, you’d understand), and makes them work together to create sound.

 

Video demonstrates the use of Vapor-Space

 

Full information available here.

Looking Outwards – Music and Computation

A MaxMSP patch that turns weather data into music – January 2016

I’ve looked at weather data before as a part of site analysis. I can’t see the connection to this.

Algorithmic Composition is, as the username suggests, an algorithmic composer. Since 2009, they’ve been blogging and posting tutorials on just that. They made the MaxMSP (Max being a visual programming language focused on media) patch above that translates weather data into sound. What’s impressive to me is how much more it does; the patch has options for changing instruments, octaves, pitch, and tempo for the four tracks that can be mixed. As a result, it can do so much with just a single data set, although just by listening I don’t see how someone could trace it back to its origins.
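The basic move of translating a data set like this into notes can be sketched in a few lines. This Python example maps daily temperatures onto a pentatonic scale; the scale choice and temperature range are my assumptions in the spirit of the patch, not details taken from it:

```python
PENTATONIC = [60, 62, 65, 67, 69]  # MIDI note numbers: C4 D4 F4 G4 A4

def temps_to_midi(temps_c, low=-10, high=40):
    """Scale each temperature into the low..high range, then pick
    the corresponding degree of the pentatonic scale."""
    notes = []
    for t in temps_c:
        frac = (min(max(t, low), high) - low) / (high - low)  # 0..1
        idx = min(int(frac * len(PENTATONIC)), len(PENTATONIC) - 1)
        notes.append(PENTATONIC[idx])
    return notes
```

Swapping the scale, octave, or tempo, as the patch allows, changes the character of the output entirely while the underlying data stays the same, which is why the result is so hard to trace back to the weather by ear.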

Algorithmic Composition says

“You don’t have to be a programmer to make algorithmic music, just an interest in creating music in new ways”

and I think that shows through in their work. It’s making music in a new way.