atraylor – Looking Outward 04 – Section B

Ales Tsurko’s microscale is a web-based album that takes Wikipedia articles and transforms them into real-time generative music. The articles are processed as step sequencers, with each letter representing a sequencer step. When a letter is read, it plays a sound. Tsurko’s concept is to transform meaning as the text is morphed into sound. He is also playing with the idea of dynamic music, since his project is published on an interactive web page rather than composed and recorded once.
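A minimal sketch of the idea (not Tsurko’s actual implementation): treat each character of a text as one sequencer step, map letters to pitches, and treat spaces as rests. The sample phrase, tempo, and letter-to-pitch mapping below are my own assumptions.

// Text as a step sequencer (p5.js + p5.sound).
// The sample phrase, tempo, and letter-to-pitch mapping are illustrative assumptions.
var phrase = "chainless bicycles";
var osc;
var step = 0;

function setup() {
    createCanvas(400, 100);
    frameRate(4);                        // four sequencer steps per second
    osc = new p5.Oscillator('sine');
    osc.amp(0);
    osc.start();
}

function draw() {
    background(240);
    var c = phrase.charAt(step % phrase.length);
    if (c === ' ') {
        osc.amp(0, 0.05);                // a space is a rest
    } else {
        var note = 48 + (c.charCodeAt(0) - 97);   // map 'a'..'z' onto a range above MIDI note 48
        osc.freq(midiToFreq(note));
        osc.amp(0.3, 0.05);
    }
    fill(0);
    textSize(32);
    text(c, width / 2, height / 2);      // show the current "step"
    step = step + 1;
}

function mousePressed() {
    userStartAudio();                    // browsers need a click before audio can start
}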

When I listened to microscale, I heard the audio form of chainless bicycles, anthrax, and vodka. The album includes several tracks whose titles set different atmospheric moods in which the text is interpreted.

I admire this piece because I’m interested in the transformation of words to something less tangible. I like that words can incite emotions and responses that are unique to the individual. This project is a way for words to be transformed beyond their meaning.

The web interface of microscale

Connie – LookingOutwards – 04

My previous LookingOutwards example for week 3 would also have fit perfectly with this week’s theme, but the Turkish artist Memo Akten has many other interesting projects that combine interactivity, music, and smart algorithms. Another of his projects is Webcam Piano 2.0 (2010).

This is a class(-ical) example of creating sound and music through unconventional means. Users can generate beautiful music through an algorithm, made with openFrameworks, that tracks their fingers, hand gestures, and other body movements. The updated 2.0 version introduces additional features, including finer, more precise movement tracking, and even interprets the movement to create music that reflects different emotions by playing in a different musical mode and changing the color scheme. I find this project particularly beautiful, especially version 2.0, because it makes musical expression accessible to those who may not be classically or technically trained on the piano, or in music at all, yet gives users an outlet to express themselves and their emotions through its fine-grained reading of their facial expressions and body language.
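Since the project’s own code is unavailable (noted below), here is a rough p5.js sketch of the general mechanism rather than Akten’s implementation: divide the webcam image into a grid, measure how much each cell changes between frames, and trigger a note when the change crosses a threshold. The grid size, motion threshold, and note set are my own assumptions.

// Rough webcam-instrument sketch: per-cell frame differencing triggers notes.
// Grid size, motion threshold, and the note set are illustrative assumptions.
var video;
var prev;                                    // previous frame's pixels
var cols = 8;
var rows = 6;
var osc;
var notes = [60, 62, 64, 67, 69, 72, 74, 76];   // one pitch per grid column

function setup() {
    createCanvas(320, 240);
    video = createCapture(VIDEO);
    video.size(320, 240);
    video.hide();
    osc = new p5.Oscillator('triangle');
    osc.amp(0);
    osc.start();
}

function draw() {
    video.loadPixels();
    image(video, 0, 0);
    var triggered = false;
    if (prev) {
        var cw = width / cols;
        var ch = height / rows;
        for (var i = 0; i < cols; i++) {
            for (var j = 0; j < rows; j++) {
                // sample one pixel near the center of each cell to keep it cheap
                var x = floor((i + 0.5) * cw);
                var y = floor((j + 0.5) * ch);
                var idx = 4 * (y * video.width + x);
                if (abs(video.pixels[idx] - prev[idx]) > 40) {
                    osc.freq(midiToFreq(notes[i]));
                    osc.amp(0.2, 0.05);
                    triggered = true;
                    noFill();
                    stroke(255);
                    rect(i * cw, j * ch, cw, ch);   // highlight the active cell
                }
            }
        }
    }
    if (!triggered) {
        osc.amp(0, 0.2);                     // fade out when nothing is moving
    }
    prev = video.pixels.slice();             // keep this frame for the next comparison
}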

Users playing around with the Webcam Piano. Image Credit: http://www.memo.tv/webcam-piano-2/

*Unlike some of his other works, the code for this project is unavailable.

aranders-lookingoutwards-04

Sonic Pendulum is a soundscape created by Yuri Suzuki Design Studio and QOSMO in 2017. The artificial intelligence behind Sonic Pendulum draws on the atmosphere around it to create sounds of harmony. These sounds are created using speakers and pendulums; the pendulums let the Doppler effect help dictate the music. Every sound it creates is a response to what is around it, so the harmonies never repeat and continuously change. I admire this project for its interactivity and its mellifluous quality. The environment can easily soothe a stressed person (I wish I could benefit from its atmosphere at this moment). The project uses a deep learning algorithm trained on compositions, whose output changes in response to the people and noises around it. The project embodies the artist’s idea of order emerging from chaos.

link

NatalieKS-LookingOutwards-04

Created by Amanda Ghassaei, the Sugarcube is a portable MIDI controller with the capability to connect with up to 16 apps. It implements both buttons and “shake and tilt” features, allowing the user to manipulate sounds by tilting the Sugarcube one way or another. The creator programmed it with Arduino so that the device does all of its app processing internally rather than relying on another computer. The device is also able to store sounds that correspond to different buttons. The creator, a grad student working at the MIT Media Lab, used their knowledge of interactivity and media to create a device that is both user-friendly and fun.
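As a quick browser sketch of that tilt-to-sound idea, the example below uses mouse position as a stand-in for the Sugarcube’s accelerometer and an oscillator as a stand-in for MIDI output; the pitch range and mapping are my own assumptions, not Ghassaei’s Arduino code.

// Tilt-to-sound sketch: mouse position stands in for the accelerometer,
// and a p5.Oscillator stands in for MIDI output. Ranges are assumptions.
var osc;

function setup() {
    createCanvas(400, 400);
    osc = new p5.Oscillator('square');
    osc.amp(0);
    osc.start();
}

function draw() {
    background(30);
    var note = floor(map(mouseX, 0, width, 48, 72));    // left/right "tilt" picks a pitch
    var level = map(mouseY, 0, height, 0.4, 0);          // forward/back "tilt" sets loudness
    osc.freq(midiToFreq(note));
    osc.amp(level, 0.1);
    fill(255);
    noStroke();
    text('note ' + note, 10, 20);
}

function mousePressed() {
    userStartAudio();                    // browsers need a click before audio can start
}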

I admire the simple and clear aesthetic of the Sugarcube, because it is easy to use without sacrificing beauty. The back-lit buttons create a really beautiful visual while also producing sounds and patterns, so you can see the music you’re making. It looks so simple, yet all of the code that went into it is fairly complicated and long.

svitoora – 04 Looking Outwards

Georgia Tech’s Shimon robot writes and plays its own music using machine learning.

The Shimon robot was trained on 5,000 songs and two million motifs, riffs, and licks. The algorithm behind it uses a neural network that simulates the brain’s bottom-up cognition process. The result sounds very soothing; according to the article, the song it wrote is a blend of jazz and classical music. What I admire most about this project is that the music as well as the performance is entirely generated, yet it still sounds human and not robotic. This robot makes debatably “creative artistic decisions” by synthesizing novel music from pre-existing pieces. I also admire the performance. Instead of pre-defining the note locations on the keyboard by assigning them position variables, the robot uses computer vision through a camera on its robot head, which actively rotates, pans, and scans its field of vision the same way an actual musician does when playing the keyboard. If I closed my eyes, I could be fooled into thinking this is a human.
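The article credits a neural network trained on thousands of songs; as a far simpler stand-in for that idea, the sketch below grows a melody from a seed motif with a first-order Markov chain. The seed motif, note numbers, and output length are my own assumptions, not Shimon’s model.

// Toy melody generator: a first-order Markov chain built from a seed motif.
// This is a drastically simplified stand-in for Shimon's neural network.
function buildTransitions(motif) {
    var table = {};
    for (var i = 0; i < motif.length - 1; i++) {
        var from = motif[i];
        if (!table[from]) {
            table[from] = [];
        }
        table[from].push(motif[i + 1]);      // remember what followed this note
    }
    return table;
}

function generateMelody(motif, len) {
    var table = buildTransitions(motif);
    var note = motif[0];
    var melody = [note];
    for (var i = 1; i < len; i++) {
        var options = table[note] || motif;  // dead end: fall back to the motif
        note = options[Math.floor(Math.random() * options.length)];
        melody.push(note);
    }
    return melody;
}

// Seed motif in MIDI note numbers (made up for illustration)
console.log(generateMelody([60, 63, 65, 66, 67, 70, 72, 67, 65, 63], 16));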

(http://www.wired.co.uk/article/ai-music-robot-shimon)

creyes1-LookingOutwards-04


A promotional video for On Your Wavelength & Merge Festival 2015

Created by Marcus Lyall, Robert Thomas, and Alex Anpilogov, On Your Wavelength is an interactive installation that generates music and a laser show as it analyzes the user’s brainwaves in real time.

In the installation, the user is equipped with an EEG brain-scanning headset, whose signal is then analyzed and turned into media using Processing, with Pure Data for audio generation. The analysis creates a profile of the user and focuses on three possible emotions – joy, detachment, and tension – along with several possible instruments and pitches in order to generate musical compositions specific to the current user. While the generation was left to the program’s analysis, the color choices and compositions, as well as the distinctive emotions the artists chose to work from, show the distinct mark of those who made it.
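A toy version of that mapping step, for illustration only: derive per-emotion scores from a signal (here faked with noise), pick the dominant emotion, and choose notes from a mode associated with it. The modes, base note, and fake “brainwave” signal are my own assumptions; the actual installation used Processing and Pure Data.

// Toy emotion-to-music mapping (p5.js + p5.sound).
// The EEG signal is faked with noise(); the per-emotion modes are assumptions.
var osc;
var modes = {
    joy:        [0, 2, 4, 7, 9],      // major-pentatonic feel
    detachment: [0, 2, 3, 7, 10],     // minor-ish set
    tension:    [0, 1, 6, 7, 10]      // more dissonant set
};

function setup() {
    createCanvas(400, 200);
    osc = new p5.Oscillator('sine');
    osc.amp(0.2);
    osc.start();
    frameRate(3);
}

function draw() {
    background(20);
    // fake three "brainwave-derived" scores that drift over time
    var scores = {
        joy:        noise(frameCount * 0.01),
        detachment: noise(100 + frameCount * 0.01),
        tension:    noise(200 + frameCount * 0.01)
    };
    var emotion = 'joy';                      // pick the dominant emotion
    for (var k in scores) {
        if (scores[k] > scores[emotion]) {
            emotion = k;
        }
    }
    var degree = random(modes[emotion]);      // random degree of that emotion's mode
    osc.freq(midiToFreq(55 + degree));
    fill(255);
    text(emotion, 20, height / 2);
}

function mousePressed() {
    userStartAudio();                         // browsers need a click before audio can start
}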


Behind the scenes of On Your Wavelength

Large-scale, immersive experiences like this one have always fascinated me, and in this case it’s not just technology taking artistic control, but rather a symbiotic relationship between user and program that is awe-inspiring to look at, and even more so to experience from the user’s place, watching how the program reacts.

On Your Wavelength was first shown during Merge Festival 2015 in London and later in a modified format in Winter Lights 2017 in London.

Additional performances, such as this one, can be viewed on YouTube.


“Lime,” an On Your Wavelength performance

rmanagad-lookingoutward-04-sectionE

Creator: K A R B O R N

Title of Work: The Wondrous Wobbulator Machine for Young and Old Alike

Year of Creation: 2015

Link to Project Work: http://artwork.karborn.com/The-Wondrous-Wobbulator-Machine-for-Young-and-Old-Alike

Link to Artist Bio: http://www.karborn.com/


John Karborn, a new-media audio-visual artist, developed The Wondrous Wobbulator Machine for Young and Old Alike by feeding geometric still frames into a custom-built wobbulator, a device that visualizes the frequencies and wavelengths of a given sound. To record these, he uses analog video sequences (VHS, for example) while a given frequency is passed through the wobbulator; what results is the geometric animation to the right. Algorithmically, the wobbulator uses a combination of manual feedback controls and an oscillator to produce a visual representation of the image as it is manipulated by the given sound waves.
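A rough p5.js approximation of the effect (the real wobbulator is analog hardware): draw a simple geometric still frame, then displace each horizontal scanline by an oscillating signal so the image wobbles with the “sound.” The frequency and amplitude ranges are my own assumptions.

// Toy wobbulator: displace each scanline of a simple geometric image by a
// sine "signal", loosely imitating how the analog device lets a sound wave
// warp the picture. Frequency/amplitude ranges are assumptions.
var src;                                     // offscreen buffer holding the still frame

function setup() {
    createCanvas(400, 300);
    src = createGraphics(400, 300);
    src.background(0);
    src.stroke(255);
    src.noFill();
    for (var r = 20; r < 200; r += 20) {     // concentric squares as the still frame
        src.rect(200 - r / 2, 150 - r / 2, r, r);
    }
}

function draw() {
    background(0);
    var freq = map(mouseX, 0, width, 1, 20);     // "signal" frequency
    var amp  = map(mouseY, 0, height, 0, 30);    // "signal" amplitude
    for (var y = 0; y < height; y++) {
        var offset = amp * sin(TWO_PI * freq * y / height + frameCount * 0.05);
        // copy one scanline from the source image, shifted horizontally
        copy(src, 0, y, width, 1, offset, y, width, 1);
    }
}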

The Wondrous Wobbulator Machine for Young and Old Alike

 

My current work is in the field of audio-visual performance art, which makes K A R B O R N’s methodologies valuable to my practice. As a whole, K A R B O R N’s works follow similar themes, combining narrative, acting, sound, and still frames to produce video works and documentation altered by time and noise.

afukuda-Project04

afukuda-project-04

/* 
 * Name | Ai Fukuda 
 * Course Section | C 
 * Email | afukuda@andrew.cmu.edu
 * Assignment | 04-b
 */ 

function setup() {
    createCanvas(400, 300);
}

function draw() {
    background(206, 236, 236);

    var x1;                     // x-coordinate of vertices 
    var y1 = 160;               // initial y-coordinate of vertices 

// PURPLE LINES (THICK)
    strokeWeight(1.5);                           
    stroke(204, 178, 213);

    for (var x1 = 130; x1 < 201; x1+=10) {      // top-left lines    
        line(100, 50, x1, 120+(y1-x1));
    }

    for (var x1 = 200; x1 < 271; x1+=10) {      // top-right lines 
        line(300, 50, x1, x1-120); 
    }

    for (var x1 = 130; x1 < 201; x1+=10) {      // bottom-left lines     
        line(100, 250, x1, x1+20);
    }

    for (var x1 = 200; x1 < 271; x1+=10) {      // bottom-right lines 
        line(300, 250, x1, 420-x1);          
    }

// PURPLE LINES (THIN)
    strokeWeight(1);
     for (var x1 = 130; x1 < 201; x1+=10) {     // top-left lines     
        line(200, 150, x1, 120+(y1-x1));
    }

    for (var x1 = 200; x1 < 271; x1+=10) {      // top-right lines 
        line(200, 150, x1, x1-120); 
    }

    for (var x1 = 130; x1 < 201; x1+=10) {      // bottom-left lines     
        line(200, 150, x1, x1+20);
    }

    for (var x1 = 200; x1 < 271; x1+=10) {      // bottom-right lines 
        line(200, 150, x1, y1+260-x1);          
    }


// ORANGE LINES 
    strokeWeight(1);           
    stroke(253, 205, 167);     

    // top set of lines 
    for (var x1 = 130; x1 < 201; x1+=10) {       // top-left lines     
        line(200, 20, x1, 120+(y1-x1));
    }

    for (var x1 = 200; x1 < 271; x1+=10) {      // top-right lines 
        line(200, 20, x1, x1-120); 
    }

    // left set of lines 
    for (var x1 = 130; x1 < 201; x1+=10) {       // left-top lines     
        line(70, 150, x1, 120+(y1-x1));
    }

    for (var x1 = 130; x1 < 201; x1+=10) {       // left-bottom lines   
        line(70, 150, x1, x1+20);
    }

    // bottom set of lines 
    for (var x1 = 130; x1 < 201; x1+=10) {       // bottom-left lines     
        line(200, 280, x1, x1+20);
    }

     for (var x1 = 200; x1 < 271; x1+=10) {      // bottom-right lines 
        line(200, 280, x1, y1+260-x1);         
    }

    // right set of lines 
     for (var x1 = 200; x1 < 271; x1+=10) {     // top-right lines 
        line(330, 150, x1, x1-120); 
    }

    for (var x1 = 200; x1 < 271; x1+=10) {      // bottom-right lines 
        line(330, 150, x1, y1+260-x1);         
    }

// BLUE LINES 
    strokeWeight(1);           
    stroke(140, 164, 212); 

    line(130, 150, 200, 80);
    line(200, 80, 270, 150);
    line(130, 150, 200, 220);
    line(200, 220, 270, 150);

// BLUE VERTICES 
/*
    strokeWeight(3);                            
    fill(140, 164, 212);

    point(100, 50);                     // primary geometry vertices (purple)
    point(300, 50);
    point(100, 250); 
    point(300, 250);

    point(200, 20);                    // secondary geometry vertices (orange)
    point(70, 150);
    point(200, 280);
    point(330, 150);

    point(200, 150);                   // center of geometry 
*/
    
}


I was able to generate this string art by simply declaring two variables: one for the initial x-coordinate and another for the y-coordinate. I began with the top-left set of purple curves and translated those appropriately to create the geometry. Things I could improve include using rotation to make the code much simpler, and using variables so the geometry becomes dynamic; a sketch of the rotation idea follows below.
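As a sketch of that rotation idea (not a drop-in replacement for the composition above), one fan of lines can be defined once and then repeated with translate() and rotate(); the anchor point, spacing, and colors below are placeholder values.

// Sketch of the rotation approach: define one fan of string-art lines,
// then repeat it four times around the center with rotate().
// Anchor point, spacing, and colors are placeholders, not the exact
// geometry of the composition above.
function setup() {
    createCanvas(400, 300);
}

function draw() {
    background(206, 236, 236);
    stroke(204, 178, 213);
    translate(width / 2, height / 2);        // rotate about the center
    for (var i = 0; i < 4; i++) {
        drawFan();
        rotate(HALF_PI);                     // quarter turn per copy
    }
}

// one fan: lines from a fixed anchor to points along a diagonal
function drawFan() {
    for (var t = 0; t <= 70; t += 10) {
        line(-100, -100, -70 + t, -t);
    }
}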

Process work:

afukuda-LookingOutwards04

The Classyfier

The Classyfier by Benedict Hubener, Stephanie Lee, and Kelvyn Marte at the Copenhagen Institute of Interaction Design (CIID) is a table that uses AI to detect the social scenario (through the beverages being consumed) and responds with appropriate music. This project intrigued me; I was mesmerized by how technology can detect and differentiate between the “clanking” of various beverages. I could also see this experimental project being applied to enhance voice-recognition technologies such as Echo and Siri; currently they can only do what they are told, but perhaps in the near future they will be able to read different situations (through AI) and act accordingly. The project brief indicates that the table contains a built-in microphone which catches characteristic sounds and compares them to a predetermined catalogue. This catalogue contains three classes – hot beverages, wine, and beer – with each class having its own associated playlist that one can navigate by knocking on the table. Other algorithmic aspects include machine learning, Wekinator, Processing, and the OFX collection. The creators’ artistic sensibilities manifest not in a tangible or visual manner but rather musically.
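A toy browser version of the listen-and-classify idea, for illustration only: take a couple of crude audio features from the microphone and pick the nearest of three hand-set class prototypes. The feature values for each class are made-up placeholders; the actual project used Wekinator and a trained catalogue of sounds.

// Toy Classyfier sketch (p5.js + p5.sound): crude nearest-prototype guess
// from microphone loudness and spectral centroid. Prototype values are
// made-up placeholders, not the project's trained catalogue.
var mic, fft;
var classes = {
    'hot beverages': { level: 0.05, centroid: 1500 },
    'wine':          { level: 0.15, centroid: 4000 },
    'beer':          { level: 0.25, centroid: 2500 }
};

function setup() {
    createCanvas(400, 150);
    mic = new p5.AudioIn();
    mic.start();                             // prompts for microphone permission
    fft = new p5.FFT();
    fft.setInput(mic);
}

function draw() {
    background(250);
    fft.analyze();                           // must run before getCentroid()
    var level = mic.getLevel();
    var centroid = fft.getCentroid();
    // nearest prototype by a crude weighted distance
    var best = null;
    var bestDist = Infinity;
    for (var name in classes) {
        var c = classes[name];
        var d = sq((level - c.level) * 10) + sq((centroid - c.centroid) / 5000);
        if (d < bestDist) {
            bestDist = d;
            best = name;
        }
    }
    fill(0);
    text('guess: ' + best, 20, height / 2);
}

function mousePressed() {
    userStartAudio();                        // browsers need a click before audio can start
}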

Link |

The Classyfier – AI detects situation and appropriates music

Classyfier

(both pages include video)

Work | Benedict Hubener, Stephanie Lee & Kelvyn Marte, unknown year

Ikrsek-Looking Outwards-04

“Our Time” is a piece commissioned by MONA (the Museum of Old and New Art) that is intended to take you on an ethereal sensory journey, warping the way you view and think of time, using sound, light, and motion to convey its passing. Twenty-one large pendulums swing in midair in different directions as lights brighten and dim at their own pace. Each pendulum arm carries a speaker that emits a barely audible echo, creating an eerily unfamiliar sensation that speaks to the passage of time. They swing without seeming to adhere to any laws of nature, yet still seem to make the passage of time more palpable. Time exists at many frequencies in this room, and when you’re in there experiencing it, that becomes obvious.
The amount of effort put into immersion here is remarkable, and the piece uses our most basic senses to warp our perceptions of human constructs.

Below is a video of the hauntingly beautiful piece…