hannahk2-LookingOutwards-04

“Soft Sound” is a research project by EJTECH (Esteban de la Torre and Judit Eszter Kárpáti) that brings together textiles and sound, exploring ways of enhancing multi-sensory experiences. I admire this project because it achieves its goal of using textile as an audio-emitting surface. The artists created soft speakers and embedded them into fabric so that it emanates sonic vibrations, allowing viewers to perceive the audio through both hearing and touch: because of the pulsating nature of the sound, the host textile throbs. The artists made the speakers by vinyl-cutting flat coils of copper and silver, applying them onto different textile surfaces, and then running an alternating current through them. Each coil is connected to an amplifier and paired with a permanent magnet, which forces the coil, and the textile attached to it, back and forth, producing sound waves. The artists’ sensibilities are clearly manifest in the final form: the simple, crisp, geometric shapes printed on the fabric resemble the visual language present in their other works. This project is extremely admirable to me because it presents a non-physical entity as something tangible and physical, and seeing a small piece of cloth moving and creating sound is just really amazing to me.
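The alternating current that drives the coil is just an audio-rate signal. As a rough illustration (my own p5.js sketch using the p5.sound library, not EJTECH's code; the frequency values are made up), here is how such a driving signal could be generated and swept:

// Illustrative p5.js sketch (not EJTECH's code): the "alternating current"
// driving a coil speaker is simply an audio-rate signal like this sine wave.
let osc;

function setup() {
    createCanvas(200, 200);
    osc = new p5.Oscillator('sine'); // requires the p5.sound library
    osc.amp(0);
    osc.start();
}

function draw() {
    background(220);
    // sweep the frequency with the mouse; the low end is what you
    // would feel as a throb in the host textile
    osc.freq(map(mouseX, 0, width, 40, 400));
}

function mousePressed() {
    osc.amp(0.5, 0.1); // fade in over 0.1 s
}

function mouseReleased() {
    osc.amp(0, 0.1); // fade out
}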

 

Soft Sound for TechTextil – Sounding Textile series from ejtech on Vimeo.

Link: http://www.creativeapplications.net/sound/soft-sound-textiles-as-electroacoustic-transducers/

Ziningy1- Section C – Looking Outward 4

Magenta is a project led by the Google Brain team to explore how machines can generate music. Magenta encompasses two goals. It is first a research project to advance the state of the art in music, video, image and text generation; much has already been done with machine learning to understand content, for example speech recognition and translation. Second, it aims to build a community of artists, coders and machine-learning researchers. Unlike other artists who use computation as a tool, the songs of Magenta are generated solely by machine-learning models. Magenta's first song is a simple melody primed with four notes: the algorithm is given C, C, G, G to begin with, and it comes up with an original melody from there. I personally find it very impressive that artificial intelligence with deep-learning models can already accomplish this kind of content generation and creativity, which is a stark contrast to the stereotypical view that machines are only capable of non-creative, systematic tasks.
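As a sketch of how such priming works, here is a minimal example using Google's open-source magenta.js browser library (the checkpoint URL, step count and temperature are my own illustrative choices, not the exact setup of Magenta's first song):

// Minimal magenta.js sketch: prime a melody model with C, C, G, G and let
// it compose a continuation. Assumes the @magenta/music script is loaded;
// checkpoint URL and parameters are illustrative assumptions.
const rnn = new mm.MusicRNN(
    'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn');

const seed = {
    notes: [ // MIDI pitch 60 = C4, 67 = G4
        {pitch: 60, quantizedStartStep: 0, quantizedEndStep: 2},
        {pitch: 60, quantizedStartStep: 2, quantizedEndStep: 4},
        {pitch: 67, quantizedStartStep: 4, quantizedEndStep: 6},
        {pitch: 67, quantizedStartStep: 6, quantizedEndStep: 8}
    ],
    totalQuantizedSteps: 8,
    quantizationInfo: {stepsPerQuarter: 4}
};

rnn.initialize()
    .then(() => rnn.continueSequence(seed, 32, 1.0)) // 32 new steps, temperature 1.0
    .then(result => console.log(result.notes));      // the machine-composed melody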

Here is a demo from Magenta:

ashleyc1-Section C-Looking Outwards-04

PyroGraph is a machine programmed to take in an image and redraw it using dots, in a style similar to pointillism. Each dot is made by a moving tip heated to 450 degrees that burns it into the paper, and the greyscale of each dot is tied to the sound of the room. PyroGraph is interesting because it recreates and stylizes an image through public involvement. I wonder why the artist/programmers chose to have the machine react to sound. Since this project was inspired by traditional thermal printers, incorporating sound seems disconnected from tangible print. If it's a reference to the processing noises old thermal printers make, that would make sense, but I wish it were explained explicitly.
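A rough p5.js analogue of that mapping (my own sketch, not the artists' code, and drawing gray dots instead of burning them) would sample an image dot by dot and let the microphone level set each dot's darkness:

// Rough p5.js analogue of PyroGraph's mapping (not the artists' code):
// draw an image as random dots whose darkness depends on both the
// pixel's brightness and the current loudness of the room.
let img, mic;

function preload() {
    img = loadImage('portrait.jpg'); // placeholder filename
}

function setup() {
    createCanvas(400, 400);
    mic = new p5.AudioIn(); // requires the p5.sound library
    mic.start();
    background(255);
    noStroke();
}

function draw() {
    for (let i = 0; i < 20; i++) { // a few dots per frame, pointillism-style
        let x = floor(random(img.width));
        let y = floor(random(img.height));
        let c = img.get(x, y);                 // [r, g, b, a]
        let bright = (c[0] + c[1] + c[2]) / 3; // 0 (dark) to 255 (bright)
        let level = constrain(mic.getLevel() * 5, 0.2, 1); // room volume
        let darkness = map(bright, 0, 255, 255, 0) * level;
        fill(255 - darkness); // dark pixel + loud room = darker "burn"
        ellipse(x * width / img.width, y * height / img.height, 4, 4);
    }
}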

This project heavily follows the practices of pyrography: the art of burning wood to create an image. There are differences in style; for instance, PyroGraph draws in dots, whereas a traditional pyrographic artist could easily burn lines for realistic textures. However, it is still revolutionizing the practice, not only through computer-generated burning but also by shifting the act of burning from an artist to a generalized public. It's subtly interactive.

PyroGraph – Drawing machine that listens and burns

hdw – Looking Outwards 4

“77 Million Paintings” is a generative sound-and-image software work by Brian Eno. It was made by feeding 296 of Eno’s past works into generative code that not only combines up to four of them visually at a time, but also generatively pairs music with the imagery. The title refers to the number of different artworks the code can produce: 77 million. The piece highlights Eno’s experiments with light and generative music. Eno also shows this work through installations at various museums around the world. His work was inspired by minimalist musicians such as Philip Glass and Steve Reich.
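A toy version of that layering logic (my own p5.js sketch; Eno's actual software is far more elaborate) would pick four source images at random and blend them:

// Toy version of the layering idea (not Eno's software): choose four
// source images at random and blend them into one composite.
let sources = [];
let layers = [];

function preload() {
    for (let i = 0; i < 8; i++) { // 8 placeholder images stand in for 296
        sources.push(loadImage('work-' + i + '.png'));
    }
}

function setup() {
    createCanvas(400, 400);
    shuffle(sources, true);       // shuffle the pool in place
    layers = sources.slice(0, 4); // combine up to four at a time
}

function draw() {
    background(0);
    tint(255, 64); // translucent, so the four layers mix
    for (let layer of layers) {
        image(layer, 0, 0, width, height);
    }
}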

https://vimeo.com/638631
Example of some of his works.

Brian Eno’s work comes in a CD format with two discs: the first contains software that generates randomized music and images, and the second contains an interview.

Randomly generated images from his code.

Looking Outward 4

 

658 prepared dc-motors, cotton balls, cardboard boxes 70x70x70cm, Zimoun 2017

What is so fascinating about this artwork by Zimoun is that it looks really neat and simple at first. The form and the scale of the piece attract my attention: with the huge number of boxes piled up and the hundreds of motors and cotton balls, the audience is amazed by the scale. His idea is surprising as well, because he wants the audience to focus on the sound produced by such simple and accessible materials. The use of everyday materials, like cotton balls and cardboard boxes, connects people more closely with the artwork itself. The cotton balls are attached to motors that knock them against the boxes to create the sound. Although the motors are low-tech, the deep, pounding sound they produce resonates throughout the exhibition space.

“As you walk through the labyrinth of boxes, the constant humming and vibrating builds, and can be felt both behind and inside you”.

This artwork encourages people to pay more attention to the beauty of sound. Sound can be appreciated as art in itself, which is the main idea of Zimoun’s work.

“The purpose of keeping the visual aspect of the art basic and subtle is to allow the person to take in all the auditory effects and be free from distractions”.

Sound Art at Beall

http://www.zimoun.net/2017-658.html

 

rgroves – Looking Outwards 04

In the summer of 2016, Icelandic ambient rock band Sigur Rós livestreamed a 24-hour journey around Route One, Iceland’s beautiful coastal ring road. The whole video was accompanied by new music from the band, or rather, new music in collaboration with a generative music program called BRONZE. BRONZE was created in 2011 by Mick Grierson, a researcher at Goldsmiths, University of London, and musician Gwilym Gold. It takes a recorded piece of music and is able to infinitely regenerate it in unique transfigurations. As the original recording is played over and over on this platform, it is impossible to tell when each reiteration starts and ends: segments may be played in a different order, instruments may be amplified or eliminated, some sections are shortened and others elongated, and so on. The music becomes ephemeral, as “the chances of hearing the same version of the track versus the chances of winning the lottery don’t even compare,” according to Gold.
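The underlying idea, regenerating a fixed recording by reordering and reweighting its parts, can be sketched in p5.js (this is my own toy version, not BRONZE's actual algorithm):

// Toy sketch of generative playback (not BRONZE's algorithm): chop a
// recording into segments and schedule them in a new order each pass,
// so no two listens are identical. Requires the p5.sound library.
let song;
const SEG = 5; // segment length in seconds (illustrative)

function preload() {
    song = loadSound('track.mp3'); // placeholder filename
}

function mousePressed() {
    let n = floor(song.duration() / SEG);
    let order = shuffle([...Array(n).keys()]); // e.g. [3, 0, 2, 1, ...]
    for (let i = 0; i < n; i++) {
        // play(startTime, rate, amp, cueStart, duration):
        // the i-th slot plays segment order[i] at a random volume
        song.play(i * SEG, 1, random(0.4, 1), order[i] * SEG, SEG);
    }
}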

This experiment worked extremely well with Sigur Ros’s ethereal sound. The entire spectacular 24 hour journey is available online, but here are the middle 9 hours!

And here is a link to an album by Gwilym Gold that is only available on BRONZE; no permanently recorded version exists, so you truly can’t hear the music the same way twice. You do have to download a mobile app in order to listen.

http://bronzeformat.com/

LookingOutwards-04-Chickoff

This is Ambient Synthesis, a sculpture that emits sounds of varying frequencies according to the light stimuli around it. It was created in 2012 by Amanda Ghassaei, who graduated from Pomona College with a BA in Physics and a minor in Chemistry.

All of the sculpture’s sensor data is interpreted by a MaxMSP application, which uses the Max visual programming language to connect objects with virtual patch cords and create sound. What I find most interesting about this project is that it gives the environment and the light around the sculpture a voice. This is in contrast to videos of landscapes coupled with music that may not reflect the actual state of the animals and nature being filmed.
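As a small stand-in for that patch (my own p5.js sketch, not Ghassaei's MaxMSP application), the same light-to-sound mapping can be approximated by letting webcam brightness drive an oscillator's frequency:

// p5.js stand-in for the light-to-sound mapping (not Ghassaei's patch):
// average webcam brightness drives an oscillator's pitch.
let video, osc;

function setup() {
    createCanvas(320, 240);
    video = createCapture(VIDEO); // the webcam stands in for light sensors
    video.size(32, 24);           // a small grid is enough for an average
    video.hide();
    osc = new p5.Oscillator('sine'); // requires the p5.sound library
    osc.amp(0.3);
    osc.start();
}

function draw() {
    image(video, 0, 0, width, height);
    video.loadPixels();
    let sum = 0;
    for (let i = 0; i < video.pixels.length; i += 4) {
        sum += (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
    }
    let avg = sum / (video.pixels.length / 4); // average brightness, 0-255
    osc.freq(map(avg, 0, 255, 110, 880));      // brighter room, higher pitch
}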

I’m very curious about what other factors of an environment, besides light, could be used to create sound. It would be interesting to see a sculpture respond to natural disasters, temperature, or the amount of movement around it. What if a specific sound were emitted when a creature came within a certain range of the sculpture? The sculpture would then be notifying you of something that, unlike light, you might not detect with your eyes. It would serve as a guardian, informing you of how large an approaching animal is, and perhaps let you know whether it is a threat.

Project 03 – Dynamic Drawing

sketch

function setup() {
    createCanvas(640, 480);
}

function draw() {
    // as the mouse moves right, the time of day advances
    if (mouseX <= 160) {
        background(204, 255, 255); // early morning sky
    } else if (mouseX > 160 && mouseX < 320) {
        background(153, 255, 255); // midday sky
    } else if (mouseX >= 320 && mouseX < 480) {
        background(0, 0, 153); // evening dusk
    } else {
        background(64, 64, 64); // nighttime
    }

    noStroke();
    fill(0, 153, 0);
    rect(0, 300, 640, 200); // grass
    fill(225);
    rect(175, 125, 300, 200); // hunt library base
    fill(205);
    rect(175, 125, 300, 15); // horizontal bars
    rect(175, 175, 300, 15);
    rect(175, 225, 300, 15);
    rect(175, 275, 300, 15);

    if (mouseX <= 480) {
        // daytime: plain gray vertical bars
        fill(200);
        rect(175, 125, 20, 200);
        rect(215, 125, 20, 200);
        rect(255, 125, 20, 200);
        rect(295, 125, 20, 200);
        rect(335, 125, 20, 200);
        rect(375, 125, 20, 200);
        rect(415, 125, 20, 200);
        rect(455, 125, 20, 200);
    } else {
        // nighttime: the bars light up in color
        fill(204, 0, 0); // red
        rect(175, 125, 20, 200);
        fill(255, 128, 0); // orange
        rect(215, 125, 20, 200);
        fill(255, 255, 0); // yellow
        rect(255, 125, 20, 200);
        fill(0, 255, 0); // green
        rect(295, 125, 20, 200);
        fill(0, 0, 255); // blue
        rect(335, 125, 20, 200);
        fill(0, 255, 255); // light blue
        rect(375, 125, 20, 200);
        fill(102, 0, 204); // purple
        rect(415, 125, 20, 200);
        fill(255, 0, 255); // pink
        rect(455, 125, 20, 200);
    }

    fill(250);
    rect(300, 250, 50, 75); // hunt door
    rect(287.5, 250, 75, 5); // awning

    // this is the scottie doggo now
    fill(0);
    rect(341, 375, 150, 55, 25, 25); // body
    ellipse(362, 435, 40, 20); // left foot
    ellipse(470, 435, 40, 20); // right foot
    rect(470, 365, 15, 20, 85, 0, 55, 55); // tail
    ellipse(356, 360, 60, 60); // head
    rect(315, 365, 50, 25, 5, 5); // muzzle
    ellipse(330, 365, 30, 20); // head-muzzle connector
    rect(315, 385, 5, 10, 25, 5, 0, 5); // beard
    rect(320, 385, 5, 10, 25, 5, 0, 5);
    rect(325, 385, 5, 10, 25, 5, 0, 5);
    rect(330, 385, 5, 10, 25, 5, 0, 5);
    triangle(355, 327.5, 385, 327.5, 375, 355); // ear
    fill(255);
    ellipse(340, 350, 10, 10); // eye
    fill(0);
    ellipse(337.5, 347.5, 5, 5); // pupil
    fill(215);
    ellipse(317.5, 362.5, 10, 7); // nose

    fill(0, 102, 204);
    ellipse(200, 435, 80, 25); // food bowl
    rect(160, 420, 80, 20, 25, 25);

    if (mouseY < 300) {
        // mouse far from the bowl: one bone
        fill(255);
        push();
        rotate(radians(40)); // left bone
        rect(400, 200, 25, 10, 10, 10, 10, 10);
        ellipse(400, 200, 10, 10);
        ellipse(400, 210, 10, 10);
        ellipse(425, 200, 10, 10);
        ellipse(425, 210, 10, 10);
        pop();
    } else {
        // mouse near the bowl: two bones
        fill(255);
        push();
        rotate(radians(40)); // left bone
        rect(400, 200, 25, 10, 10, 10, 10, 10);
        ellipse(400, 200, 10, 10);
        ellipse(400, 210, 10, 10);
        ellipse(425, 200, 10, 10);
        ellipse(425, 210, 10, 10);
        pop();
        push(); // added so the second rotate() has a matching pop()
        rotate(radians(-30)); // right bone
        rect(-30, 470, 25, 10, 10, 10, 10, 10);
        ellipse(-30, 470, 10, 10);
        ellipse(-30, 480, 10, 10);
        ellipse(-5, 470, 10, 10);
        ellipse(-5, 480, 10, 10);
        pop();
    }
}
    

For this project I decided to let the mouse determine the time of day (as it is dragged to the right, night falls, shown by the darkening sky and the way Hunt Library lights up), and as the mouse moves closer to the food bowl, the scottie dog gets more food. My process started with creating the simple drawing, and then adding if statements to change the time of day and to feed the dog.

sntong-LookingOutwards-04-Sound-Art

Table d’Harmonie was created by Pascal Broccolichi using black corundum powder and speakers. When resonance is reached between the sound and the powder, the powder amasses into small mounds that form circular craters around the speakers. The theory of “granular synthesis” is applied to program the sound that produces the desired shapes. The project is interesting in how it visualizes sound in physical form, translating something that has no visual cues into something tangible through the medium of different materials.
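Granular synthesis, in rough terms, builds a texture out of many tiny slices (“grains”) of a source sound. A minimal p5.js sketch of the idea (my own illustration, not Broccolichi's system) looks like this:

// Minimal granular-synthesis sketch (not Broccolichi's system): fire short
// "grains" cut from random points in a source sound. Requires p5.sound.
let source;

function preload() {
    source = loadSound('texture.wav'); // placeholder filename
}

function setup() {
    createCanvas(200, 200);
}

function draw() {
    // a grain every few frames; density and grain length shape the texture
    if (frameCount % 3 === 0 && source.isLoaded()) {
        let cue = random(source.duration() - 0.1);
        // play(startTime, rate, amp, cueStart, duration): a 50 ms grain
        source.play(0, random(0.8, 1.2), 0.2, cue, 0.05);
    }
}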

Looking outwards-04

This project strikes me as unique. I enjoy watching movies about AI and reading articles on AI's learning capabilities. This is Sonic Pendulum by Yuri Suzuki Design Studio. At first the pendulums seem to be the essential part of the project, but they are not. The pendulums generate calming ambient sound, while the algorithm uses the surrounding space and crowd to create new compositions of sounds. The most impressive part is that an ordinary project would have used a fixed algorithm, which would generate similar sound over periods of time; instead, the team trained an AI to create an infinite composition that is specific to the site and to the moment. This project shows that it is not only possible to make a program react vibrantly to its surroundings, but also possible for it to adapt and be trained to generate new types of compositions.