karinac-LookingOutwards-04

Mileece has spent many years studying sound design. In the process, she created a garden installation, the Tree Weaver, that produces acoustic sound through currents and wave frequencies. The sound produced by the movement of the hands relies on GSR (Galvanic Skin Response), while the human-interactive sections use biosensors connected to a larger system that chooses sounds based on factors such as the firmness and location of the touch. This project really inspired me because Mileece took something artificial, the sound design process, and blended it seamlessly with nature and everyday life, enhancing the plants in the garden and truly immersing those who visit. There are times when I believe technology cannot emulate sound produced by humans or real instruments, but Mileece integrated the technology in such a natural way that it seems real.
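A minimal sketch of the kind of touch-to-sound mapping described above; the value ranges and the pentatonic scale are my own assumptions for illustration, not Mileece's actual system.

```python
# Hypothetical sketch: map a touch reading (firmness, location) to a tone.
# The value ranges and the pentatonic scale are assumptions, not Mileece's software.

PENTATONIC = [261.63, 293.66, 329.63, 392.00, 440.00]  # C, D, E, G, A (Hz)

def touch_to_tone(firmness, location):
    """firmness: 0.0-1.0 from a pressure/GSR reading,
    location: 0.0-1.0 position along the sensing surface."""
    index = min(int(location * len(PENTATONIC)), len(PENTATONIC) - 1)
    frequency = PENTATONIC[index]          # where you touch picks the pitch
    amplitude = 0.2 + 0.8 * firmness       # how firmly you touch sets the loudness
    return frequency, amplitude

print(touch_to_tone(firmness=0.7, location=0.45))  # -> (329.63, 0.76)
```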

dnoh-sectionD-lookingoutwards-04

Project: Computer Replicating Voices

Video:

Key times: 9:30 -> 11:48

Artist (?): Cary Huang

Year: 2017

Although this “project” is from a content creator on YouTube, I thought it was fascinating how a programmer could create a specific algorithm to teach a computer to learn a specific voice. This was all done with the program ARSS (The Analysis & Resynthesis Sound Spectrograph), Python, and another program called HyperGAN. ARSS transfers sounds into images, which are also modified through Python; the HyperGAN software then uses convolutional and deconvolutional neural networks to generate new images in the same style. Finally, ARSS converts the images back into audio so they can be heard.
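ARSS and HyperGAN are standalone tools, but the sound-to-image-to-sound round trip they perform can be sketched in Python; here Griffin-Lim reconstruction stands in for ARSS's resynthesis step, and the GAN stage is only a placeholder. The input file name is assumed.

```python
# Sketch of the same sound -> image -> sound idea using librosa
# (not the actual ARSS/HyperGAN tools used in the video).
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("voice_clip.wav", sr=22050)   # hypothetical input recording

# 1. Sound -> image: a magnitude spectrogram, which ARSS would save as a picture.
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))

# 2. Image -> new image: in the video, HyperGAN generates new spectrogram images
#    in the same style; here the original is simply reused as a placeholder.
S_generated = S

# 3. Image -> sound: Griffin-Lim estimates a waveform from the magnitudes,
#    playing the role of ARSS's resynthesis stage.
y_out = librosa.griffinlim(S_generated, hop_length=256)
sf.write("resynthesized.wav", y_out, sr)
```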

I find it amazing not only that we can start to teach computers to slowly learn through parameters, but also that such a complex process and discovery was put into a YouTube video as if it were “just something cool.” Technology has come far enough that this amazing demonstration can be dismissed as “a cool video” to be shared around.

This video didn’t highlight an “artist”; rather, it showed how computers can develop complex neural networks. The creator also showed how AI could learn to play jazz or Baroque music through hours of “training,” which raises another question: is something generated completely by a computer (through human-made parameters) art? I think the thought and artistry behind the code and programs that make such acts possible allow AI creations to be artworks.

rkondrup-Looking-Outwards-04


Artist Paul Prudence has created a new panoramic audiovisual performance work, The Mylar Typology, which combines gelatinous, oil-colored forms with slowly pulsing auditory stimuli. The work uses what Prudence describes as “slowly transforming abstractions of iridescent wavelets and hyperchromatic emulsions synchronised to binaural drones” to produce a calming effect, in part through brainwave entrainment, in which neural wave frequencies synchronize with an outside stimulus to reach an associated mental state, in this case relaxation. Prudence aims to produce a sensorially immersive viewer/listener experience that leaves the subject matter largely ambiguous.
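Binaural drones of the kind Prudence describes can be approximated by sending two slightly detuned tones to the left and right ears; this is only an illustrative sketch, and the specific frequencies and duration are assumptions.

```python
# Minimal binaural-drone sketch: two slightly detuned sine waves, one per ear.
# A 200 Hz tone in the left ear and 206 Hz in the right yields a 6 Hz "beat"
# perceived inside the head; the exact frequencies here are arbitrary choices.
import numpy as np
from scipy.io import wavfile

sr = 44100
t = np.linspace(0, 30.0, int(sr * 30.0), endpoint=False)   # 30-second drone

left = 0.3 * np.sin(2 * np.pi * 200.0 * t)
right = 0.3 * np.sin(2 * np.pi * 206.0 * t)

stereo = np.stack([left, right], axis=1).astype(np.float32)  # columns = channels
wavfile.write("binaural_drone.wav", sr, stereo)
```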

ifv-Looking-Outwards-04

“Great Animal Orchestra”

by Bernie Krause and United Visual Artists.

 

Krause takes recorded audio of natural elements and creatures to create soundscapes that are not only interesting to listen to but also visually compelling. Krause uses the information within the audio to generate a spectrogram, which is projected on the walls of a fully immersive space. This visual component is not only meant to draw in the observer; it also teaches the visual cues of certain types of sound created by different creatures. I admire the focus Krause brings to parts of nature most people overlook: in his TED Talk about the work, he describes his discovery that even the smallest insects, such as ants, have their own unique soundscape. His attention to detail shows in the execution and thought behind the creation and presentation of this work.
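The spectrogram projected on the walls is a standard time-frequency picture of a recording; a minimal version of the same computation, using a hypothetical field-recording file, might look like this.

```python
# Minimal spectrogram sketch: the same kind of image projected in the installation.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

sr, audio = wavfile.read("field_recording.wav")      # hypothetical recording
if audio.ndim > 1:
    audio = audio.mean(axis=1)                       # mix down to mono

freqs, times, power = spectrogram(audio, fs=sr, nperseg=1024)

plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12))  # dB scale
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Soundscape spectrogram")
plt.show()
```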

 

Bernie Krause talking about his work with soundscapes in his TED Talk

juyeonk-LookingOutwards-04

 

 

Title: Ambient Synthesis

Creator: Amanda Ghassaei

Year: 2012

Link to the project: http://www.amandaghassaei.com/projects/ambient_synthesis/

Link to the bio of the artist: http://www.amandaghassaei.com/about/

 

Ambient Synthesis is a sound sculpture created by Amanda Ghassaei that senses changes in the light around it and transforms them into sound. The program in charge of this transformation is called ‘Inkblot,’ which uses the MaxMSP application to convert the light data into slowly evolving synthetic tones.

This sculpture was inspired by the concept of ‘additive synthesis,’ the idea that simple sounds can be added or mixed together to create a broader and more complex range of sounds.

This mechanism is best observed during sunrise and sunset, when there is a drastic change in the amount of light around the machine. When the sculpture is stimulated, one or more pixels appear on its screen, each representing a different frequency of the harmonic series. When these pixels become activated, the tones that correspond to them are added to the mixture, producing a soundscape that gradually evolves over time.
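A small sketch of the additive-synthesis idea behind the piece: each “pixel” corresponds to one harmonic of a fundamental, and activated pixels add their sine tone to the mix. The fundamental frequency and the set of active harmonics below are assumptions for illustration, not values from Ghassaei's Inkblot patch.

```python
# Additive-synthesis sketch: activated "pixels" add harmonics of a fundamental.
import numpy as np
from scipy.io import wavfile

sr = 44100
fundamental = 110.0                       # assumed fundamental (Hz)
active_harmonics = [1, 2, 3, 5, 8]        # which "pixels" are currently lit

t = np.linspace(0, 5.0, int(sr * 5.0), endpoint=False)
tone = np.zeros_like(t)
for n in active_harmonics:
    tone += (1.0 / n) * np.sin(2 * np.pi * fundamental * n * t)  # add one harmonic

tone *= 0.5 / np.max(np.abs(tone))        # normalize to a safe level
wavfile.write("ambient_mix.wav", sr, tone.astype(np.float32))
```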

nahyunk1-Looking Outwards 04 : Sound Art

The computational sound art that I’m most familiar with is the sound looper that many artists and composers use when they create instrumentals or the background music for songs. Because I enjoy listening to music and am very interested in the mechanics of musical production, looping is a quality in music that I admire and a technique I aspire to learn later on. A looper works by the artist recording a few notes in a rhythmic series as an input and then building up the track by adding more notes and other sound effects in layers of loops stacked on top of each other.

Here is a link to an artist who sings and creates music by looping sound.
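A very small sketch of what a software looper does under the hood: each recorded phrase is repeated to fill the track and the layers are summed. The loop lengths and placeholder buffers below are assumptions, not any particular artist's tool.

```python
# Toy looper sketch: repeat each recorded layer to the track length and sum them.
import numpy as np

def build_loop_mix(layers, track_length):
    """layers: list of 1-D sample arrays (one recorded phrase each)."""
    mix = np.zeros(track_length)
    for layer in layers:
        repeats = int(np.ceil(track_length / len(layer)))
        mix += np.tile(layer, repeats)[:track_length]   # loop the phrase end to end
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 0 else mix              # keep levels in range

# Placeholder "phrases": in practice these would be recorded audio buffers.
beat = np.random.uniform(-1, 1, 11025)     # quarter-second phrase at 44.1 kHz
melody = np.random.uniform(-1, 1, 22050)   # half-second phrase
mix = build_loop_mix([beat, melody], track_length=44100 * 4)
```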

eeryan-LookingOutwards-04

Link to Vimeo

The Creatures of Prometheus by Simon Russell is a generative visualization of a Beethoven composition, using pulsing graphics in coordination with the music to create a visual representation of the ballet. The animation is driven by a program (I think it reads a MIDI file, possibly through the Houdini software) that takes into account the pitch and volume of the notes of the ballet and translates these factors into visual components, such as color and position, for the particles that make up the visualization. I like the extra dimension this visualization gives to the ballet, allowing a modern voice to speak through the traditional piece.
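I don't know exactly how Russell's Houdini setup reads the score, but the basic idea of translating MIDI notes into visual attributes can be sketched like this; the file name and the specific mappings are my own assumptions.

```python
# Sketch of mapping MIDI note data to visual attributes (color, height, size).
# "prometheus.mid" and the mappings below are assumptions, not Russell's setup.
import mido

particles = []
elapsed = 0.0
for msg in mido.MidiFile("prometheus.mid"):   # iterating yields messages in time order
    elapsed += msg.time                       # msg.time is seconds since the last message
    if msg.type == "note_on" and msg.velocity > 0:
        particles.append({
            "time": elapsed,
            "hue": (msg.note % 12) / 12.0,     # pitch class -> color
            "height": msg.note / 127.0,        # pitch -> vertical position
            "size": msg.velocity / 127.0,      # loudness -> particle size
        })

print(particles[:5])
```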

cduong-looking outward 04-FabricKeyboard

The project is the FabricKeyboard, a stretchable fabric that is used as a musical instrument: a physical interaction medium based on textile. It has a multilayer design made to look as though it has different “keys,” with sensing extensions that can detect touch, proximity, pressure, stretch, position, and more. The video was honestly amazing to watch, especially for the range of sounds the fabric can make, and it is fascinating to think that technology has advanced far enough that people can now produce sounds from a piece of fabric.

They used the MIDI protocol, which allows the FabricKeyboard to “plug and play,” and they are currently exploring the OSC protocol so it can be used wirelessly. The algorithms behind the instrument likely map different combinations of factors, such as pressure and proximity, to specific types of sound, and the team had to determine exactly what sounds they wanted to produce.
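A hedged sketch of the kind of sensor-to-MIDI mapping described above; the note layout, pressure threshold, and controller numbers are my assumptions, not the Responsive Environments team's actual firmware.

```python
# Hypothetical sketch: turn fabric sensor readings into MIDI messages.
# The note layout, pressure threshold, and controller numbers are assumptions.
import mido

def fabric_to_midi(position, pressure, stretch):
    """position: 0.0-1.0 along the keyboard; pressure and stretch: 0.0-1.0 readings."""
    messages = []
    if pressure > 0.1:                                   # a firm enough touch plays a note
        note = 48 + int(position * 24)                   # position picks the pitch (2 octaves)
        velocity = int(pressure * 127)
        messages.append(mido.Message("note_on", note=note, velocity=velocity))
    # Stretching the fabric could modulate the sound via a MIDI controller.
    messages.append(mido.Message("control_change", control=1, value=int(stretch * 127)))
    return messages

for m in fabric_to_midi(position=0.5, pressure=0.8, stretch=0.3):
    print(m)
```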

They had to develop the fabric to look a specific way, and it resembles a futuristic keyboard or piano, especially with the sounds that it makes. The creators had full control over what they wanted the sounds to be and how they wanted every little factor to affect the sound.

The FabricKeyboard was created by the Responsive Environments team at the MIT Media Lab this year, in 2017.

FabricKeyboard – Stretchable fabric (sensate media) as a musical instrument

mecha-lookingoutwards-04

Patatap was created by designer Jono Brandel in collaboration with Lullatone and published on March 26, 2014. Described as a “portable animation and sound kit,” Patatap plays sounds created by the Lullatone team, Shawn and Yoshimi, along with corresponding graphics at the push of a key. As both a graphic designer and a computer programmer, Jono wanted to create a program that introduced synesthesia, more specifically visual music, to creators.
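The core interaction is a lookup from a key press to a paired sound and animation; a toy version of that mapping (with made-up file names and shapes, not Patatap's actual assets) could look like this.

```python
# Toy version of Patatap's key -> (sound, animation) pairing.
# The sound files and shapes are made up for illustration.
KEY_MAP = {
    "a": {"sound": "bubbles.wav", "visual": "expanding circles"},
    "s": {"sound": "clay.wav", "visual": "falling triangles"},
    "d": {"sound": "confetti.wav", "visual": "scattered squares"},
}

def on_key_press(key):
    entry = KEY_MAP.get(key)
    if entry is None:
        return                      # unmapped key: do nothing
    print(f"play {entry['sound']} and draw {entry['visual']}")

on_key_press("s")   # -> play clay.wav and draw falling triangles
```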

What drew me to this project specifically was the way that sound and graphics are combined in such a precise manner. When I was first introduced to the website a few years ago, I did not even consider the fact that it had to have been coded. With the knowledge I have now, I have even more respect for the designers and the project itself. I am also inspired by Jono’s ability to purposefully exemplify the concept of synesthesia through Patatap.

dnam-Looking Outwards-04

The Program Detecting Wine on the Table

Artificial intelligence is becoming more and more relevant. “The Classyfier,” created by Benedict Hubener, Stephanie Lee, and Kelvyn Marte, is another display of modern artificial intelligence that makes our lives easier. The program (and the table) detects people’s drinks by sound: for example, the clink of wine glasses tells the program to turn on smooth, sexy music, while the opening of a cold beer triggers fast-paced rock. Artificial intelligence that understands its environment through sound could help humans have homes that ‘live and breathe’ along with them, always keeping up with what is happening. More information about the Classyfier can be found here.
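The write-up doesn't detail the model, but the general approach (extract a few audio features from a short clip and compare them against labeled examples of known sounds) can be sketched roughly like this; the features, file names, and nearest-neighbor classifier are assumptions, not the Classyfier's actual pipeline.

```python
# Rough sketch of classifying table sounds (wine clink vs. beer opening) from audio.
# The features, training files, and nearest-neighbor model are assumptions.
import librosa
from sklearn.neighbors import KNeighborsClassifier

def features(path):
    """Summarize a clip with a few broad spectral features."""
    y, sr = librosa.load(path, sr=22050)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr).mean()
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    return [centroid, rolloff, zcr]

# Hypothetical labeled training clips.
train_files = ["wine_clink_1.wav", "wine_clink_2.wav", "beer_open_1.wav", "beer_open_2.wav"]
train_labels = ["wine", "wine", "beer", "beer"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit([features(f) for f in train_files], train_labels)

drink = model.predict([features("table_mic_clip.wav")])[0]
playlist = {"wine": "smooth jazz", "beer": "fast-paced rock"}[drink]
print(f"Detected {drink}: play {playlist}")
```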