ChristineSeo-LookingOutwards-04

 

The Classyfier is a project in which a table chooses music to fit the situation by detecting the beverages people are consuming. It was built with Wekinator, Processing, and the OFX collection by Benedict Huebener and two other students at the Copenhagen Institute of Interaction Design. Tapping on the table skips to the next song. Huebener has a long-standing passion for mechanics and aesthetics, and he applies physical and digital prototyping techniques to problems in today’s society, which is what led him to this project.
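Although the team hasn’t published its code, the pipeline described above (audio in, features out, Wekinator classifies) is simple to prototype. Below is a minimal Python sketch of that idea, assuming the python-osc package and Wekinator’s default input port (6448) and OSC address (/wek/inputs); the loudness and brightness features are my own illustrative choices, not the project’s actual feature set.

# Minimal sketch: stream simple audio features to Wekinator over OSC.
# Assumes python-osc is installed and Wekinator is listening on its
# default port 6448 at the address /wek/inputs.
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

wekinator = SimpleUDPClient("127.0.0.1", 6448)

def features(frame, sample_rate=44100):
    """Two toy features of one audio frame: RMS loudness and
    spectral centroid (a rough measure of brightness)."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return [rms, centroid]

def send_frame(frame):
    # Wekinator expects a flat list of float inputs per message.
    wekinator.send_message("/wek/inputs", features(frame))

# Stand-in for a real microphone callback: one frame of noise.
send_frame(np.random.randn(1024) * 0.1)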

I found this project intriguing because I believe this kind of sound-driven technology can be applied in the wider world. It seems entirely feasible to carry the project further and put it to practical use in the future. Using technology that responds to sounds and acts appropriately for a given context is a matter of human convenience. The project aids convenience without causing harm, which I think is important as technology continues to develop. A man-made table that can exercise intelligence because it has been trained is fascinating. I hope the project can be extended to multiple uses, not just the sounds of wine glasses, beer cans, and spoons clinking against mugs. Overall, the project succeeds at what it sets out to do, and the concept deserves to be carried forward and shared with a wider audience.

http://www.creativeapplications.net/processing/the-classyfier-ai-detects-situation-and-appropriates-music/

Looking Outwards -04 Jaclyn Saik

For this assignment, I shopped around a lot and looked at a variety of sound and new media artists to see what caught my eye. There was some really astounding work in a wide range of forms, such as turning sound into visual representations, or using computer programming to coax sounds out of things that don’t generally create them. It reminded me of a podcast I listened to a while ago about identifying the noises playing in the background of silent videos simply from the way the objects appear. Computers were used to detect tiny movements and vibrations in the frames, and that information was used to recover people’s voices. I thought that was not only fascinating, but also a possible window into making artwork that reflects on the past.

And speaking of reflecting on the past, the artist who really caught my eye, Robert Henke, combined engineering and sound to document a past tragedy in a beautiful way. His audiovisual laser installation “Fall” (2016) was inspired by the village of the same name in southern Bavaria, which disappeared underwater in the 1950s after a reservoir was built. Henke, a German artist whose work mainly combines contemporary electronic music with laser installations, was inspired when the ruins of the village resurfaced in 2015, and he created a hanging sound installation that completely engrossed me as soon as I watched the video (above).

The installation is made of hanging fabric structures, which are distributed in space by an algorithm Henke created. This is one thing that I think really defines him as a new media computer artist: he generates algorithms for almost every installation and sound he creates, and in this way his work is both very technical and very individual. Lasers illuminate the translucent, pieced-together fabric construction, and an entire show plays through in about ten minutes, varying in color and texture to create a water-like effect on the ceiling.

A still from the 2016 installation “Fall” by Robert Henke. Even without the movement or sound, the visuals are captivating.

The sound that plays is particularly interesting. Henke generated his own track of music, also algorithm-based, and it sounds something like a jungle mixed with clinking glass layered over static; I can’t quite describe it, but it is somehow very electronic and very organized all at the same time, and very interesting to listen to. The lasers are also punctuated by some larger noises, and every movement is matched with a sound or some electronic percussion.

I have no idea how he generated algorithms as complex as those behind these noises. I know there is definitely code relating the movements of the lasers to the beat of the track he created, and the tempo seems to change constantly as the pace ebbs and flows (like water). I especially wonder how he achieves sounds that feel so organic and natural with technical algorithms.
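Henke’s code isn’t public, so the following is only a guess at the basic coupling: a toy Python sketch (entirely my own construction, not Henke’s method) in which a slowly ebbing tempo drives a beat counter, and the same beat phase drives the laser’s angle, so sound events and movement stay locked together.

# Toy model of beat-synced motion under an ebbing tempo (not Henke's code).
import math

def tempo(t, base_bpm=90.0, swell=20.0, period=60.0):
    """A tempo that ebbs and flows like water: +/- swell BPM per period."""
    return base_bpm + swell * math.sin(2 * math.pi * t / period)

def simulate(duration=10.0, dt=0.01):
    phase, t = 0.0, 0.0
    beats, angles = [], []
    while t < duration:
        phase += tempo(t) / 60.0 * dt      # beats elapsed in this step
        if phase >= 1.0:                   # beat boundary: trigger a sound
            phase -= 1.0
            beats.append(round(t, 2))
        angles.append(45.0 * math.sin(2 * math.pi * phase))  # laser angle follows the beat
        t += dt
    return beats, angles

beats, angles = simulate()
print(beats[:5])  # times of the first few beats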

Jenny Hu — Looking Outwards 04

Oscillating Continuum is a project by the artist Ryoichi Kurokawa. Kurokawa calls it an audiovisual sculpture: dynamic, sound-generated visuals are displayed on two juxtaposed sculptural surfaces. The video above documents multiple songs.

While the process is not documented online, the video suggests a workflow in which the line distances and vertices are generated from the song’s data. What I admire, however, is the elegance and simplicity of the artist’s creative direction. He keeps the work minimal in color and form, letting the sound-generated visuals take priority. This project reminds me that the data may exist on its own, but it still takes artistic and creative vision to articulate what it means.
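As a thought experiment, that speculated process can be put in code. The short Python sketch below is my own guess, not Kurokawa’s documented method: one window of audio is analyzed with an FFT, and the band magnitudes become the vertex heights of a polyline to be drawn on the sculpture’s surface.

# Speculative sketch: derive polyline vertices from a window of audio.
import numpy as np

def line_from_audio(window, n_vertices=64, height=1.0):
    """Map FFT band magnitudes onto the vertical offsets of a line."""
    mags = np.abs(np.fft.rfft(window))
    bands = np.array_split(mags, n_vertices)       # one band per vertex
    offsets = np.array([band.mean() for band in bands])
    offsets = height * offsets / (offsets.max() + 1e-12)  # normalize to 0..height
    xs = np.linspace(0.0, 1.0, n_vertices)
    return list(zip(xs, offsets))                  # (x, y) pairs for rendering

vertices = line_from_audio(np.random.randn(2048))  # stand-in for real audio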

Looking Outwards-04 Sound Art-Veronica Wang

CARBONSCAPE, a kinetic data soundscape installation by h0nh1m (aka Chris Cheung).

CarbonScape is a kinetic soundscape installation created by Chris Cheung. It consists of 18 tracks of granular synthesized sound samples, collected from sources that leave carbon footprints: the sound of a jet engine, steam from a factory, the horn of a ship, and so on. In the finale, all of the tracks blend into a single soundscape.

Interactive installation

The installation visualizes CO2 levels in a landscape resembling a bamboo forest: its columns imitate the chimneys frequently found in power plants and factories, changing ambient drone noises fill the space, and black spheres indicate CO2 levels around the globe.

Sound data incorporated with the exhibition
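Cheung’s actual synthesis patch isn’t public, but the core mapping, a CO2 reading driving the sound and the spheres, can be sketched in a few lines of Python. The ppm range and parameter choices below are illustrative assumptions, not values from CarbonScape.

# Illustrative sonification mapping (not CarbonScape's actual patch):
# a CO2 reading in ppm sets the density and pitch of a granular drone
# and the height of the corresponding black sphere.

def co2_to_params(ppm, ppm_min=300.0, ppm_max=500.0):
    x = max(0.0, min(1.0, (ppm - ppm_min) / (ppm_max - ppm_min)))
    return {
        "grains_per_second": 5.0 + 45.0 * x,        # denser texture as CO2 rises
        "drone_pitch_hz": 220.0 * (1.0 - 0.5 * x),  # drone sinks as CO2 rises
        "sphere_height": x,                         # 0..1 position along the column
    }

for ppm in (320, 400, 480):
    print(ppm, co2_to_params(ppm))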

Sharon Yang Looking Outwards – 04

The computed audio artwork ‘Déguster l’augmenté’ was created by Erika Marthins, who aims to add a new dimension to the perception of food and the experience of a meal. She explored three desserts and transformed them into generative art pieces; the one I especially admire is a record made of chocolate, which carries the sense of taste over into a different sensation, hearing. Regular records are made of wax; Marthins recreated both the texture and the mechanism, the grooves on the record that create the sound, in chocolate that can emulate it. The collaboration between chefs, artists, and scientists is incredibly innovative and unparalleled, and it draws attention to the creators’ ingenuity and artistic sensibilities. She succeeds in transforming a very common experience and a very common object through technological innovation, allowing us to interact with them on a whole new level.
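The principle behind the chocolate record is the same as for any record: a spiral groove whose small deviations encode the waveform. As a rough illustration (my own simplification, not Marthins’s fabrication process), the Python sketch below computes such a groove path, which could then be milled into a mold.

# Rough illustration of a record groove: a spiral whose radius is
# modulated by the audio waveform (a simplification, not the actual
# fabrication pipeline behind the chocolate record).
import numpy as np

def groove_path(audio, turns=100, r_outer=0.14, r_inner=0.06, depth=0.0005):
    """Return the (x, y) points of a spiral groove modulated by `audio`."""
    n = len(audio)
    theta = np.linspace(0.0, 2 * np.pi * turns, n)
    radius = np.linspace(r_outer, r_inner, n)        # spiral inward, in meters
    radius = radius + depth * audio / (np.max(np.abs(audio)) + 1e-12)
    return radius * np.cos(theta), radius * np.sin(theta)

t = np.linspace(0.0, 1.0, 200_000)
x, y = groove_path(np.sin(2 * np.pi * 440 * t))      # a 440 Hz test tone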

Jessica Timczyk – Looking Outwards 04

The photo above shows screenshots of different students’ work utilizing the Chunity program.

Chunity is a programming environment developed in 2018 by Jack Atherton, a PhD student, and Ge Wang, an associate professor, at Stanford’s Center for Computer Research in Music and Acoustics. It combines the ChucK programming language with Unity’s real-time graphics engine. I find this project extremely interesting because the tools treat audio as the most important component, with visual graphics coming second and following the audio. This is the opposite of how I imagine most programs and tools run, with visual graphics as the main component and audio secondary. In programs written with Chunity, physical changes in the graphics are accompanied by audio and integrated with real-time components. When a final project is created in this environment, the coder can artistically express their vision through how the audio is integrated with the visual graphics.

The photo shows a screenshot of an example of how code is written to include both the audio and visual graphics.
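Chunity’s defining idea, audio as the master clock with graphics following it, can be illustrated outside of Unity as well. The Python sketch below is my own schematic of that pattern and uses none of Chunity’s actual API: time advances in audio samples, and a visual update fires only after enough samples have elapsed for one video frame.

# Schematic of the audio-first pattern (my illustration, not Chunity's API):
# the audio sample counter is the master clock; graphics follow it.
import math

SAMPLE_RATE = 44100
FPS = 60
SAMPLES_PER_FRAME = SAMPLE_RATE // FPS

def synthesize_sample(n):
    """Stand-in for one sample of audio synthesis (a 440 Hz sine)."""
    return math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)

def render_frame(frame_index, amplitude):
    """Stand-in for a graphics update driven by the current audio state."""
    print(f"frame {frame_index}: amplitude {amplitude:+.3f}")

audio_clock = 0
for frame in range(3):                    # three video frames
    amp = 0.0
    for _ in range(SAMPLES_PER_FRAME):    # audio runs first...
        amp = synthesize_sample(audio_clock)
        audio_clock += 1
    render_frame(frame, amp)              # ...then graphics catch up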

Looking Outwards – 04

“Using textiles as electroacoustic transducers” (article by Filip Visnjic)

I thought this work was really interesting because I’ve seen clothing that incorporates lighting, but not sound. This is a project by Esteban and Judit of EJTECH, who wanted to create pieces like this to enhance and explore the possibilities of multi-sensory experiences through textiles. Soft Sound acts as a provocative new instrument for human expression. The main piece is a metal surface embedded in the fabric that emits audio and sonic vibrations. The project is intended as a material innovation, and because of that there are many different possibilities for applying it.

Soft Sounds – Registry Phase 1

Project Page – Prototypes and Different Iterations

Curran Zhang-LookingOutwards-4

This article discusses the merging of music and technology. With the idea of robots being able to do jobs that humans can, many people have begun to wonder whether robots and AI have the ability to create music without the need for humans. Francois Pachet, head of SONY’s computer science lab in Paris, believed his team was very close to programming computers to create melodies by mashing together music produced by Legrand and McCartney. According to the scientist, composer, and author David Cope, music contains instructions that can be synthesized into different yet similar outputs. Cope also designed EMMY, an emulator that creates music similar to a Bach chorale, a Mozart sonata, a Chopin mazurka, or a Joplin rag.
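EMMY is far more sophisticated, but Cope’s intuition, that a corpus of music contains statistical “instructions” which can be recombined into different yet similar output, can be demonstrated with a toy Markov chain in Python. The corpus here is invented for illustration.

# Toy illustration of corpus-driven composition (far simpler than EMMY):
# learn note-to-note transitions from a corpus, then sample a new melody.
import random
from collections import defaultdict

corpus = ["C", "D", "E", "C", "E", "F", "G", "E", "D", "C", "D", "E", "C"]

transitions = defaultdict(list)            # note -> notes that followed it
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(start="C", length=12, seed=0):
    random.seed(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(random.choice(transitions[melody[-1]]))
    return melody

print(generate())  # a new melody statistically similar to the corpus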

This ongoing effort captured my attention because people in this field have a strong desire to link machines to music. In this era, multidisciplinary design and work are applied in every field. Yet art, especially music, is one of the harder fields to combine with machines because their workflows are so different. By merging the two, both fields gain new layers of discovery and understanding. Music that makes people cry, feel happy, or feel emotional would no longer be achievable only by people, but also by machines.

 

Links

https://www.theatlantic.com/entertainment/archive/2014/08/computers-that-compose/374916/

Erin Fuller-LookingOutwards-04

The project “Green Music”, by John Lifton, a London-based artist, was part of the documentary “The Secret Life of Plants” (1979). Lifton produced music from bio-electronic sensing of plants, recording the “stress” of their physical environment, such as light, temperature, and the presence of guests. In this project, computers constantly receive information from sensors attached to the plants and convert the data into music. The work makes both the plants and the humans creators, in that each acts on the other to produce the sound; although there is no tangible interaction with the artwork, a guest’s presence alone can be enough for the plants to react and create different music. I think that’s beautiful.

    Documentary Clip of “Green Music”, 1979
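Lifton’s 1970s hardware is long gone, but the signal path he set up, sensor reading in, musical parameter out, is easy to sketch. The Python below is a deliberately simple stand-in (my mapping, not Lifton’s): a bio-electric reading is quantized to a pentatonic scale so that slow changes in the plant’s state wander melodically.

# Simplified stand-in for sensor-to-music mapping (not Lifton's system):
# quantize a bio-electric reading to a note on a pentatonic scale.

PENTATONIC = [0, 2, 4, 7, 9]               # scale degrees, in semitones

def reading_to_pitch_hz(reading, low=0.0, high=5.0, base_midi=60):
    """Map a sensor voltage in [low, high] to a pentatonic pitch."""
    x = max(0.0, min(1.0, (reading - low) / (high - low)))
    step = int(x * (len(PENTATONIC) * 2 - 1))          # two octaves of steps
    octave, degree = divmod(step, len(PENTATONIC))
    midi = base_midi + 12 * octave + PENTATONIC[degree]
    return 440.0 * 2 ** ((midi - 69) / 12)

for volts in (0.4, 1.7, 3.1, 4.8):
    print(f"{volts:.1f} V -> {reading_to_pitch_hz(volts):.1f} Hz")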

I think this project is admirable simply because it was created so long ago. It is easy to think that computational design is something of this decade or even just this millennium, but this project has shown me that people have been working on and advancing the field for much longer than I previously thought.

Julie Choi – Looking Outwards – 04

The image above shows MULTIVERSE, the eternal birth and death of infinite parallel universes.

This project, Multiverse, embodies the concept of “a system composed of an infinite number of universes that coexist simultaneously outside of our space-time.” Derived from the multiverse theory presented by Lee Smolin, the piece fuses audio and visuals to generate a live, narrative experience. Multiverse was built with openFrameworks and assimilates a continuous stream of digitally created images that display a realistic series of evolutions of the multiverse. A soundtrack is generated from the interaction of the visual elements. In the images, small unidentifiable particles continuously merge and separate, forming bigger particles.

The vertical projection is 7.5 meters tall and has a mirror on the ceiling for a dramatic experience.

The piece is displayed in a pitch-black room that centers all focus on the projection. When audiences stand in front of the 7.5-meter-tall vertical projection, they are able to witness the art of creation.
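The particle behavior described above, small particles merging and separating into bigger ones while the interactions generate the soundtrack, is a classic generative pattern. Here is a bare-bones Python sketch of the merge step (my own reconstruction, not the installation’s openFrameworks code); each merge event is exactly the kind of interaction that could be handed to an audio engine.

# Bare-bones merge-on-contact particle step (illustrative only, not the
# installation's actual openFrameworks code).
import random

class Particle:
    def __init__(self, x, y, mass=1.0):
        self.x, self.y, self.mass = x, y, mass

def step(particles, merge_dist=0.05):
    """Drift every particle, then merge any pair closer than merge_dist.
    Returns the surviving particles and the merge events (sound triggers)."""
    for p in particles:
        p.x += random.uniform(-0.01, 0.01)
        p.y += random.uniform(-0.01, 0.01)
    survivors, events = [], []
    while particles:
        p = particles.pop()
        for q in particles:
            if (p.x - q.x) ** 2 + (p.y - q.y) ** 2 < merge_dist ** 2:
                particles.remove(q)
                p.mass += q.mass                   # the bigger particle survives
                events.append((p.x, p.y, p.mass))  # one sound trigger per merge
                break
        survivors.append(p)
    return survivors, events

swarm = [Particle(random.random(), random.random()) for _ in range(50)]
swarm, sounds = step(swarm)
print(len(swarm), "particles remain;", len(sounds), "merge sounds triggered")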