enwandu-LookingOutwards-04

‘Unnamed SoundSculpture’

Daniel Franke and Cedric Kiefer

‘Unnamed SoundSculpture’ is a computational design project operating at the intersection of sound, art, and technology. The sculpture is generated from the recorded motion data of a real person: Laura Keil, a Berlin-based dancer, was asked to interpret a musical piece – Kreukeltape by Machinefabriek – as closely as possible with the movement of her own body. She was recorded by three depth cameras (Kinect), and the intersection of the images was assembled into a three-dimensional volume (a 3D point cloud), which was then brought into 3ds Max for further rendering. A three-dimensional scene was created, including camera movement controlled by the audio. Through this process, the digital body, consisting of 22,000 points, comes to life.

I couldn’t find much on the code or algorithms used in the creation of this project, but it doesn’t seem that any custom-made software was needed to achieve their goals. Because the sculpture is driven by the music and the movement of the dancer, it captures the artistic sensibility of the performer, and the rendering of the 3D environment and the points making up the human form is truly evocative in motion. I admire the emotional response the project evokes: it captures an essence of performance, in both the music and the performer.
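Since no code was published, here is a minimal sketch of the general pipeline described above: back-projecting a depth image into 3D points and transforming them into a shared world frame so the three Kinect views can be merged. The intrinsics and camera poses are placeholder assumptions, not values from the project.

```python
import numpy as np

# Hypothetical intrinsics for a Kinect-like depth camera (not the project's values).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def depth_to_points(depth, pose):
    """Back-project a depth image (meters) into world-space points.

    pose is an assumed 4x4 camera-to-world transform for one of the cameras.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    pts = pts[pts[:, 2] > 0]          # drop pixels with no depth reading
    return (pose @ pts.T).T[:, :3]    # transform into the shared world frame

# Merging the three views is then just concatenation (a real system would
# also deduplicate and filter; the finished body reportedly has 22,000 points):
# cloud = np.vstack([depth_to_points(d, p) for d, p in zip(depths, poses)])
```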

ctv-looking outwards-04-Sound Art

Jerobeam Fenderson/2014

This person built an algorithm to draw images using the two channels of an oscilloscope. The two channels use addition and subtraction of analog waveforms to create the imagery. I love this piece because he takes a computational approach to rendering drawings. I would love to see this attached to an analog oscilloscope (one whose electron beam illuminates a phosphor-coated screen). Within the past couple of years, I was exposed to the idea of creating interfering waves to control amplitude (I never took calculus). Since then, I have been fascinated with using moiré patterns to create visually dynamic art and to control the density of color values. This work relates because the artist is using the same principles, sending interfering information to create something recognizable to humans. In this class, I would like to learn about generating analog waveforms from static imagery: the inverse of this artist’s work.
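Fenderson crafts his waveforms far more carefully than this, but the underlying principle can be sketched simply, assuming a scope in XY mode where the left audio channel drives the X deflection and the right channel drives Y. A 3:2 frequency ratio between the channels traces a Lissajous figure:

```python
import numpy as np
import wave

RATE = 44100
t = np.linspace(0, 5, RATE * 5, endpoint=False)

# Left channel = X, right channel = Y; the 3:2 ratio draws a Lissajous curve.
x = np.sin(2 * np.pi * 330 * t)
y = np.sin(2 * np.pi * 220 * t + np.pi / 2)

# Interleave the channels as 16-bit samples and write a stereo WAV file
# that can be played into an oscilloscope's two inputs.
stereo = (np.stack([x, y], axis=-1) * 32767).astype(np.int16)

with wave.open("lissajous.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)       # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(stereo.tobytes())
```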

Looking Outward-04

For sound art, I found a project called The Classyfier. This project was created by Benedict Hubener, Stephanie Lee, and Kelvyn Marte at CIID (the Copenhagen Institute of Interaction Design).

The Classyfier is a table with a built-in speaker that detects what type of beverage people are drinking and plays a playlist to match. The speaker picks up on cue noises, such as the clinking of wine glasses or the pop of a beer can opening.

I find this project inspiring because of its use of machine learning. The machine picks up on certain sound characteristics and compares them to a catalogue of sounds preprogrammed into it. From there, it is able to select a suitable playlist and play the right music. In addition, the table has a feature where the user can knock on the tabletop to skip to the next song. This project connects very human understandings, such as genres, mood, and vibe, to a very robotic calculation. From a simple sound characteristic, a computer program is able to emulate the mood of sipping a relaxing glass of wine or cracking open a cold beer with a pal.
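The post doesn’t say how the classifier is actually built, so the sketch below is only one simple way it could work: extract a crude audio feature from the incoming sound and nearest-neighbor match it against a catalogue of cue sounds. The feature values in the catalogue are invented for illustration.

```python
import numpy as np

def spectral_centroid(signal, rate=44100):
    """One crude feature: the 'center of mass' of the signal's spectrum (Hz)."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    return np.sum(freqs * mag) / (np.sum(mag) + 1e-12)

# Hypothetical catalogue of pre-recorded cue sounds, reduced to one
# feature per example (these numbers are made up for the sketch).
catalogue = {
    "wine": [4200.0, 4350.0],   # glass clinks are bright and high
    "beer": [1800.0, 1650.0],   # can pops are duller and lower
    "tea":  [900.0, 1050.0],    # pouring water is lower still
}

def classify(signal):
    """Nearest-neighbor match of the incoming sound against the catalogue."""
    c = spectral_centroid(signal)
    return min(catalogue, key=lambda k: min(abs(c - f) for f in catalogue[k]))
```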

Link

HaeWanPark-LookingOutwards-4

Touch Pianist – Magical Piano In Your Web Browser, 2015

Touch Pianist is a musical performance program created by Batuhan Bozkurt, a Turkish musician and sound engineer. You can perform a variety of timeless classical piano pieces on a computer screen with the keyboard or a touchscreen. To enjoy this performance exercise, you can either visit the website or download the app (available for both Android and iOS). Bozkurt used HTML5 with WebGL and WebAudio, which allow him to present an interactive visualization of popular classical piano pieces. When Touch Pianist was released it became hugely popular, played more than a million times in just two weeks. That fact alone suggests it is pretty interesting and fun to play with. I tried playing it on my computer, and its graphics pair nicely with the music. I like that he made a program that entertains many people through performing classical music, which is easily dismissed as uninteresting or even boring, especially by our generation. So I imagine this program could also be used to renew attitudes toward classical music, whose value is often overlooked.
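The app’s internals aren’t described here, but its core interaction, each tap advancing through a preloaded score, can be sketched with the standard equal-temperament formula for turning note numbers into frequencies a synthesizer could play. The score below is arbitrary, not from the app.

```python
def midi_to_freq(note):
    """Equal temperament: A4 (MIDI note 69) = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

# Each key press or tap advances one step through a preloaded score.
score = [60, 62, 64, 65, 67]          # C4 D4 E4 F4 G4 as MIDI note numbers
for n in score:
    print(f"note {n}: {midi_to_freq(n):.1f} Hz")
```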

Here is a video of it being performed on an iPad.

touchpianist.com

 

aboyle-Looking Outwards-04

For this post, I have chosen to discuss the Flow Machines Project. This project is funded by the European Research Council and coordinated by François Pachet. Their website can be found at http://www.flow-machines.com/. This project uses computers to help compose music in the style of a chosen composer.

I really admire this project’s attempt to emulate creativity. While I can easily understand how computers spit out preexisting melodies, I am impressed that this project aims to generate new music. Since creativity has long been considered the exclusive realm of humanity, it will be interesting to see where this project goes.

The algorithms used in this project are based on Markov models, which capture the statistical properties of sequences and abstract them into mathematical form. Using these models, the computer is then able to generate music of its own. The creators have an evident love for music in general, as is to be expected, but it seems to me that they also have a love for the aspects of music that make it unique and recognizable.
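Flow Machines’ actual models are much richer (they impose style and structural constraints on generation), but a toy first-order Markov chain over notes shows the basic mechanism: learn transition statistics from a corpus, then random-walk them to produce a new sequence.

```python
import random
from collections import defaultdict

def train(melodies):
    """Count note-to-note transitions across a corpus of melodies."""
    table = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table[a].append(b)
    return table

def generate(table, start, length=16):
    """Random-walk the transition table to produce a new melody."""
    out = [start]
    for _ in range(length - 1):
        options = table.get(out[-1])
        if not options:              # dead end: no observed successor
            break
        out.append(random.choice(options))
    return out

# A tiny made-up corpus; real systems train on whole songbooks.
corpus = [["C", "E", "G", "E", "C"], ["C", "D", "E", "G", "E", "D", "C"]]
print(generate(train(corpus), start="C", length=8))
```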

heeseoc-LookingOutwards-04

Volume, an installation piece by the architecture collective Softlab, is an interactive cube. It is made up of grids of responsive mirrors that redirect the light and sound of the people surrounding the piece. It is interesting in that the motive of the piece is to capture the excitement of festival-goers. Not only did I like the sound it makes, but I was also intrigued by its scale and immersive visuals. The mirrors are designed to rotate and track people’s movements as they walk around the installation, which creates a sense of infinite depth. The interface for the installation was built in Processing, tiling the coordinates from the cameras in order to visualize the interaction and convert the input into ambient sound.
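The installation’s Processing source isn’t available to me, so this is only a minimal sketch of the tracking behavior described above: each mirror in a grid rotates to face the nearest tracked person. The grid layout and coordinates are invented.

```python
import math

# Hypothetical 2D layout: mirror positions on a grid; people come from
# the camera tracking system as (x, y) coordinates.
mirrors = [(x, y) for x in range(4) for y in range(4)]

def mirror_angles(people):
    """Aim each mirror at the nearest tracked person; returns radians."""
    angles = []
    for mx, my in mirrors:
        px, py = min(people, key=lambda p: (p[0] - mx) ** 2 + (p[1] - my) ** 2)
        angles.append(math.atan2(py - my, px - mx))
    return angles

# Two visitors walking around the cube:
print(mirror_angles([(1.5, 2.0), (3.0, 0.5)]))
```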

Volume

elizabew – looking outwards – 04 – SectionE

Sugarcube: MIDI and MaxMSP Controller

 

What I really like about this project is how satisfying it looks to play with, touch, and listen to. Tilt sensors make the sound go up or down as you tilt the entire machine, almost as if you were holding a bowl and rolling a ball around in it. It really creates a bridge between physically doing something and technology. I also really like the satisfying buttons and the shape of the whole box; it looks like something I could keep on the side of my desk just to push the buttons for fun.

According to the maker, the project is a MIDI controller: it generates Musical Instrument Digital Interface (MIDI) data and sends it to MIDI-enabled devices.

The artist was inspired by the monome, a similar interactive instrument. However, he wanted to make something cheaper that also didn’t rely on a computer to process everything into MIDI.
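MIDI itself is just short byte messages, which is why a small standalone device can produce it without a computer. As a minimal sketch, a tilt reading could be scaled into a three-byte Control Change message like this (the scaling and controller number are my assumptions, not the Sugarcube’s firmware):

```python
def tilt_to_midi_cc(tilt_x, channel=0, controller=1):
    """Map a tilt reading in [-1, 1] to a 3-byte MIDI Control Change message.

    Status byte 0xB0 means Control Change; the two data bytes must be 0-127.
    """
    value = max(0, min(127, int((tilt_x + 1) / 2 * 127)))
    return bytes([0xB0 | channel, controller, value])

# Tilting from one side to the other sweeps the controller value 0..127,
# like rolling a ball around the inside of a bowl.
for t in (-1.0, 0.0, 1.0):
    print(t, tilt_to_midi_cc(t).hex())
```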

Click Here for Further Information on the Project

LookingOutwards04-jooheek

The Creatures of Prometheus – Generative visualisation of Beethoven’s ballet with Houdini

By: Simon Russell

The visual outcome the algorithm creates

This project shows the relationship between audio and visuals through computation. Specifically, it creates visuals for a Beethoven ballet. An algorithm detects the pitch and amplitude of the music and produces different shapes and colors for different musical notes. It also detects how high or low each note is compared to the previous note, and uses this to determine how high the shapes sit in the composition.
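Russell built the piece in Houdini; the sketch below only restates the mapping described above in code form, with the specific scalings as my own assumptions: pitch picks a color, amplitude a size, and the interval from the previous note a vertical offset.

```python
def note_to_visual(pitch, amplitude, prev_pitch):
    """Map one note event to visual parameters (scalings are assumptions)."""
    return {
        "hue": (pitch % 12) / 12.0,             # pitch class -> color wheel
        "size": amplitude,                      # louder -> bigger shape
        "height": (pitch - prev_pitch) * 10.0,  # interval -> vertical offset
    }

# An E4 (MIDI 64) following a C4 (MIDI 60) at moderate loudness:
print(note_to_visual(pitch=64, amplitude=0.8, prev_pitch=60))
```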

I found this project interesting because it shows how computation can create a connection between audio and visuals, which creates both music and visual art at the same time. It shows that music doesn’t always have to be heard and art doesn’t always have to be seen.

Screenshots of the program at work.

However, what’s interesting is that he did not get the algorithm perfect. The first four bars of the computation are incorrect, but he states that he just wanted to get it finished and released rather than perfect. He says, “It was quite a fun one because I plugged in the data, hit render and then didn’t really know what would come out. Usually I’d hand animate every last detail but this one is essentially generative.” It’s fun to know that this project was someone’s playful experiment rather than a deadly serious production.

Site: http://www.creativeapplications.net/sound/the-creatures-of-prometheus-generative-visualisation-of-beethovens-ballet-with-houdini/

sunmink- Looking Outwards- 04

The word ‘entropy’ comes from a Greek word meaning transformation. The sound artist Fraction collaborated with Louis-Philippe St-Arnault, Nature Graphique, and Creation Ex Nihilo on a mesmerizing music installation, ‘Entropia.’ The installation that produces the sound includes over 6,000 individually controllable pixels that can be transformed on command.

When playing his music, Fraction absorbs the audience by immersing them in an intense audiovisual experience. He uses sound spatialization, direct light, and projections to let the audience fully experience ‘Entropia.’ I admire the artifact itself because it is an interactive work made with a combination of MadMapper, TouchDesigner, Resolume, Max/MSP, Ableton Live, Lemur, and IanniX, which were used to create the algorithms that illuminate the spherical structure, and because of the RGB LED strips that form its skeleton. The experience of ‘Entropia’ is created through an algorithm that generates signals from the soundtrack; these signals are converted into the luminous behavior seen during the performance.
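None of the performance’s code is public as far as I know, so the following is only a sketch of the general signal path described above: analyze each audio frame and convert it into per-pixel RGB values for the LED skeleton. The loudness-to-brightness and spectral-tilt-to-color mapping is invented for illustration.

```python
import numpy as np

NUM_PIXELS = 6000  # the installation reportedly has over 6,000 pixels

def frame_to_leds(samples):
    """Convert one audio frame into RGB values for the whole LED skeleton.

    The mapping (loudness -> brightness, spectral tilt -> color) is an
    assumption for illustration, not the performance's actual algorithm.
    """
    loudness = np.sqrt(np.mean(samples ** 2))            # RMS level
    mag = np.abs(np.fft.rfft(samples))
    tilt = np.sum(mag[len(mag) // 2:]) / (np.sum(mag) + 1e-12)
    r = min(int(255 * loudness * tilt), 255)         # bright highs -> red
    b = min(int(255 * loudness * (1 - tilt)), 255)   # bass-heavy -> blue
    return np.tile([r, 0, b], (NUM_PIXELS, 1))

# One frame of quiet noise as a stand-in for the soundtrack:
leds = frame_to_leds(np.random.randn(1024) * 0.1)
print(leds.shape, leds[0])
```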

 

egrady-LookingOutwards-04

https://www.theatlantic.com/entertainment/archive/2014/08/computers-that-compose/374916/

“When Robots Write Songs” is a particularly interesting article, and the thing that stuck out to me while reading it was the computational music experiment performed by David Cope. Cope, a computer scientist and composer, has looked at the role that computer science can play in composing music and has tried to combine the two disciplines by using a program to produce music in the style of whatever it seeks to emulate. The program is called “Emmy,” a name drawn from the acronym of his project “Experiments in Musical Intelligence” (EMI). Emmy analyzes the patterns in the songs it is fed and creates new work in the style of the music it analyzed, often producing pieces that sound much like the original artists.
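Cope has described his system’s approach as recombinancy: segmenting existing works and recombining the segments under constraints so the result stays stylistically coherent. The toy sketch below (mine, not Cope’s code) chains whole measures whose boundary notes agree, a vastly simplified version of that idea.

```python
import random

def recombine(measures, n=4):
    """Chain whole measures whose boundary notes agree: each next measure
    must start on the note the current one ends on."""
    by_first = {}
    for m in measures:
        by_first.setdefault(m[0], []).append(m)
    current = random.choice(measures)
    out = list(current)
    for _ in range(n - 1):
        options = by_first.get(current[-1])
        if not options:          # no measure starts on this note
            break
        current = random.choice(options)
        out += current[1:]       # skip the shared boundary note
    return out

# Hypothetical measures (lists of note names) standing in for a corpus.
corpus = [["C", "E", "G"], ["G", "F", "E"], ["E", "D", "C"], ["C", "D", "E"]]
print(recombine(corpus))
```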

I find this experiment particularly inspiring, partly because of how scary it could prove to be. It also raises a lot of questions about the nature of music and musical expression, such as: how can music be made to provoke emotion? When music made by flesh and blood provokes emotion in those who listen, what does it mean when a computer algorithm creates music that provokes the same emotion, or does it better? I don’t think that human musicians will ever be replaced by robots, but the idea itself is slightly unsettling. The alternative view is that this is a celebration of human accomplishment: we have finally found a way for the automated to express the seemingly unautomatable (music). Music requires time and dedication and is not usually considered alongside programming and computer science; this project attempts to bridge the gap between music and other seemingly unrelated disciplines.