vib-ribbon is a rhythm game created in 1999 by Masaya Matsuura, a Japanese musician and game designer. It was one of fourteen games exhibited in the first MoMA collection of interactive media. The game uses player-selected music to generate its levels: obstacles and landscapes are generated from the sound waves of whatever music is playing in the game.
Personally, I really like the game’s visual style and its innovative use of semi-random generation, which was decidedly ahead of its time. I think the game exhibits a willingness to experiment that most video games would shy away from. Both the visual style and the use of musical level generation also speak to Matsuura’s artistic sensibility: the game is clearly inspired by vector graphics, and the abstract, cartoonesque character design owes a lot to the work of artists like Takashi Murakami.
I don’t know exactly how the algorithms in the game work, but here’s my guess: since computers can visualize sound waves, and each point of a sound wave has certain numerical values associated with it, the game probably generates simple shapes based on those values.
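A minimal sketch of that guess in Python, assuming a list of raw audio samples in the range [-1.0, 1.0]. The window size, threshold, and obstacle names here are all made-up illustrations, not vib-ribbon’s actual algorithm.

```python
# Hypothetical sketch: derive one obstacle per window of audio samples.
# Loud windows become obstacles; quiet windows stay flat ground.

def obstacles_from_samples(samples, window=4, threshold=0.5):
    """Return one obstacle type per window of samples."""
    obstacles = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        peak = max(abs(s) for s in chunk)
        if peak > threshold:
            # Deep negative swings become pits, positive peaks become blocks.
            obstacles.append("pit" if min(chunk) < 0 else "block")
        else:
            obstacles.append("flat")
    return obstacles

print(obstacles_from_samples([0.1, -0.2, 0.9, 0.3, -0.8, 0.1, 0.0, 0.05]))
```

Quieter songs would produce mostly flat ribbon, and louder, more percussive songs would produce denser obstacle courses, which roughly matches how the game feels to play.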
This piece, Dazzled Project by David Dalmmazzo, combines different types of sounds and weaves them together. I really admire the combination of the visual component of this piece with the sound background. At first, the sound was quite disturbing, but then a high-toned, string-like sound came in and changed the feel; I like that transition a lot. The visuals also look like a galaxy, which is cool. The algorithm is not explained, but I suppose the movement of the lines creates the sounds so that the two stay in sync. The creator’s other artwork shows that he likes changing elements and progressively themed art. The strings slowly accumulate in the center and create a rotating sphere.
I really like this week’s subject of computational soundscapes for many reasons. For one, I think it is so neat that we can bring digital substance into reality by converting it in many ways, sound being one of them. The other reason I love this week’s Looking Outwards is that I myself make digital music on numerous platforms like FL Studio, GarageBand, and Pro Tools! Being able to see what you are changing, and being able to manipulate something artificial so that something real takes form, is a wonder in itself. Tilt Brush takes another step into digital reality by creating a new combination of music, art, and animation. It allows a person to paint in virtual reality and, as a result, create sound. The video below paints a better picture than I can, but it truly is incredible that some people can see what others cannot. Drawing sound is a concept I had never even thought of until I saw this video. Now I am even more intrigued, and I want to do more research on it even after this week.
Creator: Tilt Brush Article by: The Creators Project
Carolina is an Android application that maps a soundtrack to a changing landscape as one drives through it. It plays to the artist Kimbra’s ‘road to Carolina’. The camera’s movement is mapped to a spline. Figures pop up as 2D abstractions of the landscape and fly by just like landscape features. The figures are triggered by instruments in the music track. The combination of simple 2D forms triggered by a soundtrack and moving in a 3D space makes for a rich piece of work.
The application uses a combination of Two.js for the landscape abstractions and Three.js for the movement of the camera along the spline. The 2D-3D interaction is evident in the application. The various musical instruments are represented by specific shapes, which in turn trigger the ‘landscape features’ in the camera view, thereby linking each musical instrument to a distinct 2D landscape feature.
The final form is an animated journey through a landscape whose features appear in consonance with the soundtrack. It is a synthesis of music and art in a digital platform just as the artist sought to do.
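The instrument-to-shape mapping described above can be sketched very simply. This is a hypothetical illustration in Python, not the application’s actual Two.js/Three.js code; the instrument names, shape names, and the fallback are all assumptions.

```python
# Hypothetical mapping from instrument events to landscape features.
# Unknown instruments fall back to a generic "cloud" shape (an assumption).

INSTRUMENT_SHAPES = {
    "drums": "mountain",
    "bass": "hill",
    "synth": "tree",
}

def features_for_events(events):
    """Map a list of (time, instrument) events to (time, feature) pairs."""
    return [(t, INSTRUMENT_SHAPES.get(inst, "cloud")) for t, inst in events]

print(features_for_events([(0.0, "drums"), (0.5, "vocals")]))
```

In the real application each triggered feature would then be placed along the camera’s spline so it flies past at the moment its instrument sounds.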
Google created Tilt Brush in 2014, which turns rooms into canvases as a way to paint in virtual reality. They also added a feature that enables sound to affect the paint’s movement.
As seen in the video above, this device allows you to essentially create your own world and bring it to life using sound. I am an avid daydreamer, and to create the worlds that I think of and have the ability to actually interact with them would be amazing.
I was unable to find much information about the algorithms used, but since the sound-driven movement is an update and not a new device, I believe it builds on the original and incorporates the sound through a new set of algorithms combined with the pre-existing ones.
There are also links to the articles that I found about this below.
I found a sound project named “Mew” to be a very interesting experiment in interactive computational art. It was designed by students of the Royal College of Art in London, UK. It is a responsive object containing a distance sensor that lets “Mew” interact with people when they walk towards it and encourages them to make further interactions. When they stroke the fur, it emits animal sounds according to the angle and pressure of the touch. Observers can manipulate the sound while experiencing the touch of the fur. According to one of the students’ websites, they created the project using sensors connected to an Arduino. They had four circuits in total, each connecting a sensor. They created the sound effects using Max/MSP, and there is a Mac mini embedded inside “Mew”. The project also incorporates aesthetic values. Through the simple form of a stool, “Mew” reminds the observer that the animal sounds are programmed, even though it gives the feeling of being alive. The fur grabs the attention as it should, making the project interesting.
The project that inspired me this week was Mesa Musical Shadows by the Daily tous les jours studio. I admired this because just by using your own shadow you can make a sound, and the length of the shadow determines the dynamics of the sound. The system they used comprises six control nodes running 47 sensors, built from an Arduino Mega, ethernet shields, and custom connector shields. They also managed to install a weatherproof speaker underground. Their artistic sensibility shows in how they used shadows, light, and technology to create music.
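The shadow-to-dynamics idea can be sketched in a few lines. This is a hypothetical Python illustration, not the studio’s Arduino code; the length range and the linear mapping are assumptions.

```python
# Hypothetical sketch: map a sensed shadow length (cm) to a playback
# volume in [0.0, 1.0]. Longer shadows -> louder sound.

def volume_from_shadow(length_cm, min_len=20.0, max_len=300.0):
    """Linearly map shadow length to volume, clamped to [0.0, 1.0]."""
    t = (length_cm - min_len) / (max_len - min_len)
    return max(0.0, min(1.0, t))

print(volume_from_shadow(160.0))  # midpoint of the range -> 0.5
```

In the installation, each of the 47 sensors would feed a reading like this to its control node, which then sets the level of the underground speakers.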
Ryoji Ikeda is one of my favorite artists. He is a really well-known microsound musician who currently lives in New York, and because of his installations he is now also well known as a visual artist.
Microsound fields are commonly referred to as ‘white noise’ and are used as a form of therapy. Ikeda uses sine waves to present mathematical patterns and pure electronic sounds.
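A pure sine tone like the ones Ikeda works with is simple to generate digitally. This is a minimal Python sketch; the sample rate and frequency are arbitrary choices for illustration, not Ikeda’s.

```python
import math

def sine_wave(freq_hz, duration_s, sample_rate=44100):
    """Generate samples of a pure sine tone in [-1.0, 1.0]."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

tone = sine_wave(440.0, 0.01)  # 10 ms of a 440 Hz tone
print(len(tone))
```

Because a sine wave contains only a single frequency, stacking or sequencing these tones exposes the kind of bare mathematical structure Ikeda’s music is known for.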
His album [+/-] is one of his most famous, and it was sensational because he got rid of melody and expressed music using low-frequency sound.
He made his installation work very interactive, so that people actually feel as though they are merged with the art piece, and the screens’ motion attracts viewers and collaborates with the mysterious sound really well.
Earthworks is a multimedia installation by the artists Ruth Jarman and Joe Gerhardt that uses the natural sounds of the earth to create computer-generated visuals to accompany them.
For the festival Sonarplanta, the artists placed the five-channel video installation inside a huge round vaulted space that encompasses the viewer.
The artists take seismic data from the earth’s tectonic movements and use that data to create sounds, producing an almost time-lapse of the earth moving. They then use those sounds to generate psychedelic visuals that encompass the viewer.
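One common step in sonifying seismic data is normalizing the raw readings to the [-1.0, 1.0] range of an audio signal, then playing them back far faster than they were recorded so sub-audible tremors shift into hearing range. This is a hypothetical Python sketch of that normalization step, not the artists’ actual pipeline.

```python
# Hypothetical sketch: scale raw seismic readings to audio range.
# Playing the result back at, say, thousands of times the recording
# rate (an assumption) would shift slow tremors into audible pitch.

def normalize(samples):
    """Scale a list of readings so the largest magnitude is 1.0."""
    peak = max(abs(s) for s in samples) or 1.0  # avoid dividing by zero
    return [s / peak for s in samples]

print(normalize([0.5, -2.0, 1.0]))
```

The same numbers can then drive the visuals directly, which is exactly the breakdown-and-buildup of data the piece plays with.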
What I find inspiring about this piece is that they took something they were interested in, the tectonic movements of the earth, and turned it into numbers; they then took the numbers and turned them into moving data that influenced sounds, which in turn trigger visuals, or create a whole new slew of data. I think this kind of process is really inspiring, and the breakdown and buildup of information to create a piece of art is really interesting! It puts visuals to something that you know about but don’t necessarily get to see or hear, and it brings your awareness to this whole other process that is constantly happening around you.
Fantaisie-Impromptu, by Frederic Chopin, accompanied by a graphical score by Stephen Malinowski
The animated graphical score project known as the Music Animation Machine was created by Stephen Malinowski in 1974 and is still an ongoing project. The composer and programmer was overwhelmed by reading the score of a Brandenburg concerto and wanted to display complex music in a way that listeners of all levels could easily follow. He decided to use animation to visualize musical pieces by writing a program on an Atari using BASIC. The result was a video that synchronized with the music using movement, shape, and color. He explains that the notes we hear most recently are most vivid while notes heard a while ago seem to fade away, so he uses solid shapes to represent present notes and hollow shapes to represent faded notes. He also uses different shapes for the different types of instruments that interact together. In 2012, his software could generate animation in a live performance. He now uses frame-rendering software that inputs a MIDI file (software unknown). A music appreciation teacher has used his work to teach students to grasp convoluted pieces, and the results have been significant.

What I admire most about this project is that it conveys music visually through one program. The animation Malinowski makes reminds me of dance. The programming that he does is actually choreography: he pays attention to the position and movement of different elements while following a certain rhythm and theme. This makes me wonder whether a single algorithm could be used to generate dance and other visual art.
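The solid-versus-hollow idea can be sketched from MIDI note timings. This is a hypothetical Python illustration of the recency rule described above, not Malinowski’s actual software; the fade duration is an assumption.

```python
# Hypothetical sketch: choose how to draw a note based on recency.
# Notes sounding now are solid; recently finished notes are hollow;
# older notes are no longer drawn. fade_s is an assumed parameter.

def shape_for_note(note_start, note_end, now, fade_s=2.0):
    """Return 'solid', 'hollow', or None for a note at time `now`."""
    if note_start <= now <= note_end:
        return "solid"
    if 0 < now - note_end <= fade_s:
        return "hollow"
    return None

print(shape_for_note(0.0, 1.0, 0.5))   # note is sounding
print(shape_for_note(0.0, 1.0, 2.5))   # note recently ended
```

Run over every note event in a MIDI file, frame by frame, a rule like this yields exactly the fading trail of shapes the videos show.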