Sihand – LookingOutwards – 04 – Tilt Brush

Tilt Brush

Tilt Brush has been around for a while now. It is mostly used to create 3D designs with Google VR; however, “it is only starting to wow the masses,” according to the Creators Project. Google has just equipped the tool with brushes that respond to audio, which allow users to elevate VR painting to a whole new level. In this age of VR, designers might not be able to paint with the color of the wind, but they are certainly capable of painting with the very sound of the wind.

The exact algorithm of this mind-blowing drawing tool is kept from the public, but it is clear that it transforms the beat of whatever music you choose into pulses in the strokes of your drawings. The audio reactiveness might seem like a minor addition to the brushes, but the possibilities it opens up are unmatched. With the audio element, we now have yet another reason to engage in an artistic experience in VR beyond what the silent version offered.
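
Since Google has not published the algorithm, here is a minimal sketch (in Python, with made-up names) of what an audio-reactive brush could be doing under the hood: measure the loudness of each slice of the music and let it modulate the stroke width, so the line pulses with the beat.

```python
import numpy as np

def audio_reactive_widths(samples, base_width=4.0, gain=10.0, window=1024):
    """Map the loudness (RMS) of successive audio slices to brush-stroke
    widths, so a stroke 'pulses' with the beat. All names are hypothetical."""
    widths = []
    for i in range(len(samples) // window):
        chunk = samples[i * window:(i + 1) * window]
        rms = np.sqrt(np.mean(chunk ** 2))  # loudness of this slice
        widths.append(base_width + gain * rms)
    return widths

# A 440 Hz tone with a slow 2 Hz amplitude pulse: the stroke width pulses too.
t = np.linspace(0, 1, 44100)
tone = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 2 * t))
print(audio_reactive_widths(tone)[:5])
```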


Learn more about Tilt Brush here

Sarita Chen – Looking Outwards – 4

Max Cooper is a musician in the techno and electronic genres. One of his projects is an immersive 4D musical experience. It’s kind of like virtual reality, except the focus is more on sound than visuals.

“The 4D system, and a lot of the work I do with my music in terms of spatiality and trying to create immersive spaces and structures within them, has to do with psycho-acoustics and the power of sound to create our perception of the reality we’re in,” he said. “The idea is to create an alternative reality that can be very beautiful, relaxing, jarring, and even uncomfortable, but still an interesting experience that can communicate a new idea.” – Cooper, about the project.

For the 4D effect, Cooper used software that enabled him to design and build a three-dimensional representation of the music. The sounds can take on any shape or form inside this three-dimensional space. The software also calculates the effect of each speaker in order to create variation in the sounds.
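
The 4D Sound software itself is not public, but the core idea of placing a sound in space can be sketched simply: give each speaker a gain based on its distance from the virtual source. This toy example is my own assumption about the concept, not Cooper’s actual system.

```python
import math

def speaker_gains(source_pos, speaker_positions, rolloff=1.0):
    """Toy spatializer: the closer a virtual sound source is to a speaker,
    the louder that speaker plays it (inverse-distance amplitude panning)."""
    gains = []
    for sp in speaker_positions:
        d = math.dist(source_pos, sp)
        gains.append(1.0 / (1.0 + rolloff * d))
    total = sum(gains)
    return [g / total for g in gains]  # normalize so overall level is constant

# Four speakers at the corners of a room; a source drifting toward one corner.
speakers = [(0, 0, 0), (4, 0, 0), (0, 4, 0), (4, 4, 0)]
print(speaker_gains((1, 1, 1.5), speakers))
```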

What I admire about the project is the level of innovation it took to create an entirely original musical experience. It seems that he spent a lot of time and effort thinking about the shape of the sounds. The sound system used is known as 4D Sound and was created by a group based in Amsterdam.

ShanWang-LookingOutwards-04

(sample code for Spicule)

Created by Alex McLean, Spicule is an album released on a Pi Zero with a high-quality DAC in a custom case, which allows users to remix or rework the music using the TidalCycles live coding environment. The artist created the free and open-source software TidalCycles with some friends; users can join his live streaming sessions and watch how he builds up rhythms and patterns with code. Different beats and instrument sounds are generated based on the code, which gives users the freedom to play, experiment, and create unique music of their own.

I found the project extremely fascinating because of the unlimited possibilities it provides. With parametric control over sound at its most basic units (pitch, rhythm, etc.), access to music composition and experimentation would no longer be limited to a small crowd.
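
Real TidalCycles patterns are written in Haskell and are far more expressive, but a toy Python version of its mini-notation (a guess at the flavor, not the actual implementation) shows how a short string of code can become a rhythm:

```python
def mini_notation(pattern, cycle_length=1.0):
    """Expand a Tidal-style mini-notation string like "bd sn bd sn" into
    (onset_time, sample_name) events, dividing one cycle evenly.
    "~" marks a rest; everything else names a drum sample."""
    names = pattern.split()
    step = cycle_length / len(names)
    return [(i * step, name) for i, name in enumerate(names) if name != "~"]

print(mini_notation("bd ~ sn bd"))
# [(0.0, 'bd'), (0.5, 'sn'), (0.75, 'bd')]
```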

(Alex McLean performing live)

Kyle Lee Looking Outward 04

Nightingale and Canary from Andy Thomas on Vimeo.

I looked at Andy Thomas’ sound visualizations and particularly admire how he was able to process a sound into a visual medium. What surprised me was how something seemingly simple, like a bird call we have all heard many times before, is actually quite intricate in its sound, vibrations, pitch, and volume. This visualization and the additional visual information truly helped me understand the feeling of the bird call, the outdoors, and nature itself.

I do not know exactly how the auditory information was processed or what it was processed with; I couldn’t find any specific information on that. But I suppose the recorded audio was used to dictate the drawing of different shapes, and those shapes would all have different forms, colors, motion, and frequencies that are likewise influenced by the bird call.
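
To make that guess concrete, here is a rough sketch of how recorded audio could dictate shapes: take the spectrum of each slice of the call and map its dominant pitch and loudness to a shape’s size, color, and motion. All of these mappings are my own invention, not Thomas’ actual process.

```python
import numpy as np

def call_to_shape(window, sample_rate=44100):
    """Analyze one slice of a bird call and return parameters a renderer
    could use to draw a shape. The mappings are illustrative guesses."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), 1.0 / sample_rate)
    pitch = freqs[np.argmax(spectrum)]        # dominant frequency
    loudness = np.sqrt(np.mean(window ** 2))  # RMS volume
    return {
        "size": 10 + 200 * loudness,       # louder call -> bigger form
        "hue": (pitch % 1000) / 1000,      # pitch picks the color
        "jitter": loudness * pitch / 1e4,  # motion/frequency of the form
    }

# A synthetic chirp stands in for a recorded bird call here.
t = np.linspace(0, 0.05, 2205)
chirp = 0.3 * np.sin(2 * np.pi * 3000 * t)
print(call_to_shape(chirp))
```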

Given how abstract sound is, I find Thomas’ attempt to make sound visually tangible remarkable. Not only did he effectively bring out the beauty in a bird call, but he also enhanced it in my opinion.

GraceCha – LookingOutwards – 4

ENTROPIA

Entropia is Fraction’s work of 3D ambisonic experimental music, created with Louis-Philippe St Arnault, Nature Graphique, and Creation Ex Nihilo, and presented during the IX Symposium on immersive experiences (May 2015).


Apart from the fact that I was drawn to the name of the project “Entropia” (it reminds me of entropy), I was drawn to the magnitude and out-of-this-worldness of the sound artist “Fraction” (or, in other words, the eeriness). I was really impressed by the marriage of many aspects of the senses (audio, physical, and 360° visuals) to give an interactive experience to the audience, who are lying down on their backs!

The immersive electronic music (performed live by the man in the globe) is translated into “physical sound” through complex lighting systems that are manipulated by the sound data in real time. It’s almost like a conversation between the light and the sound.
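
The Entropia team has not documented their pipeline, but installations like this often link the audio engine to the lights over OSC messages. The address and port below are hypothetical stand-ins; only the python-osc calls themselves are real.

```python
from pythonosc.udp_client import SimpleUDPClient

def send_light_levels(amplitudes, host="127.0.0.1", port=9000):
    """Forward audio amplitude readings to a lighting controller over OSC.
    The "/dome/light" address is made up for this sketch."""
    client = SimpleUDPClient(host, port)
    for amp in amplitudes:
        brightness = min(1.0, amp * 2.0)  # clip loud peaks to full brightness
        client.send_message("/dome/light", brightness)

send_light_levels([0.1, 0.4, 0.9, 0.3])
```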

Light reacts to the sound data coming from the dome
3D interactive experience for viewers

Christine Kim – Looking Outwards-04

Semiconductor’s ‘Earthworks‘ for SonarPLANTA

The art duo Semiconductor, Ruth Jarman and Joe Gerhardt, created Earthworks, an installation that replicates the images and sounds of the Earth’s dynamic movement. As the Earth moves, it creates visuals in waveforms of different colors. Jarman and Gerhardt worked with colored layers of sand, recorded time-lapses of Earth movements that unfold over thousands of years, and then worked with the seismic data. They record the seismic data, and that digital information becomes a waveform that translates to sound. The main tool for this project was MATLAB (Matrix Laboratory), which packs all the data, such as location, instrument, frequency, and timeframe, into one packet. I suppose those data, along with the sound recording of the Earth’s movement, were computed to create visuals that represent how the Earth has been moving for thousands of years. I could not find the exact algorithm that they used, but because sound generates the visuals in this project, you can assume that sound determines how the visuals are formed.

This project allows one to connect with and feel the Earth. It is so interesting that Jarman and Gerhardt were able to literally record the sound of the Earth and from that create images of the Earth’s movement and formation. Their passion for nature, technology, and science clearly shows through this project and leads others to question and explore the Earth.
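
Their exact MATLAB pipeline is not public, but the standard trick for making seismic data audible is time compression: ground motion is far too slow to hear, so you write the trace out at an audio sample rate. A rough Python sketch of that idea (not Semiconductor’s actual code):

```python
import numpy as np
import wave

def sonify_seismic(trace, rate=44100, out="earth.wav"):
    """Turn a slow seismic trace into audible sound. Writing the samples
    out at an audio rate compresses days of ground motion into seconds,
    which is what shifts it into the audible range."""
    norm = trace / np.max(np.abs(trace))   # scale to [-1, 1]
    pcm = (norm * 32767).astype(np.int16)  # 16-bit PCM
    with wave.open(out, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(rate)
        f.writeframes(pcm.tobytes())

# Synthetic stand-in for a seismic trace: a slow rumble plus a sharp event.
t = np.linspace(0, 10, 200000)
trace = 0.2 * np.sin(2 * np.pi * 0.5 * t) + np.exp(-((t - 5) ** 2) * 50)
sonify_seismic(trace)
```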

Joe Gerhardt and Ruth Jarman at University of Barcelona with the study model of sand

Earthworks

Sofia Syjuco – Looking Outwards-04

Bach Style Chorale
David Cope
2012

A project that particularly interested me was David Cope’s EMI program, or Experiments in Musical Intelligence. I really admire Cope’s relationship to the project: he treats it not as a way to exploit loopholes in human expectations of music, or as something purely technical, but as a natural progression of how we continue to understand composing music. I admire this because new media arts too often have a controversy surrounding them, and people ask questions like: is this even real art anymore? Can this still count as being creative if the computer does it for you? I personally think these questions are ridiculous, and I really admire how David Cope’s artistic sensibilities and thoughts on the subject manifest in the final form: something that generates music based on data, but ultimately contains mistakes or problems that humanize it in some way. It blurs the line between what we consider a pristine, soulless work made by a machine and a piece of music, perhaps not all too skillfully made but still interesting, created by something affectionately nicknamed “Emmy”.
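
EMI’s actual analysis is famously deep, but the spirit of data-driven, recombinant composition can be shown with something as small as a first-order Markov chain over notes. This is my simplification for illustration, not Cope’s method:

```python
import random

def markov_compose(melody, length=16, seed=0):
    """Learn which note tends to follow which in a source melody, then
    generate a new line by walking those transitions. The 'mistakes' such
    a naive model makes are part of what humanizes the output."""
    random.seed(seed)
    transitions = {}
    for a, b in zip(melody, melody[1:]):
        transitions.setdefault(a, []).append(b)
    note = melody[0]
    out = [note]
    for _ in range(length - 1):
        note = random.choice(transitions.get(note, melody))
        out.append(note)
    return out

# A fragment in the style of a chorale soprano line (MIDI note numbers).
print(markov_compose([67, 69, 71, 72, 71, 69, 67, 69, 71, 69, 67]))
```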

rgriswol – LookingOutwards-04

Alex McLean, also known as Yaxu, released an album in 2016 called Spicule. What is unique about this album is that it is released as a live coding device on a Pi Zero; you can also download it digitally. You can play the mastered Spicule tracks “over high quality line outputs,” and you can also plug the device into your computer and “live code algorithmic patterns” to change Yaxu’s tracks and create “special versions” of them. Yaxu also created TidalCycles, an open-source live coding environment based on his 16 years of experience making algorithmic dance music. It makes sound through “SuperDirt,” which is itself live-codeable, and it is implemented in Haskell, a “pure functional programming language” that allows patterns to be created and combined with code. The patterns are functions of rational time. Because the project is open source, many musicians can now use it. Crowdfunding recently ended in August; contributors got access to the album, pattern reel, and/or device depending on how much they donated.
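
“Patterns as functions of rational time” is a precise idea that can be restated outside Haskell. Here is a small Python approximation (using exact fractions) of what it means to query a repeating pattern over a span of time; the event representation is my own simplification of Tidal’s richer model.

```python
from fractions import Fraction

def query(pattern, start, end):
    """A pattern is a function from a span of (rational) time to the events
    inside it. Here a 'pattern' is just a list of (onset, name) pairs that
    repeats every cycle."""
    events = []
    cycle = int(start)
    while cycle < end:
        for onset, name in pattern:
            t = cycle + onset
            if start <= t < end:
                events.append((t, name))
        cycle += 1
    return events

drums = [(Fraction(0), "bd"), (Fraction(1, 2), "sn")]
# Query one and a half cycles: the snare of cycle 0, then all of cycle 1.
print(query(drums, Fraction(1, 2), Fraction(2)))
```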

Michal Luria – Looking Outwards – 04

AUDFIT

The project I would like to present this week is called “Audfit,” a project that combines sound, choreography, and computing. It is a performance by a dancer who is connected to an “audio costume.” Whenever the dancer moves, the movement sensor attached to that specific body part triggers a sound sequence in the audience’s headphones. Furthermore, the audience can choose from three audio channels to listen to while watching the choreography.

AUDFIT in action – a dancer with the audio costume and the sound it produces. credit: Strange Loop (vimeo)

What I like about this project is that the sound is generated according to the dancer’s movement, so there is an interesting corresponding relationship between the two. Also, the audience is able to choose between channels, and their choice of channel changes their experience, as the music is a significant part of the artistic message. Therefore, each person in the audience would have a slightly different experience of the performance, depending on when they switched from channel to channel.

The technology behind this project is movement sensors connected to the dancer’s costume. Each time a sensor indicates motion, it triggers a sound sequence. These sound sequences are then combined to create new and interesting harmonies.
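
A minimal sketch of that triggering logic might look like the following; the sensor names, sound files, and play() stub are all hypothetical stand-ins for whatever the real system uses.

```python
# Map each sensed body part to the sound sequence it triggers.
SOUND_MAP = {
    "left_arm": ["strings_rise.wav"],
    "right_leg": ["kick.wav", "sub_pulse.wav"],
    "head": ["bell_sequence.wav"],
}

def play(clip):
    print(f"playing {clip}")  # a real system would hand this to an audio engine

def on_sensor_event(body_part):
    """Each motion event triggers that body part's sequence; overlapping
    triggers from different limbs layer into new combinations."""
    for clip in SOUND_MAP.get(body_part, []):
        play(clip)

for event in ["left_arm", "right_leg", "left_arm"]:  # a simulated dance
    on_sensor_event(event)
```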


Diana Connolly – Looking Outwards 4

Please visit this link to interact with the piece: http://www.patatap.com/

Here’s what one user created with Patatap:

Patatap, by Jono Brandel and Lullatone.

Patatap is a “Portable Animation and Sound Kit.” It is accessible to anyone in a web browser and invites the user to interact with sound by creating their own sound combinations. The algorithm works by randomly assigning sounds to keys on the keyboard: the user hits a key, and a corresponding sound and visual are generated. Once the user hits the space bar, new sounds and visuals are assigned to each of the keys, and a new background color sets the mood. While the algorithm uses randomization to assign the sounds and visuals, it selects them from a set list; for example, oscillating sine-wave symbols repeat in the visuals, as do varying geometric shapes and colorful polka dots. This algorithm allows the artists’ sensibilities to manifest in the piece’s final form because it makes interaction with sound art accessible to any user. It is beautiful, with its simple but varying shapes and pastel colors, and its simple but compelling range of sounds. I really like this piece because it is fun to play with, it changes every time you hit the space bar (it stays interesting), and it gives me a new way to interact with sound.
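
Based on that description, the space-bar reshuffle could be as simple as re-dealing sounds and visuals to keys from fixed lists. A small Python sketch of my guess (the sound and visual names are invented for illustration):

```python
import random

SOUNDS = ["pluck", "whoosh", "drip", "clap", "chime"]
VISUALS = ["sine wave", "polka dots", "triangle burst", "expanding ring"]
KEYS = "asdfg"

def shuffle_bank():
    """What the space bar seems to do: re-deal sounds and visuals to keys,
    drawing from fixed lists rather than generating anything truly new."""
    return {k: (random.choice(SOUNDS), random.choice(VISUALS)) for k in KEYS}

bank = shuffle_bank()
for key in "asd":  # the user taps a few keys
    sound, visual = bank[key]
    print(f"{key} -> play '{sound}', draw {visual}")
bank = shuffle_bank()  # space bar: a whole new palette
```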