junebug-LookingOutwards04

Xander Steenbrugge’s “When the painting comes to life…”

[Gif of a few seconds from the video]

This project was an experiment in visualizing sound using AI. In his Vimeo channel’s About section, Steenbrugge describes his workflow:

1. I first collect a dataset of images that define the visual style/theme that the AI algorithm has to learn. 
2. I then train the AI model to mimic and replicate this visual style (this is done using large amounts of computational power in the cloud and can take several days). After training, the model is able to produce visual outputs that are similar to the training data, but entirely new and unique.
3. Next, I choose the audio and process it through a custom feature extraction pipeline written in Python. 
4. Finally, I let the AI create visual output with the audio features as input. I then start the final feedback loop where I manually edit, curate and rearrange these visual elements into the final work.
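Out of curiosity, here’s a rough sketch of what the feature extraction pipeline in step 3 (and the feature-to-visual mapping in step 4) might look like. Steenbrugge hasn’t published his code, so this is my own guess using librosa; `extract_features()`, `features_to_latents()`, and the file name are hypothetical illustrations, not his actual pipeline.

```python
# Hypothetical sketch of an audio-reactive pipeline, assuming librosa.
# This is my own reconstruction, not Steenbrugge's actual code.
import numpy as np
import librosa

def extract_features(audio_path, hop_length=512):
    """Compute per-frame audio features that could drive a visual model."""
    y, sr = librosa.load(audio_path)  # mono waveform + sample rate
    # Loudness per frame: a natural driver for overall visual intensity.
    rms = librosa.feature.rms(y=y, hop_length=hop_length)[0]
    # Onset strength: spikes on beats, useful for triggering transitions.
    onset = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop_length)
    # Chroma: energy in the 12 pitch classes, could steer color or style.
    chroma = librosa.feature.chroma_stft(y=y, sr=sr, hop_length=hop_length)
    # Trim to a common frame count and scale everything to [0, 1].
    n = min(len(rms), len(onset), chroma.shape[1])
    feats = np.vstack([rms[:n], onset[:n], chroma[:, :n]])  # (14, n)
    lo = feats.min(axis=1, keepdims=True)
    hi = feats.max(axis=1, keepdims=True)
    return (feats - lo) / (hi - lo + 1e-8)

def features_to_latents(feats, latent_dim=512, seed=0):
    """Map each audio frame to a latent vector for a generative model:
    a fixed base point plus feature-weighted random directions."""
    rng = np.random.default_rng(seed)
    base = rng.standard_normal(latent_dim)
    directions = rng.standard_normal((feats.shape[0], latent_dim))
    return base + feats.T @ directions  # (n_frames, latent_dim)

latents = features_to_latents(extract_features("song.mp3"))
# Each row of `latents` would then be fed to the trained model to render
# one video frame; smooth audio features yield smooth visual morphs.
```

Because neighboring audio frames have similar feature values, neighboring latent vectors stay close together, which would produce exactly the kind of gradual, element-by-element morphing the video shows.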

What I appreciate most about this work is the way the paintings smoothly flow from one piece to the next: they morph gradually, one element at a time, rather than the whole painting fading at once. The video was mesmerizing to watch, and the audio, which sounded like lo-fi music, felt perfectly in tune with the visuals.