mokka – Looking Outwards 04

Hello Hi There by Annie Dorsen is a performance in which a famous 1970s televised debate between the philosopher Michel Foucault and the linguist/activist Noam Chomsky, along with additional text from YouTube, the Bible, Shakespeare, the big hits of Western philosophy, and many other sources, is used as material for a dialogue between two custom-designed chatbots. Dorsen designed these bots to imitate human conversation and language production, while nodding to the early optimism that natural language processing would help us understand how human language works.

I really enjoyed how this study of human language production quickly turned into a theatrical performance between two instruments of technology (in this case, two laptops). It was striking to see that, with all the information these two laptops were given (seemingly like two brains communicating), even a conversation between bots could digress from its starting point into completely unrelated topics, just as human conversation does, only more humorously and incoherently.

sweetcorn-LookingOutwards04

Grannma MagNet

This project by Mehmet Selim Akten (Memo Akten) creates morphs between two given audio samples. Below is a compilation of examples of this.

The transitions were really interesting to me. I could hear the “notes” or whatever sound elements slowly change qualities like length, timbre, pitch, and finally resolve to the second sound. There’s a lot of potential I can see here for creating music, as music I’ve made doesn’t really transition from one thing to another all that much. I wonder what morphs from two samples of music produced by the same person, with all their musical quirks and tendencies, would sound like. Still, as the artist said, it isn’t about creating “realistic” transitions, it’s about the novelty and potential for modification.
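The project's own granular neural network isn't published here, but the general idea behind such a morph can be sketched as interpolation between two feature (or latent) vectors. The vectors and dimensions below are made up purely for illustration; a real system would get them from a trained encoder:

```python
import numpy as np

# Hypothetical 8-dimensional "embeddings" of two sounds; in practice these
# would come from a trained neural encoder, not be hand-written like this.
sound_a = np.array([0.2, 0.9, 0.1, 0.4, 0.7, 0.3, 0.5, 0.8])
sound_b = np.array([0.9, 0.1, 0.6, 0.2, 0.3, 0.8, 0.4, 0.1])

def morph(a, b, steps):
    """Return `steps` vectors blending linearly from a to b."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1 - t) * a + t * b for t in ts]

frames = morph(sound_a, sound_b, 5)
# frames[0] is sound_a, frames[-1] is sound_b, and the middle frames
# gradually trade the qualities of one sound for the other -- the same
# "slowly change qualities, then resolve" effect described above.
```

A decoder would then turn each intermediate vector back into audio, which is where qualities like timbre and pitch would audibly shift.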


marimonda – LookingOutwards04


Dimensions of Dialogue (tablets) (2019) by Joel Simon

Link to project

This project is incredible to me because it explores a really interesting approach to the intersection between technology and language. The idea of having two algorithms compete, edit, and change a set of glyphs to generate a completely new character system is almost a corny replica of how language changes over time. The most interesting part to me is that I might not even be able to recognize the difference between the text shown above and a legitimate ancient script. The idea of constructing a digital space where two systems compete is incredibly interesting to me; it almost becomes a space of linguistic evolution and collaboration, one that in the end relies on a human to ascribe meaning to it using the original data they are given. This was not the only project that caught my eye having to do with typography or language: these two projects [1, 2] approached type, scripts, and visual formats of language in very different ways. Some of these projects used machine learning to map typefaces across a space, similar to the sound map project we looked at yesterday in lecture. Others deliberately explore human-created glyph systems (like A Book from the Sky by Xu Bing, in this project).

Unrelated, but this is a gorgeous project.


junebug-LookingOutwards04

Xander Steenbrugge’s When the painting comes to life…

Gif of a few seconds from the video

This project was an experiment in visualizing sound using AI. In his Vimeo channel's about section, he describes his workflow:

1. I first collect a dataset of images that define the visual style/theme that the AI algorithm has to learn. 
2. I then train the AI model to mimic and replicate this visual style (this is done using large amounts of computational power in the cloud and can take several days.) After training, the model is able to produce visual outputs that are similar to the training data, but entirely new and unique.
3. Next, I choose the audio and process it through a custom feature extraction pipeline written in Python. 
4. Finally, I let the AI create visual output with the audio features as input. I then start the final feedback loop where I manually edit, curate and rearrange these visual elements into the final work.
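The "custom feature extraction pipeline written in Python" in step 3 isn't published, so the function name and details below are an assumption; one common building block of such pipelines is framewise loudness (RMS energy), which yields one control value per short slice of audio that visuals can react to:

```python
import numpy as np

def rms_features(samples, frame_size=1024, hop=512):
    """Framewise RMS loudness: one value per overlapping frame.
    A guess at the kind of control signal step 3 might extract."""
    feats = []
    for start in range(0, len(samples) - frame_size + 1, hop):
        frame = samples[start:start + frame_size]
        feats.append(float(np.sqrt(np.mean(frame ** 2))))
    return np.array(feats)

# One second of a 440 Hz sine wave at a 22050 Hz sample rate stands in
# for the chosen audio track.
sr = 22050
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)
features = rms_features(audio)
# A steady sine of amplitude 1 has near-constant RMS of about 0.707,
# so every frame's feature value sits close to that number.
```

In a full pipeline these per-frame values (and others, like spectral content) would be fed to the trained model in step 4 so the generated visuals move in time with the music.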

What I appreciate about this work is the way the paintings smoothly flow from one piece to another: they gradually fade into one another one element at a time, rather than all at once. The video was mesmerizing to watch, the audio sounded like lo-fi music, and I appreciated how in tune the visuals were with the audio.