thumbpin-LookingOutwards04

https://mlart.co/item/use-photogrammetry-to-extract-vector-points-from-images_-and-apply-a-optical-flow-based-styletransfer-with-noj-barke_s-dot-paintings

This project uses style transfer and optical flow to texturize an environment with the style of Noj Barke's dot paintings. The environment itself is created with photogrammetry. This project interested me because it showed me how machine learning can be used to create an immersive 3D environment, not just 2D images. I did a little bit of photogrammetry in a 3D printing class last semester and I really enjoyed it, but this project used photogrammetry in a way I hadn't considered before. The project is presented as a video, but I think it would be interesting to explore an environment created this way in VR.
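Out of curiosity, here is a minimal sketch of how optical flow can keep frame-by-frame style transfer temporally coherent, so the painted dots track surfaces instead of flickering. It assumes OpenCV's Farneback flow and a placeholder stylize() function; this is my approximation, not the project's published pipeline:

```python
# Minimal sketch (assumed approach, not the project's actual code):
# warp the previous stylized frame along dense optical flow and blend
# it with the current stylized frame for temporal coherence.
import cv2
import numpy as np

def stylize(frame):
    return frame  # placeholder for a real style-transfer model

def stylize_video(frames):
    """frames: iterable of BGR images from the photogrammetry fly-through."""
    prev_gray, prev_out, out = None, None, []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        stylized = stylize(frame)
        if prev_gray is not None:
            # backward flow: for each pixel, where it came from in the last frame
            flow = cv2.calcOpticalFlowFarneback(gray, prev_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            ys, xs = np.indices(gray.shape).astype(np.float32)
            warped = cv2.remap(prev_out, xs + flow[..., 0], ys + flow[..., 1],
                               cv2.INTER_LINEAR)
            # blend so the stylization "sticks" to surfaces across frames
            stylized = cv2.addWeighted(warped, 0.5, stylized, 0.5, 0)
        prev_gray, prev_out = gray, stylized
        out.append(stylized)
    return out
```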

sticks – LookingOutwards-04

“Proto-Agonist” Alice M. Chung (2017)

"Proto-Agonist" was a piece which stood out to me, where these anime and Japanese RPG sprites were generated and created from DCGANs (Deep Convolutional Generative Networks) and utilized an RPG generative process that characterizes the 2D sprites. I was amazed by the way our brains are able to distinguish the faces, hair, and body from pixels, where there doesn't need to be a lot of information in images for us to recognize sprites. I really enjoyed the transition of pixel color, that allows us to see so many permutations and iterations of 2D characters from color changes in the pixels. I find it interesting how individually, the sprites may appear simple in their pixelated form and color, but when looking at the generation of these seemingly-simple collection of pixels, there's many more layers of depth that go into creating endless images of pixels that read to us as distinct characters and sprites.


yanwen-LookingOutwards04

Forma Fluens collects over 100,000 doodle drawings from the game QuickDraw. The drawings represent participants' direct, unfiltered thinking and offer a glimpse of society's collective expression. Using this collection, the artists explore whether data analysis can teach us how people see and remember things in relation to their local culture.

The three modes of Forma Fluens, DoodleMaps, Points in Movement, and Icono Lap, each present a different insight into how people from each culture process their observations. DoodleMaps shows the doodles organized on a t-SNE map, Points in Movement displays animations of how millions of drawings overlap in similar ways, and Icono Lap generates new icons from the overlap of these doodles.
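The DoodleMaps idea is easy to sketch: embed each doodle and let t-SNE arrange them by visual similarity. The snippet below is my approximation (not Forma Fluens's code), and it assumes the public 28x28 bitmap release of the QuickDraw data, e.g. a downloaded "cat.npy":

```python
# Arrange 2,000 QuickDraw "cat" doodles on a 2-D similarity map with t-SNE.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

doodles = np.load("cat.npy")[:2000]  # (2000, 784): flattened 28x28 bitmaps
coords = TSNE(n_components=2, perplexity=30, init="pca",
              random_state=0).fit_transform(doodles / 255.0)

plt.scatter(coords[:, 0], coords[:, 1], s=2)
plt.title("2,000 'cat' doodles arranged by visual similarity")
plt.show()
```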

The part that draws my attention the most is how distinctly drawings can converge or diverge for objects we assumed we all understood the same way. Another highlight of the project is how the doodles tell stories of different cultures, which may suggest that people in a similar cultural atmosphere observe and express things in similar ways.

tale-LookingOutwards04

I really liked Draw to Art by Google Creative Lab, Google Arts & Culture Lab, and IYOIYO. Draw to Art is a program that uses ML to match the user's input drawing to drawings, paintings, sculptures, and many other artworks found in museums around the world.
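Google has not published the implementation alongside the demo, but a plausible core of such a matcher (my assumption) is embedding both the sketch and the collection with a pretrained CNN and returning the nearest neighbor:

```python
# Assumed sketch-to-artwork matcher: compare normalized CNN embeddings.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()  # keep the 512-d feature vector, not class logits
model.eval()

preprocess = T.Compose([
    T.Resize(224), T.CenterCrop(224), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(pil_image):
    return F.normalize(model(preprocess(pil_image).unsqueeze(0)), dim=1)

# With artwork_images as a list of PIL images from a museum collection:
#   gallery = torch.cat([embed(im) for im in artwork_images])
#   scores = gallery @ embed(user_sketch).T   # cosine similarity
#   best_match = scores.argmax()              # index of the closest artwork
```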

Not only did I find this educational and informative, but I also thought having this interactive program in museums would enhance the museum experience by making a visit more enjoyable. It's definitely a more interesting way to learn about another piece of art in a museum around the world. If the data the program was trained on were limited to pieces from one museum, it could also lead to a game of finding the artwork the program matched, providing a more memorable experience at that museum. There are so many possible positive experiences this program could provide, so I really like the concept.

mokka – Looking Outwards 04

Hello Hi There by Annie Dorsen is a performance in which a famous 1970s televised debate between the philosopher Michel Foucault and the linguist/activist Noam Chomsky, along with additional text from YouTube, the Bible, Shakespeare, the big hits of Western philosophy, and many other sources, is used as material for creating a dialogue between two custom-designed chatbots. Dorsen designed these bots to imitate human conversation and language production, while questioning the optimism that natural language processing would help us understand how human language works.

I really enjoyed how this study of human language production quickly turned into a theatrical performance between two instruments of technology (in this case, two laptops). It was striking to see how, with all the information these two laptops were given (seemingly like two brains communicating), even a conversation between bots could digress into topics completely unrelated to where it started, just as humans do, but more humorously and incoherently.

sweetcorn-LookingOutwards04

Grannma MagNet

This project by Mehmet Selim Akten creates morphs between two given audio samples. Below is a compilation of examples.

The transitions were really interesting to me. I could hear the "notes," or whatever the sound elements were, slowly change qualities like length, timbre, and pitch, and finally resolve into the second sound. I can see a lot of potential here for creating music, as the music I've made doesn't really transition from one thing to another all that much. I wonder what morphs between two samples of music produced by the same person, with all their musical quirks and tendencies, would sound like. Still, as the artist said, it isn't about creating "realistic" transitions; it's about the novelty and potential for modification.
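The morphing idea itself can be sketched in a few lines: encode both sounds into latent vectors, interpolate between them, and decode each step. The encoder and decoder below are stand-ins for a trained model, and Grannma MagNet's actual architecture differs:

```python
# Latent-space audio morphing sketch (assumed setup, not Akten's code).
import numpy as np

def slerp(a, b, t):
    # Spherical interpolation keeps intermediate latents at a sensible
    # magnitude, which tends to sound better than linear blending.
    a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < 1e-6:
        return a
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def morph(sound_a, sound_b, encoder, decoder, steps=16):
    """Return `steps` audio snippets transitioning from sound_a to sound_b."""
    za, zb = encoder(sound_a), encoder(sound_b)
    return [decoder(slerp(za, zb, t)) for t in np.linspace(0.0, 1.0, steps)]
```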


marimonda – LookingOutwards04


Dimensions of Dialogue (tablets) (2019) by Joel Simon

Link to project

This project is incredible to me because it explores a really interesting approach to the intersection between technology and language. The idea of having two algorithms compete, edit, and change a set of glyphs to generate a completely new character system is almost a corny replica of how language changes over time. The most interesting part to me is that I might not even be able to tell the difference between the text shown above and a legitimate ancient script. The idea of constructing a digital space where two systems compete is incredibly interesting to me; it almost becomes a space of linguistic evolution and collaboration that, in the end, relies on a human to ascribe meaning to it through the original data given to them. This was not the only project that caught my eye having to do with typography or language; these two projects [1, 2] approached type/scripts/visual formats of language very differently. Some of these projects used ML to map typefaces across a space, similar to the sound map project we looked at yesterday in lecture. Others deliberately explore human-created glyph systems (like Xu Bing's A Book from the Sky, in this project).

Unrelated, but this is a gorgeous project.


junebug-LookingOutwards04

Xander Steenbrugge’s When the painting comes to life…

Gif of a few seconds from the video

This project was an experiment in visualizing sound using AI. In the Vimeo channel’s about section, he describes his workflow and how he makes his work:

1. I first collect a dataset of images that define the visual style/theme that the AI algorithm has to learn. 
2. I then train the AI model to mimic and replicate this visual style (this is done using large amounts of computational power in the cloud and can take several days.) After training, the model is able to produce visual outputs that are similar to the training data, but entirely new and unique.
3. Next, I choose the audio and process it through a custom feature extraction pipeline written in Python. 
4. Finally, I let the AI create visual output with the audio features as input. I then start the final feedback loop where I manually edit, curate and rearrange these visual elements into the final work.
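Step 3 is the part that is easiest to make concrete. A minimal version of such a feature-extraction pipeline (my assumption, not Steenbrugge's actual code) pulls per-frame loudness, onset strength, and harmony with librosa; vectors like these can then drive the latent input of the model in step 4:

```python
# Assumed audio feature extraction for driving a generative model.
import numpy as np
import librosa

audio, sr = librosa.load("track.wav", sr=22050)  # hypothetical input file
hop = 512  # one feature frame per 512 samples (~23 ms at 22.05 kHz)

rms = librosa.feature.rms(y=audio, hop_length=hop)[0]                 # loudness
onset = librosa.onset.onset_strength(y=audio, sr=sr, hop_length=hop)  # percussive hits
chroma = librosa.feature.chroma_stft(y=audio, sr=sr, hop_length=hop)  # (12, t) harmony

# Align lengths and stack into one 14-d control vector per frame.
n = min(len(rms), len(onset), chroma.shape[1])
features = np.vstack([rms[None, :n], onset[None, :n], chroma[:, :n]]).T  # (n, 14)
```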

What I appreciate about this work is the way the paintings are able to smoothly flow from one piece to another, gradually fading an element at a time rather than the whole painting at once. The video was mesmerizing to watch; the audio sounded like lo-fi music, and I appreciated how in tune the visuals were with it.

Toad2 – LookingOutwards-03


Madeline Gannon – Tactum

Tactum is an augmented modeling system developed by Madeline Gannon that allows the user to create ready-to-print 3D wearables by interacting with an image projected onto their arm. As a result, each design created with Tactum is made to exactly fit the user's body. Furthermore, the project focuses on creating a naturalistic design experience by using intuitive gestures such as pinch, poke, rub, and touch. This project was interesting to me because of its use of augmented reality, as well as how user-focused it is. I found the idea of designing in augmented reality very appealing.
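As a toy illustration of one such gesture (my sketch, not Gannon's system), a "poke" can be modeled as a smooth bump pushed into the arm's surface mesh along its normals:

```python
# Hypothetical poke gesture: displace mesh vertices near the touch point.
import numpy as np

def poke(vertices, normals, touch, depth=5.0, radius=15.0):
    """vertices: (N, 3) points on the arm; normals: (N, 3) unit normals;
    touch: (3,) fingertip location, in the same units (e.g. mm)."""
    dist = np.linalg.norm(vertices - touch, axis=1)
    weight = np.exp(-(dist / radius) ** 2)  # smooth falloff around the touch
    return vertices + depth * weight[:, None] * normals
```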

miniverse-lookingoutwards03

I liked this project:

https://tangible.media.mit.edu/project/recompose/

The project utilizes gestural movements similar to the Soli, yet ten years prior and with real objects responding with physical changes rather than a computer program on a screen. The precision and control the user can impose on these shapes reflect a more precise version of Soli's tap function, allowing them to shape the height of the cuboid landscape. It also allows a more natural "pinching" gesture to indicate pulling up the surface below. I think this gives me some ideas of how to approach Soli.
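A rough sketch of that mapping (my assumption, not the Tangible Media Group's code) treats the table as a grid of motorized pins and raises the pins around a detected pinch:

```python
# Hypothetical pinch gesture on a shape display: pull up nearby pins.
import numpy as np

GRID = 24                         # 24x24 grid of motorized pins
heights = np.zeros((GRID, GRID))  # current pin heights, in mm

def pinch(x, y, pull=40.0, spread=3.0):
    """Raise pins around grid cell (x, y), like pinching and lifting fabric."""
    ys, xs = np.indices(heights.shape)
    dist2 = (xs - x) ** 2 + (ys - y) ** 2
    bump = pull * np.exp(-dist2 / (2 * spread ** 2))
    np.maximum(heights, bump, out=heights)  # pins only move up, never clip down

pinch(12, 12)  # user pinches above the center of the table
```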