Toad2-LookingOutwards04


Blackberry Winter (2019) by Christian Mio Loclair is a project that generates new human poses from a dataset of human poses to depict motion. This project really interests me because of the human yet unhuman quality of the images. I really love how the shiny ceramic contortions loosely form a human figure, and how, at first glance, it’s hard to tell what the shapes are; but if you spend a second observing what the figures represent, you can pick out a human shape. Additionally, I like how the shattered, hollow nature of the figure’s limbs further highlights how ephemeral movement is, creating a feeling of one motion blending into the next.

axol-LookingOutwards04

http://www.aiartonline.com/art/holly-grimm/

https://hollygrimm.com/acan_final


The project was done with additional constraints from a neural network trained on art composition attributes. It attempts to take traditional fine art concepts (variety of texture, size, color, shape, contrast, …) and embody them as part of the model’s constraints.

What fascinates me most about this project is the documentation in the second link. It shows every step and the different passes of images through the model, along with the different dimensions of values that were modified to produce the final look, which I thought was really cool. It’s interesting to see how traditional concepts like color theory and texture get replicated and translated into a digital, machine-learning space, and also how the categorization of human artwork fits into the listed categories.
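
Here is a minimal sketch of that idea in PyTorch, assuming the general setup rather than Grimm’s actual code: a small, hypothetical stand-in AttributeNet scores generated images on composition attributes, and the mismatch with target scores is added to the usual GAN loss.

```python
# Minimal sketch (not Holly Grimm's actual code): adding an "art composition
# attribute" penalty to a generator loss. AttributeNet, the target scores,
# and the loss weight are all hypothetical stand-ins.
import torch
import torch.nn as nn

class AttributeNet(nn.Module):
    """Hypothetical scorer: image -> predicted composition attributes."""
    def __init__(self, n_attributes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_attributes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

attr_net = AttributeNet()
fake_images = torch.rand(4, 3, 64, 64)   # stand-in generator output
target_attrs = torch.rand(4, 8)          # desired composition scores

adversarial_loss = torch.tensor(1.0)     # placeholder for the usual GAN term
attribute_loss = nn.functional.mse_loss(attr_net(fake_images), target_attrs)

# The attribute term nudges the generator toward the desired composition.
total_loss = adversarial_loss + 0.5 * attribute_loss
print(total_loss.item())
```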

gregariosa-LookingOutwards04

Ross Goodwin’s project is an AI literary road trip, in which he drives around a text-generating model that writes a novel based on what it sees through a camera. Through his work, he posits that artificial intelligence can assist creatives in producing artwork rather than replace them, as humans find more ways to collaborate with AI. I was fascinated by this work, as we often think of machine learning models as static tools, spitting out results based on existing datasets. In this project, however, the AI is, in some sense, ‘experiencing’ the data together with the human, bringing into question the degree of agency it has. His choice to drive the model around on the trip, rather than just showing it an hours-long video clip and GPS data, seems profound, and the resulting text is also pretty interesting.
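
Here is a minimal sketch of the kind of camera-to-text loop that drives a piece like this; describe is a hypothetical stand-in for the trained model (Goodwin’s system reportedly also consumed GPS and clock inputs), and the camera index is an assumption.

```python
# A minimal sketch of a camera-to-text loop, not Goodwin's actual system.
import cv2

def describe(frame) -> str:
    """Hypothetical stand-in for a trained image-to-text model."""
    brightness = frame.mean()
    return f"the road ahead holds a mean brightness of {brightness:.0f}"

cap = cv2.VideoCapture(0)      # assumed in-car camera at index 0
try:
    for _ in range(3):         # a few 'sentences' of the novel
        ok, frame = cap.read()
        if not ok:
            break
        print(describe(frame))
finally:
    cap.release()
```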

miniverse-lookingoutwards4

I chose this project:

https://artsexperiments.withgoogle.com/runwaypalette

It is a collaboration in which Google’s Arts & Culture Lab worked with The Business of Fashion to extract and cluster thousands of color palettes seen on fashion runways. I could stay on this site for days. I love fashion, and this site organizes inspiration in an interesting way. Normally users browse runway lines by designer, but this color-palette organization is useful to laypeople rather than just designers.

(Small comment: whoever trained the machine learning model to pull the color palettes did not teach it to ignore skin color, and this biases the entire color space toward neutral tones.)
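
Here is a minimal sketch of where that bias can come from, assuming the standard k-means baseline for palette extraction (the real pipeline isn’t documented here, and runway_look.jpg is a hypothetical file): nothing tells the clustering to treat skin pixels differently from garment pixels.

```python
# A minimal palette-extraction sketch with k-means over raw pixels.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

image = Image.open("runway_look.jpg").convert("RGB").resize((128, 128))
pixels = np.asarray(image).reshape(-1, 3).astype(float)

# Five dominant colors; skin pixels are clustered like any other color,
# which is exactly how neutral tones leak into every palette.
palette = KMeans(n_clusters=5, n_init=10).fit(pixels).cluster_centers_
print(palette.round().astype(int))   # 5 RGB swatches
```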

Here is the entire color space: [image not preserved]

Here is an example palette: [image not preserved]

lampsauce-LookingOutwards4

This work by Erik Swahn depicts many floor plans stacked on top of one another. It was made using StyleGAN, a generative adversarial network. I chose this work because it is really satisfying to look at cross sections of things you wouldn’t normally see. I think this work is especially imaginative because it paints a picture of a building that does not exist, so genuinely trying to imagine what this creation would look like, based on how we normally interact with buildings, is a fun challenge. This technique of interpolated layers has a similar aesthetic to Robert Hodgin’s Meander, which we saw earlier this year. I also just really like how this looks because of the way it is rendered; normally floor plans are so boring.
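
Here is a minimal sketch of the latent interpolation that produces those in-between layers, with a tiny hypothetical stand-in generator instead of Swahn’s trained StyleGAN.

```python
# A minimal latent-interpolation sketch; `generator` is a stand-in, not a GAN.
import numpy as np

def generator(z: np.ndarray) -> np.ndarray:
    """Stand-in for a trained generator: latent vector -> 64x64 'plan'."""
    rng = np.random.default_rng(abs(int(z.sum() * 1e6)) % (2**32))
    return rng.random((64, 64))

z_a, z_b = np.random.randn(512), np.random.randn(512)

# Generate plans along the line between two latent codes, then overlay
# them to get the stacked cross-section look.
steps = [generator((1 - t) * z_a + t * z_b) for t in np.linspace(0, 1, 12)]
stacked = np.mean(steps, axis=0)
print(stacked.shape)   # one composite "stack" of 12 interpolated plans
```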

 

shoez-LookingOutwards04

 https://refikanadol.com/works/machine-hallucination/

Refik Anadol is the creator of Machine Hallucination, which was exhibited at Artechouse in New York City. Machine Hallucination was an attempt at “revealing new connections between visual narrative, archival instinct and collective consciousness”. I’m not entirely sure what all of that means, but I interpret it as discovering new ways to visually represent memories. The exhibition itself is a “data universe” that Anadol created by feeding 100 million photographic memories of NYC to machine learning algorithms. The result is projected into a room, and it tells a story through its massive archive of memories. The artwork itself is a 30-minute experimental cinema piece in 16K resolution, and it visualizes the story of New York through its collective memories. What’s interesting is that the story being told is about a future in which a hopeful relationship between man and machine will grow. I was originally drawn to it because it looked amazing, and without a doubt it is amazing to look at. But now I’m more interested in the experience of being in the room.

thumbpin-LookingOutwards04

https://mlart.co/item/use-photogrammetry-to-extract-vector-points-from-images_-and-apply-a-optical-flow-based-styletransfer-with-noj-barke_s-dot-paintings

This project uses style transfer algorithms and optical flow to texturize an environment with the style of Noj Barke’s dot paintings. The environment itself is created with photogrammetry. This project interested me because it showed me how machine learning can be used to create an immersive 3D environment, not just 2D images. I did a little bit of photogrammetry in a 3D printing class last semester and really enjoyed it, but this project used photogrammetry in a way I hadn’t considered before. This project is a video, but I think it would be interesting to explore an environment created this way in VR.
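
Here is a minimal sketch of the optical-flow half of this technique using OpenCV; the actual style-transfer model is swapped for OpenCV’s built-in painterly stylization filter, and the frame filenames are hypothetical. Warping the previous stylized frame along the flow is what keeps the texture “stuck” to the scene between frames.

```python
# A minimal optical-flow-stabilized stylization sketch, not the artist's code.
import cv2
import numpy as np

def stylize(frame):
    """Stand-in for a Noj Barke dot-painting style transfer."""
    return cv2.stylization(frame)   # OpenCV's built-in painterly filter

prev = cv2.imread("frame_000.png")  # hypothetical frame files
curr = cv2.imread("frame_001.png")

prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)

# Backward flow (current -> previous) so each output pixel can sample
# where it came from in the previous frame.
flow = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

h, w = flow.shape[:2]
grid = np.dstack(np.meshgrid(np.arange(w), np.arange(h))).astype(np.float32)
warped = cv2.remap(stylize(prev), grid + flow, None, cv2.INTER_LINEAR)

# Blend the motion-compensated previous stylization with a fresh one
# for temporal stability.
output = cv2.addWeighted(warped, 0.6, stylize(curr), 0.4, 0)
cv2.imwrite("stylized_001.png", output)
```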

sticks-LookingOutwards04

“Proto-Agonist” by Alice M. Chung (2017)

“Proto-Agonist” was a piece that stood out to me: anime and Japanese RPG sprites are generated by DCGANs (deep convolutional generative adversarial networks), using an RPG-style generative process that characterizes the 2D sprites. I was amazed by the way our brains are able to distinguish faces, hair, and bodies from pixels; there doesn’t need to be much information in an image for us to recognize a sprite. I really enjoyed the transitions of pixel color, which let us see so many permutations and iterations of 2D characters from color changes alone. I find it interesting how, individually, the sprites may appear simple in their pixelated form and color, but when looking at the generation of these seemingly simple collections of pixels, there are many more layers of depth that go into creating endless images that read to us as distinct characters and sprites.

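For reference, here is a minimal sketch of a DCGAN-style generator in PyTorch (the general technique named above, not Chung’s actual network or weights): transposed convolutions upsample a noise vector into small RGB sprites.

```python
# A minimal DCGAN-style generator sketch; layer sizes are illustrative.
import torch
import torch.nn as nn

generator = nn.Sequential(
    # 100-d noise -> 4x4 feature map
    nn.ConvTranspose2d(100, 128, 4, stride=1, padding=0),
    nn.BatchNorm2d(128), nn.ReLU(),
    # 4x4 -> 8x8
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
    nn.BatchNorm2d(64), nn.ReLU(),
    # 8x8 -> 16x16
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
    nn.BatchNorm2d(32), nn.ReLU(),
    # 16x16 -> 32x32 RGB sprite
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
    nn.Tanh(),
)

z = torch.randn(16, 100, 1, 1)   # 16 random latent codes
sprites = generator(z)           # -> (16, 3, 32, 32), one sprite each
print(sprites.shape)
```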
 

yanwen-LookingOutwards04

Forma Fluens collects over 100,000 doodle drawings from the game QuickDraw. The drawings represent participants’ direct thinking and offer a glimpse of the collective expression of society. Using this data collection, the artists tried to explore whether, with the help of data analysis, we can learn how people see and remember things in relation to their local culture.

The three modes of Forma Fluens (DoodleMaps, Points in Movement, and Icono Lap) each present different insights into how people from each culture process their observations. DoodleMaps shows the doodles organized on a t-SNE map, Points in Movement displays animations of how millions of drawings overlap in similar ways, and Icono Lap generates new icons from the overlap of these doodles.
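
Here is a minimal sketch of the DoodleMaps idea, with random bitmaps standing in for the real QuickDraw strokes: t-SNE embeds each doodle as a 2D point so that similar drawings land near each other on the map.

```python
# A minimal t-SNE layout sketch; random bitmaps stand in for real doodles.
import numpy as np
from sklearn.manifold import TSNE

doodles = np.random.rand(500, 28 * 28)   # 500 flattened 28x28 doodles
xy = TSNE(n_components=2, perplexity=30).fit_transform(doodles)
print(xy.shape)   # (500, 2): one 2D map coordinate per doodle
```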

The part that draws my attention the most is how distinct convergence and divergence can happen with objects we thought we had a common understanding of. Another highlight of the project is how the doodle results tell stories of different cultures, which may suggest that in a similar cultural atmosphere people observe and express themselves in similar ways.

tale-LookingOutwards04

I really liked Draw to Art by Google Creative Lab, Google Arts & Culture Lab, and IYOIYO. Draw to Art is a program that uses machine learning to match the user’s input drawing to drawings, paintings, sculptures, and many other artworks found in museums around the world.

Not only did I find this educational and informative, I also thought having this interactive program in museums would enhance the experience by making the visit more enjoyable. It’s definitely a more interesting way to learn about another piece of art in a museum around the world. If the data the program was trained on were limited to pieces from one museum, it could also lead to a game of finding the artwork matched by the program, providing a more memorable experience at that museum. There are so many possible positive experiences this program could provide, so I really like the concept.
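
Google hasn’t spelled out the pipeline here, but systems like this commonly embed the user’s sketch and every artwork as feature vectors and then look up nearest neighbors; here is a minimal sketch of that matching step with random stand-in embeddings.

```python
# A minimal nearest-neighbor matching sketch, not Google's actual pipeline.
import numpy as np
from sklearn.neighbors import NearestNeighbors

museum_features = np.random.rand(10_000, 256)  # stand-in artwork embeddings
sketch_feature = np.random.rand(1, 256)        # stand-in embedding of a doodle

index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(museum_features)
_, matches = index.kneighbors(sketch_feature)
print(matches[0])   # indices of the five closest artworks
```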