Learning to see: Gloomy Sunday from Memo Akten on Vimeo.
I didn’t want to post GAN art because it’s very predictable of me, but I thought it was interesting how the ‘Camera Transformed’ article conceived of GANs as capturing unseen possible aspects of reality through the latent space of an image dataset.
In Learning to See, artist Memo Akten trained a generative adversarial network (GAN), a type of machine learning model, using the Pix2Pix (pixel-to-pixel translation) technique. With Pix2Pix, the ML model learns a statistical mapping between two representations of an image, A and B. In this case, Akten used photographs of plants and landscapes as one representation, and line drawings of the same scenes (generated by a simple edge detection algorithm) as the other. Thus, the trained model could pick up the general edge arrangement of a mundane scene, like car keys and wires on a table, and translate it into the visual vocabulary it was familiar with (natural scenes).
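To make the pairing concrete: each Pix2Pix training pair is (edge map, photograph), so the model learns to hallucinate the photo-like image back from the edges. The sketch below is a hypothetical, minimal stand-in for the edge-detection step, using plain finite-difference gradients rather than whatever edge filter Akten actually used; `edge_map` and the tiny synthetic "photo" are my own illustrative inventions, not part of his pipeline.

```python
import numpy as np

def edge_map(img, threshold=0.2):
    """Crude edge detector: mark pixels where the brightness
    gradient (finite differences) exceeds a threshold.
    A stand-in for the edge filter used to build Pix2Pix pairs."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:] = np.diff(img, axis=1)  # horizontal gradient
    gy[1:, :] = np.diff(img, axis=0)  # vertical gradient
    mag = np.hypot(gx, gy)            # gradient magnitude
    return (mag > threshold).astype(np.float32)

# A synthetic "photograph": a bright square on a dark background.
photo = np.zeros((8, 8), dtype=np.float32)
photo[2:6, 2:6] = 1.0

edges = edge_map(photo)
# The training pair would then be (edges -> photo): the model is
# asked to recover the full image from its outline alone.
```

Run on the synthetic square, `edges` lights up only along the square's border, which is exactly the kind of sparse line drawing the trained GAN then "fills in" with the textures it already knows.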
Philosophically, the piece is interesting because it shows how preconceived notions of reality affect capture: art is never neutral. As Akten notes, the model “can only see what it already knows, just like us.” For this model, even a decidedly un-sublime environment doesn’t incite boredom and despair. Learning to see suggests seeking beauty and inspiration even in the winding shape of an electrical cord.