hunan-Pix2Pix

Cats are always fun. Not many surprises here, as it’s just a classic Pix2Pix model. But I was surprised to find that many art projects, including this one (edges to cats) and the “Learning to See: Gloomy Sunday” piece we saw in class, were actually included in the original Pix2Pix paper, which I thought was kinda cool. This site seems to run inference on the frontend and is quite unstable (I got “page unresponsive” warnings multiple times per generation).

hunan-ArtBreeder

It’s alright. There are still some deconvolution artifacts that give it away immediately (at least with the more photorealistic genres I’ve played with). I love the freedom and flexibility it provides through various keywords and parameters, but the actual effect of these parameters on the images is rather disappointing. The resolution is not bad for a generative model.

hunan-creature

http://142.93.58.191/automata/

INTERACTION  You can click or click-drag across the canvas to destroy cells; they will regenerate (to a certain extent). You can also double-click to spawn a new cell, which will grow into a new colony.

INTRODUCTION  This is a living self-portrait in a cellular automaton (a Game-of-Life-style system) where each pixel in the 48*48 grid is an individual “creature” with a 256-bit memory, the first 32 bits of which are its displayed color (RGBA). A fixed neural network (which can be thought of as a set of very complex rules, learned instead of hand-picked as in Conway’s Game of Life) then updates the state of each creature based on its 3*3 perception field and its memory.
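To make the state layout concrete, here is a minimal sketch of one cell’s memory as I describe it above. The names and helpers are my own for illustration, not from the actual project code:

```javascript
// Sketch of one cell's 256-bit state: 32 bytes, the first 4 of which
// are the displayed RGBA color. Names here are illustrative only.
const STATE_BYTES = 32;      // 256 bits total
const ALPHA_THRESHOLD = 0.1; // cells below this alpha count as dead

function makeCell() {
  return new Uint8Array(STATE_BYTES); // all zeros: a dead, empty cell
}

function getColor(cell) {
  // First 32 bits = RGBA, one byte per channel
  const [r, g, b, a] = cell;
  return { r, g, b, a };
}

function isAlive(cell) {
  // Alpha is stored as a byte; normalize before comparing to 0.1
  return cell[3] / 255 > ALPHA_THRESHOLD;
}

const cell = makeCell();
cell.set([200, 120, 80, 255]); // give it an opaque color
console.log(isAlive(cell)); // true
```

The remaining 28 bytes are hidden memory the network is free to use however it likes during training.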

IDEATION  This piece, especially the idea of a self-portrait, is inspired by Leibniz’s Monadology. In his metaphysics, Leibniz proposes that everything in the world is made of “monads” (conscious, thinking atoms) and that all objects, including us humans, are just many monads working in perfect, predetermined harmony (established by God) without communicating with each other. The concept of a cellular automaton is exactly that: independent cells, each with limited perception and no communication, reacting to changes in their environment according to pre-established, unchanging laws (here, the neural network).

TECHNOLOGY  The model takes the perception field of each cell and performs a depthwise convolution with Sobel and identity kernels before feeding the result into a standard DenseNet-121, followed by another dense block that outputs the 32-channel new state (the first 4 channels become the new appearance). There is also an alive mask: all cells with alpha below 0.1 are reset. A better explanation of the technology behind this project can be found here: https://distill.pub/2020/growing-ca/. That paper inspired part of this work and provided almost all of the implementation for the model, as well as the tensorflow.js code that powers real-time inference on the frontend. The model currently in use was trained on the drawing (of myself) below for 30k epochs (~1.5h).
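The perception step can be sketched in plain JavaScript. This is a simplified, single-cell, single-channel version of the scheme from the distill article: per channel, the cell sees its own value (identity kernel) plus Sobel-filtered x/y gradients of its 3*3 neighborhood. The trained network is replaced here by a toy stand-in rule, and all names are my own:

```javascript
// Depthwise-convolution kernels used for perception
const SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]];
const SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]];
const IDENTITY = [[0, 0, 0], [0, 1, 0], [0, 0, 0]];

// Convolve a 3x3 neighborhood (one channel) with a 3x3 kernel
function conv3x3(neighborhood, kernel) {
  let sum = 0;
  for (let i = 0; i < 3; i++)
    for (let j = 0; j < 3; j++)
      sum += neighborhood[i][j] * kernel[i][j];
  return sum;
}

// Perception for one cell, one channel: [self, x-gradient, y-gradient]
function perceive(neighborhood) {
  return [
    conv3x3(neighborhood, IDENTITY),
    conv3x3(neighborhood, SOBEL_X),
    conv3x3(neighborhood, SOBEL_Y),
  ];
}

// Stand-in for the trained network: any function mapping the
// perception vector to a state delta would slot in here.
function rule(perception) {
  return 0.1 * (perception[1] + perception[2]); // toy linear rule
}

// One update step for one channel of one cell, with the alive mask
function updateCell(state, neighborhood, alpha) {
  if (alpha <= 0.1) return 0; // alive mask: dead cells are reset
  return state + rule(perceive(neighborhood));
}
```

In the real model this runs over all 32 channels of every cell at once as a depthwise convolution, and `rule` is the learned network.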

SKETCH  This is a sketch that represents what I would want to achieve visually if I had the time and skill. I decided to use different angles to highlight the fact that it is not about the image but rather the true three-dimensional self. The Warhol-like gradients add some visual interest and variety while subtly hinting at the fading dichotomy between digital mass production and individualism.

SCREENSHOT

DEMO

 

Hunan-Landscape

https://skyscape.glitch.me/

Initially, I intended to make a gigantic surreal landscape out of many balls. Inspired by childhood memories of playing in big ball pits, I wanted to make a 3D, physics-enabled, interactive landscape where the user gets to drive around what is essentially a gigantic ball pit. But due to some mysterious and possibly quite stupid issues with the physics engine, and on Golan’s advice, I pivoted to this idea: a simple poetic experience of chasing the sun. The user can control their movement to navigate through the rising ice particles in pursuit of an unattainable ultimate destination.

Things I wish I had the time to add: background music, sound effects, better movement control (with acceleration), pointer-lock controls (the ability to look around), generative terrain (below), and VR.

About the size requirement: the application adapts to your browser window size and aspect ratio (though not on resize; I didn’t have time to add that, so you’d have to refresh), so you can get any size and aspect ratio you want by tweaking your browser window.

Hunan-Timepiece

World Clock/Production Clock/I don’t know how to name my art

http://oppr.org/s/0xxf6cZM

This clock traces the sum of 10 vectors with constant but unique lengths and rates of rotation. Each of the ten vectors represents a watch hand driven by a unique real-world rate: [cars produced, bikes produced, computers produced, babies born, newspapers circulated, TVs sold, phones sold, emails sent, Tweets sent, tons of CO2 emitted] per second. Inspired by 2D visualizations of the Fourier transform, this clock can achieve unique looks by tweaking the radius of each vector. With this particular setting, the clock draws a ring every 12 hours and clears every 24 hours, so you can still tell the rough time. The coefficients can also be tuned in ways that make it impossible to tell time.
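The drawing rule boils down to one function: the pen sits at the tip of ten chained rotating vectors, each with a fixed radius and a fixed angular rate derived from one of the statistics above. A minimal sketch (the radii and rates below are made-up examples, not the clock’s real coefficients):

```javascript
// Position of the pen at time t (seconds): the vector sum of several
// hands, each rotating at its own constant rate around the previous tip.
function penPosition(t, radii, rates) {
  let x = 0, y = 0;
  for (let i = 0; i < radii.length; i++) {
    const angle = rates[i] * t; // radians accumulated after t seconds
    x += radii[i] * Math.cos(angle);
    y += radii[i] * Math.sin(angle);
  }
  return [x, y];
}

// Example: two hands, the second spinning 10x faster than the first.
// At t = 0 every hand points along +x, so the pen starts fully extended.
console.log(penPosition(0, [100, 20], [0.5, 5])); // [ 120, 0 ]
```

Sampling this function over time and connecting the points gives the traced curve; shrinking or swapping the radii is what produces the different looks mentioned above.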

Another set of coefficients:

Sketch:

Hunan-Loop

Yes, I forgot about the rect requirement. I don’t think it’s possible to refactor this with rect, so you’ll just have to live with it.

I made this bobbing thing in the hope that it would make a nice loading icon and, with slight modification, a switch. The idea came from the motion of two water droplets combining in a weightless environment. It presented some very interesting control challenges and still has some issues (especially fine-tuning the parameters and adding more custom shaping functions) that I would like to fix when I have the time.

oppr.org/s/UUYDaNND

 

Hunan-LookingOutwards01

Holly Herndon is one artist I like who uses AI and technology in a particularly well-balanced way. She is a composer, musician, and sound artist who uses an AI model, “Spawn,” to generate sound for her music. When you listen to her music, it’s not immediately obvious that this is the case; the technology is not right in front of your face. In fact, her songs sound less “electronic” than some of their experimental counterparts that are less technology-intensive (at least AI-free), such as Ryuichi Sakamoto’s. Compared to other generative music technology, such as OpenAI’s Jukebox (which I played with for a while with an unsatisfying level of success), the artist retains great control over her AI model in the creation of her music, which makes it sound human while being surreal and otherworldly.

One of her most recent projects is Holly+ (https://holly.plus/), a digital twin of hers. The audience is welcome to upload their own audio files and have them processed into her style. It is also worth mentioning that the AI model used to create much of her music, Spawn, is trained on audio segments of her own voice and her friends’. The model is also surprisingly lightweight (compared to Jukebox (https://openai.com/blog/jukebox/), a transformer-based monstrosity that was trained on 1.2 million songs and takes 10 hours to generate a short segment on a Tesla V100). So I also appreciate her work from a technical perspective.