Open Processing Sketch

A couple of things to note at the start… I totally bombed the easiest part of this project… making the canvas a 2:1 ratio. I drew the water and submarine as a square, then remembered it was supposed to be a 2:1 ratio, but the thought of redrawing all of it made me want to die, so I cropped it to a 4:3 ratio, which is random but the best I could manage. Sorry 🙁 Second, I managed to do this in like one afternoon, which is pretty evident I'm sure. But I'm trying to do my best with the time that I've got, and I do want to come out of this class knowing I at least gave every project a go.

The original inspiration for this project came from an atmospheric train-ride piece by a past student that we looked at in class. A refresher:

I really liked the idea of using a .png image and looking through a hole of some kind into the outside world. My immediate thought was to make the inside of a submarine, but I wasn't really interested in making underwater creatures. It eventually morphed into a submarine expedition around the world, based on my boyfriend's father, who did just that in his time with the military. When looking at the final product, I'm dissatisfied because I think I accidentally left behind the one idea I found charming to begin with: a hole, a window, something surrounded.

In this sped-up gif, you can see that the submarine and waves kind of bob up and down. I originally had them static, and it felt like something was super off and weird. Giving them a little life (independent of each other, too) definitely helped. You can also see that the sky cycles through day and night. I really like that I did this… it sort of gives this idea of the passage of time, how long this submarine expedition and adventure is really taking.
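The two motion ideas described above (independent bobbing, and a day/night color cycle) can be sketched like this. This is a minimal illustrative stand-in, not the actual sketch's code: the function names, colors, and constants are all my own assumptions, written as plain JavaScript rather than p5.js so it is self-contained.

```javascript
// Vertical bob offset for an object; giving each object its own phase
// makes the submarine and waves move independently of one another.
function bobOffset(t, amplitude, speed, phase) {
  return amplitude * Math.sin(t * speed + phase);
}

// Linear interpolation between two RGB colors (like p5's lerpColor), amt in [0, 1].
function lerpColor(c1, c2, amt) {
  return c1.map((v, i) => v + (c2[i] - v) * amt);
}

// Sky color as a function of time: amt swings 0 → 1 → 0 over one cycle,
// so the sky fades from day to night and back, suggesting passing time.
function skyColor(t, cycleLength) {
  const day = [135, 206, 235];  // light blue (illustrative)
  const night = [10, 10, 60];   // deep navy (illustrative)
  const amt = 0.5 - 0.5 * Math.cos((2 * Math.PI * t) / cycleLength);
  return lerpColor(day, night, amt);
}
```

In a p5 `draw()` loop you would call `bobOffset(frameCount, …)` with a different `phase` per object and paint the background with `skyColor(frameCount, …)`.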

I can only really draw in a cartoon-ish way, so I also added faux black outlines to the mountains to match my drawing, which definitely helped make everything more unified. But after talking with Golan about trying to broaden my horizons and make less "cutesy" stuff, I wish I had given myself more time to try that. Hopefully I can do that with the creature project!


1. This may be a pretty obvious and elementary take on the question of effective complexity, but cliches are cliches for a reason… If, in grade school, you ever did the wintertime craft of folding paper and cutting out pieces to make a snowflake, you might remember that every single one came out looking completely different and unique. Hell, I still do it sometimes for fun when I’m bored, and I’m always entertained by the outcome. I think that’s a good indication that it’s effectively complex.

Since it's human-made and not generated, I do think it sits more toward order than randomness, especially considering that you could probably recreate a pattern by cutting a new one out the same way you did before. But if you're trying to create different ones every time, you will!

And, for the age of technology, there is also a TikTok filter where you can make a snowflake pattern using your nose, and it'll unfold into something unique! Here's an example of the filter in use.

2. A lot of the problems that are brought up regarding generative art really make me think… especially “The Problem of Authenticity.” This is one I’ve seen debated quite a bit: “Given that it is in part created by an unemotional and unthinking system, is generative art really art at all?”  My opinion on this subject also takes into account “The Problem of Locality, Code, and Malleability”: “Is the art in the object, the system, the code, or something else entirely?”

I believe that the person coding it and the machine making it are partners in the artwork. Of course it's art! The machine wouldn't be making that art unless it was told to make that art by an artist who had an idea, a plan. And of course, the artist wouldn't have any art to show if the machine didn't carry out their instructions. Any time I code something generative, I have a vision, and I add in the randomness that makes it generative. The machine carrying it out every time is my medium.



My concept started from an idea to make data-driven tree rings that represent precipitation/drought data in California and are explorable at multiple scales. I had the idea to use branching as the natural way to explore scale, where larger scales (state, basin) could be larger rings that branch into smaller rings (grid cell), so switching scales meant slicing the tree at different points. I had the code to draw the rings in this wobbly way, but plugging in the data as an input and getting a neat ring that starts and ends in the same place proved challenging, and my Deepnote kept crashing 🙁 I pivoted (for now) to make a landscape from the tree rings that can shift from a top-down view (slice) to a perspective view (branch).

I use the random function to place multiple tree rings on the canvas, mapping their y position to the number of rings, so lower "trunks" are thicker. I draw rings with some wobble, using Perlin noise to offset the radius as we go around the circle. (The trick for getting the noise values to start and end in the same place: use sin/cos as the xoff/yoff inputs to noise, so the domain you trace through noise space is itself a circle.) Then I add stippling with a random angle stagger, so the stipples are not all lined up.
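The closed-ring trick described above can be shown in a few lines. This is a hedged, self-contained sketch: since p5's `noise()` isn't available outside the browser, a small hash-based 2D value noise stands in for it, and all names and constants are illustrative rather than the project's actual code.

```javascript
// Hash two integer lattice coordinates to a pseudo-random value in [0, 1].
function hash2(ix, iy) {
  let h = ix * 374761393 + iy * 668265263; // large primes
  h = (h ^ (h >> 13)) * 1274126177;
  return ((h ^ (h >> 16)) >>> 0) / 4294967295;
}

function smoothstep(t) { return t * t * (3 - 2 * t); }

// 2D value noise: bilinear blend of the four hashed lattice corners.
function noise2(x, y) {
  const x0 = Math.floor(x), y0 = Math.floor(y);
  const tx = smoothstep(x - x0), ty = smoothstep(y - y0);
  const a = hash2(x0, y0), b = hash2(x0 + 1, y0);
  const c = hash2(x0, y0 + 1), d = hash2(x0 + 1, y0 + 1);
  return (a + (b - a) * tx) * (1 - ty) + (c + (d - c) * tx) * ty;
}

// Wobbly ring radius: the noise input walks a circle (cos/sin), so the
// path through noise space loops, and radius(0) === radius(TWO_PI).
function ringRadius(theta, baseRadius, wobble, noiseScale) {
  const xoff = noiseScale * (1 + Math.cos(theta)); // +1 keeps inputs >= 0
  const yoff = noiseScale * (1 + Math.sin(theta));
  return baseRadius + wobble * (noise2(xoff, yoff) - 0.5);
}
```

Sampling `ringRadius` at many `theta` values and connecting the points with `vertex()` gives a wobbly ring that closes cleanly on itself.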


For my landscape I created a hotel hallway scene where each room opens up to a completely new landscape. Most assets are rendered in Blender, and each new scene chooses from two landscape layouts (flat vs. cylindrical), two types of trees, two types of creatures, two types of structures, four or so backgrounds, and four big structures off in the distance. Basically everything is PNGs, except for the first hundred or so frames, which are an mp4. You may have to refresh the page once when initially starting it (I really don't know why), but after that it should work.



Link to sketch: https://openprocessing.org/sketch/1486536




Question 1A:

I consider the hair accumulated in my shower an example of effective complexity. It's pretty much total randomness in that you don't really know how much hair is in there, and the overall shape is totally random. But it has certain expected features: it's made of hair, is denser in the middle, has less hair toward the edges, etc.




Question 1B: Problem of Uniqueness

My problem is actually with the question itself: "Does it diminish the value of the art when unique objects can be mass-produced?" I think value really shouldn't be at the center of the discussion with generative art. For me, value comes from exclusivity, ownership, and scarcity, and a big part of generative art is its ability to dismantle such things in art. With generative art, we shouldn't be looking at the value of each individual mass-produced unique object/work; the focus should be the mere realization/appreciation that mass-produced uniqueness is made possible through a generative approach.

starry – landscape

OpenProcessing link 

Since a lot of my work is inspired by nature, I wanted to explore a generative landscape that focused on simulating cloud movement. I also wanted to emulate the look of a film camera using overlaid lines and black bars, and try to incorporate more graphic design influences. It’s made with Perlin noise and Bezier curves.

I'm satisfied with the final result artistically, but I think in terms of computational complexity and cleanness of code it's lacking. I feel that I was kind of just messing around with things without really understanding their concepts (e.g., with the Bezier curves), so it took me much longer than it should have to arrive at the final outcome. I changed the landscape from my first draft, which was originally going to generate "puffy"-looking clouds with a blue sky, as I felt it was too similar to my clock piece.
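For anyone else fuzzy on what a Bezier curve actually computes: a cubic Bezier is just a weighted blend of its four control points, with Bernstein-polynomial weights over t ∈ [0, 1]. A minimal evaluator (an illustrative helper, not code from the sketch):

```javascript
// Evaluate a cubic Bezier at parameter t. The curve starts at p0 (t = 0),
// ends at p3 (t = 1), and p1/p2 pull it toward them without touching it.
// This is the same curve that p5's bezier(...) draws.
function cubicBezier(p0, p1, p2, p3, t) {
  const u = 1 - t;
  const w0 = u * u * u;       // Bernstein weights
  const w1 = 3 * u * u * t;
  const w2 = 3 * u * t * t;
  const w3 = t * t * t;
  return {
    x: w0 * p0.x + w1 * p1.x + w2 * p2.x + w3 * p3.x,
    y: w0 * p0.y + w1 * p1.y + w2 * p2.y + w3 * p3.y,
  };
}
```

Nudging the two inner control points with noise each frame is one straightforward way to get drifting, cloud-like bands out of otherwise static curves.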



Interestingly, I would argue that Effective Complexity is fully determined by human perception.

Mathematical equations, to some people, are mostly chaos: changing the numbers in an equation provokes no psychological response in them, because they are not interested in extracting information from a blackboard of nonsense. To them, the image above reflects low effective complexity. To mathematicians, however, each bit of information is important, and thus the image above reflects high effective complexity.

My Personal Note

"Generative Art Theory" describes generative art as the repeated execution of rule-based art, which encompasses many older generative artworks not executed by computers.

"Generative Art Theory" by Philip Galanter discusses Effective Complexity. The intuition is that although the trajectory of an individual gas molecule is not predictable, the overall behavior of the gas is well known, with only a little random error. But this is just an intuition; is there a way to find a mathematical definition of Effective Complexity? If we could systematically quantify such a metric, the next generation of GANs could be optimized to achieve high Effective Complexity! (Given that the metric is computable and well defined, we could have a genetic algorithm do the generation; it doesn't have to be differentiable.)

  • Information theory counts every detail in the system as bits of information, but human perception clearly does not.
  • Does Effective Complexity exist only given human perception, or is it more fundamental?
  • One way to model a complex system is to use statistical tools like the mean and standard deviation. Two gas systems with different information will still have similar means and standard deviations, which aligns with human perception.
  • The article suggests that extremely low or extremely high information complexity gives low effective complexity.
  • Reaction-Diffusion System: a simulation of chemical reactions that produce their own catalysts, with changing diffusion rates.


My opinion about The Problem of Authorship: by defining generative art as the repeated execution of rule-based art, all information is encodable and can be represented by the rules themselves. If the final product follows exactly what the rules describe, then the final product, as a reflection of the rules, adds no additional meaning to the work. In this case, authorship should belong fully to whoever wrote the rules.

However, in the case of random number generation (especially pseudo-random numbers), decisions (about which random numbers to use) are made by the computer, not the artist. Say you wrote a program that uses total randomness to generate 100px-by-100px images. Most of the time, the resulting image is white noise. However, by some small chance, it is still possible for the computer to generate something meaningful. This problem is magnified in artwork that involves a latent space (typically in GANs), where this probability becomes larger: the computer can find an interesting random input to the latent space and thereby "discover" an interesting artwork. At that point, we should attribute some authorship to the computer for choosing the right input.

The "amount" of authorship we attribute to the executor should be proportional to the search space. The link to computational complexity is intuitive: as the search space shrinks, the rules become more restrictive, and therefore a larger share of authorship should be credited to the rule-writer instead of the executor. In summary, for computer-generated art with uncertainty, I think authorship should be split between the rule-writer and the executor based on how restrictive the rules are.





I wanted to do something really simple for this project. I was getting too caught up with all the different extravagant ways I could approach this assignment, so I decided I would not do anything high tech at all.  I basically messed up a double for loop and noticed that it looked a bit like a hand-drawn hill/horizon line. I think I have been wanting more and more to do something that looks like it comes from my hand, and this felt closer to having a material quality.


I started with these, and I’m not sure which I like better at the moment. Overall, I think I like this work. It’s extremely simple, code-wise and visually, but it felt like a return to letting go with my art and not really thinking too hard about it.





Title: Use Your Heart to Prepare the Paper; Use Your Feet to Draw the Map


Image #1
Image #2

I am very interested in creating photorealistic quality using non-PBR tools. A map is a relatively achievable subject, and when hand strokes are simulated by the machine, the quality increases while the authenticity somehow still remains, as if it were crafted by hand. The entire image is made of many layers of clipping, many canvases, many Perlin noises, and a lot of high-school geometry. However, I strongly encourage the audience not to pay attention to the map drawn on the parchment, because the map is clearly unfinished and badly made.

Process and Reflection


The most artistically challenging part was making the parchment look real, with a nice texture. The most technically difficult part was figuring out how clipping can work on pixel arrays. Of course, the project is over-scoped, as mine usually are. But since I think I learn the most from an over-scoped project, I will keep doing it, and the time required will gradually shrink as I get increasingly familiar with 2D canvas coding.
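One general way clipping can be done at the pixel-array level is alpha masking: multiply each pixel's alpha by a per-pixel mask value, so pixels outside the clip shape become transparent. This is a hedged sketch of that general technique, not the project's actual code; the pixel layout matches canvas `ImageData` (flat RGBA bytes), and both function names are my own.

```javascript
// Apply a clip mask to a flat RGBA pixel array (as in canvas ImageData.data).
// mask holds one value in [0, 1] per pixel; only the alpha channel is scaled.
function clipWithMask(pixels, mask) {
  const out = Uint8ClampedArray.from(pixels);
  for (let i = 0; i < mask.length; i++) {
    out[i * 4 + 3] = Math.round(out[i * 4 + 3] * mask[i]);
  }
  return out;
}

// Example clip shape: a hard-edged circular mask for a w×h image.
function circleMask(w, h, cx, cy, r) {
  const mask = new Array(w * h);
  for (let y = 0; y < h; y++)
    for (let x = 0; x < w; x++)
      mask[y * w + x] = Math.hypot(x - cx, y - cy) <= r ? 1 : 0;
  return mask;
}
```

Using fractional mask values at the circle's edge (instead of a hard 0/1 cutoff) would give anti-aliased clipping; noise-perturbed masks give torn-parchment edges.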

Math on Paper


link: https://openprocessing.org/sketch/1486519 

The inspiration for my generative landscape is vaporwave. I mainly wanted to recreate this style (vibrant colors and atmospheric perspective) with generative coding. The mountain is generated from Perlin noise, and the color is randomized with a random seed. The bird's location is also randomly generated. A new landscape is generated whenever the page is refreshed.
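The two ingredients described (a noise-driven mountain ridge and a seeded random color) can be sketched like this. Everything here is an illustrative stand-in for p5's `noise()`/`randomSeed()`: the PRNG, the palette hex values, and the function names are all assumptions, written in plain JavaScript so the sketch is self-contained.

```javascript
// Tiny seeded PRNG (mulberry32): same seed, same sequence on every refresh.
function mulberry32(seed) {
  return function () {
    seed |= 0; seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// 1D value noise built on the seeded PRNG: smooth interpolation between
// random heights at integer positions -- a simple mountain-ridge profile.
function makeRidge(seed) {
  const rand = mulberry32(seed);
  const lattice = Array.from({ length: 256 }, () => rand());
  return function (x) {
    const x0 = Math.floor(x), t = x - x0;
    const s = t * t * (3 - 2 * t); // smoothstep easing
    const a = lattice[x0 % 256], b = lattice[(x0 + 1) % 256];
    return a + (b - a) * s;        // height in [0, 1]
  };
}

// Seeded pick from a vaporwave-ish palette (hex values are illustrative):
// the same seed always yields the same color.
const PALETTE = ["#ff71ce", "#01cdfe", "#05ffa1", "#b967ff", "#fffb96"];
function pickColor(seed) {
  return PALETTE[Math.floor(mulberry32(seed)() * PALETTE.length)];
}
```

Sampling `makeRidge(seed)` across the canvas width and filling below the curve draws the mountain; re-seeding on refresh gives a new but reproducible landscape.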

I think the randomized color is especially successful for this project. Something I wish to improve on is implementing this in 3D with WebGL.