OpenProcessing Sketch

A couple of things to note at the start… totally bombed the easiest part of this project… making the canvas a 2:1 ratio. I drew the water and submarine as a square, then remembered it was supposed to be a 2:1 ratio, but the thought of redrawing all of it made me want to die, so I cropped it to a 4:3 ratio which is random but the best I could manage. Sorry 🙁 Second, I managed to do this in like one afternoon, which is pretty evident I’m sure. But, I’m trying to do my best with the time that I’ve got, and I do want to come out of this class knowing I at least gave every project a go.

The original inspiration for this project came from the atmospheric train-ride piece by a past student that we looked at in class. A refresher:

I really liked the idea of a .png frame, looking through a hole of some kind into the outside world. My immediate thought was to make the inside of a submarine, but I wasn’t really interested in making underwater creatures. It eventually morphed into a submarine expedition around the world, based on my boyfriend’s father, who did just that in his time with the military. Looking at the final product, I’m dissatisfied because I think I accidentally left behind the one idea I found charming to begin with: a hole, a window, something surrounded.

In this sped-up gif, you can see that the submarine and waves kind of bob up and down. I originally had them static, and something felt super off and weird. Giving them a little life (independent of each other, too) definitely helped. You can also see that the sky cycles through day and night. I really like that I did this… it gives a sense of the passage of time, of how long this submarine expedition and adventure is really taking.
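The bobbing and the day/night cycle can be sketched roughly like this (a minimal stand-alone version of the idea with my own function names, not the original sketch’s code): two sine waves with different speeds and phases, so the submarine and the waves move on independent clocks, plus a 0–1 blend value you could feed into p5.js’s lerpColor() for the sky.

```javascript
// Hypothetical sketch of independent bobbing plus a day/night blend value.
// In p5.js, `frame` would be frameCount and the blend would drive lerpColor().

// Vertical bob: each element gets its own amplitude, speed, and phase,
// so nothing moves in lockstep.
function bobOffset(frame, amplitude, speed, phase) {
  return amplitude * Math.sin(frame * speed + phase);
}

// 0 at "midnight", 1 at "noon": a smooth cosine cycle over cycleLength frames.
function skyBlend(frame, cycleLength) {
  return 0.5 - 0.5 * Math.cos((2 * Math.PI * frame) / cycleLength);
}

const subY = bobOffset(120, 6, 0.03, 0);    // submarine bobs slowly
const waveY = bobOffset(120, 4, 0.05, 1.3); // waves bob on their own clock
const t = skyBlend(120, 1200);              // how "daytime" the sky is right now
```

The phase offset is what keeps the two motions from syncing up, which is what made the original static version feel “off.”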

I can only really draw in a cartoonish way, so I also added faux black outlines to the mountains to match my drawing, which definitely helped make everything feel more unified. But after talking with Golan about broadening my horizons and making less “cutesy” stuff, I wish I had given myself more time to try that. Hopefully I can do that with the creature project!



My concept started from an idea to make data-driven tree rings that represent precipitation/drought data in California, and are explorable at multiple scales. I had the idea to use branching as the natural way to explore scale, where larger scales (state, basin) could be larger rings that branch into the smaller rings (grid cell), so switching scales meant you were slicing the tree at different points. I had the code to draw the rings in this wobbly way, but plugging in the data as an input and getting a neat ring that starts and ends in the same place proved to be challenging, and my DeepNote started crashing repeatedly 🙁 I pivoted (for now) to make a landscape from the tree rings that can shift from a top-down view (slice) to a perspective view (branch).

I use the random function to place multiple tree rings on the canvas, mapping their y position to the number of rings, so lower “trunks” are thicker. I draw rings with some wobble, using Perlin noise to offset the radius as we go around the circle (learning the trick to get the noise values to start and end in the same place: use sin and cos of the angle as the xoff/yoff inputs to noise(), so the path through noise space is itself a circle), then add stippling with a random angle stagger so the stipples aren’t all lined up.
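The seamless-ring trick can be sketched like this (my own stand-alone version, with a toy smooth function standing in for p5.js’s noise()): because the noise field is sampled along a circle, angle 0 and angle 2π hit the same point in noise space, so the wobbly outline closes exactly where it started.

```javascript
// Toy smooth 2D field standing in for p5.js noise() -- not real Perlin,
// just deterministic and continuous, which is all the demo needs.
function fakeNoise(x, y) {
  return 0.5 + 0.25 * Math.sin(1.7 * x + 0.9 * y)
             + 0.25 * Math.sin(2.3 * y - 1.1 * x);
}

// Sample radii around a ring by walking a circle through noise space.
// Since cos/sin are periodic, the first and last samples coincide: no seam.
function ringRadii(baseRadius, wobble, noiseScale, steps) {
  const radii = [];
  for (let i = 0; i <= steps; i++) {
    const a = (i / steps) * 2 * Math.PI;
    const xoff = noiseScale * Math.cos(a); // circular path in noise space
    const yoff = noiseScale * Math.sin(a);
    radii.push(baseRadius + wobble * fakeNoise(xoff, yoff));
  }
  return radii;
}

const r = ringRadii(100, 15, 2.0, 360); // r[0] and r[360] match, so the ring closes
```

In the actual sketch you would draw these radii with vertex() inside beginShape()/endShape(CLOSE), one vertex per angle step.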


For my landscape I created a hotel hallway scene where each room opens up to a completely new landscape. Most assets are rendered in Blender, and each new scene chooses from two landscape layouts (flat vs. cylindrical), two types of trees, two types of creatures, two types of structures, four or so backgrounds, and four big structures off in the distance. Basically everything is PNGs, except for the first hundred or so frames, which are an mp4. You may have to refresh the page once when initially starting it (I really don’t know why), but after that it should work.



Link to sketch: https://openprocessing.org/sketch/1486536



starry – landscape

OpenProcessing link 

Since a lot of my work is inspired by nature, I wanted to explore a generative landscape that focused on simulating cloud movement. I also wanted to emulate the look of a film camera using overlaid lines and black bars, and try to incorporate more graphic design influences. It’s made with Perlin noise and Bezier curves.

I’m satisfied with the final result artistically, but I think in terms of computational complexity and cleanness of code it’s lacking. I feel that I was kind of just messing around with things without really understanding their concepts (i.e., with the Bezier curves), so it took me much longer than it should have to arrive at the final outcome. I changed the landscape from my first draft, which was originally going to generate “puffy”-looking clouds against a blue sky, as I felt it was too similar to my clock piece.


I wanted to do something really simple for this project. I was getting too caught up with all the different extravagant ways I could approach this assignment, so I decided I would not do anything high-tech at all. I basically messed up a double for loop and noticed that it looked a bit like a hand-drawn hill/horizon line. I have been wanting more and more to do something that looks like it comes from my hand, and this felt closer to having a material quality.


I started with these, and I’m not sure which I like better at the moment. Overall, I think I like this work. It’s extremely simple, code-wise and visually, but it felt like a return to letting go with my art and not really thinking too hard about it.





Title: Use Your Heart to Prepare the Paper; Use Your Feet to Draw the Map


Image #1
Image #2

I am very interested in creating photorealistic quality using non-PBR tools. A map is relatively achievable, and when those hand strokes are simulated by the machine, the quality increases but the authenticity somehow remains, as if it were crafted by hand. The entire image is made of many layers of clipping, many canvases, many Perlin noises, and a lot of high-school geometry. However, I strongly encourage the audience not to pay attention to the map drawn on the parchment, because the map is clearly unfinished and badly made.

Process and Reflection


The most artistically challenging part was making the parchment look real, with a nice texture. The most technically difficult part was figuring out how clipping can work on pixel arrays. Of course, the project is over-scoped, as mine usually are. But since I think I learn the most from an over-scoped project, I will keep doing it, and the time should gradually shrink as I get increasingly familiar with 2D canvas coding.
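One common way to clip directly on a raw RGBA pixel array (a hedged sketch of the general technique, not necessarily how this piece does it) is to treat clipping as masking: zero the alpha channel of every pixel outside the clip shape, so those pixels become transparent when the array is drawn back.

```javascript
// Clip a flat RGBA array (like p5.js pixels[] or Canvas ImageData.data)
// to a circle by zeroing alpha outside it. The shape test can be anything.
function clipToCircle(pixels, width, height, cx, cy, radius) {
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const dx = x - cx, dy = y - cy;
      if (dx * dx + dy * dy > radius * radius) {
        // 4 bytes per pixel (R,G,B,A); index +3 is alpha
        pixels[4 * (y * width + x) + 3] = 0;
      }
    }
  }
  return pixels;
}

// 2x2 opaque white image, clipped to a unit circle around the top-left pixel:
// the far corner (1,1) falls outside and goes transparent.
const img = new Uint8ClampedArray([
  255, 255, 255, 255,  255, 255, 255, 255,
  255, 255, 255, 255,  255, 255, 255, 255,
]);
clipToCircle(img, 2, 2, 0, 0, 1);
```

For soft edges you would multiply alpha by a 0–1 mask value near the boundary instead of a hard cutoff.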

Math on Paper


link: https://openprocessing.org/sketch/1486519 

The inspiration for my generative landscape is vaporwave. I mainly wanted to recreate this style (vibrant colors and atmospheric perspective) with generative coding. The mountain is generated from Perlin noise, and the color is randomized with a random seed. The bird’s location is also randomly generated. A new landscape is generated whenever the page is refreshed.

I think the randomized color is especially successful for this project. Something I wish to improve on is implementing this in a 3D space with WebGL.



Initially, I intended to make a gigantic surreal landscape out of many balls. Inspired by childhood memories of playing in big ball pits, I wanted to make a 3D, physics-enabled, interactive landscape where the user gets to drive around what is essentially a gigantic ball pit. But due to some mysterious and possibly quite stupid issues with the physics engine, and Golan’s advice, I diverted to this idea: a simple poetic experience of chasing the sun. The user can control their movement to navigate through the rising ice particles in pursuit of the unattainable ultimate destination.

Things I wish I had the time to add: background music, sound effects, better movement control (with acceleration), pointer-lock controls (the ability to look around), generative terrain (below), and VR.

About the size requirement: the application adapts to your browser window’s size and aspect ratio (though not on resize; I didn’t have time to add that, so you’d have to refresh), so you can get any size and aspect ratio you want by tweaking your browser window.



The idea is to create terraced fields; I wanted to play around in 3D space, and this seemed like an easy start. I generated random numbers to locate the mountains and then used Perlin noise to draw their shapes.

The biggest challenge for me was the time lag. I also explored the lighting and texture functions of p5.js and was hoping to create more realistic pictures, but the code already takes about 10–15 seconds to generate each picture, so I was afraid to add more detail. I think I might be calling noise() too many times. I’ll try to improve the efficiency if I have time.
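A standard fix for too many noise() calls (a hedged sketch of the general caching technique, with my own names; the toy function below stands in for p5.js’s noise()) is to sample the noise field once into a height map during setup, then read the cached values every frame instead of recomputing them per vertex.

```javascript
// Precompute a terrain height map so noise is evaluated exactly once per cell.
// noiseFn would be p5.js noise() in the real sketch; here it is a stand-in.
function buildHeightMap(cols, rows, scale, noiseFn) {
  const heights = new Float32Array(cols * rows);
  for (let j = 0; j < rows; j++) {
    for (let i = 0; i < cols; i++) {
      heights[j * cols + i] = noiseFn(i * scale, j * scale); // one call, ever
    }
  }
  return heights;
}

// Deterministic smooth stand-in for noise(), returning values in [0, 1].
const fakeNoise = (x, y) => 0.5 + 0.5 * Math.sin(x * 1.3 + y * 2.1);

const cols = 50, rows = 50;
const heights = buildHeightMap(cols, rows, 0.1, fakeNoise);
// In draw(): read heights[j * cols + i] instead of calling noise() again.
```

This turns a per-frame cost of cols × rows noise calls into a one-time setup cost; lowering the octave count via noiseDetail() is another cheap lever.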




In this project, I wanted to play with portraying the crowded buildings of a city, whether from an aerial perspective or head-on. Having grown up in the city, I also wanted to add an abstract, surreal element to my work that strays from the direct representation of buildings and instead reminds people of city living through geometric play with negative space. I added hints of color that might recall reflections off skyscrapers.

I really like the result of the generative landscape. However, sometimes the buildings can appear very detailed and noisy, which I am still unsure about. I think that on a bigger screen the more detailed generations would look better. I struggled with the colors of the work and how much color to add, but I like the small hints of color that draw viewers’ focus to specific areas, and the play with primary colors in the end.