I have been working on getting my n-body gravity simulator to function correctly. It currently calculates the gravity on each in-play body based on the distance to, the density of, and the center of mass of every other body. From here, I want to add a function that combines multiple bodies when they get too close together, taking their different materials into account.
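The per-body calculation described above can be sketched roughly like this. This is a hypothetical Python sketch, not my actual Unity code; the `Body` class and `G` constant are illustrative, and mass is derived from density and radius assuming uniform spheres.

```python
import math

G = 6.674e-11  # gravitational constant (illustrative; a game would scale this up)

class Body:
    def __init__(self, x, y, radius, density):
        self.x, self.y = x, y
        self.radius = radius
        self.density = density

    @property
    def mass(self):
        # Mass of a uniform sphere: (4/3) * pi * r^3 * density.
        return (4 / 3) * math.pi * self.radius ** 3 * self.density

def acceleration_on(a, bodies):
    """Sum the gravitational acceleration on `a` from every other body."""
    ax = ay = 0.0
    for b in bodies:
        if b is a:
            continue
        # Vector from a's center of mass to b's center of mass.
        dx, dy = b.x - a.x, b.y - a.y
        r2 = dx * dx + dy * dy
        r = math.sqrt(r2)
        # Acceleration magnitude G * m / r^2, directed toward b.
        mag = G * b.mass / r2
        ax += mag * dx / r
        ay += mag * dy / r
    return ax, ay
```

Each frame, every body would integrate this acceleration into its velocity and position.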


I have been working on trying to get more than two spheres to all have a force of attraction between them. My overall project is to allow any number of spheres to all feel a force of gravity towards one another and allow them to combine into larger spheres to try to simulate planets and stars forming from debris.
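A rough sketch of the combining step I have in mind: when two spheres touch, replace them with one sphere placed at their center of mass, conserving mass and momentum, with the new radius coming from the combined volume. This is an illustrative Python outline (assuming equal densities for simplicity), not the eventual Unity implementation.

```python
import math

def touching(a, b):
    """True if the spheres overlap or touch (dicts with x, y, radius)."""
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"]) <= a["radius"] + b["radius"]

def merge(a, b):
    """Combine two spheres (dicts with x, y, vx, vy, radius, density)."""
    m_a = (4 / 3) * math.pi * a["radius"] ** 3 * a["density"]
    m_b = (4 / 3) * math.pi * b["radius"] ** 3 * b["density"]
    m = m_a + m_b
    return {
        # New position is the center of mass of the pair.
        "x": (a["x"] * m_a + b["x"] * m_b) / m,
        "y": (a["y"] * m_a + b["y"] * m_b) / m,
        # Conserving momentum gives the new velocity.
        "vx": (a["vx"] * m_a + b["vx"] * m_b) / m,
        "vy": (a["vy"] * m_a + b["vy"] * m_b) / m,
        # Volumes add, so the new radius is the cube root of r_a^3 + r_b^3.
        "radius": (a["radius"] ** 3 + b["radius"] ** 3) ** (1 / 3),
        "density": a["density"],  # assumes equal densities; mixing materials would average this
    }
```

Handling different materials would mean computing a weighted average density instead of assuming they match.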



I looked into “Falling Everything”, the custom engine behind the indie game “Noita”. This engine creates a fully simulated world of pixels with solids, powders, rigidbodies, liquids, gases, fire, and various oddly-behaving materials. The developers discussed how everything in the game runs on the same fundamental algorithm, which effectively generates the behavior of sand. I also learned that I will have to keep my physics simulation much simpler: the engine only runs effectively because of the amount of optimization they put into it, which I will be unable to replicate in Unity.
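The basic sand rule they build everything on can be sketched in a few lines. This is my own minimal reading of the idea, not Noita's actual code: each frame, a sand cell falls straight down into an empty cell, and otherwise tries the two diagonals in random order.

```python
import random

EMPTY, SAND = 0, 1

def step(grid):
    """One update of the falling-sand rule. grid[y][x]; y grows downward."""
    h, w = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    # Scan bottom-up so each grain falls at most one cell per frame
    # but can drop into a cell vacated earlier in the same pass.
    for y in range(h - 2, -1, -1):
        for x in range(w):
            if grid[y][x] != SAND:
                continue
            # Try straight down first, then the diagonals in random order.
            candidates = [x] + random.sample([x - 1, x + 1], 2)
            for nx in candidates:
                if 0 <= nx < w and new[y + 1][nx] == EMPTY:
                    new[y + 1][nx] = SAND
                    new[y][x] = EMPTY
                    break
    return new
```

Powders, liquids, and gases then become variations on which neighbor cells a material is allowed to move into.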


I also played around with the orbit simulator here: https://academo.org/demos/orbit-simulator/ to get an idea of what the orbital part of what I want to create looks like in a two-body simulation.


Idea 1:

Create an interactive, physics-based pixel world with the elements of fire, water, wood, and earth. Fire burns wood, water snuffs out fire and flows, wood rots slowly over time, and earth erodes slowly over time.


Idea 2:

Keep working on my previous project, adding multiple colors and spectra of color across the lines when multiple people are interacting at once. Possibly also add different shards of color in the shapes created by the intersections of the lines.


My piece is a 2-person web.

I was lucky enough to land on a much better idea than the original, highly contrived fire-based piece that I was planning to make. With this piece, I arrived at a very basic idea that nonetheless lends itself to an infinite number of ways of communicating, and that could be fully developed into a series of geometric runes able to describe anything a normal, linear language can. For the reasons above, I feel like this is the most successful piece I have made yet. Still, it has some limitations, as exactly one pattern corresponds to a given combination of fingers across the two screens. I would like to continue this piece by allowing the second user to disconnect certain lines between fingers.



The project that I selected is “That Time I Sold My Dreads Online” by E. Jane. The piece is a simple image of a small plastic bag containing several dreadlocks, labeled “Locks From E. Jane’s Head,” listed on eBay. I found it interesting because it is a very blunt, no-frills conversation about a dilemma that many artists face: they can make art that they truly believe in but will often have trouble supporting themselves, or they can create art that they may actively dislike making, or even find disgusting, simply to be able to afford living.



In this project I tried to make a program that tracks the positions of your hands (using the code given to us) and attaches generative fire to your fingertips on the screen. I used several different factors to try to generate realistic fire, determining how rapidly it would ascend, how spread out it would get, and how much smoke it produced. Overall, I am happy with how I was able to turn the idea behind my project into reality, but the program runs very slowly, making the fire move upwards far more slowly than real fire would. I tried to fix this issue by drawing pixels instead of circles, but it still ran slowly. Something I wish I could have added was having fire come only out of your index finger when you close your other fingers, as this would let you draw with the fire, but I really did not know how to begin implementing it.
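The three factors I mention can be sketched as a small particle system. This is a hypothetical Python outline of the idea, not the actual piece (which ran on the class's hand-tracking code); the constants and names are illustrative.

```python
import random

RISE = 2.0          # how rapidly a flame particle ascends per frame
SPREAD = 0.8        # how far it flickers sideways per frame
SMOKE_CHANCE = 0.1  # per-frame chance a flame particle turns to smoke

def spawn(x, y):
    """Create a new flame particle at a fingertip position."""
    return {"x": x, "y": y, "life": 1.0, "smoke": False}

def update(p):
    """Advance one particle by one frame; returns False when it should be removed."""
    p["y"] -= RISE * random.uniform(0.5, 1.5)   # fire rises (screen y grows downward)
    p["x"] += random.uniform(-SPREAD, SPREAD)   # horizontal flicker
    p["life"] -= 0.02                           # fade out over ~50 frames
    if not p["smoke"] and random.random() < SMOKE_CHANCE:
        p["smoke"] = True                       # smoke could be drawn grey and wider
    return p["life"] > 0
```

The index-finger-only mode I wanted would sit on top of this: only call `spawn` for fingertips the hand tracker reports as extended.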


“Joy Buolamwini of the Algorithmic Justice League speaks on wearing a white mask to get her own highly imaginative ‘Aspire Mirror’ project, which involves facial recognition, to perceive her existence.” I didn’t realize that biases in AI would compound into many projects that did not themselves introduce any bias.

“In the early 1960s, Paul Ekman set out to discover whether facial expressions for communicating emotions are universal or cultural. He travelled the world with photographs of distinct faces, even traveling to remote locations, and found that they were interpreted consistently.” It seems obvious today that facial expressions are based in nature as opposed to nurture; I hadn’t realized that someone most likely had to prove it before it became common knowledge.


I spent quite a while trying to get the VQGAN+CLIP site to work, but I was completely unsuccessful. I instead used the Pixray readymade. I was surprised by how unrelated to the prompt the first couple of images were, and then I found it to be a window into how ML works, watching it take that image and slowly make it fit closer and closer to the prompt.
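What I was watching is, at its core, iterative optimization. Here is a toy analogue of that loop (not the actual VQGAN+CLIP pipeline, which works on image pixels and text embeddings): start from a blank "image", and each step nudge it a little toward a target vector, the way the real system nudges the image toward the prompt.

```python
def optimize(target, steps=100, lr=0.1):
    """Gradient-descend a vector toward `target` by minimizing squared distance."""
    image = [0.0] * len(target)  # start far from the "prompt"
    for _ in range(steps):
        # Gradient of (v - t)^2 with respect to v is 2 * (v - t);
        # step against it so each iteration fits the target a bit better.
        image = [v - lr * 2 * (v - t) for v, t in zip(image, target)]
    return image
```

Early iterations look nothing like the target, just as my first Pixray images looked nothing like the prompt, but each step closes part of the remaining gap.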


I went to a coffee shop downtown the other day and was in a corner by myself.

After reading a few pages of a book I noticed a boy come in alone and sit down in a chair at the table next to me.

He opened a pad of paper and started writing.

Every so often he glanced over his shoulder to make sure nobody was coming.

He was in deep thought.

Then he put his pen down and a moment later got up and left.

A short time later a woman entered the coffee shop alone.

She sat down at a table across from me.

She put her coat down on a nearby chair and leaned back in the chair.

I thought, “Aw, that’s nice,” and continued reading my book.

When it seemed like she’d waited a long time for her drink she left the shop again and started walking down the street.

I thought, “Hmm … interesting.”

Then a man sat down at my table and started reading.

After a few minutes of quiet conversation I thought, “That is a lot of coffee.”

He leaned back in the chair and closed his eyes.

I thought, “He’s napping.”



A Duck

Wall to wall carpeting

The duckling was sitting on the roof of the duck house, watching the birds fly by. As the sun started to set, the ducklings could see the carpet in the distance, stretching all the way to the horizon. They knew that they would have to find a way to get over the wall to get to the carpet, and they were excited to do so.


I find it interesting how these two tools approach the same idea differently. One just asks for themes while the other asks for the start of a story. I wonder if both algorithms use the given information in the same way.