My project was mainly a learning experience aimed at learning the ins and outs of Unity. Because of this, a lot of my time was spent doing tutorials and getting used to the new interface and programming language (C#). I chose to import objects from a Blender scene to make my job a bit easier, but it didn’t actually end up helping that much because I had to spend a lot of time baking the objects’ textures and remapping their UVs. There were two main complex problems I sought to solve in my project. The first was to create a player character that moved coherently in relation to a third-person camera. The second was to create a shader system so that only the parts of objects within a transparent cube would be rendered. The latter proved especially hard to figure out. I was able to implement both of these things, which I was pretty satisfied with.
Here is a video showing more clearly the capability of the cube-clipping shader system.
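The core test behind the cube-clipping effect can be sketched in plain JavaScript (the real version lives in a Unity shader, and these names are illustrative, not my actual code). Ignoring the cube's rotation, a fragment is kept only if its world-space position falls inside the cube's bounds:

```javascript
// Hypothetical sketch of the cube-clipping test, assuming an
// axis-aligned cube given by its center and half-size.
function insideCube(point, cubeCenter, cubeHalfSize) {
  return (
    Math.abs(point.x - cubeCenter.x) <= cubeHalfSize &&
    Math.abs(point.y - cubeCenter.y) <= cubeHalfSize &&
    Math.abs(point.z - cubeCenter.z) <= cubeHalfSize
  );
}

// A point inside the unit cube around the origin passes the test;
// a point outside it fails.
insideCube({ x: 0.5, y: -0.5, z: 0 }, { x: 0, y: 0, z: 0 }, 1); // true
insideCube({ x: 2, y: 0, z: 0 }, { x: 0, y: 0, z: 0 }, 1); // false
```

In the shader, a fragment failing a test like this would be discarded (e.g. with `clip()` in HLSL), so only the geometry inside the cube ever renders.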
Where to go from here:
I had a lot of ideas for where I could take my project further, and I plan on implementing at least some of them before the submission. Here are some of them:
-make the camera moveable with mouse control
-make the player be able to ride the train
-create an above-ground environment the player can travel to
-implement better lighting
-improve materials (the trains especially)
-add other characters into the scene, perhaps with some simple crowd simulation that allows the characters to get on and off the train
-implement simple character animations
The main creator I have been looking at as I iron out the kinks of Unity is the YouTuber Sebastian Lague. He has a ton of videos that detail how he creates small but interesting projects in Unity, such as a flight simulator that uses real-world Earth data (for example, depth data to create a normal map for the globe model), or a barebones yet interconnected simulated ecosystem. I found his series on creating procedural plants particularly helpful. I don’t think any of these projects would necessarily be super engaging to play or super artistically interesting, but I don’t think they’re really meant to be. I think they function well to highlight what can be done in Unity and to get viewers excited to create, which is really all I need for this project. I have also been watching a few videos on crowd simulation in Unity because I have a few ideas for using a lot of AI agents in my project. That has been useful, but most videos detail projects that are probably more complex than I’ll need.
The prompts I gave for these two images, respectively, are “worms in Parliament” and “worm lawyer in worm court.”
The InferKit tool has definitely sampled from a lot of writing: I kept trying to make the text generator give new Fight Club rules, experimenting with changing the syntax of the sentence and removing words, but without fail it kept saying that the first rule is “you don’t talk about fight club.” That answer is accurate, of course, but I kind of wish the generator were worse at its job so it could output a funnier answer. Otherwise, I thought InferKit created surprisingly realistic and natural writing most of the time. The Narrative Device website created similarly natural-sounding writing. However, in the little experimentation I did with it, I felt that one theme usually overpowered the other. Here, for example, I kept switching the second theme while keeping the first as “dogs with guns,” and each new prompt had nearly nothing to do with the second theme, focusing squarely on the exploits of armed canines. Maybe the generator is just really, really interested in dogs with guns.
These are two images I generated, using the shoe and cat modes respectively. I thought this was pretty cool, but at the same time pretty frustrating: with the cat one, I gravitated toward giving the cat really complex, unnatural forms, but Pix2Pix kept having trouble interpreting the image, and it would keep coming out as an incomprehensible, blurry mess. I do realize why this is the case, however.
The project that caught my attention the most was a real-time SketchRNN project called Scrying Pen by Andy Matuschak on the Chrome Experiments website. The experiment predicts the user’s future strokes as they draw. I have seen a few stroke-based predictive machine learning experiments before, but this is the most interesting application of the idea that I’ve seen so far. As I drew, it started to feel less like the algorithm was predicting what my next stroke would be and more like it was making judgments and suggesting what that stroke should be, and I found myself actually recreating the predicted strokes somewhat, which was a strange experience.
In form, my creatures are basically just three lines connected to a point, but I gave them a lot of behaviors that hopefully give them personality and character. They avoid the cursor, running away from it when it’s close by and hopping away when it’s a bit further away. Sometimes they trip, and sometimes they leap really high. I didn’t use any physics libraries or anything like that, so the code is super janky, but hopefully that jankiness adds to the creatures’ character and unpredictability instead of distracting from it. I also added a feature where if you press “s” on the keyboard, they all hop in unison.
Link to the sketch: https://openprocessing.org/sketch/1491759
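The two-zone avoidance behavior can be sketched roughly like this in plain JavaScript (a minimal, hypothetical version with made-up names and numbers, not the sketch's actual code):

```javascript
// Hypothetical sketch of cursor avoidance: a creature runs (steady speed)
// when the cursor is close, hops (a stronger impulse) when it is a bit
// further away, and idles when the cursor is far. Radii and speeds are
// illustrative guesses.
function avoidCursor(creature, mouse, panicRadius = 60, hopRadius = 140) {
  const dx = creature.x - mouse.x;
  const dy = creature.y - mouse.y;
  const dist = Math.hypot(dx, dy);
  if (dist === 0 || dist > hopRadius) return { vx: 0, vy: 0 }; // cursor far: idle
  // Unit vector pointing away from the cursor.
  const ux = dx / dist;
  const uy = dy / dist;
  // Close by: run; in the outer band: hop with a bigger impulse.
  const speed = dist < panicRadius ? 3 : 8;
  return { vx: ux * speed, vy: uy * speed };
}
```

In a p5.js draw loop, this velocity would be added to each creature's position every frame, with `mouseX`/`mouseY` as the cursor input.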
For my landscape I created a hotel hallway scene where each room opens up to a completely new landscape. Most assets are rendered in Blender, and each new scene chooses from two different landscape layouts (flat vs. cylindrical), two types of trees, two types of creatures, two types of structures, four or so backgrounds, and four big structures off in the distance. Basically everything is a PNG, except for the first hundred or so frames, which are an mp4. You may have to refresh the page once when initially starting it (I really don’t know why), but after that it should work.
Link to sketch: https://openprocessing.org/sketch/1486536
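Assembling each room's landscape amounts to one random pick per asset pool; a minimal sketch of that idea in plain JavaScript (pool names and entries are illustrative, not my actual asset names):

```javascript
// Hypothetical asset pools, one entry picked per pool for each new room.
const pools = {
  layout:     ["flat", "cylindrical"],
  tree:       ["treeA", "treeB"],
  creature:   ["creatureA", "creatureB"],
  structure:  ["structureA", "structureB"],
  background: ["bg1", "bg2", "bg3", "bg4"],
  distant:    ["big1", "big2", "big3", "big4"],
};

// Pick a uniformly random element from an array.
function pick(arr) {
  return arr[Math.floor(Math.random() * arr.length)];
}

// Build one scene by picking once from every pool.
function newScene() {
  const scene = {};
  for (const key in pools) scene[key] = pick(pools[key]);
  return scene;
}
```

With pools of sizes 2, 2, 2, 2, 4, and 4, this gives 2 × 2 × 2 × 2 × 4 × 4 = 256 possible room combinations.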
Link to the sketch: https://openprocessing.org/sketch/1476237
My project is a clock that measures time through the petal-like formations of a cosine polar equation. The largest petals count the hours, the medium-sized ones the minutes, and the smallest the seconds. Additionally, the background changes over time, from a bright blue during the day to a deep purple at night. I had been somewhat introduced to the concept of polar equations back during the assignment to make a spiral in p5.js, and it really opened my eyes to how useful and interesting they could be. I had learned about them in my calculus class in high school, but seeing them used in this creative coding environment led me to see them in a new light.
I found using these polar equations to be surprisingly difficult, as for all three shapes I had to modify the start and end angles in order to make the radius of each petal start at zero and end at zero. I couldn’t really find a consistent pattern, so I had to eyeball it, but I think it works pretty well and is accurate and legible. One thing that I definitely think can be improved about this piece is its presentation. I did spend a lot of time on it, but right now it’s pretty plain. I had ideas to implement different “themes” depending on the time of day, like making it look neon at night and more like a regular flower during the day, but I didn’t get the chance to really flesh this out. I definitely do want to go back and implement those ideas.
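In hindsight, the zero-radius angles do follow a closed-form pattern. For a rose curve r = cos(kθ), the radius is zero whenever kθ is an odd multiple of π/2, so each petal's start and end angles can be computed directly. A small sketch in plain JavaScript (illustrative, not the clock's actual code):

```javascript
// For r = cos(k * theta), r = 0 when k * theta = (2n ± 1) * PI / 2.
// Petal n (n = 0, 1, ...) is centered at theta = n * PI / k and runs
// between the two neighboring zeros of the radius:
function petalAngles(k, n) {
  const start = ((2 * n - 1) * Math.PI) / (2 * k);
  const end   = ((2 * n + 1) * Math.PI) / (2 * k);
  return { start, end };
}

// At both endpoints, cos(k * theta) is (numerically) zero, so each petal
// starts and ends at the origin; at the center, the radius is ±1.
```

Sweeping θ from `start` to `end` while plotting (r·cos θ, r·sin θ) then traces exactly one petal.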
This is what the clock looks like during the day:
and this is it at night:
Here are my sketches: