Student Area

Hunan-FinalDocumentation

The previous posts already contain all the documentation I have for the final project, so I'll just copy them over here.

PART 1

A Unity walkaround with procedurally generated terrain of (medieval?) Pittsburgh. A simulator of ancient Pittsburgh, if you will.

An example of heightmap data used as terrain information. This information is available for almost the entirety of the US and many parts of the world at: https://heightmap.skydark.pl/
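The project leans on an existing terrain package (see Part 2), but as a rough, hypothetical sketch of the underlying idea, a grayscale heightmap can drive a Unity terrain like this. The names here are made up, and the heightmap is assumed to have been imported as a readable Texture2D:

```csharp
using UnityEngine;

// Minimal sketch: apply a grayscale heightmap texture to a Unity Terrain.
// "heightmapTexture" is a hypothetical readable Texture2D exported from a
// heightmap tool; the actual project uses a terrain package instead.
public class HeightmapToTerrain : MonoBehaviour
{
    public Terrain terrain;
    public Texture2D heightmapTexture;

    void Start()
    {
        TerrainData data = terrain.terrainData;
        int res = data.heightmapResolution;
        float[,] heights = new float[res, res];

        for (int y = 0; y < res; y++)
        {
            for (int x = 0; x < res; x++)
            {
                // Sample the texture in normalized coordinates; grayscale value = height.
                float u = (float)x / (res - 1);
                float v = (float)y / (res - 1);
                heights[y, x] = heightmapTexture.GetPixelBilinear(u, v).grayscale;
            }
        }

        data.SetHeights(0, 0, heights);
    }
}
```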

[Looking at Point Park from a little bit to the west of (what is now) Grandview Overlook]

For reference:

[Looking at Schenley from CMU]

[Random houses on Mt. Washington (south shore of PGH)]

PART 2

I was working on procedural terrain generation for the first half of this project, but because I found a really good package for that, there was very little left for me to do apart from learning how to use it. Since my goal was to learn Unity, I decided to make something that requires more scripting. So I made a 3D in-game builder (the best way to explain it would be Kerbal Space Program's user interaction combined with the mechanics of Minecraft). I got to explore scripts, materials, audio sources, UI, the character controller, physics raycasting, box colliders, and mesh renderers. The environment is the HDRP sample scene in Unity. It only took 150 lines of code for the builder and another 50 for a very naive character movement controller, which is quite amazing.
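For a sense of what that scripting involves, here is a minimal, hypothetical sketch of raycast-based block placement in Unity. The names (blockPrefab, maxReach) are made up, and the actual builder script is certainly different:

```csharp
using UnityEngine;

// Minimal sketch of raycast-based block placement (Minecraft-style mechanics
// with mouse-driven interaction). Hypothetical names; not the actual builder.
public class BlockBuilder : MonoBehaviour
{
    public Camera playerCamera;
    public GameObject blockPrefab;   // a 1x1x1 cube with a BoxCollider
    public float maxReach = 6f;

    void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;

        Ray ray = playerCamera.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(ray, out RaycastHit hit, maxReach))
        {
            // Offset along the surface normal so the new block sits on the face
            // we hit, then snap to the integer grid.
            Vector3 pos = hit.point + hit.normal * 0.5f;
            pos = new Vector3(Mathf.Round(pos.x), Mathf.Round(pos.y), Mathf.Round(pos.z));
            Instantiate(blockPrefab, pos, Quaternion.identity);
        }
    }
}
```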

 

Hunan-LookingOutwards03

https://experiments.withgoogle.com/imaginary-soundscape

I found this project and thought it was a really cool use of ML with a lot of potential. In the demo, the model generates sound based on the Street View image, giving the user a full audiovisual experience from purely visual street map data. I thought it really played to deep learning's strength: finding patterns in massive datasets. With only a reasonably sized training set, it can generalize to the massive amount of Street View data we have.

The underlying model (if I understood/guessed correctly) is a frozen CNN that takes in the image and generates an embedding, plus a second CNN, trained but discarded at inference, that takes in the sound file and generates an embedding as close as possible to the one the image CNN produces for the paired image. The embedding for each sound file is then saved and used as a key to retrieve the sound file at inference time.

With more recent developments in massive language models (transformers), we've seen evidence (Jukebox, Vision Transformers, etc.) that they are more or less task agnostic. Since the author mentioned that the model sometimes lacks a full semantic understanding of the context and simply matches sound files to the objects seen in the image, these multi-modal models are a promising way to extend this work to more complex contexts. They might also open up possibilities for remixing sound files instead of simply retrieving existing sound data, further improving the user experience. This could also see some exciting uses in the gaming/simulation industry: Microsoft's new flight simulator already uses deep learning to generate a 3D model (mainly buildings) of the entire Earth from satellite imagery, and it's only reasonable to assume that some day we'll need audio generation to go along with the 3D asset generation for the technology to be used in more games.
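If that reading of the setup is right, inference reduces to a nearest-neighbor lookup in the shared embedding space. A toy sketch with made-up names, reflecting my guess at the mechanism rather than the project's actual code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy sketch of the retrieval step as I understand it: the image CNN produces
// an embedding, and the closest precomputed sound embedding wins.
// All names are made up; this is not the project's actual implementation.
public static class SoundscapeRetrieval
{
    public static string FindClosestSound(float[] imageEmbedding,
                                          Dictionary<string, float[]> soundEmbeddings)
    {
        return soundEmbeddings
            .OrderByDescending(kv => CosineSimilarity(imageEmbedding, kv.Value))
            .First().Key;   // file name of the best-matching sound
    }

    static float CosineSimilarity(float[] a, float[] b)
    {
        float dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (MathF.Sqrt(normA) * MathF.Sqrt(normB) + 1e-8f);
    }
}
```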

spingbing-FinalProject

This final project is an attempt at using the Looking Glass HoloPlay Unity plugin to create an interactive animation/mini installation. The animation is of a character I created in Google's TiltBrush application for an earlier project (found here), which I exported into Blender to rig and keyframe.

My concept was to create a story where the Looking Glass was a sort of window into a box that the character was trapped in. It sulks in the box alone until the viewer interacts with it by pressing a button; this causes the character to notice the existence of the viewer and attempt to jump at and attack the viewer, only to be stopped by the constraints of the box.

To break it down, I needed one idle animation, and one “reaction” animation that would occur with the button press. In this process, I made a preliminary walk cycle animation:

To give it more character, I added some random twitching/jerking:

I did these animations as practice, but also because I thought I would use them in my final project as part of my idle animation. I tried to mess around with Blender's Action Editor, NLA Editor, etc. to piece together multiple animations, but it didn't come out the way I wanted it to.

In the end, I was able to make an idle animation and a reaction animation that I liked and that worked in Unity, so the only thing left was to put it on the Looking Glass. However, I immediately ran into an issue:

[Error] Client access error (code = hpc_CLIERR_PIPEERROR): Interprocess pipe broken. Check if HoloPlay Service is still running!

Looking up this issue led me to delete and reinstall the HoloPlay plugin, restart Unity, duplicate my project, delete certain files off of my computer, and try many other things that others suggested online. However, nothing worked. I did see a comment from someone who I believe works on the Looking Glass saying that they would fix this problem in a future update.

Without the Looking Glass, here is my final project:

https://youtube.com/shorts/MKlnXfXNzsc?feature=share

Just the idle state:

Just the reaction:

 

Sneeze-FinalDocumentation

I spent my time catching up on past projects. For the telematic project, I created a drawing website that has rooms. These rooms are used as prompts for people around the world to draw together, with a Twitter bot account that presents a new prompt every day. The Twitter account is @CollabDrawABCs and you can view it here. Also, here is my blog post about the deliverable.

The way I merged a drawing app with an app that has rooms was by looking at template code: I used the Persistent drawing app code as well as the Piano rooms code as the basis of my project.

I had some trouble in the beginning figuring out how to merge rooms into the drawing app, because the drawing app uses a database to store the drawings. This made creating rooms different from the Piano rooms code, and it made me understand lowdb and socket.io better.

I initially thought about having a roomID passed as an argument into functions so the server could recognize it and store the data separately, but that ended up not working. After trying a couple of other approaches, I finally understood what needed to be done to get this functioning.

This is where data sent to the server gets packaged. Each package has a roomID in the array and gets serialized and deserialized together with the line data and color data. The package is then stored in the database by the server.

This is on the client end. It receives a package and removes the last two elements, which are the room and the color, before creating the lines.

These are the two main things I changed. There were other things that needed tweaking, such as handling drawings that are not stored but are being drawn live, and changing the clients to store the roomID as well.

I also created a Twitter bot to tweet out prompts each day. I used cheapbotsdonequick, which basically handles everything for me.

It uses Tracery, which basically allows you to replace a variable in a base line of text with random other pieces of text. After that, I set it to tweet once every day.
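As a rough, made-up illustration (not the bot's actual grammar): a Tracery grammar is just JSON, and each #symbol# in a rule gets replaced with a random entry from that symbol's list.

```json
{
  "origin": ["Today's prompt: draw #adjective# #animal# in room #letter#!"],
  "adjective": ["a sleepy", "a tiny", "an enormous"],
  "animal": ["octopus", "pigeon", "dragon"],
  "letter": ["A", "B", "C"]
}
```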

Wormtilda-FinalPhase4

My project was mainly a learning experience aimed at learning the ins and outs of Unity. Because of this, a lot of my time was spent doing tutorials and getting used to the new interface and programming language (C#). I chose to import objects from a Blender scene to make my job a bit easier, but it didn't actually end up helping that much because I had to spend a lot of time baking the objects' textures and remapping their UVs. There were two main complex problems I sought to solve in my project. The first was to create a player character that moves coherently in relation to a third-person camera. The second was to create a shader system so that only the parts of objects inside a transparent cube would be rendered. The latter proved especially hard to figure out. I was able to implement both of these things, which I was pretty satisfied with.
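For the first problem, the usual approach is to flatten the camera's forward and right vectors onto the ground plane and build the movement direction from those. A hypothetical sketch along those lines (not the project's actual controller):

```csharp
using UnityEngine;

// Sketch of camera-relative third-person movement: input is interpreted
// relative to where the camera is facing, flattened onto the ground plane.
// Hypothetical names; not the project's actual controller.
public class ThirdPersonMover : MonoBehaviour
{
    public Transform cameraTransform;
    public float speed = 4f;

    void Update()
    {
        Vector3 forward = cameraTransform.forward;
        Vector3 right = cameraTransform.right;
        forward.y = 0; right.y = 0;          // ignore camera pitch
        forward.Normalize(); right.Normalize();

        Vector3 move = forward * Input.GetAxis("Vertical")
                     + right   * Input.GetAxis("Horizontal");

        transform.position += move * speed * Time.deltaTime;

        // Face the direction of travel.
        if (move.sqrMagnitude > 0.001f)
            transform.rotation = Quaternion.LookRotation(move);
    }
}
```

For the cube clipping, one common pattern is to pass the cube's bounds from C# to the shader (for example with Material.SetVector) and discard fragments outside those bounds in the shader itself; I'm guessing the project does something along those lines.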

Here is a video showing more clearly the capability of the cube-clipping shader system.

Where to go from here:

I had a lot of ideas for where I could take my project further, and I plan on implementing at least some of them before the submission. Here are some of them:

-make the camera moveable with mouse control

-make the player be able to ride the train

-create an above-ground environment the player can travel to

-implement better lighting

-improve materials (the trains especially)

-add other characters into the scene, perhaps with some simple crowd simulation that allows the characters to get on and off the train

-implement simple character animations

bookooBread-08FinalPhase4

Houdini Explorations in 3D Character Pipeline

I have been self-teaching and using Houdini as my primary 3D tool for the past 4-ish months and have learned so, so many things. One area that I wanted to dive deeper into was exploring Houdini's newer capability to do the entire character pipeline. With the update to their rigging toolset, you can do pretty much everything without leaving the software. So I went through the whole character development process, with every step being entirely procedural! Very exciting, very cool. This was just a first stab at the whole pipeline within Houdini, so I'm excited to iterate in the future and learn from the mistakes I made along the way.

Process:
  1. Modeling

  2. Rigging

  3. Animation

  4. Simulation
    • Floppy, soft-body character.

  5. Other Computational Stuff

  6. Rendering
    • This one I have not yet gotten to since it will take quite a while. I will be turning in the final render as my final documentation.
  7. Plus plenty of stuff in between

 

 

duq-FinalPhase4

 

I have been working on getting my n-body gravity simulator to function correctly. For each in-play body, it currently calculates gravity based on the distance to every other body, each body's density, and each body's center of mass. From here, I want to add a function that combines multiple bodies if they get too close together, as well as support for different materials.
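As a rough sketch of that per-body force accumulation, written here in Unity-style C# as an assumption (the post doesn't say what the simulator is built with), with mass derived from density and an assumed radius:

```csharp
using UnityEngine;

// Sketch of pairwise gravitational attraction between bodies, with mass derived
// from density and radius. Hypothetical structure; the actual simulator differs.
public class GravityBody : MonoBehaviour
{
    public float density = 1f;
    public float radius = 1f;
    public Vector3 velocity;

    // Mass of a uniform sphere: density * (4/3) * pi * r^3.
    public float Mass => density * (4f / 3f) * Mathf.PI * radius * radius * radius;

    const float G = 0.0001f;   // tuned constant, not the real-world value

    void FixedUpdate()
    {
        Vector3 acceleration = Vector3.zero;

        foreach (GravityBody other in FindObjectsOfType<GravityBody>())
        {
            if (other == this) continue;

            // Direction and distance to the other body's center of mass.
            Vector3 toOther = other.transform.position - transform.position;
            float distSqr = Mathf.Max(toOther.sqrMagnitude, 0.01f); // avoid blow-ups

            // Newtonian gravity: a = G * m_other / r^2, directed toward the other body.
            acceleration += toOther.normalized * (G * other.Mass / distSqr);
        }

        velocity += acceleration * Time.fixedDeltaTime;
        transform.position += velocity * Time.fixedDeltaTime;
    }
}
```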