Wormtilda – FinalPhase4

My project was mainly a learning experience aimed at the ins and outs of Unity. Because of this, a lot of my time was spent doing tutorials and getting used to the new interface and programming language (C#). I chose to import objects from a Blender scene to make my job a bit easier, but it didn’t actually end up helping that much, because I had to spend a lot of time baking the objects’ textures and remapping their UVs. There were two main complex problems I sought to solve in my project. The first was to create a player character that moves coherently in relation to a third-person camera. The second was to create a shader system so that only the parts of objects inside a transparent cube are rendered. The latter proved especially hard to figure out. I was able to implement both of these things, which I was pretty satisfied with.
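
For reference, camera-relative movement of this kind usually boils down to projecting the camera’s forward and right axes onto the ground plane. Here is a minimal Unity C# sketch of that idea; the speed value, input axis names, and component setup are my assumptions rather than the project’s actual code:

    using UnityEngine;

    // Hedged sketch of camera-relative third-person movement.
    [RequireComponent(typeof(CharacterController))]
    public class ThirdPersonMovement : MonoBehaviour
    {
        public Transform cameraTransform; // the third-person camera
        public float speed = 5f;          // assumed movement speed

        CharacterController controller;

        void Start() { controller = GetComponent<CharacterController>(); }

        void Update()
        {
            // Raw WASD / stick input.
            Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));

            // Flatten the camera's axes onto the ground plane so that
            // "forward" always means "away from the camera".
            Vector3 forward = Vector3.ProjectOnPlane(cameraTransform.forward, Vector3.up).normalized;
            Vector3 right = Vector3.ProjectOnPlane(cameraTransform.right, Vector3.up).normalized;

            Vector3 move = forward * input.z + right * input.x;
            controller.SimpleMove(move * speed); // SimpleMove applies gravity on its own

            if (move.sqrMagnitude > 0.001f)
                transform.rotation = Quaternion.LookRotation(move); // face the travel direction
        }
    }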

Here is a video showing more clearly what the cube-clipping shader system can do.
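
The shader itself isn’t shown in the post, but the C# half of a setup like this is small: a script on the transparent cube passes its world-space bounds to the clipped objects’ material every frame, and the fragment shader discards anything outside them. A minimal sketch, assuming hypothetical _BoxMin/_BoxMax shader properties:

    using UnityEngine;

    // Hedged sketch: feeds this cube's world-space bounds to a material
    // whose fragment shader clips everything outside them.
    public class ClipVolume : MonoBehaviour
    {
        public Material clippedMaterial; // material using the clipping shader

        void Update()
        {
            Bounds b = GetComponent<Renderer>().bounds;
            clippedMaterial.SetVector("_BoxMin", b.min); // assumed property names
            clippedMaterial.SetVector("_BoxMax", b.max);
            // In the shader: if any component of the fragment's world position
            // falls outside [_BoxMin, _BoxMax], call discard (or clip(-1)).
        }
    }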

Where to go from here:

I had a lot of ideas for where I could take my project further, and I plan on implementing at least some of them before the submission. Here are some of them:

-make the camera moveable with mouse control (see the sketch after this list)

-make the player be able to ride the train

-create an above-ground environment the player can travel to

-implement better lighting

-improve materials (the trains especially)

-add other characters into the scene, perhaps with some simple crowd simulation that allows the characters to get on and off the train

-implement simple character animations
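
For the mouse-controlled camera idea above, a common starting point is a simple orbit rig. A minimal sketch, with the sensitivity, distance, pitch limits, and axis names all assumed:

    using UnityEngine;

    // Hedged sketch of a mouse-driven orbit camera around a target.
    public class OrbitCamera : MonoBehaviour
    {
        public Transform target;      // e.g. the player
        public float distance = 6f;   // assumed orbit radius
        public float sensitivity = 3f;

        float yaw, pitch;

        void LateUpdate()
        {
            yaw += Input.GetAxis("Mouse X") * sensitivity;
            pitch -= Input.GetAxis("Mouse Y") * sensitivity;
            pitch = Mathf.Clamp(pitch, -30f, 70f); // keep the camera in a sane range

            // Position the camera on a sphere around the target and look at it.
            Quaternion rot = Quaternion.Euler(pitch, yaw, 0f);
            transform.position = target.position - rot * Vector3.forward * distance;
            transform.LookAt(target);
        }
    }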

bookooBread – 08FinalPhase4

Houdini Explorations in 3D Character Pipeline

I have been self-teaching and using Houdini as my primary 3D tool for the past four-ish months and have learned so, so many things. One area I wanted to dive deeper into was Houdini’s newer capability to handle the entire character pipeline. With the rigging toolset’s update, you can do pretty much everything without leaving the software. So I went through the whole character development process, with every step being entirely procedural! Very exciting, very cool. This was just a first stab at this whole pipeline within Houdini, so I’m excited to iterate in the future and learn from the mistakes I made along the way.

Process:
  1. Modeling

  2. Rigging

  3. Animation

  4. Simulation
    • Floppy, soft-body character.

  5. Other Computational Stuff

  6. Rendering
    • This one I have not yet gotten to since it will take quite a while. I will be turning in the final render as my final documentation.
  7. Plus plenty of stuff in between


duq-FinalPhase4


I have been working on getting my n-body gravity simulator to function correctly. For each in-play body, it currently calculates gravity based on the distance to every other body, each body’s density, and each body’s center of mass. From here, I want to add a function that combines multiple bodies when they get too close together, as well as support for different materials.
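
A minimal sketch of that update loop, including the planned merge step, might look like the following. It assumes Unity’s Vector3/Mathf (plain C# vectors would work the same way), and the gravitational constant, softening term, and merge rule are all assumptions:

    using System.Collections.Generic;
    using UnityEngine;

    // Hedged sketch of a pairwise n-body step with merging.
    public class Body
    {
        public Vector3 position, velocity;
        public float density = 1f, radius = 1f;
        public float Volume => (4f / 3f) * Mathf.PI * radius * radius * radius;
        public float Mass => density * Volume; // mass from density, as described above
    }

    public static class NBody
    {
        const float G = 6.674e-11f; // scale to taste for a game-sized scene

        public static void Step(List<Body> bodies, float dt)
        {
            // Accumulate gravitational acceleration from every other body.
            foreach (var a in bodies)
            {
                Vector3 accel = Vector3.zero;
                foreach (var b in bodies)
                {
                    if (a == b) continue;
                    Vector3 delta = b.position - a.position; // toward b's center of mass
                    float distSqr = Mathf.Max(delta.sqrMagnitude, 0.01f); // softening to avoid blow-ups
                    accel += delta.normalized * (G * b.Mass / distSqr);
                }
                a.velocity += accel * dt;
            }
            foreach (var a in bodies) a.position += a.velocity * dt;

            // Planned merge step: combine bodies that get too close,
            // conserving mass and momentum.
            for (int i = bodies.Count - 1; i >= 0; i--)
                for (int j = i - 1; j >= 0; j--)
                {
                    Body a = bodies[i], b = bodies[j];
                    if (Vector3.Distance(a.position, b.position) >= a.radius + b.radius) continue;
                    float mass = a.Mass + b.Mass;
                    float volume = a.Volume + b.Volume;
                    b.velocity = (a.velocity * a.Mass + b.velocity * b.Mass) / mass;
                    b.position = (a.position * a.Mass + b.position * b.Mass) / mass;
                    b.radius = Mathf.Pow(volume * 3f / (4f * Mathf.PI), 1f / 3f);
                    b.density = mass / volume; // merged "material" averages out
                    bodies.RemoveAt(i);
                    break; // body i is gone; continue with the next i
                }
        }
    }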

spingbing – Presentation

My goals for this past month were to become more comfortable with Unity, practice rigging, and mainly to create a project encapsulating both of these tasks by learning how to use the Looking Glass.

After seeing some very nontraditional interactive/immersive animations made in Unity, I realized I could kill two birds with one stone by learning the Unity interface while also expanding my use and knowledge of the medium. For the first couple of weeks, I dove into tutorials on Unity and the Looking Glass, plus general animation tutorials in Blender. I usually don’t let myself actually learn when I make projects, which results in a bunch of really janky artwork full of workarounds and just plain incorrect usage of the software. This prevents me from really learning, so given the time allotted specifically for tutorial watching, I finally gave myself the opportunity to learn.

This is a continuation of a project from last semester: https://courses.ideate.cmu.edu/60-428/f2021/author/spingbing/. I have finished the animation aspect of the project and now need to take the information I got from the tutorials I watched and apply it to actually finishing the project.

To Do:

  1. Script to trigger animation on button press (see the sketch after this list)
  2. Set up simple box environment
  3. Fine-tuning configuration with Looking Glass package
  4. Document display
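
For item 1, the script can be tiny: listen for a key (or Looking Glass button) press and fire an Animator trigger. A minimal sketch, where the key, Animator reference, and trigger name are all assumptions:

    using UnityEngine;

    // Hedged sketch: fires an Animator trigger when a button is pressed.
    public class AnimationTrigger : MonoBehaviour
    {
        [SerializeField] Animator animator;            // assigned in the Inspector
        [SerializeField] string triggerName = "Play";  // assumed trigger parameter

        void Update()
        {
            if (Input.GetKeyDown(KeyCode.Space))       // swap for the Looking Glass button input
                animator.SetTrigger(triggerName);
        }
    }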


CrispySalmon-FinalPhase04

Training a GAN with runwayML

subwayHands

    • Process:
      1. Instagram image scrape using selenium and bs4.
        • This part took me the most time. Instagram makes it really difficult to scrape; I had to make several dummy test accounts to avoid having my own account banned by Instagram.
        • Useful Tutorial: https://www.youtube.com/watch?v=FH3uXEvvaTg&t=869s&ab_channel=SessionWithSumit
      2. Image cleanup: crop (square) and resize (640×640 px)
        • After I scraped this ig account’s entire page, I manually cleaned up the dataset to get rid of any images with a face, more than two hands, or no hands at all. The remaining 1263 pictures were cropped and resized so runwayML could process them better (see the sketch at the end of this section).
      3. Feed dataset to runwayML and let it do its magic.
    • Result: Link to google folder Link to runwayML Model
      • When looking at the generated images zoomed out, they definitely share a visual proximity with the ig:subwayhands images.

generated

original

However, looking at the enlarged generated images, they are clearly not quite at the level of actual hands. You can only vaguely make out where the hands are and what their posture is.
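
Going back to step 2: the author did the cleanup in Python, but the crop-and-resize logic itself is language-agnostic. As a hedged illustration of a center square crop followed by a resize to 640×640 (written here in C# with System.Drawing, not the actual pipeline):

    using System;
    using System.Drawing; // e.g. the System.Drawing.Common package

    // Hedged sketch: center-crop an image to a square, then resize it.
    static class Preprocess
    {
        public static Bitmap CropAndResize(Bitmap src, int size = 640)
        {
            // Largest centered square that fits in the source image.
            int side = Math.Min(src.Width, src.Height);
            var srcRect = new Rectangle((src.Width - side) / 2, (src.Height - side) / 2, side, side);

            // Draw that square into a size×size destination bitmap.
            var dst = new Bitmap(size, size);
            using (var g = Graphics.FromImage(dst))
                g.DrawImage(src, new Rectangle(0, 0, size, size), srcRect, GraphicsUnit.Pixel);
            return dst;
        }
    }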

My Hair

    • Process:
      1. Scan my hair:
        • I had a notebook full of my hair that I collected throughout my freshman year. I scanned the strands into unified (1180×1180 px) squares, and in the end I had a dataset of 123 images of my own lost hair.
      2. Feed dataset to runwayML and let it do its magic.
    • Result: Link to google folder Link to runwayML model
      • Despite the small size of this dataset, the generated images exceeded my expectations and seem to more successfully resemble an actual picture of a clump of shower-drain hair. This is probably because the hair dataset is much more regularized than the subway hands dataset. The imagery is also much simpler.

qazxsw-FinalPhase4

I decided to learn Unity from the tutorial by Emmanuel Henri on LinkedIn Learning. My original plan was to finish the tutorial and then create a small project of my own in Unity, but my time was limited, so I only finished the tutorial. Below is a screenshot of what I have done so far.

This gif is part of the kitchen walkthrough I created while following the tutorial. I know the lighting is currently very bad.

Metronome – Dr. Mario

A rhythmic fighting game where you have to copy your opponent’s musical piece to reflect the damage back at them!

The closer you are to matching the notes, the less damage you take and the more damage you deal.
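
In other words, damage scales with timing error. Here is a minimal sketch of one way to express that rule; the window sizes and the linear falloff are assumptions, not the game’s actual numbers:

    using UnityEngine;

    // Hedged sketch of the timing-to-damage rule described above.
    public static class NoteDamage
    {
        const float PerfectWindow = 0.05f; // seconds of error for a perfect hit
        const float MissWindow = 0.30f;    // anything worse counts as a full miss

        // Fraction of the note's damage the player takes (0..1);
        // the rest is reflected back at the opponent.
        public static float DamageTakenFraction(float hitTime, float noteTime)
        {
            float offset = Mathf.Abs(hitTime - noteTime);
            if (offset <= PerfectWindow) return 0f; // perfect: reflect everything
            if (offset >= MissWindow) return 1f;    // miss: take it all
            // Linear falloff between the two windows.
            return (offset - PerfectWindow) / (MissWindow - PerfectWindow);
        }
    }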

The graphics are really bad right now, and my goal is to make the game much more visually appealing; this is just a proof of concept. I will also be adding more songs, multi-verse fights, menu systems, and some other quality-of-life changes for the final submission on May 11th.


Devlog 1: May 11th Update

I have made quite a few quality-of-life changes already:

  • Fixed the damage numbers so they show after each note is hit
  • Damage pop-ups that show how much damage was dealt and to whom
  • Buttons that show which row is which and whether you’re pressing down on it
  • Notes now spawn in from the creation lines instead of popping up
  • Fixed scaling issues with the UI
  • Fixed a turn-based bug that caused nothing to happen if you pressed the button too early; now a fail SFX plays
  • Notes no longer trail off the screen if you don’t hit them; instead they do full damage


Future changes:

  • Pop-ups that tell you how well you hit the note (Good, Perfect, Bad)
  • Directional variability in where the damage numbers fall
  • Full battle: multiple music sheets that loop until you or the enemy dies
  • Start screen
  • End screen

Bonus Goals:

  • Multiple levels
  • Art instead of default sprite
  • Music sheet from music input


Here is my final project for Metronome: https://lexloug.itch.io/metronome

You can access it using the link and the password 60212; if that doesn’t work, please message me. I also only built it for Windows, so if someone wants a Mac version, just let me know.

This is a fully completed demo that shows how one combat encounter feels. Though lacking in the artistic aspects, I believe it shows off my idea pretty well. Let me know if you find any bugs.


starry – FinalPhase04

I was trying to learn GLSL, but due to my lack of motivation I ended up making a project in Blender instead. A lot of what I learned in GLSL actually helped in making this scene, since Blender’s shader nodes are very similar to visually programming GLSL. I tried to do most of this procedurally using the node system (besides the grass); the flowers are made using an SDF I found in the Book of Shaders. Beyond the GLSL carry-over, I tried to explore more of Blender’s features, such as particle systems, procedural animation, and the compositor. As this was kind of rushed, I wish I had put more detail into the scene and used less bloom, but I like how it turned out.
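
For anyone curious, the flower shapes in the Book of Shaders come from a polar shaping function: the petal silhouette’s radius varies with the angle around the center. A hedged C# transcription of that idea (the petal count and shaping are assumptions; in Blender this maps onto math nodes rather than code):

    using System;

    // Hedged sketch of a polar "flower" distance field, Book of Shaders style.
    // Negative values are inside the flower, positive values outside.
    static class FlowerSdf
    {
        public static float Evaluate(float x, float y, int petals = 5)
        {
            float r = MathF.Sqrt(x * x + y * y);             // distance from the center
            float a = MathF.Atan2(y, x);                     // polar angle
            float petal = MathF.Abs(MathF.Cos(a * petals));  // silhouette radius at this angle
            return r - petal;                                // distance-like field to the silhouette
        }
    }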

https://youtube.com/shorts/56q45MYR7E8