I’m interested in learning how to write shaders as well as creating an immersive world using Unity and Blender. My three ideas are either one or the other or a combination of both. I’m not sure I want to go with writing shaders in Unity, however, since it might be too difficult to tackle learning HLSL and getting used to Unity at the same time.
For my final project, I am going to use Houdini to make a very short animated scene that involves a character. I hope to use the VR headset as well to be able to quickly sculpt some 3D characters that I can use in the short. I haven’t completely decided yet if I want to bring my Houdini stuff into Unity to make it an interactive project or simply render it in Houdini (maybe both?). I think the most important thing I want to focus on with this project is just having fun. I’m thinking I’ll end up with something kinda goofy, but we’ll see. At the moment, I want to repurpose this idea I came up with earlier in the semester for this class and make something fun out of it. It will involve using a lot of soft-body, springy simulation. I’m not completely sure though if I’m set on this idea.
I’m thinking about making a transformation pipeline that can turn a set of GIS data into a virtual scene, not too different from procedurally generated scenes in games. This will consist of two parts: a transformation that is only done once to generate an intermediate set of data from the raw dataset, and another set of code that can render the intermediate data into an interactive scene. I’ll call them stage 1 and stage 2.
To account for the unpredictable nature of my schedule, I want to plan different steps and possible pathways, which are listed below.
- Using Google Earth’s GIS API, start with the basic terrain information for a piece of land and sort out all the basic stuff (reading data into Python, Python libraries for mesh processing, output format, etc.) and test out some possibilities for the transformation.
- Start to incorporate the RGB info into the elevation info and see what I can do with that combined data.
- Find some LiDAR datasets to see if point clouds can give me more options.
- Just take the code I used for SkyScape (the one with 80k ice particles) and modify it to work with the intermediate format instead of random data.
- Make it look prettier by using different meshes, movements, and post-processing effects that work with the overall concept.
- Using something like oimo.js or enable3D, add physics to the scene to allow for more user interaction and variability in the scene.
- Enhance user interaction by enhancing camera control, polishing the motions, adding extra interaction possibilities, etc.
- (If I have substantial time, or if Unity is just a better option for this) Learn Unity and implement the above using Unity instead.
I’ll start with modifying the SkyScape code to work with a terrain mesh, which would give me a working product quickly, and go from there.
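As a rough sketch of what the stage 1 output could look like, here is a hypothetical Python function that turns a 2D elevation grid into an intermediate mesh (a vertex list plus triangle indices). The function name, grid format, and `cell_size` parameter are all my own placeholder assumptions, not part of any real GIS API:

```python
# Hypothetical "stage 1" sketch: turn a 2D elevation grid into an
# intermediate mesh (vertices + triangle indices) that a renderer can load.

def elevation_to_mesh(elev, cell_size=1.0):
    """elev: list of rows of heights; returns (vertices, triangles)."""
    rows, cols = len(elev), len(elev[0])
    # One vertex per grid sample: (x, y, z) with y as the height.
    vertices = [(c * cell_size, elev[r][c], r * cell_size)
                for r in range(rows) for c in range(cols)]
    # Two triangles per grid cell, indexing into the vertex list.
    triangles = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            triangles.append((i, i + cols, i + 1))
            triangles.append((i + 1, i + cols, i + cols + 1))
    return vertices, triangles

verts, tris = elevation_to_mesh([[0, 1], [2, 3]])
```

An intermediate format like this keeps stage 2 simple: the renderer only ever sees plain vertex and index arrays, regardless of whether the raw data came from elevation rasters or LiDAR point clouds.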
An idea that I have is making an augmented mirror with Snap Lens Studio. These filters are popular on social media, and they remind me of a trend where people would put on a filter that makes them “ugly” and then take it off to boost their self-confidence. My concept is similar in that I want people to recognize their beauty by using my augmented mirror. As everyone has different insecurities about their physical appearance, I think I would either have to make a customizable or a very general augmented mirror, which I will solidify after conducting more research about Snap Lens Studio.
I wanted to go for Golan’s idea of making a one-button game. To maximise the fun of this minimal input, my initial reaction was to do something rhythmic or reaction-based, like Geometry Dash. However, the mechanics there are a little too simple and basic, to the point where I’d mostly have to focus on the looks of the piece instead, which isn’t what I want to do.
The conclusion was to mix this rhythmic aspect with a combo game: inputs could be based on timing with a moving bar or sheet music, where holding or tapping a note makes the difference between a full-damage attack and a weakened one. I think the hardest part would be getting all the sound effects for the notes to play when you tap; I’d have to figure out if there’s an open database of notes to use already. I think timing wouldn’t be too difficult with the use of timers and maybe an overarching metronome that controls all the time in the game.
Some reach goals could be adding a defending phase where you match the notes of the enemy; a fury attack, which could be similar to a solo (though with no way to control the pitch, it would just be a one-button mash); and making it look nice (proper effects on hit, a pulsing metronome in the corner, etc.).
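The timer-plus-metronome idea above can be sketched very compactly: judge each tap against the nearest beat of a global metronome. This is a minimal Python sketch; the tempo and hit-window sizes are placeholder assumptions:

```python
# Judge a one-button tap against the nearest metronome beat.
# Tempo and window sizes below are arbitrary placeholder values.

BEAT_INTERVAL = 0.5   # seconds per beat (120 BPM)
PERFECT = 0.05        # within 50 ms -> full-damage attack
OK = 0.15             # within 150 ms -> weakened attack

def judge_tap(tap_time):
    """Return 'perfect', 'ok', or 'miss' for a tap at tap_time seconds."""
    nearest_beat = round(tap_time / BEAT_INTERVAL) * BEAT_INTERVAL
    error = abs(tap_time - nearest_beat)
    if error <= PERFECT:
        return "perfect"
    if error <= OK:
        return "ok"
    return "miss"
```

Because every judgment is derived from one shared clock, held notes and combo chains can reuse the same function: a hold is just two judgments, one at press and one at release.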
Since I’m so behind on my work, my number one priority for this final project is getting all of that done. This includes:
- Deliverables 1B
- Deliverables 1C
- Deliverables 3C
- Deliverables 4
Once I’ve completed those deliverables, if I have any remaining time, I really want to do the following:
- Create a website for my SoundCloud clock (and update it, because it’s already starting to decay away with people deleting their songs!)
- Refine my Longest Word project, particularly by troll-proofing it and optimizing it for all screen widths and heights
My intention with these two projects is to start building my portfolio, since I’m a rising senior and still don’t have a website to showcase my work! That’s my summer project for myself, so getting a head start on these two things would be helpful.
Yes, I am learning shaders.
Create an interactive, physics-based pixel world with the elements of fire, water, wood, and earth. Fire burns wood, water snuffs out fire and flows, wood rots slowly over time, and earth erodes slowly over time.
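Element rules like these are usually implemented as a cellular automaton over the pixel grid. A tiny Python sketch of just the fire-burns-wood rule, with the element names and burn-out-to-earth behavior as my own placeholder assumptions:

```python
# One cellular-automaton step for a grid of element names.
# Only the fire/wood interaction is sketched; water and rot would be
# additional rules in the same loop.

def step(grid):
    """One update: wood next to fire ignites; fire burns out to earth."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]   # write into a copy, read the old grid
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == "wood":
                # Ignite if any 4-neighbor is fire.
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == "fire":
                        new[r][c] = "fire"
                        break
            elif grid[r][c] == "fire":
                new[r][c] = "earth"   # burnt out after one step
    return new

g = step([["fire", "wood", "wood"]])   # -> [["earth", "fire", "wood"]]
```

Reading from the old grid while writing into a copy keeps each step order-independent, which matters once several element rules interact in one update.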
Keep working on my previous project to add multiple colors and spectra of color across the lines when multiple people are interacting at once. Also possibly add different shards of color in the shapes created by the intersection of lines.
I am planning on using the Looking Glass with Unity for my final project to create a piece that is responsive to the user via the Makey Makey.
For my concept, I have recently found myself interested in the immediate as well as ultra-long term effects of nuclear war. I have already produced a couple of projects around this concept in other classes regarding the far end of this timeline, but for this class I would like to address the immediate effects.
The piece will consist of:
- a foggy, dusty environment with particles and debris flinging around the walls of the looking glass
- explosions in the background which may potentially “shake” the camera view of the box, triggered by the Makey Makey button
- a person or a couple of people who are coughing, running around, and banging on the glass as though they are trying to escape the environment they are in
- their default position will be sitting curled up in the corner looking miserable. Perhaps I will use a webcam to detect when the user approaches so the person can look up at them?
For the final project, I would like to revisit the Telematic project. I did not complete a deliverable for this project due to other class deadlines and exams; however, I wanted to learn how to send things across multiple devices/clients. My main idea for revisiting the telematic project is to create an app that lets friends stay in touch online.
Some ideas I have are daily fortunes sent by friends or “sticky note” drawings that are sent from one user to the other.
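The core of sending things across multiple clients is a relay that forwards each message to everyone else. Here is a minimal in-memory Python sketch of that idea; the `Relay` class and its method names are my own placeholders, and a real telematic app would put this behind WebSockets or a similar transport:

```python
# Minimal in-memory sketch of a message relay: each client has an inbox,
# and sending delivers to every connected client except the sender.

class Relay:
    def __init__(self):
        self.clients = {}  # client name -> inbox (list of (sender, message))

    def connect(self, name):
        self.clients[name] = []

    def send(self, sender, message):
        # Broadcast to all other connected clients.
        for name, inbox in self.clients.items():
            if name != sender:
                inbox.append((sender, message))
```

A daily fortune or sticky-note drawing would just be the `message` payload; the same broadcast logic works whether it carries text or image data.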