I’m thinking about building a transformation pipeline that can turn a set of GIS data into a virtual scene, not unlike the procedurally generated scenes in games. It will consist of two parts: a transformation that runs once to generate an intermediate dataset from the raw data, and a renderer that turns the intermediate data into an interactive scene. I’ll call them stage 1 and stage 2.

To account for the unpredictable nature of my schedule, I want to plan out the individual steps and possible pathways, listed below.

Stage 1:

  1. Using Google Earth’s GIS API, start with the basic terrain information for a piece of land and sort out the fundamentals (reading the data into Python, Python libraries for mesh processing, the output format, etc.), then test out some possibilities for the transformation.
  2. Start incorporating the RGB information into the elevation data and see what I can do with the combined data.
  3. Find some LiDAR datasets to see whether point clouds give me more options.
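To make step 1’s “basic stuff” concrete, here is a minimal sketch of what the core transformation might look like, assuming the raw data has already been exported as two aligned grids (an elevation raster and an RGB image). The function name, the `.npz` intermediate format, and the synthetic stand-in data are all my assumptions for now; real data would probably arrive as a GeoTIFF and be read with a library like rasterio.

```python
import numpy as np

def grids_to_intermediate(elevation, rgb, cell_size=1.0):
    """Turn an (H, W) elevation grid and an aligned (H, W, 3) RGB grid
    into flat vertex-position and vertex-color arrays, about the
    simplest intermediate format stage 2 could consume."""
    h, w = elevation.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    positions = np.column_stack([
        (xs * cell_size).ravel(),               # x: east-west
        elevation.astype(np.float32).ravel(),   # y: height
        (ys * cell_size).ravel(),               # z: north-south
    ])
    colors = rgb.reshape(-1, 3).astype(np.float32) / 255.0  # normalize to [0, 1]
    return positions, colors

# Synthetic stand-in data: a 64x64 bumpy terrain with a fake color map.
elev = np.sin(np.linspace(0, 4, 64))[:, None] * np.cos(np.linspace(0, 4, 64))[None, :] * 10
rgb = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
pos, col = grids_to_intermediate(elev, rgb, cell_size=2.0)
np.savez("terrain_intermediate.npz", positions=pos, colors=col)  # stage 2 input
```

Even this toy version forces the useful decisions early: the axis convention, the units, and whether color lives per-vertex or as a separate texture.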

Stage 2:

  1. Take the code I used for SkyScape (the one with 80k ice particles) and modify it to work with the intermediate format instead of random data.
  2. Make it prettier with different meshes, movements, and post-processing effects that fit the overall concept.
  3. Add physics with something like oimo.js or enable3d to allow for more user interaction and more variability in the scene.
  4. Improve user interaction: refine the camera controls, polish the motions, add extra interaction possibilities, etc.
  5. (If I have substantial time, or if Unity is simply a better fit for this) Learn Unity and implement the above in Unity instead.

I’ll start by modifying the SkyScape code to work with a terrain mesh, which should give me a working product quickly, and go from there.
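Since the SkyScape code draws a fixed number of particles (the 80k mentioned above), the bridge between the two stages could be as simple as subsampling the intermediate vertex arrays down to that budget so the particle system ingests terrain points instead of random data. This is a hedged sketch; the function name and budget-handling are my assumptions, and the real loading would happen on the JavaScript side.

```python
import numpy as np

def sample_particles(positions, colors, budget=80_000, seed=0):
    """Randomly pick at most `budget` points from the intermediate
    arrays, keeping positions and colors aligned by index."""
    n = len(positions)
    if n <= budget:
        return positions, colors
    idx = np.random.default_rng(seed).choice(n, size=budget, replace=False)
    return positions[idx], colors[idx]

# Oversized synthetic data standing in for a real terrain export.
pos = np.random.default_rng(1).random((120_000, 3)).astype(np.float32)
col = np.random.default_rng(2).random((120_000, 3)).astype(np.float32)
p, c = sample_particles(pos, col, budget=80_000)
```

Random sampling is the crudest option; a grid-stride or density-aware pick would preserve the terrain silhouette better, but this is enough to get a first working scene.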