Student Area

starry – FinalPhase1

I’m interested in learning how to write shaders as well as creating an immersive world using Unity and Blender. My three ideas each involve one or the other, or a combination of both. I’m not sure if I want to go with writing shaders in Unity, however, since it might be too difficult to tackle both learning HLSL and getting used to Unity.

Inspirational Projects & Useful Technologies

AirPods Evolution

Scan of the Month features CT scans of common products, using 3D rendering + scrolling to highlight different components and stories behind their manufacture. It uses Webflow + Lottie but I think something like this would work well in three.js.

Demo 3 uses our open-source camera control library to scroll through the photogrammetry model.

The NYTimes is naturally a source of inspiration, and they have done some helpful write-ups + demos using three.js and their own (public) libraries for loading 3D tiles and controlling stories.

More cool things made with three.js:

CrispySalmon-FinalPhase2

  • Technical Implementation:
    • Instagram Scraping Tools/Tutorials
      • Apify – Instagram Scraper: I played around with this tool for a bit. Overall, it’s really easy to use. Pros: fast, easy, and I don’t need to do much other than input the Instagram page link. Cons: it scrapes a lot of information, some of which is irrelevant to me, so I have to filter/clean up the output file. And it doesn’t directly output the pictures; instead, it provides the display URL for each image, and the URL expires after a certain amount of time (see the cleanup sketch after this list).
      • GitHub – Instagram scraper:
        • There is a YouTube tutorial showing how to use this.
        • I simply could not get this one to work on my laptop, though it seems to work for others. I need to play around with this a bit more.
      • Python library – BeautifulSoup:
      • Alternative method: Manually screenshot pictures
        • Although this is boring, redundant work, it gives me more control over which pictures I want to use to train my GAN.
    • Pix2Pix: If time allows, I want to look into using a webcam to record hand posture and generate images based on real-time camera input.
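
Since the display URLs expire, a small cleanup script could filter the Apify export down to the fields I need and download each image right away. This is only a rough sketch: the field names (displayUrl, shortCode, caption) and filenames are assumptions based on what my export looked like, not a guaranteed schema.

```python
# Minimal sketch: filter an Apify Instagram Scraper export down to the
# fields I care about and download the images before the time-limited
# display URLs expire. Field names and filenames are assumptions.
import json
import pathlib
import requests

EXPORT_FILE = "apify_export.json"        # placeholder filename
OUT_DIR = pathlib.Path("scraped_images")
OUT_DIR.mkdir(exist_ok=True)

with open(EXPORT_FILE, encoding="utf-8") as f:
    posts = json.load(f)

cleaned = []
for i, post in enumerate(posts):
    url = post.get("displayUrl")
    if not url:
        continue
    # Keep only the fields that matter for training later.
    cleaned.append({"shortCode": post.get("shortCode"), "caption": post.get("caption")})
    # Download right away, since the URL stops working after a while.
    resp = requests.get(url, timeout=30)
    if resp.ok:
        (OUT_DIR / f"{post.get('shortCode', i)}.jpg").write_bytes(resp.content)

with open("cleaned_posts.json", "w", encoding="utf-8") as f:
    json.dump(cleaned, f, indent=2)
```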

Sneeze-FinalPhase2

Because I am finishing up previous projects, this post will be about telematic experiences. I want to make a simple game or communication method for friends online, so I looked at some apps and websites that I enjoy playing or using with my friends.

One game that my friends and I played a lot during quarantine was Drawphone. Basically, each person in the group draws a prompt given by another person in the group, and it continues like the game Telephone. A simple drawing game like this brings a lot of joy and laughter, which is something I want to emulate.

Also, I took some inspiration from upperclassmen in design who created a website/application called Pixel Push. The project allows groups of friends to draw together using their webcams as the paintbrush. They used simple tools like HTML, p5.js, Socket.io, and Google Teachable Machine to achieve the end result. These are technologies we learned about in class, so I didn’t have to do much new research on them.

kong-FinalPhase2

Inspirational Work

Ines Alpha

I find the textures used in Alpha’s work to be inspirational, particularly how the fluid yet superficial texture flows out of the eyes. I also noticed that the fluid sometimes disappears into the space and sometimes flows down along the surface of the face. It seems to differ depending on the angle of the face, and I wonder how Alpha achieves such an effect.

Aaron Jablonski

I found this project captivating as it had similar aspects to my initial ideas in that it places patches of colors on the face (it reminded me of how paint acts in water). Further, this filter was created using Snap Lens Studio, which led me to recognize the significant capabilities of this tool.

Snap Lens Studio

To implement my ideas, I’m planning on utilizing Snap Lens Studio, which I was able to find plenty of YouTube tutorials for. Though I will have to keep referring back to them throughout my process, the tutorials taught me the basics of Snap Lens Studio, how to alter the background, and how to place face masks. I also found a tutorial on the material editor that Aaron Jablonski referred to in his post. I believe this feature will allow me to explore and incorporate various textures into my work.

bookooBread – 08FinalPhase1

For my final project, I am going to use Houdini to make a very short animated scene that involves a character. I hope to use the VR headset as well to quickly sculpt some 3D characters that I can use in the short. I haven’t completely decided yet if I want to bring my Houdini work into Unity to make it an interactive project or simply render it in Houdini (maybe both?). I think the most important thing I want to focus on with this project is just having fun. I’m thinking I’ll end up with something kinda goofy, but we’ll see. At the moment, I want to repurpose an idea I came up with earlier in the semester for this class and make something fun out of it. It will involve a lot of soft-body, springy simulation. I’m not completely sure, though, if I’m set on this idea.

bookooBread – LookingOutwards04

Nine Eyes of Google Street View by Jon Rafman

This work is an archival and conceptual project started in 2008 by artist Jon Rafman. These images are all screenshots from Google Street View’s image database, back when it was a new initiative without the kinds of regulations that it has nowadays. At that time it was (and still somewhat is) “A massive, undiscerning machine for image-making whose purpose [was] to simply capture everything, Street View takes photographs without apparent concern for ethics or aesthetics, from a supposedly neutral point of view.” The work meditates on the implications of this kind of automation of image-making, and at this kind of scale.

I am so glad I came across this project. Going through all of these images really struck me; I just could not stop looking. As a collection for photography’s sake, they work together to poetically capture subtle moments of life, whether ugly, mundane, funny, or really beautiful. But as you look through these photos, it feels really eerie too. It feels like you shouldn’t be looking at a lot of these (I mean… because we shouldn’t), and yet here they are, moments of life captured with no regard for their subjects. Some of these photos made me audibly gasp, like the one of an inmate running, or one showcasing what looks like a kidnapping. And then in the same collection you have a guy mooning the camera. Or a set of white laundry billowing in the sun. It’s just strange and so eerie. I really, really love it though.

I think this also holds up so well in our current times with the development of computer vision and artificial intelligence. The technology is certainly developing and yes, the ethical side is too – but definitely not enough considering many of the issues posed in Rafman’s work are still relevant today.

Here is the image collection:

https://conifer.rhizome.org/despens/9-eyes/20180103101851/http://9-eyes.com/

bookooBread – Telematic

I’ll keep this semi-short and just add a few things Jean didn’t mention in her post.

The first idea, as discussed with Golan, was a space for release via limitless destruction on the internet. It would’ve been similar to a rage room, except because it isn’t real, you could destroy things you wouldn’t normally be able to. It’s also extremely accessible because it’s a public website. However, this ended up not being feasible within the constraints of a telematic environment, so we decided to go with another idea: a more subtle, calming ripple environment. We did actually plan to add sound, but just didn’t get to it. The goal was that if just one person clicked, it would sound like a soft water droplet, and as more people joined and started clicking, it would sound like a quiet rainfall. Thus, it would have a few different peaceful states based on collective effort/how many people are using the site at that moment. We also just thought there was something poetic about being able to see the effects of some anonymous person’s actions in a visible ripple. In addition, one of our preliminary ideas included a background, but it diluted the focus and made the piece more about the cute environment, which is not what we wanted.

Final product:

https://grizzled-south-shoe.glitch.me

p.s. thank you jean for presenting for us 😅

bookooBread – Readings on Faces

From Joy Buolamwini’s talk: “1 in 2 adults in the U.S. have their face in facial recognition networks”… a terrifying fact because, as she says, these networks are very often wrong. Misidentifying someone in the context of policing and the justice system takes this fact to an entirely new level of terrifying. There are many people out there who, because they do not know how these systems work (or do, but know that others don’t), take these systems to be foolproof and factual, using these “facts” to leverage their goals.

In Kyle McDonald’s Appropriating New Technologies: Face as Interface, he describes how “Without any effort, we maintain a massively multidimensional model that can recognize minor variations in shape and color,” going further to reference a theory that says “color vision evolved in apes to help us empathize.” I found this super interesting and read the article it linked to. The paper, published by a team of California Institute of Technology researchers, “[suggested] that we primates evolved our particular brand of color vision so that we could subtly discriminate slight changes in skin tone due to blushing and blanching.” This is just so funny to me; we are such emotional and empathetic creatures.

hunan-FinalPhase1

I’m thinking about making a transformation pipeline that can turn a set of GIS data into a virtual scene, not too different from procedurally generated scenes in games. This will consist of two parts: a transformation that is done only once to generate an intermediate set of data from the raw dataset, and another set of code that can render the intermediate data into an interactive scene. I’ll call them stage 1 and stage 2.

To account for the unpredictable nature of my schedule, I want to plan out different steps and possible pathways, which are listed below.

Stage 1:

  1. Using Google Earth’s GIS API, start with the basic terrain information for a piece of land, sort out all the basic stuff (reading data into Python, Python libraries for mesh processing, output format, etc.), and test out some possibilities for the transformation (see the sketch after this list).
  2. Start to incorporate the RGB info into the elevation info. See what I can do with those new data.
  3. Find some LiDAR datasets to see if point clouds can give me more options.
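
As a rough sketch of step 1 (not a settled implementation), here is what reading terrain data into Python and writing an intermediate format could look like, assuming the elevation arrives as a GeoTIFF heightmap and using rasterio; the filenames and the JSON intermediate format are placeholders.

```python
# Rough sketch of stage 1: read a terrain heightmap and write a simple
# intermediate format (a grid of x, y, z vertices) that stage 2 can render.
# Assumes the elevation data was exported as a GeoTIFF ("terrain.tif" is a
# placeholder name) and that rasterio is installed.
import json
import rasterio

with rasterio.open("terrain.tif") as src:
    elevation = src.read(1)          # band 1 = elevation values
    transform = src.transform        # maps pixel (col, row) -> world (x, y)

step = 8                             # downsample so the output stays light
rows, cols = elevation.shape
grid_rows = range(0, rows, step)
grid_cols = range(0, cols, step)

vertices = []
for r in grid_rows:
    for c in grid_cols:
        x, y = transform * (c, r)    # world-space position of this sample
        vertices.append([x, y, float(elevation[r, c])])

# Intermediate format: a JSON blob of vertices plus the grid dimensions,
# so the stage 2 renderer can rebuild faces (or just scatter particles).
with open("terrain_intermediate.json", "w") as f:
    json.dump({"grid": [len(grid_rows), len(grid_cols)], "vertices": vertices}, f)
```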

Stage 2:

  1. Just take the code I used for SkyScape (the one with 80k ice particles) and modify it to work with the intermediate format instead of random data.
  2. Make it look prettier by using different meshes, movements, and post-processing effects that work with the overall concept.
  3. Using something like oimo.js or enable3d, add physics to the scene to allow for more user interaction and variability in the scene.
  4. Enhance user interaction by enhancing camera control, polishing the motions, adding extra interaction possibilities, etc.
  5. (If I have substantial time, or if Unity is just a better option for this) Learn Unity and implement the above in Unity instead.

I’ll start with modifying the SkyScape code to work with a terrain mesh, which would give me a working product quickly, and go from there.