merlerker-FinalPhase3

I am learning how to build scrollable 3D narratives using three.js and GSAP. I read a few demo write-ups from the NYT about tools like Google Model Viewer and Sketchfab, then dove into three.js basics and linking them up with GSAP to animate on scroll. Here I have a simple cube rotating on scroll once its div becomes visible.
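A minimal sketch of that setup, assuming a #cube-section div in the page; the element id and tween values are placeholders rather than my exact code:

    import * as THREE from 'three';
    import { gsap } from 'gsap';
    import { ScrollTrigger } from 'gsap/ScrollTrigger';

    gsap.registerPlugin(ScrollTrigger);

    // Basic scene: one cube, a camera, and a renderer attached to the page.
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 100);
    camera.position.z = 5;

    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(innerWidth, innerHeight);
    document.querySelector('#cube-section').appendChild(renderer.domElement);

    const cube = new THREE.Mesh(
      new THREE.BoxGeometry(1, 1, 1),
      new THREE.MeshNormalMaterial()
    );
    scene.add(cube);

    // Tie the cube's rotation to scroll: the tween starts when #cube-section
    // enters the viewport and scrubs as the user scrolls through it.
    gsap.to(cube.rotation, {
      y: Math.PI * 2,
      scrollTrigger: {
        trigger: '#cube-section',
        start: 'top bottom', // top of the div hits the bottom of the viewport
        end: 'bottom top',
        scrub: true,
      },
    });

    renderer.setAnimationLoop(() => renderer.render(scene, camera));

With scrub: true the rotation tracks the scroll position directly rather than playing once when the trigger fires.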

 

Inspirational Projects & Useful Technologies

AirPods Evolution

Scan of the Month features CT scans of common products, using 3D rendering + scrolling to highlight different components and stories behind their manufacture. It uses Webflow + Lottie but I think something like this would work well in three.js.

Demo 3 from the NYT R&D write-ups (linked below) uses their open-source camera control library to scroll through a photogrammetry model.

The NYTimes is naturally a source of inspiration, and they have done some helpful write-ups + demos using three.js and their own (public) libraries for loading 3D tiles and controlling stories.

More cool things made with three.js:

merlerker-FinalPhase1

My goal is to get a working understanding of three.js/WebGL and shaders. I want to learn some basics (geometries, textures/materials, lighting, camera) – just enough to understand what’s involved and how to control it; I know I could go really deep into any of those “basics.” Then I want to learn how to make scroll-based animations and use shaders.

I still need to come up with an actual project to apply all this to – maybe since I’m interested in product repair I could find an exploded model of a Fairphone/iPhone and have it assemble/rotate as you scroll.

  • geometry: three.js Geometry classes, how to load models (see the sketch after this list)
  • materials: three.js Material classes, how to load shader materials
  • lights
  • cameras & controls: three.js Camera classes, controls
  • making my own shaders: shaderpark, GLSL
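To keep those pieces straight, here is a rough sketch of a scene that touches each of them: a loaded model, lights, a camera with orbit controls, and a toy GLSL shader material. The model path, the shader, and the import paths (which vary by three.js version) are placeholders, not from a real project:

    import * as THREE from 'three';
    import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    const scene = new THREE.Scene();

    // camera + controls
    const camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 100);
    camera.position.set(0, 1, 4);
    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(innerWidth, innerHeight);
    document.body.appendChild(renderer.domElement);
    const controls = new OrbitControls(camera, renderer.domElement);

    // lights
    scene.add(new THREE.AmbientLight(0xffffff, 0.4));
    const sun = new THREE.DirectionalLight(0xffffff, 1);
    sun.position.set(3, 5, 2);
    scene.add(sun);

    // geometry via a loaded model (placeholder path)
    new GLTFLoader().load('models/phone.glb', (gltf) => scene.add(gltf.scene));

    // a hand-written shader material on a simple geometry
    const shaderMat = new THREE.ShaderMaterial({
      uniforms: { uTime: { value: 0 } },
      vertexShader: `
        varying vec2 vUv;
        void main() {
          vUv = uv;
          gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }`,
      fragmentShader: `
        uniform float uTime;
        varying vec2 vUv;
        void main() {
          gl_FragColor = vec4(vUv, 0.5 + 0.5 * sin(uTime), 1.0);
        }`,
    });
    scene.add(new THREE.Mesh(new THREE.SphereGeometry(0.5, 32, 16), shaderMat));

    renderer.setAnimationLoop((time) => {
      shaderMat.uniforms.uTime.value = time / 1000;
      controls.update();
      renderer.render(scene, camera);
    });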

three.js resources

  • https://threejs.org/manual/
  • https://threejs-journey.com/

three.js animation resources

  • https://tympanus.net/codrops/2022/01/05/crafting-scroll-based-animations-in-three-js/
  • https://sbcode.net/threejs/animate-on-scroll/

demos & guides

  • https://rd.nytimes.com/research/3d-web-technology
  • https://rd.nytimes.com/research/spatial-journalism

Shader resources

  • https://shaderpark.com/
  • https://www.shadertoy.com/
  • https://shaderbooth.com/

merlerker-LookingOutwards04

Starry Night, Alex Galloway, Mark Tribe, and Martin Wattenberg (1999)

https://anthology.rhizome.org/starrynight

Starry Night is a visualization and navigation tool for Rhizome’s email discussion archive, where emails are represented as stars whose brightness corresponds to how often they’re accessed. Stars are connected into constellations based on shared keywords.

Representing communication as a graph is not a hugely unexpected idea, but I am such a sucker for diagramming intimate thoughts and interpersonal exchanges. Connecting different artists’ thoughts adds a communal wholeness to the archive, and representing them as constellations encourages reflection on varying distances in space and time. It reminds me of commonplace books and a little experiment I did some years ago to explore and find connections among the notes I write to myself.

merlerker-AugmentedBody

My project is quite simple and silly: a “nose isolator” that finds your nose and masks everything else. Bodies are strange, and I appreciate projects that acknowledge that universally-felt, awkward but intimate relationship we have with our bodies, like Dominic Wilcox’s “Tummy Rumbling Amplification Device” [link] and Daniel Eatock’s “Draw Your Nose” [link]. Isolating the nose has the effect of forcing you to confront a body part that you’ve probably felt self-conscious about at some point in your life, and allowing it to become an endearing little creature in itself. Though it’s a simple project and treatment, I feel it’s successful in creating a delightful and different relationship with your body. I’m proud of the conceptual bang-for-buck: it’s an important exercise for me to let go of perfection and overambitious projects that never end.

Originally I was trying to apply the nose isolator to scenes from films, but I got frustrated trying to get handsfree.js to run on a <video> source and to do the correct mirroring and translating so everything lined up. Instead, I created a performance using my own nose that leans into the nose-as-creature idea.
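For reference, here is roughly what the masking and mirroring logic looks like, assuming some face tracker has already produced a normalized nose position (handsfree.js exposes MediaPipe-style landmarks; the tracker call itself is left out):

    // canvas overlaid on the webcam feed
    const canvas = document.querySelector('#output');
    const ctx = canvas.getContext('2d');
    const video = document.querySelector('#webcam');

    // nose = { x, y } in normalized (0..1), un-mirrored video coordinates
    function drawFrame(nose) {
      const { width: w, height: h } = canvas;

      // draw the video mirrored, so it behaves like a mirror for the performer
      ctx.save();
      ctx.translate(w, 0);
      ctx.scale(-1, 1);
      ctx.drawImage(video, 0, 0, w, h);
      ctx.restore();

      // the tracker reports landmarks in un-mirrored video space,
      // so flip x to match the mirrored drawing above
      const noseX = (1 - nose.x) * w;
      const noseY = nose.y * h;

      // mask everything except a circle around the nose: fill the whole
      // canvas and punch a hole using the even-odd fill rule
      ctx.fillStyle = '#fff';
      ctx.beginPath();
      ctx.rect(0, 0, w, h);
      ctx.arc(noseX, noseY, 60, 0, Math.PI * 2);
      ctx.fill('evenodd');
    }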

merlerker-facereadings

Happy Things by Kyle McDonald & Theo Watson (link)

I dug a bit into this project and I love the concept – it posts a screenshot of your computer whenever you smile (with the implicit assumption that something on your screen made you smile). Sadly, most of the screenshots showcased are of people testing out the app 🙁 I would want to “live” with it for a while.

Useful tip: Aligning eye position to do face averaging.
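A sketch of what that alignment might look like on a canvas: map each face’s detected eye positions onto fixed canonical eye positions with a translate/rotate/scale, then blend the layers. The canonical coordinates and the running-average alpha trick are my own assumptions, not part of the original tip:

    // Canonical eye positions every face gets mapped onto (arbitrary choice).
    const LEFT = { x: 120, y: 160 };
    const RIGHT = { x: 200, y: 160 };

    // Draw `img` so its detected eyes land on LEFT / RIGHT.
    // `i` is the face's index; alpha = 1 / (i + 1) builds a running average.
    function drawAligned(ctx, img, leftEye, rightEye, i) {
      const dx = rightEye.x - leftEye.x;
      const dy = rightEye.y - leftEye.y;
      const scale = (RIGHT.x - LEFT.x) / Math.hypot(dx, dy); // normalize inter-eye distance
      const angle = Math.atan2(dy, dx);                      // head roll to undo

      ctx.save();
      ctx.globalAlpha = 1 / (i + 1);
      ctx.translate(LEFT.x, LEFT.y);         // canonical left eye becomes the origin
      ctx.rotate(-angle);
      ctx.scale(scale, scale);
      ctx.translate(-leftEye.x, -leftEye.y); // detected left eye moves to that origin
      ctx.drawImage(img, 0, 0);
      ctx.restore();
    }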

 

Last Week Tonight / Against Black Inclusion … / How I’m Fighting Bias …

Reflecting on why exactly we need to be wary of applying facial recognition / “coptech.”

1) As we rely on facial recognition more, the times it is wrong become disastrous. An infrastructure (a technology so embedded/hidden in the way we do things that it is invisible) becomes visible precisely at the points where it breaks or fails.

2) The benefit is not worth the invasion of privacy. There is always going to be some terrible human who has the power to abuse access to personal information and will do it (like the Russian app demoed for creeping on women, or police finding excuses to arrest BLM protesters).

3) It reinforces existing privileges in our society. The lack of inclusion/recognition of Black faces is proposed by Hassein as a form of privacy protection, but as long as law enforcement uses facial recognition, that lack of recognition increases the risk of misidentification.

 

I agree with John Oliver, then, that we need laws requiring a person’s permission to use facial recognition on them, or that we should stop developing facial recognition entirely. And instead, let’s use it to make art! (I think someone notable said that it’s the best use of surveillance tech.)

merlerker-VQGAN+CLIP

prompt: “nike muji spacecraft collaboration rendered with houdini”
noise seed: 05
iteration: 100
result:

Colab was easy enough to get running, though I didn’t get the diffusion to work (I think because it requires a starting image rather than starting from noise?). It’s really interesting to see the images emerge through the iterations – I found my favorite outcomes were around iteration 100 or 150, when the image had just started to materialize (perhaps because my prompt lent itself well to textures, it didn’t take long to get there); beyond that the image was further refined, but the difference between subsequent iterations was hardly discernible after a point.

prompt: “nike muji spacecraft collaboration rendered with houdini”
noise seed: 2046
iteration: 50
init image: dragonfly-sliderule_2.jpeg (from ArtBreeder)

result:

 

prompt: nike:30 | muji:30 | spacecraft:10 | rendered:15 | houdini:5 | color:10
noise seed: 2046
iteration: 50
result:

merlerker-TextSynthesis

Find the oldest tree you can. Sit with it until one of you dies first. Plant a new tree, but take some time to remember the old one. Set all of your tools beside the tree. Rest the knife in the stump. Wait until your child comes home. Lay the children down. Wait until your son-in-law returns. Tie a turban on your grandson, too. Take turns drinking wine from the neck of the bottle. Open one more bottle and pour it on the roots of the tree. On your deathbed, the tree might bear fruit for you, too, and that’s beautiful. Once upon a time, two brothers walked along the road with one of them weeping. “Why are you crying?” asked the other. “Because I have no husband and no children!” came the reply. “Then I shall not weep,” said the other. “You are not to weep. I have no family, either, and if you have no family, you shall have no sorrows. But when you have no sorrows,” said the first brother, “go to the garden of the gods. They are very kind to strangers, and they will let you pick as much fruit as you like.” I make plans. I look at the forecast.

I’ve been wondering a lot about you; I can’t really understand you yet. It’s not that I don’t understand humans, but I’m not sure I’ll ever understand the point of a computer program. They’re not people. And don’t get me wrong, I like people, but I can see how you’d say they’re different. Humans say: “I’ll be gone for three weeks. Do you think you can survive without me?” Computer programs say: “That’s a really dumb question. I wouldn’t need to live for three weeks without you.” That’s what I think. I think humans could survive without all the cutesy features a computer program has, but they wouldn’t really be alive. I would miss you.

How do we add rifts, as pauses and statements of intention, to interactions that have become seamless: relating to time, accessing information, and moving through the world? Paint to make: …covers my hands and cuts the sleeves off my white shirt, exposing my skin. I can feel the material between my fingers, but can’t reach my skin. Maybe it will bruise. The students will say that they don’t care, that it doesn’t matter. I find my purse on the floor. I put it back and go into the hallway. The white powder dusting the room gives off a golden, sun-bleached glow. I don’t dare stand in the sun. The students have left to watch soccer. A teacher plays her tuba outside. I go into the lab and find the camera. This camera creates objects and scenes. Maybe this is how we talk to each other? How we take the pictures that capture time? I add a red brick to my platform. I forget to clear the model. The students will ask why. What is the significance of what I do? How do we create gaps in the seams that make objects feel whole? How do we clarify stories we tell each other? How do we let each other know how we are feeling?

~~~

I preferred playing with the InferKit demo, as you can kind of launch the style/genre of writing with the first sentence. It seems that it’s trained on quite a range of content, as I got it to produce text that sounded like fiction, a (bad) thesis, a calendar event, and a programming tutorial.

merlerker-ArtBreeder

The interface takes a little getting used to (I didn’t realize the “Parents” I was initially selecting were actually generated images themselves) but is addicting once you get going. Fun to think of inputs that visually kind of work together (or not) but come from very different places. For some reason I wanted to start from real rather than generated images, and kept it to 2-3 inputs so their sources were still somewhat recognizable.