I was trying to learn GLSL but, due to my lack of motivation, I ended up making a project in Blender instead. A lot of what I learned in GLSL actually helped in making this scene, since Blender’s shader nodes are very similar to visually programming GLSL. I tried to do most of this procedurally using the node system (besides the grass); the flowers are made using an SDF I found in the Book of Shaders. Beyond GLSL, I tried to explore more of Blender’s features, such as particle systems, procedural animation, and the compositor. As this was kind of rushed, I wish I had put more detail into the scene and used less bloom, but I liked how it turned out.
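For context, the polar-shape technique from the Book of Shaders modulates a radius threshold by the angle around the center, which produces petal-like lobes. Here is a minimal Python translation of that idea (the petal count and constants are illustrative, not the exact values I used; in a fragment shader this would be GLSL):

```python
import math

def smoothstep(edge0, edge1, x):
    """GLSL-style smoothstep: clamped cubic Hermite interpolation."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def flower_mask(st_x, st_y, petals=5.0):
    """Return 1.0 inside a polar 'flower' shape, fading to 0.0 outside.

    st_x, st_y are normalized coordinates in [0, 1], as in a fragment shader.
    """
    # Center the coordinates and convert to polar form
    px, py = 0.5 - st_x, 0.5 - st_y
    r = math.hypot(px, py) * 2.0
    a = math.atan2(py, px)
    # Angle-modulated radius threshold: abs(cos) creates the petal lobes
    f = abs(math.cos(a * petals / 2.0)) * 0.5 + 0.3
    return 1.0 - smoothstep(f, f + 0.02, r)
```

Evaluating `flower_mask` per pixel gives a soft-edged flower silhouette; in Blender the same math can be built with Math and Vector Math nodes.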
Relevant media – I’m interested in how to explore generativity within nature and how to achieve that using shaders. While the following results most likely aren’t achievable given my current knowledge and the project’s time frame, I thought they were relevant to what I want to explore.
Technologies and techniques – I’m still unsure whether to pursue only learning GLSL or learning HLSL + Unity, so I included resources for both.
Freya Holmer – Shaders for Game Devs series – I’ve tried to watch this for the past year and I haven’t gotten past the first video
Roystan Unity shaders
Char Stiles’ shader workshop page + Book of Shaders
I’m interested in learning how to write shaders as well as in creating an immersive world using Unity and Blender. My three ideas each use one or the other, or a combination of both. I’m not sure I want to go with writing shaders in Unity, however, since it might be too difficult to tackle learning HLSL and getting used to Unity at the same time.
My project is a collaborative drawing program where visitors draw constellations in an imagined space.
My project uses the limbs and body to control the movement of a forest. I wanted to explore movement in inanimate objects, not just a single tree, and with natural subjects it’s easier to build visual complexity, since I could copy-paste the trees to create a forest. By moving their arms back and forth, the user simulates the movement of branches, and their distance from the camera determines the size of the sun and the visibility of the ground.
I liked how it turned out visually, but I didn’t like the user interaction: the frame rate was pretty bad, which made the movement look very laggy. I also wanted to process the user’s movement somehow to create smoother-looking motion, similar to wind moving through trees, but couldn’t figure out how to do so.
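In hindsight, one standard way to get that wind-like smoothing is an exponential moving average: each frame, move the displayed value only a fraction of the way toward the raw tracked value, so jitter is filtered out at the cost of a little lag. A minimal sketch in Python (the alpha and the sample positions are made up for illustration):

```python
def smooth(prev, raw, alpha=0.15):
    """Exponential moving average: smaller alpha = smoother but laggier."""
    return prev + alpha * (raw - prev)

# Example: smoothing a noisy sequence of tracked arm positions
raw_positions = [0.0, 1.0, 0.2, 0.9, 0.3, 1.0]
smoothed = raw_positions[0]
trace = []
for raw in raw_positions[1:]:
    smoothed = smooth(smoothed, raw)
    trace.append(smoothed)
```

Running the smoothed value into the branch motion instead of the raw pose would hide both tracking noise and dropped frames, since the filter naturally interpolates between updates.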
How Do You See the Disappeared? A Warm Database (Mariam Ghani, Chitra Ganesh, 2004)
Link | Video
This project was developed in response to the U.S. government detaining immigrants, most of whom were Muslim, following 9/11; many others were also subject to questioning. The “warm data” in the project’s title stands in opposition to the data generated from such questioning; it is “deeply personal but non-identifying”, as opposed to common categories such as place of origin or birthday. The project is currently inaccessible, but it is separated into four parts; I liked the third and fourth, which were the questionnaire that collected the “warm data” and the interface for viewing the responses. I found this project really interesting because of how it used databases and data to humanize rather than categorize, and how it can evoke an emotional response from the viewer despite the fact that nothing is known about the person who submitted the response.
Kyle McDonald – Appropriating New Technologies
“What we discovered is that that expression alone is sufficient to create marked changes in the autonomic nervous system” (Paul Ekman)
I thought this was interesting because human emotion seems like it should be dictated by some internal mechanism within the brain; being able to induce emotion purely through physical motion was not something I’d considered before.
Nabil Hassein – Against Black Inclusion in Facial Recognition
“Machine learning researchers have even reinvented the racist pseudoscience of physiognomy, in a study claiming to identify criminals with approximately 90% accuracy based on their faces alone — using data provided by police.”
I thought the arguments the author discussed were very interesting. The quote above reminded me how important it is to understand the societal impact of any piece of technology, especially when it can perpetuate racist ideas, rather than focusing simply on innovation or technical mastery.
Prompt: ‘film photograph of pittsburgh pinterest’
I wanted to see if I could generate specific locations, and since I saw that the GAN could respond to prompts such as “unreal engine”, I tried using “pinterest” as a tag. I think the distinct aesthetics of various user bases are interesting, so it was fun to see how tagging the prompt with Pinterest turned out.
I felt nostalgic about spending summer at home when I was a kid.
Of course, back then, it was just life as usual for us.
(that is, work, eat, sleep, repeat… or so I thought) I wanted to go outside and look at the trees.
I felt like I was actually “at home.”
It was as if all of life had happened to someone else and I had gotten a break.
(Don’t worry, I’ll stop by my parents’ house and get a dish of my favorite ice cream to ease my pain.
(We all need a pick – me – up every now and then)
The feeling of home was forever taken away from me, it would seem.
I thought InferKit was more interactive than Narrative Device; the texts it generated often had more variety, and I enjoyed its prose more. With Narrative Device I found it harder to use abstract themes because it wouldn’t generate many sentences.
This was pretty fun, but once the number of genes got past four or five, the outcome became significantly less controllable. In the second image I had Pomeranian as one of the genes, but I kept getting images of babies.