I spent a few runs learning the “rules” of the kinds of forms and details the model responded to (narrow paths for zippers, loops like in strappy sandals, wavy bottoms for soles…) and then tried to figure out how to push them. For facades I followed the rules for the most part, here trying to see if I could believably render a facade upside down. For shoes, I played more with how the language of “shoe-like” contours can be composed in unexpected ways. It seems the more interesting results come from composing familiar details in new ways: the model responds well to recognizable details but composes the image however the input dictates. This gets at a balance of “crystallization” (human-recognizable details) and “chaos” (unexpected compositions).
merlerker-LookingOutwards03
AR Copy-Paste by Cyril Diagne
I actually remember seeing this project last year, and I think it’s so genius. Coming from the perspective of a designer, I’m especially interested in the types of interactions that are made possible with Deep Learning. After doing a little digging on Cyril Diagne, I realized they are also the person behind cleanup.pictures, which has the same kind of magic. It’s such a simple interaction from the user’s perspective, and so intuitively what we’d want to do with technology: drag something we see through our camera in the real world directly into screen world. To me, a project like this seems almost obvious after the fact, but it involves noticing the moments of frustration many (designers, in this case) have with technology but accept and put up with, and taking a step back to question whether there are actually ways around them.
merlerker-landscape
My concept started from an idea to make data-driven tree rings that represent precipitation/drought data in California, and are explorable at multiple scales. I had the idea to use branching as the natural way to explore scale, where larger scales (state, basin) could be larger rings that branch into the smaller rings (grid cell), so switching scales meant you were slicing the tree at different points. I had the code to draw the rings in this wobbly way, but plugging in the data as an input and getting a neat ring that starts and ends in the same place proved to be challenging, and my DeepNote started crashing repeatedly 🙁 I pivoted (for now) to make a landscape from the tree rings that can shift from a top-down view (slice) to a perspective view (branch).
I use the random function to place multiple tree rings on the canvas, mapping their y position to the number of rings, so lower “trunks” are thicker. I draw rings with some wobble using Perlin noise to offset the radius as we go around the circle (learning the trick for getting the noise values to start and end in the same place: use sin/cos of the angle as the (xoff, yoff) inputs to noise, so your domain in noise space is also a circle), then add stippling with a random angle stagger so the stipples are not all lined up.
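The circular-noise-domain trick looks roughly like this (a minimal sketch with made-up names like drawRing, not my exact project code):

```js
function setup() {
  createCanvas(400, 400);
  noLoop();
}

function draw() {
  background(250);
  drawRing(width / 2, height / 2, 120, 20);
}

// Walk around the circle, sampling 2D noise at a point that itself
// travels around a circle in noise space, so angle 0 and TWO_PI get
// the same noise value and the ring closes cleanly.
function drawRing(cx, cy, baseRadius, wobble) {
  noFill();
  stroke(60);
  beginShape();
  for (let a = 0; a < TWO_PI; a += 0.02) {
    const xoff = cos(a) + 1; // +1 keeps noise inputs positive
    const yoff = sin(a) + 1;
    const r = baseRadius + map(noise(xoff, yoff), 0, 1, -wobble, wobble);
    vertex(cx + r * cos(a), cy + r * sin(a));
  }
  endShape(CLOSE);

  // Stippling just outside the ring, with a random angular stagger
  // per dot so the stipples don't line up radially.
  stroke(140);
  strokeWeight(2);
  for (let a = 0; a < TWO_PI; a += 0.1) {
    const aj = a + random(-0.05, 0.05);
    const r = baseRadius + wobble + random(2, 10);
    point(cx + r * cos(aj), cy + r * sin(aj));
  }
}
```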
merlerker-Creature
Common Rumpus
I developed my creature by sketching ideas for how to make a creature-like thing out of 3 rectangles, with some vague ideas for interaction/motion, and then jumping into implementing it. Upon reflection, I think this approach defeated the purpose of the 3-rectangles constraint: I spent a long time thinking about how to make something interesting (statically) with 3 rectangles, when I could have benefited from thinking of the interaction/animation first rather than making those decisions on the fly. Interestingly, I wasn’t thinking of it as an interactive animation, but I’m realizing how challenging that is: I have experience making animations, controlling paths, keyframes, and easing, but when all of that is interactive it feels like a very different challenge, because it can happen anywhere on the screen, in any direction.
The interaction I’ve established creates a sort of curious-observer-and-odd-wild-creature relationship. The creature is friendly and approaches the mouse with its goofy little bounce, but when you try to “move” toward it, the creature doesn’t let you get too close and starts backing away.
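In code terms the behavior boils down to something like this (a stripped-down reconstruction of the idea, not the creature’s actual code; comfortZone is a name I made up):

```js
let x, y;
const comfortZone = 80; // hypothetical "too close" distance

function setup() {
  createCanvas(400, 400);
  x = width / 2;
  y = height / 2;
}

function draw() {
  background(240);
  const d = dist(x, y, mouseX, mouseY);
  if (d > comfortZone) {
    // Curious: drift toward the mouse while it keeps its distance.
    x = lerp(x, mouseX, 0.02);
    y = lerp(y, mouseY, 0.02);
  } else {
    // Wary: back away when the mouse closes in.
    x += (x - mouseX) * 0.05;
    y += (y - mouseY) * 0.05;
  }
  rect(x - 15, y - 15, 30, 30); // stand-in for the three rectangles
}
```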
merlerker-LookingOutwards02
Zach Lieberman does beautiful things with color, and here creates a generative work that feels like a material rendering but is built simply from a number of moving radial gradients (like point lights). He posts many iterations/experiments on his Instagram, so you can really see how his techniques develop and grow more complex.
https://www.instagram.com/p/CTetxTHjvsL/
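My guess at the basic ingredient (emphatically not Lieberman’s code, just a sketch of the idea): a few noise-driven “point lights,” each faked as concentric translucent circles, blended additively.

```js
function setup() {
  createCanvas(400, 400);
  noStroke();
}

function draw() {
  blendMode(BLEND);
  background(10);
  blendMode(ADD); // overlapping gradients sum like light
  for (let i = 0; i < 4; i++) {
    const x = noise(i * 10, frameCount * 0.005) * width;
    const y = noise(i * 10 + 5, frameCount * 0.005) * height;
    radialGradient(x, y, 200, color(80 + 40 * i, 90, 180 - 40 * i));
  }
}

// Fake a radial gradient with concentric low-alpha circles.
function radialGradient(x, y, maxR, c) {
  for (let r = maxR; r > 0; r -= 4) {
    fill(red(c), green(c), blue(c), 8);
    circle(x, y, r);
  }
}
```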
merlerker-Reading03
Jorge Luis Borges’ Library of Babel (which inspired the project by Jonathan Basile, pictured below) has stuck in my mind as an example of effective complexity. His concept of the generated book leverages the order, rules, and familiarity of books as pages of paragraphs, sentences, and words made up of letters and spaces, while imposing additional constraints: each book has 410 pages, 40 lines per page, and ~80 letters per line – which teasingly seems to promise meaning. He introduces disorder through the characters in the book being random. What I find compelling about his concept is that it appears to follow total order in terms of meaning (taking form in the book, a medium typically equated with meaning/knowledge) but in actuality is total randomness: a white noise of letters and spaces. It also illustrates the “10,000 Bowls of Oatmeal” problem well, in that the resulting books are so random that they are meaningless and effectively indiscernible from each other. The story that plays out the concept is predicated on that exact false promise of meaning.
(sorry that was > 50 words)
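Part of what makes the concept so striking is that the whole generative rule fits in a few lines (which 22 letters Borges had in mind is unspecified; a–v here is arbitrary):

```js
// Total order in format, total randomness in content: Borges'
// 25 orthographic symbols are 22 letters plus comma, period, space.
const SYMBOLS = 'abcdefghijklmnopqrstuv,. ';

function babelPage(lines = 40, lineLength = 80) {
  let page = '';
  for (let i = 0; i < lines; i++) {
    for (let j = 0; j < lineLength; j++) {
      page += SYMBOLS[Math.floor(Math.random() * SYMBOLS.length)];
    }
    page += '\n';
  }
  return page;
}

console.log(babelPage()); // one of the Library's 25^3200 possible pages
```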
The “Problem of Meaning” with generative art is especially challenging to me. From what I see of the generative art world, much of it tends toward being “unapologetically abstract and formal in [its] generative practice, seeking only to reinvigorate the sublime and instill a sense of awe” (173), which in a sense appeals to me as a humble goal that demonstrates and celebrates beauty in its “truth to process,” or truth to system. At the same time, I can’t help but look at all the generative art on Twitter or sold as NFTs and feel some skepticism that clashes with my (debilitating) idealism: it shouldn’t be so “easy” to capitalize on generative art, it feeds a little too seamlessly into fast-consumption platforms, and at worst it amounts to mere visual eye candy.
merlerker-timepiece
Sky Clock
The Sky Clock shows, each minute, an image of the sky that was captured at that time of day. I first wrote a Python script (here) to get photos from the Flickr API matching the search “sky,” then filtered them to keep only photos with EXIF metadata (to get the actual capture time, not the time the photo was uploaded to Flickr), and saved the results into a JSON with keys for each ‘HH:MM’ time and the URLs of matching photos as values. I loaded this JSON into my p5 sketch to display an image matching the current time. If there is more than one image for the current time, a random one is chosen.
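The display side is simple; a minimal sketch, assuming the JSON is shaped like { "07:45": ["url1", "url2"], … } (sky_times.json is a hypothetical filename):

```js
let timeToUrls; // 'HH:MM' -> array of photo URLs
let img, lastKey;

function preload() {
  timeToUrls = loadJSON('sky_times.json');
}

function setup() {
  createCanvas(600, 400);
}

function draw() {
  const key = nf(hour(), 2) + ':' + nf(minute(), 2); // zero-padded 'HH:MM'
  if (key !== lastKey && timeToUrls[key]) {
    // Several photos can share a minute; pick one at random.
    loadImage(random(timeToUrls[key]), loaded => { img = loaded; });
    lastKey = key;
  }
  background(0);
  if (img) image(img, 0, 0, width, height);
}
```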
My hope was this would give a real-time sense of the sky that would shift throughout the day, from sunrise through daytime, sunset, and night. It was in part inspired by this post sent to me by a friend a few months ago:
I think the concept is a poetic contribution to the genre of crowdsourced clocks we saw in examples like The Human Clock, especially if it worked as described above: a shifting image of the sky throughout the day, along with (charmingly imperfect) intrusions of cropped bits of things that are not the sky. Unfortunately, I quickly ran into problems that challenged this vision. I knew 1) that the same time of day will look different depending on the season and latitude at which the photo was taken, but did not anticipate 2) that getting the actual time a photo was captured is incredibly difficult. Flickr’s API provides a “date_taken,” but they also state: “The ‘taken’ date represents the time at which the photo has taken. This is extracted from EXIF date if available, else set to the time of upload.” OK, so to tell if date_taken is the actual time at which the photo was taken, I check if that photo has EXIF data. EXIF data is basically all the metadata that comes along with a photo, sometimes including GPS coordinates, the camera model it was taken on, and camera settings, as well as a number of date fields. This is where I really got into trouble…
In short, there is NO RELIABLE STANDARD for getting the local time at which a photo was taken. In addition, going through every photo to cross-check its EXIF data added tons of computation time (it was going to take upwards of 17 hours before I killed it).
I put in quite a few hours learning about XML parsing and then trying to figure out the EXIF timezones, and I couldn’t find a way to consistently get the local capture time of an image. I found the best solution I could given the time constraint, but it disrupts that vision of a smoothly shifting sky. Perhaps a lesson is that you can’t have too specific a vision when it comes to crowd-sourced data, or else you need to be prepared to put a lot of time in!
merlerker-Loop
I succeeded in learning to draw isometric cubes (using scale-shear-rotate) in p5 and to use p5.func and createLoop to make a seamless loop. I didn’t totally understand the t01 calculation in Golan’s code. It seems to figure out what frame we are on and normalize it; I would have thought the 3000 was the loop duration in milliseconds, but changing that value completely bugged out the animation, so I assume it’s related to framerate in some way. It’s the same as doing animLoop.progress (where we are in the animation, from 0 to 1), but the two don’t play well together for GIF export because they may be phase-offset.
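My working mental model, as a sketch (my reading of the pattern, not necessarily Golan’s exact code): normalize elapsed milliseconds into a t01 that sweeps 0 to 1 each cycle, and drive the drawing only with periodic functions of t01 so the first and last frames match.

```js
const LOOP_MS = 3000; // if 3000 is the loop duration in ms, this is a 3s loop

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(245);
  const t01 = (millis() % LOOP_MS) / LOOP_MS; // 0 -> 1 each cycle
  const angle = t01 * TWO_PI; // periodic in t01, so the loop is seamless

  // Isometric top face via scale-shear-rotate: rotate a square 45
  // degrees, then squash it to half height (side faces would use
  // shearX()/shearY()).
  push();
  translate(width / 2, height / 2 + 40 * sin(angle)); // bob once per loop
  scale(1, 0.5);
  rotate(QUARTER_PI);
  rectMode(CENTER);
  rect(0, 0, 100, 100);
  pop();
}
```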
I took a long time to decide what to do for my loop, as I wanted to try to do something more conceptual, but in the end went with this simple visual. Could have better heeded the “This is a mini-project, so KEEP IT SIMPLE” note.
It’s not a very complex animation, but it’s elegant and works well. An easy strategy to make it more visually impressive could be to make a grid of the module I’ve built (taking a cue from our references), perhaps shifting the timing of each row so it reads as a cascade or canon of sorts. Edit: Added this.
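The cascade amounts to a per-row phase shift, something like this (module() stands in for whatever the single loop draws):

```js
const LOOP_MS = 3000;
const ROWS = 5, COLS = 5;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(245);
  const t01 = (millis() % LOOP_MS) / LOOP_MS;
  for (let i = 0; i < ROWS; i++) {
    for (let j = 0; j < COLS; j++) {
      const t = (t01 + i / ROWS) % 1; // shift each row's phase; still 0..1
      push();
      translate((j + 0.5) * width / COLS, (i + 0.5) * height / ROWS);
      module(t);
      pop();
    }
  }
}

function module(t) {
  circle(0, 20 * sin(t * TWO_PI), 20); // placeholder bouncing dot
}
```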
merlerker-TimeResearch
“Like maps, celestial coordinate systems become a reified intellectual construct, a graphical scheme through which human beings create a relation to the phenomenal world.” (Drucker 72-73)
It’s common knowledge that most calendars are abstractions of planetary movement, constructs that ultimately root human routine in the phenomenological. Yet, the abstraction (more than what it represents) is almost too embedded in our everyday lives: I hardly find myself gazing at the information structure of GCal and feeling deeply connected to the position of the stars and moon, the impermanence of my existence, or the cycles of the natural world (maybe just me).
“Errors in early pendulum clocks were eclipsed by those caused by temperature variation” (From Wikipedia “History of Timekeeping Devices”)
We don’t tend to think of clocks as susceptible to messy or imprecise influences like temperature. Time is usually thought of as a dimension on its own, but what if the x-axis was warping and breaking based on some other axis?
merlerker-LoopResearch
David Whyte
Whyte uses this technique a few times in his work: he creates an illusion where a cube is textured with a “painting” of smaller cubes that then move as if they were separate cubes. By exposing where the illusion breaks, Whyte acknowledges the trick that makes the illusion work. He takes the viewer through the magic of the motion but also lets them in on the secret.
Cindy Suen
Many of Suen’s looping GIFs create an illusion of an infinite assembly-line process by having an object move along a path, coming into frame in one form and going out in another (here, cleverly treating the computer screen as a frame itself). Let’s not think too hard about infinite production and consumption, and just appreciate how satisfying it is to watch an infinite stream of burgers get uploaded to the metaverse.
Lucas Zanotto
https://www.instagram.com/p/CTheuwVl3pu/
Zanotto is a master of easing, which is what makes his looping characters appear as if powered by momentum: there is a moment of tension as the motion approaches an inflection point and stands still for a beat before falling into the next movement.
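The core trick is remapping linear loop time through a steep ease so the motion hangs at the extremes; in miniature (not Zanotto’s code, just the idea):

```js
function setup() {
  createCanvas(400, 200);
}

function draw() {
  background(245);
  const t01 = (millis() % 2000) / 2000; // linear loop time, 0 -> 1
  const p = 1 - abs(2 * t01 - 1); // fold into a ping-pong 0 -> 1 -> 0
  // Cubic ease-in-out: the ball lingers at each end of its swing
  // before falling into the return trip.
  const e = p < 0.5 ? 4 * p * p * p : 1 - pow(-2 * p + 2, 3) / 2;
  circle(lerp(50, 350, e), height / 2, 40);
}
```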