starry – VQGAN + CLIP

Prompt: ‘film photograph of pittsburgh pinterest’

I wanted to see if I could generate specific locations, and since I saw that the GAN could respond to prompts such as “unreal engine,” I tried using “pinterest” as a tag. Different user bases have distinct aesthetics, so it was interesting to see how tagging the prompt with Pinterest turned out.

merlerker-VQGAN+CLIP

prompt: “nike muji spacecraft collaboration rendered with houdini”
noise seed: 05
iteration: 100
result:

Colab was easy enough to get running, though I didn’t get the diffusion to work (I think because it requires a starting image rather than starting from noise?). It’s really interesting to see the images emerge through the iterations. My favorite outcomes were around iteration 100 or 150, when the image had just started to materialize (perhaps because my prompt lent itself well to textures, it didn’t take long to get there); beyond that the image was further refined, but the difference between subsequent iterations was hardly discernible after a point.

prompt: “nike muji spacecraft collaboration rendered with houdini”
noise seed: 2046
iteration: 50
init image: dragonfly-sliderule_2.jpeg (from ArtBreeder)

result:


prompt: nike:30 | muji:30 | spacecraft:10 | rendered:15 | houdini:5 | color:10
noise seed: 2046
iteration: 50
result:
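The pipe-separated “phrase:weight” syntax above is how these notebooks let you weight parts of a prompt. As a rough illustration of what the notebook does with such a string, here is a minimal parsing sketch; `parse_prompt` is a hypothetical stand-in for the notebook’s internal parsing, not its actual function.

```python
# Hypothetical sketch: split a "phrase:weight | phrase:weight" prompt
# into (phrase, weight) pairs, the form the notebook feeds to CLIP.

def parse_prompt(prompt: str) -> list[tuple[str, float]]:
    """Split 'a:30 | b:10' into [('a', 30.0), ('b', 10.0)].

    A phrase with no explicit ':weight' defaults to weight 1.0.
    """
    pairs = []
    for chunk in prompt.split("|"):
        chunk = chunk.strip()
        if not chunk:
            continue
        phrase, sep, weight = chunk.rpartition(":")
        # Only treat the suffix as a weight if it is numeric.
        if sep and weight.strip().replace(".", "", 1).lstrip("-").isdigit():
            pairs.append((phrase.strip(), float(weight)))
        else:
            pairs.append((chunk, 1.0))
    return pairs

pairs = parse_prompt("nike:30 | muji:30 | spacecraft:10 | rendered:15 | houdini:5 | color:10")
print(pairs)  # [('nike', 30.0), ('muji', 30.0), ('spacecraft', 10.0), ...]
```

In the actual notebooks the weights scale each phrase’s contribution to the CLIP loss, so “nike:30” pulls the image toward that concept much harder than “houdini:5.”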

Dr. Mario-VQGAN+CLIP

It was very confusing at first: the link on the website gave a different version of the notebook than the one in the video tutorial, and the one on the website didn’t work. Once I got it working, though, it was very similar to the app WOMBO Dream. It was a little slow, but the outcomes were pretty interesting.

The prompt is “Magic Tower”

Sneeze—VQGAN+CLIP

Prompts
“magical giant village in tree tops” and “medieval cats fighting”

Using the Google Colab notebook was interesting. I had used a Colab notebook before, but only briefly and unsuccessfully, for another project. I just remember being very confused by the concept of running pre-made code from others, and not understanding that all you had to do was press the play button on each code snippet. This time it was very easy to complete, and the most difficult part was waiting for the images to render. I thought it was cool that you could either give it a base image to work from or have it generate everything from scratch. Both of my prompts were generated from scratch by the Colab notebook.