SPINGBING-VQGAN+CLIP

I used the Pixray Readymade tool because I was using Colab for another task and it would not let me run both simultaneously. I typed in “nuclear mutated flower field sunny day”, and this is what happened:

It is a little too saturated and primary-colored for my personal taste, but I am impressed by the interpretation of the prompt.

Solar-VQGAN+CLIP

(dreamscape of dystopia in apocalypse)

(the Song Dynasty in the style of an impressionist painting)

It was very intriguing to see the program generate these versions step by step. However, by the 400th iteration the images were becoming more and more similar, with little variation between steps. Hence, I played with multiple text prompts to explore the many surreal and imaginative images that could be created.
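For readers curious what these "iterations" are: VQGAN+CLIP notebooks repeatedly optimize a latent image code so that CLIP rates the decoded image as similar to the text prompt. Below is a minimal, hypothetical sketch of that loop in PyTorch; `Decoder` and `TextImageEncoder` are toy stand-ins for the real pretrained VQGAN and CLIP models, so only the structure of the iteration is accurate.

```python
import torch

class Decoder(torch.nn.Module):
    """Toy stand-in for the pretrained VQGAN decoder (assumption)."""
    def forward(self, z):
        return torch.sigmoid(z)  # pretend latent -> "image"

class TextImageEncoder:
    """Toy stand-in for CLIP's image/text encoders (assumption)."""
    def encode_image(self, img):
        return img.mean(dim=-1)             # fake image embedding
    def encode_text(self, prompt):
        torch.manual_seed(hash(prompt) % 2**31)
        return torch.randn(16)              # fake text embedding

decoder, clip_model = Decoder(), TextImageEncoder()
text_feat = clip_model.encode_text("dreamscape of dystopia in apocalypse")

z = torch.randn(16, 16, requires_grad=True)   # latent code being optimized
opt = torch.optim.Adam([z], lr=0.05)

for step in range(400):                       # the "iterations" in the post
    image_feat = clip_model.encode_image(decoder(z))
    # push the decoded image's embedding toward the prompt's embedding
    loss = -torch.nn.functional.cosine_similarity(image_feat, text_feat, dim=0)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(step, round(loss.item(), 4))
```

Early steps make large jumps in the image while later ones only fine-tune, which would explain why the outputs plateau after a few hundred iterations.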

qazxsw-TextSynthesis

InferKit:

I go to school today wearing two pair of shoes.

I’m wearing the one pair with the cuter toes and the less funky colors (neon pink with a green swoosh, and green and orange with leopard stripes).

I’m also wearing one pair of sneakers that I can play basketball with.

I don’t think any of the students or professors have noticed my footwear yet, which is a little sad.

The fashion of the faculty, such as it is, seems to be sweaters and the occasional pair of well worn jeans and / or skivvies that could be described as casual.

None of them are stylish.

Most of them look like they got dressed that morning and just threw their stuff together.

Most of them have also learned how to leave the house without looking like they’ve just got out of bed.

They know that if they look so worn out that you don’t know if they’re awake or not, they don’t have any business leaving the house at all.

 

Narrative Device:

Master Yoda, the wrinkled old Jedi, looked up from his book and smiled at the two children. “Coca-cola, my young ones,” he said, “would make you very happy.”

 

Both tools are really cool to play with. I like InferKit better, as the text it generated has more variety than Narrative Device's.

CrispySalmon-VQGAN+CLIP

Prompt: Rick and Mort messing in the white house.

I actually like the image produced around the 50th iteration the most; it best resembles the color and feeling of a Rick and Morty scene. I was mostly interested in seeing whether the model would successfully recreate the style of Rick and Morty. In that sense it failed: the image became more and more realistic, losing the show's sense of color and flatness.

CrispySalmon-TextSynthesis

InferKit:

I Feel Like My Life Is Just A Series Of Unrelated Wacky Adventures.

My family and I were watching Once Upon a Time the other night (I haven’t seen it in years.
I wasn’t surprised when it made it to ABC).
It was about time to head to bed.
Well…
My sister Kori had been trying to use a lipstick without a mirror, but in the dark, she hadn’t seen it.
All she had seen was her hand.
And all she could remember was seeing her hand.
That’s right.
Narrative Device:
Both results read naturally. However, I definitely think the InferKit paragraph is better suited to the prompt, whereas the Narrative Device paragraph is completely different from what I was expecting.

CrispySalmon-ArtBreeder

It’s actually really hard to predict the outcome just from the sliders. The resulting images aren’t really what I was expecting, but they do have features recognizably inherited from the genes.
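A possible reason the sliders are hard to predict: ArtBreeder is built on GANs (BigGAN/StyleGAN), and each "gene" slider nudges a latent vector along some direction before the generator renders the pixels, so the slider-to-image mapping is indirect. The sketch below is a rough illustration of that idea with made-up names and random directions, not ArtBreeder's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.standard_normal(512)        # latent code behind the current image

# Hypothetical "gene" directions -- in the real service these are learned;
# here they are random vectors purely for illustration.
genes = {
    "age":   rng.standard_normal(512),
    "color": rng.standard_normal(512),
}

def apply_sliders(latent, sliders):
    """Shift the latent along each gene direction by the slider amount."""
    z = latent.copy()
    for name, amount in sliders.items():
        z = z + amount * genes[name]
    return z

edited = apply_sliders(latent, {"age": 0.8, "color": -0.3})
# `edited` would then be fed to the GAN's generator to render the new image;
# because pixels only change through the generator, the effect of a slider
# is hard to foresee.
print(float(np.linalg.norm(edited - latent)))
```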

kong-TextSynthesis

InferKit: bright day

Narrative Device: rope, intestine

This was my first experience using an AI, and I was pleasantly surprised to find that it was able to generate engaging stories. With InferKit, I sometimes even felt the phrasing was poetic. With Narrative Device, I was surprised that the story read like a horror story; it was interesting because it is a type of writing I personally wouldn’t have thought to attempt. One thing I also noticed is that when I typed “heart chain” as the input, it didn’t generate a story but rather a list of related words, such as hairclip, rings, games, dolls, etc.

CrispySalmon-Pix2Pix

Above are two examples I made with edges2shoes. I noticed that it works best if you draw realistically with perspective (image 1), whereas a highly stylized, flat 2D drawing (image 2) doesn’t render as well.
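One plausible explanation, consistent with how edges2shoes was trained: the training inputs were edge maps extracted automatically from shoe photographs, so a drawing that resembles photo-derived edges stays close to the training distribution, while a flat stylized sketch does not. The snippet below illustrates how such a training-style input is produced; it uses OpenCV's Canny detector as a simple stand-in for the HED edge detector used for the original pix2pix dataset.

```python
import cv2
import numpy as np

# Synthetic "photo": a bright shoe-ish ellipse on a dark background.
photo = np.zeros((256, 256), dtype=np.uint8)
cv2.ellipse(photo, (128, 150), (90, 40), 0, 0, 360, 200, -1)

# Photo-derived edge map: the kind of input the generator saw in training.
edges = cv2.Canny(photo, 100, 200)
cv2.imwrite("edges_input.png", edges)
# A flat, stylized hand drawing falls outside this distribution, which is
# one plausible reason image 2 renders worse than image 1.
```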

starry-VQGAN+CLIP

Prompt: ‘film photograph of pittsburgh pinterest’

I wanted to see if I could generate specific locations, and since I saw that the GAN could respond to prompts such as “unreal engine”, I tried using “pinterest” as a tag. Different user bases have distinct aesthetics, so it was interesting to see how tagging the prompt with Pinterest turned out.