junebug-ArtBreeder

I was really impressed with the resources ArtBreeder offers and how much you can customize your creations, especially in the portrait and landscape categories I played around with. I preferred the specific categories to the general category because of how much you can adjust each “factor” under the genes column. In particular, I was struck by how many details you can change in the final image and how realistic the altered image looks compared to its parent images.
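For anyone curious what is happening under the hood: ArtBreeder is built on GANs, and as I understand it, “parents” and “genes” are operations on a model’s latent vector. Below is a rough conceptual sketch of that idea; this is not ArtBreeder’s actual code, and the names and dimensions are made up.

```typescript
// Conceptual sketch of ArtBreeder-style "crossbreeding" in a GAN's latent
// space. NOT ArtBreeder's actual code; names and dimensions are made up.

const LATENT_DIM = 512; // assumed latent size, StyleGAN-like

type Latent = number[];

// A "child" starts as a weighted blend of its parents' latent vectors.
function blendParents(parents: Latent[], weights: number[]): Latent {
  const child: Latent = new Array(LATENT_DIM).fill(0);
  parents.forEach((p, i) => {
    for (let d = 0; d < LATENT_DIM; d++) child[d] += weights[i] * p[d];
  });
  return child;
}

// A "gene" slider nudges the child along a learned direction
// (e.g. "age" or "chaos"), scaled by the slider amount.
function applyGene(latent: Latent, direction: Latent, amount: number): Latent {
  return latent.map((v, d) => v + amount * direction[d]);
}
```

The generator then turns the adjusted latent vector back into an image, which is why small slider moves produce such smooth, realistic changes.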

General category – parent: flowers, genes: underwater creatures and chaos

General category – parent: bubble, genes: nematode, bubble, and chaos

Landscape category

Portrait category – parents: two random male portraits and two random female portraits; I played around with gene categories for art style and chaos

junebug-Pix2Pix

I found pix2pix so much fun to play around with. At first, I wanted to use my iPad to draw, to see whether more accurate input on the user’s side would change the results, but I couldn’t draw inside the box with my Apple Pencil.

I tried drawing Nyan Cat to see what pix2pix would make.

I tried drawing something completely unrelated to a cat to see what would come out.

This one was a more realistic drawing, to see how lifelike the output would be.

This one was just completely random to see what would happen.

A Croc with edges2shoes

I wanted to see if edges2handbags would recognize a fanny pack.

junebug-LookingOutwards04

Xander Steenbrugge’s When the painting comes to life…

Gif of a few seconds from the video

This project was an experiment in visualizing sound using AI. In his Vimeo channel’s About section, Steenbrugge describes his workflow:

1. I first collect a dataset of images that define the visual style/theme that the AI algorithm has to learn. 
2. I then train the AI model to mimic and replicate this visual style (this is done using large amounts of computational power in the cloud and can take several days.) After training, the model is able to produce visual outputs that are similar to the training data, but entirely new and unique.
3. Next, I choose the audio and process it through a custom feature extraction pipeline written in Python. 
4. Finally, I let the AI create visual output with the audio features as input. I then start the final feedback loop where I manually edit, curate and rearrange these visual elements into the final work.
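To make steps 3 and 4 a bit more concrete: his actual pipeline is custom Python, but the core idea of a feature extractor can be sketched in a few lines. Everything below (the names, and the choice of RMS loudness as the feature) is an illustrative assumption, not his code.

```typescript
// Illustrative only: a tiny audio-feature extractor in the spirit of
// step 3 above. All names here are made up.

// Compute one RMS (loudness) value per fixed-size frame of raw samples.
function rmsPerFrame(samples: Float32Array, frameSize: number): number[] {
  const features: number[] = [];
  for (let start = 0; start + frameSize <= samples.length; start += frameSize) {
    let sumSquares = 0;
    for (let i = start; i < start + frameSize; i++) {
      sumSquares += samples[i] * samples[i];
    }
    features.push(Math.sqrt(sumSquares / frameSize));
  }
  return features;
}
```

Step 4 then amounts to using each frame’s feature value to move through the model’s latent space, so loud moments produce big visual changes and quiet moments produce small ones.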

What I appreciate most about this work is the way the paintings flow smoothly from one piece to the next: they fade into each other one element at a time rather than all at once. The video was mesmerizing to watch, the generated audio sounded like lo-fi music, and I loved how in tune the visuals were with the audio.

junebug-SoliSandbox

Soli Sandbox: Helping Me Heal

Code

Backstory:

My Soli project is based on the grief I have been experiencing over the loss of my roommate, who passed away this summer after battling leukemia. Her parents are driving down to Pittsburgh this weekend to pack up her belongings from our apartment, and my grief has been coming back in waves at the thought of losing one more connection with her. I previously made an intaglio print dedicated to her while she was completing her second round of chemo, and I view this project as a continuation of the work dedicated to her.

Concept:

This project is inspired by the “cheesy” haiku/poem posts that exist all over the internet, where the background is a calming scene and the foreground contains a poem. In my piece, the phone in a sense comforts the user: each time the user taps, it presents a different poem about grief and loss, while a melancholic song plays quietly in the background. The flowers in the corners are spider lilies, a summer flower native to much of Asia; in Asian cultures (especially Japan), the red spider lily is known as “the flower of the heavens” and is associated with final goodbyes and funerals.
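For anyone curious how the tap interaction is wired up: Soli Sandbox loads a web page and forwards gestures to it as browser events. Below is a minimal sketch of the poem-cycling logic, not my actual code (which is linked above); the 'tap' event name and the element id are assumptions.

```typescript
// Minimal sketch of tap-to-next-poem logic. The 'tap' event name is an
// assumption; check the Soli Sandbox docs for the exact gesture events.

const poems: string[] = [
  "Poem one about grief...",
  "Poem two about loss...",
  "Poem three about healing...",
];

let current = 0;

window.addEventListener("tap", () => {
  current = (current + 1) % poems.length; // cycle to the next poem
  const el = document.getElementById("poem"); // assumed element id
  if (el) el.textContent = poems[current];
});
```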

Self-Critique:

This project’s execution was very simple; I completely understand and agree with that critique. But in some respects, I think the simplicity serves the concept: the content is very heavy, and I wanted the user to focus on their grief and loss while reading the poems, without any distracting background visuals.

Inspiration – the “cheesy” poem posts

Project gif

junebug-facereadings

One thing I found really interesting in the readings was the tension between wanting facial recognition programs to be more inclusive by removing the algorithmic bias that exists in technology, and the problems that will arise when better facial recognition lands in the hands of an oppressive and racist society. Maybe the color bias in existing facial recognition programs is, under our current racially biased justice and prison system, a temporary advantage for the people this technology will target. I’m just disappointed that our reality is one where we have to debate whether fighting for inclusivity in technology could endanger the very people we are fighting for.

The second thing I found interesting is how we, as a civilization, are now moving into uncharted territory with our technological innovations. Whether something is moral is a debate we have with so many new technologies. In my Designing Human-Centered Software course, we discussed the morals and ethics of A/B testing. In the generic case, A/B testing doesn’t violate anyone’s privacy or morals; Google’s “41 shades of blue” test for its search button is one example. However, depending on how a test is conducted or what it is testing, it can quickly become immoral (for example, Facebook’s “mood manipulation” study). It makes me wonder what the future will look like as we keep incorporating technology further into our lives: will our boundaries for privacy change in about 30 years? Will our definitions of moral and ethical change too?

junebug-soli-sketches

With this project, I would really like to focus on creating beautiful, mesmerizing art; I appreciated how different Caroline’s Chalky Weirdos was from other Soli projects because it introduced illustration.

Idea #1: Landscape Pond Visual
The design will include lily pads, water ripples, flocking koi fish, flocking fireflies, and maybe more later.
Tap: add a ripple, which the koi fish and fireflies will then steer away from (see the sketch after this idea)
Swipe: change the time of day (swipe right: morning → noon → dusk → night; swipe left: the reverse)
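Here is the sketch mentioned above: a minimal outline of how the two gestures could drive the pond scene. The event names, the payload shape, and the steering math are all assumptions for illustration; real flocking would layer full boids rules on top of this.

```typescript
// Sketch of Idea #1's interactions; event names and logic are assumptions.

type Vec = { x: number; y: number };

const timesOfDay = ["morning", "noon", "dusk", "night"] as const;
let timeIndex = 0;

// Active ripples the koi and fireflies should avoid.
const ripples: { center: Vec; radius: number }[] = [];

window.addEventListener("tap", () => {
  // Drop a ripple at a random point on the pond.
  ripples.push({ center: { x: Math.random(), y: Math.random() }, radius: 0.2 });
});

window.addEventListener("swipe", (e: Event) => {
  // Assumed: the event carries a direction; right advances time, left reverses.
  const dir = (e as CustomEvent).detail?.direction === "right" ? 1 : -1;
  timeIndex = (timeIndex + dir + timesOfDay.length) % timesOfDay.length;
});

// Per frame: push each agent (koi or firefly) away from nearby ripples.
function avoidRipples(pos: Vec, vel: Vec): Vec {
  for (const r of ripples) {
    const dx = pos.x - r.center.x;
    const dy = pos.y - r.center.y;
    const dist = Math.hypot(dx, dy);
    if (dist < r.radius && dist > 0) {
      vel.x += (dx / dist) * 0.01; // steer away from the ripple center
      vel.y += (dy / dist) * 0.01;
    }
  }
  return vel;
}
```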

Idea #2: Tarot Card Reading
Each card will be hand-illustrated; I would like the cards to be very detailed and to look like Caroline’s project.
Four card placements, with a stack of randomly ordered cards at the bottom.
Swipe: move a card from the stack to a placement (see the sketch after this idea)
Tap: flip over the card
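And the sketch for this idea: the card state the two gestures would manipulate. As before, the gesture event names are assumptions.

```typescript
// Sketch of Idea #2's card logic; event names are assumptions.

interface Card { name: string; faceUp: boolean; }

const stack: Card[] = shuffle([
  { name: "The Fool", faceUp: false },
  { name: "The Star", faceUp: false },
  { name: "The Moon", faceUp: false },
  { name: "The Sun", faceUp: false },
]);
const placements: (Card | null)[] = [null, null, null, null];

window.addEventListener("swipe", () => {
  // Move the top of the stack to the first empty placement.
  const slot = placements.findIndex((c) => c === null);
  const card = stack.pop();
  if (slot !== -1 && card) placements[slot] = card;
});

window.addEventListener("tap", () => {
  // Flip the most recently placed card.
  const placed = placements.filter((c): c is Card => c !== null);
  const last = placed[placed.length - 1];
  if (last) last.faceUp = !last.faceUp;
});

function shuffle<T>(arr: T[]): T[] {
  // Fisher-Yates shuffle for the random starting order.
  for (let i = arr.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [arr[i], arr[j]] = [arr[j], arr[i]];
  }
  return arr;
}
```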

Idea #3: Forest Landscape Visual
Inspired by my Looking Outwards 2 post about Manolo Gamboa Naon’s work. I want to create a hypnotizing landscape of generated trees.
In a perfect world, the sensor could tell the position of the user’s hand hovering over the phone, so the trees would shift color wherever the hand hovers.
Swipe: make the trees rustle/sway from side to side
Tap: change the color variations

junebug-CriticalInterface

Critical Interface Manifesto

Tenet #4: “The interface collects traces: traces and remains of all agents/agencies which converge in it”
Provided propositions:
• “Swap computers with your partner for a week. Swap computers with a stranger for a day.”
• “Avoid synchronizing your devices…or you will be hunted and tracked down.”
• “Use your first Nokia 3210 phone again (try not to get too nostalgic about it).”
• “Become an Open Data Donor. Only give data deliberately and make sure it stays Open.”

I chose this tenet because it is so blatantly obvious, yet I don’t think we as a society stop to realize it or care enough to think about it. We are so connected to our devices that they have basically become digital copies of ourselves and our identities. Even seeing only the interface, without the user, a third-party observer can still uncover the user’s personality traits, identifying characteristics, and the other agents that have passed through the interface. Through applications, messages, personal photos, and so on, people’s traces are stored in the interface, giving it a collection of their identity. Our daily lives are the clearest example: I doubt most of us, myself included, could live without our phones or computers for a single day. Technology has become so central to feeling included in society that we have lost touch with parts of reality we used to handle ourselves (memorizing addresses and phone numbers, navigating a city without GPS, and so on). I find it striking that our interfaces have almost become us, yet they can retain more knowledge about us than we can about ourselves.

junebug-LookingOutwards03

Sachiko Kodama & Minako Takeno’s “Protrude, Flow” // 2001

This work is an interactive installation that expresses human desire and passion for life and moves like our instinctual feelings; unlike machines, it reminds us of the energy pulsating in our own bodies. It was created using a black magnetic fluid (ferrofluid) that is manipulated through sound. The fluid’s movements are driven by changes in magnetic fields produced by several electromagnets installed above and below it. Each electromagnet’s voltage is controlled by a computer according to the sound of spectators’ voices in the exhibition space, making the fluid pulsate like a living being. Specifically, microphones hanging from the ceiling capture the sound, and a computer converts the sound’s amplitude into electromagnet voltage, which determines the strength of the magnetic field.
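The control loop described there is simple enough to sketch. This is only an illustration of the amplitude-to-voltage mapping, not the artists’ actual system; the voltage range and function names are made up.

```typescript
// Illustrative sketch of the sound-to-magnet control loop described
// above; all numbers and names are made up.

const MAX_VOLTAGE = 12; // assumed safe drive voltage for one electromagnet

// Loudness (RMS) of the current microphone buffer, for normalized samples.
function amplitude(samples: Float32Array): number {
  let sum = 0;
  for (const s of samples) sum += s * s;
  return Math.sqrt(sum / samples.length);
}

// Map loudness to electromagnet voltage: louder voices, stronger field,
// taller spikes in the ferrofluid.
function voltageFor(samples: Float32Array): number {
  return Math.min(1, amplitude(samples)) * MAX_VOLTAGE;
}
```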

What I found most interesting about this project were the forms created by people’s voices. Although the artists say it is meant to mimic the energy pulses of a living being and express human desire, I didn’t really get that feeling from the videos and images. To me, it looks like particles and individual parts merging into one whole being: a society forming a single collective shape. The installation is mesmerizing, and the landscapes it forms are fascinating to watch. The fluid forms spikes, yet the multitude of spikes across the work ends up feeling organic and fluid.

Still image of the installation