Looking Outwards 2: Generative Art

Ian Cheng’s Emissaries is “a trilogy of simulations about cognitive evolution, past and future, and the ecological conditions that shape it,” as per the artist’s website. Cheng created the live, CGI simulations in a video game engine and observed the works from 2015–2017 as the game essentially “played itself.” The characters and creatures within Emissaries play out their lives in open-ended narratives generated and modeled through predictive technology usually reserved for forecasting election results or climate change.

Emissaries is a fascinating case study on narrative consciousness,
evolution, and dealing with natural chaos in life. I had the good
fortune of listening to Ian speak about this work when he visited
Carnegie Mellon back in 2017, and I was struck by the sheer complexity
and time he invested into these generative worlds. In order to build
this game and have it play itself over two years, I think Cheng would
have had to feed the algorithm an inordinate amount of data on past
major events, disasters, and research on human psychology so that the
game could generate narratives and characters that accurately reflect
our world.

LO-02

I find artist Manfred Mohr’s Algorithmic Modulations inspirational because it uses simple rotations to create sophisticated visual effects. In the project, lines and color bands present an n-dimensional rotation in two dimensions. The color bands have soft, watercolor-like hues, such as light blue and light violet, which are calming. The lines that cross those bands, however, are much darker, nearly black. These contrasting color choices tell us something about the artist’s aesthetics and sensibilities.

Mohr, M. (2019). Algorithmic Modulations. http://www.emohr.com/ww4_out.html

According to the artist, the algorithm works as follows: one diagonal path across a 12-D hypercube is selected at random and placed in three angular positions. This diagonal path in three positions forms the initial graphic constellation, the inherent generator of the images. After being rotated in 12-D and projected into 2-D, the diagonal paths become visible as transparent color bands. A thin horizontal black line is drawn from each vertex (of all three diagonal paths), crossing the vertical center of the image and continuing to the opposite side. In addition, for each vertex a slightly thicker horizontal black line is drawn from the vertical center of the image to the negative x-value of the respective vertex (a reflection across the vertical center). The rotation in 12-D horizontally modulates the black line construct, whose symmetric aspect creates a graphic counterpoint to the visually floating transparent color bands. In the screen-based work, the algorithm randomly changes the colors of the three transparent bands and occasionally switches completely to white bands on a grey background. At other times the three bands transform to full color and, while returning to their transparency, are overlaid with a shrinking and slowly fading structure revealing the generating diagonal paths.
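The core geometric move in that description — rotating a point in high-dimensional space and projecting it down to the picture plane — can be sketched in a few lines. This is my own illustration of the general technique, not Mohr’s actual code; the choice of rotation plane and angle is arbitrary:

```javascript
// Rotate a vector by angle theta in the plane spanned by axes i and j.
// All other coordinates are left unchanged.
function rotateInPlane(v, i, j, theta) {
  const out = v.slice();
  out[i] = v[i] * Math.cos(theta) - v[j] * Math.sin(theta);
  out[j] = v[i] * Math.sin(theta) + v[j] * Math.cos(theta);
  return out;
}

// Orthographic projection to 2-D: keep only the first two coordinates.
function projectTo2D(v) {
  return [v[0], v[1]];
}

// A diagonal path across a 12-D hypercube visits vertices one coordinate
// flip at a time; here we just transform a single corner of the cube.
const vertex = new Array(12).fill(1);
const rotated = rotateInPlane(vertex, 0, 5, Math.PI / 6); // arbitrary plane/angle
const p = projectTo2D(rotated);
console.log(p);
```

Drawing the projected vertices of a whole path, frame after frame as the rotation angle advances, is what would make the bands appear to float and modulate.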

animation – example

LookingOutwards-02: Generative Art

I truly admire “Flow Field #1 – Reinterpreting the Starry Night” (2018) by the software and net artist LIA. The first thing I noticed about this work was how hypnotizing the movement of the flow fields is. Despite consisting of flowing two-dimensional lines, it shows a smooth, feather-like texture and motion, and I was impressed by how much three-dimensional depth it conveys in a two-dimensional form. Though I do not know exactly what algorithm was used to create this work, it is said that LIA took the original color palette of Starry Night and translated the work into her own interpretation with custom-made computer code using the flow field technique. This technique renders the lines and loops of the artwork as flow fields to present depth and motion throughout the canvas. Judging from LIA’s other works as well, it is obvious that this artist enjoys working with lines and shapes and distorting them in an ethereal manner, so that the projects look like a hallucinated, imagined state of mind.
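The flow field technique in general works like this: each point samples an angle from an underlying vector field and takes a small step in that direction, tracing a smooth curve. Below is a minimal sketch of that idea under my own assumptions (the field function is invented for illustration; it is not LIA’s code):

```javascript
// A hypothetical field: the angle varies smoothly with position.
function fieldAngle(x, y) {
  return Math.sin(x * 0.01) + Math.cos(y * 0.01);
}

// Advance a particle one step along the field direction.
function step(p, stepSize) {
  const a = fieldAngle(p.x, p.y);
  return { x: p.x + Math.cos(a) * stepSize, y: p.y + Math.sin(a) * stepSize };
}

// Trace one flow line; drawing each segment (e.g. with line() in p5.js)
// would produce one of the feather-like strokes.
let particle = { x: 100, y: 200 };
const path = [particle];
for (let i = 0; i < 50; i++) {
  particle = step(particle, 2);
  path.push(particle);
}
console.log(path.length); // 51 points along one flow line
```

Seeding many particles across the canvas and coloring each stroke from a palette (here, Van Gogh’s) is what builds up the layered, feathered depth.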

Title: “Flow Field #1 – Reinterpreting the Starry Night by Vincent Van Gogh”

Artist: LIA

Year of Creation: 2018

Variable Face

var faceWidth = 300;
var faceHeight = 320;
var eyeSize = 60;
var eyeP = 30; // pupil size (declared globally so mouseClicked can change it)
var faceC = 180;
var eyeC = 255;
var pC = 0;
var maskC = 200;
var eyebC = 50;
var hairC = 255;

function setup() {
    createCanvas(640, 480);
    background(220);
}

function draw() {
    var colorR = random(0, 255); // red
    var colorG = random(0, 255); // green
    var colorB = random(0, 255); // blue
    background(colorR, colorG, colorB);
    stroke(0);
    strokeWeight(1);
    fill(0);
    rect(25, 25, 590, 430);
    // hair back
    fill(hairC);
    noStroke();
    ellipse(width / 2, height / 2, faceWidth + 150, faceHeight + 150);
    // face
    stroke(0);
    fill(faceC);
    strokeWeight(1);
    ellipse(width / 2, height / 2, faceWidth, faceHeight);
    var eyeL = width / 2 - faceWidth * 0.25;
    var eyeR = width / 2 + faceWidth * 0.25;
    // hair front
    fill(hairC);
    noStroke();
    ellipse(width / 2, height / 6, faceWidth / 1.3, faceHeight / 3.1);
    fill(eyeC);
    stroke(0);
    ellipse(eyeL, height / 2, eyeSize, eyeSize); // eye left
    ellipse(eyeR, height / 2, eyeSize, eyeSize); // eye right
    fill(pC);
    ellipse(eyeL + eyeSize / 8, height / 2 + eyeSize / 5, eyeP, eyeP); // pupil left
    ellipse(eyeR + eyeSize / 8, height / 2 + eyeSize / 5, eyeP, eyeP); // pupil right
    // mask string
    noFill();
    stroke(hairC);
    strokeWeight(10);
    ellipse(eyeL - 60, height / 1.4, faceWidth / 3, faceHeight / 3);
    ellipse(eyeR + 50, height / 1.4, faceWidth / 3, faceHeight / 3);
    // mask
    fill(maskC);
    strokeWeight(1);
    rect(eyeL - 25, height / 1.7, faceWidth / 1.5, faceHeight / 3.5);
    // mask lines (lines are colored by stroke, not fill)
    stroke(0);
    for (var i = 10; i <= 80; i += 10) {
        line(eyeL - 25, height / 1.7 + i, eyeL - 25 + faceWidth / 1.5, height / 1.7 + i);
    }
    // eyelashes
    stroke(0);
    strokeWeight(5);
    line(eyeL - eyeSize / 2, height / 2, eyeL - eyeSize / 2 - 20, height / 2.1);
    line(eyeR + eyeSize / 2, height / 2, eyeR + eyeSize / 2 + 20, height / 2.1);
    line(eyeL - eyeSize / 2, height / 2.1, eyeL - eyeSize / 2 - 20, height / 2.2);
    line(eyeR + eyeSize / 2, height / 2.1, eyeR + eyeSize / 2 + 20, height / 2.2);
    // eyebrows
    noStroke();
    fill(eyebC);
    rect(eyeL - eyeSize / 2, height / 3, eyeSize, eyeSize / 6);
    rect(eyeR - eyeSize / 2, height / 3, eyeSize, eyeSize / 6);
}

function mouseClicked() {
    faceWidth = random(250, 450);
    faceHeight = random(270, 470);
    eyeSize = random(40, 80);
    eyeP = random(20, 40);
    faceC = color(random(0, 255), random(0, 255), random(0, 255));
    maskC = color(random(0, 255), random(0, 255), random(0, 255));
    eyeC = color(random(180, 255), random(180, 255), random(180, 255));
    pC = color(random(0, 255), random(0, 255), random(0, 255));
    eyebC = color(random(0, 255), random(0, 255), random(0, 255));
    hairC = color(random(0, 150), random(0, 150), random(0, 150));
}

For this project I decided I wanted a face with a mask. As I started coding, I chose to make a female face that would change colors and sizes. The hard part was getting the mask to align correctly with each face, and the same goes for the top of the hair. I wanted my faces to look like a cartoon character would during Covid.

LO 2 – generative art : “Please Smile”

When researching generative art, I stumbled upon a piece titled “Please Smile,” a robotic installation made in 2011 by artist Hye Yeon Nam, with robotics by Changhyun Choi and assistant programmer Sam Mendenhall.
I was instantly drawn to the piece, as the concept is very intriguing. The exhibit consists of five robotic skeleton arms designed to interact with humans in three different ways. First, if no human is in view of the camera, the robotic arms stay in their default position. If a human comes into view, the arms point at the human, following the person if he or she moves. Finally, if the person smiles, the robot waves with all five arms.
The piece sounds simple, but with my knowledge of robotics, it is not the easiest thing to program. I’m inspired not only by the thought process behind the art but also by the execution. The video I watched of this piece showed the robot running very smoothly, which must have required a sound program.
There is a clear indication of the time, effort, and creativity that went into this piece, making it not only cool to look at but also interactive.
It is clear that the artist had a definite vision going into the project and worked alongside others to make her idea a reality.

Hye Yeon Nam - Please Smile, Robotic Installation 2011 - YouTube
pictured here is the piece “Please Smile”

Project 2-Changing Faces

var eyeSize = 40;
var faceWidth = 250;
var faceHeight = 300;
var browHeight = 270;
var smile = 5;
var pie = 0;
var pupil = 18;
var lidSize = 5;

function setup() {
    createCanvas(480, 640);
}

function draw() {
    background(165, 222, 214);
    strokeWeight(1);
    noFill();
    ellipse(width / 2, height / 2, faceWidth, faceHeight);
    strokeWeight(2);
    fill(255);
    var eyeL = width / 2 - faceWidth * 0.25;
    var eyeR = width / 2 + faceWidth * 0.25;
    ellipse(eyeL, height / 2, eyeSize); // eyes
    ellipse(eyeR, height / 2, eyeSize);
    fill(0);
    ellipse(eyeL, height / 2, pupil); // pupils
    ellipse(eyeR, height / 2, pupil);
    fill(118, 186, 177);
    arc(eyeL, height / 2, eyeSize + lidSize, eyeSize, 9.4, 2 * PI); // eyelids (9.4 rad is roughly PI mod 2*PI)
    arc(eyeR, height / 2, eyeSize + lidSize, eyeSize, 9.5, 2 * PI);
    noFill();
    strokeWeight(4);
    eyeL = width / 2 - faceWidth * 0.35;
    eyeR = width / 2 + faceWidth * 0.35;
    line(eyeL, 270, 200, browHeight); // brows
    line(eyeR, 270, 275, browHeight);
    arc(width / 2, 375, 100, smile, 0, pie * PI); // mouth
}

function mousePressed() {
    browHeight = (random(240, 300) | 0);
    smile = random(3, 75);
    pupil = random(10, 30);
    pie = random(0, 1);
    eyeSize = random(40, 60);
    lidSize = random(0, 6); // was (0, 6), which always evaluated to 6
}

I decided to make the various expressions of someone who would get punched in the face at a bar.

LO-Generative Artwork

The project Zauberflöte is a massive architectural feat from generative artist Michael Hansmeyer. He created it as set design for Romeo Castellucci’s production of Mozart’s Magic Flute at the La Monnaie theater in Brussels. The first thing I admire about it, and the first thing anyone notices, is the scope: the structure is massive and consumes the entire stage. The detail of the sculpture is intense, and I truly can’t comprehend how many tiny details that would never be seen from the audience were included in this piece. I also admire the way it fits with the show, The Magic Flute.

Zauberflöte Grotte, Michael Hansmeyer (2018)

Hansmeyer uses CAD software and algorithmic architecture techniques. For the Zauberflöte grotto, a generative subdivision algorithm was used to create something that appears both synthetic and organic: a simple input form is divided into smaller surfaces over and over again. By altering the division ratios, Hansmeyer and his team can create very complex geometric surfaces.
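The idea of repeated subdivision can be shown in one dimension down: a toy sketch of splitting a polyline at a chosen ratio, pass after pass. This is my own simplified illustration of the general principle, not Hansmeyer’s algorithm (his operates on 3-D surfaces):

```javascript
// Split every segment of a polyline at the given ratio (0..1),
// so each pass adds one new point per segment.
function subdivide(points, ratio) {
  const out = [];
  for (let i = 0; i < points.length - 1; i++) {
    const [x0, y0] = points[i];
    const [x1, y1] = points[i + 1];
    out.push([x0, y0]);
    out.push([x0 + (x1 - x0) * ratio, y0 + (y1 - y0) * ratio]);
  }
  out.push(points[points.length - 1]);
  return out;
}

// Start with a simple input form and refine it. In Hansmeyer's surface
// version, varying the division ratios per pass (and displacing the new
// points) is what lets the output drift between regular and organic.
let form = [[0, 0], [100, 0], [100, 100]];
for (let pass = 0; pass < 3; pass++) {
  form = subdivide(form, 0.4);
}
console.log(form.length); // element count roughly doubles each pass
```

Even this trivial version shows the key property: the number of elements grows geometrically, which is how a simple form accumulates detail far beyond what anyone would model by hand.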

Hansmeyer often asks himself “what is the origin of the forms that we design?” and “what kind of forms could we design if we could free ourselves from our experience?” He uses computational design to find these unseen forms and create something no human could think of. His artistic sensibilities center on exploration and a deviation from tradition, and that is why he uses the generative subdivision algorithm to create shapes that seem irrational.

LO-02 Generative Art

Glenn Marshall created this generative art piece in 2015 as a visual rendering for the song “Codex” by Radiohead. Marshall made the piece using the Python programming language inside Cinema 4D. It is a combination of three of his previous pieces: Particle Man, Star Girl, and Universe Hand. I admire the power this piece has to captivate viewers: once you see the first few seconds, you are drawn in by the movement of each element and compelled to continue watching. Each shot starts as a chaos of dots and lines that draws the viewer’s eyes along until the main image is formed.

I am also inspired by his attention to detail in telling a story. For example, in Universe Hand, Marshall sets the “particles” to expand and contract in sync with the Indian breathing technique “Pranayama” to parallel the “scientific model of an infinitely expanding and contracting cycle of the universe.” This also helps achieve his goal of creating meditative and philosophical narratives. He manifests this in the algorithm by syncing audio and visuals to create a calming effect that sparks that meditative state.

Looking Outwards 02: Generative Art

Design takes different mediums, some of which cannot even be seen by the human eye, but by using code, Joshua Davis used music to create art. He is a designer with a MoMA class on designing with sound, called Painting with Sound, and each piece is made, and reacts, differently. Specifically, The Lament Configuration (2015) is a sporadic example of his computational design. The video is a frantic collage of abstract shapes and their relationships with the music. Incorporating an acoustic aspect gives the art its transitions and gives the dynamics of each shape a purpose. The smoothness of the shapes and the way they match the beat give the video a coherent flow. Not only does the viewer see and hear the beats, but the way the shapes move touches the viewer.

If coded, the animation was likely reacting to something specific in the music. There could have been a variable that changed every time the music changed: when the beat was louder or more intense, the shape grew larger. In addition, the shape was constantly spinning. The spinning component could be a variable that changes the coordinates of the shape, and the enlargement could be a variable driven by the sound. The red color of the background stayed constant, but the shape, affected by the music, keeps changing perspective, which changes how light falls on it. Perhaps when a shape reaches a certain size or coordinate the color changes, using an if/else condition.
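The guess above — a size variable driven by loudness plus a steadily advancing rotation — could be wired up roughly like this. This is speculation on my part, not Davis’s actual code; the ranges and rotation speed are invented:

```javascript
// Map a value from one range to another (same idea as p5.js map()).
function mapRange(v, inLo, inHi, outLo, outHi) {
  return outLo + ((v - inLo) / (inHi - inLo)) * (outHi - outLo);
}

// Per-frame update: louder audio -> larger shape; rotation accumulates.
function updateShape(shape, amplitude) {
  return {
    size: mapRange(amplitude, 0, 1, 20, 300), // beat intensity drives scale
    angle: shape.angle + 0.05,                // constant spin each frame
  };
}

let shape = { size: 20, angle: 0 };
shape = updateShape(shape, 0.8); // one loud frame
console.log(shape.size, shape.angle);
```

In a real sketch the amplitude would come from an audio analyser (for example, p5.sound’s amplitude analysis), and the if/else color change would test `shape.size` against a threshold inside the same update.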

Joshua Davis has a different piece for every song he analyzes, changing not only the colors and shapes but the whole format of the piece. From the way the shapes are organized to how they move, each of his pieces emits a different emotion, the same way music channels different emotions in everyone.

https://joshuadavis.com/MoMA-Classes-Painting-with-Sound

LO-02: Generative Art

#Deepdream is blowing my mind.

Memo Akten 

2015.

Memo Akten’s “#Deepdream is blowing my mind” intrigued me by the way he used this project to reveal our confirmation biases and our strong urge to project something familiar onto what is simply “noise with particular distributions.” The combination of art, perception, and machine learning makes this project philosophical yet compelling and interactive. Watching the algorithm produce new combinations of patterns pushed me to seek familiarity in the array of colors and designs. Akten uses a video of his face, an image unfamiliar to the machine, which then uses familiar images to generate the patterns it perceives and recognizes in his face. In this way we can see the similarities and differences between machine biases and our human confirmation biases when presented with an unfamiliar image.

Akten manifests his ideas in the algorithm by revealing the poetry of the algorithm’s processes in comparison to how we as humans perceive and reflect on our world. Akten’s exploration of machine learning and its relationship to human learning allows us to understand different modes of thinking and how the process can be quite similar with different results. 

The deepdream algorithm gives further insight into how we make meaning of the things around us, using the algorithm to show clearly how we engage with and understand the images it creates, and how our biological neural networks arrive at the conclusions they do. This project uses machine learning to help us better understand our own human learning process and consciousness.

Memo Akten’s #Deepdream is blowing my mind.