rkarp1 – LookingOutwards-03 – Frequencies

“Frequencies” (2017) is a computational digital fabrication project by convivial studio.

In “Frequencies,” a triptych of 3-D, map-like reliefs was made by CNC machines from patterns generated with different frequencies of the Perlin noise algorithm. The reliefs were meant to imitate the rocky and fluid forms often found on relief maps. According to convivial studio’s description of the project, “The generative application [used to make “Frequencies”] allows an infinite number of outcomes,” so the studio sought to show a range of patterns across the triptych.

Here’s a video that details the creation of “Frequencies.”

I was intrigued by “Frequencies” for a number of reasons. For one, I’m in the early stages of a project about map-making (specifically with regard to redistricting), and I was curious to investigate map-making from a very different angle. In addition, I had just read about Perlin noise in one of the required readings for our class; I don’t yet fully understand it, but I’m intrigued by it. I also love the combination of huge machines making very delicate-seeming art. I had to look up what a CNC machine is (read about it here), and I am looking forward to going down the rabbit hole of other CNC-machined artworks.

convivial studio used openFrameworks, with add-ons including ofxMtlMapping2D, ofxFlowTools, and ofxAutoReloadedShader, for the generative 3D and projection software. They used ArtCAM to generate the code needed for the CNC machine. A more detailed description is available here.
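To make the idea concrete for myself, here is a tiny p5.js sketch of my own (not the studio’s openFrameworks code, and the numbers are arbitrary) that samples Perlin noise at a single frequency to produce a grayscale heightmap. Lowering noiseScale gives smoother, more fluid terrain; raising it gives rockier detail.

var noiseScale = 0.02; // arbitrary frequency; try 0.005 or 0.1 to compare

function setup() {
    createCanvas(200, 200);
    noLoop(); // the terrain is static, so draw it once
}

function draw() {
    for (var x = 0; x < width; x++) {
        for (var y = 0; y < height; y++) {
            // noise() returns a value between 0 and 1 for this (x, y)
            var h = noise(x * noiseScale, y * noiseScale);
            stroke(h * 255); // brighter = higher "elevation"
            point(x, y);
        }
    }
}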

As seen in the video, a projection layer is added on top of the reliefs that, according to convivial studio, “aims to challenge the perception of relief.” Personally, I found myself more interested in the reliefs themselves and the means by which they were made, but they do make for some striking images.

An image of “Frequencies” with its projection layer

convivial studio, based in London, describes itself as working “at the intersection of art, design and technology. Merging the digital with the physical, convivial creates emotionally engaging experiences with an element of wonder.” I certainly felt wonder watching the creation of “Frequencies.” I hope you do, too!

rkarp1 – Project-03 – Section A

Rachel Karp – Dynamic Drawing

var d = 10; // distance between triangles

/*
This drawing consists of three "modes"
1) "Old" mode, when mouseY is above the middle of the canvas
(minus variable d, the space between the triangles)
2) "Young" mode, when mouseY is below the middle of the canvas
(plus variable d, the space between the triangles)
3) "Contemplative" mode, when mouseY is within the space between the traingles
*/

function setup() {
    createCanvas(480, 640);
}

function draw() {
    //BACKGROUND
    //set background color for "old" mode
    background(0);
    //background color changes in "young" mode
    if(mouseY>height/2+d){
        background(255, 255, 77);
    }
    //background color changes in "contemplative" mode
    if(mouseY>=height/2-d && mouseY<=height/2+d){
        background(255);
    }

    //set outline color
    stroke(255);

    //TOP TRIANGLE
    //set color in "young" mode
    fill(255, 51, 0);
    //set color in "old" mode
    if(mouseY<height/2-d){
        fill(150);
    }
    //set color in "contemplative" mode
    if(mouseY>=height/2-d && mouseY<=height/2+d){
        fill(255);
    }
    //draw triangle dependent on mouseY
    triangle(width/4, height/2-d, width/2, mouseY, 
        width-width/4, height/2-d);

    //BOTTOM TRIANGLE
    //set color in "young" mode
    fill(255, 51, 0);
    //set color in "old" mode
    if(mouseY<height/2-d){
        fill(150);
    }
    //set color in "contemplative" mode
    if(mouseY>=height/2-d && mouseY<=height/2+d){
        fill(255);
    }
    //draw triangle dependent on mouseY
    triangle(width/4, height/2+d, width/2, height-mouseY,
        width-width/4, height/2+d);

    //TOP TRIANGLE LINE (age line)
    strokeWeight(3);
    //draw line dependent on mouseY
    line(0, mouseY, width, mouseY);

    //BOTTOM TRIANGLE LINE (age line)
    //draw line dependent on mouseY
    line(0, height-mouseY, width, height-mouseY);

    //TEXT
    //add text in "young" mode
    if(mouseY>height/2+d){
        fill(102,102,255);
        //set font size dependent on mouseX
        //constrain font size so that "young!" will remain visible relative to mouseX
        var sizeX = constrain(mouseX, 0, 160);
        textSize(sizeX);
        textAlign(RIGHT);
        text("I feel " + mouseX + " years young!", width, mouseY);
    } 
    //add text in "old" mode
    if(mouseY<height/2-d){
        fill(107, 107, 71);
        //set font size dependent on mouseX
        //constrain font size so that "old." will remain visible relative to mouseX
        var sizeX = constrain(mouseX, 0, 300);
        textSize(sizeX);
        textAlign(RIGHT);
        text("I feel " + mouseX + " years old.", width, mouseY);
    }
    //add text in "contemplative" mode
    if(mouseY>=height/2-d && mouseY<=height/2+d){
        fill(0);
        noStroke();
        textSize(50);
        textAlign(CENTER);
        text("I feel...", width/2, mouseY);
    }

}

I had a hard time coming up with a concept for this project. I knew I wanted to experiment with triangles, because I hadn’t made myself work with them much before, and I knew I wanted to experiment with text. I ended up with this. I like the “age lines” that follow the mouse as the age changes, and I like that I got to learn about constrain().
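For anyone else meeting constrain() for the first time: it simply clamps a number to a range, which is what keeps the text size readable here. A minimal sketch of how I’m using it (the numbers are just the ones from my code above):

function setup() {
    createCanvas(480, 120);
}

function draw() {
    background(255);
    fill(0);
    // constrain(value, min, max) returns value limited to [min, max],
    // so the text size follows mouseX but can never pass 160
    var sizeX = constrain(mouseX, 0, 160);
    textSize(sizeX);
    textAlign(RIGHT);
    text(sizeX, width, height);
}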

rkarp1 – Looking Outwards-02 – Generative Art – Section A

LIA – Waves (2016)

When I read the description for “Waves,” I thought, ok, I’m going to watch a video of some sine waves. When I pressed play, I was blown away. The video completely transformed my emotional state, and then some. I was immediately lulled into a calm, the sort that is stereotypically ascribed to spending time by a body of water. I could even hear the sounds of an ocean. I found myself raising the volume on my computer to try to hear the sound better–only to realize that the video had no sound, and the sound was in my head, brought up from my memory by the video itself.

Still from “Waves,” a generative installation by LIA

On the project’s site (linked at the top of the post), LIA doesn’t go into too many details about the algorithms by which she made “Waves,” but the project description explains that the algorithms allow for changes in rhythm and scale while keeping the direction constant. It is all randomly determined, making for endlessly new patterns. She calls it a “motion painting.” It reminds me of the exercises we did in class today, with the circle moving back and forth at different speeds–but it’d be as if the circle were allowed to leave the canvas and endlessly new, randomly-generated circles appeared directly behind it, forever.
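To make that analogy concrete, here is a rough p5.js doodle of my own (definitely not LIA’s algorithm): each circle keeps a constant direction, and whenever one leaves the canvas it is replaced with a new random speed and size, forever.

var xs = [];
var speeds = [];
var sizes = [];

function setup() {
    createCanvas(480, 120);
    for (var i = 0; i < 20; i++) {
        xs.push(random(width));
        speeds.push(random(0.5, 3));
        sizes.push(random(5, 30));
    }
}

function draw() {
    background(0);
    noStroke();
    fill(255);
    for (var i = 0; i < xs.length; i++) {
        ellipse(xs[i], height/2, sizes[i], sizes[i]);
        xs[i] += speeds[i]; // the direction never changes
        if (xs[i] > width + sizes[i]) {
            // respawn just off-canvas with a new random rhythm and scale
            xs[i] = -sizes[i];
            speeds[i] = random(0.5, 3);
            sizes[i] = random(5, 30);
        }
    }
}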

In the About section on her website, LIA states that her minimalist work combines traditional drawing and painting with new digital and algorithmic aesthetics. The bio continues: “She focuses on the translation of certain experienced principles into abstract forms, movements and colours in order to allow the viewer to explore the same on a subconscious level.” I can attest to her success at that from my experience of “Waves.”

As highlighted on the project site, “Waves” was displayed as part of a concert in 2017. I think “Waves” begs to be combined with music, as my own response, which was also aural, suggests.

Waves @ concert “Blades” by @c (www.at-c.org), with Angelica V. Salvi, Ricardo Jacinto, João Pais Filipe, gnration, Braga. Portugal, 2017/03/03. Photos by gnration / André Henriques

rkarp1 – Project-02 – Variable Face – Section A

Rachel Karp Variable Faces

//face
var faceColorR = 0;
var faceColorG = 200;
var faceColorB = 40;
var faceWidth = 200;
var faceLength = 300;
var faceOutline = 3;

//mouth
var mouthColorR = 0;
var mouthColorG = 0;
var mouthColorB = 0;
var mouthSize = 100;
var mouthOutline = 2;

//eyes
var eyeColor = 0;
var eyeStroke = 255;
var eyeSclera = 6; // strokeWeight for the eye

//canvas
var canvasColorR = 0;
var canvasColorG = 0;
var canvasColorB = 0;

function setup() {
    createCanvas(640, 480);
    angleMode(DEGREES); // Change the mode to DEGREES
}

function draw() {

    background(canvasColorR, canvasColorG, canvasColorB);

    //face
    stroke(255);
    strokeWeight(faceOutline);
    fill(faceColorR, faceColorG, faceColorB);
    ellipse(width/2, height/2, faceWidth, faceLength);

    //mouth
    strokeWeight(mouthOutline);
    fill(mouthColorR, mouthColorG, mouthColorB);
    arc(width/2, height/2+faceLength/6, mouthSize, mouthSize, 0, 160, CHORD);

    //eyes color and outline
    stroke(eyeStroke);
    strokeWeight(eyeSclera);
    fill(eyeColor);
    
    //left eye
    ellipse(width/2 - faceWidth/4, height/2-faceLength/6, 20, 20);

    //right eye
    ellipse(width/2 + faceWidth/4, height/2-faceLength/6, 20, 20);

}

function mousePressed() {

    //face color
    //When clicked, face color changes randomly.
    faceColorR = random(0,255);
    faceColorG = random(0,255);
    faceColorB = random(0,255);

    //face size
    //When clicked, face width changes randomly, and face height changes in relation.
    faceWidth = random(100, 340);
    faceLength = random(faceWidth, faceWidth*3/2);

    //mouth color
    //When clicked, mouth color changes in relation to face color.
    mouthColorR = faceColorG;
    mouthColorG = faceColorB;
    mouthColorB = faceColorR;

    //mouth size
    //When clicked, mouth size changes in relation to face width.
    mouthSize = random(10, faceWidth/3);

    //eye
    //Stroke color remains constant.
    eyeStroke = 255;
    //When clicked, the size of the sclera (strokeWeight) changes.
    eyeSclera = random(1,18);

    //face outline
    //When clicked, face outline changes in relation to sclera size.
    faceOutline = eyeSclera/2;

    //mouth outline
    //When clicked, mouth outline changes in relation to sclera size.
    mouthOutline = eyeSclera/3;

    //canvas
    //When clicked, canvas color changes to match mouth color.
    canvasColorR = mouthColorR;
    canvasColorG = mouthColorG;
    canvasColorB = mouthColorB;

}

This project made me figure out a lot of things (some with the help of Maayan Albert during office hours), including how to make colors change and how to make sure different changing shapes (eyes, mouth) stay inside a bigger changing shape (face). Once I figured out how to make colors change, I wanted to explore how different colors can affect the viewer’s emotions. I also became interested in how outlines can change the emotional quality of an image. For example, when the white around the eyes is really big, the figure seems afraid; when the white around the whole face is heavier, the figure seems more active and zany.
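In case it helps anyone else, the pattern that solved the “shapes inside shapes” problem for me is just deriving every child measurement from the face’s, so the mouth and eyes can never outgrow it. A stripped-down sketch of that one idea (it leaves out all the color logic from the full code above):

var faceWidth = 200;
var faceLength = 300;
var mouthSize = 100;

function setup() {
    createCanvas(640, 480);
}

function draw() {
    background(0);
    noFill();
    stroke(255);
    //face
    ellipse(width/2, height/2, faceWidth, faceLength);
    //eyes, a quarter of the face width from the center line
    ellipse(width/2 - faceWidth/4, height/2 - faceLength/6, 20, 20);
    ellipse(width/2 + faceWidth/4, height/2 - faceLength/6, 20, 20);
    //mouth, never wider than a third of the face
    arc(width/2, height/2 + faceLength/6, mouthSize, mouthSize, 0, PI);
}

function mousePressed() {
    //everything derives from faceWidth, so the parts stay inside the face
    faceWidth = random(100, 340);
    faceLength = random(faceWidth, faceWidth*3/2);
    mouthSize = random(10, faceWidth/3);
}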

Rachel Karp-LookingOutwards-1

I had known about theater artist Annie Dorsen for years, but my first direct exposure to her work was Yesterday Tomorrow, which I saw in New York in 2016.

Natalie Raybould, Jeffrey Gavett and Hai-Ting Chinn in “Yesterday Tomorrow.” Credit Sara Krulwich/The New York Times

Yesterday Tomorrow is a performance by three vocalists who sight-sing a score created live by a computer algorithm that transforms the Beatles’ song “Yesterday” into “Tomorrow,” from the musical Annie, over the course of about an hour. Each time the algorithm is run, the evolution from “Yesterday” to “Tomorrow” is unique. It is the third in Dorsen’s trilogy of what she calls algorithmic theater, “in which customized, algorithm-driven computer software controls the transformation of dramatic content in real-time.”[1]

The performance involved a number of creators, some typical for a musical theatrical work (director, musical director, lighting designer, sound designer, production manager), but others not always found in the credits, including a lead computer programmer, Pierre Godard, and a video systems designer, Ryan Holsopple.

The idea built on Dorsen’s previous algorithmic works and arose more specifically from Dorsen’s research into evolutionary computation. As she explained in an interview with BOMB Magazine, “I was learning about evolutionary computation, and I had a thought: You could use an algorithmic tool to slowly and unpredictably turn one thing into another. And then the very next thought was to turn the song ‘Yesterday’ into the song ‘Tomorrow.’ It was that automatic.”[2]

In another interview, on the website Esoteric.codes, Dorsen describes the process by which she and her team landed on the specific types of algorithms to use. (Sadly, I couldn’t find the total number of people on the programming team.) At first she wanted to use a genetic algorithm, in which the computer would transform “Yesterday” into “Tomorrow” by learning. But she and a programmer found that that method did not ensure the computer would reach “Tomorrow.” So instead, Dorsen worked with Godard and music director Joanna Bailie on migration algorithms through which “Yesterday” shifts to “Tomorrow” over 30 steps (a number they also arrived at through experimentation; personally, I think a few fewer steps might have made for a more compactly satisfying experience). Each element of the performance has its own migration algorithm, meaning that the rhythm, lyrics, and melody migrations are generated independently.[3] Within all this structure, a lot of randomness is allowed, ensuring the performance is unique each time the program is run. The randomness has a direct tie to John Cage, whom Dorsen cites as an influence.[4]
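I found it helpful to sketch the general shape of that idea for myself. The following is only my own toy in JavaScript, not the show’s software: it pushes a source sequence toward a target sequence over 30 steps, flipping positions in a random order so that every run takes a different path but is guaranteed to arrive.

// Toy "migration": move source toward target over a fixed number of steps,
// converting positions in a random order so each run differs.
function migrate(source, target, steps) {
    var current = source.slice();
    var order = [];
    for (var i = 0; i < current.length; i++) { order.push(i); }
    // shuffle the order in which positions will flip (Fisher-Yates)
    for (var i = order.length - 1; i > 0; i--) {
        var j = Math.floor(Math.random() * (i + 1));
        var tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }
    for (var s = 0; s < steps; s++) {
        // flip this step's share of positions toward the target
        var start = Math.floor(s * order.length / steps);
        var end = Math.floor((s + 1) * order.length / steps);
        for (var k = start; k < end; k++) {
            current[order[k]] = target[order[k]];
        }
        console.log("step " + (s + 1) + ": " + current.join(" "));
    }
    return current; // equals the target after the final step
}

migrate("yes ter day all my trou bles seemed so far a way".split(" "),
        "to mor row to mor row i love ya to mor row".split(" "), 30);

In the real piece, of course, rhythm, melody, and lyrics each migrate independently, and the steps are sung live rather than printed to a console.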

To me, Dorsen’s use of algorithm in theater points to the coming ubiquity of including advanced technology across the theatrical field, as everyone and everything, theater included, transforms from yesterday to tomorrow.

A video excerpt from Yesterday Tomorrow

For more information, check out the full Esoteric.codes interview, which features a detailed explanation of the algorithms used as described by programmer Godard, this review from the New York premiere that I saw, and Dorsen’s 2012 essay about algorithmic theater.

Citations:
[1] Hallet, Nick. “Annie Dorsen.” BOMB Magazine, 12 Jan. 2016, www.bombmagazine.org/article/7164111/annie-dorsen.
[2] Hallet, Nick. “Annie Dorsen.”
[3] “A look at Algorithmic Theatre with ‘Yesterday Tomorrow’ creator Annie Dorsen.” esoteric.codes, 23 Feb. 2016, www.esoteric.codes/post/139854787758/a-look-at-algorithmic-theatre-with-yesterday.
[4] Hallet, Nick.

 

Rachel Karp – Project-01 – Self-Portrait (Face)

This process presented a great combination of practicing code and distilling my conception of myself down into units basic enough for me to code. Product-wise, I am happy with the elements I chose to focus on: my eyes, my thoughts about my place in the world, and my thoughts about other forces in the world (move your mouse around to play with that). But I know a greater facility with determining coordinates, especially in service of making triangles and quads, would have given me more precision with my final choices.

Rachel Karp Self-Portrait

function setup() {
    createCanvas(400, 400);
    background(125);
}

function draw() {

    //face
    fill(227,255,207);
    noStroke();
    ellipse(320, 270, 190, 300);

    //left eye
    fill(0, 178, 255);
    stroke(125);
    ellipse(270, 253, 50, 34);

    //right eye
    fill(0, 178, 255);
    stroke(255);
    ellipse(350, 250, 50, 40);

    //left pupil
    fill(0);
    ellipse(280, 262, 10, 10);

    //right pupil
    fill(0);
    ellipse(360, 262, 10, 10);

    //eyebrow
    stroke(0);
    line(300, 220, 250, 210);

    //nose
    fill(125);
    stroke(255);
    triangle(290, 320, 310, 260, 330, 310);

    //mouth
    fill(125);
    rect(290, 340, 40, 20);

    //mouth partition
    stroke(0);
    line(280, 352, 344, 352);

    //the Other
    fill(227,255,207);
    noStroke();
    quad(0, 31, 200, 150, mouseX, mouseY, 0, 76);

}