anabelle’s blog 05

One 3D computer artist I find inspiring is @dedouze on Instagram. His work shows an interesting side of 3D art because it looks 2D! It can actually be hard to tell which pieces on his page are 3D models and which are sketches, because the visuals stay so consistent in a static image. The only time you can really tell the two apart is when he animates his 3D work. He uses Blender for all his works (this is particularly inspiring to me, since I just learned how to use Blender a week ago for another class. It’s like, ‘wow, this is what I can work up to?!’). I think his art also departs from the usual aesthetics we associate with 3D art; I’m thinking of Pixar’s cartoony style or Hitman’s hyper-realistic one. dedouze’s art is the only convincing example I’ve seen that makes me think 2D animation can be improved with 3D. You know how sometimes a 2D show briefly switches to 3D and it’s really jarring and not visually pleasing? I feel like the studios behind those shows could learn from him that it is possible to make smooth transitions between 2D and 3D without pulling the audience out of the visual experience.

Looking Outwards-05

The piece I selected is “Cellular Forms” by Andy Lomas. It depicts a simple example of morphogenesis: in the video, the form morphs and changes and goes through a series of evolutions, and each time it undergoes a change it shakes and undulates and then almost “settles.” I picked this piece because I was intrigued by how the artist takes something quite scientific, like cell morphogenesis, and makes it feel peaceful and elegant. I was unable to find any information about the specific algorithm Lomas used, but it seems like he used some kind of additive algorithm: new material is added within a spherical boundary surface, and the spheres that make up the larger cell object divide fairly evenly within that boundary. In my opinion, it seems like Lomas took into account the overall action of the cell’s division but chose to remove the human aspect of it in order to make it feel more like art.
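I have no idea what Lomas actually implemented, but here is a minimal p5.js sketch of the kind of additive growth I am picturing: small circles are repeatedly added next to randomly chosen existing ones, as long as they stay inside a fixed circular boundary. The boundary radius, growth rate, and cell size are all numbers I made up for illustration.

// Hypothetical additive-growth sketch; only my guess at the flavor of
// Lomas's algorithm, not his actual code.

var cells = [];           // positions of the small "cells"
var cellSize = 8;         // diameter of each cell (arbitrary)
var boundaryRadius = 180; // radius of the containing boundary (arbitrary)

function setup() {
    createCanvas(400, 400);
    // start the growth from a single cell in the center
    cells.push({x: width / 2, y: height / 2});
}

function draw() {
    background(20);

    // the boundary surface that constrains growth
    noFill();
    stroke(80);
    circle(width / 2, height / 2, boundaryRadius * 2);

    // additive step: pick a random existing cell and try to grow a
    // neighbor beside it, keeping the new cell inside the boundary
    var parent = random(cells);
    var angle = random(TWO_PI);
    var nx = parent.x + cos(angle) * cellSize;
    var ny = parent.y + sin(angle) * cellSize;
    if (dist(nx, ny, width / 2, height / 2) < boundaryRadius) {
        cells.push({x: nx, y: ny});
    }

    // draw every cell
    noStroke();
    fill(250);
    for (var i = 0; i < cells.length; i += 1) {
        circle(cells[i].x, cells[i].y, cellSize);
    }
}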

Looking Outwards-04

The piece of work I selected is “data.matrix” by Ryoji Ikeda, released in December 2005. When listening to the piece, its electronic quality comes to the forefront. It sounds almost as if it is made up of “beep boops” and tapping, but there is an effect added to the sound that really enhances its electronic energy. The piece also sounds closer to noise frequencies than to a traditional music piece. Ikeda’s pieces are often compared to soundscapes, and that is apparent in data.matrix. Ikeda has had large-scale sound installations all across the world, including prominent work at the TWA Flight Center at JFK. It reads more as art, and even though I was not able to find what algorithm Ikeda used for this piece, it is apparent that some kind of algorithm must have been used: there is a clear order and organization to the piece that makes it feel quite mathematical and computational.
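I do not know Ikeda’s actual process, but as a toy example of how “ordered” electronic beeps can come out of a very simple rule, here is a small p5.js sketch (it needs the p5.sound library) that triggers a sine blip whenever the frame count matches a fixed modular pattern. Every number in it (frequencies, pattern lengths, volumes) is something I made up for illustration.

// Toy "algorithmic beeps" sketch, not Ikeda's method: a sine oscillator
// is triggered by a simple modular pattern on the frame count.
// Requires the p5.sound library; click the canvas once to start audio.

var osc;
var started = false;

function setup() {
    createCanvas(400, 100);
    osc = new p5.Oscillator('sine');
    osc.amp(0);   // start silent
    osc.start();
}

function draw() {
    background(0);
    fill(255);
    if (!started) {
        text('click to start', 10, 20);
        return;
    }
    // deterministic pattern: a high blip every 8 frames,
    // a lower accent every 32 frames (arbitrary numbers)
    if (frameCount % 8 === 0) {
        osc.freq(frameCount % 32 === 0 ? 440 : 1760);
        osc.amp(0.2, 0.01);  // quick fade in
    } else {
        osc.amp(0, 0.05);    // fade back out between blips
    }
}

function mousePressed() {
    userStartAudio(); // browsers require a user gesture before audio plays
    started = true;
}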

P-04 String Art

sketch
// Bridget Doherty, bpdohert, 104 Section C

// draw at least 3 string art shapes

var dx1;
var dy1;
var dx2;
var dy2;

// line density
var numLines = 40;

// starting line values
var x1 = 0;
var y1 = 0;
var x3 = 0;
var y3 = 300;

// second line values
var x2 = 400;
var y2 = 300;
var x4 = 400;
var y4 = 0;

var mouseClick = 0;

function setup() {
    createCanvas(400, 300);
    background('black');
    blendMode(EXCLUSION);
}

function draw() {
    drawCircles();
    string1();
    string2();
    string3();
    
    noLoop();
}

function drawCircles() {
    // diagonal row of white circles drawn behind the strings
    for (var i = 20; i < width; i += 40) {
        fill(250);
        noStroke();
        circle(i*1.3, i, 40);
    }
}

function string1() {
    for (var i = 0; i <= height; i += 10) {
        stroke('green');
        line(width/2.5, height/2, 0, i);
        line(width/2.5, height/2, width, i);
    }
    for (var i = 0; i <= width; i += 10) {
        stroke('yellow');
        line(width/3 + width/2, height/2, 0, i);
        line(width/3 + width/2, height/2, width, i);
    }
}

function string2() {
    stroke('blue');
    dx1 = (300-50)/numLines;
    dy1 = (0)/numLines;
    dx2 = (50-300)/numLines;
    dy2 = (300-300)/numLines;
   
    for (var i = 0; i <= numLines; i += 1) {
        line(x1, y1, x2, y2);
        x1 += dx1;
        y1 += dy1;
        x2 += dx2;
        y2 += dy2;
    }   
}

function string3() {
    // continues from the endpoints where string2 left off, since x1, y1, x2, y2 are globals
    stroke('cyan');
    dx1 = (300-50)/numLines;
    dy1 = (0)/numLines;
    dx2 = (50-300)/numLines;
    dy2 = (300-300)/numLines;
   
    for (var i = 0; i <= numLines; i += 1) {
        line(x1, y1, x2, y2);
        x1 += dx1;
        y1 += dy1;
        x2 += dx2;
        y2 += dy2;
    }  
}

LO-04


I get to talk about my favorite coding platform, Max, again! I found a pretty interesting granular synthesis patch, in which the creator can grab samples of the music that is playing and loop them back quickly while staying locked to the set tempo (I’m not sure if the tempo is read algorithmically off of the sound or input by the person running the patch). It makes for some pretty cool audio effects. The creator demonstrates this using a sort of folk-world-choral piece, mostly vocals and drums/percussion, but it would be really interesting to use it on a multitrack recording of an orchestra or another multi-part ensemble, to play with different parts of the music while the rest of the orchestration continues underneath. This video is from 2013, and the software has improved a lot since then, so I would be really interested to play with the patch in a newer version of Max and see how I could clean it up in presentation mode so it could possibly be used in a live performance format.
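I don’t know how the patch actually locks to the tempo, but the core idea of snapping a grabbed loop to the beat grid is easy to sketch. Here is a tiny JavaScript example (the bpm and subdivision values are assumptions, not taken from the video) that rounds an arbitrary loop length to the nearest sixteenth note so it stays in sync with the set tempo.

// Hypothetical tempo-locking helper, not the actual Max patch:
// round a grabbed loop length to the nearest subdivision of the beat.
function quantizeLoopLength(lengthSec, bpm, subdivisionsPerBeat) {
    var beatSec = 60 / bpm;                      // one beat, in seconds
    var gridSec = beatSec / subdivisionsPerBeat; // e.g. sixteenth notes at 4 per beat
    var steps = Math.max(1, Math.round(lengthSec / gridSec));
    return steps * gridSec;                      // snapped loop length
}

// Example: a 0.23 s grab at 120 bpm snaps to 0.25 s (one sixteenth note).
console.log(quantizeLoopLength(0.23, 120, 4));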

Project 04 – String Art

Sunset at the Beach

//Srishty Bhavsar
//15-104
//Section C
var dx1;
var dy1;
var dx2;
var dy2;
var numLines = 50;

function setup() {
    createCanvas(400, 300); // matches the size the scene is drawn at in draw()
    background(200);
    // leftover starter-code marks, immediately covered by draw():
    // text("p5.js vers 0.9.0 test.", 10, 15);
    // line(50, 50, 150, 300);
    // line(300, 300, 400, 100);
}




function draw() {
    //sky blue background
    background(193,242,254);
    // per-line steps for the string-art endpoints
    dx1 = (150-50)/numLines;
    dy1 = (300-50)/numLines;
    dx2 = (350-300)/numLines;
    dy2 = (100-300)/numLines;
    
// faint mountains in background
    var x1 = -100;
    var y1 = -30;
    var x2 = 500;
    var y2 = -200;

    for (var i = 0; i <= numLines; i += 1) {
        x1 += dx1;
        y1 += dy1;
        x2 += dx2;
        y2 += dy2;
        push();
        stroke(167,84,41,40);
        translate(200,150);
        rotate(PI/3); // flipped so they look like mountains
        line(x1/2, y1, x2, y2);
        pop();
        push();
        stroke(167,84,41,30);
        translate(100,150);
        rotate(PI/3);
        line(x1/2, y1, x2, y2);
        pop();
    }


    for (var i = 0; i <= numLines; i += 1) {

        var x1 = 0;
        var x2 = (i*width)/50;
        var y1 = (i*width)/50;
        var y2 = height;

        //Ocean blue waves (light to dark)
        stroke(219,247,253);
        line(x1 + 50, y1 + 100, x2, y2);
        stroke(0,157,196); // brightest blue
        line(x1, y1 + 80, x2, y2);
        line(width + (x2), height - y1, x2, y2);
        stroke(1,90,112); // oceanside blue
        line(width+500, height-y1, x2, y2);
        line(width+500, height-y1, x2 + 100 , y2);
        line(x1, y1 + 150, x2, y2);
        line(x1, y1 + 200, x2, y2);


        //sand
        stroke(246, 240, 210); // dark sand color
        line(x1, height - 20, x2 - 100, height);
        stroke(205, 170, 109); //lighter sand color
        line(x1, height-30, x2 + 10, height);
        stroke(255,255,240); //ivory sand color
        line(x1, height -10, x2 -100, height);

    }

    //setting sun
     for (var i = 0; i <= 50 + numLines; i += 1) {
        
        // sun sets position above water
        var x1 = 200;
        var y1 = 250;
        stroke(250, 180, 20, 100); // sunny yellow
        strokeWeight(2);
        push(); // save the current transform
        translate(x1, y1);
        rotate((i*PI)/numLines); // fan each ray around the sun's center
        line(0, 0, 0, 100); // rays
        pop(); // restore the transform

    }


    noLoop();
}


Blog 04

French-American jazz pianist Dan Tepfer is also a coder and has developed a series of algorithms so that his computer can play along with him. Jazz is known for its unpredictability and improvisation, which Tepfer’s algorithms can respond to. Tepfer, however, strongly believes that computers should not be made too intelligent, but should instead broaden the horizons of one’s imagination. He connects his Yamaha Disklavier to his computer, where his playing is analyzed and the algorithms “play” the piano in response. His album showcasing this work, Natural Machines, was released in 2018. Tepfer is currently working on bringing his ideas to the Melbourne Planetarium, where his algorithm will also project moving images onto the dome.
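The post doesn’t say what rules Tepfer’s algorithms actually use, but just to make the idea concrete, here is a tiny JavaScript sketch of one possible response rule: every note the pianist plays is echoed back mirrored around a fixed center pitch. The center pitch and the function name are my own assumptions for illustration, not Tepfer’s code.

// Hypothetical "respond to the player" rule, not Tepfer's actual algorithm:
// answer each incoming note with its reflection around a center pitch.
var CENTER_PITCH = 60; // middle C as a MIDI note number (arbitrary choice)

// Given the MIDI pitch the pianist plays, return the pitch the computer answers with.
function mirrorNote(playedPitch) {
    return CENTER_PITCH + (CENTER_PITCH - playedPitch);
}

// Example: E above middle C (64) is answered with A-flat below it (56),
// the same distance on the other side of the center.
console.log(mirrorNote(64)); // 56
console.log(mirrorNote(55)); // 65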

anabelle’s blog 04

Since middle school, I’ve always been a huge Vocaloid fan. Vocaloid is software used to simulate human singing, originally created to help performers and producers add vocals even if they did not have access to a real singer. However, Vocaloid expands beyond the software and includes “characters” for each voicebank, with popular examples including Hatsune Miku and Kagamine Rin & Len. Initially released in 2004 in a project led by Kenmochi Hideki in Barcelona, the Vocaloid software continues to be updated and rereleased today, with numerous versions and iterations of the same characters with new singing abilities. If popular enough, Vocaloids are also given 3D holograms that hold real-life concerts in real-life venues (with really great turnout). I think the algorithms behind Vocaloid are fairly simple: a commissioned singer records the base notes for a character, which can then be modified and edited by producers. What I love about Vocaloid is how each character has vocal “limitations” that produce their own unique sound. For example, Teto is best used for soft, low-energy ballads, and Kaito’s deeper voice will sound distorted in higher ranges.
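I don’t know Vocaloid’s real synthesis pipeline, but one basic ingredient of the “record base notes, then reshape them” approach I described is repitching a recorded sample. Here is a tiny JavaScript helper (the function name and the equal-temperament assumption are mine, not Yamaha’s) that computes how much faster a recording has to be played back to move it up by a given number of semitones.

// Toy illustration, not Vocaloid's actual engine: in equal temperament,
// shifting a sample up by n semitones means playing it back 2^(n/12) times faster.
function playbackRateForShift(semitones) {
    return Math.pow(2, semitones / 12);
}

// Example: to sing a recorded note a whole step (2 semitones) higher,
// the sample would be played back about 1.12x faster.
console.log(playbackRateForShift(2));   // ~1.122
console.log(playbackRateForShift(-12)); // 0.5, one octave down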

Here’s an example of a vocaloid concert — the turnout is actually crazy for these things:

Link to Vocaloid website (anyone can buy the software): https://www.vocaloid.com/en/

Project – 03

right click to refresh

sketch

//grid lines vertical
var x1 = 30
var y1 = 30
var x2 = 30
var y2 = 60

//grid lines horizontal
var Ax1 = 60
var Ay1 = 30
var Ax2 = 90
var Ay2 = 30

//orange square
var Sx1 = 60
var Sy1 = 60
var d = 30

//purple square
var PSx1 = 240
var PSy1 = 300
var Pd = 30

//yellow square
var YSx1 = 420
var YSy1 = 210
var Yd = 30


var gap = 30
var scolor = 0
var slant = 0
function setup() {
    createCanvas(600, 450);
    background(0);
    //text("p5.js vers 0.9.0 test.", 10, 15);
}



function draw() {
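// Overview: three squares (orange, purple, yellow) grow whenever the mouse
// is near them and reset with a flash once they reach a size limit; colored
// grid lines march across the canvas, cycling through red, green, and blue
// each pass; right-clicking clears the canvas and resets the squares; moving
// the mouse past the middle of the canvas makes the marching lines skip
// ahead, slanting the pattern.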

//orange square
strokeWeight(0)
fill('orange')
square(Sx1,Sy1,d)

//purple square
strokeWeight(0)
fill('purple')
square(PSx1,PSy1,Pd)

//yellow square
strokeWeight(0)
fill('yellow')
square(YSx1,YSy1,Yd)

// scaling square orange
if(dist(mouseX,mouseY,Sx1+d/2,Sy1+d/2)<d){
    //square scales up when the mouse is near its center
    d+=10
}if(d==width/2-15 || d==height/2-15){
    strokeWeight(0);
    fill(0)
    square(0,0,1000);
    strokeWeight(0);
    fill(random(200,255));
    square(Sx1,Sy1,d);
    d=60;
}

// scaling square purple
if(dist(mouseX,mouseY,PSx1+Pd/2,PSy1+Pd/2)<Pd){
    //square scales up when the mouse is near its center
    Pd+=30
}if(Pd==width/4 || Pd==height/4){
    strokeWeight(0);
    fill(0)
    square(0,0,1000);
    strokeWeight(0);
    fill(random(200,255));
    square(PSx1,PSy1,Pd);
    Pd=60;
}

// scaling square yellow
if(dist(mouseX,mouseY,YSx1+Yd/2,YSy1+Yd/2)<Yd){
    //square scales up when the mouse is near its center
    Yd+=1
}if(Yd==width/4 || Yd==height/4){
    strokeWeight(0);
    fill(0)
    square(0,0,1000);
    strokeWeight(0);
    fill(random(200,255));
    square(YSx1,YSy1,Yd);
    Yd=60;
}



//grid lines vertical left to right
//color
if(scolor==0){
stroke('red');
strokeWeight(3);
line(x1,y1,x2,y2);
}if(scolor==1){
stroke('green');
strokeWeight(3);
line(x1,y1,x2,y2);
}if(scolor==2){
stroke('blue');
strokeWeight(3);
line(x1,y1,x2,y2);
}if(scolor>2){
    scolor=0;
} 




//grid creation animation
x1+=gap
x2+=gap

//lines hit edge of canvas
if(x1>=width-60){
    //move down a row
    y1+=gap*2;
    y2+=gap*2;
    //reset x values
    x1=gap;
    x2=gap;
//loop lines across screen
}if(y2>=height){
    x1=30;
    x2=30;
    y1=30;
    y2=60;
    scolor +=1
}

//grid lines horizontal color top down
//color
if(scolor==2){
stroke('red');
strokeWeight(3);
line(Ax1,Ay1,Ax2,Ay2);
}if(scolor==1){
stroke('green');
strokeWeight(3);
line(Ax1,Ay1,Ax2,Ay2);
}if(scolor==0){
stroke('blue');
strokeWeight(3);
line(Ax1,Ay1,Ax2,Ay2);
}if(scolor>2){
    scolor=0;
}

//grid creation animation
Ay1+=gap
Ay2+=gap

//lines hit edge of canvas
if(Ay1>=height){
    //move across a row
    Ax1+=gap*2;
    Ax2+=gap*2;
    //reset y values
    Ay1=gap + slant;
    Ay2=gap;
//loop lines across screen
}if(Ax2>width){
    Ax1=60;
    Ax2=90;
    Ay1=30;
    Ay2=30;
    scolor +=1
}



//refresh page
if(mouseIsPressed){
    if(mouseButton == RIGHT){
    
    fill(0)
    rect(0,0,width,height);
    //orange square
    Sx1 = 60
    Sy1 = 60
    d = 30

    //purple square
    PSx1 = 240
    PSy1 = 300
    Pd = 30

    //yellow square
    YSx1 = 420
    YSy1 = 210
    Yd = 30
}
}

//slant
if(mouseX>width/2){
    //slant for vertical
    x1=x1+gap;

}
if(mouseY>height/2){
    //slant for horizontal
    Ay1=Ay1+gap;

}

}

Blog-03

Designer Hasan Ragab creates immersive digital art using an AI text-to-image generator for Parametric Architecture. The model is called Midjourney and is hosted on a Discord server. After you input a prompt, the AI bot produces four variations of the result, and you can then make more variations out of the existing ones. Hasan Ragab incorporates his Egyptian heritage into his work, creating futuristic Egyptian temples and reimagining Gaudi-esque buildings. Ideas are processed through a “parametric copy-paste,” and new possibilities come with each variation. A perk of Midjourney is that it prioritizes artistic results over realistic ones. Taking less than a minute, Midjourney is also very accessible and efficient, as easy as playing a game on your phone. A community of architects and designers has also formed around the model to push its boundaries. Hasan Ragab views AI tools as a new path in architecture. AI tools will force us to shift our views on creativity; whether that is for better or worse is up for debate.