Han Yu Looking Outward 11

Skirting on Thin Ice by Neotropic (Riz Maslen)

Riz Maslen is a British electronic music artist, producer, singer, and multi-instrumentalist, and one of the most recognized female composers in post-techno experimental electronics. She first began experimenting with electronic music in 1996. Maslen has produced an extensive catalogue of music under the names Neotropic and Small Fish with Spine and has performed at major music festivals such as Coachella.

Skirting on Thin Ice was performed by Riz Maslen at the opening of London’s Artist Quarter in 2011. Aside from making music and programming, Maslen also produces photographic slides and films as a live backdrop during her performances, a complementary medium that creates a more holistic, multimedia experience for her audience. Just like the other tracks by Maslen I came across while researching this project, Skirting on Thin Ice has a warm, mellow vibe that strikes the perfect balance between chill and energy. There are also slight variations nicely woven into the song that keep listeners eager to find out what comes next. I am impressed by the sheer creativity and innovation in Maslen’s productions, and I especially like this opening song because it somehow brings out my inner peace whenever I listen to it.

Christine Seo – Looking Outwards 11

Caption: A video documentation, filmed in Montreal, Québec, of Mesa Musical Shadows (2016), an interactive musical pavement.

Mesa Musical Shadows is an interactive, computational-music artwork by the Daily Tous Les Jours studio, an interaction design studio focused on work in public spaces. The project draws on a mix of performance, musical instruments, and digital arts, using contemporary tools such as sensors and real-time data. The artists wanted passing crowds to play a critical role in transforming their environment and their relationships. The creators’ artistic sensibilities manifest in the final form through brightly colored, geometrically tiled surfaces that make the piece more inviting to the audience. Shadows cast across the tiles are picked up by sensors, and sounds and melodies are projected from speakers. The sensors are read by a MaxMSP patch linked to Arduino Mega boards via OSC, and they constantly recalibrate themselves to define new threshold values that determine what is a shadow and what is not.
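The write-up doesn’t publish the calibration code, but the idea of continuously recalibrating a shadow threshold can be sketched in plain JavaScript. The decay rate and margin below are my own illustrative guesses, not values from the project:

```javascript
// Toy adaptive shadow detector: keeps a running baseline of ambient
// brightness per sensor and flags a "shadow" when a reading drops
// well below that baseline. The decay rate and margin are invented
// for illustration.
function makeShadowDetector(decay = 0.01, margin = 0.6) {
  let baseline = null; // running estimate of unshadowed brightness
  return function isShadow(reading) {
    if (baseline === null) baseline = reading;
    // Slowly track ambient light so clouds and the time of day
    // don't permanently trip the detector.
    baseline = (1 - decay) * baseline + decay * reading;
    return reading < baseline * margin;
  };
}

// Example: steady bright readings, then a sudden dip (a shadow).
const detect = makeShadowDetector();
const flags = [100, 102, 98, 101, 40, 42, 99].map(detect);
console.log(flags); // only the dips near 40 are flagged
```

Because the baseline drifts slowly, the detector adapts as shadows lengthen through the day, which matches the behavior the studio describes.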

Shadows cast on different tiles trigger different voices, all singing in harmony. Additionally, the music depends on the weather and the time of day (due to variations in the length of the shadows). I found this project intriguing because the artists let the audience compose the music: visitors are welcomed by a moment of surprise and an invitation to engage with the piece, as well as to interact with other visitors and passersby. I believe music is a universal language, and it is exciting to see art and music collaborate to bring a community together.

https://www.creativeapplications.net/maxmsp/playin-the-pavement-daily-tous-les-jours-mesa-musical-shadows/

John Legelis – Looking Outwards 11, Computer Music

Teenage Engineering OP-1 Synthesizer/Sampler/Sequencer

Teenage Engineering is a young company from Stockholm, Sweden. They developed the OP-1 instrument in response to the creativity-stifling environment of a completely unstructured Digital Audio Workstation.

The OP-1, as a digital instrument, is far more restrictive to the user than creating music on a computer would be. The creators claim that “…[the] limits boost the creativity. Limitations are OP-1’s biggest feature.”

The device draws inspiration from retro ’80s synthesizers such as the Casio VL-1. Some of the developers grew up with these instruments and found that the physical constraints of older synths spurred unconventional ideas and creativity.

Sample of OP-1 in use.

There are 11 separate synthesizer chips in the device that use varying methods to create different types of sounds.
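To give a rough sense of what “varying methods” of sound creation means, here is a generic JavaScript sketch of two textbook synthesis techniques – a plain sine oscillator and simple two-operator FM. These are illustrative only; they are not the OP-1’s actual engines, and the sample rate and parameters are arbitrary:

```javascript
// Two toy synthesis methods, each producing one second of samples.
const SR = 8000; // sample rate (Hz), kept low for the example

// Additive building block: a plain sine oscillator.
function sine(freq, n = SR) {
  return Array.from({ length: n }, (_, i) =>
    Math.sin(2 * Math.PI * freq * i / SR));
}

// Basic two-operator FM: a modulator oscillator wiggles the
// carrier's phase, producing a much richer spectrum.
function fm(carrier, modulator, index, n = SR) {
  return Array.from({ length: n }, (_, i) =>
    Math.sin(2 * Math.PI * carrier * i / SR +
             index * Math.sin(2 * Math.PI * modulator * i / SR)));
}

const a = sine(440);        // pure A4 tone
const b = fm(440, 110, 2.0); // brassier FM variant of the same pitch
console.log(a.length, b.length);
```

Even with identical pitch and length, the two buffers sound completely different, which is the point of packing multiple synthesis methods into one instrument.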

Hannah Cai—Looking Outwards—11

For this week’s Looking Outwards, I chose this sample of music, generated by David Cope’s “Experiments in Musical Intelligence” computer program. It is billed as a “Bach-style chorale” in the title, which, judging by the comments, confused and angered much of the audience, who said it sounded nothing like Bach. One commenter clarified the distinction: “A ‘Bach chorale’ refers to a style, not a composer – pretty much any SATB vocal piece utilizing the rules of counterpoint. That’s why a computer can do it – once you lay in one of the voices, you really can fill in the rest ‘by the numbers’. But don’t confuse that with actually composing or creating. And no offense, but any real lover of Bach can tell that this sounds nothing like him – it has none of his intrigue or quirkiness (some of which has to do with his deliberate breaking of said rules). It sounds incredibly bland and boring to me. Which is exactly what I’d expect from computer-generated music.”

I found the reactions to this experiment more interesting than the audio itself, which I enjoyed but didn’t find especially remarkable. Personally, I feel that music needs some kind of human creativity or spontaneity/randomness to truly be “art.” Because this piece only “fills in” harmonies based on preset rules, it is less interesting and valuable to me, and apparently to quite a few other people as well. I still find the experiment impressive, though, and I’d love to experiment with some kind of generative music in the future if I can manage to learn how.
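The commenter’s claim – that once one voice is fixed, the rest can be filled in “by the numbers” – can be caricatured in a few lines of JavaScript. This is a drastic simplification of real counterpoint (and of Cope’s program): it just treats each melody note as a chord root and stacks a triad beneath it.

```javascript
// Toy "chorale filler": harmonizes a soprano line by voicing a plain
// triad under each melody note. Real voice-leading rules (and EMI)
// are vastly more involved; this only illustrates the idea of
// mechanical, rule-driven filling-in.
const NOTES = ["C", "D", "E", "F", "G", "A", "B"];

function harmonize(melody) {
  return melody.map(n => {
    const i = NOTES.indexOf(n);
    // treat the melody note as the chord root; third and fifth are
    // two and four scale steps up (wrapped), voiced below it
    return {
      soprano: n,
      alto: NOTES[(i + 4) % 7],  // chord fifth
      tenor: NOTES[(i + 2) % 7], // chord third
      bass: n                    // root, an octave down
    };
  });
}

console.log(harmonize(["C", "F", "G", "C"]));
// e.g. "C" becomes the C-major voicing C / G / E / C
```

The output is harmonically "correct" and utterly mechanical, which is more or less the commenter’s complaint about the generated chorale.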

Dani Delgado Looking Outwards – 11

The laptop orchestra during a performance

The project I chose to research this week is the Stanford Laptop Orchestra (SLOrk for short). This orchestra, which creates full performances by having performers use physical controllers to generate music through laptops, was founded in 2008 by Ge Wang together with students, faculty, and staff at Stanford University’s Center for Computer Research in Music and Acoustics. They use the ChucK programming language to create and synthesize sounds and to design the instruments.

The controllers that can be used to generate music. The knobs at the end can be pulled and twisted by the performers, which then communicates with the laptops.

I find this project fascinating, as it requires skills similar to those of a traditional orchestra (cohesion, practice, and performance) but uses a completely new medium. I was honestly very excited to stumble across this because I was expecting to find a single person composing music; instead, I found an entire group working to synthesize music live, which is impressive.

^A video of one of their performances

Their website

Lan Wei-Looking Outwards-11

The project, ‘Mr Shadow’, was created by scientists at Sony CSL Research Laboratory in 2016.

I looked at many videos about computer music before deciding to investigate this one. For me, most ‘computer music’ only uses computers to save time in producing rhythms, without digging into what computers are uniquely good at. But this piece, produced by artificial intelligence, is special in my view. It contains not only rhythms that lift people’s spirits but also, most importantly, human emotion. It is exciting to see how a ‘dead’ computer can produce such vivid music, with ups and downs, by analysing precedents. And this leads to another question: will computers totally replace artists in the future? I could write an essay about that, but I am not going to discuss the topic here. I just want to say that this project is very exciting and inspiring.

Besides the music itself, the graphic effects are also very good and echo the music well. Overall, the computer-generated music really draws me into its world.

Lingfan Jiang – Looking Outwards 11

This week, I am interested in the project “Looks Like Music”, created by the Japanese designer Yuri Suzuki in 2013. It consists of a miniature robot that detects and follows a circuit – a black line traced in marker pen – interspersed with colored reference points that the device translates into sound. The main reason I admire it is that it is an easy but super fun project that everyone can be involved in. The general public normally cannot be part of an artwork, so sometimes people do not understand it. The algorithm behind it should be fairly simple, too. The robot has a sensor that differentiates between black and white, representing them as “1” and “0”. As for the colors, each color is assigned a specific sound. Similar to the “pixel.get” command in p5.js, the robot can read different colors from the paper.
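The guess above about the algorithm can be sketched in JavaScript: sample a pixel, classify it as line, paper, or colored marker, and map recognized colors to pitches. The brightness thresholds and the color-to-frequency table here are invented for illustration; they are not Suzuki’s actual firmware.

```javascript
// Toy line-follower sound logic. Classifies a sampled RGB pixel as
// the black line ("1"), white paper ("0"), or a colored marker dot
// that maps to a pitch. All thresholds and the tone table are
// illustrative assumptions.
function classifyPixel([r, g, b]) {
  const brightness = (r + g + b) / 3;
  const range = Math.max(r, g, b) - Math.min(r, g, b);
  if (brightness < 60) return { onLine: 1, tone: null };   // black line
  if (brightness > 200 && range < 30)
    return { onLine: 0, tone: null };                      // white paper
  // Otherwise a colored dot: pick a tone by the dominant channel.
  const tones = { r: 440, g: 523, b: 659 };                // A4, C5, E5 (made up)
  const dominant = r >= g && r >= b ? "r" : g >= b ? "g" : "b";
  return { onLine: 0, tone: tones[dominant] };
}

console.log(classifyPixel([10, 10, 10]));  // black → on the line
console.log(classifyPixel([250, 20, 30])); // red dot → a tone
```

A real robot would run this classification continuously over a line-scan sensor, steering on the `onLine` bit and triggering a note whenever `tone` is non-null.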

As for the creator’s sensibilities: being a sound artist, designer, and electronic musician himself, Yuri Suzuki really incorporated his profession into the project. It is a simple graphic artwork combined with sound and electronics. His intention was “to raise public awareness of the way in which sound and music are produced”, and I think he did that really well in this project.

Eliza Pratt – Project 11

click to redraw!

/*
Eliza Pratt
Section E
elpratt@andrew.cmu.edu
Project 11
*/

var face = 5; //face size
var faceY = 100; //starting y coordinate for face
var w = 2; //stroke weight
var eye = {LX: 0, LY: 0, RX: 0, RY: 0}; // eye coordinates
var lipX; // lip x position
var lipY; //lip y position

function setup() {
    createCanvas(480, 360);
    frameRate(10);
}

function draw() {
    background("beige");

    for (var i = 0; i < 3; i++) {
      var turt = makeTurtle(90 + i * 150, faceY);
      turt.penDown();
      turt.setColor("black");

      //draw face features
      faceBrowsNose(turt);
      eyes(turt);
      glasses(turt);
      lips(turt);
    }

    noLoop();

}

//redraw face when the mouse is pressed
function mousePressed() {
  draw();
}


//draws face, eyebrows and nose
function faceBrowsNose(ttl) {
  //FACE
  for (var i = 0; i < 180; i++) {
    turtlepressure(ttl);
    ttl.right(360 / 100 + random(-2.5, 2.5));
    ttl.forward(face + random(-face / 3, face / 3));
  }
  ttl.left(20);

  //LEFT EYEBROW
  for (var i = 0; i < 50; i++) {
    turtlepressure(ttl);
    ttl.right(360 / 100);
    ttl.forward(2 + random(-2, 2));
    //save coordinates at top of brow to assign eye position
    if (i == 25) {
      eye.LX = ttl.x;
      eye.LY = ttl.y + random(10, 25);
    }
  }
  //LEFT NOSE
  for (var i = 0; i < 10; i++) {
    turtlepressure(ttl);
    ttl.right(random(-0.5, .5));
    ttl.forward(1);
  }
  //BOTTOM NOSE
  for (var i = 0; i < 50; i++) {
    turtlepressure(ttl);
    ttl.left(360 / 100);
    ttl.forward(0.5 + random(-1, 1));
    //save bottom of nose coordinates for lip position
    if (i == 25) {
      lipY = ttl.y + random(20, 30);
      lipX = ttl.x + random(-30, 10);
    }
  }
  //RIGHT NOSE
  for (var i = 0; i < 10; i++) {
    turtlepressure(ttl);
    ttl.right(random(-0.5, .5));
    ttl.forward(1);
  }
  //RIGHT EYEBROW
  for (var i = 0; i < 50; i++) {
    turtlepressure(ttl);
    ttl.right(360 / 100);
    ttl.forward(2 + random(-2, 2));
    if (i == 25) {
      eye.RX = ttl.x;
      eye.RY = ttl.y  + random(10, 25);
    }
  }
}


//draws eyes
function eyes(ttl) {
    ttl.penUp();
    ttl.goto(eye.LX, eye.LY);
    ttl.penDown();

    //left eye
    for (var i = 0; i < 100; i++) {
      turtlepressure(ttl);
      ttl.right(360 / 50);
      ttl.forward(.5 + random(-0.75, 0.75));
    }

    ttl.penUp();
    ttl.goto(eye.RX, eye.RY);
    ttl.penDown();

    //right eye
    for (var i = 0; i < 100; i++) {
      turtlepressure(ttl);
      ttl.right(360 / 50);
      ttl.forward(.5 + random(-0.75, 0.75));
    }
}


//draws glasses
function glasses(ttl) {
  ttl.penUp();
  ttl.goto(eye.LX + random(10, 18), eye.LY);
  ttl.penDown();

  //lens 1
  ttl.face(90);
  for (var i = 0; i < 100; i++) {
    turtlepressure(ttl);
    ttl.right(360 / 50 + random(-4, 4));
    ttl.forward(2 + random(-0.75, 0.75));
  }

  ttl.penUp();
  ttl.goto(eye.RX - random(10, 18), eye.RY);
  ttl.face(330);
  ttl.penDown();

  //lens 2
  ttl.face(270);
  for (var i = 0; i < 100; i++) {
    turtlepressure(ttl);
    ttl.right(360 / 50 + random(-4, 4));
    ttl.forward(2 + random(-0.75, 0.75));
  }
}


//draws lips
function lips(ttl) {

  ttl.penUp();
  ttl.goto(lipX, lipY);
  ttl.penDown();
  ttl.face(310);

  //TOP LEFT
  for (var i = 0; i < 20; i++) {
    turtlepressure(ttl);
    ttl.right(360 / 100);
    ttl.forward(1 + random(-1, 1));
  }
  ttl.left(50);

  //TOP RIGHT
  for (var i = 0; i < 20; i++) {
    turtlepressure(ttl);
    ttl.right(360 / 100);
    ttl.forward(1 + random(-1, 1));
  }
  ttl.face(180);

  //LINE
  for (var i = 0; i < 30; i++) {
    turtlepressure(ttl);
    ttl.forward(1);
    ttl.right(random(-2, 2));
  }
  ttl.face(90);

  //BOTTOM LIP
  for (var i = 0; i < 50; i++) {
    turtlepressure(ttl);
    ttl.left(360 / 100);
    ttl.forward(1 + random(-1, 1));
  } 
}


//varies stroke weight to create "hand drawn" effect
function turtlepressure(turtle) {
  w += random(-0.4, 0.4);
  if (w <= 0) w = 0.4;
  else if (w >= 3) w = 2.7;
  turtle.setWeight(w);

} 

/////////////////////////////////////////////////////////////////
function turtleLeft(d) {
    this.angle -= d;
}


function turtleRight(d) {
    this.angle += d;
}


function turtleForward(p) {
    var rad = radians(this.angle);
    var newx = this.x + cos(rad) * p;
    var newy = this.y + sin(rad) * p;
    this.goto(newx, newy);
}


function turtleBack(p) {
    this.forward(-p);
}


function turtlePenDown() {
    this.penIsDown = true;
}


function turtlePenUp() {
    this.penIsDown = false;
}


function turtleGoTo(x, y) {
    if (this.penIsDown) {
      stroke(this.color);
      strokeWeight(this.weight);
      line(this.x, this.y, x, y);
    }
    this.x = x;
    this.y = y;
}


function turtleDistTo(x, y) {
    return sqrt(sq(this.x - x) + sq(this.y - y));
}


function turtleAngleTo(x, y) {
    var absAngle = degrees(atan2(y - this.y, x - this.x));
    var angle = ((absAngle - this.angle) + 360) % 360.0;
    return angle;
}


function turtleTurnToward(x, y, d) {
    var angle = this.angleTo(x, y);
    if (angle < 180) {
        this.angle += d;
    } else {
        this.angle -= d;
    }
}


function turtleSetColor(c) {
    this.color = c;
}


function turtleSetWeight(w) {
    this.weight = w;
}


function turtleFace(angle) {
    this.angle = angle;
}


function makeTurtle(tx, ty) {
    var turtle = {x: tx, y: ty,
                  angle: 0.0, 
                  penIsDown: true,
                  color: color(128),
                  weight: 1,
                  left: turtleLeft, right: turtleRight,
                  forward: turtleForward, back: turtleBack,
                  penDown: turtlePenDown, penUp: turtlePenUp,
                  goto: turtleGoTo, angleto: turtleAngleTo,
                  turnToward: turtleTurnToward,
                  distanceTo: turtleDistTo, angleTo: turtleAngleTo,
                  setColor: turtleSetColor, setWeight: turtleSetWeight,
                  face: turtleFace};
    return turtle;
}

Ever since our second variable face assignment, where we saw examples of Moka’s generative faces, I’ve wanted to code doodles that look like blind contours! I had a lot of fun playing with the turtles and got some pretty great results from randomizing different things. Also, by randomizing the stroke weight at each point, I was able to make my own “brush” of sorts. It was challenging to keep any control over the randomness, but here are some of the beautiful creations that came out of it before I decided to do multiple faces:

friend with hair
derp
sleepy friend
abstract friend

Kade Stewart – Project 11 – Composition

ants

//Kade Stewart
//Section B
//kades@andrew.cmu.edu
//Project-11

//variables storing each of the ants, the repellent values, 
//and the foot variables
var ants = [];
var rpx;
var rpy;
var rpc = 0;
var foot = false;
var time = 0;


function setup() {
    createCanvas(480, 480);
    rectMode(CENTER);

    stroke(255);
    strokeWeight(6);
    strokeJoin(MITER);
    strokeCap(PROJECT);

    //initialize all of the ants (which are actually each of type turtle)
    for (var i = 0; i < 100; i++) {
        ants[i] = makeTurtle(random(0, width), random(0, height));
        ants[i].face(random(0, 360));
        ants[i].penDown();
        ants[i].setWeight(4);
        ants[i].setColor(255);
    }

}

function mousePressed() {
    //when the mouse is pressed, start the foot step

    foot = true;
    time = 120;
    rpx = mouseX;
    rpy = mouseY;
}

function draw() {
    background(0);
    

    //loop through each of the ants in the list
    for (var i = 0; i < ants.length; i++) {
        var t = ants[i];


        //if a foot is coming (if the mouse has been clicked)
        //make the ants start to avoid the shadow of the foot
        if (foot) {
            var d = dist(t.x, t.y, rpx, rpy);
            var f = rpc / (Math.pow(d, 2));
            var dirx = (t.x - rpx) / d;
            var diry = (t.y - rpy) / d;
            t.x += dirx * f;
            t.y += diry * f;
        }


        //make the ant wrap around the screen
        //because there are so many, it just looks like new ones are being added
        if (t.x >= width) {
            t.x -= width;
        } else if (t.x <= 0) {
            t.x += width;
        }
        if (t.y >= height) {
            t.y -= height;
        } else if (t.y <= 0) {
            t.y += height;
        }


        //actually move the ant...finally
        t.forward(5);

    }


    //if the foot is still terrorizing the ants (if there is still time),
    //decrease the time limit and make the repellent force larger
    //otherwise, make sure the foot variable is false
    if (time != 0) {
        time--;
        rpc = map(time, 120, 0, 0, 25000);
    } else {
        foot = false;
    }

    //if the foot is more than 1/3 of a second away, draw its growing shadow
    //otherwise, the foot will start to stomp on the ground
    if (time >= 20) {
        noStroke();
        fill(255);
        ellipse(rpx, rpy, 100 - map(time, 0, 120, 0, 100), 100 - map(time, 0, 120, 0, 100));
    } else if (time > 0 & time < 20) {
        fill(210, 180, 140);
        noStroke();
        push();
        translate(rpx, rpy);
        scale(abs(- 2.5 + time/4));
        angleMode(DEGREES);
        rotate(5);
        rect(20, 45, 50, 80, 10);
        rect(0, 0, 10, 30, 10);
        rotate(5);
        rect(10, 1, 10, 30, 10);
        rotate(5);
        rect(20, 2, 10, 30, 10);
        rotate(5);
        rect(30, 3, 10, 30, 10);
        rotate(5);
        rect(40, 4, 10, 30, 10);
        pop();
        angleMode(RADIANS);
    }

    //the help text
    fill(0);
    stroke(255);
    textSize(20);
    text("click to stomp on these poor ants", width/6 + 10, 20);

}



function turtleLeft(d){this.angle-=d;}function turtleRight(d){this.angle+=d;}
function turtleForward(p){var rad=radians(this.angle);var newx=this.x+cos(rad)*p;
var newy=this.y+sin(rad)*p;this.goto(newx,newy);}function turtleBack(p){
this.forward(-p);}function turtlePenDown(){this.penIsDown=true;}
function turtlePenUp(){this.penIsDown = false;}function turtleGoTo(x,y){
if(this.penIsDown){stroke(this.color);strokeWeight(this.weight);
line(this.x,this.y,x,y);}this.x = x;this.y = y;}function turtleDistTo(x,y){
return sqrt(sq(this.x-x)+sq(this.y-y));}function turtleAngleTo(x,y){
var absAngle=degrees(atan2(y-this.y,x-this.x));
var angle=((absAngle-this.angle)+360)%360.0;return angle;}
function turtleTurnToward(x,y,d){var angle = this.angleTo(x,y);if(angle< 180){
this.angle+=d;}else{this.angle-=d;}}function turtleSetColor(c){this.color=c;}
function turtleSetWeight(w){this.weight=w;}function turtleFace(angle){
this.angle = angle;}function makeTurtle(tx,ty){var turtle={x:tx,y:ty,
angle:0.0,penIsDown:true,color:color(128),weight:1,left:turtleLeft,
right:turtleRight,forward:turtleForward, back:turtleBack,penDown:turtlePenDown,
penUp:turtlePenUp,goto:turtleGoTo, angleto:turtleAngleTo,
turnToward:turtleTurnToward,distanceTo:turtleDistTo, angleTo:turtleAngleTo,
setColor:turtleSetColor, setWeight:turtleSetWeight,face:turtleFace};
return turtle;}

In the process of writing my code, I realized that the turtles each look like little ants. So, I made a shadow descend upon the ants, with a foot eventually stomping down on them. Don’t worry – none of them actually dies, mostly because that would be annoying to code.

Plain ol screen
here comes the foot

Romi Jin – Looking Outwards 11

Keiko Uenishi is known for experimenting with “restructuring and reanalyzing one’s relationship with sounds in sociological, cultural and/or psychological environments.” One of her works that stood out to me is an installation entitled CDL, or Car Décalé (Légèrement)/Because Shifted (Slightly). Uenishi questions ideas of space through what she calls aural perceptions, and her experiments expose people to temporal and spatial experiences that are usually new to them.

(Below is an example of one of her sound experiments about found objects.)

CDL is an “experiential/phenomenological sound-performance/installation/method” that uses audio-feedback to redefine a space. Uenishi uses simple spatial parameters as a setting for the experience (i.e. no decorations, no special flooring or walls, no furniture, no columns, etc.) as there should be no unnecessary distractions. The version of CDL above uses found objects (mainly construction materials) to create a part to whole experience — a series of smaller, temporary structures within a larger structure (the room). After attempted experiments, she would add or alter the objects to, in conjunction, alter the spatial experience of this specific CDL.