Nina Yoo Section E- Looking Outwards-02

I admire how the piece seems to draw me into the projection even though I, myself, am behind a screen. It feels 3D despite having no physical presence in real life. The generative algorithm most likely considers the space of the projection (the room it is in) and sets patterns within cubes of space, essentially exploring the interior of each cube as much as possible, then expanding the cube size to the scale of the room itself before shrinking back down again. The artist behind this program is exploring a 3D sculpture that displays the possibilities of geometric shapes as a presentation (much like a movie). He turns both the simplicity and the complexity of simple shapes into an art form while also experimenting with a “no sound” experience. Even though the room is quiet, the projection feels very loud, letting the audience experience a more 3D landscape by focusing only on the images with silence in the background.
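To make my guess more concrete, here is a minimal p5.js sketch of the kind of cube-based space I am imagining: a grid of wireframe cubes that slowly expands and shrinks. The grid size, spacing, and timing are entirely my own assumptions, not the artist’s actual code.

// Hypothetical sketch: fill a region with small wireframe cubes, then
// slowly scale the whole grid up and back down, loosely imitating the
// way the projection seems to expand to the room and shrink again.
function setup() {
    createCanvas(480, 480, WEBGL);
}

function draw() {
    background(0);
    stroke(255);
    noFill();
    // overall scale oscillates between roughly 0.5x and 1.5x
    var s = 1 + 0.5 * sin(frameCount * 0.02);
    scale(s);
    rotateY(frameCount * 0.01);
    // a 4 x 4 x 4 grid of cubes centered on the origin
    for (var x = -2; x < 2; x++) {
        for (var y = -2; y < 2; y++) {
            for (var z = -2; z < 2; z++) {
                push();
                translate(x * 60 + 30, y * 60 + 30, z * 60 + 30);
                box(40);
                pop();
            }
        }
    }
}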

https://vimeo.com/283848376

Yousuke Fuyuma Live at Lacking Sound Festival 2018

Looking Outwards – 02 Min Lee

 

Untitled work by Otto Beckmann

The ReCode Project is an online archive of computer-generated art that takes its content from “Computer Graphics and Art”, a magazine that published this kind of work from 1976 to 1978. The project is a collection of pieces from many different generative artists, such as this untitled work by Otto Beckmann.

The work itself is very mysterious, but what I admire about it is the artist’s ability to play with empty and dark space to evoke the distinct style of watercolor painting, a very different medium from computer generation (in 1977, no less).

Part of the beauty of the archive is that no code is shown, which leaves the algorithm behind each work up to interpretation. I am amazed at how the artist achieved the watercolor look and managed to suggest shadows, though I can only guess at how his algorithm works. Still, the artist’s skill in other mediums shines through in this piece, and his command of computer-generated art opened my eyes to a different medium.
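In the spirit of the ReCode Project, which re-creates these works in code, here is a hedged p5.js guess at one way to fake a watercolor-like wash: stacking many translucent, jittered blots so their overlaps read as soft edges and shadows. This is only my approximation, not Beckmann’s method.

// Guessed approach: stack many translucent, slightly jittered ellipses
// so their overlaps build up soft edges and darker "shadow" pools.
function setup() {
    createCanvas(400, 400);
    background(245);
    noStroke();
    noLoop();
}

function draw() {
    for (var i = 0; i < 300; i++) {
        // each blot drifts around a loose center, like pigment spreading
        var cx = width / 2 + randomGaussian(0, 60);
        var cy = height / 2 + randomGaussian(0, 60);
        fill(40, 40, 60, 8); // very low alpha so layers accumulate
        ellipse(cx, cy, random(20, 90), random(20, 90));
    }
}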

Source: http://recodeproject.com/artwork/v2n1untitled_Beckmann_Otto_06

Shirley Chen-Looking Outwards-02

Berlin-based photographer Polina Efremova found a unique type of datamosh glitch by accidentally running a new video player on a very old PC. As the videos played, glitches appeared, and she captured these moments as screenshots, turning the scenes into a series of photographs called “Destruction”. It is interesting that this special type of photography was discovered by accident, and that the moments she captured cannot be reproduced. The randomness and faults generated by the software configuration led to an unexpected, uncontrollable and unique result. Although the glitch effect can be produced artificially nowadays, I think the most valuable lesson from her experience is that a new type of art, or a new method of generating art, can be found at interesting moments in our lives, and we need to learn how to find and capture them. The computer is a tool for us to process artworks, but it can also be a generator of art that leads to unexpected results.
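As a rough illustration of how a glitch like this can be imitated in software today, the short p5.js sketch below shifts random horizontal bands of the canvas sideways. It only mimics the look; Efremova’s images came from a real hardware and software mismatch that cannot be reproduced.

// Draw a simple striped pattern, then "glitch" it by copying random
// horizontal slices of the canvas back onto itself with a sideways offset.
function setup() {
    createCanvas(480, 320);
}

function draw() {
    // base image: colored vertical stripes
    noStroke();
    for (var x = 0; x < width; x += 20) {
        fill(100 + x / 4, 80, 200 - x / 4);
        rect(x, 0, 20, height);
    }
    // glitch pass: displace a handful of horizontal bands
    for (var i = 0; i < 8; i++) {
        var y = random(height);
        var h = random(5, 30);
        var offset = random(-60, 60);
        copy(0, y, width, h, offset, y, width, h);
    }
    noLoop(); // keep one frozen "screenshot", like her captures
}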


Photograph Produced by Polina Efremova Running a New Video Player on an Old PC


Photograph Series: “Destruction”

Original Source:
https://creators.vice.com/en_us/article/zmmwk4/polina-efremova-glitched-out-photos-vintage-pc

Rachel Lee Section E Looking Outwards 02

A generative art project I find inspirational is David Wicks’ Innovation Clock. The primary reason I was drawn to this project was its potential for inspiration and impact beyond aesthetics: the clock extracts real-time data from Twitter about the discussion of innovative ideas in order to stimulate conversation and encourage people to create. At a school like CMU, I am constantly inspired by the motivations, passions and compelling projects that my peers are working on, and I feel like an artefact like this clock encompasses many attributes that I admire about my classmates and would encourage me to try new things. The programmers of this creation wrote an algorithm that extracted up-to-the-minute data from Twitter, presumably posts with hashtags related to innovative ideas or fields. Wicks’ artistic sensibilities are revealed through the lively text functions (hashtags are given more typographic hierarchy than the body of the Twitter posts) as well as dynamic lighting that reveals a coded map of activity, which together show his consideration of space, composition and typography, as well as drama and impact.
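I can only guess at the data pipeline, but a toy p5.js version might filter a feed for innovation-related hashtags and give the hashtags more typographic weight than the post text. The posts array and keyword list below are invented for illustration; the real clock pulls live Twitter data.

// Hypothetical, hard-coded "feed" standing in for live Twitter data.
var posts = [
    { text: "prototyping a solar charger", tags: ["#innovation", "#energy"] },
    { text: "our robotics demo went live", tags: ["#robotics"] },
    { text: "lunch was great today", tags: ["#food"] }
];
var keywords = ["#innovation", "#robotics", "#energy"];

function setup() {
    createCanvas(480, 320);
    background(20);
    fill(255);
    var y = 40;
    for (var i = 0; i < posts.length; i++) {
        // keep only posts whose tags match the innovation-related keywords
        var matched = posts[i].tags.filter(function(t) {
            return keywords.indexOf(t) >= 0;
        });
        if (matched.length > 0) {
            textSize(24); // hashtags get the larger, more prominent type
            text(matched.join(" "), 20, y);
            textSize(12); // post body is set smaller underneath
            text(posts[i].text, 20, y + 18);
            y += 60;
        }
    }
}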

Innovation Clock by David Wicks, 2015.

Min Jun Kim- Looking Outwards 2

This project is called ‘Immersive: Playing with Fauna’ by Dr. WooHoo! While looking for generative art, I was immediately drawn in by the person’s name. I don’t know if that is his actual legal name or a pseudonym, but I thought it was a great choice from a marketing standpoint. The art is very unique and innovative to say the least: it is an immersive art experience where an audience can interact with the visual fauna floating around them. The fauna rests in equilibrium when untouched for a while, but enters a dynamic dance when engaged by people. I admire the fact that it is so unique yet refined, and that it has great reaction time with minimal latency. It is difficult for interactive software to be so detailed and intricate while having so little latency.
I suppose the algorithms at play in this piece involve two parts: one for standard behavior when in equilibrium, and one that is called when there is a stimulus. The stimulus function feels very similar to reading mouseX and mouseY, except with camera coordinates. I think the artist’s true genius (his artistic sensibilities) comes into play in how he conveys fun and excitement through this art form.
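Here is a small p5.js sketch of the two-part structure I am imagining, with the mouse standing in for the camera input: the particles drift gently on their own (equilibrium) and scatter when the cursor comes near (stimulus). The drift and repulsion rules are my own invention, not Dr. WooHoo!’s code.

// Particles drift gently when left alone (equilibrium) and scatter
// away from the mouse when it comes close (stimulus), the way the
// fauna reacts to people moving in front of the camera.
var particles = [];

function setup() {
    createCanvas(480, 320);
    for (var i = 0; i < 80; i++) {
        particles.push({ x: random(width), y: random(height) });
    }
}

function draw() {
    background(10, 30, 40);
    noStroke();
    fill(160, 230, 200);
    for (var i = 0; i < particles.length; i++) {
        var p = particles[i];
        // equilibrium: small random drift
        p.x += random(-1, 1);
        p.y += random(-1, 1);
        // stimulus: push away from the mouse when it is nearby
        var d = dist(p.x, p.y, mouseX, mouseY);
        if (d < 80) {
            p.x += (p.x - mouseX) * 0.05;
            p.y += (p.y - mouseY) * 0.05;
        }
        // keep particles on the canvas
        p.x = constrain(p.x, 0, width);
        p.y = constrain(p.y, 0, height);
        ellipse(p.x, p.y, 6, 6);
    }
}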

Source Link: https://www.behance.net/gallery/44562239/Immersive-Playing-with-Fauna

Min Jun Kim – Project 2 Variable Face

sketch

/*
Min Jun Kim
minjunki@andrew.cmu.edu
15104-B
Project 2
*/



var reee = 150;        // overall face size (height of the face ellipse)
var faceDiff = 0.8;    // ratio of face width to face height
var faceCol = 250;     // green channel of the face color
var eyeSize = 30;      // width of each eye
var eyeDiff = 0.8;     // ratio of eye height to eye width
var pupilCol = 1;      // blue channel of the pupil color
var browSlant = 10;    // vertical offset of the outer eyebrow ends
var mouthFill = 100;   // blue channel of the mouth color
var mouthSize = 30;    // extra width of the mouth
var noseWidth = 10;    // half-width of the nose triangle
var noseHeight = 25;   // height of the nose triangle
var backColor = 100;   // (declared but not used below)
var browS = 2;         // eyebrow stroke weight
var hatHeight = 100;   // y position of the top of the hat



function setup() {
    createCanvas(640, 480);
}

function draw() {
	// background design
	noStroke();
	// green background
	background(107, 209, 150);
	// aqua background
	fill(110, 206, 251);
	rect(0, 0, width * 3 / 4, height);
	// dark blue background
	fill(97, 128, 234);
	rect(0, 0, width / 2, height);
	// regular blue background
	fill(80, 146, 247);
	rect(0, 0, width / 4, height);
	// stripe
	fill(255, 235, 250);
	quad(0, 230, 0, 170, 170, 0, 230, 0);
	// face color
	fill(255, faceCol, 180);
	// face
	ellipse(width / 2, height / 2, reee * faceDiff, reee);
	// eye fill
	fill(250, 250, 250);
	// left eye
	ellipse(width / 2 - reee * faceDiff * 0.25, height / 2, eyeSize, eyeSize * eyeDiff);
	// right eye
	ellipse(width / 2 + reee * faceDiff * 0.25, height / 2, eyeSize, eyeSize * eyeDiff);
	// pupil fill
	fill(100, 100, pupilCol);
	// left pupil
	ellipse(width / 2 - reee * faceDiff * 0.25, height / 2, 10, 10);
	// right pupil
	ellipse(width / 2 + reee * faceDiff * 0.25, height / 2, 10, 10);
	// eyebrow stroke
	stroke(1);
	strokeWeight(browS);
	// left eyebrow
	line(width / 2 - reee * faceDiff * 0.35, height / 2 - reee * faceDiff * 0.2 + browSlant, width / 2 - reee * 0.1, height / 2 - reee * 0.1);
	// right eyebrow
	line(width / 2 + reee * faceDiff * 0.35, height / 2 - reee * faceDiff * 0.2 + browSlant, width / 2 + reee * 0.1, height / 2 - reee * 0.1);
	noStroke();
	// mouth fill
	fill(255, 100, mouthFill);
	// mouth
	ellipse(width / 2, height / 2 + reee * 0.28, 10 + mouthSize, 10);
	// nose fill
	fill(255, 175, 123);
	// nose
	triangle(width / 2 - noseWidth, height / 2 + reee * 0.15, width / 2 + noseWidth, height / 2 + reee * 0.15, width / 2, height / 2 + reee * 0.15 - noseHeight);
	// hat
	stroke(1);
	strokeWeight(3);
	line(width / 2 - 110, height / 2 - reee * 0.2, width / 2 + 110, height / 2 - reee * 0.2);
	fill(1);
	rect(width / 2 - reee * faceDiff * 0.45, hatHeight, reee * faceDiff * 0.9, height / 2 - reee * 0.2 - hatHeight);
}

function mousePressed() {
	reee = random(130,230);
	faceDiff= random(0.7,1);
	faceCol = random(190,220);
	eyeSize = random(20,35);
	eyeDiff= random(0.6,1);
	pupilCol = random(1,250);
	browSlant = random(1,12);
	mouthFill = random(1,255);
	mouthSize = random(1,30);
	noseWidth = random(8,15);
	noseHeight = random(20,30);
	browS = random(1,3);
	hatHeight = random(0,100);


}

This is my Project 2 variable face. It was actually quite fun to do because I got to see a large number of faces. I wanted to do a British-soldier-inspired piece. It took a bit longer than I wanted, though, haha.

Sarah Yae Looking Outwards 2 Section B

“Boil Up: Realtime Feeding Frenzy” (2013) was created by Robert Hodgin. His work can be explored at:

http://roberthodgin.com/portfolio/work/boil-up/

Hodgin’s animated installation of the sea and its fish patterns amazed me. Although the audience receives a single experience, there were many procedural layers involved in creating it. Hodgin had to break his project down into multiple steps that built on each other. For example, his first step was forming the bait ball by modeling group behavior; only then did he introduce the smaller predators.
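Below is a very reduced p5.js sketch of that first layering step, group behavior plus a predator, with the mouse playing the predator. The cohesion and avoidance rules are my own simplification, not Hodgin’s GPU-based implementation.

// Bait fish are pulled toward the group's center (cohesion) and pushed
// away from the "predator" at the mouse, giving a rough bait-ball motion.
var fish = [];

function setup() {
    createCanvas(480, 320);
    for (var i = 0; i < 120; i++) {
        fish.push({ x: random(width), y: random(height) });
    }
}

function draw() {
    background(5, 25, 50);
    // find the school's center of mass
    var cx = 0;
    var cy = 0;
    for (var i = 0; i < fish.length; i++) {
        cx += fish[i].x;
        cy += fish[i].y;
    }
    cx /= fish.length;
    cy /= fish.length;
    noStroke();
    fill(200, 220, 255);
    for (var j = 0; j < fish.length; j++) {
        var f = fish[j];
        // cohesion: drift toward the center of the group, with jitter
        f.x += (cx - f.x) * 0.01 + random(-1.5, 1.5);
        f.y += (cy - f.y) * 0.01 + random(-1.5, 1.5);
        // avoidance: flee the predator (mouse) when it gets close
        if (dist(f.x, f.y, mouseX, mouseY) < 70) {
            f.x += (f.x - mouseX) * 0.08;
            f.y += (f.y - mouseY) * 0.08;
        }
        ellipse(f.x, f.y, 4, 4);
    }
}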

The algorithm that generated the work was apparently “GPU-based,” and the piece was designed to be displayed via four projectors in a cylindrical room.

Hodgin’s artistic sensibilities manifested in his algorithm when he had to code the design of the fish as well as the sea to make them look realistic.

Tuna model in “Boil Up: Realtime Feeding Frenzy”

JohnLegelis – Looking Outwards – 2

Looking Outwards 02: Generative Art

MECHANICAL PLOTTER DRAWINGS – MACHINE ZEN

BY LIA

Machine Zen by LIA is a generative art project that combines the precision of a robotic arm with the infinite variability of a paint brush. This project beautifully joins the deterministic nature of computer-generated art with the individual personality of a brushstroke to create art that looks human, a rarity among computer-generated work. The algorithm that created this series likely involved a simple program that instructs the robotic arm (in this case an AxiDraw mechanical plotter) to move in a circle while slowly raising the brush. Many of LIA’s works, like this one, are not inherently complex in their synthesis. Instead, the mark of her work comes from a highly polished final product whose elegance is derived from the simplicity of the design. I believe LIA could build upon the minimal style she has already perfected to create more complex pieces that extend beyond “proof of concept” or first-word art.
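Here is a hedged, on-screen stand-in for that instruction set in p5.js: trace a circle while gradually “lifting” the brush, which I approximate by thinning and fading the stroke as the angle advances. The real AxiDraw is driven by motor commands, so this only suggests the idea.

// Walk around a circle; as the angle advances, pretend the brush is
// rising by reducing stroke weight and opacity until the mark vanishes.
function setup() {
    createCanvas(400, 400);
    background(250);
    noFill();
    var cx = width / 2;
    var cy = height / 2;
    var r = 140;
    var steps = 300;
    for (var i = 0; i < steps; i++) {
        var a = map(i, 0, steps, 0, TWO_PI);
        var lift = i / steps;                // 0 = brush down, 1 = fully lifted
        stroke(30, 255 * (1 - lift));        // fades as the brush rises
        strokeWeight(8 * (1 - lift) + 0.5);  // thins as the brush rises
        point(cx + r * cos(a), cy + r * sin(a));
    }
}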

Demonstration of Machine Zen Created by a Robotic Arm Holding a Brush

Jamie Dorst Looking Outwards 02

For this week’s Looking Outwards post, I am choosing to write about Daniel Eden’s Drawing With Numbers project. Eden has created many works of generative art, all made with Processing, p5.js, or openFrameworks.

Some of Eden’s pieces, with his captions describing the code behind them:

“Pack as many circles as possible within another circle, ensuring they don’t overlap. Draw a line through the middle of each of the sub-circles at a random angle.”
“Given an origin and a parallel destination, draw 1000 points of varying transparency between them. Using Perlin noise, calculate a delta vector for both origin and destination, with x coordinates between -0.2 and +0.5, and y coordinates between -1 and +2. Add the delta vectors to the origin and destination. Repeat until either the origin or destination points are at least 80px from the bottom of the canvas.”
“Plot a series of connected points around the center of the canvas, using three-dimensional Perlin noise to vary the radius. Repeat this with an increasing base radius, stepping forward through the Perlin noise function to slightly vary the next shape.”
“One example of the kinds of sketches that would collapse on p5.js and demanded a more powerful medium.” Titled: Fabric

I selected this because I admired that he created these as a way to work around his inability to draw traditionally. He drew inspiration from designs he saw in the real world, then found a way to create them with his computer. I was attracted to the simplicity of the black-and-white patterns and how he focused on making the shapes emulate movement. I think it would be interesting to see the actual code behind these pieces (versus just the pseudocode) to see how complex it is. Some of them seem doable to me, like the circle filled with dashes (a rough attempt at that one follows below), while others seem much more complicated. I also found his blog post about how he began creating generative art interesting, because it describes how he started out with p5.js, which is what we are using in this course.
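That first caption reads almost like pseudocode already, so here is a rough p5.js attempt at it: pack non-overlapping circles inside a larger circle by rejection sampling, then draw a line through each center at a random angle. The rejection-sampling approach and all the size parameters are my assumptions, not Eden’s actual code.

// Pack non-overlapping circles inside a big circle by trial and error,
// then draw a line through each one's center at a random angle.
function setup() {
    createCanvas(400, 400);
    background(255);
    noFill();
    stroke(0);
    var R = 180; // radius of the enclosing circle
    ellipse(width / 2, height / 2, R * 2, R * 2);
    var circles = [];
    for (var tries = 0; tries < 5000; tries++) {
        var r = random(5, 25);
        var x = width / 2 + random(-R + r, R - r);
        var y = height / 2 + random(-R + r, R - r);
        // candidate must sit inside the big circle...
        if (dist(x, y, width / 2, height / 2) > R - r) continue;
        // ...and must not overlap any circle already placed
        var ok = true;
        for (var i = 0; i < circles.length; i++) {
            var c = circles[i];
            if (dist(x, y, c.x, c.y) < r + c.r) {
                ok = false;
                break;
            }
        }
        if (!ok) continue;
        circles.push({ x: x, y: y, r: r });
        ellipse(x, y, r * 2, r * 2);
        // line through the center at a random angle
        var a = random(TWO_PI);
        line(x - r * cos(a), y - r * sin(a), x + r * cos(a), y + r * sin(a));
    }
}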

Daniel Teague – Project01 – Face

face_portrait

function setup() {
    createCanvas(600, 600);
    background(255); // white background
    /* Daniel Teague - 15-104b - dteague@andrew.cmu.edu - project-01 */
}

function draw() {
    // face outline
    noFill();
    stroke(200, 152, 105);
    ellipse(300, 250, 150, 200);
    // left eye
    fill(0, 0, 0);
    stroke(0, 0, 0);
    ellipse(275, 200, 40, 25);
    // right eye
    fill(0, 0, 0);
    stroke(0, 0, 0);
    ellipse(325, 200, 40, 25);
    // short horizontal lines beside and between the eyes
    stroke(0, 0, 0);
    line(235, 200, 255, 200);
    line(345, 200, 365, 200);
    line(295, 200, 305, 200);
    // red arc below the eyes
    stroke(255, 0, 0);
    noFill();
    arc(300, 275, 75, 75, 0, PI);
    // larger arc for the lower face
    stroke(0, 0, 0);
    arc(300, 335, 75, 150, 0, PI);
    // vertical and horizontal lines below the face
    stroke(0, 0, 0);
    line(300, 405, 300, 455);
    line(285, 420, 315, 420);
}

I can honestly say this was a lot of fun to do. Playing around with the code to get the results I wanted was interesting and funny at times. I almost had a Kermit the Frog at one point, because the eyes were on the top of the head instead of inside it. Looking forward to seeing what we will need to do in the future.