mmirho – Looking outwards – Section A

“Fractal Effervescence,” by David April, is a project that has impressed me greatly. The image, which I will add here, is a computer-generated fractal: an image or figure constructed of many parts, each of which resembles the entire figure. In a sense, if you look closely at any piece of the figure in the image, you will find it’s the same image, just smaller.

The program used to generate this kind of art is based on recursion, which I hope to learn more about in this course. Recursion is a programming technique in which a function calls itself, turning the whole function into a kind of loop that processes the same rule at smaller and smaller scales.
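As a small illustration of the self-calling idea (not the actual program behind “Fractal Effervescence,” which isn’t available here), consider the Koch curve, a classic fractal where every segment is replaced by four segments each one third as long:

```javascript
// Illustrative sketch of recursion on the Koch curve.
// Each step replaces every segment with four segments,
// each one third as long -- the same rule applied to
// ever-smaller parts, which is what makes it a fractal.

function kochSegmentCount(depth) {
    if (depth === 0) return 1; // base case: one segment
    return 4 * kochSegmentCount(depth - 1); // each segment becomes four
}

function kochTotalLength(depth, startLength) {
    if (depth === 0) return startLength;
    // four pieces, each a third of the previous length
    return kochTotalLength(depth - 1, startLength * 4 / 3);
}
```

Because each call hands a smaller version of the problem back to the same function, the code mirrors the structure of the image: zooming in on a piece reveals the same rule at work.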

I’m unsure of the date of this art, though I do know it was created by David April, and the title is “Fractal Effervescence”. The artist is very interested in the concept of fractals and works to find new forms.

aerubin-LookingOutwards-02-Section_C

Utilizing an algorithm, a computer generated a jazz improvisation in the form of a melody over a walking bass line. I surprisingly found it very pleasing to listen to, compared with what I would expect a computer to compose, since a computer lacks the humanity and emotion that are essential to music, composition, and art. It also genuinely sounds like jazz: the interaction between the chord progression and the melody emulates the work of famous jazz artists.

Although it is undeniably jazz improvisation, I felt that it lacked variation in rhythm, range, and complexity. As a classically trained music performance major, I do not know everything there is to know about the genre of jazz, but when I listen to improvisation, it is not typically just straight triplets and quarter notes. In live improvisation, there is customarily some aspect of “extended techniques” (unconventional methods of playing an instrument, such as glissando or pizzicato on a string instrument). That said, this is one of the reasons why performers, composers, and artists can never be replaced by technology alone: it lacks the innate creativity and range of emotions humans possess.

The description of the video gives a brief explanation of the code and how it operates. The author of the program set predefined chord changes, which are the basis of jazz music. I also suppose that the author preset most of the rhythms, as there is not much rhythmic variety. I believe the algorithm provided a set number of notes to choose from for each chord, and the computer most likely selected notes at random to play over the preset rhythm. Overall, it is amazing to see what kind of art can be partially generated by artificial intelligence, although I do not believe it will ever be enough to completely replace an artist’s craft.
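The guessed-at approach above can be sketched in a few lines. This is purely my own illustration of “preset chords, random notes from each chord’s pool, fixed rhythm” — the chord names and note pools are invented, not taken from the author’s actual code:

```javascript
// Hypothetical sketch: preset chord changes, a pool of chord
// tones per chord, and random note choices over a fixed rhythm.

var chordTones = {
    "Cmaj7": ["C", "E", "G", "B"],
    "A7":    ["A", "C#", "E", "G"],
    "Dm7":   ["D", "F", "A", "C"],
    "G7":    ["G", "B", "D", "F"]
};

// Generate notesPerChord random melody notes for each chord
// in the progression, in order.
function improvise(progression, notesPerChord) {
    var melody = [];
    for (var i = 0; i < progression.length; i++) {
        var pool = chordTones[progression[i]];
        for (var j = 0; j < notesPerChord; j++) {
            melody.push(pool[Math.floor(Math.random() * pool.length)]);
        }
    }
    return melody;
}
```

Even this toy version shows why the output can sound “like jazz” while still feeling rhythmically flat: the harmony is guaranteed by the chord pools, but the rhythm never varies.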

ICD/ITKE Research Pavilion 2015/2016

Built Pavilion Street View

The Institute for Computational Design (ICD) and the Institute of Building Structures and Structural Design (ITKE) carry out a biomimetic investigation into shell structures every year.

It is a holistic process in which the design is based on patterns found in nature, and the fabrication is carried out with robotic arms.

The pavilion is the first of its kind to employ industrial sewing of wood elements on an architectural scale. It is part of a successful series of research pavilions which showcase the potential of computational design, simulation and fabrication processes in architecture.

Based on both the biological principles and the material characteristics, the material system was developed as a double-layered structure similar to the secondary growth in sand dollars. The building elements consist of extremely thin wood strips. Instrumentalising the anisotropy of wood, these strips are custom-laminated so that the grain direction and thickness correspond to the differentiated stiffness required to form parts with varying radii.

Aerial View
A computational tool calculates the necessary plywood veneer layout in order to elastically bend a strip into its desired curvature
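As a rough illustration of the kind of calculation such a layout tool might perform (this is my own sketch from thin-strip beam bending, not the project's actual tool): when a thin strip of thickness t is bent elastically to radius R, the peak surface strain is approximately t / (2R), so for a given allowable strain of the veneer there is a maximum thickness that can still be bent to a target radius. The strain limit used below is a made-up placeholder.

```javascript
// Illustrative only: thin-strip elastic bending.
// Peak surface strain of a strip of thickness t bent to radius R.
function bendingStrain(thickness, radius) {
    return thickness / (2 * radius);
}

// Largest thickness that can be bent elastically to `radius`
// without exceeding the allowable material strain.
function maxThicknessFor(radius, allowableStrain) {
    return 2 * radius * allowableStrain;
}
```

Tighter radii therefore demand thinner laminates, which matches the description above: grain direction and thickness are tuned per part to hit the required curvature.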

This project explores robotic sewing, not only to join the individually bent plywood strips that form a segment, but also to prevent potential delamination. An industrial robot assists during the assembly and bending of the strips that make up one element, and then locks the pre-assembled segment in shape by feeding it through a stationary industrial sewing machine: during fabrication the robot moves the segment through the machine so that the strips are connected.

The computational design tool incorporates all biological principles while following architectural and structural constraints.

This type of project falls into pure research and innovative morphogenesis-based architectural design, in which the geometry is generated by an agent-based system governed by a set of growth rules.
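A minimal, hypothetical sketch of what “an agent-based system with growth rules” means: cells grow outward from a seed, one ring per step, by repeatedly applying a local rule. Real morphogenetic design tools are far richer (they encode biological behaviour and structural constraints), but the core loop looks like this:

```javascript
// Toy agent-based growth on a grid: start from a seed cell
// and repeatedly apply one local rule -- "occupy your four
// orthogonal neighbours" -- to every occupied cell.
function grow(steps) {
    var occupied = { "0,0": true }; // seed cell at the origin
    for (var s = 0; s < steps; s++) {
        var next = Object.assign({}, occupied);
        for (var key in occupied) {
            var p = key.split(",").map(Number);
            [[1, 0], [-1, 0], [0, 1], [0, -1]].forEach(function (d) {
                next[(p[0] + d[0]) + "," + (p[1] + d[1])] = true;
            });
        }
        occupied = next;
    }
    return Object.keys(occupied).length; // number of grown cells
}
```

The geometry is never drawn directly; it emerges from repeated application of the rule, which is the essential idea behind morphogenetic design.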


rsp1-Section B – Looking Outwards-02

A person interacting with the leaves of the plant, the light behind it glowing softly in response to the touch.


Two people interacting with the Lumifolia artwork.

http://www.digitalarti.com/productions/lumifolia-interactive-garden/

What drew me to this project was the fact that it integrates nature with technology. I was intrigued by the title “interactive garden,” because it was something I had not heard of until now. My initial thought was that an interactive garden just required people to be in the space, maybe even walk through it, and experience greenery that is essentially static. However, with further research, I found that the trees themselves are actually interactive.

Created by duo Scenocosme for the Parisian Airports Company, this interactive garden “questions sensitivity, artistic, and musical relationships with the plants and the environment” according to its official website. The subtle electrostatic induction from the human body activates a soft light that illuminates the plant itself while its leaves are being caressed. The closer the proximity of the touch, the brighter the lights become.
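The “closer touch, brighter light” behaviour can be pictured as a simple mapping from hand-to-leaf distance to brightness. This is only an illustration of the interaction described; the actual sensing electronics and response curve of Lumifolia are not public here, and the distances below are invented:

```javascript
// Hypothetical proximity-to-brightness mapping for the
// interaction described: a hand nearer the leaf produces a
// brighter light, fading linearly to off at maxDistanceCm.
function brightnessFor(distanceCm, maxDistanceCm) {
    if (distanceCm >= maxDistanceCm) return 0;  // out of range: light off
    if (distanceCm <= 0) return 255;            // touching: full brightness
    return Math.round(255 * (1 - distanceCm / maxDistanceCm));
}
```

A real electrostatic-induction sensor would feed a noisy analog signal into a mapping like this, likely with smoothing, but the principle is the same.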

Below is a video visually documenting the process of this artwork.

I found it a refreshing discovery because the idea was very unique and unusual. I believe that this artwork can be developed even further and used for a greater purpose, such as a public installation that would help improve life in and around dark, uninteresting occupied spaces.

ctv-Looking_Outwards-02

The typography work (and every piece) by Q Shim is a visual orchestration of parts, wholes, shapes, and textures. He is a professor in the School of Design at CMU and previously studied around the world. This video in particular fascinates me because much of his work brings digital concepts into the physical (non-digital) world. He sets parameters, rules, and variables to create his pieces, then curates those variables to produce visually engaging pieces of art and design. Although a lot of his generative art is controlled by computer, in one piece he creates stamps by hand, then places those stamps along a grid following an algorithmic process.

hyt-Looking-Outward-02: Deep Learning Drawing Machine

Shantell Martin and Sarah Schwettmann working on the “Mind the Machine” prototype. 2017
Template products of the “Mind the Machine”

For most people in the world, creative processes and ideas are usually associated with words such as “unique,” “special,” or “innovative.” However, I recently discovered a British artist who crafted something that might be contrary to people’s conventional connotations.

Shantell Martin, currently residing in New York, collaborated with computational cognitive neuroscientist Sarah Schwettmann to create a systematic “drawing machine” trained with deep neural learning. What’s interesting about this algorithm-based drawing is that the system learns from three hundred of Shantell Martin’s own drawings, then extracts common, recurring elements and attempts to rearrange the patterns into new creations. According to the artist herself, as she is more of an improvisational, “stream of consciousness” kind of artist, this deep learning tool has enabled her to better understand her “system of thought” and artistic style.

Personally, I find this experimental art project intriguing since it’s almost a prototype machine for a future world: disassembling an artist’s elements of style and ideology and mass-producing seemingly hand-drawn artworks for an audience. I think this project is worth pondering for all of us as we step into the creative industry in the later days of our lives: how can we put machines to greater use while at the same time maintaining our own uniqueness and integrity?

hyt-Project-02: Minion Faces

hyt-02-Minions

// helen tsui
// 15-104 section d
// hyt@andrew.cmu.edu
// project-02-minions-variation

var eyeSize = 50;
var mouthSize = 30; 
var bodyWidth = 140;
var bodyHeight = 130;
var eyeballSize = 40; 
var mouthAngle = 100;
var x = 0; // red channel of the background, randomized on click


function setup() {
    createCanvas(480, 640);   
}

function draw() {
    angleMode(DEGREES); // angle unit change
    background(x, 194, 239); // blue and pink-ish backgrounds randomizing
    
// body
    stroke(0);
    strokeWeight(4.5);
    fill(255, 221, 119); // yellow
    arc(width / 2, height / 2, bodyWidth, bodyHeight, 180, 360, OPEN); // bald head part
    rect(width / 2 - bodyWidth / 2, height / 2, bodyWidth, bodyHeight); // actual body

//pants?
    fill(68, 100, 133); // denim blue
    arc(width / 2, height / 2 + bodyHeight, bodyWidth, bodyHeight - 30, 0, 180, OPEN);

// eyes    
    var eyeLX = width / 2 - bodyWidth * 0.23; // Left eye's x position
    var eyeRX = width / 2 + bodyWidth * 0.23; // Right eye's x position
    fill(255); // white
    ellipse(eyeLX, height / 2, eyeSize, eyeSize); // left eye
    ellipse(eyeRX, height / 2, eyeSize, eyeSize); // right eye

// eyeballs
    fill(60, 40, 40); // brown
    noStroke();
    ellipse(eyeLX, height / 2, eyeballSize, eyeballSize);
    ellipse(eyeRX, height / 2, eyeballSize, eyeballSize);

// mouth
    fill(0);
    arc(width / 2, height / 2 + bodyHeight - 50, mouthSize, mouthSize, 0, mouthAngle);

// hands (referenced from p5.js angleMode)
    var a = atan2(mouseY - height / 2, mouseX - width / 2);
    fill(255, 221, 119); // yellow
    
// left hand rotate! 
    translate(width / 2 - bodyWidth / 2, height / 2 + bodyHeight);
    push();
    rotate(a);
    rect(0, - bodyHeight / 12, bodyHeight / 3, bodyHeight / 6); // arms
    fill(0); // black
    ellipse(bodyHeight / 3, 0, bodyHeight / 5, bodyHeight / 5); // black hands
    pop();

// right hand rotate! 
    translate(bodyWidth, 0);
    push();
    rotate(a);
    rect(0, - bodyHeight / 12, bodyHeight / 3, bodyHeight / 6); // arms
    fill(0); // black
    ellipse(bodyHeight / 3, 0, bodyHeight / 5, bodyHeight / 5); // black hands
    pop();

}


// randomization when mouse clicks on the canvas
function mousePressed() {
    bodyWidth = random(100, 180);
    bodyHeight = random(90, 180);
    eyeSize = random(30, 50);
    eyeballSize = eyeSize - 12; 
    mouthSize = eyeSize; 
    mouthAngle = random(10, 220); 
    x = random(81, 225); // background rgb value change
}

Now that we have entered week 2, there are many more functions and attributes that I have begun to explore in p5.js, and based on those I decided to make a minion with animation-like facial features for my program. I didn’t particularly plan out the sketches in Illustrator, but rather drew the primitive shapes, like ellipses and rectangles, through trial and error. One of the more difficult parts was making the hands follow the mouse, but once I figured out the coordinates and the center of rotation it became much easier. Overall, I really enjoyed creating the project!

eeryan-Looking-Outwards-02

chronomops from Tina Frank on Vimeo.

This generative art piece, created in 2005 by Tina Frank, interested me because its combination of flashing images and jarring music is engaging and thought-provoking. It reminded me of the randomized animations we looked at in class, but used in a way that was visually appealing and planned. The neon-colored lines, rapidly reappearing in parallel and perpendicular sequences, combine with the increasingly fast and nonsensical music in a way that is jarring, yet still visually compelling. Occasionally your eyes will try to find sense in the rapidly appearing lines and shapes – for example, the rotating cubic form that emerges amid the linear chaos around the 40-second mark. The ability to generate these patterns and shapes rapidly and randomly allows an artistic interpretation of chaos using digitally generated images and sound.

creyes1-LookingOutwards-02

IBM’s The Cognitive Photobooth from Justin Au on Vimeo.

Back in April 2017, IBM hosted an “Art with Watson” exhibit in New York City, featuring a photobooth powered by the eponymous AI that would create a data visualization of the user’s personality in the form of a portrait. With IBM being a large company, I’m surprised that this is the first time I’m seeing this project, but there’s something charming and admirable about creating a warm, human experience with artificial intelligence.

The photobooth asks questions such as who the user’s favorite artist is, or asks the user to tell it about a childhood friend; then, using text-to-speech and tone analyzers, it figures out the user’s personality traits, which are then integrated into their portrait. While the personality analysis is entirely up to the AI, the portraits do bear the mark of their creators: the icons representing each trait are premade, and it’s the algorithm that determines where to place them, in what quantity, and at what size.
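The placement logic described — premade icons, with quantity and size driven by trait strength — might look something like the following. This is purely a hypothetical sketch; the scaling rules and trait scores are invented, and IBM’s actual algorithm is not public here:

```javascript
// Hypothetical portrait layout: for each personality trait
// (score in 0..1), place more and larger copies of that
// trait's premade icon at random positions on the canvas.
function iconLayout(traits, canvasSize) {
    var placements = [];
    for (var name in traits) {
        var score = traits[name];                        // trait strength, 0..1
        var count = Math.max(1, Math.round(score * 10)); // stronger trait -> more icons
        var size = 10 + score * 40;                      // stronger trait -> larger icons
        for (var i = 0; i < count; i++) {
            placements.push({
                icon: name,
                x: Math.random() * canvasSize,
                y: Math.random() * canvasSize,
                size: size
            });
        }
    }
    return placements;
}
```

The human touch lives in the hand-made icons; the machine only decides the arrangement, which matches the division of labor the exhibit describes.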

Usually personality tests feel somewhat arbitrary, requiring you to choose between bubbles that say “Agree” or “Somewhat Agree,” but IBM’s Cognitive Photobooth seems to encourage its users to speak freely on topics that one wouldn’t expect a computer to understand, and it’s admirable to see that human element worked into technology for an experience that’s not only wonderful, but welcoming.

Check out the project on Behance.

Shots of IBM’s Cognitive Photobooth

amui1-LookingOutwards-02

For this Looking Outwards, I was inspired by a project called “Human After All,” created by photographer Jan Kriwol and digital artist Markos Kay. This generative art piece showcases 3D renderings of virtual blood vessels built with particle simulations. They stripped the human body down to just the essentials and placed it in the context of everyday situations. The project aims to battle social barriers, caused by race or gender, in various environmental contexts.

I really admire this project because of its creative collaboration between two mediums and because of the message the artists work to convey. It is completely new, unlike anything I have seen before. The combination of digital and real is extremely creative, and I aspire to create something one day that does the same. “Human After All” inspires me because the artists use such a creative way to demonstrate equality.

Link to full project here