//SRISHTY BHAVSAR
//15-104 PROJECT 05
//SECTION C

// COLORS
var w = 255; // white
var lbrown = [196, 164, 132]; // light brown (p5.js fill()/stroke() accept an [r, g, b] array)

// lengths
var s = 50; // square side length

function setup() {
    createCanvas(600, 600);
}
function draw() {
    background(194, 197, 201); // light blue

    // RED DIAMOND LOOP
    push();
    translate(300, -300);
    rotate(radians(45)); // rotates squares to be diamonds
    for (var x = 0; x < 1200; x += s / 2) {
        for (var y = 0; y < 1200; y += s / 2) {
            reddiamonds(x, y);
        }
    }
    pop();

    // FLOWER DIAMOND LOOP
    push();
    translate(265, -300);
    rotate(radians(45));
    noFill();
    for (var i = 0; i < 2000; i += s) {
        for (var j = 0; j < 2000; j += s) {
            flowerdiamonds(i, j);
        }
    }
    pop();
}
function reddiamonds(x, y) {
    push();
    translate(x, y); // origin moves along the row
    stroke(183, 113, 121, 70); // light red
    strokeWeight(2);
    noFill();
    // with the translate above, each square lands at (2x, 2y) in the rotated
    // frame, which spaces the s/2 loop steps exactly one square apart
    square(x, y, s);
    pop();
}
function flowerdiamonds(i, j) {
    // lacy white rim of small ellipses that traces the diamond
    noFill();
    stroke(w);
    strokeWeight(1);
    translate(i, j); // origin moves along the row

    // create 4 lacy rims that form a square
    // (each rim is one call to the lacyRow helper defined after this function)
    push();
    lacyRow();              // first rim
    rotate(radians(90));
    lacyRow();              // second rim, in the rotated frame
    translate(0, -50);
    lacyRow();              // third rim
    translate(50, 50);
    rotate(radians(-90));
    lacyRow();              // fourth rim closes the square
    pop();
    // FLOWER STEM
    push();
    translate(-4, -30);
    rotate(radians(-40));
    noFill();
    stroke(w);
    strokeWeight(1);
    curve(6, 30, 59, 50, 60, 80, 40, 40);
    pop();
    // FLOWER PETALS
    push();
    strokeWeight(1);
    fill(lbrown); // light brown
    translate(6, 6);
    ellipse(10, 18, 13, 9);
    rotate(radians(72)); // 72 degrees = 360 / 5, one rotation step per petal
    translate(6, -30);
    ellipse(10, 18, 13, 9);
    translate(-1, -67);
    rotate(radians(72));
    ellipse(10, 18, 13, 9);
    rotate(radians(72));
    translate(-23, -71);
    ellipse(10, 18, 13, 9);
    rotate(radians(72));
    translate(-8, -77);
    ellipse(10, 18, 13, 9);
    pop();

    translate(-i, -j); // undo the origin shift so the next diamond starts clean
}
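// Helper added for clarity: one lacy rim is a single row of small ellipses.
// In the original nested loops the inner y loop only ever ran once (y = 0),
// so a single x loop draws the identical row.
function lacyRow() {
    for (var x = 0; x < 60; x += 10) {
        ellipse(x, 0, 6, 4);
    }
}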
Over the last 12 years of MCU movies, Marvel has worked with many VFX studios, such as Weta Digital, Framestore, and Industrial Light & Magic. Almost every major 3D computer graphics package was used in the films: Maya, 3ds Max, and Modo for modeling; ZBrush and Mudbox for sculpting; Mari and Substance Painter for texture painting; and Nuke, alongside After Effects, for compositing 3D projections.
To create Thanos, Digital Domain worked with Marvel Studios on effects shots built with Masquerade; over 340 Digital Domain artists produced 513 shots. Masquerade is a facial-capture application based on machine learning algorithms, and the team spent three to four months before filming developing and testing the system. Masquerade can capture a high-resolution image of an actor's face at a rate of 40 to 50 frames per second.
Thanos was played by the actor Josh Brolin. For Digital Domain, it was important that Thanos's movements feel organic and realistic, so motion-capture cameras were used: Brolin wore a mocap suit and a camera helmet, with motion-capture dots tracking his movements. Digital Domain's facial capture picked up the smallest details, such as the wrinkles and curvature of Brolin's face. From there, the animation team could enhance features of the face, like the eyes, until Brolin's face was transformed into Thanos's purple one.
This project interests me because I had no idea that so many programs and machine learning algorithms are used to turn real human performances into fictional characters on screen. Rather than struggling with prosthetics or costumes to create a villain like Thanos, the studio could build an animated character that could then be used throughout the film.
The project I found is a performance called "Puform; White Box," programmed by Jean Sébastian Rousseau and Peter Dines, with music by Alain Thibault and visuals by Yan Breauleaux. The performance consists of three white rectangular screens, angled together into a wall, on which black parametric and geometric visuals twist, break, and undergo many other transformations. When I first watched the video, the first thing that struck me was how different the audio sounded with earbuds versus without. When I wore my AirPods, I could hear the sound move spatially because of their spatial-audio feature. This made me think not only about the computational aspects of the audio within the project, but also about the technology we use to interpret it.
The music of the performance matched the visuals tightly, creating a surreal and daunting experience. The visual artist correlated sharp breaks with the musician's staccato notes and matched vibrations and faster tempos to the speed of the visuals. The darkness of the exhibit room let the transforming visuals stand out in contrast against the white background.
The main software technique the programmers cite is white-box testing, a technique in which a program's internal structure, design, and code are examined to verify input and output flow and to improve design and usability. White Box is new software based on an old way of generating A/V compositions in real time, and it is a new piece in a cycle that began with Black Box, which exhibits inputs, outputs, and transfer functions. Puform mixes two layers of video with their tapes: using Quartz Composer compositions, the programmers can easily change the relationship between the music and the video, since the piece is built from a database of clips controlled with Lemur.
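To make the testing idea concrete, here is a minimal, hypothetical sketch of white-box testing in JavaScript (not code from the performance): because the tester can see the function's internal branches, each branch gets its own assertion.

// hypothetical example, unrelated to the Puform codebase
function classify(n) {
    if (n < 0) return "negative"; // branch 1
    if (n === 0) return "zero";   // branch 2
    return "positive";            // branch 3
}

// white-box tests: one assertion per internal branch
console.assert(classify(-1) === "negative");
console.assert(classify(0) === "zero");
console.assert(classify(5) === "positive");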
In 2015, a New York art and technology organization called "Eyebeam" showed many of its computational fashion pieces at New York Fashion Week. Computational fashion touches on many themes, such as aesthetics, ergonomics, and intellectual property. What I admire about computational fabrication within fashion is that it is extremely innovative and predictive of the future. Because traditional garments are made of fabric, they are fluid in nature, and fluidity has become a popular style in design and architecture today. Architects such as Zaha Hadid have been inspired by the fluidity of fashion pieces and reflected it in their buildings.
However, the three main issues computational fashion aims to fix are flexibility, rechargeability, and affordability. 3D printing has become increasingly popular among designers for modeling, but one of the biggest downsides of a 3D-printed model is that it lacks malleability and flexibility. Designers have found that by printing in different materials and linking them with interlocking springs, they can make a naturally stiff material hang loosely, like fabric or textile. Designer Bradley Rothenberg prints in nylon, polymers, and sometimes metals. He used Python scripting in Rhino in the past, but now uses C++, which lets him create more advanced structures. By tuning parameters in his code and varying the geometric properties, he can better control the material properties, as the sketch below illustrates.
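As a rough illustration of that parametric idea (a hypothetical p5.js-style sketch, not Rothenberg's actual code), a single spacing parameter can make the same interlocking pattern denser and stiffer or sparser and more fabric-like:

// hypothetical sketch: one parameter drives the geometry
function drawSpringGrid(spacing) {
    noFill();
    for (var x = 0; x < width; x += spacing) {
        for (var y = 0; y < height; y += spacing) {
            // rings wider than the grid spacing overlap their neighbors,
            // reading as interlocking links; smaller spacing would print
            // as a denser, stiffer panel, larger spacing as a looser one
            ellipse(x, y, spacing * 1.4, spacing * 1.4);
        }
    }
}
// e.g. drawSpringGrid(10) reads as a stiff mesh, drawSpringGrid(40) as a loose one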
Fashion technologies need to work throughout the day, so rechargeability is another important factor for computational fashion designers. Eyebeam's project director advised against garments that must be plugged into a smartphone, because that is inconvenient; instead, Professor Dan Steingart of Princeton University has been exploring energy options such as body heat, wind-up and solar power, and bendable batteries. The third important factor is affordability. The minimum printing resolution for 3D printing is currently 500 microns, and because that resolution is not yet fine enough, significant investments in fashion technology will still be needed.
One of the first buildings that caught my attention when I was younger was the Walt Disney Concert Hall by Frank Gehry in downtown Los Angeles. I remember being taken aback by its cluster of large, winged metal walls that stood out among the surrounding buildings. As I walked by, I noticed how whimsical, symphonic, and extravagant it was, and today I admire how fitting those qualities are for a hall that hosts orchestras and bands. The building was designed using CATIA, a software package created for and used by aerospace engineers, and through this software Gehry was able to achieve impeccable acoustics within the concert hall.
In 2018, an installation performance for the L.A. Philharmonic transformed the facade of the Walt Disney Concert Hall at night. The installation was designed by Refik Anadol with Google Arts and Culture. Built on deep neural networks, it created a data universe that translated data points from the Philharmonic's digital archives into projections of light and color. The installation took a parametric data-sculpture approach, in which machine learning algorithms sorted music into thematic compositions. Inside the concert hall, visitors could interact with mirrored walls showcasing the Philharmonic's archives. Anadol's light show is a great example of how generative visual art, combined with audio and a computational structure, can produce a visceral and immersive experience.