After browsing through the Looking Outwards posts by my classmates, Maggie’s 3D Computer Graphics post on the Amigos Project by Zigor Samaniego caught my attention. The Amigos Project is a series of artworks that Samaniego began in 2016 using tools such as Photoshop, Pixologic ZBrush, Maxon Cinema 4D, and a Wacom Cintiq. As Maggie pointed out, these computer-generated “monsters” are super cute, which is part of why I wanted to learn more about the project. As they also mentioned, it’s really interesting and impressive how Samaniego used 3D rendering tools to achieve such realistic textures for these adorable creatures. Along with that, I am fascinated by the choices Samaniego made while programming and designing in order to maintain the creatures’ “cuteness” despite making them super realistic, especially since the more “real” something appears, the less “cute” it tends to become.
Project-09: Portrait
For this week’s project, I chose to draw a picture of me as a baby holding up a peace sign. I tried to render the portrait by mimicking short paint strokes: instead of placing random dots, I drew random short lines whose direction changes when the mouse is pressed (there are four directions in total). I also thought it would be interesting to see how different directions of the same shape alter the way the portrait is drawn, so I ran the code several times with a different direction each time.
let img;
var lineType = 1;

function preload() {
    img = loadImage("https://i.imgur.com/dy7iC57.jpg");
}

function setup() {
    createCanvas(480, 480);
    background(255);
    img.resize(width, height);
    img.loadPixels();
    frameRate(50);
}

function draw() {
    var x = floor(random(img.width)); // random x position
    var y = floor(random(img.height)); // random y position
    var pixel = img.get(x, y); // single pixel from image (color)
    stroke(pixel);
    strokeWeight(3);
    if (lineType == 1) { // slanted lines starting from left
        line(x, y, x + 7, y + 7);
    } else if (lineType == 2) { // horizontal lines
        line(x, y, x + 7, y);
    } else if (lineType == 3) { // slanted lines starting from right
        line(x + 7, y, x, y + 7);
    } else { // vertical lines
        line(x, y, x, y + 7);
    }
}

// Cycle through the four line directions on each mouse press
function mousePressed() {
    if (lineType == 1) {
        lineType = 2;
    } else if (lineType == 2) {
        lineType = 3;
    } else if (lineType == 3) {
        lineType = 4;
    } else {
        lineType = 1;
    }
}
Project 9: Portrait
I really wanted to implement a little bit of what we did two weeks ago, so I created heart and infinity curves and used those as pixels. There are four different “portraits,” cycled through each time the mouse is clicked. The first randomly draws hearts and draws the text “This Is Me” with your cursor. The second randomly draws the text “Sarah” and draws infinity signs with your cursor. The third randomly draws infinity signs and draws the text “This Is Me” with your cursor. The fourth randomly draws the text “Sarah” at random sizes from 10 to 25 and draws hearts with your cursor. Additionally, every click increases the sizes of all the pixels and the transparency of the background up to a certain point, then decreases them back down, going back and forth between the two.
// Sarah Luongo
// sluongo
// sluongo@andrew.cmu.edu
// Project
// This code aims to draw a self-image with pixels.
var oI; // 'Original image'
var nP = 1; // 'New portrait'
var t = 210; // 'Transparency'
var tS = 10; // 'Text size'
var hS = .3; // 'Heart size'
var iS = 15; // 'Infinity size'
var incT = 51; // Step used to increase/decrease the transparency
var incH = .1; // Step used to increase/decrease the heart size
var incIT = 1; // Step used to increase/decrease the text and infinity-sign size
// Loads image
function preload() {
    var imageURL = "https://i.imgur.com/UJthwzP.jpg";
    oI = loadImage(imageURL);
}

function setup() {
    createCanvas(480, 480);
    oI.resize(width, height); // Resizes original image
    background(59, 17, 21, t); // Reddish
    oI.loadPixels();
    frameRate(60); // Rate pixels are drawn
    t = 0; // Reset so the first click starts the transparency cycle from 0
}
function draw() {
    // Generates random locations for the pixels within image size
    var pX = floor(random(oI.width));
    var pY = floor(random(oI.height));
    var cP = oI.get(pX, pY); // 'Color picker' based on location of pixel
    var cM = oI.get(mouseX, mouseY); // Color selected based on mouse location
    noStroke();
    if (nP == 1) {
        // Draws heart pixels randomly
        fill(cP);
        heart(pX, pY);
        // Draws text pixels w/ cursor
        fill(cM);
        textSize(tS);
        text("This Is Me", mouseX, mouseY);
    } else if (nP == 2) {
        // Draws text pixels randomly, infinity signs w/ cursor
        fill(cP);
        textSize(tS);
        text("Sarah", pX, pY);
        fill(cM);
        infinity(mouseX, mouseY);
    } else if (nP == 3) {
        // Draws infinity signs randomly, text w/ cursor
        fill(cP);
        infinity(pX, pY);
        fill(cM);
        textSize(tS);
        text("This Is Me", mouseX, mouseY);
    } else {
        // Draws randomly sized text randomly, hearts w/ cursor
        fill(cP);
        textSize(random(10, 25));
        text("Sarah", pX, pY);
        fill(cM);
        heart(mouseX, mouseY);
    }
}
// Heart Curve
// https://mathworld.wolfram.com/HeartCurve.html
function heart(pX, pY) {
    var da = .01; // Angle step: how round the "curve" is
    // Creates the heart curve; the angle is named 'a' so it does not
    // shadow the global transparency variable t
    beginShape();
    for (var a = 0; a <= TWO_PI; a += da) {
        // The parametric equations found on the website commented above
        var x = (16 * pow(sin(a), 3)) * hS;
        var y = (13*cos(a) - 5*cos(2*a) - 2*cos(3*a) - cos(4*a)) * -hS;
        vertex(x + pX, y + pY);
    }
    endShape();
}
// Infinity Curve
// https://en.wikipedia.org/wiki/Lemniscate_of_Bernoulli
function infinity(pX, pY) {
    var da = .01; // Angle step
    // Creates the infinity curve; the angle is named 'a' so it does not
    // shadow the global transparency variable t
    beginShape();
    for (var a = 0; a <= TWO_PI; a += da) {
        // The parametric equations found on the website commented above
        var x = (iS * cos(a)) / (1 + pow(sin(a), 2));
        var y = (iS * sin(a) * cos(a)) / (1 + pow(sin(a), 2));
        vertex(x + pX, y + pY);
    }
    endShape();
}
function mousePressed() {
    // Start the next portrait (wrapping from 4 back to 1) on a fresh background
    clear();
    background(59, 17, 21, t);
    if (nP == 4) {
        nP = 1;
    } else {
        nP += 1;
    }
    // Step the size of each pixel symbol and the background transparency
    t += incT;
    tS += incIT;
    hS += incH;
    iS += incIT;
    // Reverse direction at the upper bounds (>= instead of == so the
    // floating-point heart size cannot step past its bound unnoticed)
    if (t >= 255 || tS >= 30 || hS >= 1.5 || iS >= 30) {
        incT = -abs(incT);
        incH = -abs(incH);
        incIT = -abs(incIT);
    }
    // ...and forward again at the lower bounds, bouncing back and forth
    if (t <= 0 || tS <= 10 || hS <= .3 || iS <= 15) {
        incT = abs(incT);
        incH = abs(incH);
        incIT = abs(incIT);
    }
}
LO-9
For this week’s LO, I decided to revisit my friend Sean’s LO from Week 3, in which he wrote about BLOOMS, a group of sculptural, computer-fabricated pieces that John Edmark created around four years ago.
Sean covered most of the basics of the project, but upon more research, I was surprised to find out that the forms of the pieces are each based on a different algorithm, such as the Fibonacci sequence or the golden rectangle/angle. Additionally, while Sean described the process Edmark uses to create his videos quite accurately, I was surprised to learn that the timing between frames is produced by progressive rotations based on the golden ratio, phi. This angle and the shapes it generates are most commonly seen in sunflower centers and pinecone-like forms in nature. I found it interesting that everything from the form to the frame timing ties together to create nature-like results in the final product: some of the forms even look like pinecones or cellular structures.
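Out of curiosity about how that golden-angle rotation actually generates sunflower-like patterns, here is a minimal sketch of my own (not Edmark’s code) of Vogel’s phyllotaxis model: each successive point rotates by the golden angle and drifts outward, which produces the spiral seen in sunflower centers.

```javascript
// Vogel's phyllotaxis model: the golden-angle spiral seen in sunflower
// heads, the same rotation step the BLOOMS animations are built on.
const GOLDEN_ANGLE = Math.PI * (3 - Math.sqrt(5)); // ≈ 137.5° in radians

function phyllotaxis(n, scale = 4) {
  const points = [];
  for (let i = 0; i < n; i++) {
    const angle = i * GOLDEN_ANGLE;      // each point rotates by the golden angle
    const radius = scale * Math.sqrt(i); // sqrt spacing spreads points evenly in area
    points.push({
      x: radius * Math.cos(angle),
      y: radius * Math.sin(angle),
    });
  }
  return points;
}
```

Plotting a few hundred of these points (for example with p5’s `circle(p.x + width / 2, p.y + height / 2, 4)`) reproduces the sunflower-center pattern; any other rotation angle quickly degenerates into visible spokes, which is why phi shows up everywhere in these pieces.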
LO – 9
I decided to look at Helen Cheng’s Looking Outwards from Week 2, which was on Robert Hodgin’s ‘Traffic’ piece. Being a driver myself, I found this piece very interesting and agreed with many of the points Helen made in her blog post. The simulation is busy, and the cars have an “aggressiveness” to them, like humans do. I can’t tell you how many times I’ve been overwhelmed by cars running reds, blocking the intersection, and so on. People don’t always make the smartest decisions on the road, and this simulation captures that pretty well. The thing I find most interesting, however, is the attempt to make a computer act human. Even with the “aggressiveness” coded in, this project never produces accidents, and the traffic still flows more seamlessly than traffic in real life. I know it would have been much more difficult to code, and he wanted to keep things fairly simple, but in a way it works: it evokes the whole idea of self-driving cars. Once self-driving cars are perfected and everyone is using them, accidents should be minimal and traffic will probably look quite similar to this. It was very interesting to think about how code can be manipulated to show human flaws and, on the flip side, give us insight into what automation has the potential to achieve.
As always, here’s the link to Robert’s website:
And two videos of his simulation:
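To make the “aggressiveness coded in” idea concrete, here is a toy car-following rule of my own; the `nextSpeed` function and its aggressiveness parameter are hypothetical illustrations, not Hodgin’s actual implementation. The idea is simply that a more aggressive driver tolerates a smaller gap before braking.

```javascript
// Toy car-following rule (my own sketch, not Hodgin's code):
// more aggressive drivers accept a smaller gap before braking.
function nextSpeed(speed, gap, aggressiveness, maxSpeed = 10) {
  // An aggressiveness of 0 keeps a 20-unit cushion; 1 keeps almost none.
  const safeGap = 20 * (1 - aggressiveness);
  if (gap < safeGap) {
    return Math.max(0, speed - 2); // brake when following too closely
  }
  return Math.min(maxSpeed, speed + 1); // otherwise accelerate toward the limit
}
```

With the same 5-unit gap, a timid driver (aggressiveness 0.1) brakes while an aggressive one (0.9) keeps accelerating, which is roughly the kind of human-like variation the simulation captures.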
LookingOutwards-08
Anouk Wipprecht is a Dutch fashion-tech designer who works in the emerging field of electronic wearable installations, a combination of visual design and interactive technology. Her project “Spider” employs microcontrollers, sensors, and movable joints to make the garment move and breathe with the wearer’s actions. Wipprecht researches how machine learning and biomimicry can manipulate the texture and function of everyday outfits. By partnering with tech companies such as Intel, Autodesk, and Audi, she fuses futuristic concepts into high fashion and awakens the audience’s senses toward their surroundings. In the “Spider” project, she takes the parking sensors from Audi cars and mounts them on clothing with geometric design elements, bringing a focus on the shape, look, and feel of things to a mechanical object while empowering the wearer with its functionality.
Artist Website: http://www.anoukwipprecht.nl/about-me-shift#bio
LO-08
Yuri Suzuki is a London based designer who works with sound. Before getting into design and technology, Suzuki was a musician and a DJ. When he started to get into design, he wanted to keep his musical background in his career, so he got into sound design. He is now a partner at Pentagram, which is the largest independently owned design studio in the world, and is also on faculty at the Royal College of Art in London.
Suzuki’s work makes people think about how they perceive sound in the world around them. One of my favorite projects that he introduced in his presentation was the Sound of Earth. The project is a large black globe with grooves that mimic the Earth’s topography. A needle plays the globe like a record and as it passes over each country, it plays sounds and music that Suzuki recorded on his phone while visiting the country.
Suzuki’s presentation is a brief overview of several of his projects: he introduces each project, then shows a video to demonstrate it. The video component is an effective presentation method because he covered so many projects that there is no way he could bring them all to his presentations. Each overview was so brief that I don’t feel I got a deep sense of the process behind each piece, but the presentation certainly gives the viewer a great idea of the scope of his work.
Yuri Suzuki’s website can be accessed here: https://yurisuzuki.com/
Suzuki’s presentation at the Eyeo Festival (2015)
LO-08
I did some research on Mohit Bhoite, an engineer who specializes in taking two-dimensional circuits and expanding them into the third dimension. He holds a degree in robotics from the University of Pennsylvania and currently works as a hardware engineer for the company Particle. His work involves creating functional sculptures using circuits, often built to be artistic and interactive, such as his handheld gaming console, which uses no screen, just a button to register touch and an LED array to serve as the game board. His work is often simple and almost organic, and I find his creativity with usually very cold and “boring” objects to be refreshing.
Bhoite’s presentation involves breakdowns of his work and his creative process, along with the challenge of translating circuits from a basic, flat object into a sculpture that is not only functional but also aesthetically pleasing. He goes into detail about the circuit design and how he makes the circuits work, with many detailed diagrams and videos that he explains to the audience. I find his presentation style very effective even with very little information on each slide: he relies on high-quality images, videos, and his own explanations rather than simply having all his talking points on slides to be read.
Looking Outwards : 08
Meejin Yoon is an architect and designer who was recently appointed Dean of Architecture at Cornell University. She graduated from the same university with a Bachelor of Architecture, then completed her graduate studies at Harvard’s GSD. Her work focuses on the intersection of space, technology, and materiality, and is most often acclaimed for its innovative and engaging character.
Yoon’s consideration of public engagement makes her projects more interactive, as she prioritizes human engagement within the different spatial qualities of architecture. Working at the intersection of architecture, art, landscape, and technology, her innovative, interactive light and sound installations for public spaces across the United States often incorporate alternative energy sources, putting them at the forefront of each of these fields.
During her talk at the Eyeo Festival, she focuses on her projects regarding interactive public spaces– especially topics such as responsive and interactive technology, smart materials, and the public engagement process. As an Architecture student myself, I found her talk interesting as she began to break down these vast topics into simpler bits, focusing primarily on projects which utilize technology in depth. Examples of her own work and the process she used as well as the iterations made, helped clearly explain her design objectives.
Looking Outwards – 08
Mike Tucker is an interactive designer and developer currently living in London. For the past five years he has worked at Magic Leap, based in Florida, with the official title of Interactive Director, Designer, and Developer. He is somewhat of a quirky guy who likes to refer to himself in the third person. He attended Virginia Commonwealth University, earning a Bachelor of Fine Arts degree in Graphic Design. Mike is working on developing the future of spatial computing and hopes the next wave of spatial designers will question the expectations of the medium and have the opportunity to design a mixed-reality future.
While his art pieces initially attracted me, learning about how he works inspired me. His search for more reminds me of an entrepreneur I interviewed last year. He has an entrepreneurial spirit, which I admire as I pursue a minor in innovation and entrepreneurship and am very drawn to the entrepreneurial world. He started out drawing, but he wanted something more; he was on a quest to find the perfect medium for creating. He went through various platforms, such as websites and mobile apps, until he stumbled upon virtual reality. My favorite piece of his to date is Tónandi, where music and virtual reality interact. With the Icelandic band Sigur Rós and Magic Leap, he created a work of art in which tone spirits (the translation of Tónandi) inhabit your space and together form a musical soundscape. You, the “player,” interact with the virtual creatures to evolve the soundscape. I love this because, as I have mentioned in previous Looking Outwards posts, I am very interested in seeing how the music industry can change with computer interaction.
I will close by highlighting his presentation. I appreciated that, after giving a general overview, he started from the beginning and broke down what he was going to talk about in his presentation. There were also many visuals, static and moving, to help explain and visualize his work. While he stumbled over his words a bit, I was definitely more engaged by seeing all the images and videos. Even though he was a bit awkward on stage, he was very clear and articulate, outlining his talk and providing engaging material to keep the audience from nodding off. I would definitely take these techniques into consideration for future presentations.
Here’s a link to his website:
Here’s the lecture; you’ll have to search for it on page four of the website because apparently it is private:
https://vimeo.com/channels/eyeo2019/page:4
Here’s a video of the project I like most:
And here’s a link to the Magic Leap’s website featuring Tónandi:
https://world.magicleap.com/en-us/details/com.magicleapstudios.tonandi