Hi, this is my sketch. My name's Clair.
function setup() {
    createCanvas(200, 200);
    background(0, 0, 0);
}

function draw() {
    noStroke();
    fill(0, 0, 255);
    ellipse(100, 100, 100, 70);
}
//Laura Rospigliosi
//Section C
//lrospigl@andrew.cmu.edu
//Assignment-01
function setup() {
    createCanvas(600, 600);
    background(125, 155, 245);
}

function draw() {
    background(125, 155, 245);
    strokeWeight(0); // draw every shape without an outline
    //hair
    fill(110, 72, 39);
    rect(140, 140, 320, 400, 70);
    fill(110, 72, 39);
    rect(140, 190, 320, 350, 20);
    fill(95, 60, 39);
    rect(180, 190, 240, 350);
    //shirt
    fill(0, 0, 0);
    rect(130, 500, 340, 280, 70);
    //neck
    fill(254, 213, 192);
    rect(257, 300, 90, 230, 40);
    fill(224, 192, 176);
    rect(257, 300, 90, 170, 50);
    //face
    fill(254, 213, 192);
    rect(160, 160, 280, 280, 70);
    //eyes
    fill(109, 120, 72);
    rect(230, 240, 20, 35, 10);
    fill(109, 120, 72);
    rect(350, 240, 20, 35, 10);
    //nose
    fill(224, 192, 176);
    triangle(285, 320, 320, 320, 303, 305);
    /*
    //inner eyes
    fill(0, 0, 0);
    rect(235, 245, 10, 10, 10);
    fill(0, 0, 0);
    rect(355, 245, 10, 10, 10);
    */
    //mouth: an arc that follows the mouse
    fill(246, 180, 211);
    //arc(300, 360, 70, 70, 0, HALF_PI + HALF_PI);
    arc(mouseX, mouseY, 70, 70, 0, HALF_PI + HALF_PI);
}
This is Intel’s light show system, which uses 100 flying drones. The project started in 2014 and has since been displayed nationwide at venues and events such as Coachella, Disney World, and the 2017 Super Bowl.
Intel’s CEO Brian Krzanich first saw Ascending Technologies’ LED drone in 2014, which inspired him to launch a project that used the existing technology to create a light show that was artistic and entertaining, demonstrating that technology and art can be complementary. The field team for this project consisted of a choreographer, 11 crew members, and 4 pilots, each of whom controlled their own airfield of 25 drones. The choreographer, with the help of others, developed Intel’s own animation software, which uses 3-D algorithms to place the drones in the sky and script the colors displayed during the show.
This creation challenges the presumption that drones are only useful for photography or surveillance, and it promotes the integration of art and technology through a show that can be enjoyed by everyone.
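As a rough illustration of that idea (my own p5.js sketch, not Intel's software), each point below eases toward a target position in a circular formation while its color is scripted over time:

// Hypothetical sketch: points ease toward scripted formation targets,
// loosely illustrating placing "drones" and scripting their colors.
var drones = [];

function setup() {
    createCanvas(400, 400);
    // start each "drone" at a random position with a target on a circle
    for (var i = 0; i < 100; i++) {
        var angle = TWO_PI * i / 100;
        drones.push({
            x: random(width), y: random(height),
            tx: 200 + 120 * cos(angle), ty: 200 + 120 * sin(angle)
        });
    }
}

function draw() {
    background(0);
    noStroke();
    for (var i = 0; i < drones.length; i++) {
        var d = drones[i];
        // ease each drone a small step toward its target position
        d.x = lerp(d.x, d.tx, 0.02);
        d.y = lerp(d.y, d.ty, 0.02);
        // scripted color: the blue-green mix cycles slowly over time
        fill(128 + 127 * sin(frameCount * 0.02 + i), 100, 255);
        ellipse(d.x, d.y, 6, 6);
    }
}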
More about Intel’s light shows:
https://www.intel.com/content/www/us/en/technology-innovation/aerial-technology-light-show.html
Hi, my name is not Bob or John. It’s Isadora.
function setup() {
    //create canvas
    createCanvas(600, 600);
    background(200);
}

//draw is executed every frame, 60 times per second
function draw() {
    fill(255, 255, 255);
    noStroke();
    //if the mouse is on the bottom half, draw a square; otherwise draw a circle
    if (mouseY > 300) {
        rect(300, 250, 55, 55);
    } else {
        ellipse(300, 250, 55, 55);
    }
}
This art piece is called the Hylozoic Veil, a living construction of sensors, chemical beakers, and delicate acrylic links and fronds. Created by Philip Beesley, Hayley Isaacs, and Benjamin Wiemeyer, and displayed around the globe, it is a marriage of science, computer programming, and art. It currently stands three stories high in The Leonardo, a museum located in Salt Lake City, Utah. When guests enter the museum, the Hylozoic Veil senses human presence and reacts, unfurling plantlike fronds or stretching and compressing subtly. It even has sections attached to beakers, which filter and collect chemicals like carbon dioxide, much like the processes of many living organisms. The project was, according to Beesley, modeled on protocells: “prototype cells that use inorganic ingredients combined into cell-like forms.” Its construction took the work of several volunteers, as well as entirely new programs written to manage the millions of microsensors and the complex chemical processes that make the project tick. The project is not meant to be seen merely as a sculpture, but as an environment. It breathes as guests engage with it, and lives on when the museum closes for the day.
This piece inspires me because it embodies the marriage of art and science that I aspire to in my career, as well as the ingenuity and delicacy that went into its construction. It kindles thoughts of a future where, should the forests of the earth ever disappear, something as eerily beautiful as this might replace them. It is exciting to think about what future projects this may inspire, and what other artistic “environments” may arise.
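As a loose, hypothetical approximation of that sense-and-respond behavior in p5.js (my own toy sketch, not Beesley's software), the "fronds" below extend when the mouse, standing in for a visitor, comes near, and relax again when it moves away:

// Toy sketch: fronds along the bottom edge lengthen as the mouse approaches.
function setup() {
    createCanvas(400, 200);
}

function draw() {
    background(240);
    stroke(120, 160, 140);
    // draw a row of fronds along the bottom of the canvas
    for (var x = 20; x < width; x += 20) {
        // a closer mouse makes a longer frond, up to 120 pixels
        var d = dist(mouseX, mouseY, x, height);
        var len = map(constrain(d, 0, 200), 0, 200, 120, 20);
        line(x, height, x, height - len);
    }
}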
Philip Beesley’s Website and his own post about the Veil
TRANSFORM is a project led by Professor Hiroshi Ishii of the MIT Tangible Media Group. The project plays with design and technology to transform still furniture into a dynamic, multi-functional machine driven by computing data and energy. Using a set of sensors and numerous kinetic shapes, the table display can carry out a variety of tasks that facilitate daily desk activities. For instance, the shapes can shift and hold objects such as cups, fruit, and office supplies, and move them wherever desired.
The most impressive aspect to me is that the team managed to transform stereotypically fixed furniture into such a tangible and interactive display, one that supports a range of practical uses. I also admire the organic motion, which was inspired by the natural interactions among wind, water, and sand. In some activities this organic motion stimulates an emotional response in viewers, because it does not seem artificially stiff. The gesture interactions are also beautifully done, since it is more natural for users to activate certain tasks with simple hand movements above the display. Although I understand that the polished project demo video contributes a great deal to our impression, in my opinion it is still a brilliant attempt to introduce tangible interaction into everyday furniture, and it also contributes to the exploration of future interactive smart-home environments.
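As a toy approximation of that wave-like pin-display motion (my own p5.js sketch, not the Tangible Media Group's software), the bars below rise and fall along a traveling sine wave:

// Toy sketch of a pin-display surface: each bar's height follows a traveling sine wave.
function setup() {
    createCanvas(400, 200);
    noStroke();
}

function draw() {
    background(255);
    fill(80);
    // each rectangle is one "pin"; its height rises and falls over time
    for (var i = 0; i < 40; i++) {
        var h = 60 + 40 * sin(frameCount * 0.05 + i * 0.4);
        rect(i * 10, height - h, 8, h);
    }
}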
Hey Y’all
Homogenizing and Transforming World, teamLab, 2013. Interactive installation, endless. Sound: Hideaki Takahashi.
Although first installed in Japan in 2013, Homogenizing and Transforming World has traveled to Hong Kong and is currently being exhibited at the National Gallery of Singapore. It’s an interactive installation consisting of large white balls that create sound when a person touches them. When someone interacts with a ball, it emits a sound and a new color that spreads to the balls around it until every ball in the room has changed. The balls have data-collecting sensors and are wirelessly connected to each other. The installation was inspired by the nature of the internet, where everyone is able to contribute and share, which over time affects communication and spreads knowledge to others. Like the digital world, the installation is ever-changing and dependent on user interaction.
What I find fascinating about this installation is how simple it is for representing such a large concept. It embodies the essence of the internet: how it is always changing, how quickly our actions can influence our environment, and how expansive the technological world is. I love that the colors and sounds of the balls are calculated responses to a person’s interactions and to past interactions. They are not default, automatic colors or sounds; they are truly altered as more time passes and more interactions occur.
My only critique (and this may be a feature the documentation simply didn’t show) is that the installation seems to respond to one touch at a time. A participant might have to wait for a change to finish in order to see how their touch influences the space, rather than having multiple people touch various balls at once and having that collective interaction change the space in its own way. In reality, the internet has constant contributors posting at the same time, and this installation would be stronger conceptually if it embodied that characteristic as well.
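As a hypothetical, simplified version of that spreading behavior in p5.js (not teamLab's actual system), clicking a circle below recolors it, and the new color gradually spreads to its untouched neighbors:

// Hypothetical sketch: clicking a ball recolors it, and the color
// spreads to neighboring balls over time until the whole row has changed.
var hues = [];      // one color value per ball (200 means "untouched")
var nBalls = 10;

function setup() {
    createCanvas(500, 100);
    for (var i = 0; i < nBalls; i++) {
        hues.push(200); // start all balls the same color
    }
}

function draw() {
    background(255);
    noStroke();
    // every 30 frames, untouched balls copy the color of a changed neighbor
    if (frameCount % 30 === 0) {
        var next = hues.slice();
        for (var i = 0; i < nBalls; i++) {
            if (hues[i] === 200) {
                if (i > 0 && hues[i - 1] !== 200) next[i] = hues[i - 1];
                else if (i < nBalls - 1 && hues[i + 1] !== 200) next[i] = hues[i + 1];
            }
        }
        hues = next;
    }
    for (var i = 0; i < nBalls; i++) {
        fill(hues[i], 120, 220);
        ellipse(25 + i * 50, 50, 40, 40);
    }
}

function mousePressed() {
    // recolor whichever ball was clicked
    var index = floor(mouseX / 50);
    if (index >= 0 && index < nBalls) {
        hues[index] = random(255);
    }
}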
Source:
teamlab’s color-changing floating spheres in singapore respond to human touch
http://www.wetheurban.com/post/161822428609/teamlabs-color-changing-spheres-respond-to-human
http://cuteandkids.com/enjoy/museums-art/art-human-touch-teamlab/
One of the things I am watching for as technology and interactive design continue to advance is the influence they may have on clothing. Back in 2015, there was an article about a company, Print All Over Me, that was trying to give programmers a creative outlet and prompt people who aren’t in a creative field to explore that side of things.
They push for collaboration, and the company’s partnerships with groups like Processing Foundation, SoSoLimited, and LIA all push this extra customization, in hopes of giving creators and consumers a new design platform. When the article was released in 2015, the feature of uploading your own code and having it printed wasn’t available yet; the system was a bit more primitive, letting you customize the clothing via keywords, number keys, and so on, while the code searched the internet for a fitting image.
Although I am not sure how long this whole project has taken, looking at their website in 2017, they seem rather successful and have honed in on their niche pretty well. It will be interesting to see how much further this can be pushed as more technological advances are included.
Hi, my name is Thomas.
function setup() {
    createCanvas(300, 300);
    background(200, 200, 200);
    noStroke();
    fill(135);
    ellipse(150, 150, 100, 70);
    text("p5.js vers 0.5.12 test.", 10, 15);
}

function draw() {
}