Vicky Zhou-Looking Outwards-01

Cooper Hewitt Interactive Pen

Cooper Hewitt, Smithsonian Design Museum, located in New York, New York, offers an interactive pen experience. The pen allows visitors to engage with certain exhibits, manipulate objects on interactive screens, and save pieces they find interesting and want to revisit later.

I personally had the pleasure of visiting the Cooper Hewitt with my friend this past summer, and was incredibly inspired by the uniquely engaging atmosphere. Unlike conventional museums, where the most interactive component is an audio device that guides you around the exhibits, Cooper Hewitt’s interactive pen lets you interact with certain exhibit pieces, create drawings and patterns on touch-screen surfaces, and curate a collection of the pieces you find particularly engaging and wish to revisit over time.

The Cooper Hewitt pen originated with Local Projects working alongside Diller Scofidio + Renfro, and was later expanded upon by the Cooper Hewitt team. The project allows not only future museums but also other entertainment venues, such as theaters and amusement parks, to adopt the aspects of the pen that most engage users and apply them to their own businesses and environments.

Tanvi Harkare – Looking Outwards – 01

A computational project that I found interesting is Treepedia, an interactive map that shows you the green canopy created by trees and other greenery in many cities around the world. It was created by the Senseable City Lab at MIT, a team of five individuals, and is completely open source. They have also partnered with the World Economic Forum and the Global Shapers Community to help bring awareness to the project. To make the map work, the Senseable City Lab created the “Green View Index,” which uses Google Street View imagery instead of satellite images to compare green canopy coverage across cities. Their hope is to bring a better understanding of the climate issues Earth currently faces, such as global warming and air pollution, and to encourage users and local authorities to act and bring more green landscape into major cities. Users can also compare cities side by side to see how much green canopy space one city has versus another. Although it is not available in Pittsburgh yet, cities currently covered include Los Angeles, Boston, and New York, plus 24 other major cities. In the future, the Senseable City Lab wants to expand the map to cover more places across the globe.

Map of green canopy space in Boston, MA
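At its core, a Green View Index is just the fraction of vegetation pixels in street-level imagery. A minimal sketch of that idea in JavaScript, where the "is this pixel vegetation?" test is a naive greenness threshold I made up for illustration, not Treepedia's actual classifier:

```javascript
// Toy Green View Index: fraction of "green" pixels in an image.
// Each pixel is [r, g, b]; a pixel counts as vegetation if the green
// channel clearly dominates red and blue (a stand-in for the real method).
function greenViewIndex(pixels) {
  let green = 0;
  for (const [r, g, b] of pixels) {
    if (g > r * 1.2 && g > b * 1.2) green++;
  }
  return pixels.length === 0 ? 0 : green / pixels.length;
}

// Toy example: two vegetation-like pixels out of four.
const sample = [
  [40, 120, 30],   // leafy green
  [60, 140, 50],   // leafy green
  [120, 110, 100], // pavement grey
  [200, 80, 60],   // brick red
];
console.log(greenViewIndex(sample)); // 0.5
```

Averaging this index over many Street View panoramas per city would give one number per city to compare, which is roughly what the side-by-side comparisons show.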

I find this project interesting because of the way it approaches the issue of cities not having enough trees to help fight environmental issues. It’s something different that I personally haven’t seen anywhere else before: an easy-to-understand map that conveys all the information it needs to. One of my favorite aspects of the project is the comparison between the different cities. This seems like a very efficient way for cities to know how well their initiatives to create more green canopy space are working, and whether any further action should be taken. The more cities that are represented on this interactive map, the more successful I believe this project will be in the future.

Comparison of green canopy space between 4 cities

Visit the Treepedia website here.


Jennifer Kong-LookingOutwards-01

Demo of the Manual Input Workstation (2004) by Golan Levin and Zachary Lieberman

The Manual Input Workstation was made by artists Golan Levin and Zachary Lieberman in 2004. How long it took to complete can’t be determined. The project is interactive through hand movements, which create shapes via an analog overhead projector and a video projector, driven by a custom-made program. The artists were inspired by an earlier collaboration from 2003, Messa di Voce, an interactive art installation where audio was interpreted into interactive visual feedback. Manual Input Workstation was, in a sense, the inverse of Messa di Voce: Messa di Voce used audio to create visuals, while Manual Input Workstation used visuals to create audio. In some ways, Manual Input Workstation was a precursor to virtual reality. It blurred the line between virtual space and ‘real’ space.

Messa di Voce (2003) by Golan Levin and Zachary Lieberman, a similar project

I personally really loved this project because of how whimsical it is. Interactive installations are one of my favorite genres because they invite everyday people to engage with art, and this piece does just that. I was amazed by the capability and complexity of code in 2004. I can imagine the top-down design but have no idea how to implement it! These two aspects make the project admirable.


Sources: Official Website for Project

Documentation for Project by Golan Levin

Xiaoying Meng-LookingOutwards-1

(The ABC of Architects: a list of the most important architects and their buildings)

The ABC of Architects is a motion-graphics video introducing some of the most famous architects and their iconic buildings in alphabetical order. It was created by Andrea Stinga and Federico Gonzalez. The creators did a good job of including architects and architecture from countries around the world. They also have other architectural videos and graphics on OMBU Architecture, such as this one about Pritzker Prize winners. I find The ABC of Architects intriguing and informative. The motion graphics simplify the forms and represent the character of the buildings very well, making it easier for people to remember them. The graphics also move in time with the music, which better engages the audience and makes the video more interesting. My only complaint is that the graphics sometimes move too fast, making them difficult to follow. I think the same type of motion graphics could be used to create educational videos about building construction and design ideas.

Elena Deng Portrait

sketch

/* Elena Deng
   Section E
   edeng1@andrew.cmu.edu
   Assignment-01
*/

function setup() {
    createCanvas(350, 450);
    background(255,255,255);
}

function draw() {
  //hair
    fill(62,36,12);
    ellipseMode(CENTER);
    ellipse(215,172,282,267);
    rect(215,175,280,334);
    ellipse(219,197,208,67);

  //jacket
    fill(140,96,57);
    ellipseMode(CENTER);
    ellipse(196,484,265,366);


  //face
  noStroke();
  fill (217, 173, 128);
  ellipseMode(CENTER);
  ellipse(208,213,244,280);
  ellipse(197,260,259,204);
  ellipse(331,251,35,57);

  //hair pt 2
  fill(62,36,12);
  ellipseMode(CENTER);
  ellipse(223,97,208,67);


  //eyebrows
    noStroke();
    fill(62,36,12);
    ellipseMode(CENTER);
    ellipse(225,182,86,30);
    ellipse(103,190,46,26);

  //eyebrowspt2
    noStroke();
    fill(217, 173, 128);
    ellipse(235,194,104,30);
    rect(165,165,25,30);
    ellipse(108,200,44,29);
    rect(117,171,18,39);

//eye
  noStroke();
  fill(0,0,0);
  ellipseMode(CENTER);
  ellipse(225,217,77,39);
  ellipse(98,215,50,32);

  fill(217, 173, 128);
  ellipse(225,225,77,35);
  ellipse(100,222,47,32);

//nose
  noStroke();
  fill(0,0,0);
  ellipseMode(CENTER);
  ellipse(132,253,35,39);

  fill(217, 173, 128);
  ellipse(132,260,35,26);

//mouth
  noStroke();
  fill(0,0,0);
  ellipseMode(CENTER);
  ellipse(135,307,60,40);

  fill(217, 173, 128);
  ellipse(130,318,59,40);

//hand
  noStroke();
  fill(217, 173, 128);
  ellipse(75,375,93,134);
  ellipse(96,408,100,38);

//jacketpt2
  fill(140,96,57);
  rect(37,425,75,82,20);

//little splotches of joy
  fill(217,185,152);
  ellipse(330,251,21,35);
  ellipse(238,259,98,59);
  ellipse(159,151,62,36);
  ellipse(81,362,70,76);

//windowpane
  fill(173,215,237,63);
  noStroke();
  rect(0,0,296,456);
  fill(255,255,255,63);
  rect(0,0,305,456);



}

i call this piece: no escape from studio. pictured below (or above) is a design student (possibly me), trapped within the depths of margaret morrison

actually really enjoyed this project! i usually procrastinate until the last minute but once i started to work on the portrait i found it hard to stop.

Jenni Lee – Looking Outwards – 01

Demo of VR automotive design

Seymourpowell’s virtual reality tool for automotive design enables designers in different locations to collaborate on design projects. It acts as a virtual-reality 3D drawing and modelling tool for full-sized sketches of automobiles. Collaborators can dial into a project through a VR headset; the 3D sketch is then projected in front of them, allowing them to fully experience the design.

I admire this project, as not only will this tool allow for efficient virtual collaboration and advanced sketching features, but it will open paths for future designers by allowing them to understand how their car designs will exist in physical environments in various conditions. As I’m interested in products that intersect art and technology, this virtual reality tool is especially intriguing to me because it will be used across engineering, marketing, and design fields.

Jenny Hu — Looking Outwards 01

Slow Games by Ishac Bertran is one object in a series that reflects on people’s use of technology. It is a game based on classic video-game moves, but with the pace reduced to one move per day. The technology and program used are unclear.

The slow speed of the game is the main differentiating factor for me. Instead of fast-paced, reactive choices, the game tests patience and memory. It is a project that uses technology and computation to create a direct contrast with our dominant technological framework today. It points to a different future where the value of technology takes a very different tone.

The full project page can be found here. 


Alice Fang-LookingOutwards-1

Demonstration of Sandbox AR

This augmented reality sandbox, developed by a team at UC Davis, is an interactive topographical map that changes based on the shape of the real sand in the sandbox. Elevation, water, and contour lines are projected in real time as the sand shifts and changes. I think this project is great from a design standpoint, bringing physical interaction to augmented reality as a tool for education. Users can change the topography with their own hands, and the projection provides a deeper understanding of the resulting landscape, as well as simulating water flow into the valleys and surfaces created. I think it would be interesting to consider how the project would work in an exhibition, for example if it were displayed at the Carnegie Museum of Natural History.
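The core display idea is a mapping from sensed sand elevation to topographic color bands. A tiny sketch of that mapping, with thresholds and colors invented here for illustration (the real software calibrates against its depth camera and renders contour lines as well):

```javascript
// Map a sand elevation reading to a topographic color band.
// Thresholds are arbitrary illustration values, not the UC Davis calibration.
function elevationColor(h) {
  if (h < 0) return "blue";   // below "sea level" -> water
  if (h < 10) return "green"; // lowlands
  if (h < 20) return "brown"; // hills
  return "white";             // peaks
}

// Running this over a grid of depth-camera readings, once per frame,
// would produce the projected color map.
const readings = [-3, 5, 14, 25];
console.log(readings.map(elevationColor)); // ["blue", "green", "brown", "white"]
```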

To the best of my knowledge, the team developed custom software, available as a free download, for anyone who wants to build their own sandbox AR(!). I’ve also seen a few videos of science teachers unaffiliated with UC Davis who built the sandbox AR for their own classrooms, and I think having something like this open source is a great resource for educators, as well as for anyone interested in AR.

Oliver Kreylos, UC Davis

Helen Reynolds – Looking Outwards – 01

Cat Explorer in use

The Cat Explorer is a virtual reality demo that allows the user to explore the anatomy of a cat.

The user moves their hand around to navigate the virtual setting, revealing the virtual cat’s skin, muscles, organs, vascular system, and skeleton. I admire this program because I see it as a window of opportunity for using virtual reality for interactive education in fields (such as healthcare or the life sciences) where I wouldn’t have expected it. The demo is engaging, cute, and deceptively informative, which I believe makes for an effective learning experience. I hope to explore code and design to create interactive programs that are as engaging and informative as this one.

See more here: https://www.thisiscolossal.com/2018/06/virtual-reality-cat-explorer/

More Cat Explorer!

Kade Stewart-LookingOutwards-01

“Twitch Plays Pokemon” was a project created by an anonymous Australian programmer in 2014. It ran on the video-game streaming site Twitch and was meant to be played by any user who visited the stream, alongside thousands of other users. Commands for the game were typed into the chat, allowing any user to influence the gameplay. The programmer wrote a Python script to take the commands typed in chat and send them to an emulator running the game. They also created a JavaScript program that tallied the moves so that those in chat knew what others were doing.
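The two pieces described above can be sketched in a few lines: filtering chat lines down to valid button presses (the job of the Python script feeding the emulator) and tallying them for display (the job of the JavaScript overlay). The button names are the Game Boy controls, but the parsing and tally rules here are my guess at the setup, not the original code:

```javascript
// Valid Game Boy inputs for the emulator.
const BUTTONS = new Set(["up", "down", "left", "right", "a", "b", "start", "select"]);

// A chat line counts as a command only if, trimmed and lowercased,
// it is exactly one button name; everything else is ignored.
function parseCommand(chatLine) {
  const cmd = chatLine.trim().toLowerCase();
  return BUTTONS.has(cmd) ? cmd : null;
}

// Tally recognized commands so viewers can see what the crowd is pressing.
function tallyCommands(chatLines) {
  const counts = {};
  for (const line of chatLines) {
    const cmd = parseCommand(line);
    if (cmd) counts[cmd] = (counts[cmd] || 0) + 1;
  }
  return counts;
}

const chat = ["a", "UP", "a", "helix fossil!", "left"];
console.log(tallyCommands(chat)); // { a: 2, up: 1, left: 1 }
```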

Twitch Plays Pokemon example gif

The programmer stated that they initially created the project as a social experiment. While some slight changes were made to make certain disruptive actions less frequent, I admire that Twitch Plays Pokemon brought together a huge number of unrelated people for a single purpose: thousands of people all working toward completing the game. Despite all the stretches where it seemed like nothing would happen, Twitch chat was able to beat the game. This kind of extreme crowdsourcing has become technologically plausible only recently, and it illustrates how technology can bring people together.