elizabew – looking outwards – 01 – SectionE

This interactive installation and multi-sensory experience, New Spring, is a collaboration between architect Azusa Murakami and artist Alexander Groves. The video showcases the installation's beautiful design through striking cinematography, demonstrating how the installation works and how visitors interact with it.

“Inspired by the famous cherry blossom festival in Japan, the installation is designed to create a special moment that brings people together. A fleeting shared experience that evokes a sense of the changing seasons.” — Studio Swine

What I love about this installation is its "out of this world," almost fantasy-like feeling. It is both incredibly pleasing to look at and practically pulls you in to try and experience it. The mist-filled bubbles look heavy and delicate at the same time (and, according to the website, can be held using special gloves).

I personally feel that if it was able to capture all five senses, it would be an even stronger piece (perhaps if the mist of each bubble smelled different and the bubbles themselves would taste different).

For further reading on this installation: New Spring

The overall design of this installation is inspiring (while using only minimal resources!), and the video itself is alluring as well. I hope to one day create designs that capture both the visual and the physical senses in the same way.

aboyle-Looking Outwards-01

“Dream Garden” is a phone-based project that invites people to decorate an urban space by submitting 7-word poems, which the project refers to as dreams. Participants can then use Layar, a free augmented-reality app, to see a floating garden of dreams in the space around them.

Above: Video describing project

I think this project is super cool because it combines poetry and computer science, a combination that is relatively rare but definitely interesting. I admire both the interdisciplinary nature of the project and the inclusivity of it. Anyone can submit a dream and have it displayed, so the garden really represents the people who live near the urban space.

The artists behind this project are Matt Roberts, Terri Witek, and Michael Branton. Roberts is a new-media artist, Witek is an author and creative-writing teacher, and Branton helped design and found the computer science major at Stetson, where he still teaches. Their bios and a description of the project can be found at http://inthedreamgarden.com/.

I hope that this project can inspire others to use augmented reality in creative ways, as well as consider the place of poetry and other writing in technology.

thlai-LookingOutwards-01

Oskar & Gaspar is a collective of visual artists and multimedia professionals (originally two Portuguese twins) who specialize in video mapping and 3D projection. I first saw them on America’s Got Talent and was intrigued by their work.

One of their projects involved video-mapping projections onto tattoos, beautifully bringing the tattoos to life. Fascinated by tattoos, they used the human body as a canvas in a unique way that could be demonstrated live, with a deeper impact than seeing it on a screen. The process involved scanning and then animating the tattoos, then projecting the animations directly onto the original models. They first showcased this work at the first live tattoo video-mapping event in Lisbon, Portugal.

In the video description, they emphasize that no post-production was used, so all the effects in the video were also seen live. Many have critiqued the performance as not interesting enough, and I agree that it would be improved by having the model move, even if syncing the projection would be difficult.

There are plenty of earlier projection-mapping projects that may have inspired Oskar & Gaspar, so the concept is not new; they took projection art and put their own special twist on it. This sort of project opens up endless opportunities for using the human body to present artwork in a technologically advanced way, and I can see performance artists of any sort using it to enhance their stage presence visually.

selinal-Project-01

sketch

//Selina Lee
//selinal@andrew.cmu.edu
//Project 01

function setup() {
    createCanvas(600, 600);
    background(250, 250, 120); // yellow background

    noStroke();

    // hair
    fill(50, 50, 30);
    arc(300, 225, 250, 175, PI, TWO_PI);
    rect(175, 225, 250, 250);

    // neck
    fill(210, 180, 160);
    rect(260, 350, 80, 140);

    // green shirt
    fill(140, 200, 180);
    arc(190, 570, 200, 200, HALF_PI, PI + HALF_PI);
    arc(410, 570, 200, 200, PI + HALF_PI, HALF_PI);
    rect(190, 470, 220, 300);

    // chest
    fill(210, 180, 160);
    arc(300, 470, 220, 50, 0, PI);

    // face
    fill(210, 180, 160);
    ellipse(300, 300, 210, 260);

    // brown eyes
    stroke(90, 80, 50);
    strokeWeight(7);
    fill(0);
    ellipse(260, 275, 22, 22); // left
    ellipse(340, 275, 22, 22); // right

    // mouth
    stroke(220, 150, 160);
    strokeWeight(5);
    fill(0);
    arc(300, 335, 100, 100, 0, PI);

    // tongue
    noStroke();
    fill(220, 140, 150);
    rect(275, 335, 50, 50);
    arc(300, 385, 50, 50, 0, PI);

    // teeth
    fill(255);
    arc(300, 335, 60, 30, 0, PI);

    // eyebrows
    fill(50, 50, 30);
    quad(240, 240, 250, 250, 280, 252, 285, 242); // left
    triangle(240, 240, 250, 250, 220, 255);
    quad(360, 240, 350, 250, 320, 252, 315, 242); // right
    triangle(360, 240, 350, 250, 380, 255);

    // nose
    stroke(0);
    strokeWeight(1);
    noFill();
    arc(300, 310, 20, 15, 0, PI);

    // brown hair strand
    stroke(50, 50, 30);
    strokeWeight(5);
    bezier(300, 170, 200, 200, 270, 280, 200, 300);
}

function draw() {
    // The sketch is static, so stop the draw loop shortly after it starts.
    if (millis() > 2000) {
        noLoop();
    }
}

hqq – LookingOutwards-01

Hi guys! One piece of computationally-driven art that excites me is Pele-Mele and Boite-Noire by Olivier Ratsi and Martin Messier. It was on display at the Wood Street Galleries in Downtown Pittsburgh during the summer of 2016. I was particularly inspired by the way that the pieces warp the perception of space to turn a white box gallery into an extremely disorienting field of projections. It uses an optical process called anamorphosis (honestly, I had no idea what this was called until I read about it in this review), which gives people in the space the ability to warp their perception of the piece based on their position and optical height, with any combination of those two variables yielding a different composition.

I’m also particularly inspired by the process that Ratsi and Messier followed in creating the piece. They developed a computing process that took Renaissance styles of anamorphosis and regenerated them within specific spaces. The piece was generated by fully developed, customized software that Ratsi created himself. This demonstrates the influence that older artistic principles can still have on computational art and design today.

Rachel Karp-LookingOutwards-1

I had known about theater artist Annie Dorsen for years, but my first direct exposure to her work was Yesterday Tomorrow, which I saw in New York in 2016.

Natalie Raybould, Jeffrey Gavett and Hai-Ting Chinn in “Yesterday Tomorrow.” Credit Sara Krulwich/The New York Times

Yesterday Tomorrow is a performance by three vocalists who sight-sing a score created live by a computer algorithm that transforms the Beatles’ song “Yesterday” into the musical Annie’s song “Tomorrow” over the course of about an hour. Each time the algorithm is run, the evolution from “Yesterday” to “Tomorrow” is unique. It is the third in Dorsen’s trilogy of what she calls algorithmic theater, “in which customized, algorithm-driven computer software controls the transformation of dramatic content in real-time.”[1]

The performance involved a number of creators, some typical for a musical theatrical work (director, musical director, lighting designer, sound designer, production manager), but others not always found in the credits, including a lead computer programmer, Pierre Godard, and a video systems designer, Ryan Holsopple.

The idea built on Dorsen’s previous algorithmic works and arose more specifically from Dorsen’s research into evolutionary computation. As she explained in an interview with BOMB Magazine, “I was learning about evolutionary computation, and I had a thought: You could use an algorithmic tool to slowly and unpredictably turn one thing into another. And then the very next thought was to turn the song ‘Yesterday’ into the song ‘Tomorrow.’ It was that automatic.”[2]

In another interview on website Esoteric.codes, Dorsen describes the process by which she and her team landed on the specific types of algorithms to use. (Sadly I couldn’t find the total number of people involved in the programming development team.) At first she wanted to use a genetic algorithm, in which the computer would transform “Yesterday” to “Tomorrow” by learning. But she and a programmer found that that method did not ensure the computer would reach “Tomorrow.” So instead, Dorsen worked with Godard and music director Joanna Baile to land on migration algorithms through which “Yesterday” shifted to “Tomorrow” through 30 steps (a number they also arrived at through experimentation; personally I think a few fewer steps might have made for a more compactly satisfying experience). Each element of performance has its own migration algorithm, meaning that the rhythm, lyrics, and melody migrations are generated independently.[3] Within all this structure, a lot of randomness is allowed, ensuring the performance is unique each time the program is run. The randomness has a direct tie to John Cage, whom Dorsen cites as an influence.[4]
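To make the idea concrete, here is a toy sketch of a stepwise, randomized migration from one lyric to another over a fixed number of steps. This is emphatically not Dorsen and Godard's actual software, just an illustration of the core idea: every run takes a different random path, but step 0 is always the source text and the final step is always the target.

```javascript
// Toy "migration" between two texts: word slots flip from source to
// target one at a time, in a random order supplied by `rng`.
// Hypothetical illustration only -- not the Yesterday Tomorrow code.
function migrate(source, target, steps, rng) {
  const src = source.split(" ");
  const tgt = target.split(" ");
  const len = Math.max(src.length, tgt.length);
  const arrived = new Array(len).fill(false); // slots that reached the target
  const path = [];
  for (let step = 0; step <= steps; step++) {
    // By step k, roughly k/steps of the word slots should have migrated.
    const quota = Math.round((step / steps) * len);
    while (arrived.filter(Boolean).length < quota) {
      const pending = arrived
        .map((done, i) => (done ? -1 : i))
        .filter(i => i >= 0);
      arrived[pending[Math.floor(rng() * pending.length)]] = true;
    }
    path.push(
      arrived
        .map((done, i) => (done ? tgt[i] : src[i]) || "")
        .join(" ")
        .replace(/\s+/g, " ")
        .trim()
    );
  }
  return path;
}

// Example: migrate one line into another in 8 steps.
console.log(migrate("all my troubles seemed so far away",
                    "the sun will come out tomorrow", 8, Math.random));
```

Because only the order of the flips is random, the endpoints are guaranteed: unlike the genetic-algorithm approach Dorsen first tried, a migration like this cannot fail to arrive at "Tomorrow."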

To me, Dorsen’s use of algorithm in theater points to the coming ubiquity of including advanced technology across the theatrical field, as everyone and everything, theater included, transforms from yesterday to tomorrow.

A video excerpt from Yesterday Tomorrow

For more information, check out the full Esoteric.codes interview, which features a detailed explanation of the algorithms used as described by programmer Godard, this review from the New York premiere that I saw, and Dorsen’s 2012 essay about algorithmic theater.

Citations:
[1] Hallet, Nick. “Annie Dorsen.” BOMB Magazine, 12 Jan. 2016, www.bombmagazine.org/article/7164111/annie-dorsen.
[2] Hallet, Nick. “Annie Dorsen.”
[3] “A look at Algorithmic Theatre with ‘Yesterday Tomorrow’ creator Annie Dorsen.” esoteric.codes, 23 Feb. 2016, www.esoteric.codes/post/139854787758/a-look-at-algorithmic-theatre-with-yesterday.
[4] Hallet, Nick.


eeryan-Looking Outwards – 01


A Piece of the Pie Chart is an interactive robotics installation created by Annina Rust. Viewers can look up different professions within the arts and tech industries on a monitor; a pie chart of the gender ratio in that profession is then printed, moved by a robotic arm, and attached to a cookie. The resulting pie charts are automatically photographed and tweeted. I found the artist/inventor's unique approach interesting: it provides a platform to examine the unequal ratio of men to women in colleges, labs, and workplaces in the tech world, and its reference to baking, traditionally considered a "women's role," drew my attention. She began this project in 2013, and it is still on display. To my knowledge, she did not use custom software in creating it. When I saw her speak, she mentioned that while she isn't currently updating the project, she has considered creating a future version that uses something other than robot arms for assembly. My only critique of this work is that the pie charts are not actually edible, which makes the final product of the installation feel less unified.
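As a rough illustration of the arithmetic behind each printed chart (a hypothetical sketch; as noted above, Rust may not have used custom software at all), converting category counts into pie-slice angles takes only a few lines:

```javascript
// Hypothetical sketch: turn category counts (e.g. [men, women] in a
// profession) into start/stop angles, in radians, for pie slices.
// Not Annina Rust's actual code -- just the math any pie chart rests on.
function pieAngles(counts) {
  const total = counts.reduce((sum, c) => sum + c, 0);
  let start = 0;
  return counts.map(c => {
    const slice = { start, stop: start + (c / total) * 2 * Math.PI };
    start = slice.stop; // the next slice begins where this one ends
    return slice;
  });
}

// Example: a 1:3 ratio yields a quarter slice and a three-quarter slice.
console.log(pieAngles([1, 3]));
```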

The final product


ghou-lookingoutwards-01

teamLab – Flower Forest: Lost, Immersed, and Reborn 

This summer, I was fortunate to travel to the Beijing 798 art district where I saw the project Flower Forest: Lost, Immersed and Reborn. This project is an immersive, interactive digital installation.

This piece is one of the many large scale digital art pieces by the group teamLab and sound designer Hideaki Takahashi. This group refers to themselves as ultra-technologists. Their mission includes expanding art using digital forms and developing relationships between people and art with these installations. These artists are also inspired by a collective building process bringing together artists, programmers, animators, mathematicians, engineers, architects, web and print graphic designers, and editors.

This shared experience across so many disciplines is what inspired me to consider IDeATe. The project redraws the boundary between physical design and architectural space. I admire these ultra-technologists for their ability to express change in our environment and in art. Work like this opens up limitless possibilities for designing spaces, and users/viewers will be able to experience art more directly.

Looking Outwards-01

Nike recently built a running stadium in Singapore where participants run on a track alongside an LED wall. After one lap, a digital avatar of the participant’s previous lap is displayed, and the participant can run at the same pace or try to beat it. What I admire about this project is the interdisciplinary approach to designing and realizing the abstract concept of racing against, and trying to beat, one’s best self. This was a large-scale project whose key members included a creative director, a creative technologist, an art director, and tech and building teams. Aste Gutierrez is credited with the idea and creation of the project, known as the Nike Unlimited Stadium. Radio-frequency identification (RFID) software was designed and personalized for the project, allowing each individual to be tracked at every step of every lap. I see this project as a catalyst for combining the worlds of virtual and physical gaming while stripping away the negative characteristics of both.

NIKE Unlimited Stadium
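The "race your previous lap" mechanic boils down to replaying the last lap at its recorded pace. A minimal, hypothetical model (an assumption for illustration, not Nike's actual system) might look like:

```javascript
// Toy model of racing a digital avatar: the avatar replays the runner's
// previous lap at a constant pace. Positions are fractions of one lap
// (0 = start line, 1 = lap complete). Hypothetical helpers, not Nike's code.
function avatarPosition(elapsedMs, previousLapMs) {
  // The avatar finishes exactly when the previous lap's time elapses.
  return Math.min(elapsedMs / previousLapMs, 1);
}

function runnerIsAhead(runnerPosition, elapsedMs, previousLapMs) {
  // Compare the runner's tracked position (e.g. from RFID checkpoints)
  // against where the avatar would be at this moment.
  return runnerPosition > avatarPosition(elapsedMs, previousLapMs);
}

// Example: 30 s into re-running a 60 s lap, the avatar is halfway around.
console.log(avatarPosition(30000, 60000));
```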


svitoora – 01 Looking Outwards

Laurie Anderson (Generative Portrait) Diana Lange

This generative project was created by the artist alone, using Processing. To the best of my knowledge, Lange uses an image-processing step to define the nodes that the segments connect. Diana Lange is inspired by nature and nature’s processes. To me, this portrait is reminiscent of a handmade nail-and-thread portrait. The future this portrait points to is one where our identities are digitized. What does it mean to have our identity digitized, and can one truly have an identity in a digital age?
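A guess at the kind of process described above (purely an assumption about Lange's approach, not her code): treat dark pixels of the source image as nodes, then connect nodes that sit close together with segments. On a tiny grayscale grid, the idea looks like:

```javascript
// Hypothetical sketch of a node-and-segment portrait process:
// dark cells of a grayscale grid become nodes, and nearby nodes
// are joined by line segments. Not Diana Lange's actual code.
function darkNodes(gray, threshold) {
  const nodes = [];
  for (let y = 0; y < gray.length; y++)
    for (let x = 0; x < gray[y].length; x++)
      if (gray[y][x] < threshold) nodes.push({ x, y });
  return nodes;
}

function connectNearby(nodes, maxDist) {
  const segments = [];
  for (let i = 0; i < nodes.length; i++)
    for (let j = i + 1; j < nodes.length; j++) {
      const dx = nodes[i].x - nodes[j].x;
      const dy = nodes[i].y - nodes[j].y;
      if (Math.hypot(dx, dy) <= maxDist) segments.push([nodes[i], nodes[j]]);
    }
  return segments;
}

// Example: two dark corners of a 2x2 grid yield one connecting segment.
const nodes = darkNodes([[0, 255], [255, 10]], 128);
console.log(connectNearby(nodes, 2));
```

In Processing proper, the segments would then be drawn with `line()` over the portrait; the density of dark pixels controls where the tangle of threads accumulates.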