BrandonHyun-LookingOutwards-08

Heather Knight is a doctoral student currently conducting her research at Carnegie Mellon’s Robotics Institute and running Marilyn Monrobot Labs in NYC, which creates socially intelligent robot performances and sensor-based electronic art. She earned her bachelor’s and master’s degrees at MIT in Electrical Engineering and Computer Science, with a minor in Mechanical Engineering. I’m inspired by her work because she creates robots that interact with humans and uses robotic intelligence to create performances that are interesting.

Footnote on the video: Heather Knight’s TED talk featuring Marilyn Monrobot, her lab that creates socially intelligent robot performances and sensor-based electronic art.

 

Her work also includes robotics and instrumentation at NASA’s Jet Propulsion Laboratory, interactive installations with Syyn Labs, and field applications and sensor design at Aldebaran Robotics; she is also an alumna of the Personal Robots Group at the MIT Media Lab.

She speaks very fluently, and it was nice to see the actual robot interacting with the audience. I find that very powerful and a good way to present.

Looking Outwards 08 – Yugyeong Lee

James George is a media artist and programmer who generates films, interactive installations, and software products advancing the art of the moving image. He created a new medium for cinematic expression by incorporating code to bring volume to his films. In his presentation at Eyebeam Art + Technology, he talks about photography. In a time when “ten percent of all photos were taken in the last twelve months,” he explores what that means for photographers in terms of their contribution to the database. This leads to a fascination with humans in digital space. One of the projects that reflects that interest is Depth Editor Debug, which depicts fragments of candid photographs placed into a three-dimensional space. Using data from a video game controller in combination with custom software, the project captures unique data from a depth-sensing camera and visualizes people existing in their natural way in a virtual environment. The project has since been developed to create films with the same concept and to let users visualize the data online. James George is admirable in the way he visualizes and transforms a typical photograph into an interactive form represented in physical space, reflecting his inquiry into humans in digital space. He also presented in a step-by-step process that engages the viewers with the topic through quotes and examples that reflect the purpose of these projects.

lecture: https://vimeo.com/channels/eyeo2015/134973504

website: http://jamesgeorge.org/

photographs of Depth Editor Debug project

atraylor – Looking Outwards 08 – Section B

For this post, I watched Chris Sugrue‘s lecture. She is a visual artist, designer, and programmer who creates digital interactive installations. She has a Master of Fine Arts in Design from Parsons School of Design and currently teaches at Parsons Paris, in France. Sugrue describes her own art as playful, and it mainly comprises light and interaction.

Someone using Delicate Boundaries.

I’m inspired by her work because it focuses on immersing the viewer/user in the interaction and in the world that she has created. Her project Delicate Boundaries is interesting to me because it bridges the gap between the screen and the real world through an illusion where bugs emerge from the screen and onto your arms when it is touched. I like the life that she gave to the virtual bugs as they swarm your hand, desperate to escape. Her other projects, like Memory of Form and Matter and Base 8, also use illusions like Pepper’s ghost to integrate digital imagery into the physical world.

A still from Memory of Form and Matter

 

jamieh-LookingOutwards-08

(Above is Kate Hollenbach’s lecture at INST-INT 2014)

Kate Hollenbach is an artist and programmer who develops interactive systems and technologies that incorporate the human body, human gestures, and the physical space of the environment. Her experience comes from working as an interface designer and product developer with a computer science undergraduate background. She was previously the Director of Design and Computation at Oblong Industries, where she oversaw the Mezzanine project. One of the projects she took part in while at Oblong that I found most interesting is “Tamper”. It uses multiple screens that play videos, and the Tamper system allows the user to edit videos (cinematic design) through gestures alone, sensed through a glove the user wears. What I love about this project is that this type of technology always seemed surreal, something only seen in movies. However, it is possible through sensors and data collection; it makes the imagined possible. The Tamper project then led to other, more developed projects, such as G-Speak (which allows more than one person to interact with the system) and Rummage (photo sorting). All of her projects think about 3D space, transforming two-dimensional things like videos and photos into objects within an XYZ plane. As her works are very complex and may not be easily understood or visualized through words alone, they are explained through demonstration videos of someone interacting with the system.

 

(Below is a demonstration of what Tamper does)

dnam-LookingOutwards-08

While we center art on traditional painting, drawing, and some newer technology-generated visual art, Sissel Tolaas’s work is very interesting. In fact, it stinks. Tolaas focuses on scent and its ability to bring back memories, and she has collected over 7,000 different real scents. She believes that scent is one of the strongest image- and memory-provoking senses that we have. In her presentation, she showcases the variety of scents she has collected and how she collects them in the first place. While Sissel Tolaas does not have a separate artist page, she does have the Smell Memory Kit infosite.

nahyunk1 – Looking Outwards 08

https://vimeo.com/channels/eyeo2012

Jake Barton, a designer who focuses on audience engagement through storytelling, talks about unity across different representations of an idea and about finding a new filter (a mode) through which people can perceive the world using creative and technological expressions. He also indicates that through these new modes of looking at the world in a different way, we are permitted to move from the “…’probable’ to the ‘plausible’ to the ‘possible'” (28:49). Through works such as the interactive doodle screen, where people draw a line figure and the screen searches for paintings that contain similar curvature, Jake encourages different disciplines such as coding, art, and media to collaborate and help people find new modes of perception. Toward the beginning of his speech, he lays out a caveat before starting his lecture: the lecture won’t define the word creativity. Such use of speech skills in his presentation seemed successful for an audience whose members might hold different ideas about the topic.

https://localprojects.net/
Founder of Local Projects: Jake Barton

rsp1-LookingOutwards-08

Poisonous Antidote

Mark Farid. Image courtesy of the artist.

https://creators.vice.com/en_us/article/4xq99d/london-artist-turns-entire-online-public-portraits

London-based artist Mark Farid explores Poisonous Antidote in an online gallery (Gazell.io), where he offers up his various online presences as 24-hour public portraits over the course of 31 days. Data from his emails, text messages, phone calls, Skype conversations, and other platforms is used as fodder for an abstract, ever-evolving 3D-printed sculpture made of four unique parts, each portraying a week of Farid’s life. In this project, he focused on how one’s internet personality is established in the form of passwords and other inputs one might leave about oneself on the internet.

From Farid himself: “I’m interested to see how I self-censor and how I change my actions because everything is being broadcast live. Will I stop saying certain things to certain people? Will I try to look more interesting and fun, so will I go on different websites?”

As his visualization, Farid decided to 3D print the data that he gathered into a solid object, almost like a graph.

3D print of Farid’s data

jiaxinw-LookingOutwards 08

Kate Sicchio

Photo of Kate Sicchio

Kate Sicchio is a choreographer, media artist, and performer. She is currently a Visiting Assistant Professor in Integrated Digital Media at New York University. Kate’s Ph.D. focused on the use of real-time video systems within live choreography and on ‘choreotopology’, the conceptual framework she developed to describe this work. She explores the interface between choreography and technology, and her works show a very interesting relationship between dance and technology.

I admire the way she tries to re-think the choreography of dance and transform it into another technical form. “Hacking the Body” is one of her projects, in which she and other artists explored the interaction between dance and wearable technology. In “Hacking the Body 2.0”, two wearables were designed to be put on two dancers’ bodies as they performed. The wearables translated the dancers’ movements into signals and turned them into sounds. The research was described as “using the concept of hacking data to re-purpose and re-imagine biofeedback from the body.” This project showed an impressive possibility for combining live performance and real-time technology. It is a great inspiration for thinking about the messages from the human body in a different way.

She usually performs her works together with dancers wearing wearables or other devices, using visual and audio feedback as the outward expression of the technical part. By watching the live performance alongside the real-time technical feedback, the audience gets a sense of the connection between these two aspects.

This is the Eyeo Festival page and talk video for Kate Sicchio.

Kate Sicchio

 

Here is the video of “Hacking the Body 2.0”

If you want to know more, please go to her website: http://blog.sicchio.com/biog/

Project-07-Curves

sketch.js

//Hanna Jang 
//Section B 
//hannajan@andrew.cmu.edu 
//Project-07

var numberPoints = 200;

function setup() {
	createCanvas(420, 420);
	frameRate(15);
}

function draw() {
	// map the background color to the mouse using r, g, b
	var r = map(mouseX, 0, width, 0, 255);
	var g = map(mouseY, 0, height, 0, 90);
	var b = map(mouseX, 0, width, 0, 250);
	background(r, g, b, 30);

	// stroke
	stroke(240, 174, 231);
	strokeWeight(0.3);

	translate(width / 2, height / 2);
	drawEpitrochoid();
}

function drawEpitrochoid() {
	var x;
	var y;
	var h = 100; // distance of the tracing point from the rolling circle's center
	var theta;

	// variables that modify the curve with the mouse
	var a = map(mouseX, 0, width, 0, 20);
	var b = max(map(mouseY, 0, height, 0, 40), 1); // keep b above 0 to avoid dividing by zero

	// draw the epitrochoid
	beginShape();
	for (var i = 0; i < numberPoints; i++) {
		theta = map(i, 0, numberPoints, 0, TWO_PI); // sweep one full revolution, in radians
		x = (a + b) * cos(theta) - h * cos(((a + b) / b) * theta);
		y = (a + b) * sin(theta) - h * sin(((a + b) / b) * theta);
		vertex(x, y);
	}
	endShape();
}

I started this project by looking through the different mathematical curves on the MathWorld curves site. I stumbled upon a specific curve named the epitrochoid that I really liked. It reminded me of the shape of a flower, which is why my project is themed around pink, to give it a pink-flower look.

I wanted the background to also change according to the mouse movement, which is why I also made maps for the different R, G, B variables. It was a bit hard to work out how I wanted the mouse to interact with the different variables of the functions, but once I made maps to form relationships between the mouse and variables a and b, I was set.
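Those mouse-to-parameter relationships all rest on p5.js's map(), which is plain linear interpolation. A quick sketch of the same formula in standalone JavaScript (mapRange is a made-up name to avoid clashing with p5's map()):

```javascript
// Linear rescaling identical in spirit to p5.js map():
// carry value from the range [inLo, inHi] to the range [outLo, outHi].
function mapRange(value, inLo, inHi, outLo, outHi) {
  return outLo + (outHi - outLo) * ((value - inLo) / (inHi - inLo));
}

// A mouse at x = 210 on a 420px-wide canvas lands mid-range:
// mapRange(210, 0, 420, 0, 255) === 127.5
```

So map(mouseX, 0, width, 0, 255) simply slides the red channel from 0 at the left edge to 255 at the right edge.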

(Shown above are examples of different Epitrochoid patterns and sizes according to the movement of mouse)

rgroves – Curves – Section B

sketch

var nPoints = 250;
var rotations = 30;

function setup() {
    createCanvas(480, 480);
    frameRate(10);
}

function draw() {
    background(250);
    stroke(250);
    translate(width / 2, height / 2);
    var s = TWO_PI / rotations;
    // draw the even-numbered petals first, then the odd ones on top
    for (var j = 0; j < rotations; j += 2) {
        drawConchoid(j * s);
    }
    for (var j = 1; j < rotations; j += 2) {
        drawConchoid(j * s);
    }
}

function drawConchoid(rot) {
    push();
    pickColor();

    rotate(rot);
    // displace the curve along the rotated x axis using mouseY
    var shift = map(mouseY, 0, height, -100, 100);
    translate(shift, 0);
    beginShape();
    for (var i = 0; i < nPoints; i++) {
        var theta = map(i, 0, nPoints, 0, TWO_PI);
        var a = map(mouseX, 0, width, -5, 4); // curve parameter, controlled by mouseX

        // Conchoid of de Sluze: r = sec(theta) + a*cos(theta), scaled up by 70
        var r = 70 * ((1 / cos(theta)) + (a * cos(theta)));
        var x = r * cos(theta);
        var y = r * sin(theta);
        vertex(x, y);
    }
    endShape(CLOSE);
    pop();
}

function pickColor() {
    var red = random(60, 200);
    var g = random(50, 150);
    var b = random(80, 250);
    fill(color(red, g, b, 20));
}

The curve I chose for the project was the Conchoid of de Sluze. I had a LOT of problems with this code! I wanted to rotate the curve around the center many times to create this complex mandala pattern. While I think that should have been really simple to do with a for loop, nothing I did worked. I kept the for loop that I wrote in the code, but it’s commented out. I still do not understand why it doesn’t work!! I also didn’t want the colors to be flashing, but I couldn’t figure out how to fix the random values.

Edit: I figured out the problem with the for loop so I updated my post!