Human-Machine Virtuosity – An exploration of skilled human gesture and design, Spring 2017
https://courses.ideate.cmu.edu/16-455/s2017

RoboZen Project Report

 

RoboZen

Cecilia Ferrando, Cy Kim, Atefeh Mhd, Nitesh Sridhar

5/10/2017

 

Abstract:

Our human-machine workflow involves using a sandbox as a zen drawing template. Sand as the drawing medium makes patterns easy to create and to reset. Modeling sand is, on one hand, a predictable action; on the other, it entails an unpredictable component due to the complex behavior of granular materials.

The workflow was initially designed as follows:

As a pattern or shape is being drawn, Motive captures the motion of the hand tool moving through the sand. Grasshopper takes a raw CSV file as input and preprocesses it in the following way (a minimal sketch of this step follows the list):

  • points are oriented in the correct way with respect to the Rhino model
  • the points that are not part of the drawn pattern (for example, those recorded while the hand was moving into or out of the area) are dropped
  • curves are interpolated and smoothed out
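A minimal, Rhino-free sketch of this preprocessing step is below. The CSV column names, sandbox extents, sand-level threshold, and smoothing window are all assumptions made for illustration, not the values or methods used in the actual Grasshopper definition.

```python
import csv

# Assumed CSV layout: one row per mocap frame, columns x, y, z in box coordinates.
BOX_MIN, BOX_MAX = (0.0, 0.0), (600.0, 400.0)   # sandbox extents in mm (assumed)
SAND_LEVEL = 20.0                                # z above this -> hand entering/leaving (assumed)

def load_points(path):
    with open(path) as f:
        return [(float(r["x"]), float(r["y"]), float(r["z"])) for r in csv.DictReader(f)]

def keep_drawing_points(points):
    """Drop samples recorded while the tool was above the sand or outside the box."""
    kept = []
    for x, y, z in points:
        inside = BOX_MIN[0] <= x <= BOX_MAX[0] and BOX_MIN[1] <= y <= BOX_MAX[1]
        if inside and z <= SAND_LEVEL:
            kept.append((x, y))
    return kept

def smooth(points, window=5):
    """Simple moving-average smoothing standing in for curve interpolation."""
    half = window // 2
    out = []
    for i in range(len(points)):
        chunk = points[max(0, i - half):i + half + 1]
        out.append((sum(p[0] for p in chunk) / len(chunk),
                    sum(p[1] for p in chunk) / len(chunk)))
    return out
```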

The preprocessed input curve (or curves, in the case of multiple strokes) is passed to a GhPython component that dispatches the input according to parameters intrinsic to it. In particular, the code checks whether the input curves are (a sketch of this dispatch logic follows the list):

  • closed, open or self-intersecting
  • long or short with respect to the dimensions of the box
  • positioned at a large or small relative angle with respect to the center of the box
  • complex or simple
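A sketch of how such a dispatcher could look is below. The property names, thresholds, and the exact mapping from properties to modes are hypothetical; the real GhPython component derives these measurements from the Rhino curves.

```python
def classify_curve(is_closed, is_self_intersecting, length, box_diagonal,
                   angle_to_center, n_control_points):
    """Pick a patterning mode from intrinsic curve properties.
    Thresholds are illustrative, not the values used in the project."""
    long_curve = length > 0.5 * box_diagonal
    complex_curve = n_control_points > 20

    if is_closed:
        return "offset"            # closed figures get the zig-zag offset treatment
    if is_self_intersecting or complex_curve:
        return "tween"             # complex strokes are blended with their neighbours
    if long_curve and angle_to_center < 30.0:
        return "spiral"            # long strokes aimed at the centre spiral outwards
    return "mandala"               # default: radial symmetry about the box centre

# Example: a short open stroke pointing away from the centre -> mandala mode
print(classify_curve(False, False, 120.0, 720.0, 75.0, 8))
```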

Depending on the nature of the input, the curves are dispatched to the final implementations in the following way:

Within the mandala/radial symmetry mode, the software reflects and rotates the source curve into the form of a mandala with rotational symmetry centered on the center of the box. The spiral mode, by contrast, rotates the figure about the endpoints of the curve, with a variable scaling factor that shrinks the input curve as it is patterned outwards.
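A rough sketch of the two transformations on a stroke stored as a 2D point list is below. The copy counts, angles, and shrink factor are illustrative, and the mandala half shows only the rotation, not the reflection.

```python
import math

def rotate(pt, center, angle):
    """Rotate a 2D point about a centre (angle in radians)."""
    dx, dy = pt[0] - center[0], pt[1] - center[1]
    c, s = math.cos(angle), math.sin(angle)
    return (center[0] + c * dx - s * dy, center[1] + s * dx + c * dy)

def mandala(curve, box_center, copies=8):
    """Rotational symmetry about the box centre: n rotated copies of the stroke."""
    step = 2 * math.pi / copies
    return [[rotate(p, box_center, k * step) for p in curve] for k in range(copies)]

def spiral(curve, copies=8, step_angle=math.pi / 6, shrink=0.85):
    """Rotate about the stroke's endpoint, shrinking each copy as it patterns outwards."""
    pivot = curve[-1]
    out, scale = [], 1.0
    for k in range(copies):
        angle = k * step_angle
        copy = []
        for p in curve:
            q = rotate(p, pivot, angle)
            copy.append((pivot[0] + scale * (q[0] - pivot[0]),
                         pivot[1] + scale * (q[1] - pivot[1])))
        out.append(copy)
        scale *= shrink
    return out

stroke = [(100, 60), (140, 80), (180, 60)]
rings = mandala(stroke, box_center=(150, 100))
```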

The user may choose to draw multiple figures on the canvas; the system allows a maximum of four. In this case, a tweening mode interpolates between the curves by computing a number of intermediate instances between consecutive curves. These patterning modes also treat closed curves as objects to be avoided rather than intersected or covered. In this way the patterning is reminiscent of the rake lines seen in zen gardens, which skirt the rocks and small areas of greenery located within the garden.
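A simple stand-in for the tweening step, blending two strokes point by point after resampling them to the same length (the sample and step counts are arbitrary):

```python
def resample(curve, n):
    """Pick n points spread evenly along the stored samples (index-based, for brevity)."""
    return [curve[round(i * (len(curve) - 1) / (n - 1))] for i in range(n)]

def tween(curve_a, curve_b, steps=4, samples=50):
    """Intermediate curves blending curve_a into curve_b."""
    a, b = resample(curve_a, samples), resample(curve_b, samples)
    frames = []
    for k in range(1, steps + 1):
        t = k / (steps + 1.0)
        frames.append([((1 - t) * pa[0] + t * pb[0],
                        (1 - t) * pa[1] + t * pb[1]) for pa, pb in zip(a, b)])
    return frames

mid_curves = tween([(0, 0), (50, 10), (100, 0)], [(0, 40), (50, 60), (100, 40)])
```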

When the offset mode is activated by a closed curve, the implementation follows the same structure as the Offset command in Grasshopper, except that a zig-zag transformation is applied as the figure is offset outwards, and the offset distance is variable.
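A crude, centroid-based stand-in for that behaviour is sketched below; it is not the project's Grasshopper Offset logic, just an illustration of growing offset rings with an alternating zig-zag applied to the vertices.

```python
import math

def zigzag_offset(closed_curve, rings=5, base_step=15.0, zig=4.0):
    """Offset a closed polygon outwards from its centroid, adding a zig-zag to every
    other vertex and growing the offset distance ring by ring."""
    cx = sum(p[0] for p in closed_curve) / len(closed_curve)
    cy = sum(p[1] for p in closed_curve) / len(closed_curve)
    rings_out = []
    for r in range(1, rings + 1):
        step = base_step * r                              # variable offset distance
        ring = []
        for i, (x, y) in enumerate(closed_curve):
            dx, dy = x - cx, y - cy
            d = math.hypot(dx, dy) or 1.0
            amp = step + (zig if i % 2 == 0 else -zig)    # zig-zag perturbation
            ring.append((x + dx / d * amp, y + dy / d * amp))
        rings_out.append(ring)
    return rings_out

square = [(0, 0), (100, 0), (100, 100), (0, 100)]
offset_rings = zigzag_offset(square)
```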

 

Objectives:

The purpose of the Zen Robot Garden is to create an experience in which the user can view the robot drawing patterns in the sand both as a meditative experience (such as creating a mandala or a zen garden pattern) and as an educational one, learning how symmetries, scaling, and other geometric operations can produce very different results depending on the input parameters. There is a component of surprise in seeing what the final sand pattern looks like, due to two elements:

  • the hard-to-predict aspect of the completed pattern
  • the self-erasing and complex behavior of the sand as it is being modeled

As the user continues to explore the constraint space of the sandbox curves, they can implicitly infer the parametric rules and thereby learn how to bring out different patterns.

The ABB 6-axis robot uses a tool with a fork-like head to act within the same sandbox as the user, in order to create small, rake-like patterns in the sand similar to those that can be seen in a true zen garden. The collaboration between the robot and human user allows the user to create complex forms out of simple shapes and drawing movements. Watching the robot perform the task puts the user in a relaxing state similar to the meditative function of a zen garden.

 

Implementation:


We chose to use a sandbox because sand is easy to work with and draw in, but also simple to reset. Additionally, it creates a collaborative workspace for the robot and the human user, allowing them to explore the medium together. Watching the robot slowly move through the sand, displacing old curves to make way for new ones, creates a relaxing atmosphere and echoes the impermanence and patience shown in true zen garden pattern making.

We started by defining rigid bodies for both the sandbox and the drawing tool, which let us capture the plane of the box; we also captured that plane as a work object for easy use with the robot. The drawing tool was used across repeated takes to record the movement of the user’s hand as a curve mirroring what they had drawn in the sand.

Outcomes:

We were able to create a logical system that lets the user discover different outputs based on their input drawings without having to open settings or menus. The transformations and output parameters used to create the final curve patterns are based directly on properties of the input curves, which creates a process that seems opaque and surprising at first but can be uncovered through repeated use.

Our greatest difficulty was taking curve output from Motive’s motion capture into Grasshopper: some of the capture settings vary between takes, and it was hard to create a parametric solution that covered all the curves while still leaving them unchanged enough to be analyzed and reinterpreted for the robot output.

Contribution:

All the group members collaborated in shaping and fine-tuning the initial concept for the workflow.

Cecilia and Atefeh designed the Grasshopper script which takes the motion capture data and extracts the user’s input curves. They also coded the script that generates the curve output based on the parametric properties of the input.

Cy created the HAL script which takes the output curves and generates the robot’s motion paths from the curves. She also CNC routed and assembled the sandbox.

Nitesh designed and 3D-printed the tooltip for the robot, helped set up the work objects and robot paths, and supplied the sand.

Inspiration:

Sisyphus – Bruce Shapiro

Procedural Landscapes – Gramazio + Kohler

Photo Documentation:

Drawing a user input curve for the robot to work with

 

Test run – You can see how the variation in the tool height compared to the sand level creates deeper or lighter lines in the sand

 

The final curve pattern as a whole

 

Close-up of the tool we designed to create variation based on the depth of the tool in the sand

Robot Zen Garden

4/3/2017

 

Written Description:

The initial steps in this workflow involve using a sandbox as a zen drawing template. Using sand as the drawing medium makes it easy to create and reset patterns. As a pattern or shape is being drawn, Motive captures the motion of the hand tool moving through the sand. Grasshopper isolates the curve and then, based on an analysis of that curve, picks a mode for the output and generates it.

Within the mandala/radial symmetry mode, Grasshopper reflects and rotates the source curve into the form of a mandala with rotational symmetry. Additionally there will be a spiral repetition mode, with scaling that increases outwards from the center, and a mode for sectioning and patterning areas between closed curves. These patterning modes will also treat closed curves as objects, aiming to avoid them rather than intersecting with them or trying to cover them. In this way the patterning is reminiscent of the rake patterns seen in zen gardens, which avoid the rocks and small areas of greenery located within the garden.

The ABB 6-axis robot will use a tool with a fork-like head to act within the same sandbox as the user, in order to create small, rake-like patterns in the sand similar to those seen in a true zen garden. The collaboration between the robot and the human user will allow the user to create complex forms out of simple shapes and drawing movements, and then to watch the robot perform the task, a relaxing experience similar to the meditative function of a zen garden.

Sand Sculptures: Sketching 3D Shapes from 2D Sand Drawings

3/13/2017

Hybrid Skill Workflow Diagram:

 

Input:

Transformation:

Output:

 

Written Description:

The first step in this workflow involves using a sandbox as the drawing template. Using sand as the drawing medium makes it easy to create and reset patterns. Once a pattern is drawn it can either be modified or saved and used to design the final sculpture. The Kinect depth camera will analyze the drawing pattern as a curve, and Grasshopper will use that curve to generate a sculptural volumetric form.

Grasshopper will calculate and display the form, updating it as you draw, clear, and redraw curves in the sand. The form will be based on revolving or sweeping through the curves to create a vase-like shape. After adding enough curves and adjusting their spacing and look, you can finalize the form for fabrication.
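A minimal sketch of the revolve idea, turning a drawn profile reduced to (radius, height) samples into a grid of surface points; the profile numbers and segment count are made up, not from the project.

```python
import math

def revolve(profile, segments=36):
    """Revolve a 2D profile of (radius, height) pairs about the vertical axis,
    returning rings of 3D points that approximate the vase-like surface."""
    grid = []
    for k in range(segments):
        theta = 2 * math.pi * k / segments
        ring = [(r * math.cos(theta), r * math.sin(theta), h) for r, h in profile]
        grid.append(ring)
    return grid

# A drawn curve reduced to (radius, height) samples (made-up numbers)
profile = [(80, 0), (60, 40), (90, 90), (50, 140)]
surface_points = revolve(profile)
```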

The final form could be fabricated in a multitude of ways from the 3D model Grasshopper generates. We have decided to use the ABB 6-axis robot to fabricate the finalized form out of sand.

Cake City: Interactive Cake-based Urban Design for Children

3/1/2017

1. Hybrid Skill Workflow Diagram

2. Annotated Drawings

 

Input:

Transformation:

 

Process:

Left: Input, in which cardboard forms that represent buildings are arranged on a 2′ x 2′ table and whiteboard surface under a Kinect. Green spaces (green), roads (purple), and bodies of water (blue) are drawn in with markers. A boundary (cyan) for the cake is also drawn to indicate where to cut. The Kinect tracks the RGB values of the pathways and records the depth of each object in relation to the table.

Upper right: Process, in which an ABB robot uses a caulk gun with different tips and colors of icing to pipe the different recorded features onto a large 18″ x 24″ sheet cake using the data it received from the Kinect. The data is now at 1/4 the size of the initial input setup as it is piped onto the cake. Icing materials are carefully chosen to ensure that the icing stands up when stacked and is solid enough to withstand minimal movement.

Lower right: Output, in which the finished cake is cut from the larger sheet. The process is repeated for different setups which turns out more cakes of different shapes and sizes.

Materials: A cake, a piping caulk gun, and an ABB robot
Details: Caulking Gun
Details: ABB Robot
Robot pipes out features
Finished Block
The end product: A City of Cakes!

3. Written Description

 

Remember playing with blocks as a child to make buildings and cities? Now imagine if those cities were actually made of cake. Our motivation with this project is to introduce children to urban planning concepts in a fun, interesting, and edible way, while valuing the deep artistry that goes into cake decorating.

First, a child defines the space using specially shaped markers, then arranges blocks on the table as if they were the buildings of one city block. Sidewalks, green spaces, and water elements can be drawn in with colored dry-erase markers. A depth-capture camera looking down from overhead captures the arrangement, shape, and color of the features. An ABB robot then pipes the features onto a single cake using conventional cake-piping techniques, reducing the dimensions by a factor of roughly 4–6 in the process. Once enough blocks have been created this way, they can themselves be arranged in the same space to form a cake city made of cake blocks.
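A small sketch of the scaling step, mapping captured table-space feature paths onto cake coordinates; the feature names, point values, and the exact 1/4 factor here are only illustrative.

```python
def table_to_cake(features, scale=0.25, cake_origin=(0.0, 0.0)):
    """Map captured table-space feature paths onto the sheet cake.
    'features' maps a feature name (road, water, ...) to a list of (x, y) points
    in table coordinates; the ~1/4 scale factor follows the write-up above."""
    return {name: [(cake_origin[0] + x * scale, cake_origin[1] + y * scale)
                   for x, y in path]
            for name, path in features.items()}

cake_paths = table_to_cake({"road": [(0, 0), (300, 0), (300, 450)]})
```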

Creating 3D Textile Joinery from Gesture

3/1/2017

Hybrid Skill Workflow

Physical Workcell Diagram

 

Physical Graffiti – Graffiti Data Analysis

2/20/2017

We brought in Matt Constant as our expert for this project. He is a graffiti artist and we used the Motive motion capture system to track the position and orientation of his spray can as he created his artwork.

We first created a fixture that held 3 reflective motion tracking spheres in order to allow the software to track the spray can as a rigid body. Additionally we attached one of the spheres to his finger to act as a toggle to track whether the spray can button was being held down or not.
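A sketch of how such a toggle could be read from the capture data, treating the finger marker as "pressed" when it sits close to the can's rigid-body origin; the distance threshold and coordinates are guesses, not the project's actual values.

```python
import math

def spray_is_on(finger_marker, can_origin, threshold=30.0):
    """Treat the finger marker as a button toggle: when it sits close enough to the
    can's rigid-body origin, assume the nozzle is being pressed."""
    return math.dist(finger_marker, can_origin) < threshold

frames = [((10, 5, 2), (0, 0, 0)), ((80, 40, 10), (0, 0, 0))]
states = [spray_is_on(f, c) for f, c in frames]   # [True, False]
```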

First iteration of fixture
Final iteration

 

We then had him test our modified spray can to make sure the attachment points would not get in the way of his craft, and tracked the position of the can while he sprayed an art piece onto paper. We were able to track his motion in 3D and see more of the spatial process that is lost when translating this motion into a 2D art piece.

Matt’s graffiti and a test piece

Once we took the motion capture data into Grasshopper, we were able to uncover patterns and shapes that were hidden in the final product due to the way the spray paint layers on the sheet of paper. The path Matt takes through the paper to create his work is revealed clearly when stepping through each frame of the motion capture data. We also analyzed it for 3D characteristics including height and orientation of the spray can while spraying, speed of motion of the can, and the tightness or looseness of corners and curves.

Once we had the speed, height, and orientation of the can, we represented the data visually as a path of cones showing the spray from the can (as seen in the video). The cones show the distance of the can from the paper, and the density of cones in one area indicates how quickly Matt was moving at that point in the drawing. We also added ellipses to represent the thickness of the paint lines on the paper.
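A sketch of that per-frame analysis on a toy path: speed from consecutive positions, height from the z coordinate, and a cone radius standing in for the spray spread. The frame rate and spread angle are assumptions, not values from the project.

```python
import math

def analyze(frames, fps=120.0):
    """Per-frame speed and height of the can, plus a cone radius approximating
    spray spread (radius grows with distance from the paper)."""
    spread = math.tan(math.radians(15))   # assumed 15-degree spray half-angle
    out = []
    for prev, cur in zip(frames, frames[1:]):
        speed = math.dist(prev, cur) * fps        # units per second
        height = cur[2]                           # z = distance from the paper
        out.append({"speed": speed, "height": height, "cone_radius": height * spread})
    return out

path = [(0, 0, 120), (4, 1, 118), (9, 3, 115)]
for row in analyze(path):
    print(row)
```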

Finally we used these ellipses and cones to create a 3D surface shell that represents this data as one fluid structure.

Digital model showing 3D motion and the physical characteristics of painting with a spray can

By looking at the way Matt varies the speed and orientation of the spray can, we can see his skill and control over the can, evident in the curvature of his lines as well as the size and thickness of the paint in his drawing.

Matt’s graffiti with variable line weights and intentional size changes
Amateur graffiti without line control or consistency