Human-Machine Virtuosity (https://courses.ideate.cmu.edu/16-455/s2018) – An exploration of skilled human gesture and design, Spring 2018.

Touch & Melt: Tactile Abstraction and Robotic Heat-Forming

by Hang Wang & Varun Gadh

Abstract

Touch & Melt explores human-machine collaborative fabrication in a process that leverages an innate human skill and a functional robotic skill. The ability to find and focus on engaging physical facets of objects and unique textures on object surfaces – and relatedly, the ability to easily generate an intricate pseudo-arbitrary path of travel on and about an object – is a distinctly human one. The ability to move in the precise and consistent manner needed for many forms of fabrication is an ability firmly belonging to machines.

Using MoCap (motion capture) technology to collect tactile scanning data (following the human end-effector path), this fabrication methodology generates an abstracted version of the scanned object's form. The abstraction highlights features of particular tactile importance by finding the regions where the most time was spent.

Next, the process uses a histogram of touch density to generate contours for a robotic arm to follow. Finally, the robotic arm manipulates a piece of polystyrene plastic under a hot air rework station; its motion follows the generated contours. The resulting melted plastic is an abstracted representation of the human interpretation of the target object.

Objectives

The project objectives were as follows:

  1. To observe the tendencies of human tactile scanning: what kinds of edges, forms, textures, and other facets are of the most tactile importance
  2. To test the hypothesis that, when scanning the same object, different users would generate different outcomes when using the system
  3. To find the appropriate material, stock thickness, heat-applying robot tool, contour order, temperature, and air pressure for the heat-forming process

Process

For the purposes of explanation, this description will follow the scanning of a single object (pictured below) by two different users.

Pictured: the scanned object

Using either a marker or a set of markers (depending on the software and physical constraints) mounted to a glove or finger, a user scans a target object (in this case the face of one of the project creators).

The MoCap system records the scan and collects three-axis position data.

The position data is then exported and parsed through a Python script into a set of points in 3D space to be represented by Grasshopper in Rhino.
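As a rough illustration of this parsing step, a minimal Python sketch is shown below. It assumes a simplified CSV layout with one sample per row (frame, x, y, z); the real Motive export includes extra header rows and per-marker columns, so this is not the exact script used in the project.

  import csv

  def load_scan_points(path):
      # Collect (x, y, z) samples from a simplified MoCap CSV export.
      points = []
      with open(path, newline="") as f:
          for row in csv.reader(f):
              if len(row) < 4:
                  continue  # skip short or malformed rows
              try:
                  x, y, z = float(row[1]), float(row[2]), float(row[3])
              except ValueError:
                  continue  # skip header or non-numeric rows
              points.append((x, y, z))
      return points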


The 3D point set is flattened onto a single plane and overlaid on a grid of squares. The number of points falling over each square is recorded, and a heat map representing touch density is generated:


In this heat map, the gradient green-yellow-red represents an ascending touch density value range.

Once the touch density values have been mapped onto a grid, each grid square is raised to a height correlated with the touch density value it represents, and a surface is patched over the raised squares.
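A minimal sketch of the binning step is shown below, assuming the scan points have already been flattened onto the XY plane; the cell size and function names are illustrative and not taken from the project's Grasshopper definition.

  def touch_density(points, cell=10.0):
      # Count how many flattened samples fall in each grid square.
      counts = {}
      for x, y, _z in points:
          key = (int(x // cell), int(y // cell))
          counts[key] = counts.get(key, 0) + 1
      return counts  # (column, row) -> number of samples

Each count can then drive the height of its grid square before the surface is patched over the raised grid.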

From this new smooth surface, a set of contours (below) is extracted by slicing the surface at an interval set by the user. (For a deeper understanding of how the contour generation works, read up on the Contour function in Rhino; the two actions rely on the same principle).

These contours are broken up into sets of paths for the robot arm to follow:


The process retains a fair amount of legibility from collected data to robot path.


The robot arm guides the polystyrene stock under the heat gun along the contour paths.


The polystyrene is mounted to a clamp on the robot arm.

 

After several tests (and a bit of singed plastic) we were able to find a fabrication process that strikes an effective balance between expressiveness and information retention!

The problem of reaching that effective fabrication process, however, was non-trivial. One of the factors in the manufacturing process that required testing and exploration was contour following order.

As we wanted to maximize the z-axis deflection of the material due to heat (in order to have the most dramatic and expressive output possible), we initially believed that we should address concentric contours in an in-to-out order. This would minimize the distance between the heat gun and each subsequent contour. However, we learned that – as our contours are relatively close together – the inner rings would experience far too much heat, and holes would form in the material, distorting the rest of the material in a way that we viewed as non-ideal for preserving the contour information. As such, we thought it wise to travel out-to-in to decrease the amount of heat experienced by the inner contours.

When we tested out-to-in order, however, the regions of material where the inner contours lay had already deflected too far vertically away from the heat gun to be melted effectively. Finally, we settled on addressing each layer of contours in the order they were sliced. For example, the outermost contour in the diagram below (1) would be followed first. Next, the second smaller concentric contour, along with the small contour of equivalent concentricity (2), would be followed. The subsequent round of contours would include those marked (3). This continues until the final level of concentricity is reached. This heating order proved the most effective: it delivered enough cumulative heat to the regions meant to deform, without concentrating so much heat in one small area that large holes appeared.
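One way to express this level-of-concentricity ordering is sketched below. The containment test is passed in as a function because it depends on the curve representation (for example, Rhino's closed planar curves); the routine is an illustration of the ordering idea, not the project's implementation.

  def order_by_concentricity(contours, contains):
      # contains(a, b) should return True when contour a encloses contour b.
      levels = {}
      for c in contours:
          depth = sum(1 for other in contours
                      if other is not c and contains(other, c))
          levels.setdefault(depth, []).append(c)
      ordered = []
      for depth in sorted(levels):  # depth 0 = outermost ring(s)
          ordered.extend(levels[depth])
      return ordered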

 

Outcomes

When different users scan the same object, results can vary dramatically in both path and touch density. For example, two volunteers who were relatively unfamiliar with the technical aspects of the system scanned the same object (the face of one of the project members) and approached the scanning in completely different ways; the speeds, features of primary focus, and scanning goals of the participants varied dramatically. Seen below, the paths are structurally different and repetitive within their own patterns.

In terms of investigating what physical facets are the most engaging, we were able to glean information primarily about faces, as that was our chosen object set of interest. Generally speaking, the nose tip, nose edges, jawline, and lower forehead seem to be the areas of primary interest. This seems to be due to the clearly defined curvature of those features. Areas of relatively inconsistent or flat topography (i.e. a plane or a jagged surface) don't seem to be of particular tactile interest, while edges and relatively long curves seem to call attention to themselves.

After a variety of tests, we discovered the optimal output parameters were as follows:

  • Hot air rework station at 430 ˚C, 90% air pressure
  • 1/16″ Polystyrene Plastic
  • Heat gun (end of hot air rework station) 1.25″ from surface of polystyrene
  • 5mm/s travel speed
  • A level-of-concentricity contour ordering pattern (see final paragraph of Process section for more information)

Acknowledgements

We would like to thank Professors Garth Zeglin and Joshua Bard for their guidance and assistance throughout this project. We would also like to thank Jett Vaultz, Ana Cedillo, Amy Coronado, Felipe Oropeza, Jade Crockem, and Victor Acevedo for volunteering their time.

Robot's Cradle

Stephanie Smid & Sarika Bajaj

Human Machine Virtuosity, Spring 2018

— — —

ABSTRACT

Robot's Cradle was a project focused on creating a hybrid analog and digital design workflow for experimental weaving. Inspired by peg loom weaving and string art, our workflow transforms a human drawing (tracked using motion capture software) into a weaving pattern using Grasshopper and Python, which is then woven by a robotic arm fitted with our custom string tool. By the end of the project, we were able to achieve the discrete stages of this workflow and start investigating its edges, specifically what constitutes better use of the workflow and what counts as a "finished design" from it.

OBJECTIVES

In order to actualize this workflow, we aimed to 1) reliably capture human drawing using the motion capture system, 2) properly process the motion capture data to create a set of tangent line approximations, 3) convert the tangent lines into a string path the robot could actually follow, and 4) have the robot use a custom tool to wind the given pattern onto a peg board. The ultimate objective of this process was to achieve a workflow such that designers could create finished weaving pieces from our system.

IMPLEMENTATION

In order to create this workflow, we began by setting up the motion capture system, using a graphite pencil outfitted with motion capture markers so that we could track its motion. After setting some initial bounding conditions, we were able to track the line the pencil was drawing fairly easily, divide the line into a set of points, and derive tangent lines from each of those points in Grasshopper.
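The tangent construction can be approximated from the sampled points by central differences. The plain-Python sketch below illustrates that idea; the actual work was done with native Grasshopper components, so this is not the definition we used.

  def tangents(points):
      # points: ordered (x, y) samples along the drawn curve
      result = []
      for i in range(1, len(points) - 1):
          (x0, y0), (x1, y1) = points[i - 1], points[i + 1]
          dx, dy = x1 - x0, y1 - y0
          length = (dx * dx + dy * dy) ** 0.5 or 1.0
          result.append((points[i], (dx / length, dy / length)))
      return result  # list of (point, unit tangent) pairs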

We then transitioned to working on the physical parts of the project, namely procuring string, creating a base peg board, and creating a robot tool that could hold a thread spool and dispense string. The string we settled on was wax-coated, which made the winding process a bit simpler, as the thread was sturdier than normal and needed less tensioning for winding. We went through several iterations of the peg board, experimenting with which peg shape best encouraged the string to stay wound and with what peg spacing kept the integrity of the design without being so close as to prevent winding. Finally, the robot tool was the most complex physical part of our system: we had to iterate on how best to keep tension throughout the system (a problem we solved with a felt washer that tightens the turning of the spool) and to combat the tension points around the edges of the string-dispensing tube.

After settling on final designs for the peg board and robot tool, we went back to write the Python code needed to turn our tangent lines into an actual robot string path, which we then converted into HAL code for winding. After getting the robot to wind successfully, we iterated again on making the entire system more connected as an actual workflow. The main change in this process was replacing the projection-based user feedback in our initial design with a separate screen.
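The point generation for maneuvering around each peg is not reproduced here, but the basic idea can be sketched as offsetting a small ring of waypoints around each peg centre; the radius, clearance, and step count below are placeholders rather than the values used by the real winding code.

  import math

  def wrap_points(cx, cy, peg_radius, clearance=2.0, steps=6):
      # Generate waypoints on a circle around a peg so the string tool
      # can travel around it without colliding.
      r = peg_radius + clearance
      return [(cx + r * math.cos(2 * math.pi * k / steps),
               cy + r * math.sin(2 * math.pi * k / steps))
              for k in range(steps + 1)]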

OUTCOMES

We were successful in creating a complete workflow for experimental weaving, which meant we were able to explore what a "finished" design might look like from our system. Thus, we had several users come in and test the system to see how they interacted with it and what some of the better results might look like. We found that "finished" designs often had areas of clear string density, such that the initial drawing was clearly visible in the string; users would often redo their drawing when the resulting pattern was too abstract and not clearly legible in the string density. The best example of a finished piece from the workflow is shown below, where several layers (each with a distinct, clearly identifiable pattern) are layered upon each other to create a final piece.

CONTRIBUTION

Stephanie primarily worked on the Rhino and Grasshopper portion of the code, as well as generating the final HAL code for winding. Sarika focused on the Python string winding algorithm and the point generation for maneuvering around each peg. Both Stephanie and Sarika created iterations of the peg board as well as the robot tool that held the thread spool and dispensed thread.

The two of us would like to thank Professor Josh Bard and Professor Garth Zeglin for their mentoring and advice throughout this project. We would also like to thank Manuel Rodriguez for their assistance in debugging some of our Python and Grasshopper integration issues and Jett Vaultz for allowing us to borrow their graphite pencil.

Robot's Cradle: Shortest Path Prototype

Stephanie Smid and Sarika Bajaj

current progress | example artifacts | next steps

Our project involves creating a hybrid workflow that combines human drawing skill and robotic assembly for peg loom weaving. Through this workflow, artists should be able to draw patterns with specified string densities that a robot will solve for and then wind around a loom.

The specific processes involved in our workflow are detailed below: motion capture data is used to capture the drawing, which is then processed in Grasshopper and finally converted into RAPID code that controls the robot.

For our shortest path prototype, our goals included 1) properly tracking human drawing using the motion capture system, 2) processing the motion capture data in Grasshopper to create a viable string pattern, and 3) using the viable string pattern to hand-wind the pattern on our custom-made peg board.

Since Stephanie's first project had involved tracking human drawing using the motion capture system, we were able to reuse the same rig and get the pen tracked rather quickly. Moreover, after some initial tests in which we experimented with different x, y, and z bounding-box cutoffs as well as time sampling, we were able to get a reasonably smooth curve from the motion capture system.

Using Grasshopper, we split the motion capture curve into distinct points. We then drew tangent lines from each of those points, which were pulled to the closest peg (in our case, 13 per side). Our final step was simply printing out the Grasshopper visualization and using the printout to hand-wind the string around the pegs.
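The "pull to the closest peg" step amounts to a nearest-neighbour search over the peg positions; a minimal sketch is shown below, with the peg coordinates left to the caller.

  def nearest_peg(point, pegs):
      # Return the peg (x, y) closest to the given point.
      px, py = point
      return min(pegs, key=lambda p: (p[0] - px) ** 2 + (p[1] - py) ** 2)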

The artifacts from our shortest path prototype are illustrated below:

In terms of next steps, the major problem will be integrating the robot into our current workflow; instead of hand winding as we are now, we will have to create a system that enables the robot to wind the string around the pegs. For robot winding, we will also have to do another iteration of our peg board, as we noticed our acrylic frame was not able to withstand the force of the string tension (without a foam core base, the frame would warp); we will probably create a sturdier wood iteration that can handle the force. Moreover, we will need to do some further Grasshopper processing to identify the most viable path for the string to follow, as that is currently an approximation we make by hand while observing the Grasshopper output. Finally, we also need to figure out a better way to track the pencil in relation to the frame itself to ensure that our frame of reference for any drawing is correct; while the frame of reference seemed reasonable in our tests, it is possible there was some offset between the physical drawing and the virtual model.

Project Proposal: Agent Conductor

Manuel Rodriguez & Jett Vaultz

 

Agent Conductor is a design and fabrication method that is a hybrid between the digital and physical worlds. It is a negotiation between two living parts, one physical and one digital. The former refers to a human; the latter, to a set of agents with their own behaviour. Is it possible to tame them?

Sketch of proposed physical workcell

The design skill combines the ability to direct the set of agents at will, within constraints given by a list of rules and digital tools, with the creativity of dealing with a dynamic entity that moves around on its own. It is closer to training a dog than to getting better at a fabrication skill such as sculpting.

Hybrid Skill Workflow

In order to follow and understand your own movements relative to the piece being fabricated, a projector will be necessary, as it is where the agents live. Through streaming motion capture, the actor's movements are recorded and projected, allowing full interaction with the agents.

The degree of manipulation of the agents' behaviour is to be determined. Nevertheless, a basic setup will consist of the number of agents, their speed, the degree of separation versus cohesion, alignment, and repulsion or attraction. These variables are expected to be controlled by a number of different rigid bodies; how to combine those is up to the user.
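As a rough sketch of what such agent behaviour could look like, the standard boids rules (separation, cohesion, alignment) are written out below; the weights, neighbour radius, and data layout are placeholders, and the eventual Grasshopper Python implementation may differ.

  def step_agents(agents, radius=50.0, sep_w=0.05, coh_w=0.01, ali_w=0.05, dt=1.0):
      # agents: list of [x, y, vx, vy]; returns the next state of every agent
      updated = []
      for x, y, vx, vy in agents:
          near = [(ax, ay, avx, avy) for ax, ay, avx, avy in agents
                  if 0.0 < ((ax - x) ** 2 + (ay - y) ** 2) ** 0.5 < radius]
          if near:
              n = float(len(near))
              coh_x = sum(a[0] for a in near) / n - x   # steer toward local centre
              coh_y = sum(a[1] for a in near) / n - y
              sep_x = sum(x - a[0] for a in near) / n   # steer away from neighbours
              sep_y = sum(y - a[1] for a in near) / n
              ali_x = sum(a[2] for a in near) / n - vx  # match neighbour velocity
              ali_y = sum(a[3] for a in near) / n - vy
              vx += coh_w * coh_x + sep_w * sep_x + ali_w * ali_x
              vy += coh_w * coh_y + sep_w * sep_y + ali_w * ali_y
          updated.append([x + vx * dt, y + vy * dt, vx, vy])
      return updated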

The resulting geometry is also variable: from a curve made of all the tracked locations of the agents (with variable thickness), to a surface formed by lofting each of those curves, to more volumetric bodies made of shapes associated with the motion of the agents.

The algorithms will be written in Grasshopper Python, covering agent behaviours, interaction with the user, the resulting geometry, and the fabrication files.

Fabrication will happen by means of additive manufacturing, with robotic spatial printing as the chosen medium. The thickness and degree of fabricability will rely on the development of the tool. Therefore, some coding in HAL and RAPID will be necessary at this stage.

Deform + Reform – Human-Machine Tactile Caricature

The purpose of this project is to integrate machine skill and human skill to respond to, and generate in abstract, a tactile caricature of an object. We aim to explore fabrication that relies on fully robotic active skills and a pair of human skills (one of which is innate, and the other of which would be developed by this process).

The innate human skill of viscerally exploring an object with the hands will be applied, initially, to a base object. The human task will be considered "3D hand-scanning", or "hand scanning", but the genuine human, non-robot-replicable skill is tactile sensing and tactile opinion. This is a very visceral, innate execution of sensing that human beings can rely on to react to their environment. The motion of the human scan will be tracked with motion capture markers, which will allow us to collect data on what affordances are of particular tactile interest. This process would also help develop the human skill of physical awareness of 3D objects (also known as 3D visualization when applied to drafting or CAD modeling).

Human actor tactilely explores an object (represented by a generic model)

With this path data, we can learn which features are the most tactilely significant, and this knowledge can be applied to robot arm aluminum deformation.

 

 

Model of robot in action deforming aluminum sheet

Finger motion along a subject is replicated by the robot “finger” motion along a deformed sheet.

 

Model of aluminum sheet post-deformation

If possible, we’d like to explore implementing a “recursive” approach: the user explores a base object, the sheet is deformed, the next human exploration is conducted on the deformed sheet, and either the same sheet – or a different sheet – is subsequently deformed. This echoing deformation could occur several times, and the final result would be a deep tactile caricature of the object and its children.

The technical context of this piece references photogrammetry – the process of generating a 3D model by combining many photos of a single subject. This project pays homage to photogrammetry by using dynamic data to create 3D visualizations of the object, but incorporates physical and tactile action both in the input and in the output.  The cultural context of this piece explores how caricature, which is the act of isolating the outstanding features of a subject and disproportionately representing them, can be applied to tactile sensing in an object.

 

Hybrid Skill Diagram – the process of conversion from human skill to physical output

The implementation of this project will rely on the Motive motion capture system to collect hand-scan data. The immediate feedback on which the human scanner relies will be derived from Rhino. The hand-scan data will be sent to Grasshopper, where it may need to be cleaned up and smoothed by a Python script, and will then be converted to robot arm control data in HAL (a Grasshopper extension) and RobotStudio. An aluminum sheet will be held tightly in a mount, and the robot arm will deform the aluminum by pushing it down according to the processed scan trajectory data.
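The clean-up step could be as simple as a moving average over the captured hand path; the sketch below assumes that approach and an illustrative window size, and is not the project's actual script.

  def smooth_path(points, window=5):
      # points: list of (x, y, z) samples; returns a smoothed copy
      half = window // 2
      out = []
      for i in range(len(points)):
          chunk = points[max(0, i - half):min(len(points), i + half + 1)]
          out.append(tuple(sum(p[k] for p in chunk) / len(chunk) for k in range(3)))
      return out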

 

Deform + Reform was inspired in part by this project from the CMU School of Architecture.

Project Proposal: Robot's Cradle

Stephanie Smid and Sarika Bajaj

project context | motivation | scope | implementation

Our project involves creating a hybrid workflow that combines human drawing skill and robotic assembly for peg loom weaving. Through this workflow, artists should be able to draw patterns with specified string densities that a robot will solve for and then wind around a loom.

The inspiration for this project originally came from the fabrication processes of peg loom weaving and string art; the former is the base of most weaving techniques and involves weavers densely binding and patterning string across a row of pegs, and the latter is a more experimental result of peg loom weaving where artists focus more on creating patterns via the density of the string instead of the string itself. The closest case study for this project is the Silk Pavilion Project from the MIT Media Lab, where researchers used a CNC machine to attach string to a metal frame lined with pegs and then placed silkworms on top to fill in the structure.

Our spin on these experimental weaving techniques is to create a workflow where a person's drawing gets processed and translated into a string density pattern that a robot weaves on a set of pegs. The person will first draw on the empty peg board, detailing gaps where no string is to be woven and then detailing areas of higher density and patterns. The MoCap setup in the design lab will track the stylus the person is using to draw and will be used to create a real-time projection of the string patterning created by the drawing. Once the person is satisfied with the projected string pattern, the robot will then use string, covered in UV-curing resin, to create the pattern on the peg board. The final step will simply involve using UV light to cure the final string shape and to remove the pattern from the peg board. The initial peg boards will be flat, two-dimensional sheets; however, if we are able to successfully create this two-dimensional workflow, then we will start transitioning to three dimensions using peg boards with slight curves and angles to further distort the string patterning. Below are early diagrams exploring how the system can work.

The following is a visual of the weaving system:

 

The following is an explicit workflow diagram detailing the process:

 

PanCap

Varun Gadh, Hang Wang

02-19-2018

Please find our video here.

Abstract

The initial goal of PanCap was to use motion capture technology to better understand, and digitally simulate, the process of flipping a pancake. We have been able to achieve both of these goals. A tertiary goal – to try to glean information from the Motion Capture that could help teach a robot to flip a pancake – sprung up as a longer-term motivation. This goal is certainly in progress, and understanding the data has helped us narrow down the set of circumstances that will allow a robot pancake flip to occur.

Objectives

Our overarching objective – to better understand and digitally simulate the process of flipping a pancake – broke up into two discrete goals. First, to leverage the position & orientation data captured in the MoCap recordings to find specific border values between each phase of motion of the pancake; this allows us to break the pancake flip motion into particular physical circumstances. Second, we want to understand more quantitatively what set of skills is necessary to correctly flip a pancake. This could extend to understanding the physical realities of the flip, with the potential end goal being a robotic pancake flip. We would need to understand the breadth of designs that could achieve this goal.

Implementation

 

As shown above, we chose a pan with a greater lever arm to simulate the motion that a robot arm might make (as the arm may be acceleration-gated but could have a greater degree of rotational acceleration), and mounted motion capture markers on the edge of the pan and on the pancake to generate the rigid bodies in the MoCap software.
We measured the size of the pan and the pancake and modeled them in Rhino.
Using Grasshopper, we extracted the data from the motion capture software (a CSV file) and obtained the trajectories of the pan and the pancake.

Outcomes

Likely as a result of the nature of the testing and the placement of the markers, in the flip we chose to analyze, one marker position disappears from the data at one particular moment. Correcting this data required us to understand the data output format in greater depth than anticipated. Additionally, in order to correct for noise and inaccuracy, exponential smoothing functions were applied to the variable values themselves and to their first- and second-derivative values before graphing. Finally, for graphing purposes, changes along the axis perpendicular to the "up" and "forward" motions (i.e. out of the imaginary plane of the pan's motion) were ignored.
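The smoothing and finite-difference steps can be sketched as below, assuming a fixed frame interval; the smoothing factor is a placeholder rather than the value used in the analysis.

  def exp_smooth(values, alpha=0.2):
      # Exponential smoothing: blend each new sample with the previous output.
      out = [values[0]]
      for v in values[1:]:
          out.append(alpha * v + (1 - alpha) * out[-1])
      return out

  def finite_difference(values, dt):
      # Forward difference between consecutive frames.
      return [(values[i + 1] - values[i]) / dt for i in range(len(values) - 1)]

Applying exp_smooth to the positions, differencing to get velocity, smoothing again, and differencing once more gives the smoothed first- and second-derivative values described above.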

The raw data collected, the digital simulations, and the finite difference results teamed up to allow us to generate a more complex physical understanding of the flip. The process of flip can be broken up into five distinct physical stages:
1) Pan and pancake are still relative to ground
2) Pan is accelerating forwards (away from flipper) and up (away from ground) relative to ground. Pancake is not moving relative to the pan, held in place by the increased normal force from the pan, which raises the available static friction.
2.5) (This may or may not occur. Not only could there be some degree of variation, but also the data was not conclusive.)  Pan continues to accelerate upwards but accelerates backwards while maintaining a forwards velocity. This breaks the static friction between the pancake and the pan.
3) Pan is accelerating downwards and accelerating backwards. Pancake begins to leave the pan. Theoretically, this could happen over an infinitesimal period. However, in practice there may be some period in which the pan and the pancake remain roughly together, potentially due to surface adhesion or flexibility on the part of the pancake combined with slow downward acceleration on the part of the pan.
4) Pan moves wherever it needs to (typically forwards) to catch the pancake. Pancake is in projectile regime of motion.
5) Pancake and pan collide.
Vertical and forward values of the pancake over frames.

Contribution

-Hang: Grasshopper, animations, videos, original data, co-expert

-Varun: Correcting data, graphs, physical analysis, co-expert

 

Video

Diabolo Skill Analysis

Manuel Rodriguez Ladron de Guevara, Ariana Daly, and Sarika Bajaj
February 19, 2018

As an introduction to the motion capture system and skill analysis workflow, our team chose to investigate the skill of juggling a diabolo, an hourglass-shaped circus prop that is balanced on a string connected to two sticks. After some initial investigation and testing, we found that the main skill behind the diabolo is balance, which is controlled in two ways: 1) by maintaining the speed of the diabolo via a windup technique, and 2) by changing the offset between the two hands, which changes the tilt of the diabolo. In order to examine these movements further, we created a set of jigs that allowed us to track the diabolo's position and spin as well as the position and angle of the two sticks. We then called in an expert diabolist, Benjamin Geyer, whom we recorded using our diabolo setup. After analyzing his movements, we found that the diabolo wind-up consists of several sharp tugs that each create a temporary burst in velocity and gradually accelerate the diabolo. The offset hand positioning maintains balance in a linear fashion; as the offset in one direction changes, the diabolo tilts accordingly. Through this experimentation, we gained insight into the primary skills required in juggling a diabolo.

Project Objectives

The goal of this project was to explore the skill of diabolo juggling, using motion capture as a quantitative analysis tool. Specifically, we focused on identifying the mechanics of balancing a diabolo, in terms of creating enough rotational inertia as well as coordinating the offset between the hands for balance.

Creating the Tracking Jigs for the Diabolo

 

In order to track the diabolo using the motion capture system, we created a set of jigs that would attach the motion trackers to the relevant components. For the sticks, we created a set of acrylic holders with three markers each, arranged in a triangle formation such that the centroid was located on the axis of the sticks. For the diabolo, we created two cardboard circles with three markers each, which we pushed into the two side hollows of the diabolo. The creation of these circular holders took some experimentation, as our initial attempts either prevented the diabolo's initial rotation or ruined the diabolo's internal balance. However, these final press-fit holders provided a suitable form factor for trials.

Motion Capture Process

To act as our diabolo expert, we recruited Mr. Benjamin Geyer, from the CMU club Masters of Flying Objects, who graciously volunteered his time to be recorded by our motion capture system. He performed four main tricks with the diabolo: 1) balancing the diabolo in place, 2) double wrapping the string around the diabolo, 3) changing the spin direction of the diabolo, and 4) throwing and catching the diabolo using the string. Additionally, for some additional testing and exploration, we captured some motion capture data of our own practices with the diabolo, specifically of the diabolo wind-up and balancing.

Motion Capture Analysis

The motion capture data provided us a great deal of insight into the mechanics of spinning and balancing the diabolo. In order to explore the mechanics of spinning the diabolo, we focused on the normal velocity, rotational velocity, and rotational acceleration of the diabolo, as represented by the following equations:

  1. normal velocity = Δ(normal displacement) / Δtime
  2. rotational velocity = Δ(rotational displacement) / Δtime
  3. rotational acceleration = Δ(rotational velocity) / Δtime

By analyzing these three metrics, we were able to identify clearly when the juggler would jerk the diabolo string. This jerking motion causes a sudden, sharp increase in the velocity of the diabolo, which then settles into a gradual increase of rotational velocity – that is, a sustained rotational acceleration.
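As an illustration of how these metrics can be computed from the tracked data, the sketch below derives the spin angle of a rim marker about the axle centre and takes finite differences; the frame rate, the 2D projection, and the unwrapping details are assumptions rather than a description of our Grasshopper/Python pipeline.

  import math

  def spin_angles(marker_xy, centre_xy):
      # Angle of a rim marker about the axle centre, one value per frame.
      return [math.atan2(my - centre_xy[1], mx - centre_xy[0]) for mx, my in marker_xy]

  def rotational_rates(angles, fps=120.0):
      # Unwrap the angle so it grows continuously, then differentiate twice.
      unwrapped = [angles[0]]
      for a in angles[1:]:
          d = a - unwrapped[-1]
          while d > math.pi:
              d -= 2 * math.pi
          while d < -math.pi:
              d += 2 * math.pi
          unwrapped.append(unwrapped[-1] + d)
      velocity = [(unwrapped[i + 1] - unwrapped[i]) * fps for i in range(len(unwrapped) - 1)]
      acceleration = [(velocity[i + 1] - velocity[i]) * fps for i in range(len(velocity) - 1)]
      return velocity, acceleration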

When analyzing the balance of the diabolo based on the hand-positioning offset, we found a linear correlation: the greater the offset between the hands, the greater the diabolo's angle of inclination.

Reflections

The main challenges we faced on this project could be linked directly back to the complexity of the diabolo's motions. Creating a series of marker sets proved challenging because of the diabolo's hollowed shape, which meant the motion tracking software often could not see all six markers on the diabolo at once. Additionally, using a diabolo requires a good deal of space, so during the capture session the diabolo or the sticks would often leave the motion capture system's field of view. This required us to become inventive in our solutions, one such solution being shortening the string of the diabolo. Combined, these issues meant that our motion tracking data had to contend with a fair number of dropped markers, which created an additional challenge when analyzing the data.

Team Contributions

The team that executed this project was Manuel Rodriguez Ladron de Guevara, Ariana Daly, and Sarika Bajaj. Manuel focused on the recording, processing, and analyzing of the motion capture data using Grasshopper and Python code. Sarika and Ariana both worked on creating marker jigs for the diabolo and sticks, recruiting Mr. Geyer to be our diabolo expert, the breakdown of the diabolo skill analysis, and the creation of the final video and documentation.

Acknowledgements

We would like to thank Mr. Benjamin Geyer for acting as our diabolo expert and taking time out of his day to talk with us and perform for our motion capturing session. We would also like to thank Prof. Zeglin and Prof. Bard for their insight and help throughout this project.

Gesture Drawing Analysis

Stephanie Smid, Jett Vaultz

Feb. 18, 2018

https://vimeo.com/256356880

 

ABSTRACT

Drawing machines typically rely on constant pressure, use pressure for line thickness, or have no varying thickness at all. In freehand gesture drawing, an artist uses much more than just pressure to create varying lineweights. Tools such as pencils that have a variety of contact points with the paper afford an artist multiple angles with which to draw.  Taking this as a starting point, is it possible to use motion capture to analyze how a skilled artist uses angles to produce dynamic lineweights?

 

OBJECTIVES

From a technical standpoint, our main goal was to use this project as a way to become familiar with the motion capture system, from capturing raw data to understanding ways in which to analyze it. We also hoped to boost our knowledge of Grasshopper and Python. In regards to studying an expert’s gestural drawing, our main goals were:

To Observe:

  1. how our expert makes use of a drawing tool's unique affordances,
  2. how they vary their drawing style based on line types,
  3. and how they instinctively hold and angle a drawing tool in such a way as to produce various lineweights

And to Analyze:

  1. the drawing data in an attempt to quantify an expert's movements,
  2. the correlations between the pencil's tilt and the line thickness,
  3. and whether a standard range of motion can be seen throughout various line types

 

IMPLEMENTATION

The pencil we chose had the unique property of being only graphite, meaning its possible range of contact points with the paper was much greater than that of a regular pencil. We hoped the affordance given by this tool would encourage our expert to explore a larger range of pencil orientations.  To be able to accurately capture our expert’s drawing skills we had to create a rigid body marker set for our graphite pencil. We chose to 3D print the mocap marker holder in order to keep the weight of the holder as light as possible, as we didn’t want a top-heavy tool to potentially bias our artist’s positioning of the pencil. 3D printing also allowed us to make a very strong friction-fit connection between the pencil and the marker holder.

shown is a close-up of our graphite pencil tool with a custom-fit 3D printed marker holder

 

The expert who graciously agreed to draw for us was Rebecca Lefkowitz, a first-year Urban Design graduate student in the School of Architecture. In order to control our data, we studied two simple drawing exercises with two sub-categories for each:

  1. A Straight Line
    1. a thin straight line with as little line thickness variation as possible
    2. a varied straight line with fluctuation in line thickness
  2. A Curved Line
    1. a thin curved line with as little line thickness variation as possible
    2. a varied curved line with fluctuation in line thickness
This diagram shows our first attempt at understanding which elements of our skill to analyze

 

Limiting the drawings to simple, basic shapes allowed Rebecca to focus more on the quality of her lines rather than the overall effect of the drawing. After obtaining the motion capture data, we went into Grasshopper and began analyzing the four line types based on the pencil's tilt and rotation. The tilt measures how far Rebecca angles her pencil away from a completely vertical orientation. The rotation measures the direction in which Rebecca holds the pencil relative to the path of travel of the line being drawn.
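For illustration, tilt and rotation can be computed from the pencil's axis vector as sketched below; the axis and travel-direction inputs are assumed to come from the tracked rigid body, and this is not the actual Grasshopper definition.

  import math

  def tilt_deg(axis):
      # axis: unit vector along the pencil (x, y, z); 0 degrees = perfectly vertical
      return math.degrees(math.acos(max(-1.0, min(1.0, axis[2]))))

  def rotation_deg(axis, travel):
      # travel: unit vector of pen travel in the paper plane
      px, py = axis[0], axis[1]            # project the pencil axis onto the paper
      norm = math.hypot(px, py) or 1.0
      dot = (px * travel[0] + py * travel[1]) / norm
      return math.degrees(math.acos(max(-1.0, min(1.0, dot))))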

 

OUTCOMES

the photo above shows the drawing resulting from all of our capture sessions with Rebecca, our expert

Once we got into Grasshopper, we initially had a hard time getting our capture data to parse correctly. After circumventing some bugs, we were able to develop a definition that allowed us to quantify the tilt and rotation of Rebecca's lines. While we already expected higher tilt angles to produce thicker lineweights, what we found surprising was the variety of combinations in which similar tilt angles were used with different rotation values. Moreover, after watching our analysis animation, we found that even between different line types Rebecca tended to stay within a specific zone of values. To show this, the image below shows Rebecca's "zone cone" when using the graphite pencil:

the image shows Rebecca’s “zone cone”, the area inside which she held the graphite pencil most consistently throughout all of her line drawings

 

With future testing, it would be interesting to see how different artists’ “zone cones” vary based on personal styles. The zone will also change depending on the specific drawing tool used. This project could allow for manipulations of this cone in a way that could reshape the tip of the drawing tool itself, rather than the zone resulting from the tip. For even further development, our analysis could provide new techniques for end effectors that would allow drawing machines to move away from pressure-reliant systems and integrate tilt and rotation into the creation of more dynamic lineweights and complex drawings.

 

CONTRIBUTION

Stephanie: provided the mocap creation, analysis methods, Grasshopper definition, and blog post

Jett: provided the graphite pencil, mocap creation, analysis methods, Python scripting, and video editing/post processing
