Human-Machine Virtuosity
https://courses.ideate.cmu.edu/16-455/s2018
An exploration of skilled human gesture and design, Spring 2018.

Touch & Melt: Tactile Abstraction and Robotic Heat-Forming
https://courses.ideate.cmu.edu/16-455/s2018/782/touchmelt/
by Hang Wang & Varun Gadh
14 May 2018

Abstract

Touch & Melt explores human-machine collaborative fabrication in a process that leverages an innate human skill and a functional robotic skill. The ability to find and focus on engaging physical facets of objects and unique textures on their surfaces – and, relatedly, the ability to easily generate an intricate, pseudo-arbitrary path of travel on and about an object – is distinctly human. The ability to move with the precision and consistency needed for many forms of fabrication belongs firmly to machines.

Using MoCap (Motion Capture) technology to collect tactile scanning data (following the path of the human end-effector), this fabrication methodology generated an abstracted version of the scanned object’s form. The abstraction seeks out features of particular tactile importance by finding the regions in which the most time has been spent.

Next, the process uses a histogram of touch density to generate contours for a robotic arm to follow. Finally, the robotic arm manipulates a piece of polystyrene plastic under a hot air rework station; its motion follows the generated contours. The resulting melted plastic is an abstracted representation of the human interpretation of the target object.

Objectives

The project objectives were as follows:

  1. To observe the tendencies of human tactile scanning: what kinds of edges, forms, textures, and other facets are of the most tactile importance
  2. To test the hypothesis that, when scanning the same object, different users would generate different outcomes when using the system
  3. To find the appropriate material, stock thickness, heat-applying robot tool, contour order, temperature, air pressure, and travel speed for the heat-forming process

Process

For the purposes of explanation, this description will follow the scanning of a single object (pictured below) by two different users.

Pictured: the scanned object

Using either a marker or a set of markers (depending on the software and physical constraints) mounted to a glove or finger, a user scans a target object (in this case the face of one of the project creators).

The MoCap system records the scan and collects three-axis position data.

The position data is then exported and parsed through a Python script into a set of points in 3D space to be represented by Grasshopper in Rhino.
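As a rough illustration of this parsing step, the sketch below reads a simplified CSV into a point list. The column layout (frame, x, y, z) and file name are assumptions; a real Motive export has additional header rows and per-marker columns, so the indices would need adjusting before use.

```python
import csv

def load_scan_points(csv_path):
    """Parse a MoCap export into a list of (x, y, z) tuples.

    Assumes a simplified CSV with columns: frame, x, y, z.
    A real Motive export has extra header rows and per-marker
    columns, so the column indices would need adjusting.
    """
    points = []
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for row in reader:
            try:
                x, y, z = float(row[1]), float(row[2]), float(row[3])
            except (ValueError, IndexError):
                continue  # skip dropped frames or malformed rows
            points.append((x, y, z))
    return points

# points = load_scan_points("scan_take_001.csv")  # hypothetical file name
```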


The 3D point set is flattened onto a single plane and overlaid on a grid of squares. The number of points falling over each square is mapped to that square, and a heat map representing touch density is generated:


In this heat map, the gradient green-yellow-red represents an ascending touch density value range.
Once the touch density values have been mapped onto the grid, each grid square is raised to a height correlated with the touch density value it represents, and a surface is patched over the raised squares.
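A minimal sketch of this binning step is shown below, assuming the points have already been flattened onto the XY plane and translated to the grid origin; the cell size, extent, and maximum height values are illustrative rather than the ones used in the project.

```python
def touch_density_grid(points_2d, cell_size=0.02, extent=0.5):
    """Bin flattened scan points into a square grid and count how
    many samples fall in each cell. Because the MoCap samples at a
    fixed rate, the count per cell is proportional to the time the
    finger spent over that cell (touch density). Assumes the points
    have been translated so the scan region spans [0, extent] on
    both axes."""
    n = int(extent / cell_size)
    grid = [[0] * n for _ in range(n)]
    for x, y in points_2d:
        i, j = int(x // cell_size), int(y // cell_size)
        if 0 <= i < n and 0 <= j < n:
            grid[i][j] += 1
    return grid

def density_to_heights(grid, max_height=0.05):
    """Scale cell counts so the densest cell maps to max_height;
    the smooth surface is then patched over these raised cells."""
    peak = max(max(row) for row in grid) or 1
    return [[max_height * count / peak for count in row] for row in grid]
```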

From this new smooth surface, a set of contours (below) is extracted by slicing the surface at an interval set by the user. (For a deeper understanding of how the contour generation works, read up on the Contour function in Rhino; the two actions rely on the same principle).

These contours are broken up into sets of paths for the robot arm to follow:


The process retains a fair amount of legibility from collected data to robot path.


The robot arm guides the polystyrene stock under the heat gun along the contour paths.


The polystyrene is mounted to a clamp on the robot arm.

 

After several tests (and a bit of singed plastic) we were able to find a fabrication process that strikes an effective balance between expressiveness and information retention!

The problem of reaching that effective fabrication process, however, was non-trivial. One of the factors in the manufacturing process that required testing and exploration was contour following order.

As we wanted to maximize the z-axis deflection of the material due to heat (in order to have the most dramatic and expressive output possible), we initially believed that we should address concentric contours in an in-to-out order. This would minimize the distance between the heat gun and each subsequent contour. However, we learned that – as our contours are relatively close together – the inner rings would experience far too much heat and holes would form in the material, distorting the rest of the material in a way that we viewed as non-ideal for preserving the contour information. As such, we thought it wise to travel out-to-in to decrease the amount of heat experienced by the inner contours.

When we tested out-to-in order, however, the regions of material where the inner contours fall had already traveled too far vertically away from the heat gun to be effectively melted. Finally, we settled on addressing each layer of contours in the order they were sliced. For example, the outermost contour in the diagram below (1) would be followed first. Next, the second, smaller concentric contour, along with the small contour of equivalent concentricity (2), would be followed. The subsequent round of contours would include those marked (3). This continues until the final layer of concentricity is reached. This heating order proved the most effective: it delivered enough total heat over the regions meant to be deformed, without concentrating enough heat in any one place to open large holes.
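A small sketch of this ordering logic, assuming each contour is tagged with the slicing level it came from (1 = outermost); the actual Grasshopper definition differs, but the principle is the same.

```python
def order_contours_by_level(contours):
    """Order contours for heating: all contours from slicing level 1
    (outermost) first, then every contour from level 2, and so on
    inward. `contours` is a list of (level, path) pairs, where `path`
    is whatever curve representation the toolpath generator expects."""
    levels = sorted({level for level, _ in contours})
    ordered = []
    for level in levels:
        ordered.extend(path for lvl, path in contours if lvl == level)
    return ordered
```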

 

Outcomes

When different users scan the same object, results can vary dramatically in both path and touch density. For example, two volunteers who were relatively unfamiliar with the technical aspects of the system scanned the same object (the face of one of the project members) and approached the scanning in completely different ways; the speeds, features of primary focus, and scanning goals of the participants varied dramatically. Seen below, the paths are structurally different from one another, yet each is repetitive within its own pattern.

In terms of investigating which physical facets are the most engaging, we were able to glean information primarily about faces, as that was our chosen object set of interest. Generally speaking, the nose tip, nose edges, jawline, and lower forehead seem to be the areas of primary interest. This seems to be due to the clearly defined curvature of those features. Areas of relatively inconsistent or flat topography (e.g. a plane or a jagged surface) don’t seem to be of particular tactile interest, while edges and relatively long curves seem to call attention to themselves.

After a variety of tests, we discovered the optimal output parameters were as follows:

  • Hot air rework station at 430 ˚C, 90% air pressure
  • 1/16″ Polystyrene Plastic
  • Heat gun (end of hot air rework station) 1.25″ from surface of polystyrene
  • 5mm/s travel speed
  • A level-of-concentricity contour ordering pattern (see final paragraph of Process section for more information)

Acknowledgements

We would like to thank Professors Garth Zeglin and Joshua Bard for their guidance and assistance throughout this project. We would also like to thank Jett Vaultz, Ana Cedillo, Amy Coronado, Felipe Oropeza, Jade Crockem, and Victor Acevedo for volunteering their time.

Agent Conductor
https://courses.ideate.cmu.edu/16-455/s2018/741/agent-conductor/
by Manuel Rodriguez & Jett Vaultz
13 May, 2018
https://vimeo.com/269571365

Abstract

Agent Conductor is an interactive fabrication process where one can control virtual autonomous agents to develop organic meshes through MoCap technology and 3D spatial printing techniques. The user makes conducting gestures to influence the movements of the agents as they progress from a starting-point A to end-point B, using real-time visual feedback provided by a projection of the virtual workspace. The resulting artifact is a 3D printed version of the organic tunnel-like structure generated based on the agents’ paths.

Objectives

Our goal for this toolkit was to fabricate an organic and structurally sound truss that, given the weakness of the material, would most likely be used for decorative, non-structural architectural elements.

We wanted to develop a few key features:

  1. Generate a virtual 3D mesh that is printable with the spatial printing tool
  2. Some degree of user control over the path the agents take to get to the target
  3. User control of the diameter/size of the mesh at any given point
  4. Responsive view of the virtual workspace for real-time feedback

Implementation

With this list of key control elements in mind, we wanted the interaction to be simple and intuitive. For affecting the path the agents take to the target, it seemed most effective to control this with swiping gestures in the direction the user would like the agents to move, like throwing a ball or swinging a tennis racket.

We originally had a flock of autonomous agents generating the truss, each with its own individual movements. However, we wanted to avoid the flock making translational movements each time a gesture force is applied, so we eventually changed the structure of the agents from a cluster of boids to a single autonomous frame with an adjustable number of satellites that represent the agents. With this method we can ensure smooth rotations as well as avoid most of the self-intersection issues that come with having a flock of agents generate the truss lines. The truss surface loses some of its organic nature, but in exchange we gain ease of control and greater printability by controlling a single frame.
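A rough sketch of the satellite idea: the frame reduces to a centre point plus two unit vectors spanning its cross-section plane, and each satellite is a fixed offset on a circle around the centre. The function name and vector representation below are illustrative, not the actual Grasshopper/Python definition.

```python
import math

def satellite_positions(center, right, up, radius, count):
    """Place `count` satellites on a circle of `radius` around `center`,
    in the plane spanned by the frame's `right` and `up` unit vectors.
    Because all satellites share one frame, the cross-section rotates
    smoothly and cannot self-intersect the way independent boids can."""
    points = []
    for k in range(count):
        a = 2 * math.pi * k / count
        points.append(tuple(
            center[i] + radius * (math.cos(a) * right[i] + math.sin(a) * up[i])
            for i in range(3)
        ))
    return points

# ring = satellite_positions((0, 0, 1.0), (1, 0, 0), (0, 1, 0), 0.15, 6)
```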

We needed the user to be able to understand what they were seeing, and be able to change the view to gain meaningful visual feedback. We added a rigid body for the camera to be controlled by the user’s position, so that it would be easier to see the shape and direction of the movement of the agents on the screen. The camera eases over to the user’s position, as opposed to having jarring camera movements matching every motion of the user.
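The easing amounts to moving the camera a small fraction of the remaining distance toward the user’s tracked position each frame; a minimal sketch, with the smoothing factor chosen arbitrarily.

```python
def ease_camera(camera_pos, target_pos, smoothing=0.1):
    """Move the camera a fixed fraction of the remaining distance toward
    the target each frame (exponential easing), so fast user motions are
    followed without jarring jumps."""
    return tuple(c + smoothing * (t - c) for c, t in zip(camera_pos, target_pos))

# called once per frame:
# cam = ease_camera(cam, user_rigid_body_position)
```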

After the conducting process, the generated virtual 3D mesh is printed with the spatial printing tool, using the fabrication method detailed here.

 

Possible structures:

With this system, users can create a few different kinds of tunnel-like structures based on the type of conducting movements. Without any influence or interaction from the user, the agents will move smoothly from the starting point to the finish. This results in a mesh that looks much like a uniform pipe:

 

The distance between the user’s left and right hand determines the diameter of the mesh, from about 10cm to half a meter. Varying this can either cinch or create balloons in the pipe:

 

The user’s right hand can affect the direction the agents progress in, by making repeated swiping gestures in the desired direction. This should cause the agents to move in that direction, creating curves in the mesh:

 

The start and end positions can also be set anywhere to match the physical space if needed, as shown above. To generate a smoother and denser mesh, one can increase the number of agents and reduce the pace at which the truss segments are generated. Likewise, to get a more open, low-poly mesh, the number of agents can be reduced to as low as three with a higher pace.

More agents at a shorter pace.
Fewer agents at a longer pace.
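Returning to the diameter control described above, the sketch below shows how the distance between the hands might map to mesh diameter, clamped to roughly the 10 cm to half-metre range; the linear mapping and hand-span calibration bounds are assumptions.

```python
def diameter_from_hands(left, right,
                        min_d=0.10, max_d=0.50,
                        min_span=0.05, max_span=0.80):
    """Map the distance between the user's hands (two 3D points) to the
    mesh diameter, clamped to [min_d, max_d] metres. The hand-span range
    (min_span..max_span) is an assumed calibration."""
    span = sum((l - r) ** 2 for l, r in zip(left, right)) ** 0.5
    t = (span - min_span) / (max_span - min_span)
    t = max(0.0, min(1.0, t))
    return min_d + t * (max_d - min_d)
```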

See here for a chart of possible variations one can generate with our toolkit.

Outcomes

3D printing one of the generated structures. *Speed augmented—actual speed is 3mm/s

Overall, the toolkit creation was successful, and we were able to generate mostly-printable meshes through this conducting process. It was interesting to watch users work with our toolkit and see how they interpreted the instructions to create the virtual structures. This helped us make the interaction more intuitive and usable, as well as develop a better explanation of the toolkit and how to use it.

With regard to generating the virtual 3D mesh, there are still some self-intersection issues that we have not been able to completely sort out, because of the way we have implemented the gestural interaction. Additionally, there is a twisting issue that sometimes occurs at the base of the structure; this becomes more visible with a shorter pace.

Manuel and his thesis partner developed the tool to the point that we were able to print one of the generated structures, shown below. The fabrication was mostly successful, although there were a few small issues with the extrusion and air flow that required repeated intervention to ensure that the plastic would stick and hold its shape.

 

Contribution

Manuel: GH/Python algorithm (gestural interaction, truss generator, agent definition), user interface, 3D printing

Jett: GH/Python algorithm (gestural interaction, camera motion, agent definition), user interface, blog post, video

Video

Robot’s Cradle
https://courses.ideate.cmu.edu/16-455/s2018/794/robots-cradle/

Stephanie Smid & Sarika Bajaj

Human Machine Virtuosity, Spring 2018

— — —

ABSTRACT

Robot’s Cradle was a project focused on creating a hybrid analog and digital design workflow for experimental weaving. Inspired by peg loom weaving and string art, our workflow transforms a human drawing (tracked using motion capture software) into a weaving pattern using Grasshopper and Python, which is then woven by a robotic arm fitted with our custom string tool. By the end of the project, we were able to achieve the discrete stages of this workflow and begin investigating its edges: specifically, what constitutes better use of the workflow and what counts as a “finished design” from it.

OBJECTIVES

In order to actualize this workflow, we aimed to 1) reliably capture human drawing using the motion capture system 2) properly process the motion capture data to create a set of tangent line approximations 3) convert the tangent lines into a string path the robot could actually follow 4) have the robot use a custom tool to wind the given pattern onto a peg board. The ultimate objective of this process was to achieve a workflow such that designers could create finished weaving pieces from our system.

IMPLEMENTATION

In order to create this workflow, we began with setting up the motion capture system, using a graphite pencil outfitted with motion capture markers that would allow us to track its motion. After setting up some initial bounding conditions, we were able to pretty easily track the line the pencil was drawing, divide up the line into a set of points, and derive tangent lines from each of those points in Grasshopper.
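Outside of Grasshopper, the tangent step can be approximated with central differences over the sampled points; the sketch below is a minimal version under that assumption, with the half-length parameter purely illustrative.

```python
def tangent_lines(points, half_length=0.3):
    """Approximate the tangent at each interior sample of an ordered 2D
    polyline using the central difference of its neighbours, and return
    a line segment of length 2*half_length centred on the sample point."""
    segments = []
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        dx, dy = x2 - x0, y2 - y0
        norm = (dx * dx + dy * dy) ** 0.5 or 1.0
        ux, uy = dx / norm, dy / norm
        segments.append(((x1 - half_length * ux, y1 - half_length * uy),
                         (x1 + half_length * ux, y1 + half_length * uy)))
    return segments
```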

We then transitioned to the physical parts of the project, namely procuring string, creating a base peg board, and creating a robot tool that could hold a thread spool and dispense string. The string we settled on was wax-coated, which made the winding process a bit simpler, as the thread was sturdier than normal and needed less tensioning. We went through several iterations of the peg board, experimenting with which peg shape would best encourage the string to stay wound, as well as what distance between pegs best kept the integrity of the design while not being so close as to prevent winding. Our robot tool was the most complex physical part of the system: we had to iterate on how best to keep tension throughout the system (a problem we solved using a felt washer to tighten the turning of the spool) as well as combat the tension points around the edges of the string-dispensing tube.

After settling on final designs for the peg board and robot tool, we went back and wrote the Python code needed to turn our tangent lines into an actual robot string path, which we then converted into HAL code for winding. Once the robot could wind successfully, we iterated again on making the entire system more connected into an actual workflow. The main change we made in this process was replacing the projection-based user feedback from our initial design with a separate screen.

OUTCOMES

We were successful in creating a complete workflow for experimental weaving, which meant we were able to explore what a “finished” design from our system might look like. We had several users come in and test the system to see how they interacted with it and what some of the better results might look like. We found that the “finished” designs often had areas of clear string density, such that the initial drawing was clearly visualized in the string; users would often redo their initial drawing when the string pattern produced was too abstract and not clearly visualized in string density. The best example of a finished piece from the workflow is shown below, where several layers (each with distinct, clearly identifiable patterns) are layered upon each other to create a final piece.

CONTRIBUTION

Stephanie primarily worked on the Rhino and Grasshopper portion of the code, as well as generating the final HAL code for winding. Sarika focused on the Python string winding algorithm and the point generation for maneuvering around each peg. Both Stephanie and Sarika created iterations of the peg board as well as the robot tool that held the thread spool and dispensed thread.

The two of us would like to thank Professor Josh Bard and Professor Garth Zeglin for their mentoring and advice throughout this project. We would also like to thank Manuel Rodriguez for their assistance in debugging some of our Python and Grasshopper integration issues and Jett Vaultz for allowing us to borrow their graphite pencil.

Tactile Abstraction (Shortest Path Prototype)
https://courses.ideate.cmu.edu/16-455/s2018/664/tactile-abstraction-shortest-path-prototype/

This project uses MoCap technology to leverage human tactile scanning in order to generate an abstracted version of the scanned object’s form. It will take advantage of the innate human skill of highlighting features of particular tactile importance. It will then recreate an abstracted version of the scan using a hot air rework station and polystyrene plastic.

At this point in the project, we have a solid understanding of the technical capabilities and limitations of the workflow. The primary areas of work moving forward are selecting the input objects, creating a finger mount for the mocap marker, refining the workflow, defining the parameters of translation from input information to output trajectory, creating the output mechanisms, and evaluating the output.

The choice of input objects will be given additional consideration, as we would like to have a cohesive set that maintains meaning in the process. Additionally, having multiple users scan a single object could provide very interesting insight into how the workflow adapts based on user input. We may 3D print a ring mount for the mocap marker, or use an existing ring and adhesive. The translation will rely on point density (the points being a sampling of those generated by the motion capture process), and may also take into account direction and speed of scan trajectory. Additionally, this data will be converted to robot motion that will likely need to take an “inside-out” pattern – traveling outward from a central point rather than inward from a border. The output mechanism to be created will be (i) a mount for the robot arm to hold the polystyrene sheet and move it within a given bound and (ii) a mount for the hot air rework station to keep the nozzle and box secure and stationary.

Prototype: Agent Conductor
https://courses.ideate.cmu.edu/16-455/s2018/666/prototype-agent-conductor/
Manuel Rodriguez & Jett Vaultz

This project is a hybrid fabrication method between a human and virtual autonomous agents to develop organic meshes using MoCap technology and 3D spatial printing techniques. The user makes conducting gestures to influence the movements of the agents as they move from a starting point A to point B, using real-time visual feedback provided by a projection of the virtual workspace.

Screencap from our role-playing test, with an example of the agents’ movements and generated mesh.

 

Workflow diagram

For our shortest path prototype, we discussed what the agents’ default behaviour might look like, without any interaction or influence by the user, given starting and end points A and B. We then took some videos of what the conducting might look like given a set of agents progressing in this default behaviour from the start to the end point, and developed a few sketches of the effects on the agents while watching the videos. In Grasshopper, we started developing a Python script that dictates the movements of a set of agents progressing from one point to another in the default behaviour we had established.

Sketch of the progression of agents from the top plane from the role-playing test down to the second without any kind of influence from the user.

A sketch of a possible structure based on the video, which shows a silhouette of the movements of the agents.

Our next steps are to flesh out our Python script so that we can generate a plausible 3D mesh as the agents progress from start to finish, first without any interaction. Once we have this working, we’ll build a more robust algorithm for incorporating user interaction to create more unique and complex structures. The resulting mesh would then be printed from one end to the other using the spatial printing tools in development by Manuel.

 

Robot’s Cradle: Shortest Path Prototype
https://courses.ideate.cmu.edu/16-455/s2018/645/robots-cradle-shortest-path-prototype/
Stephanie Smid and Sarika Bajaj

current progress | example artifacts | next steps

Our project involves creating a hybrid workflow that combines human drawing skill and robotic assembly for peg loom weaving. Through this workflow, artists should be able to draw patterns with specified string densities that a robot will solve for and manually string around a loom.

The specific processes involved in our workflow are detailed below: motion capture data is used to capture a drawing, which is then processed in Grasshopper and finally converted to RAPID code that controls the robot.

For our shortest path prototype, our goals included 1) properly tracking human drawing using the motion capture system 2) processing the motion capture data in Grasshopper to create a viable string pattern 3) using the viable string pattern to hand wind the pattern on our custom made peg board.

Since Stephanie’s first project had involved tracking human drawing using the motion capture system, we were able to reuse the same rig for this project and got the pen tracked rather quickly. Moreover, after some initial tests in which we experimented with different x, y, and z bounding box cutoffs as well as time sampling, we were able to get a reasonably smooth curve from the motion capture system.

Using Grasshopper, we were able to split up the motion capture curve into distinct points. We then drew tangent lines from each of those points that were then pulled to the closest approximate peg (in our case we had 13 per side). Our final step involved simply printing out the Grasshopper visualization and using the printout to hand wind the string around the pegs.
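The “pull to the closest peg” step can be sketched as a nearest-neighbour snap of each tangent line’s endpoints, assuming the peg positions are known as 2D coordinates and the tangent lines have already been extended out toward the frame; the function below is illustrative, not the Grasshopper component we used.

```python
def snap_line_to_pegs(end_a, end_b, pegs):
    """Pull both endpoints of a tangent line to their nearest pegs,
    turning the line into a string segment between two pegs.
    `pegs` is a list of (x, y) peg positions around the frame."""
    def nearest(p):
        return min(pegs, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
    return nearest(end_a), nearest(end_b)

# segments = [snap_line_to_pegs(a, b, pegs) for a, b in tangent_segments]
```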

The artifacts from our shortest path prototype are illustrated below:

In terms of next steps, the major problem will be integrating the robot into our current workflow; instead of hand winding as we are now, we will have to create a system that enables the robot to wind the string around the pegs. For robot winding, we will also have to do another iteration of our peg board, as we noticed our acrylic frame was not able to withstand the force of the string tension (without a foam core base, the frame would often warp); we will probably create a sturdier wood iteration that can handle the force. Moreover, we need to do some further Grasshopper processing to identify the most viable path for the string to follow, as that is currently an approximation we make manually by observing the Grasshopper output. Finally, we also need to figure out a better way to track the pencil in relation to the frame itself, to ensure that our frame of reference for any drawing is correct; while the frame of reference seemed reasonable in our tests, it is possible there was some offset between the physical drawing and the virtual model.

Project Proposal: Agent Conductor
https://courses.ideate.cmu.edu/16-455/s2018/621/project-proposal-agent-conductor/
Manuel Rodriguez & Jett Vaultz

 

Agent Conductor is a design and fabrication method that is a hybrid between the digital and physical worlds. It is a negotiation between two living parts, one physical and one digital. The former refers to a human; the latter, to a set of agents with their own behaviour. Is it possible to tame them?

Sketch of proposed physical workcell

The design skill combines the ability to direct the set of agents as you will – within constraints given by a list of rules and digital tools – with the creativity of dealing with a dynamic entity that moves around on its own. It will be closer to training a dog than to improving a fabrication skill such as sculpting.

Hybrid Skill Workflow

In order to follow and understand your own movements relative to the piece being fabricated, a projector will be necessary, as it is where the agents live. Through streaming motion capture, the actor’s movements are recorded and projected, allowing full interaction with the agents.

The degree of manipulation of the agents’ behaviour is to be determined. Nevertheless, a basic setup will consist of the number of agents, their speed, and their degree of separation vs. cohesion, alignment, and repulsion or attraction. These variables are expected to be controlled by a number of different rigid bodies. How to combine those is up to the user.

The resulting geometry is also variable: from a curve made out of all the tracked locations of the agents (with variable thickness), to a surface formed by lofting each of the mentioned curves, to more volumetric bodies made out of shapes associated with the motion of the agents.

The algorithms – agent behaviours, interaction with the user, resulting geometry, and fabrication files – will be written in Grasshopper Python.

Fabrication will happen by means of additive manufacturing, with robotic spatial printing as the chosen medium. The thickness and degree of fabricability will depend on the development of the tool, so some coding in HAL and RAPID will be necessary at this stage.

Deform + Reform – Human-Machine Tactile Caricature
https://courses.ideate.cmu.edu/16-455/s2018/597/deform-reform-human-machine-tactile-caricature/

The purpose of this project is to integrate machine skill and human skill to respond to, and generate in abstract, a tactile caricature of an object. We aim to explore fabrication that relies on fully robotic active skills and a pair of human skills (one of which is innate, and the other of which would be developed by this process).

The innate human skill of viscerally exploring an object with the hands will be applied, initially, to a base object. The human task will be considered “3D hand-scanning”, or “hand scanning”, but the genuine human, non-robot-replicable skill is tactile sensing and tactile opinion. This is a very visceral, innate form of sensing that human beings rely on to react to their environment. The motion of the human scan will be tracked with motion capture markers, allowing us to collect data on which affordances are of particular tactile interest. This process would also help develop the human skill of physical awareness of 3D objects (also known as 3D visualization when applied to drafting or CAD modeling).

Human actor tactilely explores an object (represented by a generic model)

With this path data, we can learn which features are the most tactilely significant, and this knowledge can be applied to robot arm aluminum deformation.

 

 

Model of robot in action deforming aluminum sheet

Finger motion along a subject is replicated by the robot “finger” motion along a deformed sheet.

 

Model of aluminum sheet post-deformation

If possible, we’d like to explore implementing a “recursive” approach: the user explores a base object, the sheet is deformed, the next human exploration is conducted on the deformed sheet, and either the same sheet – or a different sheet – is subsequently deformed. This echoing deformation could occur several times, and the final result would be a deep tactile caricature of the object and its children.

The technical context of this piece references photogrammetry – the process of generating a 3D model by combining many photos of a single subject. This project pays homage to photogrammetry by using dynamic data to create 3D visualizations of the object, but incorporates physical and tactile action both in the input and in the output.  The cultural context of this piece explores how caricature, which is the act of isolating the outstanding features of a subject and disproportionately representing them, can be applied to tactile sensing in an object.

 

Hybrid Skill Diagram – the process of conversion from human skill to physical output

The implementation of this project will rely on the Motive motion capture system to collect hand scan data. The immediate feedback on which the human scanner relies will be provided by Rhino. The hand scan data will be sent to Grasshopper, where it may need to be cleaned up/smoothed by a Python script, and will then be converted to robot arm control data in HAL (a Grasshopper extension) and RobotStudio. An aluminum sheet will be held tightly in a mount, and the robot arm will deform the aluminum by pushing it down according to the processed scan trajectory data.
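The clean-up/smoothing step could be as simple as a moving-average pass over the captured path before conversion to robot targets; the sketch below is a minimal version under that assumption, with the window size arbitrary.

```python
def smooth_path(points, window=5):
    """Smooth a captured 3D path with a centred moving average,
    reducing MoCap jitter before the path is converted into robot
    targets. `points` is an ordered list of (x, y, z) tuples."""
    half = window // 2
    smoothed = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        chunk = points[lo:hi]
        smoothed.append(tuple(sum(p[k] for p in chunk) / len(chunk) for k in range(3)))
    return smoothed
```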

 

Deform + Reform was inspired in part by this project from the CMU School of Architecture.

Project Proposal: Robot’s Cradle
https://courses.ideate.cmu.edu/16-455/s2018/577/project-proposal-robotic-weaving/
Stephanie Smid and Sarika Bajaj

project context | motivation | scope | implementation

Our project involves creating a hybrid workflow that combines human drawing skill and robotic assembly for peg loom weaving. Through this workflow, artists should be able to draw patterns with specified string densities that a robot will solve for and manually string around a loom.

The inspiration for this project originally came from the fabrication processes of peg loom weaving and string art; the first is the base of most weaving techniques and involves weavers densely binding and patterning string across a row of pegs, and the latter is a more experimental result of peg loom weaving where artists focus more on creating patterns via the density of the string instead of the string itself. The closest case study for this project is the Silk Pavilion Project from the MIT Media Lab, where researchers used a CNC machine to manually attach string to a metal frame lined with pegs and then placed silkworms on top to fill in the structure.

Our spin on these experimental weaving techniques is to create a workflow where a person’s drawing gets processed and translated into a string density pattern that a robot weaves on a set of pegs. The person will first draw on the empty peg board, detailing gaps where no string is to be woven and then detailing areas of higher density and patterns. The MoCap setup in the design lab will track the stylus the person is using to draw and will be used to create a real-time projection of the string patterning created by the drawing. Once the person is satisfied with the projected string pattern, the robot will use string coated in UV-curing resin to create the pattern on the peg board. The final step will simply involve using UV light to cure the final string shape and removing the pattern from the peg board. The initial peg boards will be flat, two-dimensional sheets; however, if we are able to successfully create this two-dimensional workflow, we will start transitioning to three dimensions using peg boards with slight curves and angles to further distort the string patterning. Below are early diagrams exploring how the system could work.

The following is a visual of the weaving system:

 

The following is an explicit workflow diagram detailing the process:

 

PanCap
https://courses.ideate.cmu.edu/16-455/s2018/511/pancap/

Varun Gadh, Hang Wang

02-19-2018

Please find our video here.

Abstract

The initial goal of PanCap was to use motion capture technology to better understand, and digitally simulate, the process of flipping a pancake. We have been able to achieve both of these goals. A third goal – to try to glean information from the motion capture that could help teach a robot to flip a pancake – sprang up as a longer-term motivation. This goal is certainly in progress, and understanding the data has helped us narrow down the set of circumstances that would allow a robotic pancake flip to occur.

Objectives

Our overarching objective – to better understand and digitally simulate the process of flipping a pancake – broke up into two discrete goals. First, to leverage the position & orientation data captured in the MoCap recordings to find specific border values between each phase of the pancake’s motion; this allows us to break the pancake flip into particular physical circumstances. Second, to understand more quantitatively what set of skills is necessary to correctly flip a pancake. This could extend to understanding the physical realities of the flip, with the potential end goal/motivation being a robotic pancake flip; we would need to understand the breadth of designs that could achieve this goal.

Implementation

 

As shown above, we chose a pan with a longer lever arm to simulate the motion a robot arm might make (as the arm may be acceleration-gated but could have a greater degree of rotational acceleration), and mounted motion capture markers on the edges of the pan and pancake to generate the rigid bodies in the MoCap software.
We measured the size of the pan and the pancake and modeled them in Rhino.
Using Grasshopper, we extract the data exported from the motion capture software (a CSV file) and obtain the trajectories of the pan and the pancake.

Outcomes

Likely as a result of the nature of the testing and the placement of the markers, in the flip we chose to analyze, one marker position disappears from the data at one particular moment. The process of correcting this data required us to understand the data output format in greater depth than anticipated. Additionally, in order to correct for noise/inaccuracy, exponential smoothing functions were applied to the raw variable, its first derivative, and its second derivative before graphing. Finally, for graphing purposes, the changes in values along the axis perpendicular to the “up” and “forward” motions (which lie in the imaginary plane of the pan’s motion) were ignored.
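A sketch of the smoothing and finite-difference steps, assuming a fixed frame interval dt; the smoothing factor is illustrative and the variable names in the usage comments are hypothetical.

```python
def exp_smooth(values, alpha=0.2):
    """Simple exponential smoothing: each output is a blend of the new
    sample and the previous smoothed value."""
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def finite_difference(values, dt):
    """Central finite differences, giving the derivative with respect to
    time for a series sampled every dt seconds (velocity from position,
    then acceleration from velocity)."""
    return [(values[i + 1] - values[i - 1]) / (2 * dt)
            for i in range(1, len(values) - 1)]

# z  = exp_smooth(raw_z)                      # smoothed vertical position
# vz = exp_smooth(finite_difference(z, dt))   # smoothed vertical velocity
# az = exp_smooth(finite_difference(vz, dt))  # smoothed vertical acceleration
```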

The raw data collected, the digital simulations, and the finite difference results together allowed us to generate a more complete physical understanding of the flip. The flip can be broken into five distinct physical stages:
1) Pan and pancake are still relative to ground
2) Pan is accelerating forwards (away from flipper) and up (away from ground) relative to ground. Pancake is not moving relative to pan, kept still by force from the pan increasing static friction.
2.5) (This may or may not occur: not only could there be some degree of variation, but the data was also not conclusive.) Pan continues to accelerate upwards but accelerates backwards while maintaining a forwards velocity. This breaks the static friction between the pancake and the pan.
3) Pan is accelerating downwards and backwards. Pancake begins to leave the pan. Theoretically, this could happen over an infinitesimal period. However, in practice there may be some period in which the pan and the pancake remain roughly together, potentially due to surface adhesion or flexibility on the part of the pancake combined with slow downward acceleration on the part of the pan.
4) Pan moves wherever it needs to (typically forwards) to catch the pancake. Pancake is in projectile regime of motion.
5) Pancake and pan collide.
Vertical and forward values of the pancake over frames.

Contribution

-Hang: Grasshopper, animations, videos, original data, co-expert

-Varun: Correcting data, graphs, physical analysis, co-expert

 

Video
