Human-Machine Virtuosity, an exploration of skilled human gesture and design, Spring 2019 (https://courses.ideate.cmu.edu/16-455/s2019)

LIZZEE👩🏻‍🎨FEEDS🌭AMERICA🇺🇸
https://courses.ideate.cmu.edu/16-455/s2019/1296/prepare/
Tue, 30 Apr 2019
By: Yi-Chin Lee, Matt Prindible, and Lizzee Solomon
Submission Date: Wednesday, May 1, 2019

Abstract(ion)
The capacity to explore physical artistic work in abstraction through digital transformation and fabrication is conceptually simple but technically challenging. Our “machine-in-the-loop” workflow comprises a laser scanner to generate a point cloud of the artist’s sculpture, software to create a watertight mesh from that point cloud, parametric design software to procedurally transform the mesh, a slicing tool for generating toolpaths from severely damaged 3D models, a 3D printer capable of working through tooling errors, and a vacuum form table to prepare the final model. The artist’s tools of intervention were air-dry clay and Super Sculpey, along with some basic sculpting tools. An oven was also involved. We used this workflow to generate a series of machine “responses” to the artist’s sculpture. The artist then created a response to this response, and so on. Our goal was to enable a “dialog” between artist and machine that helped the artist explore unexpected shapes and gestures. Our technical exploration was successful and, based on this functional experimental workflow and a conversation with our artist, our artistic exploration was also successful.

Objectives
The fundamental goal of this project was to demonstrate a functional experimental workflow for generating a transformed physical object from an original sculpture—the technical underpinnings rather than the artistic expression. Because of the recursive nature of the workflow, it also needed to be highly repeatable and somewhat reliable. Working with an artist, however, afforded the opportunity to discuss and critique the value of this workflow as part of her artistic practice. To do this, we needed to generate forms of high enough artistic value.

Implementation
Given the experimental nature of our workflow, the primary driver of design choices was “Whatever It Takes.” Most of the software and hardware used for this project was not meant to be used this way and, as in the case of most digital fabrication, aesthetics were sacrificed in order to generate a physical form. That being said, in order to have a useful conversation about the value of this workflow in our artist’s practice, we had to generate forms of compelling artistic value.

The first step in the process was evaluating tools for volumetric scanning of the original sculpture. We tested photogrammetry methods with simple mobile phone cameras, a Microsoft Kinect, and a 3D laser scanner. We picked the EinScan laser scanner not only for its precision, but because it also included the software needed to generate a watertight model from a point cloud.

3D scanning setup

We ultimately used Grasshopper for Rhino to procedurally transform our mesh because of its control over parameterization—we knew early on that we wanted to heavily influence the forms the machine could generate (playing the role of a human artificial intelligence). But we also played with solid modeling programs like Fusion 360 and datamoshing techniques for glitch art.
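The Grasshopper definition itself isn’t reproduced here, but the kind of parametric displacement it performed can be sketched in plain Python. This is a hypothetical stand-in, not our actual definition; the function name and parameters (`amplitude`, `frequency`, `seed`) are ours:

```python
import math
import random

def displace_vertices(vertices, amplitude=2.0, frequency=0.5, seed=0):
    """Perturb each mesh vertex by a value mixing a sinusoid of position
    with seeded random jitter -- a crude stand-in for the parametric
    transforms we built in Grasshopper. A fixed seed keeps the transform
    repeatable, which matters for a recursive workflow."""
    rng = random.Random(seed)
    out = []
    for (x, y, z) in vertices:
        n = math.sin(frequency * x) * math.cos(frequency * y) + rng.uniform(-0.2, 0.2)
        out.append((x + amplitude * n, y + amplitude * n, z + amplitude * n))
    return out
```

Because the jitter is seeded, running the same parameters on the same mesh reproduces the same form, which is what let us dial the amount of deformation up or down between iterations.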

For pre-processing the generated model, we used Meshmixer, Simplify3D, and Magics. Which pre-processor and slicer we used depended on its ability to handle the models we generated, which were often non-manifold or riddled with thousands of inverted normals and naked edges.
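Whether a slicer could handle a model usually came down to edge topology: in a watertight manifold mesh, every edge is shared by exactly two triangles. A minimal Python sketch of that check (function name is ours):

```python
from collections import Counter

def edge_defects(faces):
    """Classify triangle-mesh edges by how many faces share them.
    A watertight manifold mesh uses every edge exactly twice; 'naked'
    edges (used once) and non-manifold edges (used more than twice)
    are the defects that broke many of the slicers we tried."""
    counts = Counter()
    for tri in faces:
        for i in range(3):
            a, b = sorted((tri[i], tri[(i + 1) % 3]))
            counts[(a, b)] += 1
    naked = {e for e, c in counts.items() if c == 1}
    nonmanifold = {e for e, c in counts.items() if c > 2}
    return naked, nonmanifold
```

A closed tetrahedron passes this check; a lone triangle has three naked edges, which is roughly the state many of our procedurally transformed meshes arrived in.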

During the printing process, we had a chance to test every 3D printer on CMU’s campus. Although IDeATe’s Stratasys printers had the best finish for the cost, their small build volumes made production difficult. And although dFab’s binder jetting printer and the School of Design’s Formlabs Form 2 had the best finishes overall, they were incredibly expensive (although we’d still like to explore binder jetting for models impossible to print via extrusion). Ultimately, dFab’s Ultimakers with PLA became the workhorses of our project. The finish and timing were decent (most models took about 30 hours to print), but most importantly, the build volume gave us the sizes we needed.

Our artist used either air-dry clay (which did not adhere well to the PLA surface) or Super Sculpey (which baked well in the oven with the PLA and with no additional slumping) for her responses. Differential expansion between the materials was a problem: Super Sculpey shrinks quite a bit in the oven, while PLA does not.

Hybrid Workflow

Hotdog Collection

Dinner Table Setting

table setting showing the hotdog evolution
Lizzee’s response
Lizzee’s response
hotdog slice as side meal

Contribution
This was an equal effort by both of us. This is the kind of project that demands someone at the computer driving the mouse and keyboard, someone in the next seat cheering them on, and switching back and forth as we worked through all the technical hurdles.

Outcomes
The biggest failures in our project were build failures during 3D printing (mostly for the lost time) and unrepairable meshes from our procedural transformation (which kept us from printing some of the crazier forms). However, these limitations did not keep us from seeing a distinct change in the artist’s sculpting style (often reciprocating the “crystalline” geometry the machine generated).

Precedent and cited work
Three pieces of existing work were referenced early and often throughout the process: Robin Sloan’s Flip Flop, Navaneeth R Nair’s Wave-form, and Geoffrey Mann’s Cross-fire. As well as a million YouTube tutorials.


The Ultimaker clip is cut from the official Ultimaker website:
https://www.youtube.com/watch?time_continue=11&v=otmmihz3Gq8
Lizzee Feeds America: the shortest path to twenty billion hot dogs
https://courses.ideate.cmu.edu/16-455/s2019/1220/lizzee-feeds-america_shortest-path/
Wed, 20 Mar 2019

Where we last left off: a proposal and an abstract model of our workflow. This was enough to guide our exploration of each enabling technology (3D scanners, 3D printers, sculpting, and transforming), but, as we expected, the real project was in the weeds of each step of the process.

Where we are now: We can now successfully demonstrate two trips through our proposed human-machine sculpture process.

Lizzee Feeds America_Final
https://courses.ideate.cmu.edu/16-455/s2019/1129/lizzee-feeds-america_final/
Mon, 18 Feb 2019

What’s the big idea

The objective of our project is to create an interactive cycle between Lizzee, our collaborating artist, and technology. After analyzing Lizzee’s different works of art, we decided to capture the molding and mass production of her hand-sculpted models. Lizzee has a passion for creating models out of clay, then using the same mold to create a vacuum-formed model. With this process, the only variation she can make from one model to the next is the color, which makes the process a little limiting. Lizzee would like to see some variation from model to model, and possibly end up with a model that looks nothing like the one it originated from. To solve this problem, we plan to work with Lizzee and her models to introduce noise a little at a time; depending on her liking, she can then add to the noise or subtract from it.

Hybrid Fabrication Process

Capture the human crafting skill

When technology is incorporated, it can be very easy for the end product to lose the authorship and trademark that are unique to the artist. With this in mind, we have decided to go back and forth between Lizzee and the noise-generating technology. Using a Kinect, a 3D scanner, photogrammetry, or an iPhone X (to be determined through further experimentation), we will capture Lizzee’s clay model to be analyzed in either Autodesk’s Fusion 360 or Rhino.

EinScan-Pro

Kinect

iPhone X

Data Processing

With that information, we will work on a program that, given the scan data, generates noise, deforming or changing the shape of the original model. Also pending further experimentation, the noise will be implemented either through Rhino, through Fusion 360, or by creating a Python program that randomly adds and deletes data points of the model. Then, using PLA, plaster, resin, or nylon powder, we will 3D print the updated model with the noise incorporated. We would then give Lizzee the new model, allowing her to make any changes to her liking, so she remains somewhat in control of what the final product will look like. We would continue this process until Lizzee is satisfied with the quantity and quality of the most recently produced model.
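The “randomly add and delete data points” idea can be illustrated with a short hypothetical Python sketch. This is a rough illustration, not the program described above (which is yet to be written); the function name and parameters (`delete_frac`, `jitter`, `seed`) are ours:

```python
import random

def add_noise(points, delete_frac=0.1, jitter=0.5, seed=0):
    """Randomly drop a fraction of scan points, then jitter the survivors
    by a bounded random offset on each coordinate. Seeding makes a given
    round of noise reproducible, so an iteration can be re-run or scaled."""
    rng = random.Random(seed)
    kept = [p for p in points if rng.random() > delete_frac]
    return [tuple(c + rng.uniform(-jitter, jitter) for c in p) for p in kept]
```

Running successive iterations with different seeds (or feeding one output back in as the next input) would accumulate deformation a little at a time, matching the back-and-forth cycle with Lizzee.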

Parametric modeling with Rhino/Grasshopper

Generative modeling with Fusion 360

Mess with the data points

Print out the hot dog

Since it can be challenging at times to use different software, we did a small-scale version of the described process on a smaller hot dog made of plaster to ensure that the technology works in the way we need it to before working with Lizzee’s model. The different forms of technology have given us different types of data, which we can learn from and apply some source of noise to. In addition to learning how this technology works, we plan to use the ABB robot to paint the created models, mostly for immersion printing. Since we plan to make multiple models in a limited amount of time, the robot would be able to add a nice layer of paint more efficiently than humans can.

With our proposed process, we believe that we will produce an end product that is both appealing to Lizzee and true to her, while also including some technological input.

Lizzee Feeds America
https://courses.ideate.cmu.edu/16-455/s2019/1080/lizzee-feeds-america/
Mon, 11 Feb 2019

How to make 20 billion hot dogs: An exploration in human-machine virtuosity
by Yi-Chen Lee, Matt Prindible, Lizzee Solomon, and Gabriela Tellez

What’s the big idea?

The objective of our project is to create an interactive cycle between Lizzee, our collaborating artist, and technology. After analyzing Lizzee’s different works of art, we decided to capture the molding and mass production of her hand-sculpted models. Lizzee has a passion for creating models out of clay, then using the same mold to create a vacuum-formed model. With this process, the only variation she can make from one model to the next is the color, which makes the process a little limiting. Lizzee would like to see some variation from model to model, while still being able to recognize which model it originated from. To solve this problem, we plan to work with Lizzee and her models to introduce noise a little at a time; depending on her liking, she can then add to the noise or subtract from it.

When technology is incorporated, it can be very easy for the end product to lose the authorship and trademark that are unique to the artist. With this in mind, we have decided to go back and forth between Lizzee and the noise-generating technology. Using a Kinect, a 3D scanner, or photogrammetry (to be determined through experimentation), we will capture Lizzee’s clay model to be analyzed in either Autodesk’s Fusion 360 or Rhino. With that information, we will work on a program that, given the scan data, generates noise, deforming or changing the shape of the original model. Then, using PLA, resin, or nylon powder, we will 3D print the updated model with the noise incorporated. We would then give Lizzee the new model, allowing her to make any changes to her liking, so she remains somewhat in control of what the final product will look like. We would continue this process until Lizzee is satisfied with the quantity and quality of the most recently produced model.

Since it can be challenging at times to use different software, we will do a small-scale version of the described process on a small cube to ensure that the technology works in the way we need it to before working with Lizzee’s model. In addition to learning how this technology works, we plan to use the ABB robot to paint the created models. Since we plan to make multiple models in a limited amount of time, the robot would be able to add a nice layer of paint more efficiently than humans can.

With our proposed process, we believe that we will produce an end product that is both appealing to Lizzee and true to her, while also including some technological input.

How does it work?



What does it (kind of) look like?

P1: Wood Gouge—Capturing Human Skill
https://courses.ideate.cmu.edu/16-455/s2019/973/p1-wood-gouge-capturing-human-skill/
Fri, 25 Jan 2019
Mary Tsai and Matt Prindible

Introduction:
For this exercise, we chose to study a simple hand gouge and a basswood blank. The specific motion we are interested in is the gouge digging in and carving up a sliver of the wood. Because of the linear shape of the hand gouge, we had to construct a platform on which the retroreflective markers could be attached. We also added these markers to the wood blank.

Process:
Because of the small workable surface area of the wood block, the hand gouge motions were limited and fairly constricted to movement in one direction. Typically, a wood carver would approach the block from multiple directions in order to make certain types of cuts. The range of motion was further constrained by the need to keep the markers in the camera’s field of view.

We then created two rigid bodies (the gouge and the block) in Motive and captured a base pose for orientation and three takes of ten seconds of use.

Rigid Bodies in Motive

We imported the base pose capture data into Grasshopper and baked its points into Rhino. We created models of the tool and block independently and then used the base planes from the motion capture data to orient and position the models. Since the wood block is static, it also served as our ground plane.

Digital Reconstruction of our Tool and Block

We used these models in Rhino to create Grasshopper geometries for each rigid body (Geo). We then fed the motion capture data into Grasshopper and used the orient node to manipulate the position of our Rhino models.
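Grasshopper’s Orient component, conceptually, re-expresses geometry measured relative to one plane in the frame of another. A minimal Python sketch of that plane-to-plane mapping, under the assumption that each plane is given as an origin plus three orthonormal axis vectors (the function name and argument names are ours):

```python
def orient(points, src_origin, src_axes, dst_origin, dst_axes):
    """Map points from a source plane's frame to a target plane's frame,
    roughly what Grasshopper's Orient node does with our mocap base pose.
    src_axes / dst_axes are (x, y, z) triples of orthonormal unit vectors."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    out = []
    for p in points:
        rel = tuple(p[i] - src_origin[i] for i in range(3))
        coords = [dot(rel, ax) for ax in src_axes]   # local plane coordinates
        out.append(tuple(
            dst_origin[i] + sum(c * ax[i] for c, ax in zip(coords, dst_axes))
            for i in range(3)))
    return out
```

Feeding each frame of rigid-body pose data in as the target plane is what animates the static Rhino models along the captured motion.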

We added a slider to isolate one interesting second of motion: where the tool interacts with the wood and captured the animation.

Analysis:
Based on our motion study and data, we would be able to figure out the velocity and force at which the gouge hits the block (visibly, the point at which the gouge stops moving). This appears obvious, but depending on the level of skill of the craftsperson (or the wood type and grain direction), the gouge will either come to a hard stop or move continuously through the wood. In our capture we see the former: the tool coming to a complete stop and building up enough force to overcome the material. Some of this is because of the limited range of motion afforded to the craftsperson during motion capture. Some of the tool rotation observed in our study is a result of the resistance the tool encountered.
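The velocity estimate mentioned above would come directly from finite differences on successive marker positions. A small illustrative Python sketch (not part of our actual pipeline; the function name is ours):

```python
def speeds(positions, dt):
    """Finite-difference speed estimates from successive 3D marker
    positions sampled every dt seconds -- the calculation that would
    expose the gouge coming to a hard stop against the grain (speed
    dropping to zero) versus moving continuously through the wood."""
    out = []
    for (x0, y0, z0), (x1, y1, z1) in zip(positions, positions[1:]):
        d = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
        out.append(d / dt)
    return out
```

A hard stop shows up as a sudden drop to near-zero speed between frames, which is exactly the signature visible in our capture.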

Difference in tool position immediately before and immediately after carving the sliver
]]>
https://courses.ideate.cmu.edu/16-455/s2019/973/p1-wood-gouge-capturing-human-skill/feed/ 0