Human-Machine Virtuosity
https://courses.ideate.cmu.edu/16-455/s2019
An exploration of skilled human gesture and design, Spring 2019.

Lizzee Feeds America_Final
Mon, 18 Feb 2019

What’s the big idea?

The objective of our project is to create an interactive cycle between Lizzee, our collaborating artist, and technology. After analyzing Lizzee’s different works of art, we decided to capture the molding and mass production of her hand-sculpted models. Lizzee has a passion for creating models out of clay, then using the same mold to create vacuum-formed copies. With this process, the only variation she can make from one model to the next is the color, which makes the process somewhat limiting. Lizzee would like to see variation from model to model, possibly ending up with a model that looks nothing like the one it originated from. To address this, we plan to work with Lizzee and her models to introduce noise a little at a time; depending on her liking, she can add to the noise or subtract from it.

Hybrid Fabrication Process

Capture the human crafting skill

When technology is incorporated, it can be easy for the end product to lose the authorship and trademark that is unique to the artist. With this in mind, we have decided to go back and forth between Lizzee and the noise-generating technology. Using a Kinect, a 3D scanner, photogrammetry, or an iPhone X (to be determined through further experimentation), we will capture Lizzee’s clay model and analyze it in either Autodesk Fusion 360 or Rhino.

EinScan-Pro

Kinect

iPhone X

Data Processing

With that data, we will build a program that generates noise, deforming or changing the shape of the original model. Also pending further experimentation, the noise will be implemented either through Rhino, through Fusion 360, or with a Python program that randomly adds and deletes data points from the model. Then, using PLA, plaster, resin, or nylon powder, we will 3D print the updated, noise-incorporated model. We will then give Lizzee the new model, allowing her to make any changes to her liking, so that she remains somewhat in control of what the final product will look like. We will continue this process until Lizzee is satisfied with the quantity and quality of the most recently produced model.
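One way such a noise pass could work is sketched below in Python with NumPy: displace a random subset of the scanned model’s vertices by a small Gaussian offset each iteration, so noise accumulates “a little at a time.” The function name and parameters are our own placeholders, not actual project code.

```python
import numpy as np

def add_vertex_noise(vertices, scale=0.5, fraction=0.3, seed=None):
    """Displace a random subset of mesh vertices by Gaussian noise.

    vertices: (N, 3) array of xyz points from the scanned model.
    scale:    standard deviation of the displacement, in model units.
    fraction: share of vertices perturbed on each pass, so the noise
              can be introduced gradually across iterations.
    """
    rng = np.random.default_rng(seed)
    noisy = vertices.copy()
    n = len(vertices)
    picked = rng.choice(n, size=max(1, int(fraction * n)), replace=False)
    noisy[picked] += rng.normal(0.0, scale, size=(len(picked), 3))
    return noisy

# Example: perturb 30% of a small point cloud.
points = np.zeros((100, 3))
noisy = add_vertex_noise(points, scale=0.5, fraction=0.3, seed=1)
```

Running the pass repeatedly, and letting the artist accept or reject each result, would give the back-and-forth cycle described above.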

Parametric modeling with Rhino/Grasshopper

Generative modeling with Fusion 360

Mess with the data points

Print out the hot dog

Since it can be challenging at times to use different software, we ran a small-scale version of the described process on a smaller hot dog made of plaster, to ensure that the technology works the way we need it to before working with Lizzee’s model. The different capture technologies have given us different types of data, which we can learn from and then apply some source of noise to. In addition to learning how this technology works, we plan to use the ABB robot to paint the created models, mostly via immersion printing. Since we plan to make multiple models in a limited amount of time, the robot can add a clean coat of paint more efficiently than a human can.

With our proposed process, we believe we will produce an end product that is appealing to Lizzee and true to her work, while still including some technological input.

Lizzee Feeds America
Mon, 11 Feb 2019

How to make 20 billion hot dogs: An exploration in human-machine virtuosity
by Yi-Chen Lee, Matt Prindible, Lizzee Solomon, and Gabriela Tellez

What’s the big idea?

The objective of our project is to create an interactive cycle between Lizzee, our collaborating artist, and technology. After analyzing Lizzee’s different works of art, we decided to capture the molding and mass production of her hand-sculpted models. Lizzee has a passion for creating models out of clay, then using the same mold to create vacuum-formed copies. With this process, the only variation she can make from one model to the next is the color, which makes the process somewhat limiting. Lizzee would like to see variation from model to model, while still being able to recognize which model each one originated from. To address this, we plan to work with Lizzee and her models to introduce noise a little at a time; depending on her liking, she can add to the noise or subtract from it.

When technology is incorporated, it can be easy for the end product to lose the authorship and trademark that is unique to the artist. With this in mind, we have decided to go back and forth between Lizzee and the noise-generating technology. Using a Kinect, a 3D scanner, or photogrammetry (to be determined through experimentation), we will capture Lizzee’s clay model and analyze it in either Autodesk software or Rhino. With that data, we will build a program that generates noise, deforming or changing the shape of the original model. Then, using PLA, resin, or nylon powder, we will 3D print the updated, noise-incorporated model. We will then give Lizzee the new model, allowing her to make any changes to her liking, so that she remains somewhat in control of what the final product will look like. We will continue this process until Lizzee is satisfied with the quantity and quality of the most recently produced model.

Since it can be challenging at times to use different software, we will do a small-scale version of the described process on a smaller cube to ensure that the technology works the way we need it to before working with Lizzee’s model. In addition to learning how this technology works, we plan to use the ABB robot to paint the created models. Since we plan to make multiple models in a limited amount of time, the robot can add a clean coat of paint more efficiently than a human can.

With our proposed process, we believe we will produce an end product that is appealing to Lizzee and true to her work, while still including some technological input.

How does it work?



What does it (kind of) look like?

Hammer Motion Capture Analysis
Mon, 28 Jan 2019
By: Yi-Chin Lee and Gabriela Tellez

Introduction:

The objective of this exercise was to capture and analyze the movement of a handheld tool. We analyzed the motion of hammering a nail into a piece of wood. The advantage of tracking a hammer is that, since the motion requires a great amount of force, the reactive force between the hammer and the object being struck is more visible.

Process:

Using retroreflective markers and the Motive motion capture software, we tracked and recorded the position of the striking area and the path of the handheld tool while in motion. Because three points are required to define a plane in Rhino, we placed a marker on each of the three vertices of a truss, which we attached to the top side of the hammer, as shown in the image above.
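The three marker positions determine an oriented plane for the hammer at each frame. A minimal sketch of that construction in Python with NumPy (a stand-in for what Rhino/Grasshopper does internally; the function name is ours):

```python
import numpy as np

def plane_from_markers(p1, p2, p3):
    """Build an orthonormal frame (origin, x, y, normal) from three
    marker positions: x along the first edge, normal perpendicular
    to the triangle, y completing the right-handed frame.
    """
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    x = p2 - p1
    x /= np.linalg.norm(x)
    v = p3 - p1
    normal = np.cross(x, v)
    normal /= np.linalg.norm(normal)
    y = np.cross(normal, x)
    return p1, x, y, normal

# Example: three markers lying in the world XY plane.
origin, x, y, n = plane_from_markers((0, 0, 0), (1, 0, 0), (0, 1, 0))
```

Recomputing this frame at every capture frame yields the sequence of planes whose spacing we analyze below.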

Motive motion capture software tracking hammer and striking area.

With Grasshopper, we were able to match and orient the recorded motion to our hammer model as shown below.

Analysis:

Upon analysis, we found that the hammer’s position over time resembles a downward-opening parabola, starting at its maximum value. Accordingly, the derivative of the position function, the velocity, starts small and speeds up in the negative direction under roughly constant acceleration. The spacing between the planes captured in each time frame made this evident: as the hammer approached the desired striking area, the planes spread farther apart, showing that its velocity increased. At the striking point, however, the planes became dense again because the motion had ended, meaning the velocity dropped to zero.
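The plane-spacing argument amounts to estimating speed by finite differences: the distance between consecutive frame positions divided by the frame interval. A short Python sketch on synthetic constant-acceleration data (not actual mocap output) illustrates the speed increasing along the swing:

```python
import numpy as np

def frame_speeds(positions, dt):
    """Approximate speed at each capture frame by finite differences.
    Closely spaced frames (dense planes) mean low speed; widely
    spaced frames mean high speed.
    """
    positions = np.asarray(positions, dtype=float)
    steps = np.diff(positions, axis=0)          # displacement per frame
    return np.linalg.norm(steps, axis=1) / dt   # distance / time

# A hammer head falling under constant acceleration: z = -0.5 * a * t^2.
dt, a = 0.01, 9.8
t = np.arange(0.0, 0.3, dt)
z = -0.5 * a * t**2
pos = np.column_stack([np.zeros_like(t), np.zeros_like(t), z])
speeds = frame_speeds(pos, dt)
```

On this synthetic swing the per-frame speed grows steadily until impact, matching the widening plane spacing we observed; at the strike, the displacement per frame collapses to zero.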
