Human-Machine Virtuosity: an exploration of skilled human gesture and design, Spring 2019
https://courses.ideate.cmu.edu/16-455/s2019

3D Frit Patterned Sculpture
https://courses.ideate.cmu.edu/16-455/s2019/1364/3d-frit-patterned-sculpture/
Sun, 12 May 2019
By Gerardo, Peter, and Olivia

Abstract

Our project aimed to transform Genevieve’s artwork into rasterized graphics, etch them onto acrylic sheets, and combine the sheets into layered sculptural images. Rather than incorporating the robotic arm to increase scale, it was more important to Genevieve that we capture her method of layering paint in a new way. By combining our knowledge of the laser cutter with Genevieve’s knowledge of layering and blending, we created a unique workflow that she can reuse for other projects. The method proceeds in three steps: 1) take various frames of her artwork, 2) run them through our frit pattern algorithm in Grasshopper, and 3) export them as DXF files to be etched and then painted for visibility.

Goals/Objectives

We aimed to turn Genevieve’s 2D art into something more three-dimensional and immersive by running her images through an algorithm and translating them onto paneled sheets of acrylic. The spaced-out sheets add depth to the piece, and our algorithm introduces a perspective shift that makes the work more interactive and immersive.

Implementation/Design

As a team, we wanted to carry Genevieve’s drawing routine into a 3D art form she wasn’t accustomed to. Our primary design choice was therefore to reproduce her drawing process as etched frames on acrylic.

The first design challenge was finding the best method of translating her drawings onto acrylic sheets. It is of course possible to etch a drawing directly with the laser cutter, so we took it a step further by implementing a frit pattern in Grasshopper. The frit pattern algorithm let us keep the intricate details in Genevieve’s drawings while also offering interesting new effects.
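For reference, the frit pattern is essentially a halftone: sample the image brightness on a grid and map darkness to circle radius. Below is a minimal Python sketch of the idea; our actual implementation was a Grasshopper definition, so the spacing, radius mapping, and Pillow-based sampling here are illustrative assumptions.

```python
from PIL import Image

def frit_pattern(path, spacing=12, max_r=5.0, min_r=0.3):
    """Sample image brightness on a grid; darker pixels get bigger circles."""
    img = Image.open(path).convert("L")  # grayscale
    w, h = img.size
    circles = []
    for y in range(spacing // 2, h, spacing):
        for x in range(spacing // 2, w, spacing):
            darkness = 1.0 - img.getpixel((x, y)) / 255.0
            r = darkness * max_r
            if r >= min_r:  # drop near-invisible dots
                circles.append((x, h - y, r))  # flip y for CAD coordinates
    return circles

# Each (x, y, r) can then be written out as a circle entity in a DXF
# file (e.g. with a library such as ezdxf) for laser etching.
```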

Some of the effects we discovered were depth and perspective. Stacking the images enhanced the visibility of major drawing features and allowed us to introduce perspective by incorporating more than one angle of the same image. Our goal was no longer simply to reproduce Genevieve’s images as 3D frames on acrylic sheets, but also to offer an interesting perspective effect.

Outcomes

Producing prototypes to prove the design was feasible taught us a few things. First, image visibility improved when we etched the frames rather than cut them. Second, we needed a systematic method of creating the different frames of an image: instead of ordering frames by when they were created, it worked best to dedicate each frame to a specific component of the drawing. For example, one frame might be allocated to the subject’s nose and another to the jawline. Lastly, we decided that filling in the etched frames, rather than just outlining the circle pattern, would let us paint over them and reveal the visual effect more effectively.

These prototypes ultimately showed that our project was feasible and needed only modifications for the final deliverable. Our final design was two images of a priest at different angles. We adjusted the focal point and scaling of the frit pattern to get the desired perspective effect, and chose ~11.5” x 11.5” frames since this size gave the best viewing distance and visibility of the overall image once the frames were stacked. The frames had to be evenly spaced and properly aligned or the perspective effect would not work, so using CAD software we designed a simple base with evenly spaced slots for the frames, small enough not to impact visibility and strong enough to hold them upright.
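The focal-point adjustment boils down to similar triangles: a feature drawn on the front sheet is rescaled on each deeper sheet so the copies line up from one viewpoint. A minimal sketch, with the viewer distance and coordinates as illustrative assumptions:

```python
def project_to_layer(x, y, layer_dist, front_dist=900.0):
    """Rescale a feature drawn on the front sheet onto a deeper sheet so
    both line up when seen from a single viewpoint (similar triangles:
    scale = layer distance / front-sheet distance from the viewer)."""
    s = layer_dist / front_dist
    return x * s, y * s

# A feature at (50, 80) on the front sheet, with the viewer 900 mm away,
# lands at roughly (52.8, 84.4) on a sheet 50 mm further back.
print(project_to_layer(50, 80, layer_dist=950.0))
```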

Teamwork/Contribution

Throughout the project, each team member found a niche role that let the group operate effectively. Peter worked mainly in Rhinoceros and Grasshopper, developing our algorithm and preparing the image files to hand off to Olivia and Gerardo for laser cutting.

Our work required many hours at the laser cutter, which Olivia and Gerardo headed up. Olivia also helped prepare the original images for Rhinoceros and Grasshopper by separating them into layers in Photoshop. Gerardo was key in designing the hardware stands that hold the panels together. As a group, we painted our prototypes and added the final touches.

Workflow Diagram

Video

Photo Documentation

Final Show

Lighting Tests

Test Sliding Acrylic

Initial Acetate Tests

Dehumanized Graffiti
https://courses.ideate.cmu.edu/16-455/s2019/1315/dehumanized-graffiti/
Fri, 10 May 2019
By Perry Naseck, Andey Ng, and Mary Tsai

Abstract

Our project aimed to discover and analyze the gestural aesthetic of graffiti and its translation into a robotic and technical system. Our goal was to augment and extend, but not replace, the craft of graffiti using a robotic system. With the capabilities of the robot, we explored artistic spaces that were beyond the reach of human interaction but still carried heavy influences from the original artist. We were able to modify the artist’s aesthetic and tag to create a new art piece, developed programmatically in Grasshopper.

Objectives

Our goal was to use the robot’s capabilities to create something an artist could not make alone. We took advantage of the reach and scale of the robot, specifically the track on which it can slide. By programming transformations, each letter could be replicated perfectly multiple times, in positions that would have been impossible for the artist to complete.

Implementation

Ultimately, we stuck with simple modifications for the sake of time and of producing a successful end piece for the show. We took each letter and applied a transformation to it via Grasshopper. A large part of getting to this point was making the hardware work (the spray tool and canvases) as well as programming in Python. Taken further, we would like to create more intricate and complex transformations, given the amount of area and space the robot affords.
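The transformations themselves are ordinary affine operations on the recorded stroke polylines. A hedged Python sketch of the three used on the final piece (rotation about a point, repeated shifting, and scaling); the coordinates are made up for illustration:

```python
import math

def rotate(path, cx, cy, deg):
    """Rotate a stroke (list of (x, y) points) about a pivot."""
    a = math.radians(deg)
    return [(cx + (x - cx) * math.cos(a) - (y - cy) * math.sin(a),
             cy + (x - cx) * math.sin(a) + (y - cy) * math.cos(a))
            for x, y in path]

def shift(path, dx, dy):
    """Translate a stroke."""
    return [(x + dx, y + dy) for x, y in path]

def scale(path, cx, cy, s):
    """Scale a stroke about a fixed point."""
    return [(cx + (x - cx) * s, cy + (y - cy) * s) for x, y in path]

# e.g. replicate a letter shifted upward five times, as on the final piece
letter_m = [(0, 0), (0, 100), (50, 50), (100, 100), (100, 0)]
copies = [shift(letter_m, 0, 120 * i) for i in range(5)]
```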

Outcomes

We successfully made the robot spray an artist’s tag with near-perfect accuracy. At the final show, one of the graffiti artists commented that it looked exactly like what he would do, and that he was not sure he would recognize the tag as having been sprayed by a robot. We nailed down the hardware in the first half of the semester, with some fine-tuning along the way (nozzle type, distance from canvas, etc.). The programming side was more complicated, but we successfully programmed the robot to spray the desired outcomes. Some of our challenges included the robot arm being constrained to specific angles and distances; we remedied most of that by turning the plane 90 degrees and using the wall that runs the length of the track. We were unable to test all of the transformations we had programmed due to time, but those would be included in any future work.

Contribution

Perry Naseck developed the Grasshopper program that pre-processed all the data through his Python script. The backend of this project required precise timing of when the pneumatic trigger turns the spray can on and off; he wrote a Python script that directed the robot arm to start and stop spraying based on the path branches. Perry also implemented transformations such as resizing, rotation, and shifting.
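A simplified sketch of that branch-based trigger logic, with placeholder command names rather than the actual script: travel moves run with the can off, and the trigger toggles at the ends of each stroke branch.

```python
def plan_spray_job(branches, travel_speed=1500, spray_speed=250):
    """branches: list of strokes, each a list of (x, y) points.
    Returns an ordered command list with the pneumatic trigger
    toggled so paint only flows along the strokes."""
    commands = []
    for branch in branches:
        commands.append(("move", branch[0], travel_speed))  # approach, can off
        commands.append(("trigger", "on"))
        for point in branch[1:]:
            commands.append(("move", point, spray_speed))   # spray the stroke
        commands.append(("trigger", "off"))
    return commands
```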

Andey Ng implemented transformations using plug-ins such as Rabbit, which allowed us to create large-scale, fractal-like transformations. She also created the negative-space transformation, which fills in the bounded whitespace around the letters while leaving the letters themselves empty.

Mary Tsai was in charge of the 3D Rhino modeling as well as the physical hardware and tool construction. She created the spray mount, attaching the pneumatic piston to the applicator with a milled aluminum piece, and built the easel that held the practice canvases. Mary also worked on transformations that incorporated a sine wave into the robot’s path, varying the motion by amplitude and period. She also filmed and produced the final video.

Diagram

Photo Documentation

Final Show Pieces
Tool Mount
Motion Capture Spray Can
Example of a transformation in Grasshopper

In the final sprayed piece, the letters G and E were rotated around specific points, the letter M was replicated and shifted up multiple times, and the S was scaled up and shifted to the right.

LIZZEE👩🏻‍🎨FEEDS🌭AMERICA🇺🇸
https://courses.ideate.cmu.edu/16-455/s2019/1296/prepare/
Tue, 30 Apr 2019
By: Yi-Chin Lee, Matt Prindible, and Lizzee Solomon
Submission Date: Wednesday, May 1, 2019

Abstract(ion)
The capacity to explore physical artistic work in abstraction through digital transformation and fabrication is conceptually simple but technically challenging. Our “machine-in-the-loop” workflow comprises a laser scanner to generate a point cloud of the artist’s sculpture, software to create a watertight mesh from that point cloud, parametric design software to procedurally transform the mesh, a slicing tool for generating toolpaths from severely damaged 3D models, a 3D printer capable of working through tooling errors, and a vacuum form table to prepare the final model. The artist’s tools of intervention were air-dry clay and Super Sculpey, along with some basic sculpting tools. An oven was also involved. We used this workflow to generate a series of machine “responses” to the artist’s sculpture; the artist then created a response to this response, and so on. Our goal was to enable a “dialog” between artist and machine that helped the artist explore unexpected shapes and gestures. Our technical exploration was successful and, based on this functional experimental workflow and a conversation with our artist, our artistic exploration was successful as well.

Objectives
The fundamental goal of this project was to demonstrate a functional experimental workflow for generating a transformed physical object from an original sculpture—the technical underpinnings rather than the artistic expression. Because of the recursive nature of the workflow, it also needed to be highly repeatable and reasonably reliable. Working with an artist, however, afforded the opportunity to discuss and critique the value of this workflow as part of her artistic practice. To do that, we needed to generate forms of high enough artistic value.

Implementation
Given the experimental nature of our workflow, the primary driver of design choices was “whatever it takes.” Most of the software and hardware used for this project was not meant to be used this way and, as with most digital fabrication, aesthetics were sometimes sacrificed in order to generate a physical form. That said, to have a useful conversation about the value of this workflow in our artist’s practice, we had to generate forms of compelling artistic value.

The first step in the process was evaluating tools for volumetric scanning of the original sculpture. We tested photogrammetry methods with simple mobile phone cameras, a Microsoft Kinect, and a 3D laser scanner. We picked the EinScan laser scanner not only for its precision, but because it also included the software needed to generate a watertight model from a point cloud.

3D scanning setup

We ultimately used Grasshopper for Rhino to procedurally transform our mesh because of its control over parameterization—we knew early on that we wanted to heavily influence the forms the machine could generate (playing the role of a human artificial intelligence). But we also played with solid modeling programs like Fusion 360 and datamoshing techniques for glitch art.
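As one concrete, hypothetical example of such a procedural transform: snapping every vertex to a coarse grid facets a smooth scan into the kind of “crystalline” geometry discussed in the Outcomes section. Here is a sketch using the open-source trimesh library as a stand-in for our Grasshopper definition; the filenames are made up.

```python
import numpy as np
import trimesh

def crystallize(mesh, cell=5.0):
    """Snap every vertex to a coarse 3D grid, faceting the smooth scan
    into crystalline planes. Aggressive cell sizes readily produce the
    degenerate faces and naked edges discussed below."""
    out = mesh.copy()
    out.vertices = np.round(np.asarray(out.vertices) / cell) * cell
    return out

scan = trimesh.load("hotdog_scan.stl")  # hypothetical filename
crystallize(scan, cell=5.0).export("hotdog_crystal.stl")
```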

For pre-processing the generated models, we used Meshmixer, Simplify3D, and Magics. The choice of pre-processor and slicer depended on which could handle the models we generated, which were often non-manifold or riddled with thousands of inverted normals and naked edges.
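A sketch of the kind of repair pass involved, again using the trimesh library as a stand-in for Meshmixer/Magics (filenames are hypothetical):

```python
import trimesh

mesh = trimesh.load("transformed.stl")  # hypothetical filename

trimesh.repair.fix_normals(mesh)  # re-orient inverted face normals
trimesh.repair.fill_holes(mesh)   # close small naked-edge holes

print("watertight:", mesh.is_watertight)  # slicers want True here
mesh.export("repaired.stl")
```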

During the printing process, we had a chance to test every 3D printer on CMU’s campus. Although IDeATe’s Stratasys printers had the best finish for the cost, their small build volumes made production difficult. And although dFab’s binder jetting printer and the School of Design’s Formlabs Form 2 had the best finishes overall, they were incredibly expensive (though we’d still like to explore binder jetting for models impossible to print via extrusion). Ultimately, dFab’s Ultimakers printing PLA became the workhorse of the project. The finish and timing were decent (most models took about 30 hours to print), but most importantly, the build volume gave us the sizes we needed.

Our artist used either air-dry clay (which did not adhere well to the PLA surface) or Super Sculpey for her responses (which baked well in the oven alongside the PLA with no additional slumping). Differential shrinkage between the materials was a problem: Super Sculpey shrinks quite a bit in the oven, while PLA does not.

Hybrid Workflow

Hotdog Collection

Dinner Table Setting

Table setting showing the hot dog evolution
Lizzee’s response
Lizzee’s response
Hot dog slice as a side dish

Contribution
This was an equal effort by both of us. This is the kind of project that demands someone at the computer driving the mouse and keyboard, and someone in the next seat cheering them on, switching back and forth as we worked through all the technical hurdles.

Outcomes
The biggest failures in our project were build failures during 3D printing (mostly costing lost time) and unrepairable meshes from our procedural transformation (which kept us from printing some of the crazier forms). These limitations did not, however, keep us from seeing a distinct change in the artist’s sculpting style, which often reciprocated the “crystalline” geometry the machine generated.

Precedent and cited work
Three pieces of existing work were referenced early and often throughout the process: Robin Sloan’s Flip Flop, Navaneeth R Nair’s Wave-form, and Geoffrey Mann’s Cross-fire, as well as a million YouTube tutorials.


The Ultimaker clip is cut from the official Ultimaker website:
https://www.youtube.com/watch?time_continue=11&v=otmmihz3Gq8
Project Report Template
https://courses.ideate.cmu.edu/16-455/s2019/1290/project-report-template/
Mon, 29 Apr 2019

The following is a general template for the final report, which is to be delivered as a blog post. Please write out your narrative in paragraph form; the outline of prompts below is not intended as a format guide. The video is crucial; please make sure it captures the intention and success of your project clearly.

  1. Project Title
    1. Author 1, Author 2
    2. Submission Date
    3. URL of project video (typically up to two minutes)
  2. Abstract: Provide a brief paragraph summarizing the overall goals and results.
  3. Objectives: State your goals, and discuss what specific features are within scope for the project.
  4. Implementation: Discuss your design choices.
  5. Outcomes: Discuss the successes and failures of your choices.
  6. Contribution: Please provide a clear statement of each author’s individual contribution to the outcomes.
  7. Diagram: Please include a workflow diagram highlighting the analog and digital transformations included in the final result.
  8. Video: Please embed a video which captures the intent and success of your project. As a general rule, these might be edited to one to two minutes long. Please observe good camera and editing technique.
  9. Photo Documentation: Provide captioned photos which support your discussion. Please consider the purpose of each photo and write a caption which helps the reader understand your intent.
  10. Citations: Please provide references or links to related work.
Paul’s Group (3/25 Update)
https://courses.ideate.cmu.edu/16-455/s2019/1280/pauls-group-3-25-update/
Mon, 25 Mar 2019

Team Management

Kevin: Generating Rhino/Grasshopper toolpathing

Elton: Developing mounting systems for foam and robotic arm attachments; testing paint and medium printing on foam

Andy: Writing Python code for pattern generation from provided image

Biweekly Goals

Kevin is working on the toolpaths in Rhino; once he is done, we will apply the same algorithms to the final image Paul wants us to carve. Then we will run simulations to ensure there are no collisions and, hopefully, carve a full sheet.

Final Show Vision

For the live presentation we want to display the full-size carvings/prints and also have a small-scale demo, perhaps with a 1’x1′ print block. We’d like to have a multi-block color print, or something beyond just outlines, for the smaller print. Since we won’t have the actual robot, we will play videos of it carving the foam, and display the hot knife setup that Paul and the robot used.

Lizzee Feeds America: the shortest path to twenty billion hot dogs
https://courses.ideate.cmu.edu/16-455/s2019/1220/lizzee-feeds-america_shortest-path/
Wed, 20 Mar 2019

Where we last left off: a proposal and an abstract model of our workflow. This was enough to guide our exploration of each enabling technology (3D scanners, 3D printers, sculpting, and transforming), but, as we expected, the real project was in the weeds of each step of the process.

Where we are now: We can now successfully demonstrate two trips through our proposed human-machine sculpture process.

P2.2_Hybrid Skill_Graffiti
https://courses.ideate.cmu.edu/16-455/s2019/1181/p2-2_hybrid-skill_graffiti/
Wed, 20 Mar 2019

Live Demonstration

The tool we built to mount on the robot consists of a spray applicator that was cut and attached to the piston via a milled piece of aluminum. After building the spray paint applicator tool, we tested it on the robot by inputting a manual path for it to follow while toggling the pneumatic piston on and off at the right times. Overall, the test was successful and showed what a programmed path might look like. This simple test let us evaluate speed, corner movement, and distance from the drawing surface. In the GIF above, the robot arm is moving at 250 mm/second (manual/pendant mode), but it can reach up to 1500 mm/second with an automated program path.

Example Artifact – Capability

Initial Recording

Pictured above is a spray path of a box, recorded with motion capture in Motive and then imported into HAL in Rhino. Gems visited the lab to make this recording, which captured the can’s linear motion, speed, angle, and distance from the drawing surface. Below are two possible transformations applied to this square path.

Concentric Outline Transformation
Translated Rotation Around a Point Transformation

Next Steps | Moving Forward

We have determined the key building blocks of our project and split them into two main sections. The first is contained solely in Rhino and Grasshopper: determining the capabilities of the robot and programming the paths that can be taken, based on the motion capture study. The second is the actual hardware and spraying ability of the robot, which we tackled using a spray applicator and a pneumatic piston system.

Our final product will be an iterative building of the piece over multiple steps. The piece emerges as the artist responds after the robot has acted, making this a procedural and performative process. In the beginning stages of the performance, the artist has only simple concepts and ideas of what the robot is actually capable of, but his knowledge of what can be computationally translated grows over time, informing his next steps.

The artist will choose from a set of transformations, and the robot will perform the selected transformation on the last step. The number of options available to the artist for each transformation (such as location or scale) will be determined as we program each transformation.

Our next steps are to build a library of specific ways the motion capture data can be modified and amplified. We will use many of the 2D transformations from our original proposal as a starting point. Our main goal is to have the robot produce something a human is incapable of producing. Variables such as speed, scale, and precision are the most obvious factors to modify. For speed, the robot can move at up to 1500 mm/second; for scale, it has a reach of 2.8 meters, and combined with the increased speed, the output would be drastically different from what a human could produce. The robot is also capable of being much more precise, especially at corners; if allowed, the outcome could look quite mechanical, an interesting contrast to the hand-sprayed version.

Project Documentation

To understand the gestural motions, we first captured the artist’s techniques to identify which variables we must account for: the artist’s speed, distance from the canvas, and the wrist motions that create the flares.

To find the velocities we need to feed to the robot arm, we calculated the average speed and distance of each of the artist’s strokes. These measurements are used as data points plugged into Grasshopper.
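The calculation itself is simple finite differencing over the mocap samples. A sketch, assuming a take is a list of timestamped positions:

```python
import math

def stroke_stats(samples):
    """samples: list of (t, x, y, z) mocap points for one stroke.
    Returns total path length and average speed (mm and mm/s if the
    capture is in millimeters and seconds)."""
    length = sum(math.dist(a[1:], b[1:]) for a, b in zip(samples, samples[1:]))
    duration = samples[-1][0] - samples[0][0]
    return length, length / duration
```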

To capture the stroke types, we recorded the artist’s strokes through Motion Capture. Each take describes a different type of stroke and/or variable such as distance, angle, speed, flicks, and lines.

The motion capture rigid-body takes are recorded as CSV points, which are fed into Grasshopper to create an animated simulation of the robot arm’s movement.

To perform the spray action with the robot, we designed a tool with a pneumatic piston that pushes and releases a modified off-the-shelf spray can holder.

Shortest Path Prototype – Paul Foam Knife Group
https://courses.ideate.cmu.edu/16-455/s2019/1178/shortest-path-prototype-paul-foam-knife-group/
Tue, 19 Mar 2019

Our group’s plan moving forward was to record a motion capture of Paul working with the hot knife, edit and reproduce his stroke pattern through Rhino, and have the robot replicate his dexterous skill by carving out a completely different drawing. We’ve made significant progress toward that end.

Paul appeared to have an easy time transferring his woodcarving skills to foam. He drew a flower on a block of foam and went to work.

We weren’t able to motion-capture his strokes since we hadn’t yet created a mount for the reflective spheres, so Paul drew a quick picture of a tree and some cubes for us to experiment on before he left. We scanned the image and converted it into vector graphics before dropping it into Rhino to see how the robot arm pathing would look.

It was incredibly messy. The scan captured way too much detail in the drawing, and Rhino interpreted every little bump in the lines as a new “path” for the robotic arm to follow. We ended up with a few hundred strokes just for minor detailing in the image, so we decided to simplify the image significantly. We also used a smoothing function in Grasshopper to remedy the tiny bumps in the final drawing.
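One standard way to collapse such scanner noise is Ramer-Douglas-Peucker simplification, which drops vertices that deviate from the local chord by less than a tolerance. A sketch of the idea; our actual smoothing used a Grasshopper component, so this is an analogous stand-in:

```python
import math

def rdp(points, eps=1.5):
    """Ramer-Douglas-Peucker: recursively drop vertices whose perpendicular
    distance to the chord between the endpoints is below eps."""
    if len(points) < 3:
        return points
    (x0, y0), (x1, y1) = points[0], points[-1]
    chord = math.hypot(x1 - x0, y1 - y0) or 1e-9
    dmax, imax = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], start=1):
        d = abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / chord
        if d > dmax:
            dmax, imax = d, i
    if dmax <= eps:
        return [points[0], points[-1]]  # the chord is a good enough fit
    # keep the farthest point and simplify each half recursively
    return rdp(points[:imax + 1], eps)[:-1] + rdp(points[imax:], eps)
```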

Since we have to carve out an inverse to make a foam printing block, the generated pathing avoids the outer edge of each line and enclosed body.

Since we’re trying to extend and expand what Paul can do artistically, we created a Python program that acts as a pattern paint bucket. Much of the detailing in Paul’s work comes from filling in the white space of an outline, which enhances the drawing dramatically. The program reads in patterns and a main image, and the user decides which regions should have which patterns: clicking a region in the display fills it in, and the edited picture can be exported in vector format. Paul can use this to decide which parts of the final carving he would rather the robot do.

Striped shading on one of the boxes in the image
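The core of the pattern paint bucket is a flood fill from the clicked pixel, followed by stamping a tiled pattern into exactly that region. A simplified sketch with NumPy grayscale arrays (not the exact program):

```python
from collections import deque
import numpy as np

def pattern_fill(img, seed, tile, white=200):
    """Flood-fill the white region containing `seed` (a (row, col) click),
    then copy a tiled pattern into exactly those pixels.
    `img` and `tile` are 2D uint8 grayscale arrays."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    while q:
        r, c = q.popleft()
        if 0 <= r < h and 0 <= c < w and not mask[r, c] and img[r, c] >= white:
            mask[r, c] = True
            q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    th, tw = tile.shape
    tiled = np.tile(tile, (h // th + 1, w // tw + 1))[:h, :w]
    img[mask] = tiled[mask]  # stamp the pattern only inside the region
    return img
```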

We’ll ask Paul what kinds of patterns he wants, as well as any other edits he might want to see done to the image. Currently, we just have a striped pattern as a proof-of-concept for the pattern fill.

This program is not yet integrated with the RAPID code generation; currently the picture has to be manually converted to an SVG after the program changes the image, and then processed in Rhino. We’re working on automating this so that everything after the scan and the artist’s pattern selection happens without manual intervention.

On the physical side of our project, we’ve created a mount for the tool and the foam block. Using one of the orange robot tool plates, we mounted two laser-cut pieces of wood to hold the hot knife in place with a system of pegs. The hot knife can be easily removed with a wrench, allowing Paul and the robot to collaboratively share the knife without having to recalibrate the tool’s location with respect to the arm every time it is dismounted.

Hot knife mount in CAD
Hot knife mount with knife

We still had a manufacturing constraint: the hot knife couldn’t safely be left on for longer than ~30 seconds, for fear of the transformer blowing out or the knife itself bending out of shape. While a pneumatic trigger would work, prototyping another hot knife mount would have taken too much time. Garth told us about PulseDO, a RAPID instruction that pulses a digital output, which we can use to switch the power supply. We plan on pulsing power to the tool so it doesn’t get too hot, but for now we can manually toggle the power from the control box.

First test of robot arm cutting foam with the hot knife

We created a quick script in RAPID to test how well the robot arm could wield the hot knife. After a period of pre-heating, the robot had no problems cutting through the foam, even during very deep cuts. However, we realized that some of the pieces of foam would cool and re-stick in their slots after cutting. We settled on mounting the final foam block standing up, so that any cut-out pieces of foam would just fall out of their slots. We designed a CAD model of the mounts and assembled the laser-cut final pieces.

Reflection on next steps

Our prototyping currently runs fairly smoothly. The only steps that are still manual but could be automated are scanning the work Paul wants rendered in foam, channeling the program’s output into Rhino, and exporting the RAPID code to the robot. After Paul makes his design choices, the process should automatically produce RAPID code, minimizing the manpower required for the intermediate steps. But that’s a later improvement.

We currently have no pattern templates from Paul, which are essential if we want to apply his chosen textures to a picture. These would best be transferred as image files that can be repeatedly tiled, and the Python program would have to be modified to include constraints on where the patterning goes, e.g. density, separation, and border size. With the base program done, this shouldn’t take long.

The hot knife tool we attached to the robot arm can cut at any radius up to ~2 inches, but the toolpath generation in Grasshopper does not yet use this potential. By making the pathing aware of the relationship between cut depth and trench width perpendicular to the path, we could cut down on carving time and get a more accurate trench between converging lines. This could also be adapted to cut with a different edge of the knife for large swathes of blank foam, which need less accuracy and more sweeping cuts.
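For a V-profile edge, this depth-to-width awareness is simple trigonometry. A sketch under the assumption of a straight V-shaped edge with a known included angle (the 30 degree figure is made up):

```python
import math

def plunge_depth(trench_width, knife_angle_deg=30.0):
    """Depth a V-profile edge must plunge so the trench opens to the
    requested surface width: width = 2 * depth * tan(angle / 2)."""
    return trench_width / (2.0 * math.tan(math.radians(knife_angle_deg) / 2.0))

# e.g. a 10 mm-wide trench with a 30 degree edge needs a ~18.7 mm plunge
print(plunge_depth(10.0))
```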

The tool’s mount is pretty stable for now, as is the foam mount. Since we want to carve on a standing piece of foam so that the scrap pieces fall out, the foam mount is essential. We need to experiment with the gaps sizing, but with 4 mounts the setup is stable enough to carve on.

Finally, we’d like to give Paul more options regarding modifications to the art he would like to carve and the art he’d like the robot to carve. Right now we have patterning and straight-line carving down, but we need to consult with him about tasks in the process that he finds difficult and that could be automated. Alternatively, we could look at hybrid art pieces with strokes intended to be carved in different styles, allowing Paul and the robot to work together on one canvas.

Witch Layered Sculpture
https://courses.ideate.cmu.edu/16-455/s2019/1165/witch-layered-sculpture/
Mon, 18 Feb 2019
by Gerardo, Olivia, and Peter

After further discussion between our group and our artist, we have changed trajectory: we now plan to transform Genevieve’s art into a 3D layered image built from sheets of transparent acrylic. After our initial proposal, we spoke more with Genevieve, Garth, and Josh and realized we had been focusing on the wrong things. Rather than incorporating the robotic arm to increase scale, it was more important to Genevieve that we capture her method of layering paint in a new way.

Our new proposal is to translate her artwork into vector graphics or rasterized images to either cut or etch acrylic sheets and combine them into layered sculptural images. This will combine our knowledge of the laser cutter and Genevieve’s knowledge of layering and blending to create a unique workflow that she can use for many more projects.

Before sending output to the laser cutter, we will include a 3D visualization step built in Grasshopper and Rhino, so that Genevieve can see what the result will look like before committing to a final output. This will be accomplished by creating UV maps from the vector or raster graphics produced in the previous step. This step will also allow us to use 3D visualization to combine two images into one sculpture.

Workflow
Inspiration Image
UV Mapping Example
Witch Anamorphic
https://courses.ideate.cmu.edu/16-455/s2019/1163/witch-anamorphic/
Mon, 18 Feb 2019
by Gerardo, Olivia, and Peter

We’re interested in using the motion capture technology in the dFab Lab to explore the expressive nature of Genevieve’s drawing skills and, ultimately, to use the motion capture data to scale up her artwork. Genevieve prefers to work at a smaller scale; the end result would be a process that generates large-scale versions of her art based on her specific artistic gestures, without her having to actually work at the larger scale, saving her time and physical effort.

We also hope to adhere to Genevieve’s preferred method of layering in developing these images. By using differently shaped MoCap wands, we can signal changes of tool or “layer” that are read and then carried out by the robotic arm. Once the initial MoCap data is compiled, there are endless opportunities for how the robot could transform the image in its recreation. Depending on the drawing tool the robot uses, or whether an algorithmic transformation is applied offline, we can systematically change Genevieve’s image using our available technology.
