Human-Machine Virtuosity https://courses.ideate.cmu.edu/16-455/s2015
An exploration of skilled human gesture and design, Spring 2015.

Project Two: Digital – Physical Transformations
https://courses.ideate.cmu.edu/16-455/s2015/project-two-digital-physical-transformations/ (Tue, 21 Apr 2015)

The scope of the second project is to extend the system from Project One to include physical fabrication of a generated form.  This extends the notions of ‘transformation’ and ‘output’ to include both mathematical and physical machine processes, and it explores the effects of incorporating the analog qualities of physical materials into a purely digital process, as well as the balance between human gesture and machine fabrication.  The final projects will be highlighted in a curated end-of-semester show.


Objectives

  1. Develop an extension of the design concept from Project 1 that can accommodate a machine-production transformation, e.g. laser cutting, 3D printing, or CNC routing of physical parts, fixtures, or jigs.  The forms may be 2D or 3D.
  2. Extend the modeling process to include geometric forms which can be manufactured.
  3. Develop a workflow including at least the following stages:
    analog gesture > digital representation > transformed digital output > analog re-representation > (optional) digital re-capture (see the minimal scripted sketch after this list).
  4. Apply the system to the production of an individual final artifact or a successive, evolving sequence of artifacts.  We expect at least one artifact per group member; e.g. each member may take responsibility for a single phase of a multi-step process, each member may produce their own final artifact, or any combination in keeping with the design intent.
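
For reference, here is one minimal sketch of how the first stages of the Objective 3 workflow (captured gesture > digital curve > transformed output) might be scripted in Rhino's Python editor with rhinoscriptsyntax. The CSV filename, the offset direction point, and the 6.0-unit offset distance are illustrative assumptions, not requirements of the assignment.

    # One possible "analog gesture > digital representation > transformed
    # digital output" pipeline, for Rhino's Python editor. Assumes gesture
    # samples were captured elsewhere and saved as x,y,z rows (no header)
    # in a CSV file; the filename below is hypothetical.
    import csv
    import rhinoscriptsyntax as rs

    def load_gesture_points(path):
        """Read captured gesture samples from a CSV of x,y,z rows."""
        points = []
        with open(path) as f:
            for row in csv.reader(f):
                if len(row) >= 3:
                    points.append((float(row[0]), float(row[1]), float(row[2])))
        return points

    # Digital representation: fit a smooth curve through the sampled gesture.
    points = load_gesture_points("gesture_points.csv")  # hypothetical capture file
    curve = rs.AddInterpCurve(points)

    # Transformed digital output: offset the stroke so it has a cuttable width.
    # The offset side point and the 6.0-unit distance are arbitrary choices.
    rs.OffsetCurve(curve, (0, 0, 0), 6.0)

    # The original and offset curves can then be joined/capped and exported
    # (e.g. to DXF) as the cutting geometry for the physical artifact.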

Process

  1. Tuesday 04.21.15
    1. Concept Commitment (in class)
    2. Work plan development with scheduled deliverables (in class)
    3. Develop test artifact(s) (for homework)
  2. Thursday 04.23.15
    1. Review initial test artifact(s) (in class)
    2. Refine and develop final system (begin in class)
  3. Tuesday 04.28.15
    1. Preliminary demo of final system (in class)
  4. Thursday 04.30.15
    1. Final work session + troubleshooting (in class)
  5. Tuesday 05.05.15
    1. Optional work session (by appointment).
  6. Thursday 05.07.15
    1. End of Semester Show (10:00 am – noon): conversation with outside reviewers to discuss final projects
  7. Monday 05.11.15
    1. Final Documentation and revision of all exercises and projects due on XSEAD

Deliverables

  1. per group: design system
  2. per group: functional demonstration
  3. per group: documentation of design concept, challenges, implementation
  4. per student: sample artifact(s)

Prior to the next class, please submit a short text write-up with accompanying images of your drawings and a video link to the XSEAD site, as discussed on the Submissions page.  Please be sure to create your project within the appropriate ‘pool’ as linked.

Prompt Questions

The following questions may not apply exactly to every project, but they are intended as a guide to our expectations for the detail of the project write-up. Please answer every question relevant to your specific project with text and graphics as appropriate.

Explain the conceit or ambition.

  1. What was the theory of the process? How do the artifacts produced by your system embody the skill of human gesture and the potential of algorithmic control?
  2. Does your system allow both experts and novices to explore constraints of a physical, material process?
  3. Is the theory based on a simulated physical process? If so, how were the physical constraints of the materials and tools incorporated into the design system?
  4. How does the user prompt balance global structure and local detail? Are there nested scales?

Reflection on the conceit and execution.

  1. Did the development of more refined artifacts reinforce the conceit? How could the system develop further?
  2. How do the designed artifacts differ from those produced with the related traditional crafts?
  3. What are the decisions available to the person using the system?
  4. How would practice influence decision making within the constraints of the system?
  5. How would those decisions translate to other tools or materials?
  6. How did the prompt stimulate the user to apply their expertise?
  7. How did the prompt encourage novices to develop skill?
  8. Was there a technique that promised to reward practice or repeated use?
  9. When comparing multiple artifacts produced by the system, are the constraints of the system legible? Is there variety?
  10. What failures would prompt further investigation?
  11. Was there too much or too little information provided to the user? If too much, how could it be filtered? If too little, how could a simple process be extended to scaffold a richer prompt for the conceit?
  12. Could the system benefit from further development of dexterous skill?

Clear text and visual documentation

  1. Be sure to include: the Grasshopper patch file, representative prompt images, images of representative artifacts (both in-progress and final), and a screenshot of the Grasshopper patch.
  2. Documentation of the time progression of the experience through video and selected still images.
  3. Explanation of the logic of the Grasshopper patch to translate tool input to graphical feedback.
  4. Were there interesting failures?
  5. Can you visually relate the prompt and resulting artifact?
Exercise Four: Animated Sketch
https://courses.ideate.cmu.edu/16-455/s2015/exercise-four-animated-sketch/ (Thu, 19 Feb 2015)

Description

In this exercise students will create a digital drawing machine and project it onto a physical canvas to prompt a physical, hand-drawn sketch. Each student will create an algorithmically generated pattern and project it onto a 2’x2’ canvas to prompt fellow classmates in free-hand sketching experiments. Patterns should be informed by fundamental compositional techniques (e.g. translations, reflections) and computational processes (e.g. agent-based behavior, particle simulation, physics simulation, point attractors). Patterns should also be time-based, exhibiting emergent behaviors, a narrative arc, and/or rule-based growth. Students should share their drawing prompt with at least two partners. Students may use their choice of drawing implement, including pens, markers, brushes, etc.
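
As a concrete starting point, below is a minimal sketch of one of the named computational processes (a point attractor), written for Rhino's Python editor; the same logic maps directly onto a Grasshopper definition. The attractor location, grid spacing, and radius mapping are arbitrary choices, not values prescribed by the exercise.

    # Minimal point-attractor pattern: a 10 x 10 grid of circles (100 items)
    # whose radii shrink with distance from a single attractor point.
    import rhinoscriptsyntax as rs

    ATTRACTOR = (14.0, 10.0, 0.0)   # point the pattern responds to (assumed)
    SPACING = 2.4                   # grid spacing in model units (assumed)
    ROWS, COLS = 10, 10             # 10 x 10 = 100 items

    for i in range(COLS):
        for j in range(ROWS):
            center = (i * SPACING, j * SPACING, 0.0)
            d = rs.Distance(center, ATTRACTOR)
            # Clamp the radius so every item stays visible when projected.
            radius = max(0.15, 1.0 - 0.03 * d)
            rs.AddCircle(center, radius)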

 

Objectives

In this exercise students will:

  1. Develop parametric control of design workflows using Grasshopper.
  2. Enhance physical dexterity with information-rich visualization.
  3. Learn the basics of projection mapping.
  4. Explore dynamic patterning.

Process

  1. Thursday 02.19.15
    1. Mount and calibrate workstation projectors (Begin in Class).
    2. Introduce Grasshopper (Begin in Class).
    3. Generate a test sketch with 100 items in a pattern (For Homework).
  2. Tuesday 02.24.15
    1. Introduce agent-based, physics-based, and evolutionary design strategies in Grasshopper (In Class).
    2. Develop time-based drawing prompts for digital projection (For Homework; see the sketch after this list).
  3. Thursday 02.26.15
    1. Test drawing prompt with partners (Begin in Class).
  4. Tuesday 03.03.15
    1. Discuss final drawings and prompt generation (In Class).
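
A minimal sketch of a time-based, agent-based prompt is below: a handful of agents random-walk across the 2’x2’ canvas, and each frame's positions can be handed to whatever projection/display step the group is using (in Grasshopper, a Timer component would advance the frames instead). The agent count, step size, and frame count are illustrative assumptions.

    # Time-based drawing prompt sketch: agents random-walk across the canvas,
    # producing one list of positions per animation frame.
    import random

    CANVAS = 24.0     # inches, matching the 2' x 2' canvas
    N_AGENTS = 8      # assumed number of agents
    STEP = 0.4        # maximum move per frame (assumed)

    def make_agents(n):
        """Scatter n agents at random starting positions on the canvas."""
        return [[random.uniform(0, CANVAS), random.uniform(0, CANVAS)] for _ in range(n)]

    def step(agents):
        """Advance every agent by one bounded random step (one frame)."""
        for a in agents:
            a[0] = min(CANVAS, max(0.0, a[0] + random.uniform(-STEP, STEP)))
            a[1] = min(CANVAS, max(0.0, a[1] + random.uniform(-STEP, STEP)))
        return agents

    agents = make_agents(N_AGENTS)
    frames = []
    for _ in range(300):                      # 300 frames of growth (assumed)
        frames.append([tuple(a) for a in step(agents)])

    # Connecting each agent's successive positions across `frames` yields the
    # slowly growing paths that partners trace by hand during projection.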

Deliverables

Artifacts: 

  1. Hand Sketches drawn by partners (2 minimum)

Documentation (can include drawings, images, videos):

  1. Documentation of the logic of the digital drawing prompt (diagram, screen capture, video).
  2. Documentation of the physical drawing process (image, video).

Prior to the next class, please submit a short text write-up with accompanying images of your drawings and a video link to the XSEAD site, as discussed on the Submissions page.  Please be sure to add your project to the appropriate ‘pool’ as linked.

Resources

Help with Grasshopper: If you are just getting started, it is worth skimming the manuals linked on the Rhino resources page.

Download and help for the “Agent” plugin: resources for Alex Fischer’s plugin, including the Grasshopper component download, help files, and video tutorials.

Reverse Engineering Lab
https://courses.ideate.cmu.edu/16-455/s2015/reverse-engineering-lab/ (Tue, 03 Feb 2015)

Overview

There is no silver bullet for making a high-fidelity copy of a physical object in a digital modeling environment. Most processes require careful planning, an understanding of technological constraints, and significant post-processing to achieve suitable results. The following discusses relevant considerations and workflows for reverse engineering using Rhino’s modeling environment.

Rhino’s Bias

Rhino is natively a NURBS surface modeler, which means it can handle complex, free-form surfaces and smooth curvature very efficiently. The majority of Rhino’s tools are oriented toward surface manipulation. Although Rhino can accommodate solid modeling with polygon meshes, its toolset in this regard is limited.  This is important to consider when reverse engineering or preparing a file for rapid prototyping, since many scanning and printing workflows generate or require high-density polygon meshes. Because of this distinction, a significant part of the workflow requires knowing when and how to translate between NURBS surfaces and polygon meshes in Rhino.
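
As one hedged illustration, the sketch below shows the mesh-to-NURBS direction of that translation using rhinoscriptsyntax in Rhino's Python editor; the reverse direction (NURBS to mesh for STL/OBJ export) is normally handled with the Mesh and Export commands rather than a script.

    # Mesh -> NURBS translation sketch for Rhino's Python editor.
    import rhinoscriptsyntax as rs

    # Ask for a mesh (e.g. a 123D Catch or scanner result) to convert so that
    # Rhino's surface tools can be applied to it.
    mesh_id = rs.GetObject("Select a mesh to convert", rs.filter.mesh)
    if mesh_id:
        # Each mesh face becomes a NURBS face, so a dense scan mesh yields a
        # very heavy polysurface; reduce the mesh first where possible.
        nurbs_ids = rs.MeshToNurb(mesh_id)
        print("Created {} NURBS object(s)".format(len(nurbs_ids) if nurbs_ids else 0))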

Approaches

There are three main approaches to reverse engineering using Rhino and the resources available in the IDeATe Lab.

  1. Manual Reconstruction > Authors can build accurate reconstructions of physical objects using a combination of Rhino’s native commands. This workflow typically involves taking careful measurements of an object and importing scaled images to trace key information. There is usually a base layer of construction drawings that supports the creation of 3D surfaces. Benefits of this workflow include full control of modeling accuracy and level of detail, and the ability to generate NURBS geometry directly. Drawbacks include time-consuming, multi-step operations and limits to the level of complexity and surface detail one can realistically model.
  2. 3D Scanning > Authors can use a tabletop or hand-held 3D scanner to reconstruct an object. Benefits of this workflow include high-resolution mesh output (depending on the scanner), relatively low setup and processing times, and the potential for minimal post-processing of the mesh. Drawbacks include the need for expensive dedicated equipment and limitations in the size and surface quality of scanned objects.
  3. Photogrammetry > Authors can use a series of overlapping photographic images to reconstruct a physical scene using software like Autodesk’s 123D Catch. Benefits of this workflow include the use of simple and accessible equipment (a camera), relatively low capture times, and the ability to capture irregular and complex geometries. Drawbacks include the inability to capture reflective surfaces, the need to extract objects from the scene, and often significant mesh post-processing.

Example File

I have uploaded a folder with example files that illustrate best practices and useful commands for reverse engineering using Rhino and 123D Catch.

FAQs

  • Rhino can import and export .obj and .stl files for mesh editing and rapid prototyping.
  • Use PictureFrame to import images into Rhino.
  • Use CageEdit to apply custom global transformations to complex objects.
  • Use Check to analyze a mesh before printing (a scripted version of this check is sketched below).
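
For groups who prefer a scripted check, the sketch below flags open (non-watertight) meshes in the document before a file goes to the printer; it uses rhinoscriptsyntax and is a convenience sketch, not a replacement for Check and Rhino's repair tools.

    # Pre-print sanity check: report whether each mesh in the document is closed.
    import rhinoscriptsyntax as rs

    mesh_ids = rs.ObjectsByType(rs.filter.mesh) or []
    if not mesh_ids:
        print("No meshes found in the document.")

    for mesh_id in mesh_ids:
        name = rs.ObjectName(mesh_id) or str(mesh_id)
        faces = rs.MeshFaceCount(mesh_id)
        if rs.IsMeshClosed(mesh_id):
            print("{}: closed, {} faces - ready to export".format(name, faces))
        else:
            print("{}: OPEN, {} faces - repair before printing".format(name, faces))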

Resources

123D Catch: A free app from Autodesk to create 3D scans using a camera.

Rhino Tools: A collection of resources related to reverse engineering in Rhino.

Rhino Reverse: A plugin for Rhino designed to aid mesh-to-NURBS translation and handle high-polygon-count meshes.
