Human-Machine Virtuosity – An exploration of skilled human gesture and design, Spring 2020
https://courses.ideate.cmu.edu/16-455/s2020

Reconstructed Mementos – Final Report
https://courses.ideate.cmu.edu/16-455/s2020/2223/reconstructed-mementos-final-report/
Fri, 08 May 2020

This project uses archived 3D models as a basis for creating ‘reconstituted objects of meaning’ via algorithmic unwrapping and craft-derived digital manipulation.

We align this process with themes of memory and reclamation, recognizing the impossibility of perfect recall/reproduction, especially within the digital/physical flip-flop.

Craft Basis

Our goal was to create meaningful deformations through flat pattern manipulation – a family of methods for cutting, translating, and re-marking the unrolled flat pattern of a 3D object to change the final model. There are examples of this type of manipulation in both paper craft and garment production.

The Process

Each team member was tasked with first choosing a meaningful object that they didn’t have physical access to, then finding a representative model of that thing in an online library. With that model in hand, we followed distinct design pipelines to generate the final models. Our methods were unified by an overarching set of steps.

Our Separate Processes

Celi’s Process

Skinning

I chose to work with a model of a Star Wars AT-AT that I have at home but do not currently have with me.

I used Blender to break a model of my object into different sections and then unwrapped each section using the Blender Paper Model add-on.

Manipulation

The goal of my type of manipulation was to create organic growths on my model in order to cover certain parts that I did not remember well. I wanted to use additional material to “blur out” these parts, attempting to simulate what my mind does to parts of my memory of the object.

I created an Illustrator script that outputs a random-sized growth at designated edges of my unwrapped model’s net. This growth consists of tessellation origami patterns that can then be folded into an organic-looking addition.
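The actual script was written for Adobe Illustrator; the sketch below re-expresses its core idea in Python under stated assumptions (the accordion-style crease alternation, the 20–60% height range, and all names are illustrative, not the original script):

```python
import random

def growth_flap(edge_len, n_cols=4, seed=None):
    """Sketch of a random-sized tessellation growth for one edge of the net.

    Returns the flap height and a list of vertical crease lines
    (kind, x0, y0, x1, y1), with y measured outward from the edge.
    """
    rng = random.Random(seed)
    height = rng.uniform(0.2, 0.6) * edge_len      # random-sized growth (assumed range)
    col_w = edge_len / n_cols
    creases = []
    for i in range(1, n_cols):
        kind = "mountain" if i % 2 else "valley"   # alternate so the flap folds accordion-style
        creases.append((kind, i * col_w, 0.0, i * col_w, height))
    return height, creases
```

Running something like this once per designated edge, and drawing the returned creases onto the net, approximates the growth step described above.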

Module for origami pattern
Unwrapped model with added patterns
Reconstruction

The reconstruction of my model was made by folding, pasting and reassembling the different parts. The origami additions were folded up onto the model in order to cover up and “blur out” certain parts.

Final model

David’s Process

Skinning

I used tools in Blender to simplify and unwrap a model of a dog into a paper-foldable pattern.

Manipulation

The goal of these manipulations was to create a simple system for playing exquisite corpse with the computer. The computer randomly places cutting and folding patterns across the flat pattern. I respond by adding 2D illustrations that resolve the intention of those folding patterns. For example, a folded flap could become a window for a character, a shelf for some books, or a crack for grass to grow through.

Reconstruction

This method emerged when I was trying to consider a ‘charming’ way to frame this project, i.e., a way to frame the manipulation so that I’d be drawn back to it and excited by it.

Flattening models is integral to the process of applying textures to models; this pipeline inverts that process by flattening the model and then generating illustrated texture that responds to the flat pattern.

Lea’s Process

Skinning

I chose a Kitchenaid mixer. This object is an icon of domesticity and is in reality quite heavy and difficult to move (which is why I don’t have it with me in Pittsburgh). When thinking about memories and domestic confinement, I continued to return to the theme of hermit crabs — of repurposing a cast-off object at body scale; of “making do and mending.” I decided to sew a lightweight fabric “shell” of my mixer to inhabit.

The mixer has smooth curves that felt perfect for cutting as seams; however, my sketchy shareware model file was full of quirks and oddities. I initially planned to use Stein, Grinspun, and Crane’s intriguing Developability of Triangle Meshes code to cut the seams, but the model’s irregularities caused so many problems that by the time I had fixed them, I decided to mark the cut lines on the model in Blender by hand.

This process was very manual, with the result that, by the time I had a nicely carved model, I knew the digital model more intimately than I have ever known the physical one.

Manipulation

I noticed that I really liked the back part of the mixer head and wanted to use it as a sleeve cap, so I marked that part with UV painting in Blender and unwrapped again. With Illustrator scripting, I wrote a length-aware scale to match the relevant seam lines (marked in red) to the seam length of a known sleeve cap.

Also in Illustrator, I performed a computational “smearing” operation to add pleats. This operation is based on the flat pattern manipulation technique “slash and spread” and it functionally lengthens one seam length while maintaining the others. The lengthened seam can then be pleated or gathered back into its previous length, adding three-dimensional fabric volume.
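A minimal sketch of the arithmetic behind these two operations, assuming knife pleats (each of which consumes twice its depth in fabric); the function names are illustrative, and the real Illustrator script manipulates the drawn paths themselves:

```python
def length_aware_scale(seam_len, target_len):
    """Scale factor that makes a pattern piece's marked seam match a
    target seam length (e.g. that of a known sleeve cap)."""
    return target_len / seam_len

def pleat_plan(seam_len, target_len, n_pleats):
    """'Slash and spread' bookkeeping: lengthen one seam to target_len,
    then fold the extra length back in as n_pleats knife pleats.
    A knife pleat of depth d consumes 2*d of fabric, so the seam
    returns to its original sewn length while gaining volume."""
    extra = target_len - seam_len
    if extra <= 0:
        raise ValueError("target must exceed the current seam length")
    return extra / (2 * n_pleats)   # depth of each pleat
```

For example, smearing a 40 cm seam out to 52 cm and gathering it back with 3 knife pleats gives pleats 2 cm deep.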

Reconstruction

To reconstitute the digital model at body scale, I used a pico projector to project and trace onto fabric. I numbered the parts and referred to the digital model for how to re-assemble them.

System Insights

Sewing has a deceptively simple constraint: you can frankenstein/kitbash patterns together as long as their seam lengths match. This system therefore is one that scales objects by reference to body, via flattening all the way to the one dimension that is seam length: a sort of variational autoencoder for translating from model data to body space.

Reflections

Our collection of manipulations explores how the process of digital and craft based deconstruction can expose novel and creative ways of reinterpreting an object.

Many computational processes are about filtering data. This project imagines how a craftsperson can act as part of a filtering process for three-dimensional data.

Evil Super Grandma Final Report
https://courses.ideate.cmu.edu/16-455/s2020/1988/evil-super-grandma-final-report/
Thu, 07 May 2020

In this project, Group A (Ophelie Tousignant, Timothy Nelson-Pyne, and Elizabeth Lister) developed a social game about making a quilt together to perform a summoning ritual.

Abstract

  • The primary goal of this project was to develop a lighthearted game to connect friends over a distance. We also aimed to make creative use of AR and emphasize communication challenges and collaboration among the players.
  • We created a small game intended to be played by a group of friends over a video call. One player, the group leader, actually runs the software and communicates playful instructions to the other players, who then sew quilting tiles and send photos of them to the group leader, who assembles them virtually using AR technology to “complete the ritual”.
  • We intended for the physical appearance of the tiles (such as the colors and whether they are constructed “correctly”) to have a physical impact on the summoning that occurs at the end, but ultimately abandoned this goal in light of the technical challenges it would pose.

Implementation

  • Our group used Unity to create the game; this option made the most sense to us given that it’s a free, versatile tool with a wide range of support resources.
  • We assume that the players will be on a group video call using software such as Skype, Discord, or Zoom to play together. This is a reasonable assumption given that present circumstances practically mandate widespread use of group video call tools anyway.
  • The players are assumed to have access to sewing supplies and scrap fabric. The act of sewing while making conversation with friends can be fun and relaxing. Group activities such as this also help to pave the way for comfortable conversation.
  • The group leader is assumed to have some familiarity with Unity. This was unavoidable given that it’s necessary for them to prepare and upload photos of the sewn tiles to be used in-game.
  • Given that some tiles are more laborious to sew than others, we utilize silly penalty rules, such as speaking in fake Australian accents, for players who find themselves with idle time on their hands, so that they stay engaged.

Outcomes

  • Shortcomings: using AR in Unity resulted in a somewhat clunkier experience than intended when considering that the game leader has to upload photos, use an additional webcam, etc. This, unfortunately, somewhat limits the accessibility of our game. We also regret that we were unable to include some way of using the quilt’s construction to affect the summoning at the end.
  • Successes: we felt that we were able to incorporate the “grandma” theme and narrative in a way that augments the experience for the players, and that the AR aspect of the game was cleverly executed. Additionally, Tim in particular was able to learn a lot about Unity in the process of making this game, and Elizabeth and Ophelie really enjoyed giggling about evil grandmas.

Contributions

  • Tim is responsible for the technical implementation of the game.
  • Ophelie is responsible for the scripting of the game and made the test tiles.
  • Elizabeth is responsible for the animated assets of the game.

Description of the Process

  • A designated game leader with the game installed and some number of players are all on a group video call.
  • The group leader relays instructions from the helper grandma to the other players to guide them through sewing quilt tiles. The players are encouraged to discuss instructions amongst themselves and coordinate colors and designs.
  • However, there is a catch — the instructions are vague, and include silly penalty games and challenges. For example, a player with little to do during one step of the process might be asked to perform a coin trick to appeal to the dark forces they serve.
  • After all quilt squares are complete, the players take photos of their respective squares and send them to the group leader, who gives them a quick glance-over and doles out penalties for bad tiles accordingly.
  • The group leader uploads the photos to the program and uses AR tracking to arrange the tiles and choose embroidery in a design the players agree on.
  • The summoning is complete! Witness the reunion of your grandma friend and her bingo buddy or buddies.

Video

https://drive.google.com/file/d/1A84HvRk5NGUUGuK8JqrkcuzfL_r3n3Pa/view?usp=sharing (couldn’t get the video to embed)

Image Gallery

Underwater Garden – Final Report
https://courses.ideate.cmu.edu/16-455/s2020/2172/underwater-garden-final-post/
Wed, 06 May 2020

We present a playful design system that embeds actuation and computation in ordinary materials like paper. Our technique turns two-dimensional paper structures into self-morphing plants that react to the stimulus of water and transform in organic, pre-programmed ways when placed within underwater environments.

MORPHING PRIMITIVES

We explored a rich vocabulary of morphing primitives based on the structure and mechanism of hydrogel beads on substrates, as shown in Table 1. We hope this serves as a geometrical reference for readers who would like to create morphing structures with the proposed mechanism. The substrates can be divided into 1D linear structures and 2D sheets. For each substrate, the controllable parameters for the bead patterns include the bead alignment direction and the gap distance between beads. We were inspired by the primitive designs presented in Printed Paper Actuators [36], and demonstrated that our beads on substrates can reach a similar level of shape complexity without the need for 4D printing and shape memory filament. Later we will show how the morphing primitives can be combined to form more complex and hierarchical morphing structures.

Table 1: Morphing primitives based on the structural composition of hydrogel beads on substrates

COMPUTATIONAL DESIGN WORKFLOW

We developed several computational design tools and a simulation tool to guide users through a generative design process for the morphing mechanism of hydrogel beads on substrates. Users can generate a geometry, define the distribution of hydrogel beads, simulate the morphing behavior, and generate fabrication files through the tool. The tool currently supports two generative structures – branch and kirigami. Additionally, users can create their own geometries from scratch. 

Simulation Model

As hydrogel beads expand after being deployed underwater, they eventually collide with each other. The expansion of the beads causes the bending of the substrate, and the maximum bending angle is determined by the distance (D) between adjacent beads and the radius (r) of each bead, as illustrated in Figure 1.

Figure 1: In the structure of hydrogel beads on substrates, the distance (D) between adjacent beads and the radius (r) of each bead determine the maximum bending curvature of the substrate.
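As a rough illustration of this D-and-r relationship, here is a toy collision model in Python: it assumes the swollen beads act as rigid spheres of radius R glued on the inner side of a hinge, which folds until the sphere surfaces touch. The closed form is our own geometric sketch for intuition, not the formula used by the simulation model:

```python
import math

def max_bend_angle(D, R):
    """Toy model: beads of (swollen) radius R sit at spacing D on the
    inner side of a fold. Their centers lie at height R above the
    substrate; folding the substrate by theta brings them together,
    and they touch when theta = pi - 4*atan(2R/D).
    The result is clipped to the physical range [0, pi]."""
    theta = math.pi - 4.0 * math.atan2(2.0 * R, D)
    return min(max(theta, 0.0), math.pi)
```

Consistent with Table 2, shortening D enlarges the maximum bending angle: beads already touching (D = 2R) allow no bend at all, while widely spaced beads permit a nearly complete fold.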

We further verified this bending-angle control mechanism with a numerical simulation model as can be seen in Table 2.

Table 2. Simulation results. Shortening the distance between two adjacent beads (b) can enlarge the maximum bending curvature compared to (a).

Since we only deal with the bending behavior that has one degree of freedom, we can simplify our bending model as beads on straight lines. More specifically, a line is represented as a polyline with segments divided by the beads. Each line is set to be free to bend or twist around its own axis. One anchor point (i.e., boundary condition) has to be set for each line in order to run the simulation correctly. To accommodate diverse substrate patterns in our simulation model while keeping the underlying simulation model simple and fast, we divide a given structure into three basic geometrical elements: bead, line, and substrate (Figure 2). A line with beads resides on top of a substrate and can be drawn and defined by users in our computational design tool interface. We first simulate the bending of the line based on the distribution of beads. A ‘circle packing method’ [33] is employed to define the moment of bead expansion and collision. The final simulation of the substrate is then achieved by mapping the original positions of points on the line to the final positions.

Figure 2: The simulation process involves the mapping of a 2D substrate to an actuating line defined by the user.
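The line-bending step can be sketched as a simple forward walk: given the segment lengths between beads and a bend angle at each joint, place the polyline's vertices in the plane. This is an illustrative reconstruction, not the actual Grasshopper implementation:

```python
import math

def fold_polyline(seg_lens, joint_angles):
    """Walk segment by segment in 2D, turning by the given bend angle
    (radians) at each joint; returns the vertex coordinates, with the
    anchor point fixed at the origin."""
    x, y, heading = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for i, length in enumerate(seg_lens):
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        pts.append((x, y))
        if i < len(joint_angles):
            heading += joint_angles[i]   # bend at the bead position
    return pts
```

Mapping the substrate's points onto the folded line, as described above, then yields the final simulated shape.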

USER DESIGN WORKFLOW

Currently, our design tool supports the generative design of branches and kirigami-based structures. For both structures, the workflow is similar, including both generative design and simulation. The user design flow for the computational design platform is composed of the following steps, as shown in Figure 3.

Step 1: Specify an outline of the substrate. Users can select a basic outline from the library or draw a closed outline from scratch (Figure 3a). 

Step 2: Generate branches that fill the outline automatically (Figure 3b). Many geometrical features of the branches can be adjusted (Figure 3c), such as the total number of hierarchy levels, the smoothness of each corner, etc.

Step 3: Specify the layout of the beads. Users first draw polylines to indicate which branches are to be actuated. Beads are generated along the actuation lines (Figure 3d). Users can then adjust the distance between adjacent beads, which affects the maximum bending curvature of the corresponding line (Figure 3e). Users can also select the side of the substrate on which the selected beads will reside.

Step 4: Simulate the transformation, i.e., the morphing behavior (Figure 3f). Steps 1 to 4 can be iterated to reach a desired shape and transformation.

Step 5: Export the outline of the substrate for laser cutting and export a .pdf file as a reference for bead placement and adherence to the substrate.

Figure 3: (a) Set outline; (b) generate branches; (c) adjust the branching pattern; (d) define actuator region and assign actuators; (e) adjust maximum bending angle through distances between adjacent beads; (f) simulate morphing effect.

Generative Geometries

In the computational design platform, we provide two predefined generative shape libraries for users, i.e., a branching system and a kirigami system; both can automatically suggest layout paths for beads in their output.

Branching System: Branch patterns are inspired by corals and seaweeds, following our design guideline of generating biomimetic and organic forms for an ocean garden. Figure 4 shows a collection of British seaweed archived in the 19th century [12].

Figure 4: The Nature-printed British Seaweeds, published by Henry Bradbury, (1859–60)

We adopted the Diffusion-Limited Aggregation (DLA) method [43] to develop rule-based branching generative structures. It opens a design space of fractal growth, producing organic dendritic skeletons with biomimetic aesthetics. While these skeleton lines serve as potential bead layout paths, a non-uniform parametric thickening post-process results in an organic boundary curve with various radius values at different hierarchies of the branch structure. We added additional randomness in the geometry to mimic natural underwater living organisms.

Figure 5. Examples of branches generated and simulated by the design tool.
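For readers unfamiliar with DLA, the sketch below is a minimal lattice version of the idea (not our production generator): random walkers wander until they land next to the growing cluster and stick, which is what produces the dendritic skeletons.

```python
import random

def dla(n_particles=30, size=31, seed=1):
    """Minimal lattice DLA: returns the set of stuck cells, grown from a
    single seed at the center of a size-by-size toroidal grid."""
    rng = random.Random(seed)
    c = size // 2
    stuck = {(c, c)}                                   # seed cell
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_particles):
        x, y = rng.randrange(size), rng.randrange(size)
        while True:
            if any((x + dx, y + dy) in stuck for dx, dy in steps):
                stuck.add((x, y))                      # stick next to the cluster
                break
            dx, dy = rng.choice(steps)
            x, y = (x + dx) % size, (y + dy) % size    # wrap to stay on the grid
    return stuck
```

The stuck cells trace a branch skeleton; our tool then thickens such skeletons into organic boundary curves.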

Kirigami System: Taking the basic input geometry as the outline, a variety of cutting patterns can be generated based on the variation of different parameters including the number of concentric cutting cycles, the number of cuts on each cycle, and the size of the gap between the cuts.

Figure 6. Kirigami cutting patterns generated by varying the number of concentric cutting cycles, the number of cuts on each cycle, and the size of the gap between the cuts.
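A sketch of how such a pattern can be enumerated from those three parameters; the staggering of alternate rings is an illustrative choice, not necessarily what our tool does:

```python
import math

def kirigami_cuts(radius, n_rings, cuts_per_ring, gap_frac=0.1):
    """Return concentric cut arcs as (ring_radius, start_angle, end_angle)
    tuples, leaving gap_frac of each slot uncut; alternate rings are
    rotated by half a slot so the cuts interleave."""
    arcs = []
    slot = 2 * math.pi / cuts_per_ring
    for i in range(1, n_rings + 1):
        r = radius * i / (n_rings + 1)            # evenly spaced rings inside the outline
        offset = slot / 2 if i % 2 else 0.0       # stagger alternate rings
        for k in range(cuts_per_ring):
            a0 = offset + k * slot + gap_frac * slot / 2
            a1 = offset + (k + 1) * slot - gap_frac * slot / 2
            arcs.append((r, a0, a1))
    return arcs
```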

Fabrication Process

From the digital files generated through our design tool, we cut the designed pattern and mark the locations of the beads with a CNC plotter (Curio, Silhouette America) in two sequential steps (Figure 7a). The next step involved applying a very small quantity of cyanoacrylate glue to the paper or cloth substrate using a glue dispenser tip, just enough to hold a hydrogel ball; the hydrogel balls were then placed with tweezers (Figure 7b). Applying a very small quantity of glue in a discrete manner at each contact spot was critical to maintaining the flexibility of the substrate. When the glue was applied in a continuous line, the substrate became hard and stiff, acting as an additional constraint and reducing the efficacy of the bending behavior. Lastly, for our deployment experiment, we simply dropped the structure into a water tank (Figure 7c).

Figure 7. Fabrication and triggering process.
Figure 8. Transformation in real-time



Kexin Lu, Twisha Raja, Maria Vlachostergiou

Reinterpreting Artwork – Final Report
https://courses.ideate.cmu.edu/16-455/s2020/2131/reinterpreting-artwork-final-report/
Mon, 04 May 2020

Project Description

Our design system allows us to reinterpret 2D drawings and extract additional information through stroke and line analysis. This new data allows the 2D art to be visualized in a 3D space, which can aid the artist in creating further artwork.

Motivation and Background

Our team originally formed around a nebulous idea of transforming and reinterpreting 2D artwork. Some existing examples of this that inspired us included perspective-based sculptures and computational string art.

The artist used a python program to determine optimal string placement to generate the image.
From different angles, the sculpture forms different images.

We were not sure what transformations would produce the most meaningful results, and began experimenting with different ways to transform images in Grasshopper. 

We also wanted to leverage the fact that two of the three members were art and design students, so choosing the source materials for our transformations became an important consideration. In the process of designing the system, Jon’s and Tay’s artwork and differing styles informed the transformations applied to the images.

Initial tests involved using different layers in Jon’s digital drawings, and arranging them such that certain parts of the drawing are only visible from oblique perspectives.

These transformations relied on the artist deliberately choosing the arrangement of layers, which is not extensible to non-digital mediums where layers cannot be easily extracted. We began looking at image processing techniques that could “infer” features of the artwork, whether or not they were intended by the artist.

We used high-resolution scans of pencil sketches for the analysis. The interaction of the graphite with the texture of the paper, as well as the shading techniques used, yields a much richer set of features than the layers in the previous experiment.

Edge detection did not produce interesting results

System Overview

Our finalized pipeline begins with a source artwork, which could be created specifically for this system or be any piece that the artists want to take a second look at. Any type of drawing would work, but we found that pencil, pen, and stippling drawings worked best. We then scanned the artwork (at the highest resolution possible) and processed it in Python using the OpenCV library’s probabilistic Hough Line Transform, which infers the lines and strokes in the artwork.

The line data and the image are then imported into Rhino and Grasshopper, and the lines are superimposed on the image. To achieve the 3D effect, the direction of a given line determines how far “up” from the original drawing the line is moved.
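The lifting step might look something like the sketch below; the actual mapping lives in Grasshopper, and the linear angle-to-height rule and all names here are assumptions for illustration. Each segment comes in as the (x1, y1, x2, y2) endpoints that cv2.HoughLinesP returns:

```python
import math

def lift_lines(segments, max_height=10.0):
    """Assign each detected line segment a z-offset derived from its
    direction: the segment angle, taken modulo 180 degrees, is mapped
    linearly onto [0, max_height]."""
    lifted = []
    for x1, y1, x2, y2 in segments:
        ang = math.atan2(y2 - y1, x2 - x1) % math.pi   # direction, mod 180 deg
        z = max_height * ang / math.pi
        lifted.append((x1, y1, x2, y2, z))
    return lifted
```

With this rule, horizontal strokes stay on the drawing plane while steeper strokes rise above it, so regions sharing a stroke angle separate into layers.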

Outcomes 

Animated

We then animated the lines to show the extraction of the “strokes” from the artwork. From the side, we can sometimes see that different regions of the drawing are drawn at almost exclusively similar angles. For the artist, these inferred lines can reveal intentions that they might not have been aware of, as well as generate new ideas from lines that were inferred but not intended.

Whether it be the strokes exploding out of the drawing, or lines emerging in a general direction, we believed that an animated 3D sculpture could be both beneficial to the artist and a breathtaking experience for anyone experiencing the art.

AR

Although we initially stayed away from Augmented Reality for fear of using it as just a gimmick, we realized that it would be an effective mode for viewing the work in a new context. By putting the newly created 3D models in a physical 3D space, artists can explore their work more easily and look at it from literally new perspectives within a new context. The models can also be scaled up and down to achieve different effects. Artists could view the models through AR in real time and work in parallel to create new concepts and art.

We were not able to show color in these models due to hardware limitations.

Takeaways and Next Steps

We hope that the digital pipeline and generated visualizations enable artists to extract a new view of their work. The resultant artifacts are by no means an end product. In fact, the artist could then create new work based on the new forms and abstractions they identify within the visualization. This new drawing could then be taken back into the system to generate a new image and experience. This iterative process could become a small part of an artist’s practice.

The pathway could also serve as a learning tool for artists. Artists can view the exploded layers of strokes from any drawing and recreate the drawing using the inferred orders and strokes they see in AR.

Since our system did not work well for colored images, a possible next step could be isolating the individual color channels and running each through the system. After each layer has been run through the Hough Line Transformation, they could be superimposed onto each other in Rhino to generate one 3D sculpture.

Pleating Final Report
https://courses.ideate.cmu.edu/16-455/s2020/1998/pleating-final-report/
Sun, 03 May 2020

System objectives

Starting from the traditional folding technique of pleating, our system facilitates the creation of pleating patterns through a 2D interface in Rhino. Pleat designs can be rendered in the physical world by printing the crease pattern on paper or can be rendered virtually as mesh models in Rhino or other modeling applications.

“Complete Pleats” by Paul Jackson served as the primary reference material regarding pleating techniques. Amanda Ghassaei’s online origami simulator was used to simulate the fold patterns created in Rhino. Translating between Rhino and the simulator employs the .FOLD format.

After creating the core pipeline which translates 2D line drawings from Rhino to 3D dynamic folded forms in the simulator, each team member aimed to extend the tool and create unique artifacts from individual workflows.

Core system

The core system has three primary components: the Rhino working area; the Grasshopper script that translates the Rhino geometry to the .FOLD format and displays the mesh of folds; and Amanda Ghassaei’s online origami simulator.

Core system’s basic workflow

Core Workflow

A designer creates line geometry in the Rhino work area using layers representing each type of fold supported by the origami simulator. While the type of fold is set by the layer, the angle of the fold is set explicitly by using the line object’s “Name” property.

At any point, the designer can visualize the design by saving a .FOLD file of the design and uploading it to the simulator, allowing for quick iteration between 2D and 3D representations. Concurrently, the designer may explore the design by folding physical material, either freehand or from patterns printed from the digital model.

2D representation of folds
Folds simulated from linework
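The layer-and-name convention above maps naturally onto the .FOLD format, which is just JSON. The sketch below shows the shape of that translation under simplifying assumptions (every line becomes its own pair of vertices, whereas a real export would merge shared vertices and add faces; the layer names are illustrative):

```python
import json

def lines_to_fold(lines):
    """Build a minimal .FOLD document from (x1, y1, x2, y2, layer, angle)
    records, where the layer names the crease type and the fold angle
    comes from the line object's "Name" property."""
    assignment = {"mountain": "M", "valley": "V", "boundary": "B", "flat": "F"}
    verts, edges, kinds, angles = [], [], [], []
    for x1, y1, x2, y2, layer, angle in lines:
        edges.append([len(verts), len(verts) + 1])   # edge over the next two vertices
        verts += [[x1, y1], [x2, y2]]
        kinds.append(assignment[layer])
        angles.append(angle)
    return json.dumps({
        "file_spec": 1.1,
        "file_creator": "pleating tool sketch",
        "vertices_coords": verts,
        "edges_vertices": edges,
        "edges_assignment": kinds,
        "edges_foldAngle": angles,
    })
```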

Individual workflows and artifacts

1. Curve to Pleated Surface Tool

By utilizing the primary Rhino to .FOLD file pipeline, we are able to easily translate content created in the Rhino workspace into the online simulator to visualize the creation. I decided to use this tool to experiment with pleating patterns that create a dynamic curved surface. As seen in example projects (Figure 1), simple pleating patterns can be freely manipulated after having been folded to create dynamic forms that are more exaggerated than their folding pattern initially lets on. This exploitation of a pleated form highlights the respective strengths of both digital and analog making within this pipeline.

(Figure 1)

Digital Manipulation of Input Geometry: 

Keeping in mind the possibility of manipulating a pleated material, a target form is created in the Rhino workspace. In this pipeline, the target form is a freeform curve created by the user in elevation. The goal of the pipeline is ultimately to approximate this freeform curve through the assignment of pleats to describe it.

To begin, a freeform curve is drawn to the liking of the user. Then, using a Grasshopper script, a series of points is assigned to the curve based on the degree of curvature (Figure 2). The areas of more severe curvature along the curve are assigned a higher density of points to describe them. The descriptor points are grouped into two categories: positive slope and negative slope. These two types of points are assigned different types of pleats, described later in the process.

(Figure 2)

Next, the curve and its newly assigned points are unrolled along the X-axis. Each point represents the future location of a pleat. In this example, the points with a negative slope are assigned a knife pleat that steps down, while the points with a positive slope are assigned a knife pleat that steps up.
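The slope-sign classification described above is simple to sketch: given sampled (x, y) points along the elevation curve, each interval gets a step-up or step-down knife pleat (an illustrative Python reconstruction of the Grasshopper logic):

```python
def assign_pleats(xs, ys):
    """For consecutive sample points on the profile curve, classify each
    interval by local slope sign: positive slope -> a knife pleat that
    steps up, negative (or flat) -> a knife pleat that steps down."""
    pleats = []
    for i in range(1, len(xs)):
        slope = (ys[i] - ys[i - 1]) / (xs[i] - xs[i - 1])
        pleats.append("up" if slope > 0 else "down")
    return pleats
```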

(Figure 3)

Once assigned along the X-axis, the fold pattern resembles an intricate patterning of mountain and valley folds (Figure 3). However, this pattern can be easily visualized using the online simulator. This portion of the pipeline utilizes the primary “Rhino file to .FOLD file” export function that is shared between the pipelines developed within our group. Once the file is exported from Rhino to the simulator, we get a sense of how effectively this pipeline was able to approximate the input curve by assigning points and pleats.

(Figure 4)

As seen above, the knife pleat fold lines are engaged at a 90-degree fold, which maximizes the vertical distance change. We can begin to see the input curve represented in the simulation; however, it is not nearly as dynamic as the input itself (Figure 4). This is where the analog folding of the pattern takes the form to the next level and truly starts to resemble the original input, or something derived from it. One of the strengths of analog folding is the ability for the user to intuitively vary the angle at which each fold is set. In the simulation, every fold is set to the same degree. When folded by hand, the folds can be stretched, manipulated, and compressed easily at varying degrees to further accentuate the form.

(Figure 5)

You can see above that I have accentuated the curve past what the simulator suggests. To create a more interesting form, I have stretched open the pleats along the peak of each curve while minimally altering the pleats in between (Figure 5). A single valley fold along the back edge allows the form to keep its shape and become more expressive by providing rigidity.

With this workflow, the creator has the ability to create a dynamic profile for a material, knowing that the pleated form will result in a similar shape. From there, they can manipulate, fold, and twist the form into something different from the original input. This application could be used at scales from paper folding to inspiring the roof form of a building. For a further and more detailed demonstration of how this portion of the pipeline functions, see the video below!

2. Pleat intersection workflow

This workflow introduces an alternate representation method to the core tool. In addition to using single lines to represent individual folds, single-line representations are expanded to include pleats of multiple folds. Knife folds, as shown in the Core Workflow images, are used to explore pleat intersections. Rhino layers for left- and right-hand knife pleats are added to the model for the designer’s use. Critical to representing pleats of multiple folds is understanding the pattern of the resultant folds at an intersection. The image below explains how the sequence of creating the intersecting folds impacts the pattern of the folds at the intersection. Because the sequence of each intersection must be defined, a second avenue of input opens for the designer to manipulate.

Example studies of the crease patterns that result at knife fold intersections. Note that the fold direction of the pleat made first stays continuous through the intersection, while the folds of the second pleat swap direction within it.
Video explaining the pleat intersection workflow plugged into the core Rhino 2D to .FOLD file workflow
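The direction-swap rule illustrated above can be expressed as a small sketch. The encoding ('M' for mountain, 'V' for valley) and the function name are our own illustration, not part of the Rhino tool:

```python
def intersection_folds(first_pleat, second_pleat):
    """Crease directions inside a knife-pleat intersection.

    Each pleat is a list of fold directions ('M' mountain, 'V' valley).
    The pleat folded first keeps its directions through the intersection;
    the folds of the second pleat swap direction while inside it.
    """
    swap = {'M': 'V', 'V': 'M'}
    return {
        'first': list(first_pleat),                # continuous through
        'second': [swap[f] for f in second_pleat]  # swapped inside
    }

# A knife pleat ('M', 'V') crossed by a second knife pleat ('V', 'M')
result = intersection_folds(['M', 'V'], ['V', 'M'])
print(result['second'])  # ['M', 'V'] -- directions flipped in the intersection
```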

The work below introduces the formal possibilities that can be explored when intersecting knife pleats. The order patterns were created using simple looping algorithms, but the nature of this intersection representation affords many other ways to assign order values. As the examples below show, the interplay between even simple variations in knife fold hand and sequence order creates broad variation in the formal and performance qualities of the 3D object.

Pleat pattern 1 created with pleat intersection tool
Pleat pattern 2 created with pleat intersection tool
Pleat pattern 3 created with pleat intersection tool
Pleat pattern 4 created with pleat intersection tool
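The kind of simple looping algorithm used to assign order values can be sketched as follows. This checkerboard rule is a hypothetical illustration of the approach, not the exact algorithm behind the patterns above:

```python
def sequence_orders(rows, cols):
    """Assign which pleat is folded first (0 or 1) at each intersection
    of a pleat grid, using a simple alternating loop. This checkerboard
    rule is just one of many ways to assign order values."""
    return [[(r + c) % 2 for c in range(cols)] for r in range(rows)]

for row in sequence_orders(4, 4):
    print(row)
# [0, 1, 0, 1]
# [1, 0, 1, 0]
# [0, 1, 0, 1]
# [1, 0, 1, 0]
```

Swapping in a different rule, striped, spiral, or random, changes which pleat dominates at each crossing and therefore the overall form.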

3. 3D Pleating

This workflow adds a way to pleat on a 3D object, extending the core tool into three dimensions. In addition to manipulating standalone 2D folding patterns, users can create their own 3D objects and add folding patterns to their unrolled flat surfaces. As the figure below shows, even a cube can be pleated quite cleanly.

The following flowchart shows the 3D pipeline workflow. The user first creates a 3D object. The pipeline then automatically takes this object and unrolls it to a flat surface, saving all edge relationships and angles in the core workflow standard. The flat surfaces are then translated and scaled so that they fit inside the "Working Area", which connects to the core workflow. The last step before using the core workflow is to bake these edges from Grasshopper components to Rhino geometry. The user can then iteratively design on the flat surfaces and preview the result in the fold simulation.

The 3D pipeline flowchart
Corresponding Grasshopper components. The blue group is the 3D unrolling part and the green group is the original core workflow.
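The translate-and-scale step of the pipeline above can be illustrated with a minimal sketch outside Grasshopper. Assuming the unrolled faces have been reduced to 2D points, a uniform scale (which keeps the fold pattern's proportions intact) fits them into a rectangular working area:

```python
def fit_to_working_area(points, area_min, area_max):
    """Translate and uniformly scale 2D points (the unrolled faces) so
    they fit inside a rectangular working area. The uniform scale keeps
    the proportions of the fold pattern intact."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    w, h = max(xs) - x0, max(ys) - y0
    s = min((area_max[0] - area_min[0]) / w,
            (area_max[1] - area_min[1]) / h)
    return [((x - x0) * s + area_min[0],
             (y - y0) * s + area_min[1]) for x, y in points]

# Fit a 20 x 10 bounding box into a 10 x 10 working area
flat = fit_to_working_area([(5, 5), (25, 15)], (0, 0), (10, 10))
print(flat)  # [(0.0, 0.0), (10.0, 5.0)]
```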

The following video shows the automatic unrolling step, which can handle most types of 3D objects; the script is shown unrolling a cube, a polygonal solid, and an egg-shaped 3D object.

The following video shows the actual pipeline for pleating a 3D object, in this case a dodecahedron. The object is first unrolled into flat surfaces. The user can then add mountain or valley folds to these surfaces. After several design iterations, the user can view the result in the simulation environment.

The following pictures show the fabrication result. The user can create the actual modified 3D object using paper.

Flat surfaces created by the pipeline
Flat surfaces fabricated by paper

Reflection

Much like the traditional pleating techniques it draws on, the Rhino to .FOLD pipeline at the core of our project proves useful across a wide range of applications. As the three distinct adaptations above show, the pipeline can serve as a parametric pleating design tool, a 3D-to-2D form builder, and a curve simulation tool. Surely there are many integrations of the tool yet to be seen, but that is what makes it so powerful.

Exploration with the tool revealed a need to balance digital and analog modeling. Questions stemming from these explorations center on how to create a feedback loop between the digital and physical realms. How can one bring existing pleating or geometry from the physical world into the Rhino workspace and continue to develop it through digital means? Once a design is completed and physically made, is there a way to capture this development? While there are a number of directions to explore, we know that our Rhino to .FOLD pipeline represents the "middle link" in a chain of design exploration. Exciting possibilities only need to be imagined using this digital twist on the traditional practice of pleating.

Group D – Underwater Garden
https://courses.ideate.cmu.edu/16-455/s2020/1929/group-d-underwater-garden/
Posted Mon, 30 Mar 2020

Methodology description:

In our project ‘underwater garden’, we present a playful design system that embeds actuation and computation in ordinary materials like paper. Our technique turns passive paper into a self-assembling / self-morphing composite that reacts to the stimulus of water and transforms itself in interesting, pre-programmed ways. 

The two methods used are:

A) the kirigami technique as a design system for the exploration and production of three-dimensional form out of two-dimensional shapes, and B) the idea of a new, programmable, bi-layer composite that uses hydrogel balls and paper as the actuating and constraint layers, respectively.

Method A: Kirigami

Example 1: An exploration of kirigami using anchors and offsets, where the anchors denote the supporting portion of the paper and the cuts allow the 2D sheet to lift into space.

Example 2: Outlines of shapes that bend around a particular anchor point.
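A generative version of the anchor-and-offset idea can be sketched in code. The following hypothetical example (not one of the tools described here) lays out a standard offset-slit kirigami pattern, where the uncut gaps between slits act as the anchors:

```python
def slit_pattern(rows, cols, slit_len, gap, row_pitch):
    """Generate an offset-slit kirigami pattern as line segments
    ((x0, y), (x1, y)). Alternate rows shift by half a period so the
    uncut gaps between slits act as anchors, letting the sheet
    stretch and lift into 3D when pulled."""
    period = slit_len + gap
    segments = []
    for r in range(rows):
        y = r * row_pitch
        shift = period / 2 if r % 2 else 0.0
        for c in range(cols):
            x0 = c * period + shift
            segments.append(((x0, y), (x0 + slit_len, y)))
    return segments

cuts = slit_pattern(rows=3, cols=4, slit_len=8, gap=4, row_pitch=5)
print(len(cuts))  # 12 cut segments, ready for plotting or hand cutting
```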

Method B: Programmable materials

The idea here is to attach hydrogel balls to paper geometries, placing them in sequences that form different shapes on those 2D geometries. Arranging the hydrogels in different shapes and orders will cause the same paper kirigami to morph in different ways in space. The mechanics behind this concept are that hydrogel swells when it absorbs water; the attached balls force the paper to bend as the paper constrains the hydrogel's expansion.

This methodology presents a bi-level actuation system in which morphing is achieved by adhering hydrogel to paper or another thin material that acts as the constraint layer. As the artifact is immersed in water, the hydrogel expands while the constraint layer remains as is. The second level of actuation occurs as the balls swell, causing the structure to morph into the shapes shown. Future directions include scaling up the structures and exploring different natural materials that can achieve a similar aesthetic.
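As a rough intuition for this bending behavior, the bilayer mechanics can be approximated with a bimetallic-strip-style estimate. The formula below is a deliberate simplification that ignores the stiffness ratio of the two layers, and the numbers are illustrative assumptions, not measurements from our experiments:

```python
def bend_radius(swell_strain, thickness):
    """First-order estimate of the bend radius of the composite: the
    hydrogel layer expands by `swell_strain` (e.g. 0.10 for 10%) against
    the paper constraint layer; `thickness` is the combined thickness.
    A bimetallic-strip-style simplification ignoring stiffness ratios.
    """
    return thickness / swell_strain

# A 0.5 mm paper/hydrogel composite with 10% hydrogel expansion
print(bend_radius(0.10, 0.5))  # 5.0 (mm) -- more swelling, tighter curl
```

The qualitative takeaway matches the craft observation: more swelling, or a thinner composite, produces a tighter curl.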

Context and application: 

Since our hydro-composite uses water and humidity as environmental stimuli, we embrace an underwater environment as the context for our application. What if we used our two-part system to explore and build artificial structures of decorative plants for a fish tank? What if we created a series of accessible computational tools so that anybody could create decorations for their pet fish at home? That led us to the following inquiry: what kind of DIY tools should we create and suggest for use?

Tools to explore: 

Tool 1: The first computational tool would be a series of generative design systems for the exploration of parametric, organic forms. Each of us will develop a different design system, exploring different ways of creating 3D structures out of 2D shapes. → Week 1

Tool 2: The second computational tool would be an interactive simulation system, in which we can apply the hydrogel effect to different shapes and predict how various designs would morph underwater. → Week 2

Tool 3: Finally, the third tool would be a series of online videos demonstrating how people can use the tools to produce designs, simulate the transformations, and fabricate their own decorative plants using simple materials like paper, super glue, and hydrogel balls ordered from Amazon.

Group B – Reconstructed Mementos
https://courses.ideate.cmu.edu/16-455/s2020/1852/reconstructed-mementos/
Posted Mon, 30 Mar 2020

Exploring methods for unwrapping and distorting the flat patterns of objects that connect to our shared imaginaries and memories.

This project will use photogrammetry and archive data as a basis for creating “reconstituted objects” via algorithmic unwrapping and craft-derived manipulation. We align this process with themes of memory and reclamation, recognizing the impossibility of perfect recall/reproduction, especially within the digital/physical flipflop. We will focus on items of our daily lives in our newly-narrowed worlds — our personal domestic surroundings.

Our goal is to use an algorithm to unroll the chosen 3D model at logical seam points. We then use various hand transformation techniques to manipulate the pattern: cutting it apart and adding material (as one would do in certain garment patterning techniques), or collaging other flat patterns onto it; see the precedent example below. The output of this project will be fabric or paper artifacts with a range of transformations applied to them.

Conceptual Notes

  • Memory — hints of an object, not photographically perfect
  • Objects, impossibility of re-capture
  • Encoding actions over time — objects identifying past actions
  • Reclamations (hermit crabs, internet detritus)
  • Annotations, recollections

Initial Explorations

For our initial explorations, each group member investigated the concept of memory using the tools of flat patterning, paper, and fabric.

A very simple and repeatable flat pattern, a paper bag is a highly recognizable geometric form.

The goal of this project was to create a sort of three-dimensional pixel and use it to recreate objects as memories. This project plays with the idea of memory distorting objects, and represents that distortion through modular origami.

Technical Process

On a technical level, the “unwrapping” pipeline is already somewhat complex. We are using the research code from Stein, Grinspun, and Crane’s “Developability of Triangle Meshes” (SIGGRAPH 2018) to form the basis of the first stage of unwrapping. However, that work relies on preprocessing (using the authors’ existing tool) which introduces some distortions of its own — here is a model which began as a cube:

The unwrapping is output as an OBJ file, which must undergo further processing to become a set of edges. Here are results from rendering the OBJ through Blender's "Freestyle" SVG renderer (colored in Illustrator for readability):

We will explore computational approaches to further manipulating this data, e.g. in Rhino or Processing, in line with inspirations from sewn flat patterning technique and modular origami.
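As one possible alternative to the Blender Freestyle route, the edge set can be pulled directly from the OBJ with a short script. This is a sketch of the idea, not the pipeline we used:

```python
def obj_edges(path):
    """Extract vertices and the unique edge set from a Wavefront OBJ
    file, as a lightweight alternative to a render-based export."""
    verts, edges = [], set()
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == 'v':
                verts.append(tuple(float(x) for x in parts[1:4]))
            elif parts[0] == 'f':
                # face entries look like "3", "3/1", or "3/1/2" (1-indexed)
                idx = [int(p.split('/')[0]) - 1 for p in parts[1:]]
                # walk the face loop, storing each edge exactly once
                for a, b in zip(idx, idx[1:] + idx[:1]):
                    edges.add((min(a, b), max(a, b)))
    return verts, sorted(edges)
```

Feeding the unwrapped OBJ through such a parser yields edge index pairs that can be mapped back to vertex positions for further manipulation in Rhino or Processing.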

Process Plan

Our plan is to recreate objects in a distorted way that simulates how our memory stores versions of these objects. It starts by scanning three dimensional objects, and deconstructing them into different components or “pixels”. These “pixels” will then undergo a computational or physical distortion before being put back together into a new version of the original object.

Ways of putting these objects together may include the separation of the structure of the object (the skeleton) from its texture or feel (skin). This separation dictates the way that the distorted version of it will be re-constructed.

As output, we expect to re-physicalize the data in a variety of media, in line with our own tool and material access. For this part of the process, we will work in parallel.

Available Resources

Collectively, we have the following resources:

Machines: Macs, Windows computers, Canon DSLR camera, Nikon camera, sewing machines, scanners, 3D printer, projector

Materials: Paper, fabric

Coding knowledge/programs: Adobe Suite, Java, Python, Grasshopper

Other resources: Arduinos, motors, sensors, Kinect, PrimeSense sensor

Group C – Visually Reinterpreting 2D Art
https://courses.ideate.cmu.edu/16-455/s2020/1926/group-c-visually-reinterpreting-2d-art/
Posted Mon, 30 Mar 2020

Overall Description:

We are abstracting two images by recreating them as dot patterns based on the brightness of the image: darker colors sit further back in space, while brighter colors come forward. We then angle the two images so their dot patterns overlap, creating a third image. This third image can then be reinterpreted by anyone, artist or not, who creates a drawing based on what they see in it.

This is not dissimilar to how people interpret gestalt images. However, instead of designing images to be reinterpreted in a specific way, like the one below, we are creating them to have an effect similar to a Rorschach image, allowing for a multitude of reimaginings.

Gestalt
Rorschach

Additionally, users have choice over the original two images, and can either pick or create images with the intention of abstracting them. By controlling the colors and shapes of the initial images, users could control the output image. Different images will always result in a different third image.

Process:

Step 1 – Take two images and load them into the Grasshopper script. Grasshopper generates circles based on the brightness of each pixel of the image. The result can be read multiple ways; as the artist, you can find an alternate view that interests you.

Step 2 – Take the newly rendered image and draw your own interpretation of it, either in Photoshop or on a piece of paper.

Step 3 – Continue to develop the new drawing through analog, digital, or computational means.
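The brightness mapping in Step 1 can be sketched outside Grasshopper. This hypothetical Python version works on a raw grayscale grid and mirrors the mapping described above: brighter pixels become larger dots placed forward in space, while darker pixels recede:

```python
def brightness_to_dots(gray, cell=1.0, max_radius=0.45, depth=10.0):
    """Convert a grayscale grid (values 0-255) into dot records
    (x, y, z, radius). Brighter pixels get larger dots pushed forward
    in space (larger z); darker pixels sit further back."""
    dots = []
    for row_i, row in enumerate(gray):
        for col_i, value in enumerate(row):
            b = value / 255.0  # 0 = black, 1 = white
            dots.append((col_i * cell, row_i * cell,
                         b * depth, b * max_radius * cell))
    return dots

# A tiny 2 x 2 "image": black, mid-gray, white, dark gray
for dot in brightness_to_dots([[0, 128], [255, 64]]):
    print(dot)
```

Running two images through the same mapping and viewing the dot clouds at an angle produces the overlapping third image described above.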

Collaborative Roles:

  • Mike will focus on the computational aspects of the workflow
  • Tay and Jon will focus on the art and creative applications of the workflow

Individual Fabrication Methods:

Since a drawing can be interpreted in multiple ways, each individual can create a drawing of what they visualize in the third image. This will differ from person to person, since each individual will see the image differently.

Critical Physical Resources:

Each member of our project team needs a computer that can run Rhino and Grasshopper. For the physical aspect of the design workflow, pen and paper should suffice, although working on an iPad or drawing in Photoshop are alternatives that also allow for human dexterity.

Group E – Pleating
https://courses.ideate.cmu.edu/16-455/s2020/1877/group-e-pleating/
Posted Mon, 30 Mar 2020

Description

Pleating, the act of inducing folds into sheet material, has long been practiced to provide mobility, shape, structure, and utility in a variety of industries around the world. With a strong presence in the textile and fabric industry, the style of the pleat and the way it was employed were driven by factors such as utility, efficiency, cultural influences, and social standing. Today the pleat is seen in cutting-edge performance clothing, religious garb, and the latest fashion trends.

Considering the appealing design and structural implications of the pleat, it is no surprise that it has also been heavily integrated into origami, sculpture, and architecture. At a larger scale, the pleat has been used in standing seam metal roofs, folded metal facades, installation sculpture, furniture, and more. As technology and digital fabrication accelerate, new methods of customized fabrication and mass production will emerge, creating an opportunity for the development of creative yet simple designer interfaces.

We have chosen to design a pipeline that allows users to employ traditional pleating techniques in a virtual environment to create unforeseen and complicated geometries from a thin sheet good. Once the desired pleat design has been reached, the interface will be capable of printing the folding outline on a flat sheet. With the proper linework and folding instructions embedded in the material, any user could theoretically pick up the sheet and employ pleating to create the geometry originally modeled on the computer.
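Printing the folding outline could work along these lines. The sketch below writes crease linework to an SVG using the common crease-pattern convention of distinct dash styles for mountain and valley folds; the function name, page size, and format choices are our own assumptions, not the interface's actual implementation:

```python
def write_fold_svg(path, creases, width=216, height=279):
    """Write crease linework to an SVG sized for US Letter (mm).
    `creases` is a list of (x0, y0, x1, y1, kind) with kind 'M' or 'V'.
    Mountains print dash-dot and valleys dashed, a common
    crease-pattern convention."""
    dash = {'M': '6,2,1,2', 'V': '6,3'}
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{width}mm" height="{height}mm" '
             f'viewBox="0 0 {width} {height}">']
    for x0, y0, x1, y1, kind in creases:
        parts.append(f'<line x1="{x0}" y1="{y0}" x2="{x1}" y2="{y1}" '
                     f'stroke="black" stroke-width="0.3" '
                     f'stroke-dasharray="{dash[kind]}"/>')
    parts.append('</svg>')
    with open(path, 'w') as f:
        f.write('\n'.join(parts))

# Two creases of a simple knife pleat
write_fold_svg('pleat.svg', [(10, 10, 200, 10, 'M'),
                             (10, 20, 200, 20, 'V')])
```

A printed sheet carrying this linework gives the folder both the positions and the directions of every crease.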

This workflow offers the unique ability to create asymmetrical, nonuniform, complex folding patterns that could be used in sculpture, abstract artwork, or architecture. The combination of varying pleating techniques will create interesting intersections that would normally take multiple construction attempts to understand. The workflow will relieve the end user of this complex negotiation, allowing the focus to be placed on the overall design.

Workflow Diagram

Collaborative Roles

We expect that each of us will participate in each phase of the development collaboratively. As we progress into the work, we may differentiate roles to accomplish separate tasks. However, throughout the process, we will all be testing and folding to refine the system.

Individual Fabrication Methods

With a common design tool in place, individuals will work on creating customized pleat patterns that are completely unique to them. The initial prototype fabrication will take place on 8.5” x 11” paper, however, there is an opportunity to expand the fabrication to other mediums as the tool is developed. 

Critical Physical Resources

Printer, 20lb (or slightly heavier) paper for prototypes, ruler, cutting board, knife

Folding Experiments

Group A – Evil Super Grandma
https://courses.ideate.cmu.edu/16-455/s2020/1849/group-a-evil-super-grandma/
Posted Sun, 29 Mar 2020

Description

Quilt-making is a traditionally community-oriented craft, from Kit’s friends on the American frontier to Stefani Danes + Doug Cooper’s quilt/mural collaboration. Given the current pandemic, our group decided to focus on one question: how do we leverage advances in technology to keep our communities creating together? 

We chose to design a game that would lean into our current social situation. Targeting people with little to no experience in quilting, we chose to develop a program using Spark AR that would guide users through developing individual quilt squares. This guidance will come in the form of an animated grandmother who will teach you to create a summoning square for the Almighty Super Evil Grandma.

Through object recognition, she will be able to tell how much of the pattern you have finished and guide you to the next step. When each square is complete, all quilters must come together (after the pandemic) and finish the quilt to summon the super grandma. Individual squares will determine certain aspects of the end animation. Although each square will follow a selected pattern, it can be customized with various fabric textures and colors, changing super grandma's appearance and character.

(Learn a new hobby, have fun with your friends, respect your grandmothers, kind of?)

Workflow Diagram

Collaborative Roles

  • Tim will focus on most of the software development with regards to object recognition, organizing the assets, and triggering certain events.
  • Ophelie will focus on developing the quilt templates and some of the assets/animations.
  • Elizabeth will focus on creating the instructions and some of the assets/animations.

Individual Fabrication Methods

Each individual participant may follow a downloadable pattern and aurally dictated instructions to hand sew a simple individual tile. There is some flexibility in that the participant may choose which preset pattern they would like to use, and they may choose what colors they prefer to determine the end result.

Critical Physical Resources

All participants will require access to scrap fabric and basic hand sewing supplies. They will also need Instagram and/or Facebook installed on a mobile device in order to access the AR content.

We, the project team, will additionally require a computer that can run the Spark AR program suite and Blender to create the instructions and the relevant assets.
