gabagoo-FinalProject

CHA(i)Rmageddon

 
 


For my final project, I made several large plots, each composed of some distribution of chairs. Since I did not make the chairs, nor did I improve or add much to the .svg generation, the bulk of my compositional input was in the distribution of the chairs. The two main kinds of distributions I used were a planar distribution and a large pile formed from a physics simulation.

Inspiration

- Doris Salcedo's Installations (above)
- SIGGRAPH 2007
- Chris Jordan's Photography

Process Overview

1. Data scraping
2. 2D Experiments
3. 3D File Cleanup
4. Composition Experiments
5. Material Experiments

data scraping
I spent quite a bit of time trying to create my own chair generator, only to come to the conclusion that chairs are incredibly diverse. Naturally, the next bright idea one might have is Googling 'chairs dataset.' The first result was a machine learning paper from 2014, Seeing 3D Chairs. The dataset included 2D renderings of chairs for image detection as well as the 3D files that the images were rendered from. The 3D files that were linked (left) were broken, but I was able to write a web scraping script that would download whatever files were still available.
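
The scraper itself reduced to a loop of the kind sketched below; the link list, paths, and filenames are stand-ins, not my actual script:

```python
# Minimal scraper sketch: walk a saved list of model links and keep whatever
# still resolves. "chair_links.txt" and the output folder are placeholders.
import os
import requests

def download_surviving_files(url_list_path="chair_links.txt", out_dir="dae_files"):
    os.makedirs(out_dir, exist_ok=True)
    with open(url_list_path) as f:
        urls = [line.strip() for line in f if line.strip()]
    for url in urls:
        name = url.rstrip("/").split("/")[-1]
        try:
            resp = requests.get(url, timeout=30)
            resp.raise_for_status()
        except requests.RequestException:
            continue  # dead link -- skip it and move on
        with open(os.path.join(out_dir, name), "wb") as out:
            out.write(resp.content)
```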

 

2d experiments
Before I figured out how to fix the deprecated links to the 3D models, I did some experimentation with OpenCV to detect contours for plotting the chairs. At the time, my intention was to annotate the legs of the chairs so that I could superimpose human arms and legs onto the chair legs.
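
A rough sketch of that contour pass, assuming OpenCV 4's two-value findContours return; the filename and thresholds are placeholders rather than my exact values:

```python
# Threshold a rendered chair image and pull out plottable outlines.
import cv2

img = cv2.imread("chair_render.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Simplify each contour so the plotter isn't tracing pixel noise.
outlines = [cv2.approxPolyDP(c, 1.5, True) for c in contours]
```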

 

3d file cleanup
After figuring out the web scraping, there still remained quite a bit of standardization to do across the 3D models. The major issues were establishing a standard scale across all the chairs and fixing inverted normals (left, middle). The 3D-to-2D .svg pipeline I used is Blender's built-in Freestyle SVG exporter.

The files I downloaded from 3dwarehouse were in .DAE format. I converted them all to .STLs using Blender, but the conversion did not always work. From the ~1600 links, I was able to scrape ~250 .dae files (I hit the site's download limits and was too lazy to grab more). Of those ~250 .dae files, I successfully converted ~170 into a workable format within Blender.
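
The cleanup ran headless through Blender's Python API. Something along these lines, assuming Blender 3.x operator names; the paths and the unit-scale normalization target are illustrative:

```python
# Headless Blender batch cleanup, run as:
#   blender --background --python cleanup.py
import glob
import os
import bpy

for dae_path in glob.glob("dae_files/*.dae"):
    bpy.ops.wm.read_factory_settings(use_empty=True)  # fresh empty scene
    try:
        bpy.ops.wm.collada_import(filepath=dae_path)
    except RuntimeError:
        continue  # some .dae files simply fail to import

    for obj in bpy.context.scene.objects:
        if obj.type != 'MESH':
            continue
        bpy.ops.object.select_all(action='DESELECT')
        obj.select_set(True)
        bpy.context.view_layer.objects.active = obj
        # Normalize to a common bounding size so all chairs share one scale.
        if max(obj.dimensions) > 0:
            s = 1.0 / max(obj.dimensions)
            obj.scale = (s, s, s)
        # Recalculate normals outward to fix the inverted-normal meshes.
        bpy.ops.object.mode_set(mode='EDIT')
        bpy.ops.mesh.select_all(action='SELECT')
        bpy.ops.mesh.normals_make_consistent(inside=False)
        bpy.ops.object.mode_set(mode='OBJECT')

    # Export the whole cleaned scene as one STL next to the source file.
    stl_path = os.path.splitext(dae_path)[0] + ".stl"
    bpy.ops.export_mesh.stl(filepath=stl_path)
```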

 

composition experiments
physics simulation
While sorting through the mess of STLs that my previous work had generated, I ended up scaling all the chairs to a similar size and organizing them into a simple lattice pattern (left).

Around this time, the large AxiDraw had arrived and not many people were using it, so I set my sights on creating a super intricate large plot. Hence, I arrived at the physics simulation built into Blender (top, right).
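
Setting up such a pile in Blender's Python API looks roughly like this; a sketch rather than my exact scene, with sizes and frame counts as guesses:

```python
# Chair-pile setup via Blender's rigid body simulation: a passive floor,
# every chair mesh tagged as an active body, then bake and let them fall.
import bpy

bpy.ops.mesh.primitive_plane_add(size=50)
floor = bpy.context.active_object
bpy.ops.rigidbody.object_add()  # operates on the active object (the floor)
floor.rigid_body.type = 'PASSIVE'

for obj in bpy.data.objects:
    if obj.type == 'MESH' and obj is not floor:
        bpy.context.view_layer.objects.active = obj
        bpy.ops.rigidbody.object_add()  # defaults to an ACTIVE body
        obj.rigid_body.collision_shape = 'CONVEX_HULL'  # cheap but stable

# Run the simulation long enough for the pile to settle, then bake it.
bpy.context.scene.frame_end = 250
bpy.ops.ptcache.bake_all(bake=True)
```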

After the major milestone critique, I decided to revisit the idea of planar distributions, as they felt less overwhelming than the dumpster of chairs I had previously plotted. What I ended up with was a mixture of Perlin noise fields and a bit of manual object manipulation within Blender.
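
The noise-driven placement can be sketched like so; I'm assuming the `noise` package for Perlin values, and the grid dimensions and scales are arbitrary:

```python
# Planar distribution sketch: a loose grid of chairs, with a Perlin field
# perturbing both position and rotation.
import math
from noise import pnoise2

def plan_placements(rows=12, cols=18, spacing=1.2, noise_scale=0.15):
    placements = []
    for i in range(rows):
        for j in range(cols):
            n = pnoise2(i * noise_scale, j * noise_scale)
            x = j * spacing + n * spacing        # drift along the field
            y = i * spacing + n * spacing * 0.5
            angle = n * math.pi                  # rotate with the field too
            placements.append((x, y, angle))
    return placements
```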

 

material experiments
Throughout the majority of this project, I had been plotting with a 0.38 MUJI pen and a Pilot G-2 05. These allowed individual chairs to plot with a lot of clarity. As a test, I experimented with a more t-shirt-like pattern using a thicker Sharpie, which I then plotted over with pen.

Takeaways

Technically, I learned a lot about creating large-scale workflows for generating the kinds of pieces I wanted. In the process I learned web scraping and Blender Python scripting, and improved my skills with the vpype CLI, OpenCV scripting, and the Blender plug-in Sverchok.

I suppose the larger insight I have gained is that my approach to a project now has a much broader scope, because I know I am capable of combining various kinds of skills into a larger pipeline for achieving an overarching goal.

gabagoo-FinalWIP

 

downloading data from Seeing 3D chairs

2D experimentation

3D rendering experimentation

SIGGRAPH 2007

 

Doris Salcedo

 

gabagoo-MidSemester

I decided to redo my blobs project. Looking back at some old sketches of my linewalks, I found ^these^ curves, formed from a sweeping radius that is always continuous.
I used offset curves to fill my blobs and played more with the physicality of the plotter: interrupting the plot to create gaps in the offsets (left), and shifting the page to achieve a Moiré effect (right).

The blobs are generated around a central spline produced via the aforementioned linewalk technique. The generated spline reminded me of a hair on the wall of a steamy shower. To contrast the abnormality of the hair with the uniformity of the Moiré, I decided to frame a hair (shown below).
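
The offset fill itself is a small loop over Shapely's negative buffers; a minimal sketch, with the inset step as a placeholder (the gaps came from interrupting the plotter, not from the geometry):

```python
# Offset-curve fill: repeatedly inset the blob outline until it collapses.
from shapely.geometry import Polygon

def offset_fill(blob: Polygon, step=0.5):
    rings = []
    shrunk = blob
    while not shrunk.is_empty:
        rings.append(shrunk.boundary)  # one pen pass per inset ring
        shrunk = shrunk.buffer(-step)  # negative buffer shrinks inward
    return rings
```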

 

gabagoo-PatternReading

From the Graphic Games reading, I found the below pattern really interesting because of how it uses lineweight to achieve a sort of optical illusion effect. I like how it's not a grid tiling, but a row-wise tiling. Pretty cool stuff.

gabagoo-TilingPattern

I wanted to create tiles that do not tile in the conventional sense, but rather that create emergent structures from overlapping. I created a few basic primitives (top) and used triangular coordinates to implement the overlapping (bottom).

I struggled quite a bit to find a good tiling algorithm because there is a trade-off between dense overlapping clusters (top left) and leaving space between tiles (top right). In the end, I just randomly placed a set of 20 tiles on each iteration of a loop (bottom left). I also experimented with vector fields and subsampling.
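
The random-placement loop boils down to something like the following; the lattice basis and extent are illustrative rather than my exact parameters:

```python
# Drop 20 tiles per iteration at random positions on a triangular lattice.
import math
import random

A = (1.0, 0.0)                         # lattice basis vector a
B = (0.5, math.sqrt(3) / 2)            # basis vector b, at 60 degrees to a

def place_tiles(n_tiles=20, extent=30):
    placements = []
    for _ in range(n_tiles):
        u = random.randint(0, extent)  # triangular-lattice coordinates
        v = random.randint(0, extent)
        x = u * A[0] + v * B[0]        # convert (u, v) to cartesian
        y = u * A[1] + v * B[1]
        placements.append((x, y))
    return placements
```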

 

The final plots came out well. The left two plots came from a different algorithm than random tile placement: it used probability distributions to determine where each new tile should occur. I think I will experiment further with two-tone colors like in the purple plot; however, I'm a bigger fan of the monochrome plots.
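
One way to read "probability distributions decide the next tile" is a weight grid that gets damped wherever a tile lands; the damping rule below is my guess at the idea, not the original code:

```python
# Sample tile positions from a weight grid, discouraging crowding.
import numpy as np

rng = np.random.default_rng()

def sample_placements(n_tiles=40, grid=64):
    weights = np.ones((grid, grid))
    placed = []
    for _ in range(n_tiles):
        flat = weights.ravel() / weights.sum()
        idx = rng.choice(weights.size, p=flat)  # weighted cell pick
        r, c = divmod(idx, grid)
        placed.append((r, c))
        # Discourage (but don't forbid) new tiles near this one.
        r0, r1 = max(r - 2, 0), min(r + 3, grid)
        c0, c1 = max(c - 2, 0), min(c + 3, grid)
        weights[r0:r1, c0:c1] *= 0.3
    return placed
```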

gabagoo-BlobFamily

▲ ▲ ▲ ▲ ▲ PLOTS ▲ ▲ ▲ ▲ ▲

I created my blobs by bashing various Python libraries together. I used sklearn's SVM to get a 2D Voronoi classifier. Using that classifier, I sorted random points into buckets and used scipy.spatial's ConvexHull algorithm to get a shapely LineString so that I could later clip the hatching. I drew the actual curves using a modified version of Wikipedia's Catmull-Rom implementation. To add randomness to the edges, I used a Perlin noise library to implement an approximation of p5.scribble.

My inspiration was to create little potato creatures that were all holding hands (above). Due to time constraints, I was only able to get the potato bodies. One thing I found interesting from the readings was the Town and Country salt and pepper shakers. I really vibed with the idea that blobs can be personified.
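
Condensed into a sketch, the first half of that pipeline looks roughly like this (the Catmull-Rom and scribble steps are omitted, and the seed counts and ranges are placeholders):

```python
# An SVM fit on one seed per blob approximates Voronoi regions; random
# points are bucketed by predicted label and hulled into closed outlines
# that later clip the hatching.
import numpy as np
from sklearn.svm import SVC
from scipy.spatial import ConvexHull
from shapely.geometry import LineString

rng = np.random.default_rng(0)
seeds = rng.uniform(0, 10, size=(8, 2))           # one seed per blob
labels = np.arange(len(seeds))
clf = SVC(kernel='rbf', gamma='scale').fit(seeds, labels)

points = rng.uniform(0, 10, size=(2000, 2))
buckets = clf.predict(points)

blob_outlines = []
for lbl in labels:
    cluster = points[buckets == lbl]
    if len(cluster) < 3:
        continue                                  # too few points to hull
    hull = ConvexHull(cluster)
    ring = cluster[hull.vertices]
    # Close the ring so it can clip hatch lines later.
    blob_outlines.append(LineString(np.vstack([ring, ring[:1]])))
```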

▼ ▼ ▼ ▼ ▼ GENERATOR ▼ ▼ ▼ ▼ ▼

gabagoooooo-Drawingbots

The mosaic builder I found on drawingbots.net was pretty cool. I find it interesting because it reminds me of those command-line drawings made of only ASCII characters, except here, the characters are tiny doodles. I feel like the results have more character and a more homemade feel than the ASCII drawings.

gabagoo-Hatchin

I experimented with Python drawing tools for the first time. Since vpype is much more modular than p5.js, I was able to tap into my existing Python knowledge. In particular, using Numpy's ndarrays made the programming incredibly easy. I also used Shapely's clipping function to crop my work into the grid cells. My use of matrix operations, particularly in the third row, was surprising and unintended: the rotation transformations were relative to each line's centroid, and as a result the hatching looks like a plane with depth. One drawback of Shapely is that many of the p5.js primitives I was used to were not available, and I had to implement them myself.
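
That centroid-relative rotation is a one-liner in Shapely's affinity module; a minimal sketch of the effect, with the angles and line counts invented:

```python
# Rotating each hatch line about its own centroid produces the depth effect.
import numpy as np
from shapely import affinity
from shapely.geometry import LineString

hatch = [LineString([(0, y), (10, y)]) for y in np.linspace(0, 10, 20)]

# origin='centroid' makes every rotation local to its line, so the tilt
# sweeps across the rows and reads like a receding plane.
rotated = [
    affinity.rotate(line, angle, origin='centroid')
    for line, angle in zip(hatch, np.linspace(0, 60, len(hatch)))
]
```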

gabagoo-LostrittoReading

Something that struck me from the reading is the claim that a drawing cannot be edited. I thought that viewing any interaction with the physical media as additive is a very unique lens. The example given, about the erasing of a pencil mark, expresses this notion cleanly. I also thought the idea that a drawing needs to be perceived was interesting as well. A computer aided program, with initial conditions given by a human is then definitionally a drawing. It does beg the question: if a human designs a machine to design a machine to draw something, is it still a drawing? In other words, to what extent can the human be removed from the process for the output to still be considered a drawing? The idea of an anti-definition seems to hint that the answer is not binary.