Student Area

frog-FinalWIP

For my final project, I’m programming a Universal Robots robotic arm to inscribe and mark polymer clay surfaces. This is primarily a two-pronged exploration of the material possibilities of polymer clay and the robotic arm’s capability to use tools and manipulate physical media.

The inspiration for this project is honestly more rhizomatic than emergent from a collection of works of interest. That said, I was at least partially inspired by the robot-human collaborations of artist Sougwen Chung.

Throughout the semester, I’ve done most of my work using pen and plotter. While the plotter creates an exciting interaction point where computational principles and the physical properties of media try to coexist and even contradict each other, I wanted to explore a new space with different effects at play. I decided it was time to try a new medium, so I moved into paint and polymer clay. Initially I tried merely plotting on clay:

Above – using pen plotter and needle tool to poke points into clay

Above – misregistered plot

After a few experiments, it seemed that the plotter just didn’t have the power needed to do anything substantial to the clay – the tool would often snag and the motor couldn’t always actually push the clay, messing up registration.

I realized that the robotic arm in the Frank-Ratchye studio could push what I could do computationally and give me a continuous third dimension. The robotic arm also promised more physical pushing strength than the plotter. From there, I realized that polymer clay pairs nicely with the additional dimension, so I stuck with the robot.

I’ve been using polymer clay for most of my life but have taken a long hiatus, so this feels like a natural direction for my work and an exciting opportunity to explore the medium with a fresh set of eyes.

I started exploring for this project by digging into clay by hand in different ways. My intent was to test what sorts of marks my tools could make and to generate concrete ideas for the final project. I produced the following:

From here, all that I had left to do was learn how to use the robot. After getting comfortable with manually controlling the robot, I moved on to automating it. I received extensive help from my classmate Dinkolas in understanding how to program the robot and am indebted to his time and effort.

Below are the preliminary results of merely having the robot tear across a block of clay along random paths:

The robot managed to fold the clay up during the inscription process. More work will need to go into measurements to ensure that the clay stays flat, or perhaps this folding direction deserves a bit more time and thought.
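Roughly, the random-path generation could look like the sketch below. The work area dimensions, cut depth, and the URScript-style output are my assumptions for illustration, not the exact setup Dinkolas helped me with:

```python
import random

# Hypothetical work area over the clay block, in meters (robot base frame).
X_MIN, X_MAX = 0.30, 0.45
Y_MIN, Y_MAX = -0.10, 0.10
Z_SURFACE = 0.02            # clay surface height
Z_CUT = Z_SURFACE - 0.004   # how deep the tool drags through the clay
Z_SAFE = Z_SURFACE + 0.03   # travel height between strokes

def random_path(n_points=5):
    """One random tear across the block: n waypoints at cutting depth."""
    return [(random.uniform(X_MIN, X_MAX),
             random.uniform(Y_MIN, Y_MAX),
             Z_CUT) for _ in range(n_points)]

def to_urscript(paths, acc=0.1, vel=0.05):
    """Emit URScript-style movel commands with a fixed, downward tool orientation."""
    lines = ["def inscribe():"]
    for path in paths:
        x0, y0, _ = path[0]
        lines.append(f"  movel(p[{x0:.4f}, {y0:.4f}, {Z_SAFE:.4f}, 0, 3.14, 0], a={acc}, v={vel})")
        for x, y, z in path:
            lines.append(f"  movel(p[{x:.4f}, {y:.4f}, {z:.4f}, 0, 3.14, 0], a={acc}, v={vel})")
        lines.append(f"  movel(p[{x:.4f}, {y:.4f}, {Z_SAFE:.4f}, 0, 3.14, 0], a={acc}, v={vel})")
    lines.append("end")
    return "\n".join(lines)

print(to_urscript([random_path() for _ in range(8)]))
```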

Video pending; I need to remove audio from the footage that I have.

wip

progress, inspiration, etc

PROGRESS:
Initially, I messed around with blue/red 3D stereoscopy by plotting different angles of one object on top of each other. I thought it was funny that I went from 3D (TiltBrush) -> 3D (Blender) -> 2D (plotting the SVG) -> “3D” (stereoscopy). Even though this experimentation was not enough to be a final piece, it was useful to do because I realized I liked the layering of color. I may or may not go further into the use of contrasting colors in my final.

Finally, I have decided to create a sort of hellscape. It is also an experiment with different mediums and a really funky workflow. Below is the SVG of a draft version. From here, I want to fill the background with more scenes by adding more 3D objects in Blender, as well as drawing on top of the plot by hand. I would also like to use the big AxiDraw to be able to add more detail.

Plotted:

Original image:

However, this looks more like an up-close view of a traditional hellscape. While I like the perspective, I am also intrigued by art that ignores perspective, such as The Garden of Earthly Delights by Hieronymus Bosch:

Three-dimensionality exists within each object and its surroundings, but the overall composition looks flat. I think using the giant AxiDraw to create this type of scene would be interesting.

gabagoo-FinalProject

CHA(i)Rmageddon

 
 


For my final project, I have made several large plots composed of some distribution of chairs. Since I did not make the chairs, nor did I improve or add much to the .svg generation, the bulk of my compositional input was in how the chairs are distributed. The two main kinds of distributions I used were a planar distribution and a large pile formed from a physics simulation.

Inspiration

- Doris Salcedo's Installations (above)
- SIGGRAPH 2007
- Chris Jordan's Photography

Process Overview

1. Data scraping
2. 2D Experiments
3. 3D File Cleanup
4. Composition Experiments
5. Material Experiments
data scraping
I spent quite a bit of time trying to create my own chair generator, only to come to the conclusion that chairs are incredibly diverse. Naturally, the next bright idea that one might have is Googling 'chairs dataset.' The first result was this machine learning paper from 2014. The dataset included 2D renderings of chairs for image detection as well as the 3D files that the images were rendered from. The 3D files that were linked (left) were broken, but I was able to create a web scraping script that would download whatever files were still available.
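The gist of that script is sketched below; the link file name and output folder are placeholders, and the real links in the dataset point to 3D Warehouse pages rather than direct files:

```python
import os
import requests

# Hypothetical list of model URLs pulled from the dataset's link file.
with open("chair_links.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

os.makedirs("chairs_dae", exist_ok=True)

for url in urls:
    name = url.rstrip("/").split("/")[-1]
    out_path = os.path.join("chairs_dae", name)
    if os.path.exists(out_path):
        continue  # already downloaded
    try:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
    except requests.RequestException:
        continue  # dead link -- skip and keep whatever is still available
    with open(out_path, "wb") as out:
        out.write(resp.content)
```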

 

2d experiments
Before I figured out how to fix the broken links to the 3D models, I did some experimentation with OpenCV to detect contours for plotting the chairs. At the time, my intention was to annotate the legs of the chairs so that I could superimpose human arms and legs onto them.
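The contour extraction amounted to something like the following; the filename, threshold, and area cutoff are assumptions for illustration:

```python
import cv2

# Load one of the 2D chair renderings and pull out contours for plotting.
img = cv2.imread("chair_render.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 240, 255, cv2.THRESH_BINARY_INV)  # renders have white backgrounds
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Keep only reasonably large contours and simplify them into plottable polylines.
polylines = []
for c in contours:
    if cv2.contourArea(c) < 100:
        continue
    approx = cv2.approxPolyDP(c, epsilon=1.0, closed=True)
    polylines.append(approx.reshape(-1, 2))
```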

 

3d file cleanup
After figuring out the web scraping, there still remained quite a bit of standardization to do across the 3D models. The major issues involved creating a standard scale across all the chairs and fixing inverted normals (left, middle). The 3D-to-2D .svg pipeline that I used is Blender's built-in Freestyle SVG exporter.

The files I downloaded from 3dwarehouse were in .DAE format. I converted them all to .STLs using Blender, but the conversion did not always work. From the ~1600 links, I was able to scrape ~250 .dae files (I reached their download limits and was too lazy to download more). Of those ~250 .dae files, I was able to successfully convert ~170 into a workable format within Blender.
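The batch conversion inside Blender boils down to something like the sketch below (run headless with `blender --background --python convert.py`). The paths are placeholders and the operators are the Blender 2.9x-era Collada importer and STL exporter; normal fixing and rescaling were separate passes:

```python
import glob
import os
import bpy

# Batch-convert scraped .dae files to .stl, skipping any that fail to import.
for dae_path in glob.glob("/path/to/chairs_dae/*.dae"):
    # Start from an empty scene each time.
    bpy.ops.wm.read_factory_settings(use_empty=True)
    try:
        bpy.ops.wm.collada_import(filepath=dae_path)
    except RuntimeError:
        continue  # some conversions just don't work
    stl_path = os.path.splitext(dae_path)[0] + ".stl"
    bpy.ops.export_mesh.stl(filepath=stl_path)
```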

 

composition experiments
physics simulation
While sorting through the mess of STLs that my previous work had generated, I ended up scaling all the chairs to a similar size and organizing them into a simple lattice pattern (left).

Around this time, the large AxiDraw had arrived and not many people were using it, so I set my goal to create a super intricate large plot. Hence, I arrived at using the built-in physics simulation native to Blender (top, right).

After the major milestone critique, I decided to revisit the idea of planar distributions, as they felt less overwhelming than the dumpster of chairs I had previously plotted. What I ended up with was a mixture of Perlin noise fields and a bit of manual object manipulation within Blender.
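The Perlin-noise side of that placement can be sketched with Blender's `mathutils.noise`; the grid spacing, noise scale, and the "chair" name filter below are assumptions rather than my actual scene setup:

```python
import bpy
from mathutils import Vector, noise

# Jitter and rotate chairs on a planar grid using a Perlin-style noise field.
chairs = [obj for obj in bpy.data.objects if "chair" in obj.name.lower()]
spacing = 2.0        # assumed lattice spacing
noise_scale = 0.15   # assumed sampling scale of the noise field

for i, obj in enumerate(chairs):
    gx, gy = i % 10, i // 10                       # simple lattice position
    n = noise.noise(Vector((gx * noise_scale, gy * noise_scale, 0.0)))
    obj.location = Vector((gx * spacing + n, gy * spacing + n * 0.5, 0.0))
    obj.rotation_euler.z = n * 3.14159             # rotate each chair by the field value
```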

 

material experiments
Throughout the majority of this project, I had been plotting with a 0.38 MUJI pen and a Pilot G-2 05. These allowed individual chairs in the resulting plots to come through with a lot of clarity. I also did a test experimenting with a more t-shirt-like pattern using a thicker Sharpie, which I then plotted over with pen.

Takeaways

Technically, I learned a lot about creating large-scale workflows for generating the kinds of pieces that I wanted. In the process I learned web scraping and Blender Python scripting, and improved my skills with the vpype CLI, OpenCV scripting, and the Blender plug-in Sverchok.

I suppose the larger insight that I have gained from this is that my approach to a project now has a much broader scope, because I know that I am capable of combining various kinds of skills into a larger pipeline for achieving an overarching goal.

dinkolas – FinalWIP

Here’s the reference image I’ve been using to guide my thinking about 2D, 2.5D, and 3D drawing:

An Exhibition of Art by Ian Miller – Nouvelle Art

I have developed my own boolean library, partially just for fun, but also so I have extensive control over overlapping regions, which will be necessary for some of my braiding algorithms.
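Not my library, but a stand-in sketch of the kind of overlap control it gives me, here using shapely to clip one strand's ribbon out of another so the back strand reads as passing underneath at a crossing:

```python
from shapely.geometry import LineString

# Two overlapping "strands": buffer the front centerline into a ribbon,
# then subtract it (plus a small margin) from the back strand so the back
# strand breaks where the front one crosses over it.
front = LineString([(0, 0), (5, 5)]).buffer(0.4)
back_line = LineString([(0, 5), (5, 0)])
back_visible = back_line.difference(front.buffer(0.1))  # small gap around the crossing

# back_visible is the plottable remainder of the back strand (possibly multiple pieces).
for segment in getattr(back_visible, "geoms", [back_visible]):
    print(list(segment.coords))
```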

Here’s a rough summary of my thinking for algorithms that will be easier to develop now that I have the control I want:

Tentacles will have a start and an end, with bases and end caps. They can potentially spawn branches.

I already have some code to generate fields for the sky/ground. Depending on time, there may be many modular doodads.

sapeck-FinalWIP

I am using the Rotrics arm to make an image/drawing modifier. My prompt is the robot answering “this is what you should add.” Some of the responses will be humorous (doodling on faces, arrows to move things, etc.) and others will be more constructive (parallel offsetting existing lines, adding doodles inspired by parts of the drawing, etc.).

Inspiration:

  • The Useless Box: https://www.youtube.com/watch?v=aqAUmgE3WyM
  • Art Sqool: https://artsqool.cool/
  • Google Quick Draw: https://quickdraw.withgoogle.com/

To start, I am working on a robotic L.H.O.O.Q. machine. Duchamp’s original piece is a mustache on a print of the Mona Lisa.

Image source: https://en.wikipedia.org/wiki/L.H.O.O.Q.#/media/File:Marcel_Duchamp,_1919,_L.H.O.O.Q.jpg

I have so far completed the facial recognition and am working on the pen and surface location mapping. Then I will draw a mustache on every face in the frame using the Rotrics robot arm. This is how I will produce the class postcards as well. I also want to try putting an actual head into the frame.

The location tags are implemented with AprilTags and the face tracking uses MediaPipe Face Mesh. Face Mesh seems to require proportionally large faces within the frame (it is optimized for selfie cameras), so I have to limit the camera frame to 640 px by 480 px. The AprilTags seem to get more accurate at higher resolutions, but not much more accurate at this range.
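The detection side of the pipeline looks roughly like the sketch below; the library choices (pupil-apriltags, MediaPipe Face Mesh) match what's described above, but the filename, face count, and the landmark index used as a mustache anchor are assumptions:

```python
import cv2
import mediapipe as mp
from pupil_apriltags import Detector

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=5)
tag_detector = Detector(families="tag36h11")

frame = cv2.imread("frame.png")
frame = cv2.resize(frame, (640, 480))  # Face Mesh wants proportionally large faces
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

tags = tag_detector.detect(gray)       # AprilTags define the paper/pen coordinate frame
results = face_mesh.process(rgb)       # Face Mesh gives per-face landmarks

if results.multi_face_landmarks:
    for face in results.multi_face_landmarks:
        # Landmarks are normalized; the index here is an assumed upper-lip anchor.
        lm = face.landmark[0]
        x, y = int(lm.x * frame.shape[1]), int(lm.y * frame.shape[0])
        print("mustache anchor (pixels):", x, y, "| tags visible:", len(tags))
```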

My current goal is mustaches, but time permitting I want to expand it to draw other features, and not just on faces. For example, detecting contours and drawing offsets, filling in shapes, etc.

marimonda – FinalProject

For my final project, I have been exploring generative and experimental comic books.

How it started (previous work):


Inspiration:

Gareth A. Hopkins’ experimental comics

John Pound’s generative comics

Greg Borenstein’s generative detective comics

Extra resources:

Full 10-page comic plotted:

Text dataset for text generation: Gutenberg Poetry Corpus by Allison Parrish.

 

aether-FinalProject

Initial Question:

Through this creative inquiry, I seek to understand the ways in which computational simulation can create complex, geometric compositions. Using particle simulations, I’m investigating how one may relinquish discrete control of compositional geometry by endowing these autonomous agents with varying behaviors. These agents then react and respond to various environmental parameters, which I manage directly.

How may one create an artistic composition by allowing digital space to act as environmental pressure on a computational material?
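A minimal sketch of that agent/environment split is below: each particle carries its own behavior parameter, while a single environmental field (the part I control directly) pushes all of them. The field shape, agent count, and step sizes are placeholders, not my actual simulation:

```python
import random

FIELD_STRENGTH = 0.8  # the environmental pressure I tune directly

class Agent:
    def __init__(self):
        self.x, self.y = random.random(), random.random()
        self.vx = self.vy = 0.0
        self.bias = random.uniform(-0.2, 0.2)   # per-agent behavior
        self.trail = [(self.x, self.y)]

    def step(self, dt=0.01):
        # Environmental force: a rotational field around the canvas center.
        dx, dy = self.x - 0.5, self.y - 0.5
        fx, fy = -dy * FIELD_STRENGTH, dx * FIELD_STRENGTH
        self.vx += (fx + self.bias) * dt
        self.vy += fy * dt
        self.x += self.vx * dt
        self.y += self.vy * dt
        self.trail.append((self.x, self.y))

agents = [Agent() for _ in range(500)]
for _ in range(1000):
    for a in agents:
        a.step()
# Each agent.trail is a polyline that can be written out as SVG for plotting.
```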

 

Inspiration:

Murjoni Merriweather

 

Open Source Hair Library

Process:

 

My direction for this project has pivoted. Rather than having particles merely trace complex geometries as an end, I seek to use forces, enacting geometries, and particle systems to reflect the complex fiber architectures of African hair.

https://courses.ideate.cmu.edu/60-428/f2021/wp-content/uploads/2021/11/20211117_hairTest.mp4


In the second test of the particle trails emanating from a 3D head model, I switched to an extra-fine Precise V5 rolling ball pen, which was thinner than my Micron pen.

I also used color in a test. I found that 500 particles may be a bit heavy for 8.5 x 11 in paper. Material constraints also need to be considered in future tests, as I need a substrate and mark-making tools that collaborate well together.

 

lemonbear-FinalWIP

For my final project, I’m creating a series of 12 postcards generated by a neural net trained on a year of messages I’ve sent to my partner. Each postcard is trained on progressively more data (i.e., the first card is representative of Nov. 2020 only, the second card is representative of Nov. & Dec. 2020, and so on).

Additionally (though I’m still workshopping this portion), the fronts of the cards are living rooms generated from the Google Quick, Draw! dataset. As the series progresses, the rooms also become progressively more lived-in and elaborate.
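The Quick, Draw! data comes as one JSON drawing per line, each holding a list of strokes; a rough sketch of pulling a few sketches into plottable polylines is below (the filenames and furniture categories are placeholders):

```python
import json

# Quick, Draw! simplified data: one JSON object per line, with "drawing"
# holding strokes as [[x0, x1, ...], [y0, y1, ...]] in a 0-255 box.
def load_drawings(path, limit=10):
    drawings = []
    with open(path) as f:
        for i, line in enumerate(f):
            if i >= limit:
                break
            strokes = json.loads(line)["drawing"]
            polylines = [list(zip(xs, ys)) for xs, ys in strokes]
            drawings.append(polylines)
    return drawings

# Placeholder categories that could furnish a living room.
couches = load_drawings("couch.ndjson", limit=3)
chairs = load_drawings("chair.ndjson", limit=3)
# ...then translate/scale these polylines into a room composition before plotting.
```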

Some of my inspirations were Nick Bantock’s Griffin & Sabine series, which is told in epistolary form:

Inside pages of the book “The Pharos Gate, Griffin & Sabine’s Lost Correspondence” – Written and Illustrated by Nick Bantock

I enjoyed these books immensely as a kid; there was something thrilling about the voyeuristic quality of opening someone else’s mail, amplified by the lush illustrations & calligraphy. I wanted to emulate the intimacy of the narrative.

I also enjoyed various creative datavis projects, including Dear Data by Giorgia Lupi and Stefanie Posavec. The ways they bridged data, storytelling, and beauty/visual interest were very compelling to me.

All 12 postcards in .svg form are below the cut:


bookooBread – FinalProject

My project is going to be a collaborative drawing session with the plotter. When you draw something (in the drawing app that I am currently making), the AxiDraw will respond in some altered/responsive way. I plan on adding as many features as I can that allow for different variations of response. For example, maybe one drawing feature reads in your drawing’s coordinates and draws hairy lines that appear to sprout out of your drawing. Since what is being drawn onscreen is also being projected onto the area where the AxiDraw is plotting, it will look as if the plotter’s lines and the user’s lines are part of the same surface/drawing. Right now I am running into a lot of issues with getting the initial drawing functionality and projection working properly, so I am not that far along. Below, I am showing my line plotted as a dotted line.
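A sketch of the "hairy lines" response idea is below; the hair lengths and densities are made-up parameters, and the actual app feature may differ:

```python
import math
import random

def hairy_response(points, hair_len=8, hairs_per_segment=3):
    """Given the user's polyline (screen coordinates), return short 'hair'
    strokes sprouting roughly perpendicular to each segment."""
    hairs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / length, dx / length           # unit normal to the segment
        for _ in range(hairs_per_segment):
            t = random.random()                       # random spot along the segment
            bx, by = x0 + dx * t, y0 + dy * t
            side = random.choice((-1, 1))
            ln = hair_len * random.uniform(0.5, 1.5)
            hairs.append([(bx, by), (bx + nx * side * ln, by + ny * side * ln)])
    return hairs

# Each returned hair is a 2-point polyline to hand off to the AxiDraw.
```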

Tests…

The plotter draws a dotted version of my line (and you can see a bug where the pen doesn’t go up before it makes the first drawing).

Demo of above:

https://vimeo.com/646995264
Initial interactive tests –> plotter mirrors what I draw (+ some funny faces made by Shiva and me)

Setup:

The goal is to project my screen onto the area of the paper so my drawing is overlaid with the plotter’s drawing. The projection mapping didn’t work in time for this crit 🙂

Sougwen Chung has been a big inspiration for my work.

gabagoo-FinalWIP

 

downloading data from Seeing 3D chairs

2D experimentation

3D rendering experimentation

SIGGRAPH 2007

 

Doris Salcedo