dinkolas – FinalWIP

Here’s the reference image I’ve been using to guide my thinking about 2D, 2.5D, and 3D drawing:

An Exhibition of Art by Ian Miller – Nouvelle Art

I have developed my own boolean library, partially just for fun, but also so I have extensive control over overlapping regions, which will be necessary for some of my braiding algorithms.
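The workhorse primitive in any 2D boolean library is segment-segment intersection. As a hedged illustration (this is my sketch of the standard parametric approach, not the author's actual library code):

```python
# Sketch of a core primitive for 2D boolean operations:
# the intersection point of two line segments, found by solving
# p1 + t*(p2 - p1) = p3 + u*(p4 - p3) for t and u.

def seg_intersect(p1, p2, p3, p4):
    """Return the intersection of segments p1-p2 and p3-p4,
    or None if they do not cross."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if denom == 0:
        return None  # parallel or collinear
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None
```

Once segments can be split at intersections, overlapping regions can be classified and kept or discarded, which is exactly the control braiding algorithms need.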

Here’s a rough summary of my thinking for algorithms that will be easier to develop now that I have the control I want:

Tentacles will have a start and an end, with bases and end caps. They can potentially spawn branches.

I already have some code to generate fields for the sky/ground. Depending on time, there may be many modular doodads.


I am using the Rotrics arm to make an image/drawing modifier. My prompt is the robot answering “this is what you should add.” Some of the responses will be humorous (doodling on faces, arrows to move things, etc.) and others will be more constructive (parallel offsetting existing lines, adding doodles inspired by parts of the drawing, etc.).


  • The Useless Box: https://www.youtube.com/watch?v=aqAUmgE3WyM
  • Art Sqool: https://artsqool.cool/
  • Google Quick Draw: https://quickdraw.withgoogle.com/
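The "parallel offsetting existing lines" response mentioned above could be sketched roughly like this (a hedged illustration; the naive vertex-normal approach and parameters are my own, not the project's code):

```python
import math

# Naive parallel offset of a polyline: push each vertex along the
# unit normal of the segment that follows it. Good enough for
# gentle curves; a robust offset would handle joins and self-
# intersections.

def offset_polyline(points, dist):
    out = []
    for i, (x, y) in enumerate(points):
        j = min(i, len(points) - 2)       # segment adjacent to vertex i
        (ax, ay), (bx, by) = points[j], points[j + 1]
        length = math.hypot(bx - ax, by - ay)
        nx, ny = -(by - ay) / length, (bx - ax) / length  # unit normal
        out.append((x + dist * nx, y + dist * ny))
    return out
```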

To start, I am working on a robotic L.H.O.O.Q. machine. Duchamp’s original piece is a mustache on a print of the Mona Lisa.

Image source: https://en.wikipedia.org/wiki/L.H.O.O.Q.#/media/File:Marcel_Duchamp,_1919,_L.H.O.O.Q.jpg

I have so far completed the facial recognition and am working on the pen and surface location mapping. Then I will draw a mustache on every face in the frame using the Rotrics robot arm. This is how I will produce the class postcards as well. I also want to try putting an actual head into the frame.

The location tags are implemented with AprilTags, and the face tracking uses MediaPipe Face Mesh. Face Mesh seems to require proportionally large faces within the frame (it is optimized for selfie cameras), so I have to limit the camera frame to 640 px by 480 px. AprilTag detection gets more accurate at higher resolutions, but not much more accurate at this range.
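The pen/surface mapping step amounts to converting camera pixels into plotter-bed coordinates using the tags as known reference points. A hedged sketch, assuming three tag centers with known bed positions and an affine model (the real pipeline may well use a full homography instead):

```python
# Fit an affine transform x' = a*x + b*y + c, y' = d*x + e*y + f
# from three point correspondences (e.g. AprilTag centers seen in
# camera pixels vs. their known plotter-bed positions), solved by
# Cramer's rule. Illustrative only.

def affine_from_points(src, dst):
    (x0, y0), (x1, y1), (x2, y2) = src
    det = x0 * (y1 - y2) - y0 * (x1 - x2) + (x1 * y2 - x2 * y1)

    def solve(vals):  # one row of the affine matrix
        v0, v1, v2 = vals
        a = (v0 * (y1 - y2) - y0 * (v1 - v2) + (v1 * y2 - v2 * y1)) / det
        b = (x0 * (v1 - v2) - v0 * (x1 - x2) + (x1 * v2 - x2 * v1)) / det
        c = (x0 * (y1 * v2 - y2 * v1) - y0 * (x1 * v2 - x2 * v1)
             + v0 * (x1 * y2 - x2 * y1)) / det
        return a, b, c

    ax, bx, cx = solve([p[0] for p in dst])
    ay, by, cy = solve([p[1] for p in dst])
    return lambda x, y: (ax * x + bx * y + cx, ay * x + by * y + cy)
```

With this in hand, a mustache anchor detected in the camera frame maps straight to a pen position.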

My current goal is mustaches, but time permitting I also want to expand the system to draw other features, and not just on faces: for example, detecting contours and drawing offsets, filling in shapes, etc.

marimonda – FinalProject

For my final project, I have been exploring generative and experimental comic books.

How it started (previous work):



Gareth A. Hopkins’ Experimental comics

John Pound’s Generative comics.

Greg Borenstein’s generative detective comics.

Extra resources:

Full 10-page comic plotted:

Text dataset for text generation: the Gutenberg Poetry Corpus by Allison Parrish.



Initial Question:

Through this creative inquiry, I seek to understand the ways in which computational simulation can create complex, geometric compositions. Using particle simulations, I'm investigating how one may relinquish discrete control of compositional geometry by endowing these autonomous agents with varying behaviors. These agents then react and respond to various environmental parameters, which I manage directly.

How may one create an artistic composition by allowing digital space to act as environmental pressure on a computational material?
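The idea of agents shaped by environmental pressure can be sketched minimally: an agent follows a direction field rather than a path drawn directly. The sinusoidal field below is an invented placeholder for whatever environmental parameters the piece actually uses.

```python
import math

# An autonomous agent whose trajectory emerges from an
# environmental field (here, a simple sinusoidal flow of
# heading angles) rather than from direct control.

def flow(x, y):
    """Environmental parameter: a heading angle at each point."""
    return math.sin(y * 0.1) * math.pi

def trace(x, y, steps=100, speed=1.0):
    path = [(x, y)]
    for _ in range(steps):
        a = flow(x, y)
        x, y = x + speed * math.cos(a), y + speed * math.sin(a)
        path.append((x, y))
    return path
```

Varying `flow` (the "environment") changes every agent's composition at once, which is the relinquishing of discrete control described above.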



Murjoni Merriweather


Open Source Hair Library



My direction for this project has pivoted. Rather than having particles merely trace complex geometries as an end in itself, I seek to use forces, enacted geometries, and particle systems to reflect the complex fiber architectures of African hair.

[videopack id="2264"]https://courses.ideate.cmu.edu/60-428/f2021/wp-content/uploads/2021/11/20211117_hairTest.mp4[/videopack]

In the second test of the particle trails emanating from a 3D head model, I switched to an extra-fine Precise V5 rolling ball pen, which is thinner than my Micron pen.

I also used color in a test. I found that 500 particles may be a bit heavy for 8.5 × 11 in paper. Material constraints also need to be considered in future tests, as I need a substrate and mark-making tools that work well together.



lemonbear – FinalWIP

For my final project, I’m creating a series of 12 postcards generated by a neural net trained on a year of messages I’ve sent to my partner. Each postcard is trained on progressively more data (i.e., the first card is representative of Nov. 2020 only, the second card is representative of Nov. & Dec. 2020, and so on).
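The cumulative-training scheme is just expanding data windows: card n sees every message through month n. A sketch of that slicing step only (month keys and message lists are hypothetical placeholders; the neural-net training itself is out of scope here):

```python
# Card n is trained on all messages from the start of the series
# through month n.

MONTHS = ["2020-11", "2020-12", "2021-01"]  # ... one entry per card

def cumulative_corpora(messages_by_month, months=MONTHS):
    """Yield (month, corpus) where corpus holds every message
    sent up to and including that month."""
    corpus = []
    for month in months:
        corpus = corpus + messages_by_month.get(month, [])
        yield month, list(corpus)
```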

Additionally (though I’m still workshopping this portion), the fronts of the cards are living rooms generated from the Google Quick, Draw! dataset. As the series progresses, the rooms also become progressively more lived-in and elaborate.

Some of my inspirations were Nick Bantock’s Griffin & Sabine series, which is told in epistolary form:

Inside pages of the book “The Pharos Gate, Griffin & Sabine’s Lost Correspondence” – Written and Illustrated by Nick Bantock

I enjoyed these books immensely as a kid; there was something thrilling about the voyeuristic quality of opening someone else’s mail, amplified by the lush illustrations & calligraphy. I wanted to emulate the intimacy of the narrative.

I also enjoyed various creative datavis projects, including Dear Data by Giorgia Lupi and Stefanie Posavec. The ways they bridged data, storytelling, and beauty/visual interest were very compelling to me.

All 12 postcards in .svg form are below the cut:


bookooBread – FinalProject

My project is going to be a collaborative drawing session with the plotter. When you draw something (on the drawing app that I am currently making), the AxiDraw will respond in some altered/responsive way. I plan on adding as many features as I can that allow for different variations of response. For example, maybe one drawing feature reads in your drawing’s coordinates and draws hairy lines that appear to sprout out of your drawing. Since what is being drawn onscreen is also being projected onto the area where the AxiDraw is plotting, it will look as if the plotter’s lines and the user’s lines are part of the same surface/drawing. Right now I am running into a lot of issues with getting the initial drawing functionality and projection working properly, so I am not that far along. Below, I am showing my line plotted as a dotted line.
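The "hairy lines" response could look something like this (a hedged sketch; hair length, density, and the perpendicular-sprouting rule are invented parameters, not the app's actual code):

```python
import math, random

# Given the user's polyline, sprout short "hairs" perpendicular
# to each segment at random positions, with slightly varied
# lengths. Returns a list of (start, end) hair segments for the
# plotter to draw.

def hairy_response(points, hair_len=5.0, hairs_per_seg=3, seed=0):
    rng = random.Random(seed)
    hairs = []
    for (ax, ay), (bx, by) in zip(points, points[1:]):
        length = math.hypot(bx - ax, by - ay) or 1.0
        nx, ny = -(by - ay) / length, (bx - ax) / length  # unit normal
        for _ in range(hairs_per_seg):
            t = rng.random()                # position along the segment
            wiggle = rng.uniform(0.5, 1.0)  # vary each hair's length
            sx, sy = ax + t * (bx - ax), ay + t * (by - ay)
            hairs.append(((sx, sy),
                          (sx + nx * hair_len * wiggle,
                           sy + ny * hair_len * wiggle)))
    return hairs
```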


Plotter draws a dotted version of my line (and you can see a bug where the pen doesn’t go up before it makes the first mark).

Demo of above:

Initial interactive tests -> plotter mirrors what I draw (+ some funny faces made by Shiva and me)


The goal is to project my screen onto the area of the paper so my drawing is overlaid with the plotter’s drawing. The projection mapping didn’t work in time for this crit 🙂

Sougwen Chung has been a big inspiration for my work.



downloading data from Seeing 3D chairs

2D experimentation

3D rendering experimentation

Siggraph 2007


Doris Salcedo


stickz – FinalProject

Jittery Line Drawing App 

I changed the direction of the final project: it’s now a drawing app that can draw in 3D and responds to the orientation of the user’s phone. I’ve been able to get multi-color lines working, implemented as springs, and got some basic physics working. I wasn’t able to implement the phone orientation into the springs last night as planned, but I’ll get that up and running 🙁
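Lines-as-springs boils down to Hooke's law between consecutive points. A hedged 1D sketch (the app itself works in 2D/3D and likely uses different constants; these are invented for illustration):

```python
# A line as a chain of spring-connected points: Hooke's law plus
# damping, integrated with explicit Euler. Constants are invented.

REST, K, DAMP, DT = 1.0, 20.0, 0.9, 0.02

def step(pos, vel):
    """Advance a 1D chain of spring-connected points one tick."""
    forces = [0.0] * len(pos)
    for i in range(len(pos) - 1):
        stretch = (pos[i + 1] - pos[i]) - REST
        f = K * stretch          # Hooke's law on segment i
        forces[i] += f           # pulled toward the next point
        forces[i + 1] -= f       # equal and opposite reaction
    vel = [(v + f * DT) * DAMP for v, f in zip(vel, forces)]
    pos = [p + v * DT for p, v in zip(pos, vel)]
    return pos, vel
```

Phone orientation would enter as an extra per-point force (e.g. gravity rotated by the device attitude) added into `forces` each tick.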

Inspiration: https://experiments.withgoogle.com/ink-space 

Stuff I’m thinking about : 

  • Implementing Multi-touch for drawing – using LingDong’s template 
  • Different color themes for drawing lines 
  • Phone orientation that messes with the physics of the lines (then 3D orientation) 
  • More sophisticated 3D line forms

Screenrecordings of the Glitch Drawing App So Far



aahdee – FinalWIP

I spent the last few weeks devouring Complete Pleats by Paul Jackson. My intention for my final project is computational pleating: using a laser cutter to score paper that is then drawn on by an AxiDraw plotter. Before I can do that, I need to know how to pleat paper effectively.

I’ve read through approximately 85% of the textbook and created multiple pleating examples that were shown in the book. Even though I have a background in paper arts and bookbinding, the book was rather challenging. Below are a few images of what I’ve created.

These were all created by hand, and there are multiple failures behind each of them. After that was done, I headed to the laser cutter to attempt to pleat some paper. Simply put, it was a mess.

The paper can easily be formed into mountain folds, but it snaps in half when I try to make valley folds. This may be because I used thicker 100 lb Bristol paper. The laser cutter also leaves nasty brown marks on the paper, and when I attempted to fold it, the charred color spread to the rest of the sheet.

I may just need to work out the kinks with the laser cutter, but considering that I can only access it for one hour per day, I don’t have a lot of room for rapid prototyping. I also need to research the ingredients of the pens I would use if I laser-cut plotted art, as some chemicals can produce toxic results when a laser is applied. Considering that I’m also using a language and libraries I’m not familiar with, there are enough variables here that I may just cut the laser cutter out of the equation and fold the paper by hand.


miniverse – FinalProject

monster maker

[videopack id="2283"]https://courses.ideate.cmu.edu/60-428/f2021/wp-content/uploads/2021/11/test.mp4[/videopack]

This is a little broken at the moment and the drawings are a tad hasty, but essentially I want to make a recursive monster maker. It’ll take drawings of animal parts and automatically scale and fit them into predefined openings. Here are a few broken runs of the algorithm.
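The scale-and-fit step can be sketched as a bounding-box transform (a hedged illustration with invented names; the actual project's fitting logic may differ):

```python
# Map a drawn part's bounding box into a predefined opening,
# preserving aspect ratio and centering the result.

def fit_into_opening(points, opening):
    """points = [(x, y), ...]; opening = (ox, oy, ow, oh)."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    minx, miny = min(xs), min(ys)
    w = (max(xs) - minx) or 1.0   # guard against degenerate parts
    h = (max(ys) - miny) or 1.0
    ox, oy, ow, oh = opening
    s = min(ow / w, oh / h)       # uniform scale so the part fits
    # translate so the scaled part is centered inside the opening
    dx = ox + (ow - w * s) / 2 - minx * s
    dy = oy + (oh - h * s) / 2 - miny * s
    return [(x * s + dx, y * s + dy) for x, y in points]
```

Recursion comes from treating each fitted part's own openings as new targets for the next round of parts.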

I drew some inspiration from a project Golan showed me: Mario Klingemann “Ernst” (examples below):

I’ve done some assemblage in the past with hand drawn pieces:


This is a truchet tiling with hand drawn pieces.


Some process shots:


(weird bug I need to fix)

A tool I built to draw, save, and generate JSONs.