I am using the Rotrics arm to make an image/drawing modifier. My prompt is the robot answering “this is what you should add.” Some of the responses will be humorous (doodling on faces, arrows to move things, etc.) and others will be more constructive (parallel offsetting existing lines, adding doodles inspired by parts of the drawing, etc.).
The Useless Box: https://www.youtube.com/watch?v=aqAUmgE3WyM
Art Sqool: https://artsqool.cool/
Google Quick Draw: https://quickdraw.withgoogle.com/
To start, I am working on a robotic L.H.O.O.Q. machine. Duchamp’s original piece is a mustache (and goatee) penciled onto a postcard reproduction of the Mona Lisa.
I have so far completed the facial recognition and am working on the pen and surface location mapping. Then I will draw a mustache on every face in the frame using the Rotrics robot arm. This is how I will produce the class postcards as well. I also want to try putting an actual head into the frame.
The location tags are implemented with AprilTags, and the face tracking uses MediaPipe Face Mesh. Face Mesh seems to require proportionally large faces within the frame (it is optimized for selfie cameras), so I have to limit the camera frame to 640 px by 480 px. The AprilTags get somewhat more accurate at higher resolutions, but not by much at this range.
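As a sketch of the pen/surface mapping step: given a few AprilTag reference points whose camera-pixel positions and plotter positions are both known, a least-squares affine fit maps any detected face pixel into plotter coordinates. The tag positions and numbers below are hypothetical placeholders, not my actual calibration:

```python
import numpy as np

def camera_to_plotter(px_pts, mm_pts):
    """Fit an affine map from camera pixels to plotter millimeters,
    given matching reference points (e.g. AprilTag centers)."""
    px = np.asarray(px_pts, dtype=float)
    mm = np.asarray(mm_pts, dtype=float)
    # Augment with a column of ones so translation is included.
    A = np.hstack([px, np.ones((len(px), 1))])
    # Least-squares solve for the 3x2 affine matrix.
    M, *_ = np.linalg.lstsq(A, mm, rcond=None)
    return lambda p: np.hstack([p, 1.0]) @ M

# Three tags at known (made-up) plotter positions:
to_mm = camera_to_plotter([(0, 0), (640, 0), (0, 480)],
                          [(0, 0), (200, 0), (0, 150)])
center = to_mm((320, 240))  # frame center maps to (100.0, 75.0) mm
```

Three non-collinear tags are the minimum for an affine fit; a fourth tag would let the least-squares step average out detection noise.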
My current goal is mustaches, but time permitting I want to expand the system to draw other features, and not just on faces: detecting contours and drawing offsets, filling in shapes, etc.
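For the contour-offset idea, a minimal parallel offset might look like this. It is a naive per-point normal offset with no self-intersection handling; a real version of this would want a proper polygon-offsetting library:

```python
import math

def offset_polyline(pts, d):
    """Offset an open polyline by distance d along per-point normals
    (a rough parallel line; no self-intersection handling)."""
    out = []
    for i, (x, y) in enumerate(pts):
        # Tangent direction from the neighboring point(s).
        x0, y0 = pts[max(i - 1, 0)]
        x1, y1 = pts[min(i + 1, len(pts) - 1)]
        dx, dy = x1 - x0, y1 - y0
        L = math.hypot(dx, dy) or 1.0
        # Left-hand normal of the tangent.
        out.append((x - dy / L * d, y + dx / L * d))
    return out

# A horizontal line offset 2 units to its left-hand side:
offset = offset_polyline([(0, 0), (10, 0), (20, 0)], 2)
```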
Through this creative inquiry, I seek to understand the ways in which computational simulation can create complex, geometric compositions. Using particle simulations, I’m investigating how one may relinquish discrete control of compositional geometry by endowing autonomous agents with varying behaviors. These agents then react and respond to various environmental parameters, which I manage directly. How may one create an artistic composition by allowing digital space to act as environmental pressure on a computational material?
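The setup can be sketched in a few lines: the agent owns its own motion, and I only author the field it moves through. The swirl field here is a made-up example environment, not one from my actual compositions:

```python
import math

class Particle:
    """Autonomous agent: I set the environment, not the path."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.trail = [(x, y)]

    def step(self, field, dt=1.0):
        # The environment (a vector field) acts as pressure on the agent.
        fx, fy = field(self.x, self.y)
        self.x += fx * dt
        self.y += fy * dt
        self.trail.append((self.x, self.y))

# Hypothetical environment: a gentle circular flow around the origin.
swirl = lambda x, y: (-y * 0.1, x * 0.1)
p = Particle(1.0, 0.0)
for _ in range(100):
    p.step(swirl)
# p.trail is now the drawable composition: a spiral the agent chose.
```

Plotting `p.trail` for many particles with different seeds and behaviors is where the relinquished control shows up: the field is authored, the trails are not.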
Open Source Hair Library
My direction for this project has pivoted. Rather than having particles merely trace complex geometries as an end in itself, I seek to use forces, enacted geometries, and particle systems to reflect the complex fiber architectures of African hair.
In the second test of the particle trails emanating from a 3D head model, I switched to an extra-fine Precise V5 rolling-ball pen, which was thinner than my Micron pen. I also used color in a test. I found that 500 particles may be a bit heavy for 8.5 × 11 in paper. Material constraints also need to be considered in future tests, as I need a substrate and mark-making tools that collaborate well together.
For my final project, I’m creating a series of 12 postcards generated by a neural net trained on a year of messages I’ve sent to my partner. Each postcard is trained on progressively more data (i.e., the first card is representative of Nov. 2020 only, the second card is representative of Nov. & Dec. 2020, and so on).
Additionally (though I’m still workshopping this portion), the fronts of the cards are living rooms generated from the Google Quick, Draw! dataset. As the series progresses, the rooms become progressively more lived-in and elaborate.
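The cumulative-data scheme for the twelve cards can be sketched as a slicing function. The message format and month indexing here are my own simplification for illustration, not the actual training pipeline:

```python
def cumulative_corpora(messages, start=(2020, 11), n_cards=12):
    """messages: list of ((year, month), text). Card i trains on every
    message from `start` through the i-th month after it, inclusive."""
    def month_index(ym):
        return ym[0] * 12 + ym[1]
    s = month_index(start)
    return [[t for ym, t in messages if s <= month_index(ym) <= s + i]
            for i in range(n_cards)]

# Toy data: card 0 sees Nov 2020 only, card 1 sees Nov & Dec, etc.
msgs = [((2020, 11), "hi"), ((2020, 12), "snow"), ((2021, 1), "new year")]
corpora = cumulative_corpora(msgs)
```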
Some of my inspirations were Nick Bantock’s Griffin & Sabine series, which is told in epistolary form:
I enjoyed these books immensely as a kid; there was something thrilling about the voyeuristic quality of opening someone else’s mail, amplified by the lush illustrations & calligraphy. I wanted to emulate the intimacy of the narrative.
I also enjoyed various creative datavis projects, including Dear Data by Giorgia Lupi and Stefanie Posavec. The ways they bridged data, storytelling, and beauty/visual interest were very compelling to me.
My project is going to be a collaborative drawing session with the plotter. When you draw something (in the drawing app I am currently making), the AxiDraw will respond in some altered/responsive way. I plan on adding as many features as I can that allow for different variations of response. For example, one feature might read in your drawing’s coordinates and draw hairy lines that appear to sprout out of your drawing. Since what is being drawn onscreen is also projected onto the area where the AxiDraw is plotting, it will look as if the plotter’s lines and the user’s lines are part of the same surface/drawing. Right now I am running into a lot of issues getting the initial drawing functionality and the projection working properly, so I am not that far along. Below, I am showing my line plotted as a dotted line.
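The “hairy lines” response, for instance, could be generated roughly like this; the segment sampling and jitter values are placeholder guesses, not tuned parameters:

```python
import math, random

def hairs(path, n=50, length=6.0, seed=0):
    """Generate short 'hair' segments sprouting from a drawn path.
    Each hair starts on a random point of the path and extends along
    the local normal with angular jitter, so it appears to grow out
    of the user's line."""
    rng = random.Random(seed)
    segs = []
    for _ in range(n):
        i = rng.randrange(len(path) - 1)
        (x0, y0), (x1, y1) = path[i], path[i + 1]
        t = rng.random()
        # Base point interpolated along the chosen segment.
        bx, by = x0 + (x1 - x0) * t, y0 + (y1 - y0) * t
        # Normal direction plus a little angular jitter.
        ang = math.atan2(y1 - y0, x1 - x0) + math.pi / 2 + rng.uniform(-0.4, 0.4)
        L = length * rng.uniform(0.5, 1.5)
        segs.append(((bx, by), (bx + L * math.cos(ang), by + L * math.sin(ang))))
    return segs

strokes = hairs([(0, 0), (50, 0), (100, 10)])
```

Each returned segment is a (start, end) pair, ready to be sent to the plotter as a short line.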
The plotter draws a dotted version of my line (and you can see a bug where the pen doesn’t lift before traveling to make the first drawing).
I changed the direction of the final project: it is now a drawing app that interacts with the orientation of the user’s phone and can draw in 3D. I’ve been able to get multi-color lines working (I implemented them as springs) and got some basic physics working. I wasn’t able to hook the phone orientation into the springs last night as planned, but I’ll get that up and running 🙁
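A minimal sketch of the spring-based lines, assuming a polyline whose consecutive points are joined by Hooke springs and integrated with damped semi-implicit Euler. The constants are invented, and the fixed gravity vector is where the phone-orientation vector would eventually plug in:

```python
def step_chain(pts, vels, rest=10.0, k=0.1, damp=0.8,
               gravity=(0.0, 0.5, 0.0), dt=1.0):
    """One physics step for a 3D polyline treated as a chain of springs.
    `gravity` is a stand-in for the phone-orientation force vector."""
    forces = [list(gravity) for _ in pts]
    for i in range(len(pts) - 1):
        dx = [pts[i + 1][a] - pts[i][a] for a in range(3)]
        d = max(sum(c * c for c in dx) ** 0.5, 1e-9)
        # Hooke's law along the segment: pull together when stretched.
        f = k * (d - rest) / d
        for a in range(3):
            forces[i][a] += f * dx[a]
            forces[i + 1][a] -= f * dx[a]
    for i in range(len(pts)):
        for a in range(3):
            vels[i][a] = (vels[i][a] + forces[i][a] * dt) * damp
            pts[i][a] += vels[i][a] * dt
    return pts, vels

# Two points stretched to twice the rest length settle back toward it
# while the whole chain drifts with "gravity".
pts = [[0.0, 0.0, 0.0], [0.0, 20.0, 0.0]]
vels = [[0.0] * 3, [0.0] * 3]
for _ in range(50):
    step_chain(pts, vels)
```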
I spent the last few weeks devouring Complete Pleats by Paul Jackson. My intention for my final project is computational pleating: using a laser cutter to score paper that is then drawn on by an AxiDraw plotter. Before knowing how to do that, I need to know how to pleat paper effectively.
I’ve read through approximately 85% of the textbook and recreated multiple pleating examples shown in the book. Even though I have a background in paper arts and bookbinding, the book was rather challenging. Below are a few images of what I’ve created.
These were all created by hand, and there are multiple failures behind each one. After that was done, I headed to the laser cutter to attempt to pleat some paper. Simply put, it was a mess.
The paper can easily be formed into mountain folds but snaps in half when I try to make valley folds. This may be because I used thicker 100 lb Bristol paper. The laser cutter also leaves nasty brown marks on the paper, and when I attempted to fold it, the charred color spread to the rest of the sheet. I may just need to work out my kinks with the laser cutter, but since I can only access it for one hour per day, I don’t have much room for rapid prototyping. I also need to research the ingredients of the pens I would use if I laser-cut plotted art, as some chemicals can produce toxic results when a laser is applied. Considering that I’m also using a language and libraries I’m not familiar with, there are so many variables here that I may just cut the laser cutter out of the equation and fold the paper by hand.
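One thing that might be worth testing on scrap before giving up on the cutter: scoring the valley folds as dashed (perforated) lines instead of solid scores, which sometimes stresses thick stock less. This generator is only a sketch of that idea, not a verified fix, and all dimensions are placeholders:

```python
def accordion_scores(width, height, pitch, dash=4.0, gap=2.0):
    """Score lines for an accordion pleat: solid lines for mountain
    folds, dashed lines for valley folds. Returns (start, end) pairs
    in the same units as width/height, ready to send to the cutter."""
    lines = []
    x, mountain = pitch, True
    while x < width:
        if mountain:
            lines.append(((x, 0.0), (x, height)))
        else:
            # Dashed score: short segments separated by gaps.
            y = 0.0
            while y < height:
                lines.append(((x, y), (x, min(y + dash, height))))
                y += dash + gap
        x += pitch
        mountain = not mountain
    return lines

# A 60 x 100 sheet with folds every 10 units:
scores = accordion_scores(60, 100, 10)
```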
This is at the moment a little broken and the drawings a tad hasty, but essentially I want to make a recursive monster maker. It’ll take drawings of animal parts and automatically scale and fit them into predefined openings. Here are a few broken runs of the algorithm.
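The scale-and-fit step is the most mechanical part. A sketch of fitting a part’s bounding box into an opening, with invented coordinates:

```python
def fit_into(part, box):
    """Uniformly scale and translate a set of points (a drawn animal
    part) so its bounding box fits centered inside a predefined
    opening. part: list of (x, y); box: (x0, y0, x1, y1)."""
    xs, ys = zip(*part)
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    bx0, by0, bx1, by1 = box
    # Uniform scale: the tighter of the two axes wins.
    s = min((bx1 - bx0) / w, (by1 - by0) / h)
    # Offsets that center the scaled part in the opening.
    ox = bx0 + ((bx1 - bx0) - w * s) / 2 - min(xs) * s
    oy = by0 + ((by1 - by0) - h * s) / 2 - min(ys) * s
    return [(x * s + ox, y * s + oy) for x, y in part]

# A 4x2 triangle scaled into a 10x10 opening at (10, 10):
placed = fit_into([(0, 0), (4, 0), (4, 2)], (10, 10, 20, 20))
```

Because the same function works on any closed drawing, the “recursive” part is just calling it again with the openings found inside the newly placed part.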
I drew some inspiration from a project Golan showed me: Mario Klingemann “Ernst” (examples below):
I’ve done some assemblage in the past with hand drawn pieces: