This project takes walk cycles and converts them to traces of motion.


The project was heavily inspired by Eadweard Muybridge’s captures, which tracked how the human (and horse) form moves in space.

Étienne-Jules Marey was another inspiration, specifically his traces of human locomotion.

However, my project is more aligned with Harold Edgerton’s series focusing on outlines, as it becomes more about the form than the movement of joints.


To make this project, I used Detectron2 to segment figures and extract their outlines.

A sample segmentation (Timothée Chalamet at the Met Gala).

Once I had the outline alpha, it was fairly simple to convert it to an SVG using the potrace library. From that point I began playing with what motion works best.
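The post uses potrace for this conversion; as a rough illustration of what that step involves, here is a minimal stand-in (not potrace itself — the helper name `mask_to_svg` is my own) that walks a binary alpha mask and emits its pixel boundary as raw SVG line segments:

```python
# Minimal stand-in for the mask -> SVG step. potrace fits smooth curves,
# but the core idea is finding the boundary between filled and empty pixels.
def mask_to_svg(mask):
    """Emit the boundary of a binary mask (list of rows) as SVG line segments."""
    h, w = len(mask), len(mask[0])
    filled = lambda x, y: 0 <= x < w and 0 <= y < h and bool(mask[y][x])
    lines = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            # emit an edge wherever a filled cell borders an empty one
            if not filled(x, y - 1): lines.append((x, y, x + 1, y))          # top
            if not filled(x, y + 1): lines.append((x, y + 1, x + 1, y + 1))  # bottom
            if not filled(x - 1, y): lines.append((x, y, x, y + 1))          # left
            if not filled(x + 1, y): lines.append((x + 1, y, x + 1, y + 1))  # right
    body = "".join(
        f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" stroke="black"/>'
        for x1, y1, x2, y2 in lines
    )
    return f'<svg xmlns="http://www.w3.org/2000/svg" width="{w}" height="{h}">{body}</svg>'

# a 2x2 filled block has a perimeter of 8 unit edges
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
doc = mask_to_svg(mask)
```

potrace additionally fits smooth Bézier curves to this boundary, which is why its output reads as a drawn line rather than a blocky pixel edge.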

Outlines from a video of a group doing the YMCA dance.

After several attempts, I realized that the walk cycle, which has a strong history in animation and drawing, would be a good subject for the series. The next step was to tune movements and sample rates to see what would look best.

I ended up deciding that sampling a straight-on walk at every frame looked best.
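The sample-rate experiments boil down to a stride over the per-frame outlines; a trivial sketch (the helper name is my own), where a step of 1 reproduces the every-frame sampling I settled on:

```python
# Keep every nth outline of a walk cycle; step=1 keeps every frame.
def sample_outlines(outlines, step):
    return outlines[::step]

cycle = [f"frame_{i}" for i in range(12)]
dense = sample_outlines(cycle, 1)   # every frame: all 12 outlines
sparse = sample_outlines(cycle, 3)  # sparser: 4 outlines
```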

Tracing motion was also a nod to my architectural training, as depicting motion in this fashion is an exercise many architecture students partake in during their technical education.


This project was a confidence booster in some ways and frustrating in others. When I first decided what I actually wanted to make, I was overwhelmed and had no idea if I could even pull off getting outlines. Funnily enough, the outlines were the easy part; the small details, like dotted lines which ended up mooshing together from blotted ink, were hard. I’m happy with the outcome, but there are definitely places I can still take it.


For my final project, I decided to make a tribute to the typology of drawing that focuses on the movement of living things. This typology explores motion, form, and time. One such example is Eadweard Muybridge’s Horse photographs.

The photographs became important as a series rather than individually, and in turn offer insight into the locomotion of a horse. Another example of this typology is the work of Harold Edgerton, which becomes more formal due to the stark contrast in the photographs.

In dialogue with these works, I took advantage of the image segmentation library Detectron2, which allowed me to generate outlines of people from an image.

A sample segmentation (Timothée Chalamet at the Met Gala).

Using these outlines, I began sampling videos to create long exposure outlines.
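One way to sketch the long-exposure idea, assuming each frame’s segmentation has already been reduced to a binary mask (the helper is illustrative, not the actual pipeline): OR-ing the masks accumulates every pose into a single image.

```python
# "Long exposure" at the mask level: OR each frame's segmentation mask
# into one accumulated image.
def long_exposure(frames):
    """frames: list of binary masks (lists of rows), all the same size."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0] * w for _ in range(h)]
    for mask in frames:
        for y in range(h):
            for x in range(w):
                out[y][x] |= mask[y][x]
    return out

# two frames of a 1-pixel "figure" stepping to the right
f0 = [[1, 0, 0]]
f1 = [[0, 1, 0]]
smear = long_exposure([f0, f1])  # -> [[1, 1, 0]]
```

In the actual project the accumulation happens with stacked SVG outlines rather than raster pixels, but the compositing logic is the same.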

Outlines from a video of a group doing the YMCA dance.

I found walk cycles particularly compelling, partially due to the uniqueness of an individual’s gait.

Outlines generated from a video of a person walking.

With walk cycles as my focus, I began recording my classmates’ walks.

A smear from a particularly nice walk.

Through these captures, the personality of a walk can be seen.

Another walk outline

Finally, to emphasize the temporality of the walk, I began experimenting with a dashed representation, fading out the edges of the moment.
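A minimal sketch of that treatment, assuming each sampled outline is already a list of points (the helper and its parameters are my own, not the project’s actual code): each outline becomes a dashed SVG polyline whose stroke opacity decays with its age in the sequence.

```python
# Dashed, fading outlines: older samples in the sequence fade out.
def fading_outlines(outlines, width, height, dash=4):
    """outlines: oldest-to-newest list of point lists [(x, y), ...]."""
    n = len(outlines)
    body = []
    for i, pts in enumerate(outlines):
        opacity = (i + 1) / n  # newest outline is fully opaque
        points = " ".join(f"{x},{y}" for x, y in pts)
        body.append(
            f'<polyline points="{points}" fill="none" stroke="black" '
            f'stroke-dasharray="{dash}" stroke-opacity="{opacity:.2f}"/>'
        )
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">{"".join(body)}</svg>')

svg_out = fading_outlines([[(0, 0), (10, 10)], [(5, 0), (15, 10)]], 20, 20)
```

Both the dash pattern and the fade are plain SVG presentation attributes (stroke-dasharray, stroke-opacity), so the same outlines can be re-styled without re-tracing.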


For my final project, I plan to create a tribute to Eadweard Muybridge. I am fascinated by his photo series of people in motion, and would like to capture similar information, but with the benefit of contemporary technology to layer it. My current plan of action is to have a projector pointed at people as they walk by, while copying their outlines as they move. Alternatively, I could take the outlines and try to layer them as plots, or geometry to laser cut. Part of the motivation behind this proposal is to move away from traditional concepts of what a drawing is towards something interactive.

11.08	Mon	-- Have segmentation working. 
11.10	Wed	-- Have image to segmentation working. 
11.15	Mon	-- Have projection working. 
11.17	Wed	-- Due: #11 (Major Milestone); CRITIQUE.
11.22	Mon	-- Refine graphic representation.
11.24	Wed	-- NO SESSION (Thanksgiving).
11.29	Mon	-- Document.
12.01	Wed	-- Due: #12 (Final Project); EXHIBITION.


This was not necessarily a fun process, but it was insightful. The flavor of the week was Clojure, a LISP dialect which is fairly popular these days. I ended up using NextJournal as my plotting environment, since it was the easiest way to get up and running with Clojure. I fought with the language a good amount, and having to rely on Java’s error messages was…trying. I did feel inspired by all the parentheses in Clojure–enough to make a bubbly drawing. I think this project represents hitting the barrier of trying new tools, where I was fighting just to print results without errors.


Continuing my practice of trying new tools for this course, I decided to try out the thi.ng umbrella for this project. The umbrella is a project by Karsten Schmidt in which he creates a giant package ecosystem full of every algorithm one could ever want. It was originally written in Clojure, and the design definitely still feels like Clojure. For instance, I spent a long time fighting with the system until I realized certain functions fail unless passed empty lists as a final parameter. That being said, the library is wildly powerful! I was able to achieve amazing results with very little code. I would totally use the umbrella again, though maybe with a much longer deadline. I found myself thinking of Good Shape while I worked—how it’s particularly difficult to describe what makes a shape “better.”

// NOTE: the import paths and a few fragments were missing from the original
// post; the package names and the reconstructed calls below follow the
// standard thi.ng umbrella modules these functions live in.
import { asSvg, quad, scatter, star, svgDoc, vertices, withAttribs } from '@thi.ng/geom';
import { fuzzyPoly } from '@thi.ng/geom-fuzz';

const width = 648;
const height = 864;

const buffer = 20;
const rad = 45;

// scatter random points inside the page rectangle, inset by buffer + rad
const s = scatter(
    quad(
        [buffer + rad, buffer + rad],
        [width - buffer - rad, buffer + rad],
        [width - buffer - rad, height - buffer - rad],
        [buffer + rad, height - buffer - rad],
    ),
    32, // sample count (the original value was lost)
);

// at every other point, place a cluster of 6 stars; otherwise a single star
const blobShapes = s.flatMap(([x, y], i) =>
    Array.from({ length: i % 2 ? 6 : 1 }, () =>
        star(
            rad + buffer * Math.random(),
            ~~(Math.random() * 5) + 3,
            [1.0, 1.5],
            {
                fill: 'none',
                stroke: 'black',
                'stroke-width': '1px',
                transform: `translate(${x},${y}) scale(${i / s.length})`,
            },
        ),
    ),
);

// distort each star into a wobbly, hand-drawn-looking blob
const blobs = blobShapes.map((shape) =>
    withAttribs(
        fuzzyPoly(vertices(shape), shape.attribs, { jitter: 5, curveScale: 0.5 }),
        shape.attribs,
    ),
);

const output = asSvg(svgDoc({ width, height }, ...blobs));
document.querySelector('#app').innerHTML = output;


This week I crashed and burned a little. I had originally intended to attempt the exercises in Julia using the Luxor wrapper for Cairo, but let myself get swept up in the excitement around VSketch, and therefore switched to Python a little late in the game. VSketch worked mostly great for the first two rows in my matrix, but when I hit the third row I began getting errors related to the maximum amount of geometry on screen. For some reason, as the geometry increased, issues popped up that mutated variables I was not even interacting with. I tried to extract the logic so I could work row by row, but the script was a little too messy to reasonably refactor. I should have fallen back to what I know (JavaScript). VSketch is a ton of fun to work in and I would love to keep exploring it, but this was probably not the right time.


I was particularly interested in the utility, which is able to turn a .STL file into an SVG.
Considering how much pain I go through trying to use tools like Make2D in Rhino, or writing a custom view-to-SVG script in Houdini, having a website that will just do it for me is really nice. I do wish I could get it to use contours rather than triangles, but I understand that would probably require significantly more work and processing power.
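That triangle-based output is easy to picture: the most direct STL-to-SVG conversion simply projects every facet orthographically (dropping Z) and draws it as-is. A minimal sketch — my own illustration, not the tool’s actual code:

```python
# Naive STL -> SVG: orthographic projection of each triangular facet.
# Real converters would also sort or cull hidden faces.
def stl_triangles_to_svg(triangles, width, height):
    """triangles: list of facets, each a list of 3 (x, y, z) vertices."""
    polys = []
    for tri in triangles:
        points = " ".join(f"{x},{y}" for x, y, _z in tri)  # drop Z
        polys.append(f'<polygon points="{points}" fill="none" stroke="black"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">{"".join(polys)}</svg>')

# two facets making up the front face of a unit cube
cube_face = [
    [(0, 0, 1), (1, 0, 1), (1, 1, 1)],
    [(0, 0, 1), (1, 1, 1), (0, 1, 1)],
]
svg_out = stl_triangles_to_svg(cube_face, 10, 10)
```

Contour extraction, by contrast, would require detecting silhouette edges between facets, which is the extra work and processing the post alludes to.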

The image below was generated using the tool.


Lostritto tackles the question of “what is drawing?” in an interesting way. Defining the concept of drawing by its negative is fascinating, but in many ways it also reflects the influence of architecture on the writing. Whereas artists tend to focus on evaluating whether a work is good art rather than on the binary of “art” or “not art,” the debate over which built structures get to call themselves architecture continues to this day. Nikolaus Pevsner famously said

A bicycle shed is a building; Lincoln Cathedral is a piece of architecture.

Lostritto even hits upon this issue, stating

without limits, it’s easy to see how almost anything can be a drawing.

Maybe almost anything can be a drawing.


The Licia He article was particularly delightful. I resonated with the “finding your way back” section, since sometimes these n-day challenges can feel more like forcing out art than creating it. A theme I see across the articles is artists experimenting with process to find the toolkits that allow them to express their ideas with the least friction.


I decided to do the following exercises in Rust as a technical challenge. This proved to be a little bit more of an undertaking than I expected, so here are some notes on some of the hurdles I ran into.

I decided to use nannou for most of the exercises, since I did not want to attempt implementing each interaction I needed in WGPU. Nannou seems incredibly powerful, but the guide left much to be desired (when it becomes more complete I will definitely come back for another look!). Figuring out how to register the events I cared about took a significant amount of searching, since the guide did not seem to document it. Eventually I used the all_functions example as a base and removed everything I didn’t care about. The second quirk that came up was nannou’s coordinate system, which puts the origin at the center of the screen. This proved to be problematic later on as I moved to SVG.

Rust is also completely opposed to implicit casts, which is fair, but it meant a lot of the code was spent shifting between data structures. This was particularly apparent in managing the points for lines, which required calling VecDeque::to_owned() each frame to draw the polyline.

The SVG generation was its own ordeal. Nannou’s line builder seems to use the lyon crate for its path rendering. Lyon, however, does not support actually exporting an SVG, so I ended up using the svg crate. It is also worth mentioning that the svg crate seems to be set up in a way that makes it hard to create SVGs programmatically: specifically, I had to declare the Data element as mutable and constantly reassign it to the result of data.move_to() or data.line_to().

All in all, I would say the technical challenge was fun to play with, but I would not recommend this method over more mature frameworks like Processing or openFrameworks.

Feel free to check out the code behind the images above on GitHub.