My project is going to be a collaborative drawing session with the plotter. When you draw something (on the drawing app that I am currently making), the AxiDraw will respond in some altered/responsive way. I plan on adding as many features as I can that allow for different variations of response. For example, maybe one drawing feature reads in your drawing's coordinates and draws hairy lines that appear to sprout out of your drawing. Since what is being drawn onscreen is also being projected onto the area where the AxiDraw is plotting, it will look as if the plotter's lines and the user's lines are part of the same surface/drawing. Right now I am running into a lot of issues with getting the initial drawing functionality and projection working properly, so I am not that far along. Below, I am showing my line plotted as a dotted line.
The plotter draws a dotted version of my line (and you can see a bug where the pen doesn't lift before it travels to the start of the first drawing).
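The "hairy lines" response could be sketched as follows, assuming the app hands over a stroke as a list of (x, y) points; the function name and its parameters are hypothetical placeholders:

```python
import math
import random

def hairy_lines(points, hair_len=8.0, jitter=0.3, seed=0):
    """For each segment of the user's polyline, emit a short 'hair'
    segment sprouting from the midpoint, perpendicular to the stroke."""
    rng = random.Random(seed)
    hairs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        if length == 0:
            continue
        # unit normal to the segment (the direction a hair sprouts in)
        nx, ny = -dy / length, dx / length
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        l = hair_len * (1 + jitter * (rng.random() - 0.5))
        hairs.append(((mx, my), (mx + nx * l, my + ny * l)))
    return hairs

# hairs sprouting from a horizontal stroke point straight up
stroke = [(0, 0), (10, 0), (20, 0)]
for start, end in hairy_lines(stroke):
    print(start, end)
```

Each returned pair of points is a tiny segment the plotter could draw alongside the projected stroke.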
For my final project, I plan to create a tribute to Eadweard Muybridge. I am fascinated by his photo series of people in motion, and would like to capture similar information, but with the benefit of contemporary technology to layer it. My current plan is to have a projector pointed at people as they walk by, copying their outlines as they move. Alternatively, I could take the outlines and layer them as plots, or as geometry to laser cut. Part of the motivation behind this proposal is to move away from traditional concepts of what a drawing is and toward something interactive.
To better understand my relationship to physical materials and formal fabrication processes, I am stepping away from 3D digital fabrication. As I divert my attention from machine knitting toward the pen plotter, I hope that the change in medium and fabrication process will help me uncover more about my approach to generative making. Thus far in my journey of self-inquiry, I have identified that I am interested in generating composition and form by engaging with the forces in a piece's environment:
I propose to use multi-agent particle systems to create fibrous forms and complex line compositions. I aim to cultivate visually complex compositions by attributing various behaviors to autonomous agents and endowing the agents' environment with physical properties. By leaving trails behind, the particle agents will create records of their positions, and visual arrangements will emerge as the trails characterize how the agents behave in digital space.
This investigation challenges me to develop mastery in controlling particle agents: I will initially aim to plot more "primitive" forms, then graduate to forming more complex ones using compound techniques and agent behaviors.
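The trail-recording idea above can be sketched as a minimal Python agent; the wander behavior and its parameters here are placeholder assumptions standing in for the richer behaviors described:

```python
import math
import random

class Agent:
    """A particle agent that wanders and records its trail.
    The trail of positions is the drawing that gets plotted."""
    def __init__(self, x, y, heading, seed=0):
        self.x, self.y, self.heading = x, y, heading
        self.rng = random.Random(seed)
        self.trail = [(x, y)]

    def step(self, speed=1.0, wander=0.2):
        # small random turn, then move forward; richer behaviors
        # (forces, neighbors, environment) would slot in here
        self.heading += wander * (self.rng.random() - 0.5)
        self.x += speed * math.cos(self.heading)
        self.y += speed * math.sin(self.heading)
        self.trail.append((self.x, self.y))

agent = Agent(0.0, 0.0, heading=0.0)
for _ in range(100):
    agent.step()
# agent.trail is now a polyline ready to convert to plotter strokes
```

Running many agents with different seeds and behaviors, and drawing all their trails, is what would build up the fibrous compositions.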
- Culebra 2.0
Large Format AxiDraw
(n)certainties | The battle of impermanency (opus 5.2) François Roche from R&Sie(n) with Assistant-Partner : Ezio Blasetti & Assistant : Miranda Römer
students : Jennifer Chang & Belen Gandara | Farzin Lotfi-Jam & Juan Francisco Saldarriaga | Hualin Shi & Luis Casanovas | Paraskeyi Fanou & Nathan Hoofnagle | Helen Levin & Jen Wood | Mengyi Fan & Joseph Justus
For my final project, I want to implement an AR drawing app, similar to the Just a Line app created by Google (https://justaline.withgoogle.com/). I want to use the app as a backbone for the spatial detection and AR rendering, but I want to change the simple white line so that it becomes a lot more visually interesting (i.e., more lively and colorful forms instead of a generic line). I want to bring an organic behavior to the lines, where the lines feel alive and move around within the space they were drawn in. I'm thinking about implementing flocking or something like that to illustrate motion within the stroke (sliding your finger across the screen). I definitely hope I can get the 3D rendering to look good, as that will be the hardest part of the assignment. More of what I'm talking about is illustrated in the sketches, which hopefully give a better sense of the visuals I want to accomplish.
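The flocking idea can be prototyped before touching any AR code: a simplified boids update (cohesion, alignment, separation) over points sampled along a stroke. This is a 2D Python sketch; the gains and radius are made-up values, not tuned ones:

```python
def flock_step(boids, dt=1.0, k_cohesion=0.01, k_align=0.05, k_sep=0.5, radius=5.0):
    """One step of a simplified boids update over a list of
    ((x, y), (vx, vy)) tuples. Gains and radius are assumptions."""
    new = []
    for (x, y), (vx, vy) in boids:
        # neighbors within radius (coincident boids are skipped)
        near = [((ox, oy), (ovx, ovy)) for (ox, oy), (ovx, ovy) in boids
                if (ox, oy) != (x, y) and (ox - x) ** 2 + (oy - y) ** 2 < radius ** 2]
        ax = ay = 0.0
        if near:
            n = len(near)
            cx = sum(p[0] for p, _ in near) / n   # neighbor center (cohesion)
            cy = sum(p[1] for p, _ in near) / n
            avx = sum(v[0] for _, v in near) / n  # average velocity (alignment)
            avy = sum(v[1] for _, v in near) / n
            ax = k_cohesion * (cx - x) + k_align * (avx - vx)
            ay = k_cohesion * (cy - y) + k_align * (avy - vy)
            for (ox, oy), _ in near:              # push away from close neighbors
                d2 = (ox - x) ** 2 + (oy - y) ** 2 or 1e-9
                ax -= k_sep * (ox - x) / d2
                ay -= k_sep * (oy - y) / d2
        vx, vy = vx + ax * dt, vy + ay * dt
        new.append(((x + vx * dt, y + vy * dt), (vx, vy)))
    return new
```

Seeding the boids along a drawn stroke and stepping them each frame would make the stroke drift and swarm; the 3D version is the same math with a z component.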
11/3 Project Proposal
11/10 Get basic line drawing working on my phone; implement a similar version with my own code
11/17 Implement the floating, abstract lines
11/29 Have the entire app working and functional: essentially Just a Line + fancier/livelier lines
12/1 Add extra details, polish features, and fix potential bugs in the AR rendering
For my final project, I’d like to explore how machines can manipulate the medium itself. So far, we’ve explored how machines can use a medium to produce a drawing on another medium. This includes using the Axidraw to wield a pen or marker that draws on paper. In the latter half of the semester, I want to experiment with what a machine can do to modify paper or other materials. Technically, I would be drawing with a machine onto the chosen medium, but the end goal here is to change the shape and structure of the medium with the help of that drawing.
Golan introduced me to Erik Demaine, a computer scientist who researches computational origami. He has created many sculptures of paper folded along curved creases.
This is done by lightly scoring the paper with a low-power laser cutter and then folding along those scores; the scoring is what lets the paper fold cleanly along a curve.
I'd like to start experimenting with that, starting with pleated paper.
I'd like to incorporate pleating and drawing with the AxiDraw. By Monday of next week, I'll have a pleated paper sample. By Wednesday, I'll have a sample of pleated paper with AxiDraw plots on it.
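As a first step toward the pleated sample, the score lines for a simple accordion pleat can be generated programmatically and sent to the plotter or laser. A minimal Python sketch; the dimensions, units, and mountain/valley labeling are assumptions:

```python
def pleat_lines(width, height, n_pleats):
    """Generate alternating mountain/valley fold lines for an
    accordion pleat: vertical score lines evenly spaced across a
    width x height sheet. Units are whatever the plotter uses."""
    lines = []
    for i in range(1, n_pleats):
        x = i * width / n_pleats
        kind = "mountain" if i % 2 else "valley"
        lines.append((kind, (x, 0.0), (x, height)))
    return lines

# a 100 x 50 sheet split into 4 pleat panels -> 3 fold lines
for kind, start, end in pleat_lines(100.0, 50.0, 4):
    print(kind, start, end)
```

Mountain and valley lines would typically be scored from opposite faces (or with different pressures), which is why the sketch labels them separately.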
I am planning on doing a real-time interactive project with the AxiDraw in Python Processing. This will require me to first port the existing Java Processing real-time control code into Python. When you draw something (on the drawing app that I will be making for this), the AxiDraw will respond in some altered/interesting way. I plan on adding as many features as I can that allow for different kinds of interesting interaction with the AxiDraw. For example, maybe one feature reads in your drawing and turns it into something that looks like an animal (this would obviously require machine learning), or maybe it takes your line and applies a simulation algorithm to make it look altered. What is being drawn in the app is also being projected onto the area where the AxiDraw is plotting.
I am really inspired by Sougwen Chung's work and how these interactions with the machines become a kind of really intimate performance. Also, the mark making in her work is absolutely beautiful and so gestural; I am in love with it. At the same time, I am also interested in those antagonistic drawing programs, not for antagonism's sake but in the hope of producing something with a completely unexpected outcome.
Right now my plan looks something like this (ish):
Set up projector
Port the Java AxiDraw control code to Python, or find another way
Start coding the plotter/drawing interaction features
Create the GUI for the drawing app
Map the projector to the plotter's area / do the transforms so the image isn't warped
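For the mapping step, one minimal approach is to click three known points on screen and on the plotter bed, then fit an affine map between them. This is a pure-Python sketch with hypothetical function names; note that three points give only an affine fit, so correcting true projector keystone would need a four-point homography instead:

```python
def solve3(m, b):
    """Solve a 3x3 linear system m @ x = b by Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    xs = []
    for col in range(3):
        mc = [row[:] for row in m]      # replace one column with b
        for r in range(3):
            mc[r][col] = b[r]
        xs.append(det(mc) / d)
    return xs

def affine_from_points(src, dst):
    """Fit an affine map (e.g. screen px -> plotter mm) from three
    corresponding point pairs and return it as a function."""
    m = [[x, y, 1.0] for x, y in src]
    a, b, c = solve3(m, [x for x, _ in dst])
    d, e, f = solve3(m, [y for _, y in dst])
    return lambda x, y: (a * x + b * y + c, d * x + e * y + f)

# calibration: screen corners -> plotter coordinates (made-up numbers)
to_plotter = affine_from_points(
    [(0, 0), (100, 0), (0, 100)],
    [(0, 0), (50, 0), (0, 50)])
print(to_plotter(100, 100))
```

Every stroke point from the app would pass through `to_plotter` before being sent to the AxiDraw, so the plotted line lands under the projected one.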
I want to use the Rotrics arm to make an image/drawing modifier. My prompt is the robot answering “this is what you should have done.” Some of the responses will be humorous (doodling on faces, crossing out sections, etc.) and others will be more constructive (parallel offsetting existing lines, adding doodles inspired by parts of the drawing, etc.). I will mount a camera above the robot, and the arm will move to modify multiple drawings at a time (lined up in front of the sliding rail). I will add some placement markers to the arm and/or drawings to help the robot easily find and register them. I will use a plethora of existing algorithms for identifying contours, faces, bodies, doodles, etc. as well as some of my own.
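The parallel-offset response mentioned above can be sketched in a few lines, assuming a detected drawing arrives as a polyline of (x, y) points. This simplified version offsets each segment independently along its normal and skips the corner-joining that a production offsetter would need:

```python
import math

def offset_segments(points, dist):
    """Offset each segment of a polyline by `dist` along its left
    normal, returning disconnected segments (no miter joining)."""
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        if length == 0:
            continue
        nx, ny = -dy / length, dx / length   # left-hand unit normal
        out.append(((x0 + nx * dist, y0 + ny * dist),
                    (x1 + nx * dist, y1 + ny * dist)))
    return out

# echo a horizontal line 2 units above itself
for seg in offset_segments([(0.0, 0.0), (10.0, 0.0)], 2.0):
    print(seg)
```

Calling this with a few different distances would give the robot a quick "you should have hatched this" response without any learned model.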
The Useless Box: https://www.youtube.com/watch?v=aqAUmgE3WyM
Art Sqool: https://artsqool.cool/
Google Quick Draw: https://quickdraw.withgoogle.com/
11/10 Collection of algorithms for detecting parts/sketches, working with webcam
11/17 Page detection, generate and visualize drawings on pages
11/29 Mapping robot to pages/drawings, drawing on pages
I want my final project to be a data-driven small multiples work. I was really touched by Lukas’ work for the midsemester critique with the zigzagging lines reflecting texts between him and his girlfriend, and I had also seen Caroline Hermans’ project exploring attention dynamics with linked scarves, so I was considering a project reflecting in some way the history of my messages with my partner…! As for the output, I’m still sort of waffling between some ideas—generative poems constructed in a “mad-libs” manner; generative living rooms with different elements like couches, tables, lamps, etc.; generative letters between fictional figures with text with varying levels of readability (delving into asemic writing territory) corresponding to varying levels of understanding between said fictional figures.
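The mad-libs direction is easy to prototype: a template with labeled slots, filled from word lists that could eventually be mined from the message history. A minimal Python sketch; the template and vocabulary here are placeholders, not real data:

```python
import random
import re

def madlib(template, vocab, seed=None):
    """Fill {slot} markers in a template with random picks from the
    matching word list. Template and vocab are placeholder examples."""
    rng = random.Random(seed)
    return re.sub(r"\{(\w+)\}",
                  lambda m: rng.choice(vocab[m.group(1)]),
                  template)

vocab = {"noun": ["lamp", "couch", "window"],
         "verb": ["hums", "waits", "glows"]}
print(madlib("The {noun} {verb} while the {noun} {verb}.", vocab, seed=7))
```

Swapping the vocabulary for words pulled from the actual messages (and the template for sentence frames found in them) would turn this toy into the small-multiples generator.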
Golan recommended Allison Parrish’s Eyeo 2015 talk, which I thoroughly enjoyed. I liked the analogy she drew between unmanned travel and exploration of strange language spaces. In the same way a weather balloon can go higher and explore longer without the weight of a human, generative language can go into novel spaces without the weight of our intrinsic biases or bents towards meaning. I also liked the piece at the bottom of this lecture, Sissy Marley’s “My House Wallpaper” (2020) because I’ve always been obsessed with the concept of building a home and understanding the components (people, objects, light) that define a home. I thought generation might be an interesting way to explore home construction without being weighed down by past connotations of home/traumas surrounding home.
Lukas also recommended “Dear Data” (putting this here for my future reference).
11.03 Wed -- Due: #10 (Research/Proposal/Tests); DISCUSSION.
11.08 Mon -- Have parsed the data to get something meaningful out of it
11.10 Wed -- Come to class with a chunk of code that is a prototype for the output (have done some research regarding language generation and/or small multiples)
11.15 Mon -- Create 3-4 small pieces that *exist* with aforementioned data & code; continue revising codebase
11.17 Wed -- Due: #11 (Major Milestone); CRITIQUE.
11.22 Mon -- Revise work based on critique; make 3-4 small pieces again that *exist*
11.24 Wed -- NO SESSION (Thanksgiving).
11.29 Mon -- Create final plotted project; by Wednesday have made all the necessary hand-drawn changes
12.01 Wed -- Due: #12 (Final Project); EXHIBITION.
For my project I'm planning on figuring out how to use pitch detection for real-time plotting with the AxiDraw. It's already turning out to be a bigger challenge than I anticipated (just with finding pitch-osc right now), but I think after I figure that out it will be alright. I sort of want to plot an asemic language that records something in relation to pitch detection. I create a lot of mini songs in my spare time, but I usually just save them as voice memos on my phone.
So I thought it’d be cool if there was a way to turn this into a paper form rather than what it is now (still thinking about how I’d go about this).
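The core of the pitch-detection step can be sketched without any audio library: estimate the pitch of a buffer as the lag that maximizes its autocorrelation. This is a toy illustration of that idea (not pitch-osc itself), tried on a synthetic 440 Hz tone:

```python
import math

def detect_pitch(samples, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate pitch as sample_rate / best_lag, where best_lag is
    the lag with maximum autocorrelation within the fmin..fmax band."""
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, len(samples) - 1)):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sample_rate / best_lag

# synthetic test tone: 440 Hz sine sampled at 8 kHz
rate = 8000
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(2000)]
print(round(detect_pitch(tone, rate)))  # close to 440 Hz
```

Mapping the detected frequency (and its changes over time) to glyph parameters such as height or curvature is one way the voice memos could become asemic marks on paper.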
Following the mid-semester critique, one of the guest critics mentioned wanting to see my organic cell generation bounded in a shape that was less square. So I'm also planning on making little paper chains of cells in the shape of a human body – just thought it'd be fun to do, maybe a little creepy too.
I also want to create a kind of tessellation that resembles the patterns found in Chinese paper cutting, so that I can AxiDraw it onto my laptop using the super cool aren pen.