Student Area

aether-ProjectProposal

Proposal

To better understand my relationship to physical materials and formal fabrication processes, I am stepping away from 3D digital fabrication. As I shift my attention from machine knitting to the pen plotter, I hope that the change in medium and fabrication process will help me uncover more about my approach to generative making. Thus far in my journey of self-inquiry, I have identified that I am interested in generating composition and form by engaging with the forces in a piece's environment.

I propose to use multi-agent particle systems to create fibrous forms and complex line compositions. I aim to cultivate visually rich compositions by attributing various behaviors to autonomous agents and endowing the agents' environment with physical properties. As the particle agents leave trails behind, they create records of their positions, and visual arrangements emerge as the trails characterize how the agents behave in digital space.

This investigation challenges me to develop mastery in controlling particle agents: I initially aim to plot more "primitive" forms, then graduate to forming more complex ones using compound techniques and agent behaviors.
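As a minimal sketch of the idea (in plain Python rather than Grasshopper/Culebra, and with a made-up random-wander behavior standing in for the eventual agent rules), each agent only needs a position, a heading, and a trail:

```python
import math
import random

class Agent:
    """A particle that wanders and records its trail."""
    def __init__(self, x, y, heading):
        self.x, self.y = x, y
        self.heading = heading
        self.trail = [(x, y)]

    def step(self, wander=0.3, speed=2.0):
        # Jitter the heading, then move forward; the trail is the drawing.
        self.heading += random.uniform(-wander, wander)
        self.x += speed * math.cos(self.heading)
        self.y += speed * math.sin(self.heading)
        self.trail.append((self.x, self.y))

agents = [Agent(0, 0, random.uniform(0, 2 * math.pi)) for _ in range(20)]
for _ in range(100):
    for a in agents:
        a.step()

# Each agent's trail is a polyline ready for the plotter.
polylines = [a.trail for a in agents]
```

Environmental forces (wind fields, attractors, obstacles) would slot into `step()` as additional terms on the heading or velocity.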
Software 
Rhino/Grasshopper 
  - Culebra 2.0 
  - Heteroptera 

Hardware
Large Format AxiDraw
Inspiration


gsapp
(n)certainties | The battle of impermanency (opus 5.2) François Roche from R&Sie(n) with Assistant-Partner : Ezio Blasetti & Assistant : Miranda Römer

students : Jennifer Chang & Belen Gandara | Farzin Lotfi-Jam & Juan Francisco Saldarriaga | Hualin Shi & Luis Casanovas | Paraskeyi Fanou & Nathan Hoofnagle | Helen Levin & Jen Wood | Mengyi Fan & Joseph Justus

Initial Sketches

 

Stickz – ProjectProposal

For my final project, I want to implement an AR drawing app, similar to Google's Just a Line (https://justaline.withgoogle.com/). I want to use that app as a backbone for the spatial detection and AR rendering, but to replace the simple white line with something much more visually interesting (i.e., lively, colorful forms instead of a generic line). I want to bring an organic behavior to the lines, so that they feel alive and move around within the space they were drawn in. I'm thinking about implementing flocking or a similar behavior to illustrate motion within the stroke (sliding your finger across the screen). I hope I can get the 3D rendering to look good, as that will be the hardest part of the project. The sketches illustrate more of what I'm talking about and hopefully give a better sense of the visuals I want to accomplish.
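A minimal boids-style sketch of the flocking idea, written in Python purely for illustration (the real app would run on-device in AR); the separation/alignment/cohesion weights are arbitrary placeholders:

```python
import math
import random

def flock_step(pts, vels, sep=10.0, align=0.05, coh=0.01, max_speed=2.0):
    """One boids update (separation, alignment, cohesion) on 2D points."""
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    avx = sum(v[0] for v in vels) / n
    avy = sum(v[1] for v in vels) / n
    new_pts, new_vels = [], []
    for (x, y), (vx, vy) in zip(pts, vels):
        # Cohesion: steer toward the flock's center of mass.
        vx += coh * (cx - x)
        vy += coh * (cy - y)
        # Alignment: nudge velocity toward the average heading.
        vx += align * (avx - vx)
        vy += align * (avy - vy)
        # Separation: push away from close neighbors.
        for ox, oy in pts:
            dx, dy = x - ox, y - oy
            d = math.hypot(dx, dy)
            if 0 < d < sep:
                vx += dx / d * 0.5
                vy += dy / d * 0.5
        # Clamp speed so strokes drift rather than explode.
        s = math.hypot(vx, vy)
        if s > max_speed:
            vx, vy = vx / s * max_speed, vy / s * max_speed
        new_pts.append((x + vx, y + vy))
        new_vels.append((vx, vy))
    return new_pts, new_vels

random.seed(1)
pts = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
vels = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]
for _ in range(50):
    pts, vels = flock_step(pts, vels)
```

Applied to the sampled points of a stroke, this would make a drawn line slowly shoal and drift within the space it was drawn in.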

Timeline
  • 11/3 Project Proposal
  • 11/10 Get Just a Line working on my phone; implement a similar version with code
  • 11/17 Implement the floating, abstract lines
  • 11/29 Have the entire app working and functional: essentially Just a Line plus fancier, livelier lines
  • 12/1 Add extra details, polish features, and fix potential bugs in the augmentation

aahdee – ProjectProposal

For my final project, I’d like to explore how machines can manipulate the medium itself. So far, we’ve explored how machines can use a medium to produce a drawing on another medium; this includes using the AxiDraw to wield a pen or marker that draws on paper. In the latter half of the semester, I want to experiment with what a machine can do to modify paper or other materials. Technically, I would still be drawing with a machine onto the chosen medium, but the end goal is to change the shape and structure of the medium with the help of that drawing.

Golan introduced me to Erik Demaine, a computer scientist who researches computational origami. He has created many sculptures of paper folded along curved creases.

This is done by using a low-power laser cutter to lightly score the paper, which is then folded along those scores. This is the only way to produce those results.

I’d like to start experimenting with that, beginning with pleated paper. Ingenia - Applied origami

I'd like to incorporate pleating and drawing with the AxiDraw. By next Monday, I'll have a pleated paper sample; by Wednesday, a sample of pleated paper with AxiDraw plots on it.
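A quick sketch of how straight accordion-pleat score lines might be generated for plotting. The spacing and the mountain/valley labeling are assumptions for illustration; this is the simple pleated starting point, not Demaine's curved-crease scoring:

```python
def pleat_scores(width, height, spacing):
    """Generate alternating mountain/valley score lines for an accordion pleat.

    Returns (fold_type, start_point, end_point) tuples, ready to be split
    into two plotter layers (e.g. scoring mountains and valleys separately).
    """
    lines = []
    x, fold = spacing, "mountain"
    while x < width:
        lines.append((fold, (x, 0.0), (x, height)))
        fold = "valley" if fold == "mountain" else "mountain"
        x += spacing
    return lines

scores = pleat_scores(10.0, 5.0, 2.0)  # four vertical score lines
```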

ProjectProposal – bookooBread

I am planning on doing a real-time interactive project with the AxiDraw in Python Processing. This will first require me to port the existing Java Processing real-time control code to Python. When you draw something on the drawing app I will be making, the AxiDraw will respond in some altered, interesting way. I plan on adding as many features as I can that allow for different kinds of interesting interaction with the AxiDraw. For example, one might read in your drawing and turn it into something that looks like an animal (this would obviously require machine learning), or another might take your line and apply a simulation algorithm to it to make it look altered. What is being drawn on the app is also projected onto the area where the AxiDraw is plotting.
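One hedged sketch of the "apply a simulation to your line" feature, assuming a stroke arrives as a list of (x, y) points; Chaikin corner-cutting plus jitter stands in for whichever algorithm is actually chosen:

```python
import random

def alter_stroke(points, passes=3, jitter=1.5, seed=None):
    """Respond to a drawn stroke with a smoothed, jittered variant of it."""
    rng = random.Random(seed)
    pts = list(points)
    for _ in range(passes):
        # Chaikin-style corner cutting smooths the input stroke.
        smoothed = [pts[0]]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        smoothed.append(pts[-1])
        pts = smoothed
    # Displace each point so the machine's reply differs from the input.
    return [(x + rng.uniform(-jitter, jitter),
             y + rng.uniform(-jitter, jitter)) for (x, y) in pts]

stroke = [(0, 0), (10, 5), (20, 0), (30, 10)]
reply = alter_stroke(stroke, seed=42)
```

The returned polyline would then be streamed to the AxiDraw's real-time control layer.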

I am really inspired by Sougwen Chung’s work and how these interactions with machines become a kind of intimate performance. The mark making in her work is also absolutely beautiful and so gestural; I am in love with it. At the same time, I am interested in those antagonistic drawing programs, not to be antagonistic myself but in the hope of producing a completely unexpected outcome.

Right now my plan looks something like this (ish):

  • Set up the projector
  • Port the Java AxiDraw control code to Python, or find another way
  • Start coding the plotter/drawing interaction features
  • Create the GUI for the drawing app
  • Map the projector to the plotter's area and do the transforms so the image isn't warped
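For the projector-mapping step, a rough sketch of one simple approach: bilinearly interpolate between the four measured projector-space corners of the plotter area. A true keystone correction would use a homography, but this illustrates the mapping; the corner values below are made up:

```python
def bilinear_map(u, v, corners):
    """Map normalized plotter coords (u, v in [0, 1]) to projector pixels.

    corners: projector-space positions of the plotter area's corners,
    ordered top-left, top-right, bottom-right, bottom-left.
    """
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    # Interpolate along the top and bottom edges, then between them.
    top = (x0 + (x1 - x0) * u, y0 + (y1 - y0) * u)
    bot = (x3 + (x2 - x3) * u, y3 + (y2 - y3) * u)
    return (top[0] + (bot[0] - top[0]) * v,
            top[1] + (bot[1] - top[1]) * v)

# Hypothetical calibration: an axis-aligned 100-pixel square.
corners = ((0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0))
center = bilinear_map(0.5, 0.5, corners)
```

In practice the corners would come from clicking the projected image's corners on the plotter bed during calibration.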

sapeck-Proposal

I want to use the Rotrics arm to make an image/drawing modifier. My prompt is the robot answering “this is what you should have done.” Some of the responses will be humorous (doodling on faces, crossing out sections, etc.) and others will be more constructive (parallel offsetting existing lines, adding doodles inspired by parts of the drawing, etc.). I will mount a camera above the robot, and the arm will move to modify multiple drawings at a time (lined up in front of the sliding rail). I will add some placement markers to the arm and/or drawings to help the robot easily find and register them. I will use a plethora of existing algorithms for identifying contours, faces, bodies, doodles, etc. as well as some of my own.
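As one example of the constructive responses, the parallel-offset idea can be sketched by pushing each point of a polyline along its local normal (a simplification that ignores self-intersections at sharp corners):

```python
import math

def offset_polyline(points, dist):
    """Offset each point along the averaged normal of its adjacent segments."""
    out = []
    n = len(points)
    for i, (x, y) in enumerate(points):
        # Direction spanning the segments that meet at this point.
        x0, y0 = points[max(i - 1, 0)]
        x1, y1 = points[min(i + 1, n - 1)]
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy) or 1.0
        # Rotate the tangent 90 degrees to get the normal.
        nx, ny = -dy / length, dx / length
        out.append((x + dist * nx, y + dist * ny))
    return out

line = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]
shifted = offset_polyline(line, 3.0)
# → [(0.0, 3.0), (10.0, 3.0), (20.0, 3.0)]
```

The contour lines extracted from the camera image would be fed through this before being sent to the arm.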

Inspiration:

    • The Useless Box: https://www.youtube.com/watch?v=aqAUmgE3WyM
    • Art Sqool: https://artsqool.cool/
    • Google Quick Draw: https://quickdraw.withgoogle.com/

Rough Timeline

    • 11/3 Proposal
    • 11/10 Collection of algorithms for detecting parts/sketches, working with webcam
    • 11/17 Page detection, generate and visualize drawings on pages
    • 11/29 Mapping robot to pages/drawings, drawing on pages
    • 12/1 Fine tuning

 

lemonbear-Proposal

I want my final project to be a data-driven small-multiples work. I was really touched by Lukas’ work for the mid-semester critique, with its zigzagging lines reflecting texts between him and his girlfriend, and I had also seen Caroline Hermans’ project exploring attention dynamics with linked scarves, so I am considering a project that reflects in some way the history of my messages with my partner…! As for the output, I’m still sort of waffling between a few ideas: generative poems constructed in a “mad-libs” manner; generative living rooms with different elements like couches, tables, and lamps; or generative letters between fictional figures, with text of varying readability (delving into asemic-writing territory) corresponding to varying levels of understanding between said figures.
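A tiny sketch of the mad-libs direction; the template and word banks here are hypothetical placeholders standing in for material mined from the message history:

```python
import random

# Hypothetical template and word banks for illustration only.
TEMPLATE = "In the {room}, the {object} holds the {quality} of {time}."
BANKS = {
    "room": ["kitchen", "living room", "hallway"],
    "object": ["lamp", "couch", "window"],
    "quality": ["warmth", "quiet", "weight"],
    "time": ["morning", "last winter", "every Sunday"],
}

def fill(template, banks, rng):
    """Fill each {slot} in the template with a random word from its bank."""
    return template.format(**{k: rng.choice(v) for k, v in banks.items()})

rng = random.Random(7)
poems = [fill(TEMPLATE, BANKS, rng) for _ in range(3)]
```

For small multiples, each generated poem (or living room, or letter) becomes one cell of the plotted grid.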

Golan recommended Allison Parrish’s Eyeo 2015 talk, which I thoroughly enjoyed. I liked the analogy she drew between unmanned travel and exploration of strange language spaces. In the same way a weather balloon can go higher and explore longer without the weight of a human, generative language can go into novel spaces without the weight of our intrinsic biases or bents towards meaning. I also liked the piece at the bottom of this lecture, Sissy Marley’s “My House Wallpaper” (2020) because I’ve always been obsessed with the concept of building a home and understanding the components (people, objects, light) that define a home. I thought generation might be an interesting way to explore home construction without being weighed down by past connotations of home/traumas surrounding home.

Lukas also recommended “Dear Data” (putting this here for my future reference).

potential parts of a home
mad libs brainstorming
i got distracted in 213 and thought about light and place instead

11.03	Wed	-- Due: #10 (Research/Proposal/Tests); DISCUSSION.
11.08	Mon	-- Have parsed the data to get something meaningful out of it
11.10	Wed	-- Come to class with a chunk of code that is a prototype for the output (have done some research regarding language generation and/or small multiples)
11.15	Mon	-- Create 3-4 small pieces that *exist* with the aforementioned data & code; continue revising codebase
11.17	Wed	-- Due: #11 (Major Milestone); CRITIQUE.
11.22	Mon	-- Revise work based on critique; make 3-4 small pieces again that *exist*
11.24	Wed	-- NO SESSION (Thanksgiving).
11.29	Mon	-- Create final plotted project; by Wednesday have made all the necessary hand-drawn changes
12.01	Wed	-- Due: #12 (Final Project); EXHIBITION.

grape – ProjectProposal

For my project I’m planning on figuring out how to use pitch detection for real-time AxiDraw-ing. It’s already turning out to be a bigger challenge than I anticipated (just with finding pitch-osc right now), but I think it will be alright once I figure that out. I sort of want to plot an asemic language that records something in relation to pitch detection. I create a lot of mini songs in my spare time, but I usually just save them as voice memos on my phone.

[videopack id="1934"]https://courses.ideate.cmu.edu/60-428/f2021/wp-content/uploads/2021/11/RPReplay_Final1635957413.mov[/videopack]

So I thought it’d be cool if there was a way to turn this into a paper form rather than what it is now (still thinking about how I’d go about this).
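One common pitch-detection approach that could drive the plotter is autocorrelation; here is a rough pure-Python sketch run on a synthetic tone (the real input would be voice-memo audio arriving via pitch-osc or similar):

```python
import math

def detect_pitch(samples, rate, fmin=80.0, fmax=1000.0):
    """Estimate fundamental frequency by autocorrelation peak-picking."""
    n = len(samples)
    lag_min = int(rate / fmax)
    lag_max = min(int(rate / fmin), n - 1)
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, lag_max + 1):
        # Correlate the signal with a lagged copy of itself; the lag
        # with the strongest correlation is one fundamental period.
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return rate / best_lag if best_lag else 0.0

# Synthetic 220 Hz sine as a stand-in for one frame of a voice memo.
rate = 8000
frame = [math.sin(2 * math.pi * 220 * t / rate) for t in range(1024)]
pitch = detect_pitch(frame, rate)  # ≈ 220 Hz
```

Each frame's estimated pitch could then modulate the height, slant, or density of the asemic glyphs as they are plotted.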

Following the mid-semester critique, one of the guest critics mentioned wanting to see my organic cell generation bounded in a shape that was less square. So I’m also planning on making little paper chains of cells in the shape of a human body; I just thought it’d be fun, and maybe a little creepy too.

I also want to create a kind of tessellation that resembles the patterns found in Chinese paper cutting, so that I can AxiDraw it onto my laptop using the super cool aren pen.

grape – Tiling Pattern / MidSemester

I was only half done with the tiling pattern project when it was due (hence the combined title), so I figured I’d finish it for the mid-semester review.

I first created a 16×16 tile generator so I could draw my name (I made 15 tiles, 3 for each letter). Here are some examples:

I then modified a wave function collapse (WFC) implementation mentioned in this Processing thread (after reading up on WFC and how it works), which is based on mxgmn’s original implementation, to generate tilings of each of the letter tiles I produced, as a sort of signature. I then converted the result to SVG using marching squares.
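The raster-to-SVG step can be sketched with a no-interpolation marching squares pass that emits one or two midpoint segments per 2×2 block of pixels. The case table and bit ordering here are one common convention, not necessarily the implementation used:

```python
# Edge midpoints of a unit cell, keyed by side.
EDGE = {"N": (0.5, 0.0), "E": (1.0, 0.5), "S": (0.5, 1.0), "W": (0.0, 0.5)}

# Segments per corner case; bits: top-left=8, top-right=4,
# bottom-right=2, bottom-left=1.
CASES = {
    0: [], 1: [("W", "S")], 2: [("S", "E")], 3: [("W", "E")],
    4: [("N", "E")], 5: [("N", "E"), ("W", "S")], 6: [("N", "S")],
    7: [("N", "W")], 8: [("N", "W")], 9: [("N", "S")],
    10: [("N", "W"), ("S", "E")], 11: [("N", "E")], 12: [("W", "E")],
    13: [("S", "E")], 14: [("W", "S")], 15: [],
}

def marching_squares(grid):
    """Turn a binary raster into contour line segments for SVG paths."""
    segments = []
    for y in range(len(grid) - 1):
        for x in range(len(grid[0]) - 1):
            case = (grid[y][x] << 3 | grid[y][x + 1] << 2
                    | grid[y + 1][x + 1] << 1 | grid[y + 1][x])
            for a, b in CASES[case]:
                (ax, ay), (bx, by) = EDGE[a], EDGE[b]
                segments.append(((x + ax, y + ay), (x + bx, y + by)))
    return segments

tile = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
segs = marching_squares(tile)  # a loop of 8 segments around the block
```

Chaining the segments end to end and writing them as `<path>` elements yields a plottable SVG outline of the tiling.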

Here’s one

I sort of like how it’s almost a gradient, and depending on the typeface, the wave function collapse algorithm generates tilings with varying amounts of order/disorder.

I also presented a single tiling that used a brush pen, a willow stick, and a fine pen (just to play with the different textures).

I was also interested in exploring different ways of manipulating these tiles. Here are two experiments I did:

In one I wanted to do almost a “Where’s Waldo” with WFC, so I created a 16×16 Waldo tile in my tile maker in Processing and then generated multiple tilings.

I sort of want to make this my wallpaper.

Anyways, I ended up not plotting it because I ran out of time. I also think this would look better at a larger scale, so think of this as a little swatch test.

Another thing I wanted to do was see if the original letter appeared in any of the produced tilings. But I didn’t want to hand-annotate or circle any letters I found, just because I thought that would be boring and time-consuming.

So instead I found another time-consuming way to identify letters: trying to implement OCR (optical character recognition)!

I implemented it in Python, and it generated the following results.
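The actual OCR pipeline isn't shown, so as a toy stand-in for the underlying idea, here is exact template matching of tiny hypothetical glyph bitmaps over a binary grid (the real project worked with 16×16 letter tiles):

```python
# Hypothetical 3x3 glyph templates as sets of filled (x, y) cells.
GLYPHS = {
    "L": [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)],
    "I": [(1, 0), (1, 1), (1, 2)],
}

def find_glyphs(grid, glyphs, size=3):
    """Slide each glyph template over the grid; report exact matches."""
    hits = []
    for top in range(len(grid) - size + 1):
        for left in range(len(grid[0]) - size + 1):
            # Collect the filled cells inside this window, in local coords.
            window = {(x, y) for y in range(size) for x in range(size)
                      if grid[top + y][left + x]}
            for name, cells in glyphs.items():
                if window == set(cells):
                    hits.append((name, left, top))
    return hits

grid = [
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
]
matches = find_glyphs(grid, GLYPHS)  # → [("L", 0, 0)]
```

Real OCR tolerates noise and scale, which exact matching does not; this only illustrates the search-and-compare structure.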

Some examples:

I think it was sort of interesting to uncover what the computer reads as “text,” but I ultimately didn’t plot any of them because I didn’t think showing that part of the tile in a different color really added much, plus I didn’t have a lot of time.

Something interesting to note, though: while generating these WFC patterns using the tiles I created, the letter R had the most contradictions I’ve ever seen. It was almost infuriating trying to get a single tiling to finish, because there was (about 85% of the time) always a contradiction. I’m not too sure why; maybe the diagonal bit of an “R” is prone to contradiction? But I also personally find the R’s to be the most visually appealing.

It looks itchy.

 

spingbing-Proposal

I am going to combine several ideas into one:

  1. I was inspired by Xu Bing’s Book from the Sky to create Korean asemic writing.
  2. I liked exploring tiling in my last project, so something like this with Korean characters or fake Korean characters would be visually interesting.
  3. I have done work in the past about my grandfather’s artwork:

I would like the end product to be in a similar shape or layout.

Things I want to explore further to perhaps integrate into my piece:

    1. Houdini geometry packing
    2. CJKV characters and how Asian written languages were created -> apply those in making my asemic writing