At this point in the project, we have a solid understanding of the technical capabilities and limitations of the workflow. The primary areas of work moving forward are selecting the input objects, creating a finger mount for the mocap marker, refining the workflow, defining the parameters of translation from input information to output trajectory, creating the output mechanisms, and evaluating the output.
The choice of input objects will be given additional consideration, as we would like a cohesive set that maintains meaning throughout the process. Additionally, having multiple users scan a single object could provide interesting insight into how the workflow adapts based on user input. For the mocap marker, we may 3D print a ring mount or use an existing ring and adhesive. The translation will rely on point density (the points being a sampling of those generated by the motion capture process) and may also take into account the direction and speed of the scan trajectory. This data will then be converted to robot motion, which will likely need to follow an “inside-out” pattern, traveling outward from a central point rather than inward from a border. The output mechanisms to be created are (i) a mount for the robot arm to hold the polystyrene sheet and move it within a given boundary, and (ii) a mount for the hot air rework station to keep the nozzle and box secure and stationary.
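As a rough illustration of the kind of translation we have in mind, the sketch below estimates local point density for a sampled set of mocap points and orders the output waypoints outward from a central point (the “inside-out” pattern). The function names, neighborhood radius, and centroid-based ordering are placeholders of our own, not a settled design.

```python
import math

# Hypothetical sketch of the input-to-trajectory translation described above.
# Assumes the mocap scan has already been downsampled to a list of (x, y, z)
# points; the density radius and centroid-based ordering are placeholders.

DENSITY_RADIUS = 0.05  # meters; neighborhood size used to estimate density


def local_density(points, radius=DENSITY_RADIUS):
    """Count how many neighbors fall within `radius` of each point."""
    densities = []
    for i, p in enumerate(points):
        count = sum(
            1 for j, q in enumerate(points)
            if i != j and math.dist(p, q) <= radius
        )
        densities.append(count)
    return densities


def inside_out_order(points):
    """Order waypoints outward from the centroid rather than inward from a border."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    cz = sum(p[2] for p in points) / len(points)
    center = (cx, cy, cz)
    return sorted(points, key=lambda p: math.dist(p, center))


if __name__ == "__main__":
    sampled = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.1, 0.1, 0.0), (0.5, 0.5, 0.0)]
    print(local_density(sampled))
    print(inside_out_order(sampled))
```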
This project is a hybrid fabrication method in which a human works with virtual autonomous agents to develop organic meshes using MoCap technology and 3D spatial printing techniques. The user makes conducting gestures to influence the movements of the agents as they travel from a starting point A to an end point B, guided by real-time visual feedback from a projection of the virtual workspace.
For our shortest-path prototype, we discussed what the agents’ default behavior might look like without any interaction or influence from the user, given start and end points A and B. We then recorded some videos of what the conducting gestures might look like for a set of agents progressing in this default behavior from the start point to the end point, and sketched the effects these gestures would have on the agents while watching the videos. In Grasshopper, we began developing a Python script that dictates the movements of a set of agents progressing from one point to another in the default behavior we had established.
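A minimal, standalone sketch of that default behavior is shown below. In the actual Grasshopper definition the positions would be Rhino points updated each timestep; here plain tuples stand in for them, and the agent count, step count, and jitter amount are illustrative placeholders.

```python
import random

POINT_A = (0.0, 0.0, 0.0)
POINT_B = (10.0, 0.0, 5.0)
NUM_AGENTS = 5
NUM_STEPS = 20
JITTER = 0.1  # small random offset so the agents do not overlap exactly


def lerp(a, b, t):
    """Linearly interpolate between points a and b at parameter t in [0, 1]."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))


def default_paths(start, end, num_agents=NUM_AGENTS, num_steps=NUM_STEPS):
    """Each agent walks the shortest path from start to end with slight jitter."""
    paths = []
    for _ in range(num_agents):
        path = []
        for step in range(num_steps + 1):
            t = step / num_steps
            x, y, z = lerp(start, end, t)
            # Jitter fades out near the endpoints so all agents meet at A and B.
            fade = min(t, 1.0 - t)
            path.append((
                x + random.uniform(-JITTER, JITTER) * fade,
                y + random.uniform(-JITTER, JITTER) * fade,
                z + random.uniform(-JITTER, JITTER) * fade,
            ))
        paths.append(path)
    return paths


if __name__ == "__main__":
    for i, path in enumerate(default_paths(POINT_A, POINT_B)):
        print(f"agent {i}: starts at {path[0]}, ends at {path[-1]}")
```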
Our next step is to flesh out the Python script so that we can generate a plausible 3D mesh as the agents progress from start to finish, first without any interaction. Once we have this working, we will build a more robust algorithm for incorporating user interaction to create more unique and complex structures. The resulting mesh would then be printed from one end to the other using the spatial printing tools in development by Manuel.
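One way the agent trails might be stitched into a mesh, sketched below with placeholder logic, is to treat each timestep’s agent positions as a row of vertices and connect consecutive rows with quad faces. Inside Grasshopper these vertex and face lists would feed a Rhino mesh; here they stay as plain Python lists so the logic is easy to follow.

```python
def mesh_from_paths(paths):
    """Build vertex and quad-face index lists from a list of agent paths.

    `paths` is a list of agent trajectories, each a list of (x, y, z) tuples
    of equal length (one position per timestep).
    """
    num_agents = len(paths)
    num_steps = len(paths[0])

    # Vertices laid out row by row: all agents at step 0, then step 1, ...
    vertices = [paths[a][s] for s in range(num_steps) for a in range(num_agents)]

    faces = []
    for s in range(num_steps - 1):
        for a in range(num_agents - 1):
            i0 = s * num_agents + a            # this agent, this step
            i1 = s * num_agents + a + 1        # next agent, this step
            i2 = (s + 1) * num_agents + a + 1  # next agent, next step
            i3 = (s + 1) * num_agents + a      # this agent, next step
            faces.append((i0, i1, i2, i3))
    return vertices, faces


if __name__ == "__main__":
    # Two toy agents moving in parallel lines produce a simple quad strip.
    toy_paths = [
        [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)],
        [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0), (2.0, 1.0, 0.0)],
    ]
    verts, quads = mesh_from_paths(toy_paths)
    print(len(verts), "vertices,", len(quads), "quad faces")
```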