Human-Machine Virtuosity – An exploration of skilled human gesture and design, Spring 2018
https://courses.ideate.cmu.edu/16-455/s2018

Agent Conductor
https://courses.ideate.cmu.edu/16-455/s2018/741/agent-conductor/
by Manuel Rodriguez & Jett Vaultz
13 May, 2018
https://vimeo.com/269571365

Abstract

Agent Conductor is an interactive fabrication process in which one controls virtual autonomous agents to develop organic meshes using MoCap technology and 3D spatial printing techniques. The user makes conducting gestures to influence the movements of the agents as they progress from a start point A to an end point B, guided by real-time visual feedback from a projection of the virtual workspace. The resulting artifact is a 3D print of the organic tunnel-like structure generated from the agents’ paths.

Objectives

Our goal for this toolkit was to fabricate an organic, self-supporting truss. Given the weakness of the material, the resulting structures are best suited to decorative, non-structural architectural elements.

We wanted to develop a few key features:

  1. Generate a virtual 3D mesh that is printable with the spatial printing tool
  2. Some degree of user control over the path the agents take to the target
  3. User control of the diameter/size of the mesh at any given point
  4. Responsive view of the virtual workspace for real-time feedback

Implementation

With this list of key control elements in mind, we wanted the interaction to be simple and intuitive. To affect the path the agents take to the target, swiping gestures in the desired direction seemed most effective, like throwing a ball or swinging a tennis racket.

We originally used a flock of autonomous agents to generate the truss, each with its own individual movement. However, we wanted to avoid the flock making translational jumps each time a gesture force is applied, so we changed the structure of the agents from a cluster of boids to a single autonomous frame carrying an adjustable number of satellites that represent the agents. With this method we can ensure smooth rotations and avoid most of the self-intersection issues that come with a flock of agents generating the truss lines. The truss surface loses some of its organic character, but in exchange we gain ease of control and greater printability.
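The frame-with-satellites model can be sketched as follows. This is a minimal illustration; the class and parameter names are our own, not taken from the actual Grasshopper definition, and the satellite circle is assumed to lie in the XY plane for simplicity:

```python
import math

class AgentFrame:
    """A single autonomous frame that advances toward a target,
    carrying evenly spaced satellite agents on a circle around
    its center. Each step yields one ring of truss points."""

    def __init__(self, position, target, radius, n_satellites, step=1.0):
        self.position = list(position)
        self.target = target
        self.radius = radius
        self.n = n_satellites
        self.step = step

    def advance(self):
        # Move the frame center a fixed step toward the target.
        d = [t - p for t, p in zip(self.target, self.position)]
        dist = math.sqrt(sum(c * c for c in d))
        if dist <= self.step:
            self.position = list(self.target)
        else:
            self.position = [p + self.step * c / dist
                             for p, c in zip(self.position, d)]
        return self.ring()

    def ring(self):
        # Satellites sit on a circle around the frame center
        # (assumed here to lie in the XY plane for simplicity).
        cx, cy, cz = self.position
        return [(cx + self.radius * math.cos(2 * math.pi * k / self.n),
                 cy + self.radius * math.sin(2 * math.pi * k / self.n),
                 cz)
                for k in range(self.n)]

frame = AgentFrame(position=(0, 0, 0), target=(0, 0, 10),
                   radius=2.0, n_satellites=6)
rings = [frame.advance() for _ in range(10)]
```

Connecting corresponding and neighbouring points of consecutive rings produces the truss lines; because all satellites rotate and translate with one rigid frame, the rings never fold into each other.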

We also needed the user to understand what they were seeing and to be able to change the view for meaningful visual feedback. We added a rigid body so that the camera is controlled by the user’s position, making it easier to see the shape and direction of the agents’ movement on screen. The camera eases toward the user’s position rather than jarringly matching every motion.
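The easing amounts to a per-frame linear interpolation toward the tracked position. A minimal sketch (the factor value and function name here are illustrative, not from our actual definition):

```python
def ease_toward(camera, target, factor=0.1):
    """Move the camera a fraction of the way toward the user's
    tracked position each frame, smoothing out jitter instead of
    snapping to every motion."""
    return [c + factor * (t - c) for c, t in zip(camera, target)]

cam = [0.0, 0.0, 0.0]
user = [10.0, 0.0, 0.0]
for _ in range(30):
    cam = ease_toward(cam, user)
# After 30 frames the camera has covered most, but not all,
# of the distance to the user.
```

A small factor gives a slow, cinematic follow; a factor of 1.0 would reproduce the jarring one-to-one tracking we wanted to avoid.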

After the conducting process, the generated virtual 3D mesh is printed with the spatial printing tool, using the fabrication method detailed here.

 

Possible structures:

With this system, users can create several kinds of tunnel-like structures depending on their conducting movements. Without any influence from the user, the agents move smoothly from the start point to the finish. This results in a mesh that looks much like a uniform pipe:

 

The distance between the user’s left and right hands determines the diameter of the mesh, from about 10 cm to half a meter. Varying this distance can either cinch the pipe or balloon it:
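The mapping can be sketched as a clamped Euclidean distance between the two tracked hand positions. The clamp bounds mirror the 10 cm to 0.5 m range above; the function shape and names are our own assumption, not the actual Grasshopper definition:

```python
import math

def mesh_diameter(left_hand, right_hand, lo=0.10, hi=0.50):
    """Map the distance between the user's hands (in meters) to
    the mesh diameter, clamped to the printable range [lo, hi]."""
    d = math.sqrt(sum((a - b) ** 2
                      for a, b in zip(left_hand, right_hand)))
    return max(lo, min(hi, d))
```

Clamping keeps the generated ring within what the spatial printing tool can reliably extrude, even if the user spreads or closes their hands beyond the useful range.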

 

The user’s right hand affects the direction in which the agents progress: repeated swiping gestures in the desired direction cause the agents to drift that way, creating curves in the mesh:
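A swipe can be sketched as a velocity threshold on the tracked hand between MoCap frames. The threshold, gain, and names below are hypothetical choices of ours, not values from the actual implementation:

```python
def swipe_force(prev_hand, curr_hand, dt, threshold=1.5, gain=0.2):
    """If the hand's speed between two MoCap frames exceeds the
    threshold (m/s), return an impulse in the swipe direction to
    steer the agent frame; otherwise return no force, so slow
    repositioning of the hand is ignored."""
    v = [(c - p) / dt for c, p in zip(curr_hand, prev_hand)]
    speed = sum(x * x for x in v) ** 0.5
    if speed < threshold:
        return [0.0, 0.0, 0.0]
    return [gain * x for x in v]
```

Filtering on speed is what makes the interaction feel like conducting: only deliberate, fast swipes steer the agents, while ordinary hand movement leaves them on their default path.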

 

The start and end positions can also be set anywhere to match the physical space if needed, as shown above. To generate a smoother, denser mesh, one can increase the number of agents and reduce the pace at which the truss segments are generated. Likewise, for a more open, low-poly mesh, the number of agents can be reduced to as few as three, with a higher pace.

More agents at a shorter pace.
Fewer agents at a longer pace.

See here for a chart of possible variations one can generate with our toolkit.

Outcomes

3D printing one of the generated structures. (Video speed increased; the actual print speed is 3 mm/s.)

Overall, the toolkit was successful, and we were able to generate mostly-printable meshes through this conducting process. Watching users work with the toolkit, and seeing how they interpreted the instructions to create virtual structures, helped us make the interaction more intuitive and usable, as well as develop a better explanation of the toolkit and how to use it.

Regarding the generated virtual 3D mesh, some self-intersection issues remain that we have not been able to completely resolve, owing to the way the gestural interaction is implemented. Additionally, a twisting issue sometimes occurs at the base of the structure; it becomes more visible at a shorter pace.

Manuel and his thesis partner developed the tool to the point that we could print one of the generated structures, shown below. The fabrication was mostly successful, although a few small issues with extrusion and air flow required repeated intervention to ensure that the plastic would stick and hold its shape.

 

Contribution

Manuel: GH/Python algorithm (gestural interaction, truss generator, agent definition), user interface, 3D printing

Jett: GH/Python algorithm (gestural interaction, camera motion, agent definition), user interface, blog post, video

Video

Prototype: Agent Conductor
https://courses.ideate.cmu.edu/16-455/s2018/666/prototype-agent-conductor/
2 April 2018
Manuel Rodriguez & Jett Vaultz

This project is a hybrid fabrication method in which a human and virtual autonomous agents together develop organic meshes using MoCap technology and 3D spatial printing techniques. The user makes conducting gestures to influence the movements of the agents as they move from a starting point A to a point B, guided by real-time visual feedback from a projection of the virtual workspace.

Screencap from our role-playing test, with an example of the agents’ movements and generated mesh.

 

Workflow diagram

For our shortest-path prototype, we discussed what the agents’ default behaviour might look like without any interaction or influence from the user, given start and end points A and B. We then took videos of what the conducting might look like with a set of agents progressing in this default behaviour from start to end, and sketched the effects the gestures would have on the agents while watching the videos. In Grasshopper, we began developing a Python script that dictates the movements of a set of agents progressing from one point to another in the established default behaviour.
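At this stage the default behaviour amounts to straight-line progression from each agent’s start point to the shared end point. A minimal sketch (names and structure are our own, not the actual script):

```python
def default_paths(starts, end, steps):
    """Default behaviour with no user influence: each agent moves
    in a straight line from its own start point to the shared end
    point, sampled at a fixed number of steps."""
    paths = []
    for s in starts:
        path = [tuple(a + (t / steps) * (b - a)
                      for a, b in zip(s, end))
                for t in range(steps + 1)]
        paths.append(path)
    return paths
```

User interaction later perturbs these paths; without it, the sampled positions of all agents trace the uniform-pipe silhouette sketched above.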

Sketch of the agents’ progression from the top plane of the role-playing test down to the second, without any influence from the user.

A sketch of a possible structure based on the video, which shows a silhouette of the movements of the agents.

Our next steps are to flesh out our Python script so that we can generate a plausible 3D mesh as the agents progress from start to finish, first without any interaction. Once we have this working, we will build a more robust algorithm for incorporating user interaction to create more unique and complex structures. The resulting mesh would then be printed from one end to the other using the spatial printing tools in development by Manuel.

 

Project Proposal: Agent Conductor
https://courses.ideate.cmu.edu/16-455/s2018/621/project-proposal-agent-conductor/
8 March 2018
Manuel Rodriguez & Jett Vaultz

 

Agent Conductor is a design and fabrication method that hybridizes the digital and physical worlds. It is a negotiation between two living parties: one physical, one digital. The former is a human; the latter, a set of agents with their own behaviour. Is it possible to tame them?

Sketch of proposed physical workcell

The design skill combines the ability to direct the set of agents as you will, within constraints given by a list of rules and digital tools, with the creativity of dealing with a dynamic entity that moves on its own. It is closer to training a dog than to honing a fabrication skill such as sculpting.

Hybrid Skill Workflow

To follow and understand your own movements relative to the piece being fabricated, a projector is necessary: it is where the agents live. Through streaming motion capture, the actor’s movements are recorded and projected, allowing full interaction with the agents.

The degree of manipulation of the agents’ behaviour is still to be determined. Nevertheless, a basic setup will consist of the number of agents, their speed, and the degrees of separation versus cohesion, alignment, and repulsion or attraction. These variables are expected to be controlled by a number of different rigid bodies; how to combine them is up to the user.
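These are the classic boids parameters, and one update step can be sketched as follows. The weights and names are illustrative assumptions of ours, not values from the eventual implementation:

```python
def boids_step(positions, velocities, sep=0.5, coh=0.01, ali=0.05, dt=1.0):
    """One update of a simple boids flock in 3D: separation pushes
    very close agents apart, cohesion pulls each agent toward the
    center of the others, and alignment nudges its velocity toward
    the average of the others. Weights are illustrative only."""
    n = len(positions)
    new_vel = []
    for i in range(n):
        # Center and average velocity of all other agents.
        center = [sum(p[k] for j, p in enumerate(positions) if j != i)
                  / (n - 1) for k in range(3)]
        avg_vel = [sum(v[k] for j, v in enumerate(velocities) if j != i)
                   / (n - 1) for k in range(3)]
        # Separation: repel only neighbours closer than 1 unit.
        push = [0.0, 0.0, 0.0]
        for j, p in enumerate(positions):
            if j == i:
                continue
            d = [positions[i][k] - p[k] for k in range(3)]
            dist2 = sum(c * c for c in d) or 1e-9
            if dist2 < 1.0:
                push = [a + b / dist2 for a, b in zip(push, d)]
        v = velocities[i]
        new_vel.append([v[k]
                        + coh * (center[k] - positions[i][k])
                        + ali * (avg_vel[k] - v[k])
                        + sep * push[k]
                        for k in range(3)])
    new_pos = [[p[k] + dt * v[k] for k in range(3)]
               for p, v in zip(positions, new_vel)]
    return new_pos, new_vel
```

Mapping each weight to a rigid body gives the user independent, physical handles on how tight, orderly, or wild the flock behaves.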

The resulting geometry is also variable: from a curve traced through all the tracked locations of the agents (with variable thickness), to a surface formed by lofting those curves, to more volumetric bodies made from shapes associated with the agents’ motion.

The algorithms will be written in Python within Grasshopper: agent behaviours, interaction with the user, the resulting geometry, and the fabrication files.

Fabrication will happen by means of additive manufacturing, with robotic spatial printing as the chosen medium. The achievable thickness and degree of fabricability will depend on the development of the tool, so some coding in HAL and RAPID will be necessary at this stage.
