This Grasshopper sketch implements an interactive ‘playing card’ model editor controlled using motion capture gesture input.

There are three kinds of Python files in this sketch folder.

  1. Python scripts embedded in GHPython blocks within the .gh file. As a convention, these scripts have also been saved externally as .py files for reference.
  2. Local Python modules imported by the GHPython scripts.
  3. Command-line scripts run separately from Rhino.

MocapDemo Sketch

The interactive application is contained in one large sketch. The following notes follow the data flow, which generally moves left to right. Please note that the GHPython blocks use the scriptcontext module to save objects in a global dictionary, scriptcontext.sticky. This is the primary means of preserving state between solver passes to enable iterative computation.
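The sticky pattern can be sketched as follows. scriptcontext is only available inside Rhino, so this hypothetical example falls back to a plain dict when run elsewhere; the buffer key and size limit are illustrative assumptions, not names used by the sketch.

```python
# Sketch of the scriptcontext.sticky persistence pattern used by the
# GHPython blocks. Outside Rhino, a plain dict stands in for sticky.
try:
    import scriptcontext
    sticky = scriptcontext.sticky
except ImportError:
    sticky = {}  # stand-in when running outside Rhino/Grasshopper

def accumulate(sample, key='trajectory_buffer', limit=100):
    """Append a sample to a ring buffer that persists across solver passes.
    The key name and limit are hypothetical."""
    buf = sticky.setdefault(key, [])
    buf.append(sample)
    del buf[:-limit]  # keep only the most recent samples
    return buf
```

Because sticky survives from one solver pass to the next, each timer tick can extend the same buffer instead of starting from scratch.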

Mocap Stream Receiver

This portion runs the Python networking module optirx, which receives streaming OptiTrack motion capture data from the Motive application. The version number sets the protocol version to match the installed version of Motive (dFAB uses 2500; IDeATe uses 2900). A timer iteratively re-runs the script to poll the network interface. The data arrives at 120 fps but polling may run at only 20 fps, so each poll returns a tree of results containing multiple samples for multiple bodies. The individual body names are reported, along with a tree of Plane objects indicating the body poses.
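A rough sketch of the polling logic: the integer version setting maps to a protocol version tuple, and each timer tick drains all pending packets into per-body sample lists. The unpack callable stands in for optirx.unpack, and the frame and rigid-body field names are assumptions modeled on optirx's frame objects, not verified against the sketch.

```python
def natnet_version(motive_setting):
    """Map the sketch's integer version setting to a protocol tuple,
    e.g. 2500 -> (2, 5, 0, 0) for dFAB and 2900 -> (2, 9, 0, 0) for IDeATe."""
    major, rest = divmod(motive_setting, 1000)
    return (major, rest // 100, 0, 0)

def poll(packets, unpack):
    """Unpack every packet received since the last timer tick. Data arrives
    faster than the timer fires, so one poll yields several frames; the
    result maps rigid body id to its list of (position, orientation)
    samples in arrival order. The field names are assumptions."""
    samples = {}
    for raw in packets:
        frame = unpack(raw)  # e.g. optirx.unpack(raw, version=...)
        for body in frame.rigid_bodies:
            samples.setdefault(body.id, []).append(
                (body.position, body.orientation))
    return samples
```

Grouping the backlog this way is what produces the tree of multiple samples per body described above.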

Mocap CSV File Loader/Mocap CSV File Player

This portion can load a CSV file captured in the normal way from Motive and play it back through the sketch. This is useful for offline debugging of the interactive interface.
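A minimal loader might look like the following. The column layout (frame index, timestamp, then X, Y, Z for one body) and the number of header rows are assumptions for illustration; real Motive CSV exports vary by version and by the number of tracked bodies.

```python
import csv
import io

def load_positions(csv_text, skip_header_rows=7):
    """Parse frame index, timestamp, and one body's X,Y,Z position from a
    Motive-style CSV export. Returns a list of (frame, time, (x, y, z)).
    The (frame, time, x, y, z) column order and the seven header rows
    are hypothetical."""
    rows = list(csv.reader(io.StringIO(csv_text)))[skip_header_rows:]
    frames = []
    for row in rows:
        if len(row) < 5 or not row[0].strip():
            continue  # skip blank or incomplete rows
        frame, t = int(row[0]), float(row[1])
        x, y, z = (float(v) for v in row[2:5])
        frames.append((frame, t, (x, y, z)))
    return frames
```

The player portion then replays these rows through the same downstream components as the live receiver, which is what makes offline debugging possible.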

Select input source and divide trajectory streams

This portion uses the body names to segment the incoming tree of trajectory data by body. This removes any dependency on the order in which bodies appear in the stream data, guaranteeing that each body's trajectory is interpreted correctly.
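A minimal sketch of this segmentation, assuming one trajectory branch per reported body name:

```python
def segment_by_body(names, trajectory_tree):
    """Pair each body name with its branch of the trajectory tree so that
    downstream components can look bodies up by name instead of relying
    on their position in the mocap stream."""
    if len(names) != len(trajectory_tree):
        raise ValueError("expected one trajectory branch per body name")
    return dict(zip(names, trajectory_tree))
```

Once keyed by name, a branch such as the wand or cursor trajectory can be wired to the correct downstream filter regardless of stream ordering.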

Gesture Event Detection

The demo treats the ‘Gesture’ input as a wand which can be flicked to indicate events. This section runs a filter that estimates the acceleration of the tip of the wand and detects high-acceleration inflection points, which are treated as flick events.
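One way to sketch such a filter is a central second difference over the sampled tip positions, reporting a flick at each local acceleration maximum above a threshold. The actual filter and threshold used by the sketch are not specified here, so treat this as an illustration of the idea.

```python
def detect_flicks(positions, dt, threshold):
    """Estimate tip acceleration from position samples (uniform spacing dt)
    using a central second difference, and return the indices of local
    acceleration peaks above threshold. Hypothetical stand-in for the
    sketch's actual filter."""
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))
    def mag(v):
        return sum(c * c for c in v) ** 0.5
    accel = [0.0]
    for i in range(1, len(positions) - 1):
        d2 = sub(sub(positions[i + 1], positions[i]),
                 sub(positions[i], positions[i - 1]))
        accel.append(mag(d2) / (dt * dt))
    accel.append(0.0)
    # a flick is a local maximum of acceleration above the threshold
    return [i for i in range(1, len(accel) - 1)
            if accel[i] > threshold
            and accel[i] >= accel[i - 1] and accel[i] > accel[i + 1]]
```

Peak-picking on the acceleration magnitude (rather than simple thresholding) avoids reporting one sustained flick as several events.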

Record Indicated Poses

This buffer captures the Cursor input pose at the time of each flick event.

Editor Logic

This is the heart of the interactive application and where all the main complexity lives.

A few general notes: the SelectedGeometry, LayerGeometry, AddGeometry, and ReplaceGeometry scripts read and write objects from and to the RhinoDoc database. The overall logic is modal: a set of selected objects is read from a layer, the gestural input modifies a transient copy of the geometry until the user is done, and then the changes are either abandoned or written back into the RhinoDoc. Part of the complexity is keeping track of the set of GUIDs identifying the selected objects in the database until the time arrives to modify or delete them. There are also some dependencies on naming conventions within the database: the cards are named with unique numbers and stored on a layer named ‘Cards’. Names are kept unique by allocating only unused indices when creating new cards, which also bounds the total number of cards that can be created.
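The unique-index allocation might look like the following sketch; the 'card' prefix and the index limit are assumptions chosen for illustration, not values confirmed by the sketch.

```python
def next_card_name(existing_names, limit=100, prefix='card'):
    """Allocate the lowest unused index below limit for a new card name.
    Returns None when all indices are taken, which is what caps total
    card creation. Prefix and limit are hypothetical."""
    used = set()
    for name in existing_names:
        if name.startswith(prefix):
            try:
                used.add(int(name[len(prefix):]))
            except ValueError:
                pass  # ignore names that don't follow the convention
    for i in range(limit):
        if i not in used:
            return '%s%d' % (prefix, i)
    return None  # all indices in use; no new card can be created
```

Scanning the existing layer contents on each allocation keeps names unique even after cards have been deleted and their indices freed.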


The post-processing of the resulting card model into a laser-cuttable card layout is performed by a second Grasshopper sketch within the same folder.

GHPython Block Scripts

These scripts are run using GHPython semantics: the block inputs are provided as global variables, the block outputs are read from global variables, and objects are translated as per the menu settings for each input or output port.
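These semantics can be imitated in plain Python. In this hypothetical block, Grasshopper would supply the input ports 'names' and 'planes' as globals and read the output port 'paired' after the script finishes; outside Grasshopper the inputs are bound manually, and the port names are invented for illustration.

```python
# Hypothetical GHPython block body. Inside Grasshopper, 'names' and
# 'planes' would be populated from the input ports (with list access)
# and 'paired' would be read back from the output port.
names = ['wand', 'cursor']                   # stand-in for an input port
planes = [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)]  # stand-in for an input port

# The block body just computes its outputs from the input globals.
paired = ['%s @ %s' % (n, p) for n, p in zip(names, planes)]
```

Since the externally saved .py copies of these scripts follow the same convention, they can be exercised this way for offline testing before being pasted back into the .gh file.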