Exercise: Robot Jockey¶
The main objective of this exercise is to perform live with a simulation in front of the class. You will need to refine your Webots simulation model, program behaviors to support your show, set up a simple control interface, then practice live performance. You may choose to continue developing your model from the previous week or develop a new model.
The broader objective remains reflecting carefully on how form and movement can be used deliberately to provoke an emotional experience. But now the focus shifts to the balance of autonomy between the human performer and the machine behavior, exploring how the process can uncover richer behavior, and observing how the relationship with the audience changes.
The title is inspired by the role of the DJ in live music performance. The DJ can take on a number of roles, from live instrumental performance to choosing and blending pre-recorded material. The human DJ works with automation to generate the majority of the expression while still making specific decisions in response to the crowd.
After this exercise, you should be able to:
Refine a kinetic sculptural form capable of intuitive, emotional expression conveyed through form and movement.
Develop performance primitives for autonomous machine expression.
Integrate controls for human input to provide compositional and expressive control.
Improvise machine performance and discover expressive opportunities.
Practice and perform a short show.
Please review the following reference guides as needed:
Many sample Webots worlds are included in the reference projects in Webots.zip described under Webots Robot Simulator Examples. You are welcome to use one as a starting point, but please note this in your documentation.
The first part of this exercise is concerned with preparing resources which can support live performance. One way to understand this is as a set of nested systems which produce expressive behavior:
The most immediate form of expression is the form and context of the physical machine. The sculptural properties and the setting convey an impression in a single image.
The actuated and natural dynamics of the machine frame the behavior. The audience can intuit the mass, force, and proprioceptive feedback through the movement of both programmed actuation and unactuated elements.
The interactions of the machine with props or other machines constitute a set of external dynamics. These can also trigger somatic empathy, or may include symbolic or textual interpretation.
The autonomous control driving the machine is not directly visible but is manifested through poses, trajectories, scripted sequences, feedback behaviors, and reactive behaviors.
The human control can operate at a variety of levels including high-level scripting decisions, parametric choices, or low-level direct control.
The human could be augmented with additional automation in the form of machine learning which evolves the autonomous control in response to the human.
The human performer improvises, explores, composes, and practices.
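As one concrete way to build the scripted-sequence layer of the autonomous control, a motion primitive can be expressed as a set of timestamped keyframe poses with interpolation in between. The sketch below is a minimal, self-contained Python example; the pose representation (a tuple of joint positions in radians) and the keyframe values are illustrative assumptions, not part of the exercise spec. Inside a Webots controller you would sample this function once per simulation step and send each joint value to a motor with `setPosition()`.

```python
from bisect import bisect_right

def keyframe_pose(t, keyframes):
    """Linearly interpolate a pose between timestamped keyframes.

    keyframes: list of (time_sec, pose) pairs sorted by time, where
    pose is a tuple of joint positions (e.g. radians). Times outside
    the keyframe range clamp to the first or last pose.
    """
    times = [k[0] for k in keyframes]
    if t <= times[0]:
        return keyframes[0][1]
    if t >= times[-1]:
        return keyframes[-1][1]
    i = bisect_right(times, t)          # first keyframe after t
    t0, p0 = keyframes[i - 1]
    t1, p1 = keyframes[i]
    alpha = (t - t0) / (t1 - t0)        # blend factor in [0, 1]
    return tuple(a + alpha * (b - a) for a, b in zip(p0, p1))

# A hypothetical two-joint "bow" gesture lasting three seconds.
BOW = [(0.0, (0.0, 0.0)), (1.5, (0.8, -0.4)), (3.0, (0.0, 0.0))]
```

Because the primitive is a pure function of time, it is easy to layer: the performer can retrigger it, stretch it by scaling `t`, or crossfade it with a reactive behavior.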
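At the parametric-choice level of human control, one simple interface is to map single keypresses to changes in a small set of performance parameters. The sketch below is a hedged example: the parameter names (`tempo`, `amplitude`, `sequence`) and the key bindings are hypothetical placeholders for your own show. The integer key codes follow the convention of the Webots `Keyboard` device, whose `getKey()` method returns the code of the pressed key, or -1 when none is pressed; the mapping itself is kept as a pure function so it can be developed and tested outside the simulator.

```python
# Hypothetical parameters for a show; substitute your own primitives.
DEFAULT_PARAMS = {"tempo": 1.0, "amplitude": 0.5, "sequence": "idle"}

def apply_key(params, key):
    """Return an updated copy of the performance parameters for one
    keypress, where key is an integer key code (or -1 for no key)."""
    params = dict(params)
    if key == ord('W'):       # speed up, capped at 4x
        params["tempo"] = min(params["tempo"] * 1.1, 4.0)
    elif key == ord('S'):     # slow down, floored at 0.25x
        params["tempo"] = max(params["tempo"] / 1.1, 0.25)
    elif key == ord('A'):     # smaller gestures
        params["amplitude"] = max(params["amplitude"] - 0.05, 0.0)
    elif key == ord('D'):     # larger gestures
        params["amplitude"] = min(params["amplitude"] + 0.05, 1.0)
    elif key == ord('1'):     # cue a scripted sequence
        params["sequence"] = "idle"
    elif key == ord('2'):
        params["sequence"] = "flourish"
    return params
```

In a Webots controller you would enable the keyboard device, poll it each step, and feed the resulting parameters into whatever autonomous behaviors are running, so the human steers the composition without puppeteering every joint.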
The deliverables for this exercise are as follows:
An in-class live performance of your simulation between 90 seconds and three minutes long.
A paragraph briefly describing your objectives and outcomes in a post on the RCP project site, either public or private.
A short video clip of human and robot performance, submitted as an embedded video. Small .mp4 files may be directly uploaded, or you may embed a third-party streaming service.
A zip file of your Webots project. Please give distinctive names to your modified worlds, controllers, and protos (or delete the extraneous ones), and please preserve the folder structure (e.g. worlds/, protos/, controllers/) so it could be unpacked and used.