Exercise: Visual Interaction¶
The main objective of this exercise is to explore audience interaction with a simulated kinetic sculpture. A secondary objective is exploring the use of a camera as a controller for live improvisatory performance.
The broader objective remains to reflect carefully on how form and movement can be used deliberately to provoke an emotional experience. The focus now shifts to the expressive potential of reactive behaviors in an autonomous process.
Objectives¶
After this exercise, you should be able to:
Develop performance primitives for autonomous machine expression.
Integrate camera input to create expressive reactive behaviors.
Improvise machine performance and discover expressive opportunities.
Resources¶
Webots simulation model: Impeller Array, included in Webots.zip as impeller-array.wbt.
Camera input server script: Camera Input
Reference Guides¶
Please review the following reference guides as needed:
OpenCV computer vision library
python-osc OSC UDP networking library
Deliverables¶
Please be prepared to demo your system live in class.
A Google Doc with a brief paragraph describing your choreographic intent and outcome, submitted as per Documentation Procedure. This may be private if you prefer.
Your Python controller script, submitted as a properly formatted code block within the document.
A short video clip (under 60 seconds) of the robot performance, uploaded to the Google Shared Drive and included as a link within the document.
A zip file of your Webots project, uploaded to the Google Shared Drive and linked into the document. Please note the relevant world and controller file names in your doc. Please preserve the folder structure (e.g. worlds/, protos/, controllers/) so it can be unpacked and used. If possible, give distinctive names to your modified worlds, controllers, and protos (or delete the extraneous ones).