Exercise: Vision-Guided Behavior¶
Note: this exercise is a work-in-progress and still being revised.
The main objective of this exercise is to create an autonomous performance in which a simulated machine with a simulated camera responds to its environment. The conceit is that the machine actually responds to visual stimuli, and thus reflects its relationship to its environment through its autonomous behavior.
Objectives¶
After this exercise, you should be able to:
Formulate a single-agent narrative as autonomous reactive behaviors.
Convey intuitive, emotional, communicative expression through the gestural movement of a kinetic sculptural form.
Simulate and render a one-minute composition.
Resources¶
The recommended model is the Arm Camera world.
All sample Webots worlds are included in the reference project Webots.zip described under Webots Robot Simulator Examples.
Deliverables¶
A Webots simulator project including a machine in the A11 space reacting to its environment.
A Google Doc with a brief paragraph describing your choreographic intent and outcome, submitted within a subfolder of the appropriate shared folder.
Your Python controller script uploaded as one or more files.
A one-minute video clip of the robot performance, uploaded to the Google Shared Drive and included as a link within the document.
A zip file of your Webots project. Please give distinctive names to your modified worlds, controllers, and protos (or delete the extraneous ones), and please preserve the folder structure (e.g. worlds/, protos/, controllers/) so it can be unpacked and used.
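As a starting point for the controller deliverable, the sketch below shows one minimal way a Webots Python controller could turn camera input into motion: it averages the brightness of the camera image and maps that to a joint velocity. The device names `"camera"` and `"base_motor"` are assumptions and must be changed to match the devices in your world (e.g. the Arm Camera world); the brightness-to-velocity mapping is only a placeholder for your own choreographic logic.

```python
# Hypothetical vision-guided Webots controller sketch.
# Device names ("camera", "base_motor") are assumptions; adapt to your world.

def mean_brightness(image_bytes, width, height):
    """Average brightness (0-255) of a Webots BGRA image buffer."""
    total = 0.0
    n = width * height
    for i in range(n):
        b = image_bytes[4 * i]        # Webots packs pixels as B, G, R, A
        g = image_bytes[4 * i + 1]
        r = image_bytes[4 * i + 2]
        total += (r + g + b) / 3.0
    return total / n

def brightness_to_velocity(brightness, max_speed=1.0):
    """Placeholder reactive policy: brighter scenes produce faster motion."""
    return max_speed * (brightness / 255.0)

if __name__ == "__main__":
    try:
        # The controller module is provided by Webots at simulation time.
        from controller import Robot
    except ImportError:
        Robot = None  # running outside Webots; skip the control loop

    if Robot is not None:
        robot = Robot()
        timestep = int(robot.getBasicTimeStep())
        camera = robot.getDevice("camera")     # assumed device name
        camera.enable(timestep)
        motor = robot.getDevice("base_motor")  # assumed device name
        motor.setPosition(float("inf"))        # switch to velocity control
        while robot.step(timestep) != -1:
            img = camera.getImage()
            if img:
                v = brightness_to_velocity(
                    mean_brightness(img, camera.getWidth(), camera.getHeight()))
                motor.setVelocity(v)
```

Keeping the image analysis and the mapping as pure functions, separate from the Webots loop, makes it easy to test and tune your behavior logic outside the simulator.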