Demo: Ambient Telepresence

Each Demo assignment is a micro-project which includes both technical and creative elements. The primary focus is on developing your technical skills, but each will include an element of application to a human context.

The principal design prompt is to develop a device which supports symmetric ambient telepresence. We will define ambient telepresence as the process of physically signaling the passive presence or activity of a remote human being. By passive I mean that the remote person does not need to take intentional action for their activity to be communicated. By symmetric I mean that the device functions both as a sensor and a display, i.e., a transmitter and a receiver. The display should involve physical movement in some way. Such a device could continuously inhabit a human space, provide a remote awareness, and encourage a feeling of community.

The principal technical goal of the project is to apply low-power actuators and bearing components to create smooth, quiet motion. The secondary goal is to use the ultrasonic range sensor as a medium-distance activity sensor.

The display must be physical; this is not a project involving lighting, the LCD display, or electronically generated sound. My own assumption is that a tolerable ambient device will be sculptural, quiet, and slow. These properties are not required, however, if you have a well-considered alternative.

My primary suggestion for leveraging the limited torque of the micro-servo is to design the mechanical system as one or more parts in a neutrally stable equilibrium. The simplest example is a beam pivoted around the center of mass: it is neutrally stable, with weight supported by the pivot, so an attached actuator need only overcome friction to create movement. The visual and sculptural content of the moving part is up to you.

Objectives

After this exercise, you should be able to:

  1. Construct a rotational axis using shoulder screw shafts and bushing bearings.

  2. Construct a parallel-axis drive using a hobby servo and tie rod.

  3. Apply an ultrasonic range sensor for ambient human presence detection.

  4. Design a balanced mechanical structure suited for ambient display.

  5. Work with a partner to develop either a common design or complementary designs.

  6. Develop control software to activate a mechanical display based on either local or remote telepresence signals.

Technical Constraints

  1. You and your partner will work together remotely to design and build a pair of devices. These may be similar or complementary designs.

  2. Stick to exactly one micro hobby servo for actuation.

  3. Please note that the weight of the moving masses must be decoupled from the servo axes.

    • Please support the moving mass using structure and bearings so the primary loads are transmitted to ground through the structure rather than through the servo.

    • Designs with mass directly attached to the servo horn will be summarily dismissed. Tie rods are exempt.

  4. The physical movement must be visible to the eye, i.e., no purely audible results.

  5. No LEDs, displays, or buzzers.

  6. Your activity detection signal should take the form of a single scalar value between 0 and 100 generated at 1 Hz (once per second). The controller for the activity display should accept the same signal.

  7. The semantics of the signal is up to you and your partner to negotiate. The minimum solution would be a binary presence value. However, the use of a continuously-variable output is strongly encouraged.

  8. There must be both a computational and a physical process creating the outcome.

  9. Please focus your attention on the concept and function; you won’t be judged on fine fabrication.

  10. The device should actually work; no faking.
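One way to satisfy the 0-100 scalar requirement is to map each sonar distance reading onto that range before transmitting it. The sketch below is a minimal illustration of that mapping; the function name and the 20-200 cm working window are assumptions you would tune for your own device, not part of the assignment spec.

```cpp
// Map a raw sonar distance (cm) to the required 0-100 activity value.
// Closer readings indicate stronger presence; invalid or out-of-range
// readings map to 0. The 20-200 cm window is an assumed working range.
int distanceToActivity(float distance_cm) {
    const float kNear = 20.0f;   // full activity at or closer than this
    const float kFar  = 200.0f;  // zero activity at or beyond this
    if (distance_cm <= 0.0f || distance_cm >= kFar) return 0;
    if (distance_cm <= kNear) return 100;
    float t = (kFar - distance_cm) / (kFar - kNear);  // 0..1, near = 1
    return static_cast<int>(t * 100.0f + 0.5f);
}
```

Calling this once per second on the latest reading yields the 1 Hz signal; the same function's output range is what your display controller should expect on the receiving side.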

Creative Prompts

  1. You can use whatever materials are available to you. The kit includes some laser-cut parts which may be useful as structural foundation. The air-dry clay can be used for free-form parts or solving connection problems. Cut and folded cardboard (not supplied) is also a versatile building material, as is lightweight papercraft and origami.

  2. A single-axis driven motion can be magnified using secondary passive elements. So for example, a motor-driven pivoting part could support flexible papercraft or fabric to create a complex motion from a single freedom.

  3. Additional passive mechanical freedoms are also possible; e.g. consider the effect of a Calder mobile driven by a single motor.

  4. A more graphic or visual effect could be achieved using printed paper or secondary elements. E.g. a patterned graphic moving behind a cutout scrim would create moire effects.

  5. A device based on a robust physical metaphor will allow some versatility with additional programming. E.g., if all it can do is raise a flag to indicate a binary state, it won't offer much in the way of parametric variation or subtle gesture, so options for refinement will be limited.

Part 1: Design and Construction

The first part includes the design and construction of the device hardware.

I highly recommend making a series of paper sketches and sharing them with your partner before either begins construction. No CAD model is required, but it may help with planning to draw at least a rough layout in CAD. If you do produce CAD sketches, please share those as part of your submitted results.

For this phase, the result should include at least a minimal Arduino sketch capable of sensing a person and triggering a motion, but it need not be final. This sketch should use the onboard sensor for local testing, but please be prepared to switch easily to remote MQTT communication using the serial port.
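For the "sensing a person and triggering a motion" logic, a simple threshold with hysteresis avoids rapid on/off chatter when someone hovers near the detection boundary. The sketch below shows only that core decision logic as plain C++ so it can be dropped into an Arduino loop; the struct name and threshold distances are illustrative assumptions.

```cpp
// Hysteresis presence detector: declare presence when someone comes
// within the near threshold, release only once they retreat past the
// far threshold. Thresholds are illustrative; tune for your placement.
struct PresenceDetector {
    bool present = false;
    bool update(float distance_cm) {
        const float kEnter = 100.0f;  // cm: declare presence closer than this
        const float kExit  = 150.0f;  // cm: declare absence farther than this
        if (distance_cm > 0.0f && distance_cm < kEnter) present = true;
        else if (distance_cm <= 0.0f || distance_cm > kExit) present = false;
        return present;
    }
};
```

In your sketch, `update()` would be called with each new sonar reading, and a transition to `true` would start the motion.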

Part 1 Deliverables

Since each partner is fabricating a separate device, each should separately document their work with an individual upload. However, you are encouraged to share results wherever possible, including electrical schematics, mechanical sketches, and the text writeup.

The result of part 1 should include the following uploads to Canvas:

  1. a brief video (~30 seconds) demonstrating operation of your device

  2. several high-resolution photos (enough to identify the wiring and understand the structure)

  3. your Arduino sketch uploaded as a single .ino file

  4. a hand-drawn schematic diagram showing all circuits

  5. a brief paragraph describing any problems you encountered, submitted as a text file

Note: please apply your name as a prefix on the filename for all files uploaded to Canvas.

Part 2: Revised Mechanism, Signal Processing, Animation, and Telepresence

The first part of this exercise produced a first iteration of a mechanical device intended as an ambient display. For the second phase, we’ll revisit both the hardware and software.

Hardware

Please revise the physical device toward satisfying the following prompts:

  1. How might you customize the personality of the device to reflect a particular remote person (not necessarily your working partner)?

  2. Could this device operate continuously in your living space?

    1. Which particular context would it inhabit, and how would it be adapted to suit the space?

    2. How could it be appropriately unobtrusive or attention-getting?

    3. What kinds of sounds should it make, or not make?

    4. How could it be constructed to operate more reliably and autonomously?

  3. How could the movement be more considered? Please consider your choices for the following attributes: gestural vocabulary, tempo, rests or pauses, precision, iteration, predictability.

  4. What other materials, objects, props, or cultural signifiers could be included to create more meaning for the movement?

  5. You may consider adding a second actuator if well-motivated by your specific expression.

Software

The software in the device notionally implements several functions:

  1. sensor interpretation

  2. remote communication

  3. physical animation

Each of these needs to be considered in light of the semantics of the physical device. The potential for abstraction comes in the definition of the signal used for remote communication. The simplest solution is to send raw data, e.g., sonar distance reading samples. However, this data is noisy and generally poorly suited as a parameter for the remote physical animation.

Please consider the appropriate level of abstraction for the semantics of the symmetric communication. This could be single or multi-dimensional; the nature of a time series is that even a single sensor encodes high-dimensional data over time. Some possibilities include:

  1. a smoothed or low-pass filtered signal estimating average position

  2. trajectory estimates: the filtered position and estimated velocity

  3. symbolic human activity events: arrival, departure, sitting, standing, etc.

  4. the dominant frequency of periodic movements; different frequencies might reflect different activities
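The first two possibilities can be combined in a few lines of code. The sketch below is one hedged implementation of a smoothed position with a finite-difference velocity estimate; the struct name, the smoothing factor, and the assumption of a fixed 1 Hz sample interval are all illustrative choices rather than requirements.

```cpp
// Exponential moving average with a finite-difference velocity estimate,
// for smoothing noisy sonar samples arriving at a fixed rate (e.g. 1 Hz).
// alpha near 0 smooths heavily; alpha near 1 tracks the raw data closely.
struct TrajectoryFilter {
    float position = 0.0f;   // filtered position estimate
    float velocity = 0.0f;   // estimated change per sample interval
    bool  primed   = false;
    void update(float sample, float alpha = 0.3f) {
        if (!primed) { position = sample; primed = true; return; }
        float previous = position;
        position += alpha * (sample - position);  // low-pass filter
        velocity  = position - previous;          // per-sample velocity
    }
};
```

The filtered position and velocity together form a compact two-dimensional signal that is far better behaved as an animation parameter than the raw distance samples.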

A key step is to observe actual data. This could be performed in real time by setting up the sensor in situ and watching the Serial Plotter while performing some characteristic activities. It could be performed offline by capturing the sensor output stream and saving it to a file, then loading into a spreadsheet or plotting program of your choice.

The physical animation can be improved by considering the most suitable motion vocabulary and programming it as a parameterized motion generator. Then the abstract semantics of the communication can be mapped onto appropriate gestures. This will both decouple the motion generation from the literal data and customize it for your application.
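As a concrete (and purely illustrative) example of a parameterized motion generator, the function below maps an abstract activity level onto a slow sinusoidal sweep whose amplitude and tempo grow with activity. The center angle, amplitude range, and period are assumed values; the point is that the communicated semantics drive gesture parameters rather than servo angles directly.

```cpp
#include <cmath>

// Parameterized gesture: maps an activity level (0-100) and elapsed time
// to a servo angle. Higher activity produces a wider, faster sweep.
// Center angle, amplitude range, and period are illustrative choices.
float gestureAngle(int activity, float t_seconds) {
    const float kCenter = 90.0f;                 // servo midpoint, degrees
    float level = activity / 100.0f;             // normalize to 0..1
    float amplitude = 10.0f + 50.0f * level;     // degrees of sweep
    float period = 20.0f - 12.0f * level;        // seconds; faster when active
    float phase = 2.0f * 3.14159265f * t_seconds / period;
    return kCenter + amplitude * std::sin(phase);
}
```

In a sketch, the loop would periodically call this with the latest received activity value and the elapsed time, then write the result to the servo.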

The communication itself will need to be configured for plain text numerical input and output to be compatible with the MQTT bridge program for testing in real-time at remote distance.
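On the receiving side, it is worth validating the plain-text input rather than trusting it. The parser below is one cautious sketch: it assumes a protocol of one decimal integer per newline-terminated line, which is an assumption about the bridge format that you and your partner would need to confirm, and the function name is hypothetical.

```cpp
#include <cstdlib>

// Parse one plain-text line from the remote link into a 0-100 activity
// value. Returns -1 on malformed or out-of-range input. Assumes one
// decimal integer per newline-terminated line.
int parseActivityLine(const char *line) {
    if (line == nullptr || *line == '\0') return -1;
    char *end = nullptr;
    long value = std::strtol(line, &end, 10);
    if (end == line) return -1;                 // no digits found
    while (*end == ' ' || *end == '\r' || *end == '\n') ++end;
    if (*end != '\0') return -1;                // trailing junk
    if (value < 0 || value > 100) return -1;    // out of protocol range
    return static_cast<int>(value);
}
```

Rejecting garbage here keeps a dropped character or partial line from producing a spurious jump in the remote animation.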

Testing

While development is easiest to perform using a local mapping from sensor to output, this phase will require coordinating with your partner on real-time testing using the MQTT server.

Please consider how to best approximate a real-life test of your device. I know you may not be free to leave a computer attached to it for an extended period, but please coordinate a test of ambient operation while performing some unrelated tasks.

Part 2 Deliverables

The result of part 2 will take the form of a joint post on the course site documenting the demo result. Please create only one post per group.

  1. Post an entry on the 16-223 WordPress site including a brief video, a short paragraph describing the objectives and outcomes, and your code (properly formatted).

  2. If your Arduino sketch is a single file, please post your code inline, using a SyntaxHighlighter Code block with the Code Language set to “C/C++”, described on the site help page. If it includes multiple files, you may either post them inline or as a zip. Do not post your code as images; images cannot be copied as text or searched.

  3. Please embed your video so it can be watched directly from the post. The easiest way to do this is to host it on a third-party site. Videos hosted directly on the course site should be .mp4 files and use the appropriate video shortcodes. N.B. hosted QuickTime .mov files cannot be embedded.

  4. Please make all links active. Just pasting in a URL does not necessarily make it clickable.

  5. Please clearly cite any sources. It’s fine to use libraries or modify code you find elsewhere as long as they are clearly credited.