December 8th, 2018
Laser World is an interactive environment that lets the user visualize the paths of light using lasers and fog. The experience was designed to evoke a mysterious atmosphere: the fog and the beams, moved on their own by rotating mirrors, draw you in and invite you to interact. In the end, the level of darkness and mystery the project required scared some of the children a little too much, making the project less effective because it did not draw them in enough.
Mechanical:
Electrical:
Successes
Due to a failed fog generator, we were unable to get good test data at the Children’s Museum; however, when testing later, we observed that the enclosure achieved most of our objectives: it made the laser path visible and allowed us to interact with it by changing the angle at which the beam bounced off the mirrors, as shown in our video below.
Through early testing, we discovered that the top viewing surface became obscured with condensation, which made the lasers harder to see. We corrected this by applying an anti-fogging coating to the inside of the acrylic box, which kept the condensation from recurring.
Our actuators worked as intended and gave the enclosure a life of its own while also taking commands from the user without overpowering the children. In the future, we could add a gripping pattern to the handle to help children grasp it; during testing, however, we wanted to make sure the handle would not dig into children’s hands, so we left it smooth.
Failures
From our testing at the Children’s School at Carnegie Mellon, we discovered a factor we had not accounted for: children’s fear of the dark. We had hoped that lights shining and moving in a darkened area would draw children in, but some children found it scary and did not want to investigate. Because the project’s viability depended on a slightly darkened area, this was an issue we were unable to resolve; fortunately, only a small percentage of the children were affected, so it was not too large a problem.
Another failure we encountered, and hope to fix in the future, was the enclosure’s drainage problem. As the fog condensed within the box, it formed a pool of water that made the enclosure difficult to disassemble cleanly without spilling water everywhere. This could be solved by adding a resealable hole at the bottom of the box to let the water out before opening the enclosure fully.
Although our ideal vision of the project involved the lasers rebounding nearly indefinitely to form intricate patterns, we realized this was impossible: the very scattering in the fog that makes the beam visible also drains its power, so the beam fades along its path. Because of this, we had to limit the size of our box so that the beam could reflect off the walls several times before fading. Alternatively, we could have focused on the intricate pattern and kept the box smaller, but we decided it needed to be larger in order to attract attention and to fit enough mirrors for multiple children to play at once and interact with each other.
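This falloff behaves roughly like Beer–Lambert attenuation: each wall-to-wall pass through the fog multiplies the beam's power by a constant factor. A minimal sketch of that reasoning (the extinction coefficient, path length per bounce, and visibility cutoff below are illustrative guesses, not measured values for our fog or lasers):

```cpp
#include <cmath>

// Beer-Lambert falloff: fraction of beam power remaining after `dist` meters
// of fog with extinction coefficient `mu` [1/m]. Both values are assumptions.
double remainingFraction(double dist, double mu) {
    return std::exp(-mu * dist);
}

// Number of wall-to-wall bounces before the beam drops below `threshold`
// of its original power; `bounceLen` is the assumed path length per bounce [m].
int bouncesUntilDim(double bounceLen, double mu, double threshold) {
    double power = 1.0;
    int bounces = 0;
    while (power > threshold) {
        power *= remainingFraction(bounceLen, mu);
        ++bounces;
    }
    return bounces;
}
```

With these made-up numbers (`bouncesUntilDim(0.6, 0.5, 0.10)`), the beam stays usefully bright for only a handful of bounces, and halving the bounce length roughly doubles that count — which is why shrinking the box helps more than adding mirrors.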
Connor Colombo:
Michael Mong:
Final Demo: Our final project featured three mirrors which could be rotated to change the paths of the lasers. In this video, only one actuator is functional.
Initial Test Setup: This test enclosure showed that we could create a visible laser path within our box and that we could interact with it by rotating the mirror.
Foggy Test Box: After running our test setup for a while, we realized that the laser’s path was no longer as visible, which prompted us to apply an anti-fogging coating to the enclosure window.
Two Laser Test with Actuation: Prior to our final design we modified our initial test to better integrate the actuation and interact with two lasers instead of one to get a better understanding of how all the components worked together.
Internals of the series-elastic actuator during an early functionality test, with the drive motor at center bottom, the input encoder at bottom right, and the output encoder visible through the center of the square slot (the output). The observed behaviour shows the actuator rotating independently while the coupling between the input and output disks is unloaded; when the applied load passes a specified torque threshold, the motor pauses until the user lets go.
N/A.
Repository available at https://circuitmaker.com/Projects/Details/Connor-Colombo/EW-Series-Elastic-Motor-Module/embeded
Available as gist at: https://gist.github.com/zCoCo/3e6f14de4269775628b97b6d82cae161
/* Driving Script for Series Elastic Actuator controlling a mirror with a
 * handle. This involves a basic autonomous behaviour of bouncing back and
 * forth between -180 and 180 degrees but following the user's motion if the
 * actuator detects that its handle is being grabbed. Once the user lets go,
 * the actuator will stay at the desired position for 1 second. */
// NB: CCW is +ve
#include "Arduino.h"
#include "HAL.h"
#include "Sensing.h"
#include "Motion.h"
#include "Schedule.h"
//#include "Comm.h"

#define sgn(x) ( (x==0) ? 0 : abs(x) / (x) )

Schedule* sch = new Schedule();

// Maximum Angular Difference between Input and Output before Actuator Enters Follower Mode:
#define DIFF_THRESH 12

bool is_following = false;     // Whether Currently in Follower Mode
bool holding_position = false; // Whether Currently Holding the Position Set by the User
unsigned long let_go_time = 0; // Time when the user last let go

void setup(){
  Serial.begin(9600);
  initHAL();
  // initComm(); // -TODO: Implement I2C Communications for Sound Sync.
  schedule();
  moveTo(180); // Kick off the Autonomous Motion
} // #setup

void schedule(){
  /** Perform Basic Life-Line Tasks: **/
  sch->ALWAYS->do_(updateSensors);
  sch->ALWAYS->do_(updateMotion);

  /** Coordinate Responses: **/
  // Enter Follower Mode:
  sch->WHEN(Sensors.diff > DIFF_THRESH)->do_([](){
    is_following = true;
    move( sgn(Sensors.diff) * (abs(Sensors.diff) - DIFF_THRESH + 1) );
  });
  // Move to Rest at Position the User Set and Stay There for a Time:
  sch->WHEN(Sensors.diff < DIFF_THRESH)->do_([](){
    move( Sensors.diff );
    let_go_time = millis();
    holding_position = true;
  });
  // Exit Follower Mode and Resume Autonomous Operation after User has Let Go
  // for 1 Second:
  sch->WHEN(millis() - let_go_time > 1000 && holding_position)->do_([](){
    is_following = false;
    holding_position = false;
    moveTo(180);
  });
  sch->WHEN(idle() && !is_following)->do_([](){
    moveTo(-getCommAng()); // Bounce Back and Forth
  });

  /** Give Status Updates: **/
  // Plot Load on Actuator:
  sch->EVERY(200)->do_([](){
    Serial.print(Sensors.diff);
    Serial.print(",");
    Serial.println(torque());
  });
} // #schedule

void loop(){
  sch->loop();
} // #loop
#ifndef _HAL_H
#define _HAL_H
/** Hardware Abstraction Layer **/
// Uses ESP8266 12-F (AI-Thinker Variant)
// Program as Adafruit Feather HUZZAH
// Flash: 4M
// No Debug
// lwIP: v2 Lower Memory
// CPU: 80MHz
// Baud: 115200
// Erase: Sketch Only

#include <Encoder.h>
#define ENC_STEPS_PER_REV 80.0
Encoder EncO(13,12); // Output Encoder
Encoder EncI(10,9);  // Input Encoder

#include <AccelStepper.h>
#define STP 1
#define DIR 3
#define EN 8
#define MS1 6
#define MS2 4
#define MS3 5
AccelStepper stepper(1, STP, DIR);

/** Basic Motion Parameters: **/
const float GEAR_RATIO = 43.0 / 11.0; // Output to Input Gear Ratio
const float MOT_STEPS_PER_REV = 4075.7728 * GEAR_RATIO; // Account for internal gearbox

/** Series Elastic Parameters: **/
// Radial Position of the Mounting Point of the Rubber Bands on the Inner Disk [m]:
const float RP_INNER = 7.46e-3;
// Unloaded Length of Rubber Bands (when mounted in actuator):
const float L0 = 15.5e-3;
// Amount of Stretching Required for Rubber Bands to Reach their Unloaded
// Position (L0) from their Relaxed Length:
#define d0 8e-3
// Number of Rubber Bands:
#define N_BANDS 4
// Average Effective Stiffness of Each Rubber Band [N/m]:
#define K_BAND 15

void initHAL(){
  // Initialize Motor Driver Pins and Setup for Full-Step Mode:
  pinMode(STP, OUTPUT);
  pinMode(DIR, OUTPUT);
  pinMode(EN, OUTPUT);
  pinMode(MS1, OUTPUT);
  pinMode(MS2, OUTPUT);
  pinMode(MS3, OUTPUT);
  digitalWrite(MS1, 0);
  digitalWrite(MS2, 0);
  digitalWrite(MS3, 0);
  digitalWrite(EN, 0);

  // Setup Motor Control Parameters:
  stepper.setMaxSpeed(100);
  stepper.setAcceleration(1000);
} // #initHAL
#endif //_HAL_H
#ifndef _SENSING_H
#define _SENSING_H
#include "HAL.h"

struct SensorsType{
  // Useful Data:
  float input_ang = 0.0;  // - Angle of Input Disk
  float output_ang = 0.0; // - Angle of Output Disk
  float diff = 0.0;       // - Angular Difference between Input and Output Disks [deg]
  // Helper Variables:
  float lag_sum = 0.0;         // Sum of all measured values for diff
  unsigned long lag_count = 0; // Number of measured values for diff
} Sensors;

// Returns the Output Angle from the Encoder in Degrees
float outputAng(){
  return 360.0 * EncO.read() / ENC_STEPS_PER_REV;
} // #outputAng

// Returns the Input Angle from the Encoder in Degrees
float inputAng(){
  return -360.0 * EncI.read() / ENC_STEPS_PER_REV / GEAR_RATIO;
} // #inputAng

// Computes the Torque Loading the Actuator in N-m. This is an expensive
// calculation, only call on an as-needed basis:
float torque(){
  // Constant Geometric Helper Parameters to Speed Up Calculations:
  static const float L0_2 = sq(L0);
  static const float A = 2 * RP_INNER * (L0 + RP_INNER);
  static const float L0_d = d0 - L0;
  // Compute Torque (only valid for diff <= 180deg, bands will snap before this):
  const float th = Sensors.diff * M_PI / 180.0;
  const float cm = cos(th) - 1;
  return N_BANDS * RP_INNER * K_BAND * (sqrt(L0_2 - A*cm) + L0_d)
       * sin( th + atan(RP_INNER * sin(th) / (L0 - RP_INNER*cm)) );
} // #torque

// Update Sensor Metadata:
void updateSensors(){
  Sensors.input_ang = inputAng();
  Sensors.output_ang = outputAng();
  Sensors.lag_sum += Sensors.output_ang - Sensors.input_ang;
  Sensors.lag_count += 1;
  Sensors.diff = Sensors.output_ang - Sensors.input_ang - Sensors.lag_sum / Sensors.lag_count;
} // #updateSensors
#endif //_SENSING_H
#ifndef _MOTION_H
#define _MOTION_H
#include "HAL.h"

#define MOT_DIR -1 // Used to Invert Motor Direction (-1 for Invert, 1 for Normal)

// Immediately Set the New Position Target of the Motor to the Given Angle [deg]
void moveTo(float ang){
  stepper.stop();
  stepper.moveTo(MOT_DIR * ang * MOT_STEPS_PER_REV / 360.0);
} // #moveTo

// Immediately Set the New Position Target of the Motor to the Given Angle
// Relative to the Motor's Current Position [deg]
void move(float ang){
  stepper.stop();
  stepper.move(MOT_DIR * ang * MOT_STEPS_PER_REV / 360.0);
} // #move

// Returns Whether the Motor is Currently Idle (awaiting a new command)
bool idle(){
  return stepper.distanceToGo() == 0;
} // #idle

// Returns the Most Recently Commanded Angle to the Motor
float getCommAng(){
  return stepper.targetPosition() * 360.0 / MOT_DIR / MOT_STEPS_PER_REV;
}

// Perform All Necessary Motion Control Commands:
void updateMotion(){
  stepper.run();
} // #updateMotion
#endif // _MOTION_H
/* Schedule.h
 * Intuitive Scheduling Utility that Allows for Complex Time and Condition Based
 * Behaviors to be Constructed out of Simple, Legible Event-Based Primitives.
 * (admittedly, this has a bit of a ways to go in terms of memory efficiency -
 * badly needs a ring buffer. (especially bad now that state persistence has
 * been added))
 * KNOWN BUGS / PROBLEMS:
 *  - Semi-Required memory leak on the %done% state of Actions. Need to have
 *    some way of determining whether / how long other functions will need access
 *    to this information after the Action has been deleted. NOTE: Until this is
 *    fixed, the ability to create unbounded series of SingleTimedEvents with
 *    #in_ is gone. Keep total number of events known and bounded.
 * Author: Connor W. Colombo, 9/21/2018
 * Version: 0.1.4
 * License: MIT
 */
#ifndef SCHEDULE_H
#define SCHEDULE_H

#include "Arduino.h"
#include <ArduinoSTL.h>
#include <vector>

/* Example Usage (only call these once, likely in setup):
 ** avoid calling variables directly from inside these functions unless they are global variables **
 void setup(){
   // Basic Call:
   sch->EVERY(500)->DO(blink()); // Will call #blink every 500ms
   sch->EVERY_WHILE(750, dist < 10)->DO(togglePeek()); // Will peek / unpeek every 750ms while dist is < 10cm
   sch->IN(2500)->DO(doThisOnce()); // Will call #doThisOnce one time in 2.5s
   sch->NOW->DO(sortOfUrgent()); // Will call #sortOfUrgent as soon as possible without blocking other events (useful in comm. interrupts for longer behavior)
   sch->WHILE(dist < 10)->DO(swing_arms()); // Will call #swing_arms as often as possible as long as dist < 10.
   sch->WHEN(dist > 10)->DO(someOtherThing()); // Will call #someOtherThing every time dist goes from <=10 to >10.
   sch->WHEN(touched())->DO(uncoverEyes()); // Will uncover eyes when touched goes from false to true (so, when touched)

   // Other more efficient notation for simple function calls:
   sch->EVERY(250)->do_(blink); // if you're just calling a void function with no arguments, it's more effective to just use the lowercase #do_

   // Note:
   sch->EVERY(100)->DO(x++); // x or other variables accessed directly must be global variables (not local scope)

   // Or Save Events to be Registered to Later:
   Event* FREQ_100Hz = schedule->EVERY(10);
   Event* TOO_CLOSE = schedule->WHEN(dist < 10);
   // ... somewhere else in code:
   TOO_CLOSE->DO(tone(BUZZER, 1000, 25));
   TOO_CLOSE->SIGNUP(tone(BUZZER, 1000, 25));

   // Additionally, events which setup other events (using nested actions) return
   // a double pointer to a bool which indicates when all sub-events have been
   // executed at least once.
   // Note: bool** beepboopd must be global.
   beepboopd = sch->IN(3100)->DO_LONG(
     *(sch->IN(1000)->DO(
       plt("***BEEP***BOOP***");
     ));
   );
   sch->WHEN(**beepboopd)->DO(
     plt("## BOP ##");
   );
 }
*/

/* NB: Some functionality must be assigned in macros b/c lambdas with captures
   can't be converted to function pointers. */

// More Legible Shorthand for "do_" syntax:
#define DO(x) do_([](){x;})
/* Shorthand for Calling a Function which Takes a Long Time to Complete after
   it Returns (has its own event calls) and returns a double pointer of a
   boolean which indicates when it is done. */
#define DO_LONG(x) \
do_(new NestingAction([](Action* action){ \
  delete action->done; \
  action->done = x; \
}));
// More Legible Shorthand for "signup" syntax:
#define SIGNUP(x) signup([](){x;})
// More Legible Shorthand for "while_" syntax:
#define WHILE(x) while_([](){return (x);})
// More Legible Shorthand for "when" syntax:
#define WHEN(x) when([](){return (x);})
// Syntax to Normalize All-Caps Syntax used by Conditionals:
#define EVERY(x) every(x)
// More Legible Shorthand for "everyWhile" syntax:
#define EVERY_WHILE(x,y) everyWhile(x, [](){return (y);})
// Syntax to Normalize All-Caps Syntax used by Conditionals:
#define IN(x) in_(x)
// Shorthand Syntax for Performing a Task as Soon as Possible:
#define NOW in_(0)
// Shorthand Syntax for Performing a Task as Frequently as Possible:
#define ALWAYS EVERY(1)

typedef bool** ActionState;
#define new_ActionState(b) new bool*(new bool(b));

/*
 * Container for Actions which are called in events and their respective metadata.
 */
class Action{ // Abstract Container for Use in Arrays of Pointers
public:
  bool* done = new bool(false);

  virtual ~Action(){
    //delete done; // <- Leave the done state variable behind
    //done = nullptr;
  } // dtor

  virtual void call() = 0;

  /* Tells Whether this Action and its Required Actions are Complete.
     Returns the dereferenced state of member %done% */
  bool isDone(){
    return *(this->done);
  } // #isDone
}; // class Action

/*
 * Most basic form of an Action which takes a void-void function which has no
 * dependencies and thus is considered to be done executing once the function
 * returns (ie. doesn't generate any Events).
 */
class BasicAction : public Action{
public:
  // Type of Function to be Called:
  typedef void (*function) ();

  BasicAction(function f) : oncall{f} {};

  void call(){
    oncall();
    *(this->done) = true;
  }
private:
  // Function to be Executed when this Action is Called:
  function oncall;
}; // class BasicAction

/*
 * Most basic form of an Action which takes a void-Action* function which has
 * dependencies / triggers other events and is expected to set this Action's
 * done value to true once all of its sub-functions are complete.
 */
class NestingAction : public Action{
public:
  // Type of Function to be Called:
  typedef void (*function) (Action*);

  NestingAction(function f) : oncall{f} {};

  void call(){
    oncall(this);
  }
private:
  // Function to be Executed when this Action is Called:
  function oncall;
}; // class NestingAction

/*
 * An Action (ie function) to be Performed by being Called when an Event
 * Triggers and Must Receive some Piece(s) of Stored Data of type T to Execute.
 * The contained function is considered to have no dependencies and thus be
 * done executing once the function returns (ie. doesn't generate any Events).
 */
template <typename T>
class DataAction : public Action{
public:
  // Type of Function to be Called which Consumes the Stored Data:
  typedef void (*function) (T);
  // Stored Data to be Given to the Function:
  T data;

  DataAction(function f, T d) : data{d}, oncall{f} {};

  // Calls this Action by Passing the Stored Data to #oncall and Calling It.
  void call(){
    oncall(data);
    *(this->done) = true;
  }
private:
  // Function to be Executed when this Action is Called:
  function oncall;
}; // Class: DataAction

/*
 * An Action (ie function) to be Performed by being Called when an Event
 * Triggers and Must Receive some Piece(s) of Stored Data of type T to Execute.
 * The contained function has dependencies / triggers other events and is
 * expected to set this Action's done value to true once all of its
 * sub-functions are complete.
 */
template <typename T>
class NestingDataAction : public Action{
public:
  // Type of Function to be Called which Consumes the Stored Data:
  typedef void (*function) (T, Action*);
  // Stored Data to be Given to the Function:
  T data;

  NestingDataAction(function f, T d) : data{d}, oncall{f} {};

  // Calls this Action by Passing the Stored Data to #oncall and Calling It.
  void call(){
    oncall(this);
  }
private:
  // Function to be Executed when this Action is Called:
  function oncall;
}; // Class: NestingDataAction

/*
 * Basic Event Class which Triggers only when Called Directly.
 */
class Event{
public:
  // Basic void-void function which can signup for the event:
  typedef void (*RegisteredFunction) ();
  const bool runs_once; // Identifies whether this event only happens once.

  Event() : runs_once{false} {};

  virtual ~Event(){
    /*for( std::vector<Action*>::iterator it = this->registry.begin(); it != this->registry.end(); ++it ){
      delete (*it);
    }
    this->registry.clear();
    // TODO: Need to come up with way to make Action::done itself stick around*/
  } // dtor

  /*
   * Request this Event to Execute ASAP.
   * NOTE: Calls happen IN ADDITION to any event-specific timings or conditions.
   */
  void call(){
    this->calledButNotRun = true;
  } // #call

  /*
   * Executes this Event if it Should Execute either Because it's been Called or
   * Should Self-Trigger.
   * Returns Whether the Event was Executed.
   */
  bool tryExecute(){
    if(this->shouldTrigger() || this->calledButNotRun){ // Call #shouldTrigger first
      this->execute();
      this->calledButNotRun = false;
      return 1;
    }
    return 0;
  } // #tryExecute

  /* Test if this Event Should Self-Trigger */
  virtual bool shouldTrigger(){
    return 0; // Basic Events only Trigger when Explicitly Called
  } // #shouldTrigger

  /* Add the Given Function to the %registry% as a BasicAction to be Executed
     Every Time the Event is Triggered.
     Returns a double pointer of the done variable of the Action created. */
  bool** signup(RegisteredFunction fcn){
    Action* a = new BasicAction(fcn);
    this->registry.push_back(a);
    return &(a->done);
  } // #signup

  /* Add the Given Action to the %registry% to be Executed Every Time the Event
     is Triggered.
     Returns a double pointer of the done variable of the Action. */
  bool** signup(Action* a){
    this->registry.push_back(a);
    return &(a->done);
  } // #signup

  // Aliases for Signing Up for the Event:
  bool** do_(RegisteredFunction fcn){ return signup(fcn); }
  bool** do_(Action* a){ return signup(a); }

  // Calls All Functions Registered to this Event
  void execute(){
    if(!this->ran || !this->runs_once){
      // Do this ^ check instead of deleting self b/c pointer might be accessed later if in list.
      for(std::vector<Action*>::size_type i = 0; i != this->registry.size(); i++){
        this->registry[i]->call();
      }
      this->ran = true;
    }
  } // #execute

protected:
  Event(bool ro) : runs_once{ro} {};
  std::vector<Action*> registry;
  bool ran = false; // Whether this function has been run before (ever).
  bool calledButNotRun = false; // Whether this Event has been Called Recently but Not Yet Executed
}; // Class: Event

/* Event which Triggers Anytime #shouldTrigger is Called and its Condition is True */
class ConditionalEvent : public Event{
public:
  typedef bool (*EventCondition) ();
  EventCondition condition; // Function that Triggers the Event if it's Ready to be Triggered

  ConditionalEvent(EventCondition t) : condition{t} {}; // Constructor

  virtual ~ConditionalEvent(){
    delete& condition;
  } // Destructor

  /*
   * Triggers this Event if its %condition% Allows It.
   * Returns Whether the Event was Triggered.
   */
  virtual bool shouldTrigger(){
    if(this->condition()){
      return 1;
    }
    return 0;
  } // #shouldTrigger
};

/*
 * Event Class which Triggers when its EventCondition is True When #shouldTrigger
 * is Called and was False the Last time it was Called.
 */
class TransitionEvent : public ConditionalEvent{
public:
  TransitionEvent(EventCondition t) : ConditionalEvent(t) {}; // Constructor

  bool shouldTrigger(){
    bool curr_state = this->condition();
    if(curr_state && !this->last_state){
      this->last_state = curr_state;
      return 1;
    }
    this->last_state = curr_state;
    return 0;
  } // #shouldTrigger

protected:
  bool last_state = false;
};

/*
 * Event which Triggers as Close to its Specified Interval after its Previous
 * Execution as Possible
 */
class TimedEvent : public Event{
public:
  unsigned long interval; // Interval between Executions

  TimedEvent(unsigned long i) : interval{i} {
    this->timer = i;
    this->last_time = millis();
  }; // Constructor

  ~TimedEvent(){ } // Destructor

  /*
   * Triggers this Event if its Timer has Elapsed.
   * Returns Whether the Event was Triggered.
   */
  bool shouldTrigger(){
    unsigned long now = millis();
    this->timer -= now - last_time;
    this->last_time = now;
    if(this->timer < 0){
      this->timer += this->interval; // Keeps execution freq. as close to interval as possible
      return 1;
    }
    return 0;
  } // #shouldTrigger

protected:
  unsigned long last_time;
  long timer;

  TimedEvent(bool runs_once_, unsigned long i) : Event(runs_once_), interval{i} {
    this->timer = i;
    this->last_time = millis();
  };
};

/* An Event which Triggers Once After a Set Period of Time */
class SingleTimedEvent : public TimedEvent{
public:
  SingleTimedEvent(unsigned long i) : TimedEvent(true, i) {}; // Constructor
};

/* An Event which Triggers at a Certain Frequency so Long as a Given Condition is True */
class ConditionalTimedEvent : public TimedEvent{
public:
  typedef bool (*EventCondition) ();
  EventCondition condition; // Function that Triggers the Event if it's Ready to be Triggered

  ConditionalTimedEvent(unsigned long i, EventCondition t) : TimedEvent(i), condition(t){};

  virtual ~ConditionalTimedEvent(){
    delete& condition;
  } // Destructor

  /*
   * Triggers this Event if its %condition% Allows It and its Timer has Elapsed.
   * Returns Whether the Event was Triggered.
   */
  bool shouldTrigger(){
    unsigned long now = millis();
    this->timer -= now - last_time;
    this->last_time = now;
    bool curr_state = this->condition();
    // Everytime Condition Becomes True, Restart Timer:
    if(curr_state && !this->last_state){
      timer = this->interval;
    }
    this->last_state = curr_state;
    if(curr_state && this->timer < 0){
      this->timer += this->interval; // Keeps execution freq. as close to interval as possible
      return 1;
    }
    return 0;
  } // #shouldTrigger

protected:
  bool last_state = false;
};

class Schedule{
public:
  std::vector<Event*> events;

  /* Create an Event to be Triggered as Long as the Given Condition is True */
  ConditionalEvent* while_( bool (*condition)() ){
    ConditionalEvent* e = new ConditionalEvent(condition);
    this->events.push_back(e);
    return e;
  } // #while_

  /* Create an Event to be Triggered Once for Every Time the Given Condition
     Changes from false to true: */
  TransitionEvent* when( bool (*condition)() ){
    TransitionEvent* e = new TransitionEvent(condition);
    this->events.push_back(e);
    return e;
  } // #when

  /* Create an Event that will be Triggered Every %interval% Milliseconds */
  TimedEvent* every(const unsigned long interval){
    TimedEvent* e = new TimedEvent(interval);
    this->events.push_back(e);
    return e;
  } // #every

  /* Create an Event that will be Triggered Once in %t% Milliseconds */
  SingleTimedEvent* in_(const unsigned long t){
    SingleTimedEvent* e = new SingleTimedEvent(t);
    this->events.push_back(e);
    return e;
  } // #in_

  /*
   * Create an Event that will be Triggered Every %interval% Milliseconds While
   * a Given Condition is True, starting %interval% Milliseconds AFTER the
   * Condition Becomes True.
   */
  ConditionalTimedEvent* everyWhile(const unsigned long interval, bool (*condition)()){
    ConditionalTimedEvent* e = new ConditionalTimedEvent(interval, condition);
    this->events.push_back(e);
    return e;
  } // #everyWhile

  // Function to be Executed on Every Main Loop (as fast as possible)
  void loop(){
    // Iteration has to account for the fact that elements are intentionally
    // deleted from the vector in the loop and potentially added at any call
    // of #Event::tryExecute
    std::vector<Event*>::size_type size = this->events.size();
    std::vector<Event*>::size_type i = 0;
    while(i < size){
      if( this->events[i]->tryExecute() && this->events[i]->runs_once ){
        // Delete Event if it's been Executed and Only Runs Once:
        delete this->events[i]; // Delete the Event
        this->events.erase(this->events.begin() + i); // Remove the addr from the vector
        size--; // As far as we know, the vector is now smaller
      } else{
        ++i; // Increment iterator normally
      }
    }
  } // #loop
}; // Class: Schedule
#endif // SCHEDULE_H
Figure 1. Project setup at museum. Monitor placed below project showed video feed from RPi face tracking program.
Reach for the Sky is a computer-vision-controlled interactive experience that invites the user to explore the behaviors of an artificial cloud. Using face tracking, the cloud detects the position of a user’s face and moves toward the user. Additionally, the user may touch the cloud to affect the lighting of the piece, producing an artificial lightning effect.
Figure 2. Container corner support pieces and mounts used for attaching cloud track to ceiling.
The container of this piece was originally built with the intent of implementing three artificial clouds (though in the end, only one was achieved). Note that all wood pieces seen in photo documentation are laser cut from 6mm plywood. The backing of the box was created from 3mm mirrored acrylic in order to allow users to see their body position in relation to the clouds. Because the box needed to be large enough to hold the three tracks on which the clouds would travel and to be sturdy enough to not collapse, support pieces were placed in between top and side planes of the box (see Figure 2). In addition to this, planes of the box were mainly held together by captive screws and fitted tabs.
Figure 3. Inside project, view of top panel. Cloud travels by using a base that moves along an aluminum extrusion track. LEDs adhered to top are also visible.
Movement of the cloud was achieved by creating a base (to which the cloud is attached) that uses a stepper motor to travel along a pair of one-meter aluminum extrusions (see Figure 3). The motor was attached to a laser-cut gear, which allowed movement along a spur rack attached to one of the aluminum extrusions (see Figure 4 below). This mechanism was based on original designs from our instructor, Garth Zeglin, though several modifications were made to accommodate gravity and the size of the project, and to reduce friction. The artificial cloud was fabricated by filling a plastic bag with polyester fiber stuffing, then hot-gluing more stuffing to the outside. The LEDs were coded to display a pattern when the flex sensor showed a change from its initial value. Because the flex sensor had a tendency to become permanently bent, the Arduino code was written so that the initialized value could be updated if the sensor was damaged. A hole was cut into the bag for placement of the flex sensor, and the cloud itself was attached to the base with fishing line.
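The flex-sensor logic described above can be sketched in plain C++. This is a simplified model rather than our actual firmware; the reading values and threshold below are hypothetical.

```cpp
#include <cstdlib>

// Simplified model of the flex-sensor behavior: fire the lightning pattern
// when the reading deviates from a stored baseline, and allow the baseline
// to be re-initialized if the sensor takes a permanent bend. The threshold
// is an illustrative value, not the one used in our firmware.
struct FlexDetector {
    int baseline;   // resting reading of the sensor
    int threshold;  // minimum deviation that counts as a touch

    FlexDetector(int initialReading, int thresh)
        : baseline(initialReading), threshold(thresh) {}

    // True when the current reading differs enough from the baseline to
    // count as a touch (which would trigger the LED lightning pattern).
    bool touched(int reading) const {
        return std::abs(reading - baseline) > threshold;
    }

    // Called when the sensor has been permanently bent: adopt the current
    // resting reading as the new zero so touches are still detectable.
    void rezero(int restingReading) {
        baseline = restingReading;
    }
};
```

Re-zeroing is what kept the piece usable after a day of handling: a permanently bent sensor just becomes the new baseline instead of latching the lightning effect on.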
Figure 4. Close-up of stepper motor base and gear/spur rack mechanism.
The project utilized two Arduinos to reduce overall latency. One Arduino was used solely for detecting input from the flex sensor in the artificial cloud and modulating the LEDs to produce a lightning-like effect. The other Arduino was connected to a Raspberry Pi tracking the user’s face. The Raspberry Pi used OpenCV and Haar cascades to detect all of the faces in a given frame, then used the MOSSE tracking algorithm to track the largest face found. If the tracking algorithm lost the face, the program fell back to detecting faces. The combination of tracking and threading greatly reduced latency and increased the overall frame rate. The program then found the x position of the center of the face and compared it with the current position of the stepper motor reported by the Arduino. If the stepper motor needed to change direction, the Raspberry Pi sent an integer corresponding to this change; otherwise, the stepper motor continued to move until it reached the end of the rail. To control the stepper motor, we used the AccelStepper library, which avoids blocking delays and so allowed the cloud to react to messages sent by the Raspberry Pi almost immediately.
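The per-frame decision the Raspberry Pi makes can be sketched as pure logic. The integer command values and the tolerance band below are illustrative assumptions, not the exact protocol we used:

```cpp
// Hypothetical integer commands sent over serial to the stepper Arduino.
// Only direction *changes* are actually transmitted, so NO_CHANGE means
// "send nothing and let the motor keep going to the end of the rail".
enum Command { NO_CHANGE = 0, MOVE_LEFT = 1, MOVE_RIGHT = 2 };

// Compare the face's x position (mapped into track coordinates) with the
// stepper's reported position and decide whether the cloud must reverse.
// `movingRight` is the cloud's current travel direction; `tol` is a dead
// band that absorbs tracker jitter and stepper position error.
Command directionCommand(int faceX, int motorX, bool movingRight, int tol) {
    if (faceX > motorX + tol && !movingRight) return MOVE_RIGHT;
    if (faceX < motorX - tol && movingRight)  return MOVE_LEFT;
    return NO_CHANGE;
}
```

The dead band is the important design choice: it is what gave the follow-the-user behavior its "margin of error" so small tracking glitches did not make the cloud stutter.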
Figure 5. Laser cut clouds on top of project allowed for easy access to circuit. ‘Smilies’ were applied to the fronts of these containers to enhance user experience and boost morale of the build team.
A camera was connected to the Raspberry Pi as input to the face tracking program, which utilized the OpenCV library. The camera needed to be mounted so that the artificial cloud would not come into its view, while maximizing the area over which users’ faces could be tracked by the RPi program.
Figure 6. Raspberry Pi camera mount in mirrored acrylic backing.
The cloud in our final project successfully followed the user and lit up according to our objectives. Although there were some bugs which affected the experience for some users, the majority of users were patient enough to experiment with the movement of the cloud and recognized that the cloud was following them. We found that interactions with the cloud were not greatly affected by the presence of a parent as the child could understand the premise of the project and interact with it by themselves. If a parent was present, they would generally explain how the cloud’s movements were controlled by face tracking.
Originally, we had intended for the user to control the position of the cloud by pushing it away rather than having the cloud follow the person. However, we found that because children wanted to test the outcomes of their own actions, they needed to perceive a result that made sense and was nearly immediate. The level of precision required for children to perceive that they were controlling the cloud would not have been possible given the time constraint. As a result, after the Children’s School demo, we switched the main objective of the project so that the cloud would follow rather than move away from the user. This allowed for a margin of error in the position of the stepper motor while making it easier for children to grasp the abstract relationship between the cloud’s movements and their own. To some extent, the problem of requiring immediate, reliable results to fulfill the child’s sense of autonomy and industry carried through to the final work, though it was greatly reduced. During the final demo, we found that the entire interaction depended heavily on the child’s first impression: if the child moved and the cloud did not move with them, they quickly lost interest, whereas if the cloud moved reliably, the interaction was generally much longer and more meaningful. Over more iterations, this problem could be further mitigated by making the tracking algorithm more reliable and the LEDs more pronounced.
Another aspect of the project that greatly contributed to the quality of the interaction, but that we had not initially considered, was the aesthetics of the work. From the first Children’s Museum demo, we found that children were more hesitant to interact with the work if it did not appear finished. Although we accounted for this in the final demo by creating the box structure, hiding a majority of the wiring, adding the LEDs, and adapting the code so that the box could face the room rather than the wall, we think that keeping with the context of the museum and adding more eye-catching features would have made the project appear even more approachable. In this case, a coat of bright paint or music to mask the harsh sounds of the motors would have made the project more appropriate for a children’s museum.
Both of us contributed to the overall design of the project and the electronics.
Jessica contributed to the Arduino/Raspberry Pi code associated with the movement of the clouds and the implementation of the stepper motor.
Jen contributed to the design of the box, the rack/pinion/gear mechanism, and the code and implementation of the flex sensor and LEDs.
Demonstration of the cloud’s movement in response to user’s face:
Demonstration of ‘lightning’ in response to handling cloud:
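The ‘lightning’ effect demonstrated above, driven by the flex sensor in the cloud, can be sketched as follows. This is a hedged Python simulation of the logic only (the real version ran on the Arduino); the threshold, flash count, and brightness range are invented for illustration:

```python
import random

def lightning_frames(flex_reading, threshold=600, n_flashes=3, seed=None):
    """If the flex sensor reading crosses the threshold (the cloud is
    being squeezed), return a short flicker sequence of LED brightness
    values (0-255); otherwise stay dark."""
    if flex_reading < threshold:
        return [0]  # cloud untouched: LEDs off
    rng = random.Random(seed)
    frames = []
    for _ in range(n_flashes):
        frames.append(rng.randint(180, 255))  # bright flash
        frames.append(0)                      # dark gap between flashes
    return frames
```

Randomizing the flash brightness keeps the flicker from looking mechanical, which suits the naturalistic lightning metaphor.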
(Note that these drawings were produced between the beginning and intermediate project progress reports and do not reflect the final experience. Many aspects in the following drawings were changed or excluded in later iterations of the project.)
Henry Zhang, Xueting Li
Submission Date: 12/05/2018
The tilting maze is a collaborative ball maze game where each player controls one rotation axis. The goal of the game is to follow the designated path and collect all the stars along the way. This game is designed to inspire a sense of cooperation between young players and bring them moments of delight by providing reward feedback. From our observations, the game did achieve our goal.
Henry Zhang: Electronics; Arduino programming; mechanical design and manufacturing of the button housing.
Xueting Li: Mechanical design and manufacturing of the chassis, gimbal frame, and wiring housing; wire fabrication.
Video 1: Good collaborative gaming experience
Video 2: A parent who doesn’t want the kids to intervene
Figure 1: System Overview
Figure 2: Children Interacting with the device
Examples of tilting ball maze design:
https://www.instructables.com/id/Labyrinth-From-a-Single-2×4/
https://www.instructables.com/id/DIY-Android-Ball-Maze-An-Introduction-to-the-And/
Solidworks files and drawings are here!
During the visit, our valve mechanism did not work very well. The cardboard flap inside the fan duct failed to cut off the airflow, so our project could not modulate the air properly. In the end, we simply left the flap open so that air flowed continuously over the bottles, producing sound the whole time. In the future, we want the flap to be functional so that it can produce different air-modulation patterns, opening the opportunity for more interaction between human and machine.
We found that children found delight in spinning the turntable. Adults were quick to understand the concept of our project and explained to children how changing the water levels would change the sound when air passes through the bottle. Some adults were interested in our setup with the fan and even tried spinning the turntable as well.
It was surprising to see how uninterested most of the children were in changing the water levels. They seemed to be more enticed by turning and watching the turntable than by the sound-producing aspect of the project. However, near the end of our time at the museum, one child was interested in hearing how a bottle would sound if it were filled to the brim with water. She filled a bottle up with water and placed it under the fan to see how it would sound. Finding that no sound was being produced, she kept pouring a little bit of water out until it finally did produce sound (see: Summary Video).
In order to highlight the sound-production aspect of the project, we plan on adding a functional and more aesthetically pleasing fan duct and flap to modulate the air. Additionally, we plan on installing lights to highlight the bottle currently being blown into, emphasizing where the sound is coming from; our original setup did not make the location of the sounding bottle clear, which we believe helped mask the sound-production aspect of the project. The setup also did not really facilitate pouring water in and out of the bottles to change the water levels, so we plan on establishing a “water station” to make that task clearer. Finally, we will lower the height of the turntable, either by using a smaller table or by making our own stand, to make the project more accessible to smaller children.
Seeing that the children were more interested in the turning aspect of the turntable, we are now shifting our focus from mechanizing the turntable to controlling the airflow, so that the physicality of turning the turntable is not removed from the experience. We want to center the project on the interaction between human and machine: the fan flap will be programmed to control air-modulation patterns, while the children control which bottles are being played and how much water is in each bottle, i.e., which tones can be played.
For modulating the air, we will be using a solenoid instead of a servo to ensure a clear cutoff and opening of air flow for sound production. We also need to revise the fan duct design to accommodate the solenoid.
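One simple way to represent the solenoid’s air-modulation patterns is as a rhythm string expanded into timed open/close commands. This is a hedged Python sketch of the scheduling logic only; the pattern syntax, timings, and names are invented for illustration, and the actual valve driver would run on the microcontroller:

```python
def air_pattern(beats, open_ms=300, closed_ms=200):
    """Expand a rhythm string like 'x.x.xx' into (valve_open, duration_ms)
    commands for the solenoid flap: 'x' opens the valve for one note and
    then closes it; '.' keeps it closed for a full beat (a rest)."""
    schedule = []
    for b in beats:
        if b == 'x':
            schedule.append((True, open_ms))    # open: air flows, bottle sounds
            schedule.append((False, closed_ms))  # close: the clear cutoff
        elif b == '.':
            schedule.append((False, open_ms + closed_ms))  # rest
    return schedule
```

A solenoid suits this scheme better than a servo precisely because each command is binary: the valve is either fully open or fully closed, giving the crisp note onsets and cutoffs the design calls for.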
The fundamental experience does not need to be modified very much. Children will still be turning the turntable for the main experience but this time, the project will be more active in changing the sound that is produced. The turntable will also be at a more accessible height and a more appropriate “water station” will be set up to make the task of changing water levels more clear.
Air modulation will be the only new capability that will be added to the initial objectives. Lights will simply be an accessory to highlight sound production.
The demo at the Children’s Museum was, if anything, humbling. Nothing seemed to function as planned, and we became very frustrated not only with the physical project but with the idea itself. The PVC pipes were not nearly as resonant as expected, we had wiring issues with the solenoid, and the drum pedal was unable to produce sound effectively by hitting the pipes. Despite the technical failures, we learned a lot about how people approached the project and what they expected from it. The tube configuration was intriguing: many people approached the project and attempted to interact with it. Children would step on the drum pedal just for the fun of it, trying to figure out how it connected to the rest of the functionality, and a few people banged on the box. Having moving, functional parts would definitely add more magic to the project, as would aesthetic adjustments (such as colorful paint) and a second form of feedback more pleasant than the simple acoustic elements (we were thinking of adding colored lights). The visit really helped put our project into perspective; we now know the limitations of the materials we are working with, the limitations of our own skills, and how people approach the project, along with the user expectations we have to meet.
Revision Plan:
We know that our main barrier in the successful completion of this project will be simple trial and error of the form of acoustic elements and the tectonics of the project as a whole (how elements will be coupled to the base). Based on the museum demo we are revising the experience to include reactive lights and the use of drumsticks to produce sound as opposed to the drum pedal, not only because they are more effective at producing sound (at least with our particular material palette), but also because they allow the user more agency in where and how they can hit the various elements of the acoustic configuration.
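If the reactive lights are driven by hit detection, one plausible approach is a piezo element on each acoustic tube whose reading sets the LED brightness. This is purely a sketch under stated assumptions; the source does not say how hits would be sensed, and the piezo sensor, ADC range, and gain below are all hypothetical:

```python
def reactive_brightness(piezo_reading, gain=2.0, ceiling=255):
    """Map a piezo hit reading (0-1023, a typical 10-bit Arduino ADC
    range) to an LED brightness (0-255), so harder drumstick hits flash
    brighter. The gain value is an assumption for illustration."""
    level = int(piezo_reading * gain * 255 / 1023)
    return min(ceiling, max(0, level))
```

Tying brightness to hit strength would reinforce the agency the drumsticks already provide: where and how hard you hit changes both the sound and the light.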