Laser World Final Report

By Connor Colombo and Michael Mong

December 8th, 2018

Abstract

Laser World is an interactive environment that uses lasers and fog to help the user visualize the paths of light beams. The experience was designed to evoke a mysterious atmosphere: the fog and the beams, steered on their own by rotating mirrors, were meant to draw visitors in and invite them to interact. In the end, however, the level of darkness and mystery the project required scared some of the children a little too much, which made the exhibit less effective at drawing them in.

Objectives

  1. Show users how light travels in a straight line and can be reflected off objects
  2. The mirrors should move on their own, but should stop and take input in response to being touched by a user
  3. The fog generator should maintain the fog at optimal laser-viewing levels, either through sensor feedback or manual adjustment
  4. The moving lasers and fog should be mysterious and enticing to draw in users
  5. The water level in the tank for the fog generator should be adjustable and refillable.

Implementation

Mechanical:

  1. The walls of the Laser World enclosure were made of clear acrylic to ensure maximum visibility into the enclosure. To further facilitate this, we coated the inside of the enclosure with an anti-fogging mixture to prevent condensation from obscuring the view.
  2. The enclosure was made as a 20-inch-square prism with a height of 6 inches; through experiments we determined that this was large enough to let 3 mirrors fit comfortably while still allowing the lasers to reflect off the furthest side of the box before becoming too obscured by fog to be seen.
  3. In order to make the box as airtight as possible to keep the fog enclosed, we used acrylic cement to weld the sides of the boxes together instead of using typical bolts.
  4. The enclosure was designed in two parts: a top and a bottom half. These two halves were then clamped together with screws using a flanged area in order to prevent water leakage while still allowing the halves to decouple to allow for future maintenance.
  5. The outside of the enclosure except for the very top panel was enclosed in a reflective mylar sheet to allow the walls to act as mirrors and reflect the lasers.
  6. The rotating mirrors required a second interior walled off section to prevent water from leaking into the gearboxes and damaging them and the electronics.
  7. The handles of the rotating mirrors were designed to be detachable using countersunk screws to allow for easy removal of the top half of the enclosure.
  8.  Series-elastic actuators with integrated force-feedback were used to drive each of the mirrors to allow for autonomous behaviour, which could be cleanly interrupted at any time by a user grabbing a mirror’s handle – an event which, itself, could be detected by the actuator, allowing it to collaborate with the user while they are engaging with a particular mirror. Additionally, to prevent the user from feeling like they are going to break the mirror when they grab and move it, these actuators can be programmed to provide a smooth, consistent, low-torque tactile response to the user.
  9. Rubber bands were used to couple the output of the gearbox to the motor to allow for some elasticity when grabbing the handles to prevent the motor from hurting the user by attempting to keep moving.
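The grab-and-follow behaviour described in items 8 and 9 comes down to thresholding the deflection of the elastic coupling: if the output disk lags the input disk by more than a set angle, someone must be holding the handle. A minimal, hardware-free C++ sketch of that decision (the threshold matches the firmware's `DIFF_THRESH`, but the mode names and structure here are illustrative, not the actual control loop):

```cpp
#include <cassert>
#include <cmath>

// Illustrative threshold [deg]; the firmware uses DIFF_THRESH = 12.
const float kDiffThresh = 12.0f;

enum class Mode { Autonomous, Following };

// Classify one control-loop sample: if the spring deflection between the
// input (motor-side) and output (handle-side) disks exceeds the threshold,
// the user has grabbed the handle and the actuator should follow them;
// otherwise it resumes its autonomous bouncing motion.
Mode classify(float diff_deg) {
    return (std::fabs(diff_deg) > kDiffThresh) ? Mode::Following
                                               : Mode::Autonomous;
}
```

The real firmware additionally holds the user-set position for a second after the deflection relaxes before resuming autonomous motion.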

Electrical:

  1. Stepper motors (28BYJ-48) were used to control each actuator since they provide a cheap, precise, and easy to control means of rotary motion.
  2. Rotary encoders were chosen to measure the angular difference between the input and output disks, from which the load on the motor is computed (see the torque model in the source code below). They were chosen because they are very low cost, many of them were needed, and this application does not require particularly precise torque estimates so long as they are consistent enough for a stable threshold to be chosen.
  3. Since providing consistent force feedback in the series-elastic actuators demands a fast reaction time, along with the control and processing of many signals, each actuator was given its own independent microcontroller.
  4. These microcontrollers were networked to a Raspberry Pi over an I2C bus for diagnostics and potential control of music. The bus was routed using USB cables because they are cheap, shielded, and readily available.
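The load measurement in item 2 works by converting both encoder readings to degrees (the input encoder sits behind the gear reduction, so its counts are divided by the gear ratio) and taking their difference, which is the deflection of the rubber-band coupling that gets thresholded. This standalone C++ sketch uses the same constants as the firmware, but omits its sign convention and drift-compensating running average for clarity:

```cpp
#include <cassert>
#include <cmath>

const float ENC_STEPS_PER_REV = 80.0f;   // encoder counts per revolution
const float GEAR_RATIO = 43.0f / 11.0f;  // output-to-input gear ratio

// Handle-side angle [deg] from raw output-encoder counts.
float outputAngle(long counts) {
    return 360.0f * counts / ENC_STEPS_PER_REV;
}

// Motor-side angle [deg], reflected through the gear reduction.
float inputAngle(long counts) {
    return 360.0f * counts / ENC_STEPS_PER_REV / GEAR_RATIO;
}

// Deflection of the elastic coupling [deg] -- the quantity thresholded
// to detect a user grabbing the handle.
float deflection(long out_counts, long in_counts) {
    return outputAngle(out_counts) - inputAngle(in_counts);
}
```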

Outcomes

Successes

Due to a failed fog generator, we were unable to get good test data at the Children's Museum; however, when testing later, we did observe that the enclosure achieved most of our objectives by making the laser path visible and allowing us to interact with it by changing the angle at which the beam bounced off, as shown in our video below.

Through early testing, we discovered that the top viewing surface became obscured with condensation, making the lasers harder to see. We were able to correct this issue by applying an anti-fogging coating to the inside of the acrylic box, which prevented it from recurring.

Our actuators worked as intended: they gave the enclosure a life of its own while also taking commands from the user, without overpowering the children. In the future, we could improve this by adding a gripping pattern to the handle to help the children grasp it; for testing, we left the handle smooth to make sure it would not dig into the children's hands.

Failures

From our testing at the Children's School at Carnegie Mellon, we discovered a factor we had not accounted for: children's fear of the dark. We had hoped that lights shining and moving in a darkened area would draw children in, but some children found it scary and did not want to investigate. As our project's viability depended on a slightly darkened area, this was an issue we were unable to resolve; however, only a small percentage of the children reacted this way, so it was not too large an issue.

Another failure which we encountered and hope to fix in the future was the enclosure’s draining issues. As the fog condensed within the box it began to create a pool of water which made the enclosure difficult to disassemble cleanly without spilling water everywhere. This could be solved by adding a hole at the bottom of the box which we could open and reseal in order to let the water out before opening the enclosure fully.

Although our ideal vision of the project involved the lasers rebounding nearly infinitely to form intricate patterns, we came to realize that this was impossible: the same scattering in the fog that makes the beam visible also drains its power, so it fades with distance. Because of this, we had to limit the size of our box so that the beam could reflect off the walls a few times before fading. Alternatively, we could have focused on the intricate pattern and kept the box smaller, but we decided it needed to be larger to attract attention and to allow for more mirrors, so multiple children could play at once and interact with each other.
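The fading is ordinary scattering loss: the fog droplets that make the beam visible also remove energy from it, so intensity falls roughly exponentially with path length (the Beer–Lambert law). A small illustrative calculation; the attenuation coefficient here is a hypothetical placeholder, not a measured property of our fog:

```cpp
#include <cassert>
#include <cmath>

// Beer-Lambert attenuation: I(d) = I0 * exp(-alpha * d),
// where alpha [1/m] is a hypothetical attenuation coefficient of the fog
// and d [m] is the total path length travelled by the beam.
float intensityAfter(float i0, float alpha, float path_m) {
    return i0 * std::exp(-alpha * path_m);
}
```

With alpha = 2 per metre, a beam that has travelled 2 m through the fog retains exp(-4), roughly 1.8% of its power, which is why the box size (and hence the number of visible bounces) had to be limited.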

Contribution

Connor Colombo:

  1. CAD and construction of actuators
  2. Control software
  3. PCB and general electrical design and assembly
  4. Part sourcing

Michael Mong:

  1. CAD and construction of enclosure
  2. Laser integration into enclosure
  3. Mirror design

Supporting Media

Final Demo: Our final project featured 3 mirrors which could be rotated to change the paths of the lasers. In this video, only one actuator is functional.


 

Initial Test Setup: This test enclosure showed that we could create a visible laser path within our box and that we could interact with it by rotating the mirror.


 

Foggy Test Box: After running our test setup for a while, we realized that the laser’s path was no longer as visible, which prompted us to apply an anti-fogging coating to the enclosure window.


 

Two Laser Test with Actuation: Prior to our final design, we modified our initial test to better integrate the actuation and to use two lasers instead of one, to get a better understanding of how all the components worked together.


 

Internals of the series-elastic actuator during an early functionality test, with the drive motor at center bottom, the input encoder at bottom right, and the output encoder visible through the center of the square slot (the square slot is the output). The observed behaviour shows the actuator rotating independently with the input and output disks coupled but unloaded; when the applied load passes a specified torque threshold, the motor pauses until the user lets go.

Citations

N/A.

Supporting Material

SolidWorks (CAD) Files
    1. EnclosureAssembly
    2. Actuator Assembly
    3. Mirror Assembly
Electronic Schematics
EWM Module Schematic
EWM Module ROUTING

 

EWM Module PCB TOP

EWM Module PCB BOTTOM

Repository available at https://circuitmaker.com/Projects/Details/Connor-Colombo/EW-Series-Elastic-Motor-Module/embeded
Source Code

Available as gist at: https://gist.github.com/zCoCo/3e6f14de4269775628b97b6d82cae161

main control script – driver.ino
/* Driving Script for Series Elastic Actuator controlling a mirror with a handle.
This involves a basic autonomous behaviour of bouncing back and forth between
-180 and 180 degrees but following the user's motion if the actuator detects
that its handle being grabbed. Once the user lets go, the actuator will stay at
the desired position for 1 second. */
// NB: CCW is +ve

#include "Arduino.h"
#include "HAL.h"
#include "Sensing.h"
#include "Motion.h"
#include "Schedule.h"
//#include "Comm.h"

#define sgn(x) ( (x==0) ? 0 : abs(x) / (x) )

Schedule* sch = new Schedule();

// Maximum Angular Difference between Input and Output before Actuator Enters Follower Mode:
#define DIFF_THRESH 12
bool is_following = false; // Whether Currently in Follower Mode
bool holding_position = false; // Whether Currently Holding the Position Set by the User
unsigned long let_go_time = 0; // Time when the user last let go

void setup(){
  Serial.begin(9600);
  initHAL();
  // initComm(); // -TODO: Implement I2C Communications for Sound Sync.

  schedule();
  moveTo(180); // Kick off the Autonomous Motion
} // #setup

void schedule(){
  /** Perform Basic Life-Line Tasks: **/
  sch->ALWAYS->DO(updateSensors);
  sch->ALWAYS->DO(updateMotion);

  /** Coordinate Responses: **/
  // Enter Follower Mode:
  sch->WHEN(Sensors.diff > DIFF_THRESH)->do_([](){
    is_following = true;
    move( sgn(Sensors.diff) * (abs(Sensors.diff) - DIFF_THRESH + 1) );
  });

  // Move to Rest at Position the User Set and Stay There for a Time:
  sch->WHEN(Sensors.diff < DIFF_THRESH)->do_([](){
    move( Sensors.diff );
    let_go_time = millis();
    holding_position = true;
  });

  // Exit Follower Mode and Resume Autonomous Operation after User has Let Go
  // for 1 Second:
  sch->WHEN(millis() - let_go_time > 1000 && holding_position)->do_([](){
    is_following = false;
    holding_position = false;
    moveTo(180);
  });

  sch->WHEN(idle() && !is_following)->do_([](){
    moveTo(-getCommAng()); // Bounce Back and Forth
  });

  /** Give Status Updates: **/
  // Plot Load on Actuator:
  sch->EVERY(200)->do_([](){
    Serial.print(Sensors.diff);
    Serial.print(",");
    Serial.println(torque());
  });
} // #schedule

void loop(){
  sch->loop();
} // #loop
hardware abstraction layer – hal.h
#ifndef _HAL_H
#define _HAL_H
/** Hardware Abstraction Layer **/
// Uses ESP8266 12-F (AI-Thinker Variant)
// Program as Adafruit Feather HUZZAH
// Flash: 4M
// No Debug
// lwIP: v2 Lower Memory
// CPU: 80MHz
// Baud: 115200
// Erase: Sketch Only

#include <Encoder.h>
#define ENC_STEPS_PER_REV 80.0
Encoder EncO(13,12); // Output Encoder
Encoder EncI(10,9); // Input Encoder

#include <AccelStepper.h>
#define STP 1
#define DIR 3
#define EN 8
#define MS1 6
#define MS2 4
#define MS3 5
AccelStepper stepper(1, STP, DIR);

/** Basic Motion Parameters: **/
const float GEAR_RATIO = 43.0 / 11.0; // Output to Input Gear Ratio
const float MOT_STEPS_PER_REV = 4075.7728 * GEAR_RATIO; // Account for internal gearbox

/** Series Elastic Parameters: **/
// Radial Position of the Mounting Point of the Rubber Bands on the Inner Disk [m]:
const float RP_INNER = 7.46e-3;
// Unloaded Length of Rubber Bands (when mounted in actuator):
const float L0 = 15.5e-3;
// Amount of Stretching Required for Rubber Bands to Reach their Unloaded
// Position (L0) from their Relaxed Length:
#define d0 8e-3
// Number of Rubber Bands:
#define N_BANDS 4
// Average Effective Stiffness of Each Rubber Band [N/m]:
#define K_BAND 15


void initHAL(){
  // Initialize Motor Driver Pins and Setup for Full-Step Mode:
  pinMode(STP, OUTPUT);
  pinMode(DIR, OUTPUT);
  pinMode(EN, OUTPUT);
  pinMode(MS1, OUTPUT);
  pinMode(MS2, OUTPUT);
  pinMode(MS3, OUTPUT);
  digitalWrite(MS1, 0);
  digitalWrite(MS2, 0);
  digitalWrite(MS3, 0);
  digitalWrite(EN, 0);

  // Setup Motor Control Parameters:
  stepper.setMaxSpeed(100);
  stepper.setAcceleration(1000);
} // #initHAL

#endif //_HAL_H
Sensing primitives – sensing.h
#ifndef _SENSING_H
#define _SENSING_H

#include "HAL.h"

struct SensorsType{
  // Useful Data:
  float input_ang = 0.0; // - Angle of Input Disk
  float output_ang = 0.0; //- Angle of Output Disk
  float diff = 0.0; //      - Angular Difference between Input and Output Disks [deg]

  // Helper Variables:
  float lag_sum = 0.0; // Sum of all measured values for diff
  unsigned long lag_count = 0; // Number of measured values for diff
} Sensors;

// Returns the Output Angle from the Encoder in Degrees
float outputAng(){
  return 360.0 * EncO.read() / ENC_STEPS_PER_REV;
} // #outputAng

// Returns the Input Angle from the Encoder in Degrees
float inputAng(){
  return -360.0 * EncI.read() / ENC_STEPS_PER_REV / GEAR_RATIO;
} // #inputAng

// Computes the Torque Loading the Actuator in N-m. This is an expensive
// calculation, only call on an as-needed basis:
float torque(){
  // Constant Geometric Helper Parameters to Speed Up Calculations:
  static const float L0_2 = sq(L0);
  static const float A = 2 * RP_INNER * (L0 + RP_INNER);
  static const float L0_d = d0 - L0;

  // Compute Torque (only valid for diff <= 180deg, bands will snap before this):
  const float th = Sensors.diff * M_PI / 180.0;
  const float cm = cos(th) - 1;
  return N_BANDS * RP_INNER * K_BAND * (sqrt(L0_2 - A*cm) + L0_d) * sin( th + atan(RP_INNER * sin(th) / (L0 - RP_INNER*cm)) );
} // #torque

// Update Sensor Metadata:
void updateSensors(){
  Sensors.input_ang = inputAng();
  Sensors.output_ang = outputAng();
  Sensors.lag_sum += Sensors.output_ang - Sensors.input_ang;
  Sensors.lag_count += 1;
  Sensors.diff = Sensors.output_ang - Sensors.input_ang - Sensors.lag_sum / Sensors.lag_count;
} // #updateSensors

#endif //_SENSING_H

 

Motion primitives – motion.h
#ifndef _MOTION_H
#define _MOTION_H
  #include "HAL.h"

  #define MOT_DIR -1 // Used to Invert Motor Direction (-1 for Invert, 1 for Normal)

  // Immediately Set the New Position Target of the Motor to the Given Angle [deg]
  void moveTo(float ang){
    stepper.stop();
    stepper.moveTo(MOT_DIR * ang * MOT_STEPS_PER_REV / 360.0);
  } // #moveTo

  // Immediately Set the New Position Target of the Motor to the Given Angle
  // Relative to the Motor's Current Position [deg]
  void move(float ang){
    stepper.stop();
    stepper.move(MOT_DIR * ang * MOT_STEPS_PER_REV / 360.0);
  } // #move

  // Returns Whether the Motor is Currently Idle (awaiting a new command)
  bool idle(){
    return stepper.distanceToGo() == 0;
  } // #idle

  // Returns the Most Recently Commanded Angle to the Motor
  float getCommAng(){
    return stepper.targetPosition() * 360.0 / MOT_DIR / MOT_STEPS_PER_REV;
  }

  // Perform All Necessary Motion Control Commands:
  void updateMotion(){
    stepper.run();
  } // #updateMotion
#endif // _MOTION_H
Scheduler – schedule.h
/* Schedule.h
 * Intuitive Scheduling Utility that Allows for Complex Time and Condition Based
 * Behaviors to be Constructed out of Simple, Legible Event-Based Primitives.
 * (admittedly, this has a bit of a ways to go in terms of memory efficiency -
 * badly needs a ring buffer. (especially bad now that state persistence has
 * been added))
 * KNOWN BUGS / PROBLEMS:
 *  - Semi-Required memory leak on the %done% state of Actions. Need to have
 * some way of determining whether / how long other functions will need access to
 * this information after the Action has been deleted. NOTE: Until this is fixed,
 * the ability to create unbounded series of SingleTimedEvents with #in_ is
 * gone. Keep total number of events known and bounded.
 * Author: Connor W. Colombo, 9/21/2018
 * Version: 0.1.4
 * License: MIT
 */
#ifndef SCHEDULE_H
#define SCHEDULE_H
#include "Arduino.h"
#include <ArduinoSTL.h>
#include <vector>
/* Example Usage (only call these once, likely in setup):
 ** avoid calling variables directly from inside these functions unless they are global variables **

 void setup(){
 // Basic Call:
 sch->EVERY(500)->DO(blink()); // Will call #blink every 500ms
 sch->EVERY_WHILE(750, dist < 10)->DO(togglePeek()); // Will peek / unpeek every 750ms while dist is < 10cm
 sch->IN(2500)->DO(doThisOnce()); // Will call #doThisOnce one time in 2.5s

 sch->NOW->DO(sortOfUrgent()); // Will call #sortOfUrgent as soon as possible without blocking other events (useful in comm. interrupts for longer behavior)

 sch->WHILE(dist < 10)->DO(swing_arms()); // Will call #swing_arms as often as possible as long as dist < 10.
 sch->WHEN(dist > 10)->DO(someOtherThing()); // Will call #someOtherThing every time dist goes from <=10 to >10.
 sch->WHEN(touched())->DO(uncoverEyes()); // Will uncover eyes when touched goes from false to true (so, when touched)

 // Other more efficient notation for simple function calls:
 sch->EVERY(250)->do_(blink); // if you're just calling a void function with no arguments, it's more effective to just use the lowercase #do_
 // Note:
 sch->EVERY(100)->DO(x++); // x or other variables accessed directly must be a global variables (not local scope)

 // Or Save Events to be Registered to Later:
 Event* FREQ_100Hz = schedule->EVERY(10);
 Event* TOO_CLOSE = schedule->WHEN(dist < 10); // ... somewhere else in code: TOO_CLOSE->DO(tone(BUZZER, 1000, 25));
 TOO_CLOSE->SIGNUP(tone(BUZZER, 1000, 25));

 // Additionally, events which setup other events (using nested actions) return
 // a double pointer to a bool which indicates when all sub-events have been
 // executed at least once.
 // Note: bool** beepboopd must be global.
 beepboopd = sch->IN(3100)->DO_LONG( *(sch->IN(1000)->DO( plt("***BEEP***BOOP***"); )); );
 sch->WHEN(**beepboopd)->DO( plt("## BOP ##"); );
 }
 */

/* NB: Some functionality must be assigned in macros b/c lambdas with captures
 can't be converted to function pointers. */
// More Legible Shorthand for "do_" syntax:
#define DO(x) do_([](){x;})
/* Shorthand for Calling a Function which Takes a Long Time to Complete after it
 Returns (has its own event calls) and returns a double pointer of a boolean which
 indicates when it is done. */
#define DO_LONG(x) \
do_(new NestingAction([](Action* action){ \
delete action->done; \
action->done = x; \
}));
// More Legible Shorthand for "do_" syntax:
#define SIGNUP(x) signup([](){x;})
// More Legible Shorthand for "while_" syntax:
#define WHILE(x) while_([](){return (x);})
// More Legible Shorthand for "when" syntax
#define WHEN(x) when([](){return (x);})
// Syntax to Normalize All-Caps Syntax used by Conditionals:
#define EVERY(x) every(x)
// More Legible Shorthand for "everyWhile" syntax:
#define EVERY_WHILE(x,y) everyWhile(x, [](){return (y);})
// Syntax to Normalize All-Caps Syntax used by Conditionals:
#define IN(x) in_(x)
// Shorthand Syntax for Performing a Task as Soon as Possible:
#define NOW in_(0)
// Shorthand Syntax for Performing a Task as Frequently as Possible:
#define ALWAYS EVERY(1)

typedef bool** ActionState;
#define new_ActionState(b) new bool*(new bool(b));

/*
 * Container for Action which are called in events and their respective metadata.
 */
class Action{ // Abstract Container for Use in Arrays of Pointers
public:
    bool* done = new bool(false);

    virtual ~Action(){
        //delete done; // <- Leave the done state variable behind
        //done = nullptr;
    } // dtor

    virtual void call() = 0;

    /* Tells Whether this Action and its Required Actions are Complete.
       Returns the dereferenced state of member %done% */
    bool isDone(){
        return *(this->done);
    } // #isDone
}; // class Action
/*
 * Most basic form of an Action which takes a void-void function which has no
 * dependencies and thus is considered to be done executing once the function
 * returns (ie. doesn't generate any Events).
 */
class BasicAction : public Action{
public:
    // Type of Function to be Called which Consumes the Stored Data:
    typedef void (*function) ();

    BasicAction(function f) : oncall{f} {};

    void call(){
        oncall();
        *(this->done) = true;
    }
private:
    // Function to be Executed when this Action is Called:
    function oncall;
}; // class BasicAction
/*
 * Most basic form of an Action which takes a void-Action* function which has
 * dependencies / triggers other events and is expected to set this Action's
 * done value to true once all of its sub-functions are complete.
 */
class NestingAction : public Action{
public:
    // Type of Function to be Called which Consumes the Stored Data:
    typedef void (*function) (Action*);

    NestingAction(function f) : oncall{f} {};

    void call(){
        oncall(this);
    }
private:
    // Function to be Executed when this Action is Called:
    function oncall;
}; // class NestingAction
/*
 * An Action (ie function) to be Performed by being Called when an Event
 * Triggers and Must Receive some Piece(s) of Stored Data of type T to Execute.
 * The contained function is considered to have no dependencies and thus be
 * done executing once the function returns (ie. doesn't generate any Events).
 */
template <typename T>
class DataAction : public Action{
public:
    // Type of Function to be Called which Consumes the Stored Data:
    typedef void (*function) (T);
    // Stored Data to be Given to the Function:
    T data;

    DataAction(function f, T d) :  data{d}, oncall{f} {};

    // Calls this Action by Passing the Stored Data to #oncall and Calling It.
    void call(){
        oncall(data);
        *(this->done) = true;
    }
private:
    // Function to be Executed when this Action is Called:
    function oncall;
}; // Class: DataAction
/*
 * An Action (ie function) to be Performed by being Called when an Event
 * Triggers and Must Receive some Piece(s) of Stored Data of type T to Execute.
 * The contained function has dependencies / triggers other events and is
 * expected to set this Action's done value to true once all of its s
 * sub-functions are complete.
 */
template <typename T>
class NestingDataAction : public Action{
public:
    // Type of Function to be Called which Consumes the Stored Data:
    typedef void (*function) (T, Action*);
    // Stored Data to be Given to the Function:
    T data;

    NestingDataAction(function f, T d) : data{d}, oncall{f} {};

    // Calls this Action by Passing the Stored Data to #oncall and Calling It.
    void call(){
        oncall(data, this);
    }
private:
    // Function to be Executed when this Action is Called:
    function oncall;
}; // Class: NestingDataAction

/*
 * Basic Event Class which Triggers only when Called Directly.
 */
class Event{
public:
    // Basic void-void function which can signup for the event:
    typedef void (*RegisteredFunction) ();
    const bool runs_once; // Identifies whether this event only happens once.

    Event() : runs_once{false} {};

    virtual ~Event(){
        /*for(
            std::vector<Action*>::iterator it = this->registry.begin();
            it != this->registry.end();
            ++it
            ){
            delete (*it);
        }
         this->registry.clear(); // TODO: Need to come up with way to make Action::done itself stick around*/
    } // dtor

    /*
     * Request this Event to Execute ASAP.
     * NOTE: Calls happen IN ADDITION to any event-specific timings or conditions. */
    void call(){
        this->calledButNotRun = true;
    } // #call

    /*
     * Executes this Event if it Should Execute either Because it's been Called or
     * Should Self-Trigger.
     * Returns Whether the Event was Executed.
     */
    bool tryExecute(){
        if(this->shouldTrigger() || this->calledButNotRun){ // Call #shouldTrigger first
            this->execute();
            this->calledButNotRun = false;
            return 1;
        }

        return 0;
    } // #tryExecute

    /* Test if this Event Should Self-Trigger*/
    virtual bool shouldTrigger(){
        return 0; // Basic Events only Trigger when Explicitly Called
    } // #shouldTrigger

    /* Add the Given Function to the %registry% as a BasicAction to be Executed
     Every Time the Event is Triggered. Returns a double pointer of the done variable of the Action created. */
    bool** signup(RegisteredFunction fcn){
        Action* a = new BasicAction(fcn);
        this->registry.push_back(a);
        return &(a->done);
    } // #signup

    /* Add the Given Action to the %registry% to be Executed Every Time the Event
     is Triggered. Returns a double pointer of the done variable of the Action. */
    bool** signup(Action* a){
        this->registry.push_back(a);
        return &(a->done);
    } // #signup

    // Alias for Signing Up for the Event
    bool** do_(RegisteredFunction fcn){ return signup(fcn); }
    bool** do_(Action* a){ return signup(a); }

    // Calls All Functions Registered to this Event
    void execute(){
        if(!this->ran || !this->runs_once){
            // Do this ^ check instead of deleting self b/c pointer might be accessed later if in list.
            for(std::vector<Action*>::size_type i = 0; i != this->registry.size(); i++) {
                this->registry[i]->call();
            }
            this->ran = true;
        }
    } // #execute

protected:
    Event(bool ro) : runs_once{ro} {};
    std::vector<Action*> registry;
    bool ran = false; // Whether this function has been run before (ever).
    bool calledButNotRun = false; // Whether this Event has been Called Recently but Not Yet Executed
}; // Class: Event

/* Event which Triggers Anytime #shouldTrigger is called and its condition is True*/
class ConditionalEvent : public Event{
public:
    typedef bool (*EventCondition) ();

    EventCondition condition; // Function that Triggers the Event if it's Ready to be Triggered

    ConditionalEvent(EventCondition t) : condition{t} {}; // Constructor

    virtual ~ConditionalEvent(){
        // condition is a plain function pointer; nothing to delete here
    } // Destructor

    /*
     * Triggers this Event if its %condition% Allows It.
     * Returns Whether the Event was Triggered.
     */
    virtual bool shouldTrigger(){
        if(this->condition()){
            return 1;
        }
        return 0;
    } // #shouldTrigger
};

/*
 * Event Class which Triggers when its EventCondition is True When #shouldTrigger
 * is Called and was False the Last time it was Called.
 */
class TransitionEvent : public ConditionalEvent{
public:
    TransitionEvent(EventCondition t) : ConditionalEvent(t) {}; // Constructor

    bool shouldTrigger(){
        bool curr_state = this->condition();

        if(curr_state && !this->last_state){
            this->last_state = curr_state;
            return 1;
        }

        this->last_state = curr_state;
        return 0;
    } // #shouldTrigger

protected:
    bool last_state = false;
};

/*
 * Event which Triggers as Close to its Specified Interval after its Previous
 * Execution as Possible
 */
class TimedEvent : public Event{
public:
    unsigned long interval; // Interval between Executions

    TimedEvent(unsigned long i) : interval{i} {
        this->timer = i;
        this->last_time = millis();
    }; // Constructor

    ~TimedEvent(){ } // Destructor

    /*
     * Triggers this Event if its %condition% Allows It.
     * Returns Whether the Event was Triggered.
     */
    bool shouldTrigger(){
        unsigned long now = millis();
        this->timer -= now - last_time;
        this->last_time = now;

        if(this->timer < 0){
            this->timer += this->interval; // Keeps execution freq. as close to interval as possible
            return 1;
        }

        return 0;
    }  // #shouldTrigger

protected:
    unsigned long last_time;
    long timer;
    TimedEvent(bool runs_once_, unsigned long i) : Event(runs_once_), interval{i} {
        this->timer = i;
        this->last_time = millis();
    };
};

/* An Event which Triggers Once After a Set Period of Time */
class SingleTimedEvent : public TimedEvent{
public:
    SingleTimedEvent(unsigned long i) : TimedEvent(true, i) {}; // Constructor
};

/* An Event which Triggers at a Certain Frequency so Long as a Given Condition is True */
class ConditionalTimedEvent : public TimedEvent{
public:
    typedef bool (*EventCondition) ();

    EventCondition condition; // Function that Triggers the Event if it's Ready to be Triggered

    ConditionalTimedEvent(unsigned long i, EventCondition t) : TimedEvent(i), condition(t){};

    virtual ~ConditionalTimedEvent(){
        // condition is a plain function pointer; nothing to delete here
    } // Destructor

    /*
     * Triggers this Event if its %condition% Allows It.
     * Returns Whether the Event was Triggered.
     */
    bool shouldTrigger(){
        unsigned long now = millis();
        this->timer -= now - last_time;
        this->last_time = now;

        bool curr_state = this->condition();

        // Everytime Condition Becomes True, Restart Timer
        if(curr_state && !this->last_state){
            timer = this->interval;
        }

        this->last_state = curr_state;

        if(curr_state && this->timer < 0){
            this->timer += this->interval; // Keeps execution freq. as close to interval as possible
            return 1;
        }

        return 0;
    }  // #shouldTrigger

protected:
    bool last_state = false;
};

class Schedule{
public:
    std::vector<Event*> events;

    /* Create an Event to be Triggered as Long as the Given Condition is True */
    ConditionalEvent* while_( bool (*condition)() ){
        ConditionalEvent* e = new ConditionalEvent(condition);
        this->events.push_back(e);
        return e;
    } // #while_

    /* Create an Event to be Triggered Once for Every Time the Given Condition
     Changes from false to true: */
    TransitionEvent* when( bool (*condition)() ){
        TransitionEvent* e = new TransitionEvent(condition);
        this->events.push_back(e);
        return e;
    } // #when

    /* Create an Event that will be Triggered Every %interval% Milliseconds */
    TimedEvent* every(const unsigned long interval){
        TimedEvent* e = new TimedEvent(interval);
        this->events.push_back(e);
        return e;
    } // #every

    /* Create an Event that will be Triggered Once in %t% Milliseconds */
    SingleTimedEvent* in_(const unsigned long t){
        SingleTimedEvent* e = new SingleTimedEvent(t);
        this->events.push_back(e);
        return e;
    } // #in_

    /*
     * Create an Event that will be Triggered Every %interval% Milliseconds While
     * a Given Condition is True, starting %interval% Milliseconds AFTER the
     * Condition Becomes True.
     */
    ConditionalTimedEvent* everyWhile(const unsigned long interval, bool (*condition)()){
        ConditionalTimedEvent* e = new ConditionalTimedEvent(interval, condition);
        this->events.push_back(e);
        return e;
    } // #everyWhile

    // Function to be Executed on Every Main Loop (as fast as possible)
    void loop(){
        // Iteration has to account for the fact that elements are intentionally
        // deleted from the vector in the loop and potentially added at any call
        // of #Event::tryExecute
        std::vector<Event*>::size_type size = this->events.size();
        std::vector<Event*>::size_type i = 0;
        while(i < size){
            if( this->events[i]->tryExecute() && this->events[i]->runs_once ){
                // Delete Event if it's been Executed and Only Runs Once
                delete this->events[i]; // Delete the Event
                this->events.erase(this->events.begin() + i); // Remove the addr from the vector
                size--; // As far as we know, the vector is now smaller
            } else{
                ++i; // Increment iterator normally
            }
        }
    } // #loop
}; // Class: Schedule
#endif // SCHEDULE_H
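For context, the countdown scheme used by `TimedEvent::shouldTrigger` can be exercised off-board. The sketch below reproduces the same pattern with `millis()` stubbed for desktop testing; the stub and the `Countdown` name are ours, not part of the project header:

```cpp
#include <cassert>

// Stub of Arduino's millis() so the pattern can run on a desktop:
static unsigned long fake_now = 0;
unsigned long millis() { return fake_now; }

// Same countdown scheme as TimedEvent::shouldTrigger above:
struct Countdown {
    unsigned long interval;
    unsigned long last_time;
    long timer; // signed, so overshoot carries over into the next period
    explicit Countdown(unsigned long i)
        : interval{i}, last_time{millis()}, timer{(long)i} {}
    bool shouldTrigger() {
        unsigned long now = millis();
        timer -= now - last_time; // subtract elapsed ms since the last check
        last_time = now;
        if (timer < 0) {
            timer += interval; // keeps the average firing rate at the interval
            return true;
        }
        return false;
    }
};
```

Because the overshoot is carried over rather than discarded, a slow main loop (here, one pass every 30 ms against a 100 ms interval) still fires at very close to the requested average rate.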

 

]]>
https://courses.ideate.cmu.edu/16-223/f2018/work/2018/12/08/laser-world-final-report/feed/ 0
Vroom Car Final Report https://courses.ideate.cmu.edu/16-223/f2018/work/2018/12/07/vroom-car-final-report/ https://courses.ideate.cmu.edu/16-223/f2018/work/2018/12/07/vroom-car-final-report/#respond Fri, 07 Dec 2018 22:11:55 +0000 https://courses.ideate.cmu.edu/16-223/f2018/work/?p=3373 Sorry, but you do not have permission to view this content. ]]> Sorry, but you do not have permission to view this content. ]]> https://courses.ideate.cmu.edu/16-223/f2018/work/2018/12/07/vroom-car-final-report/feed/ 0 Reach for the Sky – Final Report https://courses.ideate.cmu.edu/16-223/f2018/work/2018/12/05/reach-for-the-sky-final-report/ https://courses.ideate.cmu.edu/16-223/f2018/work/2018/12/05/reach-for-the-sky-final-report/#respond Wed, 05 Dec 2018 23:21:51 +0000 https://courses.ideate.cmu.edu/16-223/f2018/work/?p=3327 Reach for the Sky
Jessica Lew, Jen Kwang
December 1, 2018

Figure 1.  Project setup at museum.  Monitor placed below project showed video feed from RPi face tracking program.

Abstract

Reach for the Sky is a computer-vision-controlled interactive experience that invites the user to explore the behaviors of an artificial cloud.  Using face tracking, the cloud detects the position of the user’s face and moves toward the user.  Additionally, the user may choose to touch the cloud to affect the lighting of the project, producing an artificial lightning effect.

Objectives

  1. The cloud should follow the user’s movements with enough accuracy for the user to relate their change in position to the cloud’s change in position
  2. The LED strips at the top of the box should light up in response to the user squishing the cloud, to mimic lightning
  3. The stepper motor should move freely along the rack at an approximately constant speed
  4. The latency in the Raspberry Pi and Arduino should be reduced so that delays in changing the cloud’s position do not disrupt the experience
  5. The flex sensor should be embedded in the cloud such that the cloud can be handled in a variety of ways without breaking the sensor or resulting in unreliable data

Implementation

Figure 2.  Container corner support pieces and mounts used for attaching cloud track to ceiling.

The container of this piece was originally built with the intent of implementing three artificial clouds (though in the end, only one was achieved).  Note that all wood pieces seen in the photo documentation were laser cut from 6mm plywood.  The backing of the box was created from 3mm mirrored acrylic to allow users to see their body position in relation to the clouds.  Because the box needed to be large enough to hold the three tracks on which the clouds would travel and sturdy enough not to collapse, support pieces were placed between the top and side planes of the box (see Figure 2).  In addition, the planes of the box were mainly held together by captive screws and fitted tabs.

Figure 3.  Inside project, view of top panel.  Cloud travels by using a base that moves along an aluminum extrusion track.  LEDs adhered to top are also visible.

Movement of the cloud was achieved by creating a base (to which the cloud is attached) that uses a stepper motor to travel along a pair of one-meter-long aluminum extrusions (see Figure 3).  The motor was attached to a laser cut gear, which allowed movement along a spur rack attached to one of the aluminum extrusions (see below in Figure 4).  This mechanism was based on original designs from our instructor, Garth Zeglin, though several modifications were made to accommodate gravity and the size of the project, and to reduce friction.  The artificial cloud was fabricated by filling a plastic bag with polyester fiber stuffing, then hot gluing more stuffing to the outside.  The LEDs were coded to display a pattern when the flex sensor showed a change from its initial value.  Because the flex sensor had a tendency to become permanently bent, the Arduino code was written so that the initialized value could be updated if the sensor was damaged.  A hole was cut into the bag for placement of the flex sensor, and the cloud itself was attached to the base with fishing line.
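The flex-sensor behavior described above can be sketched as follows.  This is a hypothetical reconstruction rather than the project’s actual Arduino code: the pin handling is omitted, and the baseline and threshold values are made up.

```cpp
#include <cassert>
#include <cstdlib>

// Hypothetical sketch of the flex-sensor trigger logic: fire the LED pattern
// when the reading departs from a stored resting value, and allow that
// resting value to be re-captured after the sensor takes a permanent bend.
struct FlexTrigger {
    int baseline;  // resting analog value, captured at startup
    int threshold; // how far the reading must move to count as a squeeze
    FlexTrigger(int initial, int thresh) : baseline{initial}, threshold{thresh} {}

    // True when the sensor has moved far enough from its resting value.
    bool squeezed(int reading) const {
        return abs(reading - baseline) > threshold;
    }

    // Adopt a new resting value (e.g. after the sensor is permanently bent).
    void rebaseline(int reading) { baseline = reading; }
};
```

On the Arduino side, the baseline would be captured from an `analogRead()` in `setup()`, and re-baselining could be tied to a button or serial command; those details are assumptions here.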

Figure 4.  Close-up of stepper motor base and gear/spur rack mechanism.

The project utilized two Arduinos to help improve overall latency.  One Arduino was used solely for detecting input from the flex sensor in the artificial cloud and modulating the LEDs to produce a lightning-like effect.  The other Arduino was connected to a Raspberry Pi tracking the user’s face. The Raspberry Pi used OpenCV and Haar cascades to detect all of the faces in a given frame. It then used the MOSSE tracking algorithm to track the largest face found in the frame. If the tracking algorithm could no longer track the face, the program fell back to detecting faces. The use of both tracking and threading helped to greatly reduce latency and increase the overall frame rate. The program then found the x position of the center of the face and compared it with the current position of the stepper motor, as reported by the Arduino. If the stepper motor needed to change direction, the Raspberry Pi would send an integer corresponding to this change in direction. Otherwise, the stepper motor would continue to move until it reached the end of the rail. To control the stepper motor, we used the AccelStepper library, which does not rely on delays, allowing the cloud to react to the messages sent by the Raspberry Pi almost immediately.
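The direction decision described above might look something like the sketch below.  The real program ran on the Raspberry Pi; the function name, the dead band, and the +1/−1/0 codes here are illustrative assumptions, written in C++ for consistency with the rest of the post.

```cpp
#include <cassert>

// Hypothetical sketch of the follow logic: compare the face's x position with
// the stepper's reported position and emit a direction-change command only
// when the desired direction actually differs from the current one.
// Returns +1 to command rightward motion, -1 leftward, 0 for "no change".
int directionCommand(double face_x, double motor_x, int current_dir, double dead_band) {
    int desired;
    if (face_x > motor_x + dead_band)      desired = +1;
    else if (face_x < motor_x - dead_band) desired = -1;
    else                                   desired = current_dir; // inside the dead band: keep moving
    return (desired == current_dir) ? 0 : desired;
}
```

The dead band gives the margin of error mentioned in the Outcomes section: small face movements near the cloud’s position do not cause the motor to reverse.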

Figure 5.  Laser cut clouds on top of project allowed for easy access to circuit.  ‘Smilies’ were applied to the fronts of these containers to enhance user experience and boost morale of the build team.

A camera was connected to the Raspberry Pi for input to the face tracking program, which utilized the OpenCV library.  The camera needed to be mounted in such a way that the artificial cloud would not come into its view, and so as to increase the area over which users’ faces could be tracked by the RPi program.

Figure 6.  Raspberry Pi camera mount in mirrored acrylic backing.

Outcomes

The cloud in our final project successfully followed the user and lit up according to our objectives. Although there were some bugs which affected the experience for some users, the majority of users were patient enough to experiment with the movement of the cloud and recognized that the cloud was following them. We found that interactions with the cloud were not greatly affected by the presence of a parent as the child could understand the premise of the project and interact with it by themselves. If a parent was present, they would generally explain how the cloud’s movements were controlled by face tracking.

Originally, we had intended for the user to control the position of the cloud by pushing it away, rather than for the cloud to follow the person. However, we found that because children wanted to test the outcomes of their own actions, they needed to perceive a result that made sense and was nearly immediate. Consequently, the level of precision required for children to perceive that they were controlling the clouds would not have been possible given the time constraint. As a result, after the Children’s School demo, we switched the main objective of the project so that the cloud would follow rather than move away from the user. This allowed for a certain margin of error in the position of the stepper motor while making it easier for children to grasp the abstract relationship between the cloud’s movements and their own. To some extent, however, the problem of requiring immediate, reliable results in order to fulfill the child’s objectives of autonomy and industry carried through to the final work, though greatly reduced.  During the final demo, we found that the entire interaction depended greatly on the child’s first impression. If the child moved and the cloud did not move with them, they quickly lost interest. However, if the cloud moved more reliably, this generally led to a much longer, more meaningful interaction. Over more iterations, this problem could be further mitigated by making the tracking algorithm more reliable and making the LEDs more pronounced.

Another aspect of the project that greatly contributed to the quality of the interaction, but that we had not initially considered, was its aesthetics. From the first Children’s Museum demo, we found that children were more hesitant to interact with the work if it did not appear finished. Although we accounted for this in the final demo by creating the box structure, hiding the majority of the wiring, adding the LEDs, and adapting the code so that the box could face the room rather than the wall, we think that keeping with the context of the museum and adding more eye-catching features would have made the project appear even more approachable.  In this case, a coat of bright paint, or music to mask the harsh sounds of the motors, would have made the project more appropriate for a children’s museum.

Contribution

Both of us contributed to the overall design of the project and the electronics.

Jessica contributed to the Arduino/Raspberry Pi code associated with the movement of the clouds and the implementation of the stepper motor.

Jen contributed to the design of the box, rack/pinion/gear mechanism, code and implementation of flex sensor and LEDs.

Videos

Demonstration of the cloud’s movement in response to user’s face:

Demonstration of ‘lightning’ in response to handling cloud:

Citations

Supporting Material

(Note that these drawings were produced between the beginning and intermediate project progress reports, and do not reflect the final experience.  Many aspects in the following drawings were changed or excluded in later iterations of the project.)

 

]]>
https://courses.ideate.cmu.edu/16-223/f2018/work/2018/12/05/reach-for-the-sky-final-report/feed/ 0
Tilting Maze Final Report – Henry Zhang, Xueting Li https://courses.ideate.cmu.edu/16-223/f2018/work/2018/12/04/tilting-maze-final-report-henry-zhang-xueting-li/ https://courses.ideate.cmu.edu/16-223/f2018/work/2018/12/04/tilting-maze-final-report-henry-zhang-xueting-li/#respond Wed, 05 Dec 2018 02:30:18 +0000 https://courses.ideate.cmu.edu/16-223/f2018/work/?p=3330 Tilting Maze Project Document

Henry Zhang, Xueting Li

Submission Date: 12/05/2018

Abstract

The tilting maze is a collaborative ball maze game in which each player controls one rotation axis. The goal of the game is to follow the designated path and collect all the stars along the way. The game is designed to inspire a sense of cooperation between young players and bring them moments of delight through reward feedback. From our observations, the game achieved this goal.

Objectives

  1. The maze board of the machine should be able to perform independent rotational motions in two axes: roll and pitch. The magnitude and rate of the response should be maintained at a suitable level so that the ball can be controlled at users’ will.
  2. The maze should be arranged such that there are obvious start, end, and bonus positions and paths. The ball should be able to move around the maze without getting stuck anywhere.
  3. The control method should be obvious and intuitive, encouraging children to manipulate the machine with designated controls and use them in the right way.
  4. The device should have strong and obvious feedback to the users when they achieved certain goals.
  5. Overall, the game should encourage players to work together and inspire their senses of cooperation.

Implementation and Design Choices

Mechanical and Mechanism

  1. The main chassis, maze board, controller stations, and wiring shield are made from 6mm plywood, fabricated primarily by laser cutting. We chose plywood as our main material because it is relatively lightweight compared with metal, easy to manufacture with a laser cutter, and the material most readily available to us.
  2. The rotation mechanism is achieved by implementing a 2-axis gimbal, with two hobby servos each controlling one axis of motion. Considering the small inertia of the rotating frame, and to reduce complexity and increase maintainability of the device, we attached the rotating frame directly to the servo motor, without any gearing.
  3. We use M4 bolts and lock nuts for the support of the rotation axes. We chose not to use bearings because of the small mass of the system.
  4. Captive screws mechanism is used to secure connections between vertically mating faces.
  5. Arcade push buttons are used as the control interface and are located at each corresponding control axis.
  6. A wood wiring shield box is placed on top of the breadboard to protect the circuits.
  7. Maze components (straight bars, circles, etc.) are laser cut from plywood, then arranged and attached to the maze board with double-sided tape. Example paths and star bonuses are drawn in distinct, vibrant colors.

Electrical

  1. Proximity sensors are placed at the entrance of each bonus location (the star) to detect if the ball gets there. The proximity sensors are then connected to the analog pins of the Arduino.
  2. LED arrays are placed on the walls of each bonus location (the star). Their positive and ground leads are connected to the collector and emitter of a transistor, respectively, which are in turn connected to the 9V and ground buses. The base of the transistor is connected to a digital output pin of the Arduino.
  3. Arcade switches are wired into the circuit such that the Arduino analog pin reads high when the switch is closed and low otherwise. 1K pull-up resistors are used in the circuit.
  4. A speaker is connected directly to the Arduino 5V digital output.
  5. A pack of six 1.5-volt AA batteries is used as the power source of the device.

Software

  1. To avoid using the delay() function, elapsed time is calculated on each loop iteration. Other functions may take the elapsed time as input to keep track of durations.
  2. A threshold of 100 is set on the proximity sensor reading. A reading below the threshold triggers the corresponding LED strip and a predefined reward tone from the speaker.
  3. A “pitches.h” header file is included in the main file. The header defines note-name macros with their corresponding pitch values, which are used in the definition of the reward tone.
  4. An auto-move mode is implemented: a set of predefined control inputs that the device performs automatically when no one has played it for 40 seconds. The auto-move mode terminates once either control axis receives input from a user. This feature is designed to attract potential players when the device has been idle for a while.
  5. A keystroke delay of 80 ms is implemented. This feature is designed to guide young players toward meaningful control inputs instead of rapid, random keystrokes.
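Items 1 and 2 of the software list can be sketched as follows.  This is an illustrative reconstruction with millis() stubbed for desktop testing, not the project’s actual Arduino code, though the threshold of 100 matches the value stated above.

```cpp
#include <cassert>

// Stand-in for Arduino's millis() so the pattern can run on a desktop:
static unsigned long fake_ms = 0;
unsigned long millis() { return fake_ms; }

const int PROXIMITY_THRESHOLD = 100;

// A reading below the threshold means the ball has reached the star,
// which would trigger the LED strip and the reward tone.
bool ballAtStar(int proximityReading) {
    return proximityReading < PROXIMITY_THRESHOLD;
}

// Elapsed time computed per loop iteration instead of calling delay():
struct LoopTimer {
    unsigned long last = 0;
    unsigned long tick() { // returns ms since the previous tick
        unsigned long now = millis();
        unsigned long elapsed = now - last;
        last = now;
        return elapsed;
    }
};
```

Passing the per-loop elapsed time into other functions (for tone duration, the 40-second idle counter, the 80 ms keystroke delay) keeps the whole sketch non-blocking.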

Outcomes

Functional Outcomes

  1. The servo motors provide enough torque to drive the gimbal frames, yielding independent and responsive motion. Objective 1 is met.
  2. The maze is arranged reasonably, and color plays a significant role in helping players define their objectives (spot the stars, follow the lines, etc.). Objective 2 is met.
  3. The proximity sensors, LEDs, speaker, and other electronics functioned without exception throughout the entire two-hour visit session. The pack of six 1.5-volt AA batteries lasted the entire session.

Non-functional Outcomes

  1. The arcade buttons attract players of all ages. Players can easily grasp the intent of the buttons and start playing the game by pressing them intuitively. Based on our observations of children’s actions during the first and second Children’s Museum visits, the keystroke delay provided the intended control guidance. Objective 3 is met.
  2. Players can easily feel a sense of achievement when they see the stars’ LEDs light up and hear the reward tone from the speaker. The feedback from the device is obvious and easy to obtain. Objective 4 is met.
  3. From our observation, the separation of the control buttons guides users to collaborate and encourages communication between players. Pairs with a mutual goal and good collaboration usually had better gaming experiences, which also helped other players realize the cooperative component of the game. Objective 5 is met.

Surprising Outcomes

  1. We observed that when two players could not collaborate well, an instructor could help them work together effectively. Usually, the instructor was a parent of one of the children. Whether the parent played as the child’s teammate or guided two children to play with each other, the instructor’s presence enhanced the gaming experience significantly.
  2. We also observed that some children modified the rules of the game but still enjoyed it. Though in this case the game did not meet its objectives, it at least brought laughs to the players, and we are still happy with this outcome.

Contribution

Henry Zhang: Electronics; Arduino programming; mechanical design and manufacturing of the button housing.

Xueting Li: Mechanical design and manufacturing of the chassis, gimbal frame, and wiring housing; wire fabrication.

Supporting Media

Videos

Video 1: Good collaborative gaming experience

Video 2: A parent who doesn’t want the kids to intervene

Images

Figure 1: System Overview

Figure 2: Children Interacting with the device

Citations

Examples of tilting ball maze design:

https://www.instructables.com/id/Labyrinth-From-a-Single-2×4/

https://www.instructables.com/id/DIY-Android-Ball-Maze-An-Introduction-to-the-And/

Supporting Design Documents

Solidworks CAD Files and Mechanical Drawings:

Solidworks files and drawings are here!

Arduino Code:

Arduino Code is here!

 

]]>
https://courses.ideate.cmu.edu/16-223/f2018/work/2018/12/04/tilting-maze-final-report-henry-zhang-xueting-li/feed/ 0
Magnetic Car – Final Report https://courses.ideate.cmu.edu/16-223/f2018/work/2018/12/01/magnetic-car-final-report/ https://courses.ideate.cmu.edu/16-223/f2018/work/2018/12/01/magnetic-car-final-report/#respond Sat, 01 Dec 2018 20:00:44 +0000 https://courses.ideate.cmu.edu/16-223/f2018/work/?p=3296 Sorry, but you do not have permission to view this content. ]]> Sorry, but you do not have permission to view this content. ]]> https://courses.ideate.cmu.edu/16-223/f2018/work/2018/12/01/magnetic-car-final-report/feed/ 0 Bottle Organ: Intermediate Progress Report https://courses.ideate.cmu.edu/16-223/f2018/work/2018/11/08/bottle-organ-intermediate-progress-report/ https://courses.ideate.cmu.edu/16-223/f2018/work/2018/11/08/bottle-organ-intermediate-progress-report/#respond Thu, 08 Nov 2018 20:46:11 +0000 https://courses.ideate.cmu.edu/16-223/f2018/work/?p=3196 Analysis

During the visit, our valve mechanism did not work very well. The cardboard flap inside the fan duct failed to cut off the air flow, which meant our project could not modulate the air properly. In the end, we simply left the flap open so that air flowed continuously over the bottles, producing sound the whole time. In the future, however, we want the flap to be functional so that it can produce different air modulation patterns, opening the opportunity for more interaction between human and machine.

We found that children took delight in spinning the turntable. Adults were quick to understand the concept of our project and explained to children how changing the water levels would change the sound when air passes over the bottle. Some adults were interested in our setup with the fan and even tried spinning the turntable themselves.

It was surprising to see how uninterested most of the children were in changing the water levels. They seemed more enticed by turning and watching the turntable than by the sound-producing aspect of the project. However, near the end of our time at the museum, one child was interested in hearing how a bottle would sound if it was filled to the brim with water. She filled a bottle with water and placed it under the fan to see how it would sound. Finding that no sound was produced, she kept pouring a little water out until it finally did produce sound (see: Summary Video).

In order to highlight the sound-production aspect of the project, we plan on adding a functional and more aesthetically pleasing fan duct, as well as a flap to modulate the air. Additionally, we plan on installing lights to highlight the bottle currently being blown across, emphasizing where the sound is coming from. Our original setup did not make clear which bottle was producing sound, and we believe this helped mask the sound-production aspect of the project. It also did not really facilitate pouring water in and out of the bottles to change the water levels, so we plan on establishing a “water station” to make that task clearer, and on lowering the height of the turntable, either by using a smaller table or by making our own stand, to make the project more accessible to smaller children.

Seeing that the children are more interested in the turning aspect of the turntable, we are now shifting our focus from mechanizing the turntable to controlling the air flow, so that the physicality of turning the turntable is not removed from the experience. We want to center the project’s focus on the interaction between human and machine: the fan flap will be programmed to control air modulation patterns, while the children control which bottles are being played, how much water is in each bottle, and which tones can be played.

Summary Video

Photos

Revision Plan

For modulating the air, we will be using a solenoid instead of a servo to ensure a clear cutoff and opening of air flow for sound production. We also need to revise the fan duct design to accommodate the solenoid.

The fundamental experience does not need to be modified very much. Children will still be turning the turntable for the main experience but this time, the project will be more active in changing the sound that is produced. The turntable will also be at a more accessible height and a more appropriate “water station” will be set up to make the task of changing water levels more clear.

Air modulation will be the only new capability that will be added to the initial objectives. Lights will simply be an accessory to highlight sound production.

Task List
Additional Items to be Purchased
  • Clear glass bottles
  • Lights

CAD Files

]]>
https://courses.ideate.cmu.edu/16-223/f2018/work/2018/11/08/bottle-organ-intermediate-progress-report/feed/ 0
Children’s School Observation Notes – Sora https://courses.ideate.cmu.edu/16-223/f2018/work/2018/10/18/childrens-school-observation-notes-sora/ https://courses.ideate.cmu.edu/16-223/f2018/work/2018/10/18/childrens-school-observation-notes-sora/#respond Thu, 18 Oct 2018 20:18:03 +0000 https://courses.ideate.cmu.edu/16-223/f2018/work/?p=3164 At the Children’s School, I observed 5-year-olds grabbing a handful of blocks and counting how many blocks they grabbed by placing each block on a table of numbers. The task required them to do this 6 times. Children followed the task and were proud to show the teacher how many blocks they could grab with their one hand. This activity displayed autonomy and industry, where children took the initiative to start and complete this task by themselves and also be proud of what they could accomplish from doing the task. Although simple, this task illustrates children’s ability to independently follow instructions and complete a task to the best of their ability without external influences.

In particular, I noticed that one child tried to hand in his paper having done the task only once, although the instructions said to repeat it 6 times. A teacher came over to correct him, but the child asked why he had to repeat the task 6 times. From this, we can see that the activity also illustrates how some children can intuit the repetitiveness of a task and deem it purposeless after doing it only once.

]]>
https://courses.ideate.cmu.edu/16-223/f2018/work/2018/10/18/childrens-school-observation-notes-sora/feed/ 0
ovg – Children’s School Observation Notes https://courses.ideate.cmu.edu/16-223/f2018/work/2018/10/16/ovg-childrens-school-observation-notes/ https://courses.ideate.cmu.edu/16-223/f2018/work/2018/10/16/ovg-childrens-school-observation-notes/#respond Wed, 17 Oct 2018 03:48:47 +0000 https://courses.ideate.cmu.edu/16-223/f2018/work/?p=3122 During the Children’s School visit, I observed the fourth graders in the make shop. The children were constantly engaged in a task when we were there. They were being led by a teacher to drill a hole into two pieces of wood using a hand-cranked drill, screw a bolt through the two holes, secure them with a wing nut, and decorate them using markers.

Due to the nature of the task, most of the children’s focus was directed toward the task at hand (attaching the wood together), which both caught their attention and held it. However, once some of the children began to finish their projects, they began to walk around looking for things to do. One child gazed at the pencil sharpener, possibly considering messing with it, then shortly decided not to. That same child went to a metal wall on the side of the room with magnets attached to it, which prompted him to move the magnetic objects around and engage with them. This did not last very long, however.

Working on this task shows their initiative and autonomy, demonstrating that they can do the task themselves. Only basic guidance was given by the teacher, and they were expected to ask questions if they had trouble. They seemed excited when they finished screwing the bolt into the wood, exclaiming to the teacher with a loud and proud “I’m done!” This is a sign of industry: they showed off their achievement (in this case, completing the task). One child even decided that they wanted to continue building, exclaiming, “I want to build a castle with this!” This is yet another example of autonomy and initiative.

]]>
https://courses.ideate.cmu.edu/16-223/f2018/work/2018/10/16/ovg-childrens-school-observation-notes/feed/ 0