Intro to Physical Computing: Student Work Fall 2020
https://courses.ideate.cmu.edu/60-223/f2020/work

Art Enabler by Team Amy: Final Project Documentation
https://courses.ideate.cmu.edu/60-223/f2020/work/art-enabler-by-team-amy-final-project-documentation/

INTRODUCTION


For the final project in 60-223: Introduction to Physical Computing, each group was tasked with developing a project to assist the life of a client living with a disability. Our client, Amy Shannon, is severely paralyzed in numerous parts of her body, including her hands. An avid creative at heart, Amy expressed to us during our interviews that she would love to be able to participate in creative activities, including tasks that require extensive manual dexterity like drawing and painting. In the past, she’s tried to use a painting device called a mouth stick, but has found it rather uncomfortable and inhibiting. Our group settled on developing a system that can provide an experience as close to actual hand painting as possible, in the form of a painting rig and a control mechanism capitalizing on our client’s available range of motion. Given the restrictions of remote learning, we were unable to produce a final physical product, but this would certainly be an exciting next step!

WHAT WE BUILT

Our Art Enabler project allows the user to paint using a pair of wireless controllers and a mechanical painting rig that can be mounted onto a painting easel. To best explain how this painting experience is possible, we will explore each of the 3 Art Enabler project components: the controllers, the painting rig, and the interface code.

THE CONTROLLERS 
There are two wireless controllers within our project system that act as the primary user interface – the Pressure Controller and the Primary Controller. The Pressure Controller has a simple handle attached to a slider. Moving this slider controls the Z-axis movement of the drawing utensil, bringing it closer to or further from the canvas to give the user a wider range of mark-making. The Primary Controller has a joystick that can be shifted to move the drawing utensil around the canvas. It also features a clamp that attaches the controller to an average wheelchair armrest and a telescoping neck that lets the user raise or lower the controller to better accommodate their arm.
Both controllers also feature a power switch that the user can toggle to turn the device on or off to save battery life!

THE PAINTING RIG
The mechanical painting rig can be attached to the user’s painting easel and holds the desired drawing utensil. It has two belts that control the X- and Y-axis position of the utensil so the user can draw whatever they like! When the user moves the controller, the motors on the rig move the utensil holder up, down, left, and right accordingly. The utensil holder itself also features a spring, whose pressure changes based on the input of the Pressure Controller, allowing the drawing utensil to move closer to or further from the canvas.
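
Although we never built the rig physically, here is a minimal sketch of the two-motor motion control we had in mind. It assumes the AccelStepper library driving step/direction motor drivers; the pin numbers and speed values are placeholders of ours, not part of the actual design.

#include <AccelStepper.h>

// Hypothetical pins for the two belt motors (step/direction drivers)
AccelStepper motorX(AccelStepper::DRIVER, 2, 3);  // left/right belt
AccelStepper motorY(AccelStepper::DRIVER, 4, 5);  // up/down belt

void setup() {
  motorX.setMaxSpeed(800.0);  // steps per second; placeholder tuning
  motorY.setMaxSpeed(800.0);
}

void loop() {
  // vx and vy would arrive from the Primary Controller over radio;
  // they are hard-coded here only to show the motion mapping.
  float vx = 200.0;   // steps/s toward the right
  float vy = -100.0;  // steps/s downward
  motorX.setSpeed(vx);
  motorY.setSpeed(vy);
  motorX.runSpeed();  // takes at most one step per call, so loop() must stay fast
  motorY.runSpeed();
}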

INTERFACE CODE
This code is what makes the magic happen! Within each of the physical components there is an Arduino and a radio module, which allows all of them to communicate with one another to send and receive inputs. The code takes the positions of the joystick and the slider and sends these inputs to the rig, which adjusts its position accordingly.

By combining all three of these components, the user can paint or draw without needing to pick up a paintbrush or pen themselves!

A quick animation to show how moving the joystick on the right controller moves the painting rig

All three components of the Art Enabler

The painting rig with two belts to control the X and Y-axis movement of the drawing utensil

How the painting rig might look when in use on an easel

The two controllers: On the left, the Pressure Controller can manipulate the Z-axis position. On the right, the Primary Controller can change the X- and Y-axis position. The Primary Controller can clamp onto a standard wheelchair arm.

As we were unable to create this project physically, we cannot show a video of the interactions that take place within the final project scope, so we will describe the overall interactions here to better explain the concept in context. This system requires some assisted setup before it can be operated, but once it is set up, Amy can use it almost totally independently from there on out. To complete this setup, someone (most likely Amy’s aide or a parent) must attach the painting rig to the intended drawing easel, place the drawing utensil in the holder, and attach the controllers to Amy’s wheelchair using the integrated clamps, or place them nearby. After the setup is complete and Amy decides that she would like to draw, she can position herself in front of the easel (however close or far away she would like) and turn on both controllers by flipping their power switches. To start painting, she can push the Pressure Controller’s slider until the utensil touches the canvas at the desired pressure. Then, she can move the joystick to move the drawing utensil and begin to make her marks. The speed of the rig is mapped to the joystick’s deflection (a common practice in many video games, so hopefully a fairly familiar interaction): if Amy moves the joystick a little to the left, for example, the rig will move slowly to the left until the joystick returns to the center or changes direction, and if she pushes it far to the left, the rig will move quickly in that direction. The joystick is spring-loaded, so if she lets go of it at any time, it snaps back to the center and stops the rig from moving. When she’s happy with a stroke, Amy can use the Pressure Controller to lift the drawing utensil off of the canvas and then the Primary Controller to move it to a new position for her next mark.
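
As a concrete illustration of this velocity-style mapping, here is a minimal single-axis sketch. The pin, deadzone, and speed constants are placeholder values of ours rather than tuned numbers from the project.

const int JOY_X_PIN = A0;       // placeholder joystick axis pin
const int DEADZONE = 30;        // ignore small wobble around the spring-return center
const float MAX_SPEED = 0.02;   // position units per loop at full deflection

float rigX = 0.5;               // normalized rig position, 0..1, starting centered

void setup() {}

void loop() {
  int deflection = analogRead(JOY_X_PIN) - 512;  // center the 0..1023 reading
  if (abs(deflection) > DEADZONE) {
    float velocity = (deflection / 512.0) * MAX_SPEED;  // bigger push -> faster motion
    rigX = constrain(rigX + velocity, 0.0, 1.0);        // integrate speed into position
  }
  // inside the deadzone the rig simply holds its position
  delay(20);
}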

This process of lifting and moving with the two controllers should allow Amy to draw what she would like, when she would like, after a little practice to familiarize herself with the controls and movements. Hopefully, after creating a few practice pieces, these interactions will become more fluid and natural, allowing for greater artistic expression without any need for a mouthstick!

HOW WE GOT HERE

While we came up with the idea for the Art Enabler fairly quickly after our first interview with Amy, its details and interactions evolved tremendously as we continued to develop the project. At the start, we were aiming for a single, arm-mounted controller: Amy would move her arm, and that input would control a mechanical rig able to move along the X- and Y-axes.

Initial concept sketch

An example of our first interaction concept

However, after talking to Amy again, we realized that it would be tiring for her to hold her arm up long enough to create a painting. Furthermore, as we continued to flesh out our concept and explore different types of sensors that could let a user input wirelessly control the painting rig, we realized that an arm-mounted controller wouldn’t provide the smooth drawing experience we were aiming for (that is, even if we could get such a device to work using IR sensors). Amy also expressed concern about being able to control the pressure of her mark-making the way she could with a mouthstick.
In light of these new insights, we pivoted our concept, deciding we needed to rethink the controllers and also the rig’s range of movement.

Sketch page brainstorming new types of controllers and interactions

Ideation of the prototype and detailed planning for each mechanical section

For the controllers, we decided to use potentiometer values to control the axis positions. We had to devise our own version of a joystick – one that would resemble a paintbrush – and think about how to design the controllers’ forms to best suit Amy’s range of motion (we would make changes in this area if we were to continue this project, but that is touched upon in the Reflections and Lessons Learned section of this documentation page). To make the wireless communication work, we employed three Arduinos – one in each project component – and three RF transceivers that let the Arduinos “talk” to each other.

Working on the controller’s CAD to ensure that all components could fit into the exterior frame and that the joystick could move the two slide potentiometers

Working on the controller’s CAD to ensure that all components could fit into the exterior frame and that the joystick could move the two slide potentiometers – clear view of the controller

For the mechanical rig, we decided that we needed to accommodate some type of pressure sensitivity. As we were still working through the problem of how to lift the drawing utensil off of and onto the canvas to better replicate a typical drawing experience (which isn’t just one continuous line), we decided to integrate these two issues into one solution: the Pressure Controller and an adjustable spring within the rig’s drawing utensil holder. These allow the user to move the utensil as close to or as far from the canvas as they would like, letting them “lift” the utensil off the canvas or draw fainter strokes.

The rig’s new spring

We also worked closely with our professor, Zach, to iterate on the mechanical rig’s CAD to ensure smooth, independent movement along each axis. With our first rig design, which featured only one belt, the rig would move more smoothly and quickly along one axis than the other when Amy moved the controller. After talking with Zach about this issue, we pivoted to our final two-belt design, with each belt driven by its own motor. One belt allows the rig to move along two threaded rods to go left and right; the other drives the up-and-down motion of the device.

First Rig Design

Final Rig Design

Once we figured out our components and sensors, it was time to write the code to bring it all together. However, we discovered that it is hard to write code without being able to physically test its viability. Utilizing TinkerCAD, we were able to make some interaction mock-ups to test our ideas but, in the end, this part of our project has the most theoretical functionality.

A document detailing the different electronic components and what each should do to ensure proper alignment with the physical components and the coded components

REFLECTIONS and LESSONS LEARNED

After the final presentation, we got a lot of positive feedback. One commenter said this would be a great idea and the most marketable product among those presented. However, as a market product, the rig section seemed a little unstable compared to the controller section, likely because there is little supporting structure connecting the rig to the canvas. If we had a little more time with this final project, we would have designed and built that supporting structure for our machine.

Also, working remotely created a lot of communication issues among group members. We mostly used Zoom and text messages to communicate about our project’s progress, and we used class time to divide the work and share comments on each other’s contributions. However, we did not have a chance to combine all of our work until the day before the actual presentation, and it sometimes took several hours to receive a response from a teammate after sending a text message. Fortunately, we had enough time to discuss our individual pieces and share ideas, but communication remained a persistent challenge.

Lastly, working with Amy gave us more insight into different types of clients. Through the interviews, we learned what kinds of hardships paralyzed people face in daily life. For instance, Amy has a hard time holding some items due to her spine injury. To draw pictures, she used a special mouthstick to hold brushes, but this tired her jaw and limited her drawing hobby. Through this project, we have suggested some possibilities she might be able to use to ease her uncomfortable situation.

TECHNICAL DETAILS

Code

/* Art Enabler Prelim Code */

// Preface to the code shown here:
/* First and foremost, this code is intended to drive a system
 * that can't realistically be assembled in TinkerCAD.
 * As a result, there is a lot of 'hacking' going on to
 * to at least produce the requisite result for the project.
 *
 * The reason that it can't be assembled is rather simple -
 * none of the required components exist in TinkerCAD,
 * and the closest available replacements aren't nearly as wieldy
 * in the TinkerCAD environment.
 * 
 * All single lines of code in multiline style comments are
 * for operations not supported in TinkerCAD
 *
 * | PINS |
 * POSPIN_X : Reads sensory data to set internal X position
 * POSPIN_Y : Reads sensory data to set internal Y position
 * POSPIN_Z : Reads sensory data to set internal Z position
 *
 * PX : Workaround pin to directly send X pos info to motor
 * PY : Workaround pin to directly send Y pos info to motor
 * PZ : Workaround pin to directly send Z pos info to motor
 */

// Radio control code adapted from:
// https://create.arduino.cc/projecthub/MisterBotBreak/how-to-communicate-using-433mhz-modules-9c20ed

// Smoothing code adapted from:
// https://courses.ideate.cmu.edu/16-223/f2020/text/code/FilterDemos.html

// Libraries
/* #include "VirtualWire.h" */

// Pin assignment
const int POSPIN_X = A0;
const int POSPIN_Y = A1;
const int POSPIN_Z = A2;

/* const int TRANPIN = 2; */

// Following pins are used for sending data to the "stepper" motor
// as a TCAD workaround
const int PX = A3;
const int PY = A4;
const int PZ = A5;

// Global settings
/* In a world where this is being built, it would be worthwhile
 * to add some additional hardware components to adjust settings
 */
static float resolution = 100; // Mapping resolution; works around map()'s integer-only arguments

static float scale_z = .1; // Scales motion off canvas
static float scale_x = .001; // Scales planar motion
static float scale_y = .001; 

static float min_x = 0.0;
static float max_x = 1.0;

static float min_y = 0.0;
static float max_y = 1.0;

static float min_z = 0.0;
static float max_z = 1.0;

static float pos_x;
static float pos_y;
static float pos_z = 0; // Corresponds to off canvas

// These variables were shoddily taped on after realizing TinkerCAD
// is insufficient:
static float prev_x = 0;
static float prev_y = 0;
static float prev_z = 0;

void setup() 
{
  // Default to center
  pos_x = max_x / 2;
  pos_y = max_y / 2;
  
  pinMode(A0, INPUT);
  pinMode(A1, INPUT);
  pinMode(A2, INPUT);

  pinMode(2, OUTPUT);

  // Setup radio transmission
  /* vw_setup(2000); */
  
  // Patch for TCAD
  pinMode(PX, OUTPUT);
  pinMode(PY, OUTPUT);
  pinMode(PZ, OUTPUT);
}


// Smooths input based on a previous value
float smoothing(float input, float prev, float coeff=0.1) 
{
  float difference = input - prev;  // compute the error
  prev += coeff * difference;       // apply a constant coefficient to move the smoothed value toward the input
  return prev;
}


// Used to clamp ranges
float clamp(float input, float rangemin, float rangemax) {
  float value = max(input, rangemin);
  value = min(value, rangemax);
  return value;
}


// Obtains and maps a given pin, assuming the input is a position
float map_pin_read(const int PIN) 
{
  int reading = analogRead(PIN);
  float map_val = map(reading, 0, 1023, (int) -resolution, (int) resolution) / resolution;
  return map_val;
}


float update_pos(float pos_v, float min_v, float max_v, float scale_v, const int POSPIN_V) 
{
  float dv = scale_v * map_pin_read(POSPIN_V); // Get pin input and map to a val
  pos_v = smoothing(pos_v + dv, pos_v);
  pos_v = clamp(pos_v, min_v, max_v);
  return pos_v;
}


void update_poses() 
{
  pos_x = update_pos(pos_x, min_x, max_x, scale_x, POSPIN_X);
  pos_y = update_pos(pos_y, min_y, max_y, scale_y, POSPIN_Y);
  pos_z = update_pos(pos_z, min_z, max_z, scale_z, POSPIN_Z);
}


void transmit_pos() {
  float poses[3] = {pos_x, pos_y, pos_z};
  /* vw_send((byte *) poses, 3 * sizeof(float)); */
  /* vw_wait_tx(); */
}

// In all honesty these functions could absolutely be broken,
// but they exist only as a compatibility measure for TinkerCAD
// after having written other code intended for a real system.
int SPEED_COMP(float old_val, float new_val, float v_max) {
  float res = resolution;
  float diff = new_val - old_val;  
  return map(diff, 0, res * v_max, 0, 1023 * res) / res;
}

// Compatibility function to circumvent lack of IR receivers
void TCAD_COMP() {
  // Calculate difference between new val and old for speed calc
  // (previous value first, current value second, so the sign is correct)
  int X_SPD = SPEED_COMP(prev_x, pos_x, scale_x);
  int Y_SPD = SPEED_COMP(prev_y, pos_y, scale_y);
  int Z_SPD = SPEED_COMP(prev_z, pos_z, scale_z);
  
  // Store previous values
  prev_x = pos_x;
  prev_y = pos_y;
  prev_z = pos_z;
  
  analogWrite(PX, X_SPD);
  analogWrite(PY, Y_SPD);
  analogWrite(PZ, Z_SPD);
}

void loop() {
  delay(20);
  update_poses();
  /* transmit_pos(); */
  
  /* BELOW INCLUDED FOR TCAD COMPATIBILITY */
  TCAD_COMP();
}
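
The code above runs on the transmitter side. For completeness, here is a sketch of what the receiver on the rig's Arduino could look like, mirroring the VirtualWire calls commented out above; the receive pin is a placeholder of ours.

#include "VirtualWire.h"

float poses[3];  // x, y, z positions sent by transmit_pos()

void setup() {
  vw_set_rx_pin(2);  // placeholder receive pin
  vw_setup(2000);    // bit rate must match the transmitter
  vw_rx_start();
}

void loop() {
  uint8_t buf[VW_MAX_MESSAGE_LEN];
  uint8_t buflen = VW_MAX_MESSAGE_LEN;
  if (vw_get_message(buf, &buflen) && buflen >= 3 * sizeof(float)) {
    memcpy(poses, buf, 3 * sizeof(float));  // reassemble the float array
    // ...from here, drive the X/Y belts and Z spring toward poses[]
  }
}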

 

Schematic and design files

 

Controller CAD Google Drive Link

https://drive.google.com/drive/folders/19qEuezqMGOownMQQUnk6_TxUlWhi9VCs?usp=sharing

Rig CAD Google Drive Link

https://drive.google.com/drive/folders/1j7hFiTiPuqBnq-hJt8R1rr6JDi5l3Oel?usp=sharing

Danger Sensor by Brenda Team 2: Final Documentation
https://courses.ideate.cmu.edu/60-223/f2020/work/brenda-team-2-danger-sensor/

1. Introduction

For the final project of our Physical Computing class, we were tasked, in partner groups, with building an assistive device catered specifically to one person living with a disability. Our group was partnered with a client named Brenda, and upon learning the circumstances of our assignment, we interviewed her to get a better sense of her disabilities and to brainstorm potential assistive technologies that could help her in her daily life.

More details about our interview results can be found here: https://courses.ideate.cmu.edu/60-223/f2020/work/interview-with-brenda/

After ideating, revising, and finally settling on an idea, and due to the remote nature of our class, we individually created prototypes to test various functionalities of our idea, more details of which can be found here: https://courses.ideate.cmu.edu/60-223/f2020/work/team-brenda-2-prototype-documentation/

 

2. What We Built

Our final assistive device is a danger sensor, akin to the backup cameras that cars possess, meant to help Brenda detect obstacles or fallen objects behind her that are out of sight because of her chair. More specifically, our device uses distance sensors mounted on the back of Brenda’s assistive chair to sense anything behind her and alerts her through vibration and a light visualizer that conveys the position and distance of the items sensed.

IR Option (Perspective)

IR Option (Side)

IR Option (Top)

Ultrasonic Option (Perspective)

Ultrasonic Option (Side)

Ultrasonic Option (Top)

Communication Module (Vibration Motor + Three LEDs)

Left: mounting for IR sensors. Right: mounting for US sensors.

 

3. How We Got Here

Our project development can be broken down into five parts following the initial interview with Brenda: Initial Design Idea, Pivot, Prototype, Research, and Design Development.

1. Initial Design Idea

We held our first interview with Brenda to gauge what kind of trouble she has in her daily life and to brainstorm solution ideas. She presented us with a multitude of problems that we could try to address. The only problem was that most of them seemed addressable only through mechanical means rather than electrical. (Full interview documentation can be found here.)

Out of the several ideas she gave us, we decided to address her problem with her current footplates. These footplates are hard for her to reach down to, and she is also unable to fold them up herself. She always needs an assistant to do so, which made her wish the process could be automated so that she doesn’t have to lift her feet every time.

As an initial design, we decided to develop an automated foot-holder system that uses an IR remote control and a telescoping mechanism that would expand and contract at her wish.

However, we soon realized that this solution would require a mechanical system too hard for us to build. The system itself would be hard to design, and we would have to prove that it could hold the weight of her feet. These things were well beyond the scope of our capabilities, so we made the choice to abandon this idea and pursue a different one.

Sketch describing our initial design idea – an automatic foot holder!

Telescoping Mechanism (front, folded)

Telescoping Mechanism (side, folded)

Telescoping Mechanism (side, unfolded)

2. Pivot

After realizing that the initial idea wouldn’t work, we held a second interview with Brenda. Fortunately, after letting her know that we needed a solution centered on electrical components rather than mechanical ones, we were able to get quite a few new ideas that fit the scope of the project. (Full second-interview documentation can be found here.) Out of the options we had, we decided to address her difficulty seeing behind her, caused by her fixed headrest and the immobility of the left side of her body; she has simply learned to be careful when looking back or moving backwards. We thought of making something like a car’s backup camera that would tell her what is behind her, but instead of a camera, we decided to use a set of ultrasonic sensors to sense the objects behind her. As an output system, we decided on a set of LEDs and a speaker to tell the user how close those objects are.

3. Prototype

Following the general description of our initial design sketch, we developed the prototype illustrated below. The ultrasonic sensors would be attached to the back of the wheelchair. The speaker + visualizer module (made of three LEDs with a diffusing material on top) would be mounted at the end of the armrest and would communicate the proximity of objects to the user. The visualizer would be shaped as a semicircle divided into thirds, with each third representing one ultrasonic sensor. Each third would have three states: white (no objects close by), dim red/pink (objects somewhat close), and red (objects very close). The speaker would have corresponding outputs: no sound, slow beeping, and high-pitched fast beeping, respectively.
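
A minimal sketch of this three-state mapping is below; the distance thresholds are placeholders for illustration, not values we tuned.

// Classify one sensor's distance reading into the three feedback states
enum FeedbackState { STATE_CLEAR, STATE_NEAR, STATE_VERY_CLOSE };

FeedbackState classify(float distanceCm) {
  if (distanceCm > 100.0) return STATE_CLEAR;  // white light, no sound
  if (distanceCm > 40.0)  return STATE_NEAR;   // pink light, slow beeping
  return STATE_VERY_CLOSE;                     // red light, fast high-pitched beeping
}

void setup() { Serial.begin(9600); }

void loop() {
  float d = 75.0;               // a real sketch would read an ultrasonic sensor here
  Serial.println(classify(d));  // prints 1 (STATE_NEAR) for this placeholder value
  delay(500);
}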

Division of Work for Prototype

Close-up of the single ultrasonic sensor I had, modeling the part of the device meant to hold the arrangement of 3 ultrasonic sensors.

Close-up of the LED arrangement and speaker, modeling the alert/visualization system for the user based on the ultrasonic sensor feedback.

Overall scale of my prototype with all the essential electrical components mounted together.

Visualizer display

Visualizer (deactivated)

Placement of the sensor

Visualizer (activated)

After presenting our prototype and receiving feedback, we outlined the general direction we wanted to take for the next step of the project. We received more photos from Brenda showing her wheelchair’s blind spots that our new device could address. The following are the changes we decided to implement in our design moving forward.

  • Change to vibration instead of auditory beeping feedback
  • Lower detection range of ultrasonic sensors to detect pets & fallen objects
  • Keep version with ultrasonic sensors all active, visualizer will show which direction detects closest thing corresponding to light intensity
  • Keep awareness about armrest prone to easy damage, add to inside of pocket attached to armrest instead
    • Will get image of armrest area and blind spot on back of chair
  • Dogs might be able to hear the ultrasonic ping and not like it – find a higher-frequency ultrasonic device.
4. Research

The Research part and the Design Development were done simultaneously, following the Gantt Chart that we made after finishing the prototype presentation.

Gantt Chart

The research was primarily about finding a component that could replace the 40 kHz ultrasonic sensor, since dogs can hear ultrasound up to roughly 60 kHz and might be irritated by it. We were able to find a good alternative for this project – IR sensors! They use infrared light to measure proximity to objects. However, IR sensors have a much smaller cone of range than ultrasonic sensors, as illustrated in the diagram below.

Range for US Sensor vs IR Sensor

(The full document on alternative sensors can be found here)

However, we figured that the smaller cone of range could be compensated for by putting several of them in a row.
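
As a sketch of what reading one of these rangers could look like, assuming a Sharp GP2Y0A21-style analog IR sensor (the curve fit below is a commonly published approximation for that part, not our own calibration):

const int IR_PIN = A0;  // placeholder analog pin for one IR ranger

float irDistanceCm() {
  float volts = analogRead(IR_PIN) * (5.0 / 1023.0);  // ADC counts -> volts
  return 27.86 * pow(volts, -1.15);  // approximate fit, valid roughly 10-80 cm
}

void setup() { Serial.begin(9600); }

void loop() {
  // With several sensors in a row, the device would take the minimum of
  // their readings, just as the ultrasonic version's loop does.
  Serial.println(irDistanceCm());
  delay(100);
}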

5. Design Development

In the Design Development part, we adapted the design to our new concept: a detection system that would sense pets or fallen objects that might get stuck in the space below her wheelchair or would be extremely difficult for Brenda to see.

Instead of placing the sensors in the middle of the back of her wheelchair, we decided to place them lower. Following the feedback that a speaker would irritate both the user and her animal friends, we decided to use a pancake vibration motor instead. We kept the 3-LED module for a wider range of communication, but instead of each LED representing one sensor, we changed the interaction so that each LED represents a level of proximity. In the end, we decided to include options for both ultrasonic and infrared sensors, as you can see in the renders. The ultrasonic sensor has the advantage of scanning a wider range – 180 degrees with no blind spot – but the IR sensor can sense not just a fallen object but also a grade change, and alert the user to it.

 

4. Conclusion and Lessons Learned

After the final presentation, we had a wonderful experience receiving insightful and thoughtful feedback, much of which praised our idea and process and made us overjoyed to have our strenuous efforts recognized. In two separate comments, our extensive research process was complimented, being called “really thoughtful and well-researched” and “loved seeing all the considerations”. Concerning the more specific feedback about our design process, we received a comment on our auditory vs. tactile feedback considerations saying that they “really enjoy the super high pitched sound”; however, we think the writer may have slightly misunderstood our presentation and taken the speaker as our final decision, when we actually settled on the vibration motor. In another comment, someone noted our considerations in the sensor research, citing how “it was really interesting to think about the pets’ state,” which definitely made us feel all our efforts were worth it.

On the topic of process, due to the nature of the semester, our project was done entirely through remote collaboration, which was definitely an interesting but challenging experience. What worked most effectively for us was our Zoom collaboration sessions, where we stayed on a call with each other and finished our allocated sections of assignments. While these were very productive, continuously setting up meeting times purely over text was frustrating at times, and in the end it pushed us toward splitting up work and finishing on our own time, which slowed productivity somewhat but still got the work done. Toward the end, we had fewer and fewer collaboration sessions due to the busyness of our other classes, so as a future consideration, we could have put in more effort at the end to maintain our prime productivity rate.

Not only was collaborating remotely somewhat new, it was also our first time working with a disabled person. The most interesting aspect was seeing just how different our lives and mindsets were: our partner, Brenda, has severely limited mobility and thus a completely different experience of even trivial everyday tasks. It was noticeable how her reliance on assistive technologies shaped which things she considered problems – things we would never have thought of from our perspective – giving us a lot of insight into what life looks like for people in circumstances different from ours. This experience, and the final project as a whole, has definitely broadened our horizons and taught us to look at things in a different way while practicing thinking from other viewpoints.

 

5. Technical Details

TinkerCAD Screenshot

Schematics

Code:

/*
 * Final Project
 * Arleen Liu (arleenl), Claire Koh (juyeonk)
 * 2 hours
 * 
 * Collaboration: None
 * 
 * Challenge: Figuring out best and most accurate feedback
 * mechanisms from the sensors for the visualizer made of 
 * LEDs and the vibration motor for an intuitive understanding.
 * 
 * Next Time: Research and experiment even further with 
 * different feedback mechanism for ease of the user.
 * 
 * Description: An assistive device meant to detect the 
 * presence of any obstacles behind a person and provide 
 * vibration/visual feedback to the user about the proximity
 * of potential objects blocking the way.
 * 
 * Pin mapping: 
 * 
 * pin |     mode     | description
 * ----|--------------|------------
 * 2     OUTPUT          Ultrasonic 1 Trig
 * 3     INPUT           Ultrasonic 1 Echo
 * 4     OUTPUT          Ultrasonic 2 Trig
 * 5     INPUT           Ultrasonic 2 Echo
 * 6     OUTPUT          Ultrasonic 3 Trig
 * 7     INPUT           Ultrasonic 3 Echo
 * 8     OUTPUT          Ultrasonic 4 Trig
 * 9     INPUT           Ultrasonic 4 Echo
 * 10    OUTPUT          Ultrasonic 5 Trig
 * 11    INPUT           Ultrasonic 5 Echo
 * A2    OUTPUT          Red LED 1
 * A3    OUTPUT          Red LED 2
 * A4    OUTPUT          Red LED 3
 * A5    OUTPUT          Vibration Motor
*/ 


const int MOTOR_PIN = A5;
const int LED1_PIN = A4;
const int LED2_PIN = A3;
const int LED3_PIN = A2;

const int TRIG_PIN1 = 2;
const int ECHO_PIN1 = 3;
const int TRIG_PIN2 = 4;
const int ECHO_PIN2 = 5;
const int TRIG_PIN3 = 6;
const int ECHO_PIN3 = 7;
const int TRIG_PIN4 = 8;
const int ECHO_PIN4 = 9;
const int TRIG_PIN5 = 10;
const int ECHO_PIN5 = 11;

const int SONAR_NUM = 5;      // Number of sensors.

int TRIG_PINS[SONAR_NUM] = {
  TRIG_PIN1,
  TRIG_PIN2,
  TRIG_PIN3,
  TRIG_PIN4,
  TRIG_PIN5
};

int ECHO_PINS[SONAR_NUM] = {
  ECHO_PIN1,
  ECHO_PIN2,
  ECHO_PIN3,
  ECHO_PIN4,
  ECHO_PIN5
};

void setup() {
  Serial.begin(115200); // Open serial monitor at 115200 baud to see ping results.
  pinMode(MOTOR_PIN, OUTPUT);
  pinMode(LED1_PIN, OUTPUT);
  pinMode(LED2_PIN, OUTPUT);
  pinMode(LED3_PIN, OUTPUT);
  
  pinMode(TRIG_PIN1, OUTPUT);
  pinMode(ECHO_PIN1, INPUT);
  pinMode(TRIG_PIN2, OUTPUT);
  pinMode(ECHO_PIN2, INPUT);
  pinMode(TRIG_PIN3, OUTPUT);
  pinMode(ECHO_PIN3, INPUT);
  pinMode(TRIG_PIN4, OUTPUT);
  pinMode(ECHO_PIN4, INPUT);
  pinMode(TRIG_PIN5, OUTPUT);
  pinMode(ECHO_PIN5, INPUT);
}

unsigned long currentmillis;
bool ledState = false;  // toggled to blink the LEDs in the closest range

void loop() {
  float minDist = 100000000.0;
  for (uint8_t i = 0; i < SONAR_NUM; i++) {

    // Clear the trigPin
    digitalWrite(TRIG_PINS[i], LOW);
    delayMicroseconds(2);
    // Set the trigPin HIGH for 10 microseconds to fire a ping
    digitalWrite(TRIG_PINS[i], HIGH);
    delayMicroseconds(10);
    digitalWrite(TRIG_PINS[i], LOW);
    // Read the echoPin: round-trip travel time of the ping in microseconds
    float duration = pulseIn(ECHO_PINS[i], HIGH);
    // Convert time to distance in cm (speed of sound ~0.034 cm/us, halved for round trip)
    float distance = duration * 0.034 / 2;
    //Serial.println(distance);
    if (distance < minDist) {
      minDist = distance;
    }
  }
  Serial.println(minDist);
  if (minDist >= 80) {
      analogWrite(MOTOR_PIN, 0);  // nothing nearby: make sure the motor is off
      digitalWrite(LED1_PIN, LOW);
      digitalWrite(LED2_PIN, LOW);
      digitalWrite(LED3_PIN, LOW);
      Serial.println("No LED's");
  }
  if (minDist <= 80 && minDist > 60) {
    analogWrite(MOTOR_PIN, 255);
    digitalWrite(LED1_PIN, HIGH);
    Serial.println("1 LED lit up");
  }
  if (minDist <= 60 && minDist > 40) {
    analogWrite(MOTOR_PIN, 255 * 3 / 4);  // was 255 * (3 / 4), which is 0 in integer math
    digitalWrite(LED1_PIN, HIGH);
    digitalWrite(LED2_PIN, HIGH);
    Serial.println("2 LEDs lit up");
  }
  if (minDist <= 40 && minDist > 20) {
    analogWrite(MOTOR_PIN, 255 / 2);  // was 255 * (1 / 2), which is 0 in integer math
    digitalWrite(LED1_PIN, HIGH);
    digitalWrite(LED2_PIN, HIGH);
    digitalWrite(LED3_PIN, HIGH);
    Serial.println("3 LEDs lit up");
  }
  if (minDist <= 20) {
    Serial.println("LEDs blinking");
    // Toggle all three LEDs every 500 ms for an urgent blinking alert
    if (millis() - currentmillis > 500) {
      ledState = !ledState;
      currentmillis = millis();
    }
    digitalWrite(LED1_PIN, ledState);
    digitalWrite(LED2_PIN, ledState);
    digitalWrite(LED3_PIN, ledState);
  }
  
  delay(10);
  //Serial.println();
}

 

Design File (DXF) Download here

ToneFlex by Team Elaine: Final Documentation
https://courses.ideate.cmu.edu/60-223/f2020/work/toneflex-by-team-elaine-final-documentation/

What is ToneFlex? Our product is a musical device that caters to those who struggle with grip force and strength. Before we arrived at our final concept, there was an extensive process, starting with interviewing our client to get a better sense of how we could work together to design a device for her, and continuing through brainstorming potential ideas. Unfortunately, we were stuck for a while on what direction to take: a project that is useful for an important task, or a fun activity? With all of these questions, we had no clue where to start. This confusion came not only from how many potential directions we could take, but also from our client’s extensive experience in this field – she has an education in biomedical engineering and rehabilitation science – which left us unsure of the expectations. The suggestions she gave were too complex for our expertise, so we had to pivot and find a different way to work with Elaine. As we talked and got to know Elaine better, we learned more about her personal life, hobbies, and interests. We learned that Elaine enjoys music but has always struggled to find instruments that were accessible to her. That’s how ToneFlex started! Her descriptions of how inconvenient and difficult instruments were to use shaped our device’s goal. From there, we were able to start prototyping and doing user research to finalize our product. Our device’s goal is to aid Elaine, who has difficulty with devices that require grip force and strength, to play music as if she were playing an actual instrument.

Device Summary

ToneFlex is a musical device that uses both proximity sensing and touch to manipulate pitch and volume. Its goal is to give users who struggle with grip force and strength an easier and more efficient way to create music. There are two parts to the product. The left side uses a sensor to change the pitch based on how far away the user’s hand is. The right side has two pushbuttons and a potentiometer to play, stop, and change the volume. Both parts can be clipped onto the arm of a chair or any surface.

This is our final device design!

Before getting to our final product, we went through various types of research to help us determine what we needed. As a team, we concluded that the extra feedback meetings with Elaine were extremely helpful, as they cleared up assumptions we had made while designing. Our biggest design takeaway was making sure that the device uses both hands, since Elaine has different capabilities in each hand. She emphasized how much harder it is for her to use only one hand because of the constraints she faces with her hands, arms, and overall extension. From those discussions and that collaboration, we were able to start prototyping our idea.

A big part of our process was choosing the specific components for the device. On the left side, we used an Adafruit distance sensor to get the input for the pitch, because it is much more precise than a typical proximity sensor. We purposely used only one main interaction component for the left device, since that was one of Elaine’s requests.

This is our map of where the component is on the left side of the device.

Then, on the right side, we used two pushbuttons and a potentiometer. The buttons play and pause the pitches on the digital synthesizer so the user can make their own music, and the potentiometer adjusts the volume, which in unison with the pitch manipulation can produce fun music.

This is the map of the right side of our product.

To give a better representation of how our device works, we made a quick animation demo. Due to video size limits, we had to cut it into clips. The first clip shows how the input sensor interacts with the user: a hand comes in and waves in front of the device. We made the left device’s surface curved. This design choice was intentional: in user testing we realized that a flat surface was much less comfortable, because the user had to raise their arm slightly to reach across the sensing distance. Curving the surface lets the user comfortably rest their arm on the device while pivoting side to side to change the pitch.

This is an animation showing the physical interactions with our device

For our final model, we used SketchUp to render it. That platform was extremely useful for working together remotely, because it lets multiple people work on the same file.

Here is a gif of our device from various angles. The quality suffered in conversion to this format, but it is a great representation of our design.

Here is a front view of our device.

Here is the backside of our final device design!

Here is the left device’s side profile.

Here is the right side’s profile view.

For our TinkerCAD version, we were unable to reproduce the device exactly, so we used slightly different components to demonstrate the overall interactions. In the real device, the readings received from the sensors are converted to MIDI messages and sent over the serial port to a digital synthesizer, which plays the music. However, TinkerCAD does not offer serial-port communication to external applications, so we instead connected a second Arduino by serial to the main Arduino; it receives the MIDI-format signal and displays it on an LCD to indicate what was sent.
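
As a rough illustration of that sensor-to-MIDI step, here is a minimal sketch of our own; the note mapping, the placeholder distance value, and the serial-to-MIDI bridge assumed on the computer side are all illustrative assumptions, not our exact code.

const byte NOTE_ON  = 0x90;  // MIDI note-on status byte, channel 1
const byte NOTE_OFF = 0x80;  // MIDI note-off status byte, channel 1

void sendMidi(byte command, byte note, byte velocity) {
  Serial.write(command);
  Serial.write(note);
  Serial.write(velocity);
}

void setup() {
  Serial.begin(31250);  // standard MIDI baud rate; serial bridges often expect 115200
}

void loop() {
  long distanceCm = 42;  // a real sketch would read the proximity sensor here
  // Nearer hand -> higher pitch; the range endpoints are placeholder choices
  byte note = constrain(map(distanceCm, 5, 60, 72, 48), 36, 84);
  sendMidi(NOTE_ON, note, 100);  // start the note
  delay(200);
  sendMidi(NOTE_OFF, note, 0);   // stop it before choosing the next one
}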

This is a diagram of components in our TinkerCAD!

In our demo, you can see how the components are supposed to interact, giving a better understanding of our device’s goals. In the bottom left corner, you can see what is going on with our Arduinos.

This is our TinkerCAD demo!

Lastly, because we used an outside synthesizer, we were unable to demonstrate the sounds in TinkerCAD, but we can show a quick demo of the actual software used with our device. The options in this software allow users to emulate actual instruments, from winds and brass to even strings!

This is our software sound demo! Please lower your volume, because the sound of this video is quite loud.

This is our storyboard of how we imagine Elaine using our device!

Our short story gives a small glimpse of how we would want Elaine to interact with our device. While Elaine listens to music, she gets inspired and wants to create her own! All her life, she has been unable to play most instruments due to her limited movement, but she has always wanted to at least try one day. That is when she uses ToneFlex, attaching it to her chair to produce her own music. Once she is done making her music, she can share it with anyone and continue her day.

Process

Getting started with this project, we were given the opportunity to interview our client, Elaine Houston, so that we could collaborate to design an assistive device catered to her. Before our meeting, we sat down as a team and debriefed our intent, goals, learning objectives, and secondary research. Because we had already learned a little about Elaine through that secondary research, our conversation was comfortable and we were able to connect and get things started right away. This stage took longer than we expected, but in the end the extra preparation was extremely beneficial when a problem arose later in the process.

Here is the brief we created before starting the project.

This is the document where we put all of our secondary research about Elaine.

From there, we were able to learn more about Elaine. Our interview took many turns, covering subjects we could not have foreseen. Elaine shared a lot of knowledge and wisdom from past projects she has been part of in the world of inclusive design. She kept coming back to an emphasis on making experiences that aren’t just accessible to one group of marginalized people, but rather accessible to all, including those who happen to be marginalized. Between anecdotes, we discussed her experiences with a service dog, her difficulty with fine motor objects, and how awkward others can make social interactions with her.

If we were to go back and interview Elaine again for the first time, we would structure it differently. The questions we asked to nudge the conversation were very open-ended, which can be good until the conversation wanders out of the project’s scope. Our structure would benefit from picking a tight set of objectives we needed to walk away with and building guiding questions around them. It would be best to state these objectives at the top of the conversation, almost like a brief outline, so that everybody involved has a sense of what we want to hit and the pace at which we are hitting those points.

After our interview, we were somewhat stuck. Many of the issues we discussed with Elaine fell closer to the realm of mechanical engineering. These were still valid problems, but likely too big for us to chew on as undergraduates in an intro-level course. Elaine is always tinkering, and for many of the small needs she has encountered in her life, she has been proactive in finding her own solutions. Having a lack of problems in Elaine’s life to tackle was a strange problem, and we were frankly stuck.

We spent time taking a step back: maybe if we couldn’t pinpoint a need to take on, we could find other ways to enrich Elaine’s life through her interests. Visiting her personal website, we found a page detailing what Elaine considers her favorite “theme” songs. We knew that at least listening to music was an important part of her life, and we began to think about what the experience of making it would be like for her. With the theremin as a starting point of inspiration, we began to prototype different aspects of the experience.

We divided our explorations into ergonomics, interaction, and software, knowing all of them would come together to inform our final product. In the ergonomic testing, we spent time understanding Elaine’s range of motion firsthand. Connor built a quick device out of an exercise band and rope that limits the wearer’s range of motion. We knew that Elaine had a maximum extension of 90 degrees, but this activity revealed there was no need to push that vertical range of motion to its max.

This is a gif of our interaction.

Here is a diagram of our ergonomic exploration.

Jina worked on interaction prototyping focused on manipulating conductivity with electrical tape. Different graphic arrangements of tape laid out maps for different journeys of motion and ways to manipulate sound. The activity raised useful questions about where the device would be located, which complemented the ergonomic prototyping. In addition, we looked at another way of getting input: sensors. Before diving into the design elements, we sat together as a team, wrote down elements that we thought would work, and then iterated.

Our first design meeting notes about what direction to take.

This is a diagram of our first design prototype that uses the function of touch!

These were the sketches made before actually physically making the prototype.

This is our second prototype that uses sensors.

This is a birds-eye view of the second prototype.

These are our sketches for our second prototype design.

Sruti worked on software prototyping to understand how we could translate physical interactions into manipulating MIDI sounds. This was fundamental to giving the instrument a voice that someone would want to interact with. The work revealed how we might treat volume and tone as flexible variables.

Here is a screen grab of the software that was being tested for the synthesizer.

From those prototypes, we were able to user test and get feedback about our direction. It was extremely effective, with a co-creation element where the users actually sketched their feedback and potential next steps. On the left, you can see some of the notes we took while observing our users; on the right, the suggestions we received from the testers. This exercise was extremely helpful in showing our team the gaps we had failed to see while working.

Here were the overall notes from one of the user tests.

We were anxious to present our progress to Elaine, as we had taken liberties with the direction since our interview and were hoping it would be a pleasant and engaging surprise. Luckily for everyone, our direction resonated with her. Elaine shared stories about learning to play the recorder in her school’s music class as a young kid. The instrument requires ten fingers to play all of its notes, so Elaine was frustrated that she couldn’t participate. She showed us a recently designed adapter that she was able to 3D print, which lets her play the recorder today. It uses a mechanical solution similar to a clarinet or saxophone, where certain tabs can be pressed to cover multiple holes at the same time. Our project interested her in that it differs from the start: there is no need for an adapter to make it different from anybody else’s instrument.

Elaine still had useful feedback for us after we shared our prototypes. In terms of ergonomics, Elaine shared valuable insights to add to our knowledge of her range of motion. She talked about how gravity is difficult to constantly work against, so when her arm is at a certain elevation it is easiest for her to continue working with it at that elevation. However, she noted that it is not comfortable to be interacting with interfaces directly in front of her as it requires muscle strength she does not have. As for interaction, she was interested in the theremin as a starting point and gave us the green light on proximity sensors as a primary interaction.

We kept all of this feedback in mind as we began to bring our prototyping together. In developing the form it would take, we knew it would work best attached to the arms of Elaine’s chair, where her arms already naturally rest. This ruled out earlier prototypes with interfaces on the side of the chair, as well as earlier ones we had teased that were set up like a centered control board. Attaching the instrument to the chair had to be easy yet structurally sound. The controls of the instrument had to embody the tone and volume manipulations Sruti had been working on. Drawings were initially done with a rough concept in mind, then iterated many times digitally in SketchUp to arrive at our final product.

This is our sketch for our final design direction!

This was the physical model we quickly made to test out if the interaction made sense.

This is a gif to quickly explain the movement we were planning for our device.

Here are all the types of models we tried out in sketches before getting to our final!

After making these mock-ups, we were thankful to do one more round of user testing with the people from earlier in our process. From there, we got feedback that the left device being flat constrained their arms, as they had to lift them slightly when moving farther from the sensor, leading us to iterate further and make our final left device curved for more comfort.

Reflection

After our final presentation, we were able to have wonderful discussions and get lots of feedback on our device. It was extremely helpful to have extra time to talk with new people and get new perspectives on what we did. First and foremost, Elaine’s feedback was our top priority! When we talked with her at our final demo presentation, she emphasized how much she appreciated the interesting concept of our device’s changing components. In addition, she mentioned that we met all the specific constraints she had told us about! She suggested a bigger knob for the volume, or even a slider, to make that component easier to manipulate. As of right now, our knob works, but these suggestions would make it much easier for her.

Slides were super clean (and gorgeous!!) and easy to understand which allowed for your communication of concept to really shine. I enjoyed all of the prototype iteration you did to work towards an intuitive and comfortable experience.

Adding on, we got a lot of positive feedback from other classmates and clients, to the point where they asked how this device could be made accessible to them. Those questions were eye-opening, helping us understand how different everyone’s abilities are and how everyone prefers different types of interactions and constraints. Though our focus for this project was solely on Elaine, when talking with other people we realized that others with different disabilities might find Elaine’s constraints to be their main way of functioning. For example, two people we talked to mentioned that they would prefer using only one hand with the device rather than two. Our decision to use two separate devices was due to Elaine’s request, because it is easier for her to make simple movements with her left hand and slightly more complex, stronger ones with her right hand. Discussing different people’s abilities with others was extremely interesting and made us reflect on how difficult accessibility is to implement overall.

Later on, we reached out to Elaine to talk about potentially broadening the accessibility of our device, given the feedback the other clients gave based on their constraints. When we brought up the one-handed device suggestion, she emphasized that it is much more difficult for her, and explained that those kinds of decisions are very dependent on the nature of the disability: “Some people will prefer both hands, which requires finger control that some may not have, while others may prefer only one hand.” From that discussion, we realized how important it is to find a sweet spot in our overall mechanism that allows various types of users to participate – for example, using eye-tracking or head movement to give the device a wider range of users – which is something we would love to look into and potentially add to our device.

I think it was a very advanced project but with an intuitive mechanism. Loved how you guys connected it to another audio software.

The overall project experience went much better than expected! A big concern for our team was our varying levels of expertise in software, electronics, and fabrication, but we were all able to be transparent and truly use each other’s strengths to better the project. Though we were remote, because we were continuously communicating with one another, we did not face any problems as a team, and if someone needed help, it was always comfortable to reach out and get advice from the others. The biggest takeaway from this experience was learning all these co-creative platforms that help with working remotely. For example, SketchUp is flexible enough for all of us to work collaboratively on the final design, letting us create this device’s concept and design as a whole team. There isn’t anything from a teamwork standpoint where we struggled, and we would continue to work the way we did if we were to work together again.

Moving on, as a team we appreciated Elaine’s high level of expertise, which pushed our thinking and our overall project. At first this was concerning, because we worried we would not be able to meet her expectations, given her extensive experience developing devices, especially in rehabilitation science. However, once we started working together, Elaine brought so much insightful feedback and new knowledge that it really took our device to another level. One challenge we did have was staying in communication with Elaine: her busy schedule caused us to delay our timelines during the process. In the future, it would be better to schedule a weekly meeting time with our client so that we can keep each other informed, rather than waiting for responses. We also hope that in the future the guidelines and overall goals are aligned with the client from the start, because in the beginning there seemed to be some misunderstanding about the level of project we were meant to make. The problems Elaine initially brought were much more difficult than our skill level, causing us to pivot and be stuck for part of our process. In the end, we really appreciated Elaine’s time and contribution to our collaboration, as we were able to learn from and use the lessons of the whole semester.

Not only did this project help us utilize and learn new skills, it made us more aware of inclusiveness for those with disabilities. All of us were aware before, but not as knowledgeable as we are now about the struggles that people with disabilities face. Hearing about Elaine’s tough experiences with accessibility, it was extremely sad to learn how many other people also face those hardships. It is unfortunate how many designs today neglect the needs of the disabled. This experience helped us truly focus on and understand what it means to work toward a universal design. When designing, especially in the beginning stages of our process, there were times when the mobility issues slipped our minds, causing us to take steps back to fix those issues. These were great learning lessons for us overall. While user-testing, it was interesting to see how designing for inclusion can produce better products for all. For example, in our first round of design user testing, we asked our testers to use the device without telling them the constraints of the primary user. Their critiques turned out to be very similar to what Elaine later told us, reinforcing the idea that inclusive design doesn’t just help those with disabilities – it makes anyone’s experience better.

In the end, our team was able to truly reflect on how the objects and interactions around us influence our ability to participate in a society that should give equal opportunities to everyone. The people who design those interactions are often the ones who decide who can and cannot participate, based on abilities and other factors. As students, we should start to raise awareness of this issue so that future products truly include all abilities. If we were to do this project again, a big thing we would love to do is more research on inclusiveness and accessibility to enhance our device's experience even further, because that knowledge is something we still have gaps in, since we are unable to truly understand the experience of those with disabilities.

Next Steps

Based on the feedback we got from our final demo and from our overall team discussion, there are definitely aspects of our device that we would like to continue iterating on and researching. Due to our remote situation, there are aspects of our device that we were unable to execute, including physically building the device, material research, and user testing. In the future, we plan to make the physical design of the device more flexible based on setting and user, which is something we would revise through physical user testing to catch any gaps that are not visible remotely. In addition, we would love to explore potential materials the product could be made of so that it is comfortable to rest your arm on. Right now, the final rendering looks like plastic, but we would prefer a more comfortable material, or adding cushioned padding onto the design. Another aspect that we have not quite approached is error prevention: understanding what problems users will face while using our device, and how we might create simple solutions they can execute on their own. Lastly, the software can only use one instrument at a time, though it can be changed manually through the digital synthesizer. We would like to look into other software that is more flexible, with the ability to switch musical instruments through our device's components rather than within the music software itself; a rough idea of how that could work is sketched below. In the end, the constraints we faced with this project were genuinely helpful in pushing us to think in new ways. As a team, we agreed that this device is definitely something that we would love to further refine and get to Elaine once the pandemic situation improves!
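To make the instrument-switching idea concrete, here is a minimal sketch of one possible direction, assuming the music software could interpret a MIDI-style program change message (status value 192) sent in the same comma-separated serial format our code below uses. The button on pin 8 and the 300 ms debounce are hypothetical choices for illustration, not part of our actual design.

// Hypothetical sketch: cycle through instruments with a button by
// sending a MIDI-style "program change" (status 192) message in the
// same comma-separated format as the Tone Flex sender code below.
// The button pin and debounce delay are assumptions.
const int INSTRUMENT_BUTTON = 8;  // hypothetical button input
int currentInstrument = 0;        // General MIDI program number, 0-127

void setup() {
  pinMode(INSTRUMENT_BUTTON, INPUT_PULLUP);
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(INSTRUMENT_BUTTON) == LOW) {  // pressed (pullup wiring)
    currentInstrument = (currentInstrument + 1) % 128;
    // 192 is the MIDI "program change" status for channel 1
    Serial.print(192);
    Serial.print(',');
    Serial.println(currentInstrument);
    delay(300);  // crude debounce so one press sends one message
  }
}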

Technical details

For our technical details, we will display our TinkerCAD model, our code (TinkerCAD version and music software version), and our schematic drawings (TinkerCAD version and actual device map). There are still parts of our technical work that we would like to refine further, as there are other ways to solve the problems we are looking at.

Link to the TinkerCAD project: https://www.tinkercad.com/things/5noXteuGj3d

Below, you will see our main TinkerCAD setup to give a better understanding of our breadboard and electronics. Please keep in mind that our TinkerCAD model is just a way to express the overall goal of our device, because the platform cannot completely function the way we want it to: it is missing certain parts and cannot perform certain functions.

This is our main TinkerCAD setup.

Code

/*
* Project: Tone Flex
* By Jina Lee, Connor Mcgaffin, Sruti Srinidhi
* 
* The code below takes an ultrasonic distance sensor 
* and a potentiometer as input which are mapped to 
* appropriate ranges to get the pitch and volume
* of the note being played respectively. The pitch
* and volume are then sent in a MIDI-style format over the
* serial port to the other Arduino which then displays
* the signal it receives on the LCD display.
*
* Pin mapping:
* 
* pin   | mode        | description
* ------|-------------|------------
* 2       input         start button
* 3       input         stop button
* 7       input/output  ultrasonic distance sensor pin  
* A0      input         potentiometer
* A1      input         potentiometer for volume
*      
* Code for the ultrasonic distance sensor was taken and 
* modified from David A. Mellis's project
* http://www.arduino.cc/en/Tutorial/Ping 
*/

/* SENDER */

// Initializing pins
const int VOLUMEPIN = A1;
const int pingPin = 7;

void setup() {
  pinMode(VOLUMEPIN, INPUT);
  Serial.begin(9600);
}

void loop() {
  
  // Get duration between sending and receiving signal for 
  // ultrasonic distance sensor
  long duration;
  pinMode(pingPin, OUTPUT);
  digitalWrite(pingPin, LOW);
  delayMicroseconds(2);
  digitalWrite(pingPin, HIGH);
  delayMicroseconds(5);
  digitalWrite(pingPin, LOW);
  pinMode(pingPin, INPUT);
  duration = pulseIn(pingPin, HIGH);
  // Convert duration to a distance which is used as pitch value
  int pitchval = microsecondsToCentimeters(duration);
  
  // Read potentiometer for volume
  int volumeval = analogRead(VOLUMEPIN);
  
  // Map pitch and volume to appropriate ranges
  // (renamed from "tone" to avoid shadowing Arduino's tone() function;
  // analogRead() tops out at 1023, not 1028)
  int note = map(pitchval, 0, 340, 1, 127);
  int volume = map(volumeval, 0, 1023, 1, 127);
  
  // Send MIDI play message
  MIDImessage(144, note, volume);
  delay(500);
  // Send MIDI pause message
  MIDImessage(128, note, volume);
  delay(500);
}

// Send MIDI signal in appropriate format over serial
void MIDImessage(int command, int MIDInote, int MIDIvelocity) {
  
  Serial.print(command);//send note on or note off command 
  Serial.print(',');
  Serial.print(MIDInote);//send pitch data
  Serial.print(',');
  Serial.println(MIDIvelocity);//send velocity data
}

long microsecondsToCentimeters(long microseconds) {
  // The speed of sound is 340 m/s or 29 microseconds per centimeter.
  // The ping travels out and back, so to find the distance of the
  // object we take half of the distance traveled.
  return microseconds / 29 / 2;
}
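One note on the serial format: the sender above prints its MIDI values as comma-separated decimal text, which is what the receiving Arduino below parses. As a hedged aside, if this sender were ever connected to actual MIDI hardware or a serial-to-MIDI bridge, the same message would instead be written as raw bytes at MIDI's standard 31250 baud; a hypothetical variant might look like this:

// Hypothetical variant of MIDImessage() emitting raw MIDI bytes
// rather than comma-separated text; Serial.begin(31250) would then
// replace Serial.begin(9600) in setup().
void MIDImessageRaw(int command, int MIDInote, int MIDIvelocity) {
  Serial.write(command);       // 144 = note on, 128 = note off (channel 1)
  Serial.write(MIDInote);      // pitch, 0-127
  Serial.write(MIDIvelocity);  // velocity, 0-127
}
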
/*
*Pin mapping:
* 
* pin   | mode   | description
* ------|--------|------------
* 2       output    LCD - DB7
* 3       output    LCD - DB6
* 4       output    LCD - DB5   
* 5       output    LCD - DB4
* 11      output    LCD - E
* 12      output    LCD - RS
*      
* Code to receive serial data completed 
* with the help of Professor Zacharias
*/

/* RECEIVER */

#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

int numReceived;
int secondNumReceived;
int thirdNumReceived;

void setup(){
  Serial.begin(9600);
  lcd.begin(16, 2);
}

void loop(){
  // if data has been received on the serial port
  if (Serial.available()){
    // read it and save into numReceived
    numReceived = Serial.parseInt();
    // read second number and save it into secondNumReceived
    secondNumReceived = Serial.parseInt();
    // read third number and save it into thirdNumReceived
    thirdNumReceived = Serial.parseInt();

    // Write to LCD only when a new message arrives, so the
    // display does not flicker from being cleared every loop
    lcd.clear();
    lcd.setCursor(0,0);
    if (numReceived == 144){
      lcd.print("Play ");
    } else {
      lcd.print("Pause ");
    }
    lcd.setCursor(7,0);
    lcd.print("Vol= ");
    lcd.setCursor(12,0);
    lcd.print(thirdNumReceived);

    lcd.setCursor(0,1);
    lcd.print("Pitch= ");
    lcd.setCursor(7,1);
    lcd.print(secondNumReceived);
  }
}

Schematic and design files

For our schematic drawings, we created two. The first represents the mapping of components that our actual device uses, and the second matches our TinkerCAD setup. We purposely made both versions to verify that our overall interactions make sense and work with one another. These schematics were made while we were designing our device, which we found extremely helpful, as the map let us spot any electronics that would not make sense or would not work. The main difference between the drawings is that the TinkerCAD drawing has an LCD.

Adding on, for our device mockups, we used SketchUp.

Here is the file for our renderings: Design SketchUp Files

This is our main device schematic drawing!

Here is our TinkerCAD schematic drawing.

Remote Water Adjustment Variable Enabler (RWAVE) By Brenda Seal Team 1: Final Documentation https://courses.ideate.cmu.edu/60-223/f2020/work/remote-water-adjustment-variable-enabler-rwave-by-brenda-seal-team-1-final-documentation/ Thu, 17 Dec 2020 20:42:12 +0000 https://courses.ideate.cmu.edu/60-223/f2020/work/?p=12098 Introduction:

Our final project for the class IDEATE: Introduction to Physical Computing is the Remote Water Adjustment Variable Enabler (RWAVE). We made this assistive technology for Brenda Dare. We interviewed Brenda over Zoom about her life and asked what things we could make for her and which daily tasks she has difficulty doing. After the interview, our team decided to make an assistive device that would help Brenda turn on the faucet in her shower without another person helping her.

See here for information about our initial interview with Brenda: https://courses.ideate.cmu.edu/60-223/f2020/work/brenda-seal-team-one/

See here for information about our prototypes: https://courses.ideate.cmu.edu/60-223/f2020/work/brenda-seal-team-1-prototype-documentation/

 

What We Built:

Our project allows a user to turn the faucets in their shower without having to physically turn them. Instead, the user turns the faucet with a remote control that operates the faucet knobs the way a TV remote controls a TV. Our project has motors that control the temperature and pressure knobs in Brenda's shower and that receive information from a remote. The remote sends information to the faucet controller based on the buttons the user presses. For example, if a user presses the button to increase the temperature of the shower water, the motor attached to the temperature knob moves to reflect that, and the temperature of the water increases. The purpose of this device is to allow Brenda to change the temperature and pressure of her bath water independently. Without this assistive technology, Brenda would not be able to bathe independently.

Final Appearance:

The way the electronic would look for the remote control

Shows the final design of the remote controller in Solidworks from the front

Shows the remote controller from a side view in SolidWorks

This video shows how the remote would control the motors to change the temperature and pressure of the bathtub.

 

TinkerCad Interaction: 

Play the following video at full volume to hear about what is happening. The output of this simulated remote is a different-colored light on a NeoPixel strip. We used this simulated output because on TinkerCad it was impossible to send an output signal using an IR light, as would be done if this remote were actually built.

Physical Mockup:

Demonstrates how the remote, which was resized after the prototype, would fit in someone's hand

Demonstrates some of the changes made from the prototype of the remote on the left to the final version on the right

Shows how a user would turn on the remote

Demonstrates what the remote would look like when it is turned on

 

In addition to the remote models, there was also a proof of concept physical model that was made:

This is the physical prototype proof of concept.

This was the only physical element that was created for the motor because the rest of the semester was dedicated predominantly to electronic software design and development of the other parts of the project.

 

 

Narrative Sketch:

After a long day at work dealing with many clients, Brenda decides that she wants to come home and take a bath. Due to Brenda's condition, she requires assistance from a machine to make her way into the bathtub. After being situated on the opposite end of the bathtub from the faucets, she grabs a remote hanging from a string suction-cupped to the wall and presses the power button. A red LED turns green, and a small screen turns on, showing the pressure and temperature both set to zero. She pushes the temperature buttons to set the temperature to a steamy 90°F. Once the temperature is right (determined by knob angle and prior testing for now), she presses the pressure button, turning the water all the way up to 100 to fill the tub for a while. When she is done, she simply turns the pressure back down to 0, then presses the power button to turn off the system.

How We Got Here:

 

So, as a refresher, this first picture shows the first ideation for the remote control that we would be manufacturing to fit our needs. This ideation functions more as the "feels like" aspect, shaping the device to be easiest to use for someone who might not have the same motor capabilities as someone without cerebral palsy. This is the base of where we started. These sketches helped influence the first prototype that was 3D printed for actual touching and holding.

Sketches of possible layouts for the remote

 

This prototype had a lot of great features: clear buttons and easily identifiable sections because of the letters. Because we are trying to make this device something that could potentially be marketed, one consideration after designing and building the second device would have been adding braille to the remote's cover. It would not be difficult to 3D print; however, this would have fallen after the Thanksgiving hiatus from in-person work for those who decided to return home.

Mimi's roommate holding the 3D printed remote while giving feedback about its design. Demonstrates how the remote fits in someone's hand

After all of the revisions, the remote was a little bit longer and a little bit skinnier in order to accommodate people who have shorter thumbs and are unable to reach all the way across the remote to press a button. In hindsight, there were a couple of changes that could also have been made. It could have had a more ergonomic fit to the hand, enabling the easiest means of pressing the buttons. There could also have been buttons on the backside of the remote to make it more space efficient. A lot of these things could have been added if only we had put more thought into how Brenda, or any other person, might end up using the device. It should be mentioned that these could have been accomplished much more easily if we were able to be in person and have Brenda "use" the remote, but alas, here we are. 3D printing would definitely be the way to go in order to make small sets for a small number of individuals; however, if this were going to hit the market, there would probably be better manufacturing methods to produce these more cheaply.

 

The following was derived as a means of creating a circuit that could be turned into a PCB. When we thought about designing our own remote, we thought it would be best to program it ourselves as well. The IR remote provided in the kits was unreliable and difficult to press. It was sometimes hard to tell whether the buttons had been pressed, so we changed the button choice to provide tactile and auditory feedback for the user, in addition to visual feedback, to cover all bases.

Testing voltages on the remote control simulation to figure out where the problem is

 

The next few images are going to be looking at the housing and the electronics that go into the actual device that the remote is going to control. All of the components were decided by the type of circuit that is shown in the TinkerCAD drawing later in the post.

 

This picture shows the EAGLE CAD sketch that is the start of the PCB that could be placed inside of the box to allow for continuous use, similar to a TV, potentially enabling wireless use.

This is the sketch that outlines all of the parts that go onto EAGLE to generate the PCB.

 

The following are the designs that were iterated through. Each one changes slightly from the last. At this stage in the project, the design was not fully fleshed out for manufacturing. The hypothetical trumped the need to physically build, due to the nature of the presentation. If there were more time, or if we were able to go back and do this over, it would have made more sense to build the physical model to help influence the later iterations of the designs. There could also be more consideration of material and piece design in order to manufacture on a large scale, should the product be commercialized.

 

These are the box designs that changed to optimize the space being used. It was rather difficult because it was all hypothetical and there was no physical model.

 

Overall, the plan was followed pretty closely. If anything, the hardest part was communicating with each other about potential ideas and design details. Either we were both busy or the platforms made it difficult to communicate effectively. We were also busy preparing for the final push for finals, so some tasks might have been shortened to compensate for the time. There is more research that can be done for this project, but it is very satisfactory given the situation and the resources available.

Conclusions:

After the final product presentation we got a lot of positive and constructive feedback from many people. One piece of feedback said, "I think the idea is really great and am excited to see how it will turn out when physically built!" This showed us that our idea was really valuable and that our product could be really useful if it were physically built. Another piece of positive feedback we received was, "I appreciate you guys kept iterating and iterating to arrive at the best design you thought of. I think this could be such a useful and practical device for so many groups of people." This piece of feedback also mentions that our idea would be really useful for many different people, and it acknowledges that through our iterations we were able to improve the parts of the project we were each working on.

Feedback from a different person was, "I'd be excited to see how this idea would play out as a physical device! I think the way that you explored the different CAD models to have a better, more universal fit to accommodate for a wider variety of baths shows a lot of great consideration." This comment speaks to how we made our CAD models universal. From this comment and our discussions after the final presentation, it seems it would be very useful if, after making the final product for Brenda, we altered our design to offer different standard sets for different types of bathrooms. We based some of our design considerations on Brenda's shower setup, but there are many different types of bathroom faucets. So in the future it would be nice to make our design more universal, or to have different subsets of our product for different types of bathrooms.

A final piece of feedback we received was, "The device captures the issue that was presented. I would've liked more emphasis placed on materials because waterproofing is so critical." This comment raises the issue of waterproofing our product. We thought about this throughout the design process but did not spend a lot of time on it. We both spent most of our time making sure the product worked electrically and that the other parts of the design were good. It was also harder to address this aspect with everything being virtual, because we were not really building anything physically. In retrospect, we could have devoted more time to making sure everything would be waterproof, because this is an essential part of making sure the product works and is safe to use. Going forward with this project, making sure everything works in a wet environment would be our next concern.

Overall, working remotely this semester for this final project was not as hard as anticipated. It was actually easier to find time to meet and work on this project due to the lack of other time commitments like extracurricular activities. It was harder to communicate effectively and understand other people over virtual platforms like Zoom, and harder to work on the project while not being in the same physical space. In the end we ended up with two somewhat separate projects that coexisted to form a final project, rather than one unified project. On the other hand, this also made the project easier to work on, because we could each work on our own part without needing to schedule time with the other person or wait for them to finish. Lastly, it was disappointing that we couldn't physically make the final project. Making all the plans and simulating the project was the best that could be done, but nothing compares to all the learning done when actually making the product.

We found it very rewarding to work with Brenda and learn about her life. We enjoyed being able to use the skills we gained in this class to work on a project that would allow Brenda to be more independent. Furthermore, from the feedback, it seems like this would be a really useful product for many people and would allow them to live more independently. Hopefully in the future we will be able to physically make this product for Brenda and others who would benefit from it.

Finally, we really enjoyed working on this product. Going forward, as stated above, we would want to make our product more universal so it could be used in bathrooms with many different types of faucets and make more waterproof design considerations. Thank you Professor Zach for all the help on this project and throughout the semester.

Technical Details:

Mimi:

TinkerCad Link

Final electronic setup in TinkerCad

Remote Code:

/*
 * Remote Water Adjustment Variable Enabler (RWAVE)
 * Remote Code
 * Mimi Marino (mmarino)
 *
 * Description: This is the code to control the remote control.
 * There is a button to turn the remote on and buttons that 
 * correspond to increasing and decreasing the temperature 
 * and the pressure respectively. The neopixel strip is there 
 * to simulate the signals that the remote would output 
 * because you can't use an IR light as I would if I were
 * to make this in real life. Note: it is intentional that if 
 * you hold down the button the light stays on.
 * 
 * pin   | mode   | description
 * ------|--------|------------
 * 1      input     button PRESSURE_DOWN  
 * 2      input     button PRESSURE_UP
 * 3      input     button TEMP_DOWN
 * 4      input     button TEMP_UP
 * 5      output    neopixel ring
 * 
 *
 * Credit : Used this tinkercad link to help program the 
 * attiny. 
 * https://www.tinkercad.com/things/d6fJABMd27t-attiny-pwm
*/

#include <Adafruit_NeoPixel.h>
const int PIXEL_PIN = 5;
const int NUMPIXELS = 12;
Adafruit_NeoPixel pixels = Adafruit_NeoPixel(NUMPIXELS, PIXEL_PIN, NEO_GRB + NEO_KHZ800);

const int TEMP_UP= 4;
const int PRESSURE_UP = 2;
const int TEMP_DOWN= 3;
const int PRESSURE_DOWN = 1;

void setup()
{
  //Sets Neo pixel Strip
  pixels.begin();
  pinMode(PIXEL_PIN, OUTPUT);
  pinMode(TEMP_UP, INPUT_PULLUP);
  pinMode(PRESSURE_UP, INPUT_PULLUP);
  pinMode(TEMP_DOWN, INPUT_PULLUP);
  pinMode(PRESSURE_DOWN, INPUT_PULLUP);

}

void loop()
{
  //checks if TEMP_UP is being pressed
  if (!digitalRead(TEMP_UP)){
    //changes color to RED
    for (int i=0; i < NUMPIXELS; i++) {
      pixels.setPixelColor(i,255,0,0);
    }
    pixels.show();
  }
  //checks if TEMP_DOWN is being pressed
  // (else-if chain so the final else truly means "no button pressed";
  // with independent ifs, a held button's color was immediately
  // cleared again by the else branch)
  else if (!digitalRead(TEMP_DOWN)){
    //changes color to BLUE
    for (int i=0; i < NUMPIXELS; i++) {
      pixels.setPixelColor(i,0,0,255);
    }
    pixels.show();
  }
  //checks if PRESSURE_UP is being pressed
  else if (!digitalRead(PRESSURE_UP)){
    //changes color to PURPLE
    for (int i=0; i < NUMPIXELS; i++) {
      pixels.setPixelColor(i,128,0,128);
    }
    pixels.show();
  }
  //checks if PRESSURE_DOWN is being pressed
  else if (!digitalRead(PRESSURE_DOWN)){
    //changes color to GREEN
    for (int i=0; i < NUMPIXELS; i++) {
      pixels.setPixelColor(i,0,255,0);
    }
    pixels.show();
  }
  // if no button is being pressed
  else {
    //delay to make tinkercad work better
    delay(50);
    //turns off neopixel strip
    for (int i=0; i < NUMPIXELS; i++) {
      pixels.setPixelColor(i,0,0,0);
    }
    pixels.show();
  }
}
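Since TinkerCad cannot simulate an IR transmitter, the NeoPixel strip above stands in for the signal the remote would send. For a physical build, one possible approach (a sketch under assumptions, not a tested implementation) would be to use the IRremote library's sender to transmit the same codes that the motor Arduino's setVal() function checks for below; the NEC protocol choice and the 200 ms repeat delay are assumptions.

// Hypothetical physical-build sender: IRremote's IRsend replaces the
// NeoPixel stand-in. On an Uno, IRsend transmits on pin 3 by default.
// The codes match the ones checked in setVal() in the motor code below.
#include <IRremote.h>

const int PRESSURE_UP = 2;
const int PRESSURE_DOWN = 1;

IRsend irsend;

void setup() {
  pinMode(PRESSURE_UP, INPUT_PULLUP);
  pinMode(PRESSURE_DOWN, INPUT_PULLUP);
}

void loop() {
  if (!digitalRead(PRESSURE_UP)) {
    irsend.sendNEC(24735, 32);  // code the motor code maps to pres++
    delay(200);                 // assumed repeat rate while held
  }
  if (!digitalRead(PRESSURE_DOWN)) {
    irsend.sendNEC(8415, 32);   // code the motor code maps to pres--
    delay(200);
  }
}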

Remote Schematic:

Schematic for the remote

Remote CAD Files:

Remote Design Files

(Includes STL file, SolidWorks part file and SolidWorks drawing file)

Carl:

Basic proof-of-concept design for the mechanism meant to show the remote adjusting the motor and changing the LCD.

 

TinkerCAD Link 2

Motor Code:

// Libraries
#include <LiquidCrystal.h>
#include <IRremote.h>
#include <Servo.h>

// Servo Pins
#define PRESSURE 6

// LCD Pins
#define regsel 13
#define enable 12
#define d7p 11
#define d6p 10
#define d5p 9
#define d4p 8

// IR Pins
#define iroutput 7

// LED Pins
#define offLED 5			// Red
#define onLED 4				// Green

IRrecv irrecv(iroutput);
decode_results results;
LiquidCrystal lcd(regsel,enable,d4p,d5p,d6p,d7p);
//Servo tempServo;
Servo presServo;

// Variables
//unsigned int temp = 0;
unsigned int pres = 0;
bool start = 1;


void setup(){
  Serial.begin(9600);
  irrecv.enableIRIn();
  pinMode(offLED,OUTPUT);
  pinMode(onLED,OUTPUT);
  while (start == 1){
    digitalWrite(offLED,HIGH);
    digitalWrite(onLED,LOW);
    if(irrecv.decode(&results)){
      unsigned int active = results.value;
      irrecv.resume();   // ready the receiver for the next code,
                         // so a non-power button doesn't lock us up
      if(active == 255){
        start = 0;
        //Serial.println("HERE.");
        break;
      }
    }
  }
  lcd.begin(16,2);
  presServo.attach(PRESSURE);
  //tempServo.attach(5);
}

void loop(){
  digitalWrite(offLED,LOW);
  digitalWrite(onLED,HIGH);
  if(irrecv.decode(&results)){
    //Serial.println(results.value,HEX);
    unsigned int button = results.value;
    //Serial.println(button);
    setVal(button);
    int newPres = map(pres,0,100,0,180);
    presServo.write(newPres);
    displayPres();
    irrecv.resume();
  }
  onOffStatus(start);
}

// Changes the values based on pressed button
void setVal(unsigned int buttonPress){
  switch (buttonPress) {
    case 24735:
      if (pres < 100) pres++;   // clamp so map(pres,0,100,...) stays in range
      break;
    case 8415:
      if (pres > 0) pres--;     // pres is unsigned; guard against wraparound
      break;
    /*case 255:
      start = 1;
      break;*/
  }
}

void displayPres(){
  lcd.setCursor(0,0);
  lcd.print((String)"Pres:"+pres);
}

void onOffStatus(bool off){  // renamed from "temp" to avoid confusion with temperature
  //Serial.println(off);
  if(off != 0){
    lcd.noDisplay();
  } else {
    lcd.display();
  }
}

 

Circuit Lab schematic of the Motor assembly using Servo Motors.

Brenda Seal Team: 1 – Prototype Documentation https://courses.ideate.cmu.edu/60-223/f2020/work/brenda-seal-team-1-prototype-documentation/ Tue, 24 Nov 2020 14:17:44 +0000 https://courses.ideate.cmu.edu/60-223/f2020/work/?p=11925 Introduction:

The purpose of this post is to document the different prototypes associated with the AutoBath. The AutoBath will allow Brenda to independently change the temperature and pressure of her bathtub using a remote. See previous documentation here: https://courses.ideate.cmu.edu/60-223/f2020/work/brenda-seal-team-one/

Mimi:

This prototype was trying to answer the question: how should Brenda control and change the temperature of the bath?

The final prototype for controls was a remote control that was 3D printed. The remote was modeled in SolidWorks. It includes an up and down button for temperature and pressure as well as a power button and an LED to communicate that the remote is on.

Mimi's roommate holding the 3D printed remote while giving feedback about its design. Demonstrates how the remote fits in someone's hand.

Shows the 3D printed remote is 6mm by 5 mm.

The top right circle represents the power button. The top left circle represents an LED that would be on when the remote is powered on.

Demonstrates how to increase the temperature of the water.

Process Images:

Sketches of possible layouts for the remote

A model of the remote with an LCD. After deciding to make the remote with an IR sensor, we realized we didn’t want an LCD on the remote.

Screenshot of the remote control modeled in CAD before being 3D printed.

Thoughts:

The feedback on this prototype revolved around the ergonomics and usability of the remote. For the physically proximate prototype presentation, I showed the 3D printed remote to my roommate. Her feedback was that the remote was too square, making it hard to hold comfortably. She also said it would be nice to have some sort of feedback about how far you are turning the temperature and pressure knobs, like you would have if you were physically turning them. Lastly, she recommended the buttons be spaced out a little more to prevent hitting the wrong button.

Brenda's (our client's) feedback was similar. She was not opposed to making the remote easier to hold, though it was hard for her to tell because she could not hold the prototype herself. She liked the tactile feel of the buttons and suggested making it more prominent, because it is easier for her to feel which button she needs to hit than to see it. Brenda also suggested I think about the storage of the remote. She said it would be nice if it could be attached to the wall or the side of the tub.

I will be attempting to implement all of this feedback into the remote design going forward. The only surprising thing during the prototyping experience was how hard it was to reconcile the dimensions in SolidWorks with real life. It was hard to tell how big the remote would turn out and feel when just looking at a computer screen. That made the 3D printed model more useful than I initially thought.

Carl:

The question I was trying to answer was the physical demonstration of pushing buttons that would then translate into moving a motor. The prototype itself was made up of a simple display screen, a motor that takes an angle as input, and a small sensor that can read infrared signals. The user presses a button and the motor turns counterclockwise by some degree, adjustable from 0 to 180 degrees. "Pres" refers to "Pressure"; as Brenda describes, the temperature hardly changes, and it is the turning of the faucet on and off that is truly important. While both are important for the safety of the user, this was only a proof of concept.

Shows size of remote in hand.

Shows the size of each component relative to each other. This is important because it shows the capability to design a small PCB that would be much easier for the future case for the device.

A big design choice was to go with the 20x4 I2C LCD versus the 16x2 LCD because of its size and capabilities, and because it would help should alternate languages need to be supported. This choice simply made the most sense for the direction of the project.

 

 

This is the TinkerCAD version of the prototype.

 

This was a rough idea for the design. It was just going to be sleek and compact. After the creation of the device, it makes more sense to use a PCB and then keep everything closed off.

 

This is the finished look. This picture differs because it is more focused on the final design rather than the remote being placed in the hand.

 

Thoughts:

My biggest takeaway from this was the difficulty of designing the enclosure. Once the prototype had begun working, it was challenging to think of a next step. Several components of the design were contingent on being in the space, taking proper measurements, finding the structure Brenda's house was built on, and so on. That being said, this prototype was a success for what it was meant to do: demonstrate the simple capability of turning a motor when there is an input from a remote control.

The feedback that I received and looked into further was about simplifying everything down, which touches on the design element of things. Where is the LCD going to be held? How is the sensor going to be waterproofed? These were all questions I had yet to think of. The feedback that was less insightful, and thus not taken, was the critique of the remote itself. This was the section I was not going to be working on because of the work Mimi had already done, so it was not useful for me to consider it; it was a matter of trusting that my partner would get it done, and we could address that situation at a completely separate time.

Moving Forward:

Overall, the prototyping part of this project went well for our team. Prototyping remotely was challenging at times, but seeing pictures from Brenda was extremely helpful. We were able to get a working prototype of the mechanism that will move the temperature and pressure knobs and a working design of the remote that will control those functions.

For the remote prototype, the next steps will start with a redesign of the remote in SolidWorks. The feedback from the in-person and client critiques will be implemented to make the remote easier to hold and more tactile. Furthermore, some thought and research will go into creating a good way to store the remote. Also, the electronic components of the remote will be modeled in TinkerCad as if the remote were going to be physically made.

As for the motor mechanics, future steps go into establishing a means of integrating the motor system into the physical bathroom. Because of the need for a more powerful motor, the final product would likely need to feature a stepper motor rather than a servo motor. Using the stepper would then require a specific driver, needing a 12V power supply. This power supply is typically plugged into the wall, but that would make it a little more dangerous in a bathroom. Integrating a battery or a series of batteries would allow this to work out a little better and reduce the number of moving parts. A rough sketch of the stepper direction appears below. After a working model is created, the next step would be to optimize the entire system to allow for universal usage with potentially all faucet types. This would enable greater marketability for the product as a whole, helping impact more people's lives.
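As a minimal sketch of that stepper idea, here is one way it might look with the standard Arduino Stepper library and a common 200-step (1.8 degree) motor; the driver pins, speed, and degrees-to-steps conversion are illustrative assumptions, not final choices.

// Minimal sketch of driving a faucet knob with a stepper via the
// standard Arduino Stepper library. Pins, step count, and speed are
// assumptions for illustration.
#include <Stepper.h>

const int STEPS_PER_REV = 200;                     // typical 1.8-degree stepper
Stepper knobStepper(STEPS_PER_REV, 8, 9, 10, 11);  // hypothetical driver pins

void setup() {
  knobStepper.setSpeed(30);  // RPM; slow enough for a faucet knob
}

// Turn the knob by a signed number of degrees
void turnKnob(int degrees) {
  long steps = (long)degrees * STEPS_PER_REV / 360;
  knobStepper.step(steps);   // blocks until the move completes
}

void loop() {
  turnKnob(90);    // quarter turn open
  delay(2000);
  turnKnob(-90);   // quarter turn closed
  delay(2000);
}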

In the future, it would be nice to jump from the idea phase to the actual making phase earlier. We had our project idea pretty early, but it took us some time to make physical, tangible progress on it. Once we started actually prototyping, we ran into problems that we had to fix. For example, the remote prototype was originally going to be a radio module prototype, but we realized it would be much easier to make an IR remote. If we had started physical prototyping earlier, we would have discovered and fixed these problems earlier, leaving more time to work on our final prototypes.

 

https://docs.google.com/spreadsheets/d/1EqbDtPq6eH5k-xbRc-XUx_uFbc1yXL8K7nxA7on8S0o/edit?usp=sharing

Team Amy Prototype Documentation https://courses.ideate.cmu.edu/60-223/f2020/work/team-amy-prototype-documentation/ Tue, 24 Nov 2020 14:03:18 +0000 https://courses.ideate.cmu.edu/60-223/f2020/work/?p=11907 Project Recap and Prototype Introduction

For the final class project our group, Team Amy, is working towards creating an art enabling device to help our client, Amy, draw on a canvas with greater ease and independence. After our initial interview with Amy – which you can read about here – we discovered this project opportunity revolving around art when she mentioned how uncomfortable her current method of painting or drawing is due to her needing to employ the use of a mouthstick. We came up with a few concept iteration sketches to explore how we may enable Amy to make art more comfortably and then had a subsequent meeting with her to discuss which ideas she believed would be the most useful to her and how we could optimize that concept to suit her level of mobility.

Based on this, our group decided to work towards an assistive device that allows you to draw through the use of a mechanical rig and arm-mounted controller device. When Amy moves her arm, the controller reads the changing position and then sends the position data to the rig, which then moves the position of the drawing utensil accordingly. 

This post works as documentation for our prototype milestone, where we worked towards answering some key questions that we had regarding this device's functionality and overall user interactions. As each of our team members has a fairly different skill set, we decided to each answer a different question to cover more ground, as this is a fairly ambitious project concept, and to play to our strengths. Dani, an industrial designer, decided to explore the controller's ergonomics, as the controller would be the user's primary interface with the device. Evan, an electrical-computer engineer, worked on the driving code that would provide the interface between the controller and the mechanical rig. Finally, Daniel, a mechanical engineer, explored how the mechanical rig itself would function to allow drawing to occur.

The Prototypes

Prototype 1 – Controller Interaction

This prototype was meant to answer the question of: How might we create an arm-mounted controller that is simple to put on, secure, and comfortable to wear for a long period of time? 

This particular prototype focused on the interactions that Amy would have to contend with to operate the drawing enabler device. This was done by creating an idealized "looks like" and "feels like" model of the controller out of lightweight, high-density foam and elastic, along with a model of the mount the controller would rest upon when the device is not in use. I wanted to explore what proportions would be comfortable to use for an extended period of time, what form factors would make the direction to hold the controller intuitive, and the ways in which Amy could put on this arm-mounted controller without employing the help of others.

Sketches of the prototype concept to think through the different components.

The arm-held controller in its controller mount

Controller mount by itself – has space to put your hand into and cut-outs to hold the controller in place

Controller by itself. The armband is a piece of soft elastic to make it comfortable and lightweight

“Wizard-of-OZing” the controller interaction

As demonstrated above, when the user is wearing the controller, their arm movements would dictate the position of the rig, which would then drag the drawing utensil across the canvas allowing the user to draw.

I tested out this device's interactions by presenting the controller resting in its mount to friends and asking them "how would you put on this controller by only using one hand?" and then seeing what they did. Through this process, I discovered that while people understood the overall interaction I was hoping they would (where they stick their hand into the controller's band and then lift up to release it from the mount), the interaction was a bit awkward, as the holder was too narrow to comfortably allow all hand sizes to lift out.

A prototype test leading to an awkward interaction

Notes from prototype session

Through the creation of this prototype, I learned that while people understood the overall interactions they would have with the device, some of the particular dimensions were a bit awkward and hard to transition into. Furthermore, through the testing, I learned that while the controller itself had a comfortable design that people noted to be suitable for its purpose, the action of holding your hand out for so long without any support may grow to be tiring very quickly – especially for Amy who is still working on building up her arm muscle strength. This is a design flaw that we, as a team, later discussed and are currently working on reconsidering (more on this particular note, however, will be in the next steps section of this documentation).

Furthermore, this current prototype works under the assumption that we can achieve a similar controller-to-rig communication as a Wii remote by employing two MEMS accelerometers and an infrared sensor for position triangulation, but this will most likely not be the case. We are currently working through alternative sensor options to make this controller-to-rig communication more accurate, which in turn implies a redesign of the general controller interactions based on the inputs the new sensors would require. Overall, this prototype did work towards answering the question "How might we create an arm-mounted controller that is simple to put on, secure, and comfortable to wear for a long period of time?", as it showed that people understood how to put on the controller and then use it. However, there are still further edits to be made to the design to ensure proper functionality and a comfortable, non-tiring experience for arm-controlled interactions.

Prototype 2 – Mechanical Brush Holder

Different from the previous prototype, this prototype was meant to answer the question of: How can we give free 3D-movement on the brush by using motors?

This particular prototype focused on the physical movement of the brush that Amy would like to control. The basic idea was borrowed from the structural model of a 3D printer. Since there were limitations on the materials available for this prototype, I used cardboard to demonstrate how the structural parts will move in order to achieve free movement in the x, y, and z directions.

General Overview of Mechanical Rig Prototype

The threaded bar section of the prototype. (This will control the y-axis movement of the machine)

Since the general picture above may be hard to understand, here is a part I may use for this section of the prototype.

Threaded Rod From google

The Servo motor will control the z-direction of paint brush motion.

Basic Idea of how the machine will work. (Since the prototype is made with cardboard, it breaks easily.)

Ideation of the prototype and detailed planning for each mechanical section

Ideation of how to translate the prototype concept into actual use. Brainstormed some limitations that might come up while building this.

Failed prototype for the motor 1 mechanical section. (Tried to demonstrate the threaded rod by twisting the cardboard, but it failed because the cardboard did not have enough strength to hold that twisted shape. If you look carefully, you can see the twisted shape in the cardboard.)

So instead of just making a threaded rod, I brought in an image from Google to give a better understanding of how my parts would look.

During this prototyping process, I was surprised that a free x, y, z movement rig requires a lot of custom parts. At first, I thought it would be easy, since there are many examples like 3D printers or robot arms that can move freely around 3D space. However, after deciding to draw and plan the prototype for this mechanical movement rig, I realized that most sections require specific parts, such as a chain and threaded rod, to transmit rotational motion around the rig. As a result, I was not able to get most of my parts before prototyping, since most of the parts need to be precisely custom made based on the planned model.

Also, during the presentation, I got feedback about the z-direction movement of the brush for this prototype. In the prototype, I used a servo motor to give a slight touch of the brush on the canvas. However, after the meeting with Amy, she told me that she needs to control the brushstroke by changing the pushing force of the brush. As a result, I might need a slight change to the idea for z-direction movement; one possible direction is sketched below.
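A hedged sketch of that revision, assuming a potentiometer-style slider as the pressure input mapped to a continuous servo angle; the pins and the 0-60 degree range are illustrative assumptions, not a tested design.

// Sketch: instead of the servo only tapping the brush on and off,
// a slider (potentiometer) value is mapped to a continuous servo
// angle so the pushing force of the brush can vary.
#include <Servo.h>

const int SLIDER_PIN = A0;  // hypothetical pressure slider
Servo brushServo;

void setup() {
  brushServo.attach(9);     // hypothetical servo pin
}

void loop() {
  int sliderVal = analogRead(SLIDER_PIN);      // 0-1023
  int angle = map(sliderVal, 0, 1023, 0, 60);  // 0 = lifted, 60 = full press
  brushServo.write(angle);
  delay(20);                                   // modest update rate
}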

Prototype 3 – Driver Code/Smoothing

The goal of this prototype was to test some code to smooth the movement output by the Arduino regardless of the input. That is, it was meant to ask the question: "how do we account for errors in measurement and extra unintended motion to allow for a smooth stylus experience, even without fine motor coordination for the task?"

This prototype focused on the functionality side of the project and is rather simple to boot. It is not intended as a prototype for anything but the software, hence the fully simulated input and output. It is merely a joystick wired to an Arduino. From there, the Arduino sends a message to Unity, which is then parsed and turned into a position; a sketch of the kind of smoothing involved appears below. The final version would instead route the output to electrical signals that control the motors.
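The post doesn't include the smoothing code itself; a common minimal option, shown here as a sketch rather than the actual implementation, is an exponential moving average over the joystick readings before sending them to Unity. The pins and smoothing factor are assumptions.

// Sketch: exponential-moving-average smoothing of joystick input
// before sending it over serial for the Unity side to parse.
const int X_PIN = A0;      // hypothetical joystick axes
const int Y_PIN = A1;
const float ALPHA = 0.2;   // 0..1; smaller = smoother but laggier

float smoothX = 512, smoothY = 512;  // start at the joystick's center

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Blend each new reading into the running average
  smoothX += ALPHA * (analogRead(X_PIN) - smoothX);
  smoothY += ALPHA * (analogRead(Y_PIN) - smoothY);

  // Send as comma-separated text for Unity to parse
  Serial.print((int)smoothX);
  Serial.print(',');
  Serial.println((int)smoothY);
  delay(20);
}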

The full prototype wiring setup – nothing very fancy.

 

Video demonstrating prototype in action. The olive green dot is the cursor (I couldn’t get the color to improve sadly, as I think I was using the wrong render settings).

The thumbstick component up close

My findings in prototyping were that the smoothing had a mild effect on the motion, but otherwise wasn't super noticeable. In general, the prototype failed to answer many useful questions, as it only simulated the input and output, and poorly at that. Without the finalized input, it's effectively impossible to expect any trend in the motion; there might be noise that's unique to the control apparatus. Additionally, the output, which is a mechanical rig that receives the input after smoothing, could have its own share of mechanical difficulties that can't be easily accounted for, like perhaps a maximum safe movement speed.

Other than the rather exciting revelation that Unity can talk to Arduino, I suppose I was a little surprised to learn just how little you can learn about the utility of control software without accurate, or even mildly approximate, inputs. In simulating, it was found that there isn't a noticeable difference between the smoothed and unsmoothed motion, and that could be for a whole variety of reasons that could vanish once the software sits between hardware components operating in a real space.

Without seeing how something like a brush or a pencil is controlled by the rig and software, it's a hard call to say whether the prototype's findings could be useful.

MOVING FORWARD

Our team learned a lot about what we want to make through the prototyping process, both through answering our initial questions and raising new ones. After the prototype presentations, we were able to present where we were to Amy and discuss her questions and concerns. Based on this conversation, we realized that there are many nuances to the painting or drawing experience that we had not yet considered, such as: how do you switch drawing utensils easily? Can you adjust the pressure of the utensil? Is there a way for the user to change the scale of their movement relative to the output of the rig? With these questions in mind, we decided to simplify where needed (perhaps a pivot towards a "drawing machine" rather than a "painting machine," as that requires less changing of paint colors and wetting of the brush) and potentially add a switch box device which lets the user control the scale of their movement and the pressure of their utensil (low, medium, or high). We are currently working towards finding the proper level of ambition for these added components and what we should consider out of our project scope.

We also experienced some of the challenges that can come with remote collaboration – it can be hard to coordinate schedules between the team and with our client so meetings to discuss progress and work have been a bit difficult to set up. Nevertheless, working through the process of these prototypes has taught us to make sure to plan our meetings well enough that we can hit all of our action items fairly succinctly during the meetings once we are able to find some overlapping time. The creation of our final work plan has also really helped in establishing deadlines and overall team alignment on the project schedule.

Overall, our prototypes have helped us to better understand aspects of our project that we had not even considered and establish new questions that we are working towards answering. With this, we can begin to work more concretely towards our final project and are excited to see how it all turns out when the different aspects of it all come together!

Team Elaine – Prototype Documentation https://courses.ideate.cmu.edu/60-223/f2020/work/team-elaine-prototype-documentation/ Tue, 24 Nov 2020 12:54:33 +0000 https://courses.ideate.cmu.edu/60-223/f2020/work/?p=11904 Introduction:

After interviewing with Elaine, we were able to start the prototyping development to help us better understand our initial device concept. You can see our documentation about our interview process to learn more! We were able to truly connect and better understand a day in Elaine’s life to try to create a device that would be of convenience. Upon thinking and discussing as a team we were able to come to a final prototype concept with a direct problem statement.

Our client enjoys music, which is evident through her theme songs, consisting of Mountain Lions by Lonestar and many other classic songs! However, Elaine has difficulty with devices that require grip force and strength, so there are musical instruments that are difficult for her to use. Elaine lives with Larsen syndrome, Adams-Oliver syndrome, and Ehlers-Danlos syndrome. These conditions prevent her from fully extending her right arm (only 90 degrees). In addition, she is missing the deltoid muscles in both shoulders, and her joints are loose and unstable due to the lack of collagen her body can properly make. Lastly, her left hand only has the first bone of the thumb, and her right hand is missing most of the thumb muscles. She emphasized that there are times when she uses her mouth in place of her hands. We decided to work with Elaine to build a musical device that would cater to her needs by having flexible inputs while ergonomically focusing on her personalization.

This documentation is meant to help us summarize and reflect on our prototyping process and journey, to find our next steps and any other problems we found along the way! Because we come from a wide range of backgrounds, we decided to split the explorations based on our strengths.

Prototypes:

The three paths we decided to take for this exploration were ergonomics, design, and software. The goal of looking into the ergonomics of the potential prototype was to find solutions that make interactions more comfortable and increase productivity, which would be applied later in the design. For the design, we used this experience to get a better understanding of the best design possibilities. Lastly, with software, we wanted to double check that the electromechanics we imagined in our design prototype would be possible.

  • Ergonomic: What is the most comfortable range of motion to build for? How can our physical artifact be best tailored to meet Elaine’s specific needs?
  • Design: Does a user understand how to use this without being given verbal instructions? Does our design help with the constraints in mind or make the experience more difficult?
  • Software: How well does the MIDI interface work with the Arduino and does the sound it creates resemble a musical instrument enough?

E R G O N O M I C S

Simply put, form needs to follow function. We knew that the instrument should be tailored to Elaine’s unique abilities. The interactions which manipulate sound should not only be engaging, but also comfortable to use. If done right, there will be very little in the way between Elaine and the music. Elaine is able to extend her right arm to 90 degrees, while her left arm has a full range of extension. However, she only has a full hand on her left arm, while her right hand only has the first joint of the thumb. Explorations needed to be done to better define these needs. Which directions of motion are most natural? Are there positions which can be sustained longer than others? What touch points best satisfy comfortable movement while maintaining the magic in sound manipulation?

INITIAL PROTOTYPE DEVELOPMENT

Initial arrangement of different interactions with household objects: Magic Mouse, coat hanger "theremin", and shot glass roller ball

The choice of this table was intentional because its spacious overhang makes it practical for wheelchair use.

A very pleasant discovery was how snugly a ping-pong ball rolls in a shot glass, used in place of Elaine's favored ball mouse.

IMAGES OF PROTOTYPE TESTING

In a prototype shift, I zoomed back from the interactions to better understand Elaine’s conditions untethered from specific context. To do so, I made this wearable out of rope and an exercise band to restrict my movement.

Profile of the band at its fullest extension with a resting arm

I found that the farthest extension that is not fought by resistance is about 70 degrees in, understandably short of the maximum.

CLIP OF PROTOTYPE TESTING

Here is some general arm movement to play with the vertical and horizontal resistance in the device. Horizontal movement was the easiest.

CONCLUSION

The prototyping process was enlightening. Having started with specific touch points in mind, I enjoyed finding lo-fi ways to a diverse set of interactions. I believe this initial exploration presented some great moments for decision making, even down to where I was going to set up these interactions. Of the three touch points, the theremin predictably held up as the least demanding interaction. However, I ultimately felt I was getting ahead of myself. These prototypes set the stage for questions, but the experiments were really only for someone with my own abilities, which defeats a lot of the purpose of the project.

Instead, I zoomed out. Building the simple harness to limit my range of motion brought me closer to Elaine's lived experience. Just experimenting with the exercise band challenged a lot of my assumptions. Sure, Elaine can extend her right arm to 90 degrees, but the band made it apparent that there is no need to max this out; the edge of any range of motion is usually not the most comfortable. Additionally, the horizontal range of motion became super apparent as something to leverage. In playing with the exercise band, my arm pivoted left to right with ease, in contrast to the resistance in its vertical movement.

Elaine's feedback built off of these insights. She noted that gravity is not on her side, given the extra weight it puts on her scarce tissue. Once her arm is on a certain plane, it is easiest to keep it there, rather than dip up and down repeatedly as I did. Another blind spot in my experiments is in the horizontal movement: Elaine notes that it is easier to move her right arm left to right, but there is some strain when reaching directly in front of her. Moving forward, these insights point to this instrument potentially being in two pieces, to customize comfort and minimize hard-to-reach interactions.

D E S I G N

This prototype was designed to help figure out the design options that cater to the most efficient and best experience for Elaine. The questions in mind were: Does a user understand how to use this without being given verbal instructions? Does our design help with the constraints in mind, or does it make the experience more difficult? These questions were based on prototype concerns that we were unable to resolve when brainstorming through sketches. This was done by creating simple prototypes out of cardboard and paper while using the Wizard of Oz method. Before starting all of that, I wanted to evaluate the main goals and concerns the prototypes were meant to address. I did so by writing everything down and then deciding on the next steps of the process. Personally, regardless of whether what I write down is helpful and accurate, I prefer to write and think out my thoughts and possibilities to see them all, and then move on to the visual iterations.

Here are the notes that I took in order to start the design iterations.

INITIAL PROTOTYPE SKETCHES

These prototypes focused on the interactions that Elaine would have using touch and sensing as inputs. While considering her settings, we explored ways to make the potential device flexible. Both iterations have two inputs: volume and tone.

With the first iteration concept, I explored the idea of using electrical tape or conductive paint as the touch-input navigation. Making the pattern of the shapes intuitive, while letting users know that there are changes as they move through them, is something I continue to think about and iterate on. The reason for exploring this touch method was Elaine's emphasis that she prefers this device to use non-force touch or sensors. We discussed buttons, but she mentioned that they are more difficult for her to use. One hypothetical way such pads could be read is sketched below.
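This is a sketch under assumptions, not our implementation: conductive tape or paint pads are commonly read with the CapacitiveSensor Arduino library, which measures the charge time between a send pin and a receive pin wired to the pad through a high-value resistor. The pin numbers, sample count, and threshold here are illustrative.

// Hypothetical reading of a conductive-tape/paint touch pad with the
// CapacitiveSensor library; a touch would stand in for a volume or
// tone change.
#include <CapacitiveSensor.h>

// send pin 4, receive pin 2 connected to the pad via a high-value resistor
CapacitiveSensor touchPad = CapacitiveSensor(4, 2);
const long TOUCH_THRESHOLD = 1000;  // tune experimentally

void setup() {
  Serial.begin(9600);
}

void loop() {
  long reading = touchPad.capacitiveSensor(30);  // 30 samples per read
  if (reading > TOUCH_THRESHOLD) {
    Serial.println("pad touched");  // placeholder for a volume/tone action
  }
  delay(50);
}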

I tried creating various patterns to iterate on the touch base prototype iteration.

The second device iteration focuses on using sensors as the input for volume and tone. The left side manipulates the volume while the right side controls the tone of the instrument. The design initially assumed hand use only, but knowing that Elaine’s feet are usable, I looked into how to proportion the device to accommodate both. I also considered whether we want this device to stand upright or lie flat on a platform.

Here are the second concept prototype iteration sketches.

INITIAL PROTOTYPE DEVELOPMENT

From the sketches, I was able to prototype the final iterations with cardboard, paper, and a black marker. The first iteration focuses on the use of touch to manipulate the inputs. The cardboard is the speaker element, and the black lines and shapes represent the touch buttons.

This is the mapping of the components of the first iteration prototype.

This gif shows the record on and off button.

Here is the user interacting with the volume.

This gif shows the interaction of the tone input of the prototype.

The second iteration focuses on the sensor input. It is foldable so that Elaine can flexibly carry and use the device; one of her wants was for the device to have multiple inputs, allowing her to use it anywhere with most of her devices. The left cutout is used to manipulate the volume while the right cutout changes the tones of the instrument sounds.

Here is the design laid flat. The creases allow the device to fold flat so that Elaine can comfortably move around.

Here is the prototype standing up!

PROTOTYPE TESTING

The user tester I asked was a former student of the class who lives near me. It was helpful to work with her and get her feedback because she comes from a computer science background and has experienced this type of prototyping process before. Her feedback was extremely insightful and got me thinking more deeply about my iterations; she also caught many obvious issues I had missed regarding our client’s constraints. Before starting, the tester was informed about Elaine’s constraints so that she would be more aware and could try to interact with the devices from Elaine’s perspective. The testing had two parts, designed to help answer our questions. In the first part, the user interacted with the device on her own, without any information; in the second, I gave more context on how to use it. I structured it this way on purpose, so I could see whether the device was intuitive and whether any new assumptions surfaced.

As demonstrated below, the user is able to rest the first type of device on the arms of their chair. This was purposeful, in consideration of Elaine’s wheelchair.

This is the user using the device with my rollable chair.

Here is the user trying the second device.

Here is the user trying the device from a bird’s-eye view.

While user testing, I took notes while also asking the tester to draw and write down their suggestions. On the left are my notes; on the right is the collaborative feedback from the user tester.

Here are the notes from the user testing.

CONCLUSION

The process was an extremely helpful learning experience, as I had never worked with physical constraints in my designs. In addition, coming from a communication design background, I do not know much about product development or the most effective way to approach it. I appreciated the challenges I faced: not being able to physically work with the team and Elaine made me rely on my own assumptions, whether for the visual or the mechanical interactions.

The feedback from the user testing was that the design was not as intuitive as hoped, especially the touch-based design. I had not thought about how the touch components sit on the side of the chair, making it difficult for users to see where they are touching; they suggested moving the touch-based element onto the armrest. On the other hand, they thought the sensor design was much more effective given the constraints.

After the presentation, we were able to talk with Elaine. It was extremely helpful to talk with her and get her input. She said that the touch-based element is okay to have as an input method, but a sensor is better for her. Her left hand works better for manipulating volume, as that does not require much finger movement. She also enjoyed how the first design iteration was split into two inputs, letting her play around more with the instrument manipulation. She suggested that it is better to have the device on her armrests, but emphasized not having the sensors go to the midline, as she does not have the flexibility for that extension. Having a plane on the armrest is the best option for her, so she does not have to move her shoulders and arm too much, while giving her arm a platform to use as a pivot.

Based on the feedback I received, I have a better understanding of how to move forward with our final device. I plan to incorporate Elaine’s feedback, the user tester’s feedback, and the ergonomic exploration and findings from Connor to create the most effective experience.

 

S O F T W A R E 

One of the doubts we had as a team was whether MIDI (Musical Instrument Digital Interface) would work with the Arduino. Since it is not built-in software, we were concerned about whether the connection would work and, if so, how easy and straightforward it would be. Another concern was whether the MIDI synthesizer would produce a sound that resembles a musical instrument closely enough, or whether we needed to find a new approach. To answer these questions, we set up the entire MIDI connection and downloaded the appropriate software and synthesizer. To mimic an analog input, we connected a potentiometer to the Arduino; by varying the potentiometer’s value, we got different pitches to play through the synthesizer.
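For reference, the heart of this test fits in a short sketch. This is a minimal version under stated assumptions – the potentiometer wiper on A0, raw MIDI bytes written to the serial port, and a serial-to-MIDI bridge on the computer side feeding the synthesizer; our exact wiring and software setup may have differed.

const int POT_PIN = A0;   // potentiometer wiper (assumed pin)
int lastNote = -1;        // last MIDI note sent; -1 = none yet

void setup() {
  Serial.begin(31250);    // standard MIDI baud rate
}

void loop() {
  int reading = analogRead(POT_PIN);          // 0..1023
  int note = map(reading, 0, 1023, 36, 84);   // spread the knob across C2..C6
  if (note != lastNote) {
    if (lastNote >= 0) sendMidi(0x80, lastNote, 0);  // Note Off for old note
    sendMidi(0x90, note, 100);                       // Note On, velocity 100
    lastNote = note;
  }
  delay(10);
}

// Send a 3-byte MIDI channel message: status, data1, data2
void sendMidi(byte status, byte data1, byte data2) {
  Serial.write(status);
  Serial.write(data1);
  Serial.write(data2);
}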

Images of the prototype

Overall prototype

Software – music synthesizer

Hardware – potentiometer to simulate analog input

Video of the prototype in action

Images of in-person prototype testing

My sister testing the prototype

Getting feedback and suggestions on the prototype

Notes taken from the testing

CONCLUSION

From my in-person prototype demo, I got very constructive and helpful feedback. I showed the project to my sister, and she seemed quite impressed by the quality of sound produced by simply rotating a small knob on the potentiometer. Overall, it answered my question for this prototype: yes, the MIDI connection works fine with the Arduino, and the sound produced is very similar to that of a real instrument. My sister played around with the prototype quite a bit and came back with a few suggestions and modifications. One thing she pointed out was that I was not restricting the range of notes, so when trying to play a very low frequency (e.g. 10 Hz), the sound is hoarse and unpleasant; she asked me to restrict the frequency range of the instrument. She also suggested that I pick one instrument, research its exact range and notes, and make our instrument mimic that. Another suggestion, which I would definitely love to implement but would give lower priority, is creating discrete notes rather than a continuous range of frequencies. This particular synthesizer can already only play a limited set of notes, so it is discrete; however, she suggested I make the notes discrete before passing the signal through MIDI, as that would give me control over which notes the instrument can play.
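Both of her first two suggestions boil down to constraining the mapping between the knob and the notes. A sketch of what that could look like, assuming the same A0 potentiometer and a C major scale as a placeholder choice:

// Restrict the playable range and quantize the knob to discrete scale notes.
const byte SCALE[] = {0, 2, 4, 5, 7, 9, 11};  // C major semitone offsets
const byte NUM_STEPS = 14;                    // two octaves of discrete steps
const byte LOW_NOTE = 60;                     // middle C as the lower bound

// Replaces the continuous map() call in the earlier sketch's loop
byte quantize(int reading) {                  // reading: 0..1023 from analogRead
  int step = map(reading, 0, 1023, 0, NUM_STEPS - 1);
  return LOW_NOTE + 12 * (step / 7) + SCALE[step % 7];
}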

While talking to Elaine about the project, she too was happy with what has been done so far. She did, however, ask me to add a control to manipulate the volume of each note played. This is quite straightforward and doable, and it is a very good suggestion that will allow for more dynamic playing and make the device feel more like a true musical instrument. She also asked if there was a way to choose which instrument she plays, i.e. to vary the instrument. This is definitely possible if she manipulates the software directly, but doing it dynamically through the device will be quite challenging. I hope it is possible, as it would be a great feature to add to the project.
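Both requests map onto standard MIDI messages: per-note volume is the velocity byte of a Note On, and switching instruments is a Program Change. A minimal sketch of the two helpers, with the volume control assumed to be a second potentiometer on A1:

const int VOLUME_POT = A1;   // assumed second potentiometer for volume

// Per-note volume: scale the pot into the 1..127 velocity range
void playNote(byte note) {
  byte velocity = map(analogRead(VOLUME_POT), 0, 1023, 1, 127);
  Serial.write(0x90);        // Note On, channel 1
  Serial.write(note);
  Serial.write(velocity);    // louder knob position = louder note
}

// Instrument switching: a 2-byte Program Change message
void setInstrument(byte program) {
  Serial.write(0xC0);        // Program Change, channel 1
  Serial.write(program);     // e.g. 0 = Acoustic Grand Piano in General MIDI
}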

Moving Forward:

Through this process, our team learned a lot about the main goals of our project. Before, we had an idea; now we have a strong, detailed understanding. We also discovered the strengths and weaknesses of our prototypes to help us move forward. Our “divide and conquer” approach to the prototyping research was effective, allowing us to scope and answer a wider range of the questions and gaps we had. In the end, we found that ergonomic prototyping not only better mapped Elaine’s range of motion, but also pinned down thresholds of comfort firsthand. The software was much less straightforward than expected, and designing with constraints was much more difficult than expected! We made purposeful design decisions, but the constraints weren’t fully thought through, causing issues during the user tests.

After the prototype presentations, we were able to talk with Elaine and get her feedback. It was extremely helpful because there were some aspects she really enjoyed and wanted us to keep, as well as things she does not want us to have. Elaine talked about how she would love for the device to change instruments, because of her past experience of having a very limited range of options in band during school. She is also interested in having access to both discrete and continuous change. As we continue to iterate, we plan to keep working with Elaine to finish our project. It is extremely helpful that she has a strong background in this field, so if we hit an obstacle, we can reach out to her and receive help.

Overall, we are able to look into the new questions this exploration surfaced. Now that we have three individual research directions, we plan to use all the feedback we received to plan our next steps.

  • Decide what type of interaction our device will focus on, i.e. touch, sensor, or mouse
  • Combine the software with the design prototype
  • Build the final iteration of the prototype based on feedback
  • User test with our client to get final feedback
  • Finish the musical device and reflect on the process and potential next steps
Team Brenda 2 – Prototype Documentation https://courses.ideate.cmu.edu/60-223/f2020/work/team-brenda-2-prototype-documentation/ Tue, 24 Nov 2020 04:56:47 +0000 https://courses.ideate.cmu.edu/60-223/f2020/work/?p=11951 Arleen Liu, Claire Koh

 

1. Brief overall introduction for context

Following the first interview with Brenda, the first week of prototyping saw slow progress in our efforts to develop an adequate mechanism for our initial design concept. The mechanical nature of the foldable foot-holder we originally wished to develop presented us with a variety of problems we were not familiar with, so we decided to hold a second interview with Brenda to present our current issues to her and possibly discuss new design ideas for the project.

Upon completing the second interview with Brenda (full documentation on both the first and second interviews can be found here), we had a broader spectrum of ideas to choose from – some of which would require more electrical components than the mechanical ones we had originally hoped for. Among the several problems Brenda mentioned, the most addressable seemed to be her inability to see behind her wheelchair, caused by the headrest fixed to her chair as well as the lack of mobility in the left side of her body. She told us that she has simply learned to be careful when looking back or moving backwards.

To address this issue, we decided to make a device that would help her, to some degree, become better aware of what is behind her. Of the few possibilities for the device, we chose to incorporate a set of three ultrasonic sensors that would capture almost the entire area behind her, plus a combination of a speaker and LED lights as an output device to provide visual and auditory feedback for Brenda.

 

2. Prototype

Arleen:

This prototype was designed to help figure out the electrical composition of the device, as well as to experiment with different control versions to find the most intuitive one.

My prototype consisted purely of the electrical components necessary to perform all of the device’s intended functionality. More precisely, it consisted of 3 LEDs making up a visualizer corresponding to the detection of the 3 ultrasonic sensors, a speaker for obstacle alerts, a push button for manipulating control settings, and the 3 ultrasonic sensors themselves. While my purely electrical prototype could not model the full extent of the interaction (particularly the placement of the device parts for the most accurate results), it simulated the interactions of the different electrical parts well, so acting on one part immediately showed its impact on another (like the button control for the LEDs).
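To give a sense of how these parts interact, here is a minimal sketch of the sensing loop for one sensor; pin numbers, the alert distance, and the beep tone are assumptions, and the real prototype triples the ultrasonic reading and adds the button-controlled LED modes.

const int TRIG_PIN = 9;      // HC-SR04-style trigger (assumed pin)
const int ECHO_PIN = 10;     // echo input (assumed pin)
const int LED_PIN = 5;
const int SPEAKER_PIN = 6;

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(LED_PIN, OUTPUT);
}

// Fire a 10 us trigger pulse and time the echo; ~58 us of echo per cm
long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  return pulseIn(ECHO_PIN, HIGH, 30000) / 58;  // 0 means no echo heard
}

void loop() {
  long cm = readDistanceCm();
  if (cm > 0 && cm < 100) {        // obstacle within ~1 m (assumed cutoff)
    digitalWrite(LED_PIN, HIGH);   // light the matching visualizer LED
    tone(SPEAKER_PIN, 880, 100);   // short alert beep
  } else {
    digitalWrite(LED_PIN, LOW);
  }
  delay(100);
}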

Close-up of the single ultrasonic sensor standing in for the part of the device meant to hold the arrangement of 3 ultrasonic sensors.

Close-up of the LED arrangement and speaker, modelling the alert/visualization system that responds to the ultrasonic sensor feedback.

Overall scale of my prototype with all the essential electrical components mounted together.

Above is a video of the final button-LED visualizer mechanic, which my test subject’s feedback identified as the most intuitive version.

Screenshot from our planning google doc for what each of our prototypes would consist of. I built my prototype based off this initial plan.

Progress stage of my prototype with only the LED lights fully wired up with the push button as well.

Picture from my feedback and testing stage with my completed prototype featuring my test subject, my mom, experimenting with different LED control mechanisms.

Before Claire and I split up the prototyping focuses, we were debating the exact mechanism and purpose of the LED visualizer, which ended up being the highlighted focus of my prototype, because we couldn’t decide whether having both auditory and visual feedback from the ultrasonic sensors was overkill. We therefore decided to test whether it was better for the visualizer to indicate ultrasonic feedback or to just show the active ultrasonic angle. That led my main prototype focus to the controls of the LED visualizer, which I had my mom experiment with and test for me. Her feedback was that separating the auditory and visual feedback into different purposes (auditory feedback specifically for proximity detected by the ultrasonics, and the visualizer for which ultrasonic view was active) would be better than having both serve the same purpose and merely display the same data in different forms. While my mom’s answer made sense given her reasoning, it was rather surprising to receive the opposite feedback from Brenda, who preferred the visual feedback from the ultrasonics. This led me to realize just how different viewpoints can be given each person’s circumstances.

Based on the feedback I received, for our end product I chose to incorporate Brenda’s feedback, as she is the client who will (ideally) use this device. This means my prototype did not end up capturing the final device behavior accurately; moving forward, revisions will need to be made to change the purpose and functionality of the LED visualizer feedback, in addition to acting on the other overall feedback to make our device design even more useful and effective.

Claire:

This prototype was designed to illustrate the interaction between the person and the device, as well as the general appearance and placement of the system. I focused on creating the prototype for the visual and auditory feedback system and how it could interact with the person.

The visual and auditory feedback system (the visualizer) will consist of an LED screen and a speaker. The LED screen (shown as a semicircle divided into three) represents three distinct LED lights, each connected to one of the three ultrasonic sensors. The speaker will be connected to the whole set of three ultrasonic sensors. The LEDs will have three states – off (white), low on (pink), and high on (red) – corresponding to the proximity of the wheelchair to objects behind it. If an object is far enough away, the screen will be white; closing in, pink; very close, red. The speaker follows the same logic: if far, the sound is off; closing in, slow beeping; very close, faster beeping.
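That three-state logic can be written down directly; the distance cutoffs below are placeholder assumptions to be tuned on the actual wheelchair:

// Proximity states for the visualizer and speaker (cutoffs are assumptions)
enum State { FAR_OFF, CLOSING_IN, VERY_CLOSE };

State classify(long cm) {
  if (cm <= 0 || cm > 150) return FAR_OFF;  // no echo or far: white, silent
  if (cm > 60) return CLOSING_IN;           // pink LED, slow beeping
  return VERY_CLOSE;                        // red LED, fast beeping
}

// Milliseconds between beeps for each state (0 = silent)
unsigned long beepInterval(State s) {
  switch (s) {
    case CLOSING_IN: return 800;  // slow beep
    case VERY_CLOSE: return 250;  // fast beep
    default:         return 0;
  }
}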

Three ultrasonic sensors will be attached to the back of the chair

The system is at rest when the visualizer is laid down

The system is activated when the visualizer is lifted

Below: process photos

 

 

Initial sketch for the device

GIF I made for the screen

Feedback from Brother Koh

 

One of the things I found while working on the physical prototype was a slight user interface issue. I realized that sometimes you would want to turn the device off, in case you’re backing into something on purpose, or if you NEED something or someone behind your wheelchair and don’t want the device constantly beeping. That is why I made the system so that laying the visualizer down deactivates it and lifting it up activates it. I thought about adding a button, but realized the position of the visualizer is more recognizable and intuitive (once the visualizer is down, you can’t see it, so you know the system has been deactivated) than a small LED light indicating the on/off state or the position of a button (pressed/not pressed).

When I showed the prototype and explained it to my brother, the one thing he kept insisting on was that there are already commercially available cameras for this kind of issue. This is what inspired me to start thinking about what could be done to make the device more specific to Brenda’s problems, as explained later in part 3: Moving Forward.

3. Moving Forward

The prototyping process was a good way for us to discover the strengths and weaknesses of our concept, as well as the drawbacks of the initial interaction design that we missed when only thinking about it theoretically. It was not until after we built the physical working prototype that we realized we need to focus more on designing a better user interface and making the design and arrangement of parts as intuitive as possible. Consequently, some future considerations include: What would be the best way to arrange the controls for easiest use? How can we make the device more practical, unique, and customized to Brenda herself (there are already backup video cameras for cars, etc.), so that it does more than what commercial video cameras can already do?

Upon completing our prototype presentation and critique, we were also able to analyze the feedback we got and outline the general direction we want to head for the next step in the project. We received more photos from Brenda showing blind spots around her wheelchair that we could address with our new device. Following are the changes we decided to implement in our design moving forward.

  • Change to vibration instead of auditory beeping feedback (see the sketch after this list)
  • Lower the detection range of the ultrasonic sensors to detect pets and fallen objects
  • Keep the version with all ultrasonic sensors active; the visualizer will show which direction detects the closest object, with corresponding light intensity
  • Keep in mind that the armrest is prone to damage; add the device to the inside of a pocket attached to the armrest instead
    • Will get an image of the armrest area and the blind spot on the back of the chair
  • Dogs might be able to hear the ultrasonic ping and not like it – find a higher-frequency ultrasonic device.
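For the first change on the list, the speaker could be swapped for a small vibration motor driven from a PWM pin through a transistor, keeping the same proximity logic. A minimal sketch under those assumptions:

const int MOTOR_PIN = 3;   // vibration motor via transistor on a PWM pin (assumed)

// Closer obstacles produce a stronger buzz; silence when nothing is nearby
void vibrateForDistance(long cm) {
  if (cm <= 0 || cm > 150) {
    analogWrite(MOTOR_PIN, 0);               // motor off
  } else {
    int strength = map(cm, 150, 0, 60, 255); // closer = stronger vibration
    analogWrite(MOTOR_PIN, strength);
  }
}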

Picture from Brenda (back of her wheelchair)

Picture from Brenda 2 (back of her wheelchair)

Picture from Brenda (the blind spot under her wheelchair where things/pets can get stuck)

We feel hopeful about the feedback we got, the changes we decided to make, and the direction we are heading in as we finalize our concept. We believe that repurposing our concept into a new low-range ultrasonic device that can detect pets and fallen objects will be more specific and helpful to Brenda – it wouldn’t require us to change too much of our initial concept, and it is well within our ability to produce.

Team Amy – Interview Documentation https://courses.ideate.cmu.edu/60-223/f2020/work/team-amy-interview-documentation/ Tue, 10 Nov 2020 07:25:00 +0000 https://courses.ideate.cmu.edu/60-223/f2020/work/?p=11872 Interviewing: Amy Shannon 

Team: Dani, Evan, Daniel

INTRODUCTION

This project was designed around building a functioning device that can improve the daily life of a person living with a disability. Our team, Team Amy, was set to design an assistive device for Amy Shannon. Amy sustained a spinal cord injury in an accident shortly before her 13th birthday. The accident damaged her spinal cord at the C4 level, leaving much of her body paralyzed, particularly her legs and hands.

In this interview, we tried to learn about her daily routine and her life so that we could understand her situation and build a device that can improve it. Daniela Delgado, Daniel Moon, and Evan Tipping plan to derive several ideas after finishing the interview, which are discussed at the end of this post.

AGENDA

Before our interview, we prepared some possible questions to ask her, anticipating how she might respond and how we might help her identify problems in her daily life should the need arise. Our agenda is below:

Before starting the interview, we discussed possible questions to ask Amy to improve the quality of the interview. We used a shared Google Doc to gather questions and held our meeting over Zoom.

Here are some example questions we wrote in our doc.

Question 1.

Can you walk us through a typical day in your life? What are some of the major events within your day and do you have any daily rituals?

Question 2.

Do you find certain daily tasks to be more frustrating to complete than others? Why?

Question 3.

Is there anything that you want to get into? A new hobby maybe?

Question 4.

If you had a device to help you with this task, how prevalent within your living space would you like it to be? Do you want it to be portable and on your person or stationary?

 

Through this interview planning, we improved the quality of our questions a lot.

MEETING SUMMARY & MAJOR TAKEAWAYS

At the start of our conversation, when we were introducing ourselves and ensuring that everyone was on the same page about the project deliverables, Amy began by noting that she did not have any particular ideas as to what type of device she would like or what she would like it to do. Furthermore, she said that she wasn’t the best at coming up with ideas on the spot, so she was a bit nervous that we wouldn’t leave the interview with many actionable concepts. (As a side note, her comment about on-the-spot thinking was very helpful to us: the next item on our agenda, after the project timeline overview, was an icebreaker activity where we would all quickly come up with assistive device ideas to get the ball rolling. We scrapped this part of the interview and jumped into questions about her daily life instead, to avoid putting her in a potentially uncomfortable situation.) We assured her that we would all work together to figure out a concept she was excited about, and that she didn’t need to bring any ideas to the table, just herself and her story. From here, we jumped into the meat and potatoes of the interview, where she told us about her daily life, the injury during adolescence which led to her disability, and what she likes to do for fun (both by herself and with her aides and parents).

Throughout the interview, Amy introduced us to her attendant, April, who helps her in the mornings and evenings to get ready for the day or for bed, walks her through muscle-strengthening exercises, and assists her with tasks such as scrolling through shows on Netflix or completing her jewelry designs. Amy also spoke about her current assistive devices, such as her joystick-controlled wheelchair, her splints which help her maintain wrist and ankle placement, her voice recognition software which helps her navigate her computer more easily, and the CamelBak she keeps attached to her chair so she doesn’t have to ask people to help her get water.
One interesting thing she noted was that something you may think is a useful assistive device, such as a voice-controlled smart environment (e.g. an Amazon Alexa), may in reality not be as helpful to a disabled person as you believe. For instance, rather than using voice-controlled speakers, she asks April or another aide to use a remote to choose a TV show, because sometimes she doesn’t know specifically what she would like and just wants to see the options. A voice-controlled device requires a specific command, so she doesn’t find them very helpful most of the time, even though she said most people would assume they would be.

During the second half of the interview, we shifted from what she currently does in her day towards what she likes doing, even if it’s not a daily activity for her. This is when she got on the topic of art! She has always had an interest in all types of arts and crafts and carried that into adult life, getting her Master’s degree from CMU in Arts Management. She likes different types of painting, and is currently interested in acrylic pour styles, but has stopped painting like she used to because of a mismatch between her assistive devices.
To paint, she needs an easel to rest the canvas on, her wheelchair, a mouthstick (which she described as a mouthguard-like device with a paintbrush attached, allowing her to paint with her mouth), and someone to give her the paints. Her mouthstick is uncomfortable to use for prolonged periods, but she would still use it to paint; that ended when she got a new wheelchair, which doesn’t let her get close enough to her easel to paint.

We began to gauge whether she’d be interested in a device that would let her create art more independently, since she loves it but currently faces many barriers to it. Amy seemed really excited about the idea, and with that, we began to wrap up the interview, since we were about to go over an hour and it was late evening and we didn’t want to keep her. We let her know that we would be in touch, and we are currently maintaining an open line of communication with her to make sure she’s just as excited as we are about the project’s direction!

While this documentation outlines a basic overview of our meeting, we also decided to record the interview, with Amy’s consent, for future reference and personal documentation purposes. Furthermore, Dani and Daniel also typed notes throughout our conversation with Amy to see if there were any differences when it came to takeaways or things to note amongst the group.

Dani’s Notes Document

Daniel’s Notes Document

Major Interview Takeaways:
  • Amy has limited movement due to a C4 spinal injury she got as a young teen. She needs assistance throughout the day to complete her daily tasks because of it
  • Amy needs to use both hands to pick objects up but has enough movement with her hand to use a joystick
    • She is currently doing more strengthening and mobility exercises and is excited about her progress
    • She uses many types of assistive devices, ranging from physical splints to speech recognition software
  • It’s important to ask if something is actually helpful to a person with a disability instead of making assumptions
    • Some things that you think would be helpful actually are not (or at least their helpfulness is subjective)
    • Don’t make assumptions on what people can or cannot do – let them tell you
  • Amy loves all types of art and would like to do more without having to ask for as much assistance as she currently has to
    • There are many mismatches between her assistive devices which are currently causing barriers for her when it comes to painting

REFLECTION AND TEAM THOUGHTS

Overall, our interview with Amy took a more conversational tone; we used our interview agenda as a guide for the overall structure, but didn’t stick to a strict question-and-answer format, instead opting to ask impromptu follow-up questions based on her comments and seeing where they led us. We looped back to our agenda on occasion to ensure that all the topics we wanted to cover were discussed, but most of the interview was us talking with Amy about her day-to-day life, what she likes to do, and what she wishes she could do more of. While this proved to be an effective way to generate ideas with Amy, since she wasn’t sure what she wanted, this approach may not be as helpful when interviewing people who prefer to give shorter, less anecdotal responses.

Furthermore, we wish we could have incorporated more generative interviewing tactics for a more active brainstorming session but, as we were unsure of her capability to draw and she had mentioned that she doesn’t like on-the-spot thinking, we thought it better not to interrupt the flow of conversation (there weren’t many lulls in the interview, which was a pleasant surprise!) by trying an activity.

After the interview, having discussed the general direction of the project – an assistive device that facilitates the creation of art – we each began to generate concepts for what this could look like. This allowed us to see where we were and weren’t aligned as a group: we all liked the idea of making painting easier for her by alleviating some of the mismatches she experiences when painting, but we didn’t have exact alignment on what type of device would provide the best solution. We are currently leaning towards a rig that moves a paintbrush by detecting and exaggerating her hand movements, but since we want to keep a clear line of communication with Amy to ensure she’s happy with our project direction, we have sent her some of our sketches and are coordinating a follow-up Zoom call to discuss her thoughts on them.

Ultimately, while our interview ended up deviating from our exact plan, we had a robust and fun conversation with Amy which let us get to know her better as a person and as a client. We are excited to continue to work with her throughout this process and see how far we can go with our project concept!

Elaine Houston: Interview https://courses.ideate.cmu.edu/60-223/f2020/work/elaine-houston-interview/ Tue, 10 Nov 2020 07:23:10 +0000 https://courses.ideate.cmu.edu/60-223/f2020/work/?p=11868 Introduction: 

In preparation for the final project, our team of Sruti Srinidhi, Jina Lee, and Connor McGaffin interviewed our client, Elaine Houston, so that we can design an assistive device that caters to her as a person with physical disabilities. Before meeting with Elaine, we sat down and debriefed on our intent, goals, learning objectives, and secondary research. Our team purposefully went through this process so that we would enter the interview with a strong understanding of what we needed to ask and learn. Thanks to our research, we entered the interview with plenty of background information about Elaine, making it easier for us to empathize and have a comfortable conversation. Given the current situation, we understand the constraints we will face, but we plan to create a design that is as accurate as we can make it through various mediums, like a simulated version and digital renderings of our prototype.

 

Agenda

Before our interview, we prepared by making an interview script and researching Elaine Houston through her website and other online sources. Our team wanted to make sure we were all on the same page with the project, so we created our own overall brief. This was extremely helpful for getting a better understanding of what we plan our outcome to be at the end of this project. In addition, working through this process was insightful and allowed us to thoroughly prepare for our interview! After the research, we created “How Might We” questions to develop a stronger understanding of what we would like to take away from the overall project.

Here is the brief that our team created!

Here are the resources we used to learn more about Elaine!

From the preparation, we were able to start creating interview questions. Our overall goal with these questions was to help her identify problems in her daily life that she wants an intervention for. We initially wanted to record our session but, for various reasons, were unable to. Fortunately, our whole team was able to take notes and interview at the same time. We split our questions into three sections: Warm up/About, What Accessibility Means to Elaine, and Wrap up. We split the questions this way to get the best possible understanding of Elaine. We weren’t able to ask all the questions because our interview ran longer than expected, but we were glad we had this structure. In the future, we plan on following a stricter structure so that we can touch on all the topics that would help our project.

Here is the interview script that we had planned.

After our interview, it was interesting to see how we all take notes slightly differently. Some only used the document to type up notes, while others jotted notes by hand while also typing. Seeing this difference was a nice way to get to know each other’s work styles! Here is the link for those who would like a more in-depth view of our interview and process: link.

Here is a part of the napkin that one of the team members wrote on so that after the interview, they would be able to go back to the document and calmly reflect and add their notes.

Here are all the notes combined on a google docs!

Summary and key takeaways

The interview was quite informative and eye-opening for us. As a team, we learned a lot about how tasks that seem trivial to us, like rolling a die, can be a challenge to others.

Most of the interview was taken up by Elaine talking about the different projects she has worked on and how these devices help her in her daily life. This really helped us understand her lifestyle, interests, and problems. She talked about her service dog, who has an RFID reader and can identify and press different buttons for her, like in an elevator. Adding to that, she mentioned that her service dog often gets confused when there is a label on a button. This concern was extremely interesting because the confusion came from human-made choices, e.g. someone putting a sign over the button, or the button and the platform it sits on both being metal. These examples surprised us all because of the irony that another human caused the problem, when we typically expect humans to help rather than create more difficulties. She also talked about how she struggled to play laser tag with the regular devices, so she built herself a switch that she could easily use to play the game. With that device, able-bodied users would be given constraints to “make the playing field more fair.” This type of thinking was very new to the whole team, but very eye-opening and helpful.

After a long conversation about her different experiences, Elaine began explaining the problems she faces on a daily basis. She started off by talking about how she struggles to open clamshell packaging. She emphasized that she constantly has to use her mouth to open packaging, but even that sometimes gets very challenging. Some of her friends come to her house and meal prep, and in the process they close her jars too tightly, which she then struggles to open. Furthermore, she talked about how she finds it challenging to roll dice to play games and would love a device that could simulate a dice roll with just one button. She then continued to emphasize the idea of cross-ability – essentially bridging the gaps between people’s abilities and placing them on similar playing fields – which was quite interesting and something we are definitely looking into.

Reflection

Overall, the meeting did not follow the plan we had in mind. Initially, we wanted to follow a more structured framework, hence our preparation, but as we started, the interview became more conversational. The interview also lasted much longer than we expected, running to just under an hour and a half. Due to Elaine’s extensive experience with engineering, it was much harder to extract what she found difficult, because she had already solved, or was in the process of solving, those problems herself. The majority of the meeting involved Elaine talking about the different devices she has built and how she went about building them. It was great to listen to, as she is very knowledgeable and we learned a lot from it. However, it was not particularly helpful in giving us the insight required to build an assistive device for her. In addition, it was quite difficult to steer her back to our original questions, as she spoke passionately about her own work for long periods of time.

Eventually, she did come around to talking about the problems she faces, but the solutions were more mechanical than ones requiring an Arduino, or they did not require electronic devices at all. Many of the problems she suggested seemed to be out of our control and/or too broad and complex for us as beginners. It was extremely interesting to be educated about creating devices that bridge able-bodied and non-able-bodied users; this type of thinking was very new to all of us and of great interest but, given our lack of experience, an extremely difficult and complex concept to work on.

After the interview, as a team, we discussed and analyzed the data we collected. However, the results weren’t exactly what we had hoped for: since the whole team had the same takeaways from the interview, we were unable to come up with a good enough device idea to build for the project. Looking back at our secondary research, we went through Elaine’s website, where she talks about her interest in music. We decided to take this route and proposed the idea of a button-based musical instrument for her. Because we want our product to cater to Elaine, we sent her our proposal for the musical device and are still waiting for a response. We want to create a device that not only exemplifies the skills we learned in class, but is also something Elaine is interested in. Our main goal is to develop a device that Elaine would want to use, so her feedback is extremely crucial to our next steps.

 

 
