Crit 2 – Stabilizing Device for Tremors

Premise

My family and I struggle with a progressive nervous system disorder that causes an essential tremor: it starts in your hands when you’re younger (as in my case) and migrates throughout your body as you get older (as in my mom’s).

Proposal

For this project I wanted to look into ways to help stabilize things you’re holding if you have a tremor. I made a device that uses an accelerometer to detect movement and offsets that movement using two servo motors that control the x and y rotations.

To do this I researched Quaternions and Spatial Rotations.

There are three different states:

  • stabilizer: help for when you need to hold something still
  • pouring: help for when you need to pour something
  • normal: the device does nothing
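As a rough illustration of the state logic, here is a minimal Arduino-style sketch – not the actual project code; the pins, the analog accelerometer on A0/A1, and the pouring behavior are all my assumptions:

#include <Servo.h>

enum Mode { NORMAL, STABILIZE, POUR };  // the three states described above
Mode mode = STABILIZE;                  // would be selected with a switch in practice

Servo servoX, servoY;                   // servos controlling the x and y rotations

void setup() {
  servoX.attach(9);
  servoY.attach(10);
}

void loop() {
  // assumed two-axis analog accelerometer: 0..1023 maps to -90..90 degrees of tilt
  int tiltX = map(analogRead(A0), 0, 1023, -90, 90);
  int tiltY = map(analogRead(A1), 0, 1023, -90, 90);

  switch (mode) {
    case STABILIZE:            // cancel the measured tilt so the payload stays level
      servoX.write(90 - tiltX);
      servoY.write(90 - tiltY);
      break;
    case POUR:                 // keep one axis level, let the pouring axis follow through
      servoX.write(90 + tiltX);
      servoY.write(90 - tiltY);
      break;
    case NORMAL:               // device does nothing: hold center
      servoX.write(90);
      servoY.write(90);
      break;
  }
  delay(20);
}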

Proof of Concept

Because of the complexity of offsetting movements, and because I am not knowledgeable enough about the physics, I found it really difficult to make the stabilizer and pouring states work together. Hence, the demonstration above only shows the stabilizing state.

Adding to that, I struggled with tuning the relationship between the input data and the sensitivity/stability of the device. In other words, I didn’t know how to keep the device from jittering so much while reading live data. For future iterations, learning how to normalize and smooth the input data should help.
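One common approach, as a sketch: an exponential moving average that trades a little responsiveness for much less jitter. The smoothing factor below is a made-up starting point to tune, not a value from the project:

const float ALPHA = 0.15;   // lower = steadier but laggier response
float smoothX = 512;        // running average of the raw accelerometer reading

void setup() {}

void loop() {
  int raw = analogRead(A0);
  smoothX = ALPHA * raw + (1 - ALPHA) * smoothX;    // exponential moving average
  int tiltX = map((int)smoothX, 0, 1023, -90, 90);  // use the filtered value downstream
  delay(10);
}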

Fritzing Circuit Sketch

stabilizer_code

Critique 02: Assisting Individual Body Training through Haptics

Problem

When training our bodies and building muscle at a gym, it is really important to maintain accurate, balanced poses and gestures – not only to avoid injury but also to maximize results and keep muscles balanced. However, when we go to the gym by ourselves, it is sometimes really difficult to tell whether we are using the equipment in the right way or not.

Solution

I thought about a device – it could be a smartphone with an armband, a smartwatch, or some other device attached to our arms – that provides haptic feedback so that we can keep the right pose.

For example, when we are doing push-ups on the ground, the device checks the angle of our arms and the timing as we go down. After we go up and go down again, the device provides haptic feedback (vibration of varying intensity) to signal that we have to go down a bit further to reach the right angle. When we reach it, it vibrates briefly to let us know we did well.
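The project itself is written in Swift for the iPhone, but as a rough hardware analogue, here is a minimal Arduino sketch of the feedback rule; the target angle, threshold, and pins are my placeholders:

const int MOTOR_PIN = 5;        // vibration motor on a PWM pin
const int TARGET_ANGLE = 90;    // assumed elbow angle at the bottom of a push-up
bool rewarded = false;          // remember whether we already buzzed the reward

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  // stand-in for the phone's gyro: arm angle from an analog tilt sensor on A0
  int angle = map(analogRead(A0), 0, 1023, 0, 180);
  int shortfall = angle - TARGET_ANGLE;   // how far short of the target we stopped

  if (shortfall > 5) {
    // not low enough yet: vibrate harder the further we are from the target angle
    analogWrite(MOTOR_PIN, constrain(map(shortfall, 5, 60, 60, 255), 0, 255));
    rewarded = false;
  } else if (!rewarded) {
    // target reached: one short buzz to say "well done"
    analogWrite(MOTOR_PIN, 200);
    delay(150);
    analogWrite(MOTOR_PIN, 0);
    rewarded = true;
  }
}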

Proof of Concept

I tried coding in Swift, using the gyro sensors in the iPhone. I believe I could develop this idea much further – it could check various angles of our bodies while we exercise, even during stretches to increase our flexibility. Also, if it kept collecting data over time, it would come to understand our capability at a certain exercise or muscle group, so that it could guide us to gradually increase that capability at an appropriate tempo without harming our bodies.

Videos and Codes

degreeChecker – code

 

Emotional Haptic Feedback for the Blind

Problem:

Haptic feedback as a means of delivering information to the visually impaired isn’t a new concept to this class. Both in class assignments and in products that already exist in the real world, haptics have certainly become a proven tool. However, I feel that there has not been much consideration as to the more specific sensations and interactions that haptics can provide.

Proposed Solution:

With this project, I attempted to create a haptic armband that adds another dimension of feedback: spatial. By arranging haptic motors radially around the arm, I was able to control intensity, duration, and surface area to create different sensations. Controlling these variables, I recreated the sensations of tap, nudge, stroke (radially), and grab.
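A minimal sketch of how those sensations might be sequenced, assuming four motors on PWM pins; the pin numbers, intensities, and timings are my guesses rather than the project’s values:

const int MOTORS[4] = {3, 5, 6, 9};   // tactors arranged radially around the arm

void buzz(int m, int strength, int ms) {
  analogWrite(MOTORS[m], strength);   // intensity
  delay(ms);                          // duration
  analogWrite(MOTORS[m], 0);
}

void tap()   { buzz(0, 160, 60); }    // one point, short and light
void nudge() { buzz(0, 255, 250); }   // one point, longer and stronger

void stroke() {                       // sweep around the arm, one motor at a time
  for (int i = 0; i < 4; i++) buzz(i, 140, 120);
}

void grab() {                         // all motors at once = maximum surface area
  for (int i = 0; i < 4; i++) analogWrite(MOTORS[i], 255);
  delay(400);
  for (int i = 0; i < 4; i++) analogWrite(MOTORS[i], 0);
}

void setup() {
  for (int i = 0; i < 4; i++) pinMode(MOTORS[i], OUTPUT);
}

void loop() {                         // demo: cycle through the four sensations
  tap();    delay(1000);
  nudge();  delay(1000);
  stroke(); delay(1000);
  grab();   delay(2000);
}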

In terms of applications, I think timekeeping is a great illustration of how different sensations can play a role. For example, a gesture such as a tap or nudge would be appropriate for a light reminder at the top of the hour — on the other hand, a grab would be more suitable for an alarm, or if a user is running late for an appointment. Other more intricate gestures such as a radial stroke could be used to calm users down in stressful situations.

Proof of Concept:

Arduino Code and Fritzing Sketch

Crit 2: Key Fob Reminder System

Problem:

The blind and memory impaired often have trouble keeping track of small objects. Keys, phones, and wallets are all easily misplaced items. Forgetting commonplace but important items can be especially frustrating and cause real problems, especially if the behavior is repeated.

Solution:

A system that relies on RFID tags embedded in a keychain or fob could remind users that they left their devices on the table as they were leaving the house, and could make a device ping when the user approaches a household point of entry and needs a key – letting users know where their common household items are. Items can become “lost” or misplaced even in book bags, and this system would allow the user to feel a distinct “ping” for each device.


Proof of Concept:

A system of RFID readers that detects whether a user is exiting or entering through a common entrance would allow the reminder system to ping both the user and the device, while also letting the user manually ping the keys by pressing a button that triggers a dime-sized coin vibration motor.
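A minimal sketch of the doorway-event and manual-ping paths, assuming an MFRC522 reader at the entrance, a push button, and a coin vibration motor; the wiring and timings are my stand-ins, not the documented build:

#include <SPI.h>
#include <MFRC522.h>

MFRC522 rfid(10, 9);            // SS and RST pins (assumed wiring)
const int BUTTON_PIN = 2;       // manual "where are my keys?" button
const int MOTOR_PIN  = 5;       // coin vibration motor in the fob

void ping() {                   // a single distinct buzz
  digitalWrite(MOTOR_PIN, HIGH);
  delay(300);
  digitalWrite(MOTOR_PIN, LOW);
}

void setup() {
  SPI.begin();
  rfid.PCD_Init();
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  // a tagged fob passing the doorway reader marks an entry/exit event
  if (rfid.PICC_IsNewCardPresent() && rfid.PICC_ReadCardSerial()) {
    ping();                     // remind the user on their way through the door
    rfid.PICC_HaltA();
  }
  // manual ping: press the button to make the misplaced fob buzz
  if (digitalRead(BUTTON_PIN) == LOW) ping();
}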

Crit2

Your Personal Doorman

Idea

I often want to know who is at home when I’m on campus. If I’ve had a rough day – maybe I’d like to come home to chat with my roommate or maybe I’d like some alone time.

One might argue that Find My Friends has many of the features I require, but as an Android user that feature is not available to me. Another issue to point out is that if you live in an apartment building, like me, then there is some likelihood that they are not in the room but somewhere else in the building.

If I was to extend the project, I would try to use IFTTT to notify me when someone enters and exits. Additionally, I would add temporary keys so that if friends or family are visiting they can temporarily unlock my door when they need to.

Proof Of Concept

To input the password, I decided to use a keypad. Every person has an associated key, so we can track who is entering the apartment. To further incorporate the kinetic requirement of this project, I added an ultrasonic ranger which ‘unlocks the door’ when a person stands near the door from the inside. There are also modes that can disable certain keys from working if you need privacy.
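A minimal sketch of that logic, assuming the Arduino Keypad library, a servo as the lock, and an HC-SR04 ranger; the key assignments and distance threshold are placeholders:

#include <Keypad.h>
#include <Servo.h>

const byte ROWS = 4, COLS = 3;
char keys[ROWS][COLS] = {{'1','2','3'},{'4','5','6'},{'7','8','9'},{'*','0','#'}};
byte rowPins[ROWS] = {8, 7, 6, 5};
byte colPins[COLS] = {4, 3, 2};
Keypad keypad = Keypad(makeKeymap(keys), rowPins, colPins, ROWS, COLS);

Servo lock;                         // servo stands in for the door lock
bool privacyMode = false;           // disables the roommate's key when set

long distanceCm() {                 // HC-SR04 on pins 11 (trig) / 12 (echo)
  digitalWrite(11, LOW);  delayMicroseconds(2);
  digitalWrite(11, HIGH); delayMicroseconds(10);
  digitalWrite(11, LOW);
  return pulseIn(12, HIGH) / 58;    // echo time in microseconds -> centimeters
}

void unlock(char who) {
  Serial.print("unlocked by: ");    // log who is entering the apartment
  Serial.println(who);
  lock.write(90);                   // hold the door unlocked briefly, then relock
  delay(3000);
  lock.write(0);
}

void setup() {
  Serial.begin(9600);
  pinMode(11, OUTPUT);
  pinMode(12, INPUT);
  lock.attach(10);
  lock.write(0);
}

void loop() {
  char key = keypad.getKey();       // each person has their own key
  if (key == '1') unlock(key);                       // my key always works
  else if (key == '2' && !privacyMode) unlock(key);  // roommate's key respects privacy mode
  else if (key == '#') privacyMode = !privacyMode;   // toggle privacy mode

  if (distanceCm() < 30) unlock('i');  // someone standing inside: let them out
}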

Crit2 Arduino

Link to videos

 

Kinetic Crit: Touch Mouse

Concept

Whiteboards and other hand drawn diagrams are an integral part of day to day life for designers, engineers, and business people of all types. They bridge the gap between the capabilities of formal language and human experience, and have existed as a part of human communication for thousands of years.

However powerful they may be, drawings are dependent on the observer’s power of sight. Why does this have to be? People without sight have been shown to be fully capable of spatial understanding, and have found their own ways of navigating space with their other senses. What if we could introduce a way for them to similarly absorb diagrams and drawings by translating them into touch?

touch mouse prototype

The touch mouse aims to do just that. A webcam faces the whiteboard, held off the surface by ball casters (which minimize smearing of the image). The image collected by the camera is processed to find the thresholds between light and dark areas, and the result triggers servo motors to lift and drop material under the user’s fingers to indicate dark spots above, below, or to either side of their current location. Using these indicators, the user can feel where the lines begin and end, and follow the traces of the diagram in space.

https://youtu.be/y57xh_YXuHw

Inspiration

The video Jet showed in class – of special paper that a sighted person can draw on to create a raised image for a blind person to feel and understand – served as the primary inspiration for this project. After beginning work on the prototype, I also discovered a project at CMU using a robot to trace directions spatially to assist visually impaired users in wayfinding.

Similarly, in the physical build I was heartened to see Engelbart’s original mouse prototype. It served double duty as inspiration for the form factor, and as an example of a rough prototype that could be refined into a sleek tool for everyday use.

First computer mouse

 

The Build and Code

The components themselves are pretty straightforward. Four servo motors lift and drop the physical pixels for the user to feel. A short burst of 1s and 0s indicates which pixels should be in which position.
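A minimal sketch of the receiving side, assuming the burst arrives over serial as four ‘0’/‘1’ characters (left, right, up, down) followed by a newline; that framing is my assumption:

#include <Servo.h>

Servo pixels[4];                            // left, right, up, down tactile pixels
const int UP_POS = 60, DOWN_POS = 120;      // servo angles for raised/dropped

void setup() {
  Serial.begin(9600);
  const int pins[4] = {3, 5, 6, 9};
  for (int i = 0; i < 4; i++) pixels[i].attach(pins[i]);
}

void loop() {
  if (Serial.available() >= 5) {            // four bits plus the newline
    for (int i = 0; i < 4; i++) {
      char bit = Serial.read();             // '1' lifts a pixel, '0' drops it
      pixels[i].write(bit == '1' ? UP_POS : DOWN_POS);
    }
    while (Serial.available() && Serial.read() != '\n') {}  // consume terminator
  }
}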

The Python code uses OpenCV to read in the video from the webcam, convert it to grayscale, measure thresholds for black and white, and then average that down into the four pixel regions for left, right, up, and down.
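The project’s code is Python, but the same pipeline in OpenCV’s C++ API looks roughly like this; the region layout and the Otsu thresholding are my assumptions about the details:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
  cv::VideoCapture cap(0);                  // the webcam inside the mouse
  if (!cap.isOpened()) return 1;

  cv::Mat frame, gray, bw;
  while (cap.read(frame)) {
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    // Otsu picks the black/white threshold; INV makes dark marker lines white (255)
    cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

    int w = bw.cols, h = bw.rows;
    cv::Rect regions[4] = {                 // the four pixel regions
      cv::Rect(0,     h/4,   w/4, h/2),     // left
      cv::Rect(3*w/4, h/4,   w/4, h/2),     // right
      cv::Rect(w/4,   0,     w/2, h/4),     // up
      cv::Rect(w/4,   3*h/4, w/2, h/4)      // down
    };
    std::string bits;
    for (const cv::Rect& r : regions)
      bits += (cv::mean(bw(r))[0] > 64) ? '1' : '0';  // enough dark ink -> raise pixel

    std::cout << bits << "\n";              // the burst of 1s and 0s for the servos
  }
  return 0;
}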

I hope to have the opportunity in the future to refine the processing pipeline and the physical design, and perhaps even add handwriting recognition to allow for easier reading of labels; until then, this design can be used to test the general viability of the concept.

Python and Arduino code:

wbtouch

Crit #2 – Grumpy Toaster

Problem: Toasters currently use only their *pop* and occasionally a beep to communicate that they are finished toasting whatever is inside them.  It is also difficult to tell the state of various enclosed elements of the toaster, like whether the crumb tray needs to be emptied, the heating elements need to be wiped off, etc.  I believe the toaster could communicate a lot more with its “done” state in ways that would be inclusive of a variety of different user types.

Solution: More or less, a toaster that gets grumpy if it is left in a state of disrepair.  Toasters are almost always associated with an energetic (and occasionally annoying) burst of energy to start mornings off, but what if the toaster’s enthusiasm was dampened?  Because users are generally at least half paying attention to their toaster, a noticeably different *pop* and kinetic output could alert them that certain parts of the toaster need attention.  For example, if the toaster badly needed cleaning, it would slowly push the bread out instead of happily popping it up.  Both the visual and audio differences generated by modifying this kinetic output would be noticeable.

Proof of Concept: I constructed a model toaster (sans heating elements) using a small servo and a raising platform.  Because a variety of sensing methods for crumbs did not work, “dirtiness” is represented by a potentiometer.  I’ve substituted a light push switch for the common lever to accommodate a broader range of possible physical actions.

The servo drives the emotion of the toaster.  It can sharply or lethargically push its contents out, conveying its current state to the user.  Once the toast is removed, the weight of the next item put inside lowers the platform back onto the servo.
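A minimal sketch of the mood logic, assuming the dirtiness potentiometer on A0, the push switch on pin 2, and the platform servo on pin 9:

#include <Servo.h>

Servo platform;
const int SWITCH_PIN = 2;          // stands in for the toaster lever

void setup() {
  platform.attach(9);
  pinMode(SWITCH_PIN, INPUT_PULLUP);
  platform.write(0);               // platform starts lowered
}

void loop() {
  if (digitalRead(SWITCH_PIN) == LOW) {        // "toast" requested
    delay(2000);                               // pretend to toast
    int dirt = analogRead(A0);                 // potentiometer stands in for crumb sensing
    // clean toaster: quick, happy pop; dirty toaster: slow, grumpy push
    int stepDelay = map(dirt, 0, 1023, 2, 40); // milliseconds per degree of travel
    for (int pos = 0; pos <= 90; pos++) {
      platform.write(pos);
      delay(stepDelay);
    }
    delay(3000);                   // wait for the contents to be removed
    platform.write(0);             // the next item's weight lowers the platform again
  }
}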

Files + Video: Drive link

Discussion: This model is ripe for extension.  I originally designed this around the idea of overstuffing your toaster, something I do frequently that not only doesn’t toast the bread well but surely dumps more crumbs than necessary into the bottom tray.  Unfortunately, I couldn’t figure out a way to test for stuffedness, and went with straight cleanliness instead.  But, the overall idea behind designing emotionally (grumpy toaster, fearful car back-up sensor) has helped me understand this class a lot better, and I hope to continue working on that line of thinking with more physical builds like this.

 

Feeling Color

Problem:

For people who are either color blind or blind, seeing and comprehending color – which is embedded in many aspects of our lives as an encoder of information – can be very challenging, if not impossible. Color also adds another dimension to our experiences and enhances them.

A General Solution:

A device that can detect color and send the information to actuators that display it through vibration. Ideally, the system would vary the specificity and accuracy of the detection (in terms of frame rate, but also the sample area used for color detection) based on the velocity of the user or other input variables.

Proof of Concept:

An Arduino with a potentiometer to represent the velocity of the user and three vibration motors (tactors) to represent the color data.  These physical sensors and actuators are connected to p5.js code, which uses a video camera and mouse cursor to select the point in a live video feed to extract color from. The color is then sent to the Arduino, where it is processed and sent as output signals to the tactors. A switch is also available for when the user doesn’t want vibrations clouding their mind.
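A minimal sketch of the Arduino side, assuming p5.js sends each sampled pixel as three raw bytes (R, G, B) over serial, with tactors on pins 3/5/6, the potentiometer on A0, and the switch on pin 2; the framing and pins are my stand-ins:

const int TACTORS[3] = {3, 5, 6};   // one tactor per color channel (R, G, B)
const int SWITCH_PIN = 2;           // lets the user silence the vibrations

void setup() {
  Serial.begin(9600);
  pinMode(SWITCH_PIN, INPUT_PULLUP);
  for (int i = 0; i < 3; i++) pinMode(TACTORS[i], OUTPUT);
}

void loop() {
  if (Serial.available() >= 3) {    // one R, G, B triple from p5.js
    for (int i = 0; i < 3; i++) {
      int level = Serial.read();    // 0..255 channel intensity
      // switch closed = stay silent; open = feel the color
      analogWrite(TACTORS[i], digitalRead(SWITCH_PIN) == HIGH ? level : 0);
    }
  }
  // the "velocity" pot sets how often a new color sample is felt
  delay(map(analogRead(A0), 0, 1023, 50, 1000));
}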

Fritzing Sketch:

The Fritzing sketch shows how the potentiometer and switch are set up to feed information into the Arduino, as well as how the tactors are connected to the Arduino to receive outputs. Not pictured is that the Arduino would have to be connected to the laptop that houses the p5.js code.

Proof of Concept Sketches:

The user’s velocity is sensed, which alters the rate at which color is processed and sent back to the user through vibrations. The user data could also be extended to change the range of color sensing applied to the live feed, to feel the colors of a general range.

Proof of Concept Video:

Files:

Physical Crit_Final

Alarm Bed 2.0

Problem: Waking up is hard. Lights don’t work. Alarms don’t work. Being yelled at doesn’t work. You have to be moved to be woken up. But what if no one is there to shake you awake?

Target Audience: Deaf/hearing-impaired children. Proper sleep is a habit that needs to be taught from a young age to help lead to healthier lifestyles later in life. Children with hearing impairments face another barrier to “learning how and when” to sleep because they miss some of the important audio cues that trigger and aid sleep. First, there is a high level of sleep insecurity for children with hearing impairment. Children who can hear are usually comforted by their parents’ voices during a bedtime story or by the normal sounds they hear around the house; however, children who cannot hear do not have that luxury – they have to face total silence and darkness alone, which is a scary thing for people of all ages. In addition, some of these children use hearing aids during the day, which makes the contrast even sharper and more noticeably disturbing: some noise throughout the day, then total silence.

General solution: A vibrating bed that takes in various sources of available data and inputs to get you out of bed. Everyone has different sleep habits and different life demands, so depending on why you are being woken up, the bed will shake in a certain way. How?

  • Continuous data streams
    • Google Calendar: if an accurate calendar is kept by a child’s parents and they can program certain morning routines like cleaning up and eating breakfast, your bed could learn when it should wake you up for work/school
      • Can make this decision depending on traffic patterns/weather/other people’s events (kids, friends, etc.)
      • This process could also teach kids how to plan their mornings and establish a routine.
    • Sleep data: lots of research has been done on sleep cycles and various pieces of technology can track biological data like heart rate and REM stage, your bed could learn your particular patterns over time and wake you up at a time that is optimal within your sleep cycle
  • Situational
    • High Frequency noises: if a fire alarm or security alarm goes off, a child that cannot hear would usually be forced to wait for their parents to grab them and go. This feature could wake them up sooner and help expedite a potential evacuation process
    • “Kitchen wake-up button”: Kids do not always follow directions… so here, a parent can tap a button in a different room to shake the bed without having to go into their child’s room
      • Button system also has a status LED that shows
        • Off = not in bed
        • On = in bed
        • Flashing = in bed, but alarm activated
  • Interacting with User
    • Snooze: if the sleeper hits a button next to their bed three times in a row, then the alarm will turn off and not turn back on
    • Insecure Sleep Aid: if the bed senses that a child is tossing and turning for a certain amount of time as they get into bed, then it can lightly rumble to simulate rubbing a child’s back or “physical white noise feedback”
    • Parents: if your kid gets out of bed, you can have your bed shake as well if this is linked throughout the house

Proof of Concept: I connected the following pieces of hardware to create this demo (a minimal sketch of the control logic follows the list):

  • Transducers: shake the bed at a given frequency
    • One located by the sleeper’s head
    • One located by the sleeper’s feet
  • Potentiometer: represents an audio source
    • The transducer turns on at different intensities depending on where the potentiometer is set
  • Push button 1: represents the “kitchen wake-up button”, works as an interrupt within the program
  • LED : a part of “kitchen wake-up button” that represents status of bed
  • Push button 2: represents the “snooze” feature of the bed, where a sleeper can turn off the rumbling to go back to sleep
  • Flex Resistor: represents a sensor in the bed that determines if someone is in the bed or not
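The sketch below ties those pieces together; the pin numbers and thresholds are my stand-ins, and the triple-press snooze count is simplified to a single press:

const int TRANSDUCER = 5;      // PWM drive for one bed transducer
const int LED_PIN    = 13;     // status LED on the kitchen button
const int SNOOZE_PIN = 3;      // push button 2
const int FLEX_PIN   = A1;     // flex resistor in the mattress
volatile bool alarmOn = false;

void kitchenButton() {         // push button 1: interrupt fires the alarm remotely
  alarmOn = true;
}

void setup() {
  pinMode(TRANSDUCER, OUTPUT);
  pinMode(LED_PIN, OUTPUT);
  pinMode(SNOOZE_PIN, INPUT_PULLUP);
  pinMode(2, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(2), kitchenButton, FALLING);
}

void loop() {
  bool inBed = analogRead(FLEX_PIN) > 300;   // is someone lying on the flex resistor?

  // status LED: off = not in bed, on = in bed, flashing = in bed + alarm activated
  if (!inBed)        digitalWrite(LED_PIN, LOW);
  else if (!alarmOn) digitalWrite(LED_PIN, HIGH);
  else               digitalWrite(LED_PIN, (millis() / 250) % 2);

  if (alarmOn) {
    // potentiometer stands in for an audio source: louder "sound" = stronger shaking
    analogWrite(TRANSDUCER, map(analogRead(A0), 0, 1023, 0, 255));
    if (digitalRead(SNOOZE_PIN) == LOW) {    // snooze pressed: stop the rumble
      alarmOn = false;
      analogWrite(TRANSDUCER, 0);
    }
  } else {
    analogWrite(TRANSDUCER, 0);
  }
}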

Summary: A device like this could help children (and adults) who cannot hear to feel more secure in their sleep and encourage healthier sleep patterns. One of the biggest potential challenges with the device would be finding a powerful enough motor/transducer that can produce a variety of vibrations across a heavy bed/frame (and the quieter the better, for the rest of the home’s sake). Even with that challenge, though, everyone should have a good night’s sleep and this could be a way to provide that.

Files: KineticsCrit

Crit 2: Kinetic

Due 11:59pm, 28 October.

Combine kinetic inputs, outputs, data, and state machines to create a physically interactive system that changes interaction based on inputs and logic.

The example I gave early this semester was a “doorbell” for someone who cannot hear.

Inputs: doorbell, physical knock, person detector

Interaction: use inputs to determine output.  Doorbell + no person detected means someone rang the bell and walked away – was this a UPS/FedEx delivery?  Knock + person present – is someone coming to visit?  To sell a product?  “Secret” knock pattern used by friends + a person present – one of your friends has come to visit.

Output: Create appropriate output for the results of the interaction process.  A UPS/FedEx drop-off is lower priority than a friend coming for a visit.
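As a concrete starting point, the example’s interaction logic could be expressed as a small state machine. This is a sketch with stand-in inputs – a bell button, a PIR person detector, and a piezo knock sensor on A0 – not a prescribed solution:

enum Visitor { NONE, DELIVERY, VISITOR, FRIEND };

const int BELL_PIN = 2, PIR_PIN = 3, KNOCK_PIN = A0;

bool secretKnockHeard() {
  // placeholder: a real version would match a timed knock pattern from the piezo
  return analogRead(KNOCK_PIN) > 900;
}

void setup() {
  pinMode(BELL_PIN, INPUT_PULLUP);
  pinMode(PIR_PIN, INPUT);
  Serial.begin(9600);
}

void loop() {
  bool bell   = digitalRead(BELL_PIN) == LOW;
  bool person = digitalRead(PIR_PIN) == HIGH;

  Visitor v = NONE;
  if (bell && !person)                   v = DELIVERY;  // rang and walked away
  else if (secretKnockHeard() && person) v = FRIEND;    // the secret pattern
  else if (bell && person)               v = VISITOR;   // a visit, or a sales pitch

  // output by priority: a friend's visit outranks a package drop-off
  if      (v == FRIEND)   Serial.println("a friend is at the door");
  else if (v == VISITOR)  Serial.println("someone is at the door");
  else if (v == DELIVERY) Serial.println("possible UPS/FedEx drop-off");

  delay(200);
}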