Kinetic Crit: Touch Mouse

Concept

Whiteboards and other hand-drawn diagrams are an integral part of day-to-day life for designers, engineers, and businesspeople of all types. They bridge the gap between the capabilities of formal language and human experience, and have existed as part of human communication for thousands of years.

However powerful they may be, drawings depend on the observer’s power of sight. Why does this have to be? People without sight have been shown to be fully capable of spatial understanding, and have found their own ways of navigating space with their other senses. What if we could introduce a way for them to similarly absorb diagrams and drawings by translating them into touch?

touch mouse prototype

The touch mouse aims to do just that. A webcam faces the whiteboard, held just above it on ball casters that minimize smearing of the drawing. The image collected by the camera is processed to find the thresholds between light and dark areas, and this triggers servo motors that lift and drop material under the user’s fingers to indicate dark spots above, below, or to either side of the current location. Using these indicators, the user can feel where the lines begin and end, and follow the traces of the diagram in space.

https://youtu.be/y57xh_YXuHw

Inspiration

The primary inspiration for this project was the video Jet showed in class of special paper that a sighted person can draw on to create a raised image a blind person can feel and understand. After beginning work on the prototype, I also discovered a project at CMU that uses a robot to trace directions spatially, assisting vision-impaired users with wayfinding.

Similarly, for the physical build, I was heartened to see Engelbart’s original mouse prototype. It served double duty as inspiration for the form factor and as an example of a rough prototype that could later be refined into a sleek tool for everyday use.

The first computer mouse


The Build and Code

The components themselves are pretty straightforward. Four servo motors lift and drop the physical pixels for the user to feel, and a short serial burst of 1s and 0s from the Python side tells the Arduino which pixels should be in which position.

The Python code uses OpenCV to read video from the webcam, convert it to grayscale, threshold it into black and white, and then average the result down into the four pixel regions for left, right, up, and down.
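A minimal sketch of that pipeline, assuming the Arduino listens on a serial port and expects one bit per region; the port name, baud rate, region geometry, and “dark enough” threshold below are illustrative, not the project’s actual values:

```python
# Minimal pipeline sketch: threshold the webcam frame and send four
# region bits (left, right, up, down) to the Arduino over serial.
import cv2
import serial

ser = serial.Serial("/dev/ttyACM0", 9600)  # assumed port and baud rate
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Otsu's method picks the light/dark threshold automatically;
    # THRESH_BINARY_INV makes dark marker ink come out white (255).
    _, bw = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    h, w = bw.shape
    regions = {
        "left":  bw[:, : w // 4],
        "right": bw[:, 3 * w // 4 :],
        "up":    bw[: h // 4, :],
        "down":  bw[3 * h // 4 :, :],
    }
    # A region counts as "ink present" if enough of its pixels are set.
    bits = "".join("1" if r.mean() > 32 else "0" for r in regions.values())
    ser.write(bits.encode() + b"\n")  # e.g. b"0110\n" for right + up
```

Otsu thresholding adapts to changing whiteboard lighting; the real code may measure its thresholds differently.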

I hope to have the opportunity in the future to refine the processing pipeline and the physical design, and perhaps even to add handwriting recognition for easier reading of labels. Until then, this design can be used to test the general viability of the concept.

Python and Arduino code:

wbtouch

Crit #2 – Grumpy Toaster

Problem: Toasters currently use only their *pop* (and occasionally a beep) to communicate that they are finished toasting whatever is inside them.  It is also difficult to tell the state of the toaster’s various enclosed elements: whether the crumb tray needs to be emptied, whether the heating elements need to be wiped off, and so on.  I believe the toaster could communicate a lot more with its “done” state, in ways that would be inclusive of a variety of different user types.

Solution: More or less, a toaster that gets grumpy if it is left in a state of disrepair.  Toasters are almost always associated with an energetic (and occasionally annoying) burst to start mornings off, but what if the toaster’s enthusiasm were dampened?  Because users generally pay at least partial attention to their toaster, a noticeably different *pop* and kinetic output could alert them that certain parts of the toaster need attention.  For example, if the toaster badly needed cleaning, it would slowly push the bread out instead of happily popping it up.  Both the visual and audio differences generated by modifying this kinetic output would be noticeable.

Proof of Concept: I constructed a model toaster (sans heating elements) using a small servo and a rising platform.  Because a variety of sensing methods for crumbs did not work, “dirtiness” is represented by a potentiometer.  I’ve substituted a light push switch for the common lever to accommodate a broader range of possible physical actions.

The servo drives the emotion of the toaster.  It can sharply or lethargically push its contents out, conveying the toaster’s current state to the user.  Once the toast is removed, the weight of the next item placed inside lowers the platform back onto the servo.
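As a logic sketch, the dirtiness-to-speed mapping could look like the following, written in Python for readability (the actual build is an Arduino sketch; the angle range, delays, and names here are assumptions):

```python
# Logic sketch: map the "dirtiness" potentiometer reading to how
# lethargically the servo ejects the toast.
import time

def eject(dirtiness, servo_write,
          step_delay_clean=0.002, step_delay_dirty=0.03):
    """Sweep the servo from rest (0) to eject (120 degrees).

    dirtiness: 0.0 (spotless) .. 1.0 (filthy), read from the pot.
    servo_write: callback that moves the servo to a given angle.
    """
    # A clean toaster pops quickly; a dirty one pushes slowly and grumpily.
    delay = step_delay_clean + dirtiness * (step_delay_dirty - step_delay_clean)
    for angle in range(0, 121):
        servo_write(angle)
        time.sleep(delay)

# Example: a half-dirty toaster, printing angles instead of driving hardware.
eject(0.5, lambda a: print(a, end=" "))
```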

Files + Video: Drive link

Discussion: This model is ripe for extension.  I originally designed this around the idea of overstuffing your toaster, something I do frequently that not only toasts the bread poorly but surely dumps more crumbs than necessary into the bottom tray.  Unfortunately, I couldn’t figure out a way to sense how stuffed the toaster was, so I went with straight cleanliness instead.  But the overall idea of designing emotionally (grumpy toaster, fearful car back-up sensor) has helped me understand this class a lot better, and I hope to continue that line of thinking with more physical builds like this.


Feeling Color

Problem:

For people who are either color-blind or blind, perceiving color, which is embedded in many aspects of our lives as an encoder of information, can be very challenging, if not impossible. Color also adds another dimension to our experiences and enhances them.

A General Solution:

A device that detects color and sends the information to actuators, which display it through vibration. Ideally, the system would vary the specificity and accuracy of detection (frame rate, but also the sample area used for color detection) based on the user’s velocity or other input variables.

Proof of Concept:

An Arduino with a potentiometer to represent the velocity of the user and three vibration motors (tactors) to represent the color data.  These physical sensors and actuators are connected to p5.js code, which uses a video camera and the mouse cursor to select the point in a live video feed to extract color from. The color is then sent to the Arduino, where it is processed and sent as output signals to the tactors. A switch is also available for when the user doesn’t want vibrations clouding their mind.
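The core mappings might look something like this sketch, written in Python for illustration (the build splits this logic between p5.js and the Arduino; the function names, scale factors, and bounds are assumptions, not the project’s actual code):

```python
# Logic sketch: one tactor per color channel, plus a velocity-driven
# sampling rate and a master on/off switch.

def color_to_tactors(r, g, b, enabled=True):
    """Return PWM duty cycles (0-255) for the three tactors, one per
    color channel. The switch zeroes everything when the user wants
    no vibration."""
    if not enabled:
        return (0, 0, 0)
    return (r, g, b)

def sample_interval(velocity, fastest=0.05, slowest=0.5):
    """Faster movement -> more frequent color sampling.

    velocity: 0.0 (still) .. 1.0 (moving quickly), from the potentiometer.
    Returns seconds between samples, clamped between the two bounds.
    """
    v = max(0.0, min(1.0, velocity))
    return slowest - v * (slowest - fastest)

# Example: a mid-orange pixel while moving at half speed.
print(color_to_tactors(255, 128, 0))  # (255, 128, 0)
print(sample_interval(0.5))           # 0.275 seconds between samples
```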

Fritzing Sketch:

The Fritzing sketch shows how the potentiometer and switch are set up to feed information into the Arduino, as well as how the tactors are connected to the Arduino to receive outputs. Not pictured: the Arduino must also be connected to the laptop that runs the p5.js code.

Proof of Concept Sketches:

The user’s velocity is sensed, which alters the rate at which color is processed and sent back to the user through vibration. The user data could also be extended to change the range of color sensing applied to the live feed, letting the user feel the colors of a general area.

Proof of Concept Video:

Files:

Physical Crit_Final

Alarm Bed 2.0

Problem: Waking up is hard. Lights don’t work. Alarms don’t work. Being yelled at doesn’t work. You have to be moved to be woken up. But what if no one is there to shake you awake?

Target Audience: Deaf/hearing-impaired children. Proper sleep is a habit that needs to be taught from a young age to help lead to healthier lifestyles later in life. Children with hearing impairments face an extra barrier to learning how and when to sleep because they miss some of the important audio cues that trigger and aid sleep. First, there is a high level of sleep insecurity for children with hearing impairment. Children who can hear are usually comforted by their parents’ voices during a bedtime story or by the normal sounds they hear around the house; children who cannot hear do not have that luxury. They face total silence and darkness alone, which is scary for people of all ages. In addition, some of these children use hearing aids during the day, which makes the contrast even sharper: some noise throughout the day, then total silence at night.

General solution: A vibrating bed that takes in various sources of available data and inputs to get you out of bed. Everyone has different sleep habits and different life demands, so depending on why you are being woken up, the bed will shake in a certain way. How?

  • Continuous data streams
    • Google Calendar: if the child’s parents keep an accurate calendar and program in morning routines like cleaning up and eating breakfast, the bed could learn when it should wake the child up for school
      • It can make this decision based on traffic patterns, weather, and other people’s events (kids, friends, etc.)
      • This process could also teach kids how to plan their mornings and establish a routine.
    • Sleep data: lots of research has been done on sleep cycles, and various pieces of technology can track biological data like heart rate and REM stage; the bed could learn your particular patterns over time and wake you at an optimal point within your sleep cycle
  • Situational
    • High-frequency noises: if a fire alarm or security alarm goes off, a child who cannot hear would usually be forced to wait for their parents to grab them and go. This feature could wake them up sooner and help expedite a potential evacuation process
    • “Kitchen wake-up button”: Kids do not always follow directions… so here, a parent can tap a button in a different room to shake the bed without having to go into their child’s room
      • Button system also has a status LED that shows
        • Off = not in bed
        • On = in bed
        • Flashing = in bed, but alarm activated
  • Interacting with User
    • Snooze: if the sleeper hits a button next to the bed three times in a row, the alarm will turn off and not turn back on
    • Insecure Sleep Aid: if the bed senses that a child is tossing and turning for a certain amount of time as they get into bed, then it can lightly rumble to simulate rubbing a child’s back or “physical white noise feedback”
    • Parents: if your kid gets out of bed, you can have your bed shake as well if this is linked throughout the house

Proof of Concept: I connected the following pieces of hardware to create this demo (a sketch of the control logic follows the list):

  • Transducers: shakes bed at a given frequency
    • One located by the sleeper’s head
    • One located by the sleeper’s feet
  • Potentiometer: represents an audio source
    • The transducer turns on at different intensities depending on where the potentiometer is set
  • Push button 1: represents the “kitchen wake-up button”, works as an interrupt within the program
  • LED : a part of “kitchen wake-up button” that represents status of bed
  • Push button 2: represents the “snooze” feature of the bed, where a sleeper can turn off the rumbling to go back to sleep
  • Flex Resistor: represents a sensor in the bed that determines if someone is in the bed or not
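
To make the interaction logic concrete, here is a small state-machine sketch in Python (the demo itself runs on an Arduino; the LED states and triple-press snooze follow the write-up, while names and thresholds are assumptions):

```python
# State-machine sketch of the demo logic, in Python for readability.

def in_bed(flex_reading, threshold=0.5):
    """Flex resistor in the mattress: a reading above the threshold
    means someone is lying on the bed."""
    return flex_reading > threshold

def led_status(occupied, alarm_active):
    """Kitchen-button LED: off = not in bed, on = in bed,
    flashing = in bed with the alarm activated."""
    if not occupied:
        return "off"
    return "flashing" if alarm_active else "on"

class Bed:
    def __init__(self):
        self.alarm_active = False
        self.snooze_presses = 0

    def kitchen_button(self):
        # Parent's remote wake-up; acts like an interrupt in the firmware.
        self.alarm_active = True
        self.snooze_presses = 0

    def snooze_button(self):
        # Three presses in a row turn the alarm off and keep it off.
        self.snooze_presses += 1
        if self.snooze_presses >= 3:
            self.alarm_active = False

bed = Bed()
bed.kitchen_button()
print(led_status(in_bed(0.8), bed.alarm_active))  # flashing
for _ in range(3):
    bed.snooze_button()
print(led_status(in_bed(0.8), bed.alarm_active))  # on
```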

Summary: A device like this could help children (and adults) who cannot hear to feel more secure in their sleep and encourage healthier sleep patterns. One of the biggest potential challenges with the device would be finding a powerful enough motor/transducer that can produce a variety of vibrations across a heavy bed/frame (and the quieter the better, for the rest of the home’s sake). Even with that challenge, though, everyone should have a good night’s sleep and this could be a way to provide that.

Files: KineticsCrit

Crit 2: Kinetic

Due 11:59pm, 28 October.

Combine kinetic inputs, outputs, data, and state machines to create a physically interactive system that changes interaction based on inputs and logic.

The example I gave early this semester was a “doorbell” for someone who cannot hear.

Inputs: doorbell, physical knock, person detector

Interaction: use inputs to determine output.  Doorbell + no person detected means someone rang the bell and walked away; was this a UPS/FedEx delivery?  A knock with a person present: is someone coming to visit?  To sell a product?  A “secret” knock pattern used by friends, with a person present: one of your friends has come to visit.

Output: Create appropriate output for the results of the interaction process.  UPS/FedEx drop off is lower priority than a friend coming for a visit.
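A sketch of that decision logic in Python; the input names, the “secret” knock encoding, and the priority labels are illustrative:

```python
# Doorbell state logic: combine three inputs into an (event, priority)
# pair that the output stage can act on.

SECRET_KNOCK = (1, 0, 0, 1, 1)  # an assumed short/long knock encoding

def classify(bell_rung, knock_pattern, person_present):
    """Combine the inputs into an (event, priority) pair."""
    if bell_rung and not person_present:
        return ("rang and walked away: package drop-off?", "low")
    if knock_pattern == SECRET_KNOCK and person_present:
        return ("a friend has come to visit", "high")
    if knock_pattern and person_present:
        return ("visitor or salesperson at the door", "medium")
    if bell_rung and person_present:
        return ("someone is waiting at the door", "medium")
    return ("no event", "none")

print(classify(True, None, False))          # low-priority drop-off
print(classify(False, SECRET_KNOCK, True))  # high-priority friend
```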

Class notes: 22 October, 2019

Kinetic Input/Output

“The Demo”, showing off the mouse, chord keyboard, and social media.

Accessibility / Inclusion

Microsoft’s Inclusive Design and a PDF copy of their book.

What is accessible?

Are 30mm arcade buttons accessible? Do they simply interrupt, or do they provide constant state?  Are the buttons convex or concave?  How high are the guards around the buttons?  If you want to use Universal Design, how do you decide how big the button should be and where it should be located?

What is wrong with the E-Stop button in A10?

  • Unlit
  • Recessed button “hidden” in a guard
  • No signage on the wall like we have with fire extinguishers

Are controls like buttons the wrong answer?  Is better output the way to go?

tactile maps

presentation of research data on tactile map comparison

tactile graphics using “swell paper”

3D printing reference objects for the blind — what does a snowflake look like?  A butterfly?  A sailboat?

Assignment/schedule

Kinetic crit on 29 Oct.  Thursday office hours + Thursday class is a work day.

Assignment 6: Balance Checker for the Visually Impaired

Problem

I started with a few balance-related problems, particularly for visually impaired people. First, moving a pot with hot soup inside is really dangerous when you cannot check whether it is level. Another situation: when building furniture, especially shelves, it is important to keep everything horizontal so that things stay safe.

Balance also matters in many situations for sighted people, for example when taking a photo.


General Solution

How might we use tactile feedback to let users feel tilt or imbalance? I thought that vibration of varying intensity would be a great way to do it.


Proof of Concept

I decided to use the iPhone for a few reasons. First, it can generate a variety of tactile feedback using different patterns and intensities. Second, it was useful to take advantage of its embedded sensors. Lastly, I thought that a vibration-based application would provide higher accessibility to many people.

I split the tilt angle into ranges, each with its own tactile feedback intensity. When a user tilts the phone 5–20 degrees, it makes a light vibration. From 21–45 degrees, it generates a medium vibration. From 46–80 degrees, it generates an intense vibration. Lastly, from 81–90 degrees, it vibrates the most intensely (just like when it receives a call). I also mapped the angle to an RGB code, to change the colors accordingly.
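
The mapping logic, sketched in Python for illustration (the app itself runs on iOS; the angle ranges follow the write-up, while the RGB ramp is an assumption):

```python
# Logic sketch of the angle-to-feedback mapping.

def haptic_level(angle):
    """Map an absolute tilt angle (degrees) to a feedback intensity."""
    a = abs(angle)
    if a < 5:
        return "none"    # close enough to level: stay quiet
    if a <= 20:
        return "light"
    if a <= 45:
        return "medium"
    if a <= 80:
        return "intense"
    return "maximum"     # 81-90 degrees: vibrate like an incoming call

def angle_to_rgb(angle):
    """Shade from green (level) toward red (90 degrees) as tilt grows."""
    t = min(abs(angle), 90) / 90
    return (int(255 * t), int(255 * (1 - t)), 0)

print(haptic_level(12), angle_to_rgb(12))  # a light vibration, mostly green
```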


Video & Code