The schedule has been updated to reflect changes in crit dates.
Interrupts and kinetic interaction
Interrupts as inputs to state machines
Human interruption as part of a UI
Environmental interruption as part of interaction, state machines that can self-correct or shutdown
Interrupts that limit motion — end-stop switches
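As a minimal sketch of the last item (pin numbers and wiring are my own assumptions, not tied to any particular project), an end-stop switch on an interrupt-capable pin can flag a limit the moment it is hit:

```cpp
// Minimal sketch: an end-stop switch on an interrupt pin stops a motor.
// Pin numbers are arbitrary; pin 2 is interrupt-capable on an Arduino Uno.
const int endStopPin = 2;
const int motorPin   = 9;

volatile bool limitHit = false;   // set inside the ISR, read in loop()

void onEndStop() {
  limitHit = true;                // keep the ISR short: just set a flag
}

void setup() {
  pinMode(endStopPin, INPUT_PULLUP);        // switch pulls the pin LOW when pressed
  pinMode(motorPin, OUTPUT);
  attachInterrupt(digitalPinToInterrupt(endStopPin), onEndStop, FALLING);
}

void loop() {
  if (limitHit) {
    digitalWrite(motorPin, LOW);  // cut the motor as soon as the limit is reached
  } else {
    digitalWrite(motorPin, HIGH); // otherwise keep driving toward the end stop
  }
}
```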
Morse code is commonly received through either visual or audible feedback; however, this can be challenging for those who are blind, deaf, or both. Additionally, I had next to no experience using hardware interrupts on Arduino, so I wanted to find a good application of interrupts for this assignment.
Proposed Solution:
I wanted to create a system that allows Morse code senders to quickly adapt their messages into signals that people without sight or hearing can understand. To do this, I created two physical button inputs: the first button directly controls an LED (which could easily be a buzzer) used to send the Morse code signal; the second toggles a vibrating motor that buzzes in conjunction with the LED. In this way, one can change the message being sent from purely visual to both visual and tactile at any time.
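A minimal Arduino sketch of this behavior might look like the following; the pin numbers and the 200 ms debounce window are assumptions, not taken from the actual build:

```cpp
// Sketch of the two-button Morse sender described above (pin numbers assumed).
// Button 1 mirrors its state onto the LED; button 2 toggles whether the
// vibration motor buzzes along with the LED.
const int signalButtonPin = 2;   // held down to key the Morse signal
const int togglePin       = 3;   // toggles tactile output on/off
const int ledPin          = 9;
const int motorPin        = 10;

volatile bool tactileOn = false;
volatile unsigned long lastToggle = 0;

void onToggle() {
  // simple debounce: ignore presses within 200 ms of the last one
  unsigned long now = millis();
  if (now - lastToggle > 200) {
    tactileOn = !tactileOn;
    lastToggle = now;
  }
}

void setup() {
  pinMode(signalButtonPin, INPUT_PULLUP);
  pinMode(togglePin, INPUT_PULLUP);
  pinMode(ledPin, OUTPUT);
  pinMode(motorPin, OUTPUT);
  attachInterrupt(digitalPinToInterrupt(togglePin), onToggle, FALLING);
}

void loop() {
  bool keyed = (digitalRead(signalButtonPin) == LOW);  // pressed = LOW with pull-up
  digitalWrite(ledPin, keyed ? HIGH : LOW);
  digitalWrite(motorPin, (keyed && tactileOn) ? HIGH : LOW);
}
```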
This project is inspired by the Ubicoustics project here at CMU in the Future Interfaces Group, and by an assignment for my Machine Learning + Sensing class where we taught a model to differentiate between various appliances using recordings made with our phones. This course is taught by Mayank Goel of Smash Lab, and is a great complement to Making Things Interactive.
With these capabilities in mind, I created a prototype for a system that provides physical feedback (a tap on your wrist) when it hears specific types of sounds, in this case anything crossing a threshold in a particular audio frequency band. This could be developed into a more sophisticated system with more tap options and a machine learning classifier to recognize specific signals. Here's a quick peek.
On the technical side, things are pretty straightforward, but all of the key elements are there. The servo connection is standard and the code right now just looks for any signal from the computer doing the listening to trigger a toggle. The messaging is simple and short to minimize any potential lag.
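As a rough sketch of what that Arduino side could look like (the servo pin, angles, and baud rate are assumptions), any byte arriving over serial simply toggles the tapper:

```cpp
// Arduino side of the tap prototype: any byte arriving over serial toggles
// the servo between two positions to produce a tap.
#include <Servo.h>

Servo tapper;
bool tapped = false;

void setup() {
  Serial.begin(9600);
  tapper.attach(9);       // signal pin for the servo
  tapper.write(0);
}

void loop() {
  if (Serial.available() > 0) {
    Serial.read();        // any message from the listening computer...
    tapped = !tapped;     // ...toggles the tapper
    tapper.write(tapped ? 45 : 0);
  }
}
```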
On the Python side, audio is captured with pyaudio, transformed into the frequency spectrum with scipy signal processing, and scaled down to 32 frequency bins using OpenCV (a trick I learned in the ML+S class). Bins 8 and 9 are then watched for crossing a threshold, which is the equivalent of saying: when there's a spike somewhere around 5 kHz, toggle the motor.
With a bit more time and tinkering, a classifier could be trained in scikit-learn to trigger the tap only on certain sounds with high accuracy, say a microwave beeping that it's done or a fire alarm.
The system could also be part of a larger sensor network, aware of both real-world and virtual events, triggering unique taps for whichever events the user prefers.
For those who are blind or have impaired vision, perceiving the quality and form of the spaces they inhabit can be quite difficult (inspired by Daniel Kish's TED Talk that Ghalya posted in Looking Outward). This could have applications at various scales, from helping the visually impaired with way-finding to letting them experience the different spaces they occupy.
A General Solution:
A device that would scan a space using sonar, LIDAR, photography, 3D modeling, etc., then process that data and map it onto an interactive surface that is actuated to represent the space. The user would then be able to understand the space they are in on a larger scale or, on a smaller scale, identify potential tripping hazards as they move through an environment. The device would ideally be able to change scales to address different scenarios. Emergency scenarios would also be programmed into the model so that, in the case of fire or danger, the user would be able to find their way out of the space.
Proof of Concept:
An Arduino with potentiometers (ideally sonar or other spatial sensors) acts as the input to control some solenoids, which represent a more extensive network of physical actuators. When the sensors sense a closer distance, the solenoids pop out, and vice versa. The solenoids can only take digital outputs, but the ideal would be more analog actuation so that a more accurate representation of the space could be made. There are also two switches: one represents an emergency button that alerts the user that there is an emergency; the other represents a routing button (ideally connected to a network, but it could also be turned on by the user) that leads the solenoids to create a path out of the space to safety.
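A sketch of how that logic might be wired up in code is below; the pin assignments, thresholds, and the routing pulse pattern are all assumptions rather than the actual implementation:

```cpp
// Each potentiometer stands in for a distance sensor; its solenoid pops out
// when the "distance" reading crosses a threshold. Two switches act as
// interrupts for the emergency alert and the routing mode.
const int potPins[2]      = {A0, A1};
const int solenoidPins[2] = {7, 8};
const int emergencyPin    = 2;     // interrupt-capable on an Uno
const int routePin        = 3;

volatile bool emergency = false;
volatile bool routing   = false;

void onEmergency() { emergency = true; }
void onRoute()     { routing = true; }

void setup() {
  for (int i = 0; i < 2; i++) pinMode(solenoidPins[i], OUTPUT);
  pinMode(emergencyPin, INPUT_PULLUP);
  pinMode(routePin, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(emergencyPin), onEmergency, FALLING);
  attachInterrupt(digitalPinToInterrupt(routePin), onRoute, FALLING);
}

void loop() {
  if (emergency || routing) {
    // In emergency/routing mode, pulse the solenoids in sequence to trace
    // a path out of the space (placeholder pattern).
    for (int i = 0; i < 2; i++) {
      digitalWrite(solenoidPins[i], HIGH);
      delay(200);
      digitalWrite(solenoidPins[i], LOW);
    }
  } else {
    // Normal mapping mode: a closer "distance" (lower reading) pops the pin out.
    for (int i = 0; i < 2; i++) {
      int reading = analogRead(potPins[i]);
      digitalWrite(solenoidPins[i], reading < 512 ? HIGH : LOW);
    }
  }
}
```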
Fritzing Sketch:
The Fritzing sketch shows how the proof of concept's solenoids are wired to a separate power source and set up to receive signals from the Arduino, as well as how all of the input devices are connected to the Arduino to send in data. The transducer for emergencies is represented by a microphone, which has a similar wiring diagram. Not pictured: the Arduino and the battery jack would have to be connected to a battery source.
Proof of Concept Sketches:
The spatial sensor scans the space the user is occupying, which is then actuated into a physical representation and arrayed to create more specificity for the user to touch and perceive. This system would be supplemented by an emergency system that both alerts the user that an emergency is occurring and shows them how to make their way to safety.
Rabbit Laser Cutters have dark UV-protective paneling to protect users from being exposed to bright, potentially vision-damaging light. However, laser-cut pieces can begin smoking and even catch fire. This presents a problem: how can users respond to fire and smoke events?
Solution
A visibility detection system paired with a motor would warn users of an incoming smoke or fire issue by detecting drastic increases or decreases in visibility. The visibility detection system would be placed inside the laser cutter, while the motor would be attached to a wearable device or mounted atop the laser cutter to bump against it repeatedly in different patterns, creating different noises based on the situation as well as vibrations on the user's person.
Proof of Concept
A series of light sensors would serve as the detection system, sensing whether vision is obstructed: too bright signifies a fire, too dim signifies smoke. A solenoid would tap in a slow pattern to signify smoke and in a hurried, frantic pattern to signify fire. The solenoid would either be attached to a wearable device or mounted atop the cutter itself, tapping against the machine to make noise and signal the user to press the emergency stop.
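A possible sketch of the tap patterns, assuming an analog light sensor such as a photoresistor stands in for the visibility sensor (pins and thresholds are likewise assumptions):

```cpp
// Tap out different patterns depending on whether the cutter interior
// reads as too bright (fire) or too dim (smoke).
const int sensorPin   = A0;
const int solenoidPin = 8;
const int brightThreshold = 800;   // very bright -> possible fire
const int dimThreshold    = 200;   // very dim    -> possible smoke

void tap(int count, int gapMs) {
  for (int i = 0; i < count; i++) {
    digitalWrite(solenoidPin, HIGH);
    delay(50);
    digitalWrite(solenoidPin, LOW);
    delay(gapMs);
  }
}

void setup() {
  pinMode(solenoidPin, OUTPUT);
}

void loop() {
  int level = analogRead(sensorPin);
  if (level > brightThreshold) {
    tap(10, 100);   // hurried, frantic tapping: fire
  } else if (level < dimThreshold) {
    tap(3, 600);    // slow tapping: smoke
  }
  delay(250);
}
```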
How do you create a universal communication method that works for everyone, whether they are blind, deaf, or both? Imagine a universal translation machine that can….
Proposal
To tackle this, I decided to use a tactile way to feel and translate Morse code. This is done through a combination of:
haptic feedback → as someone is communicating with you, the translator device vibrates the Morse code pattern so you can feel it tactilely. This is ideal not only for someone who cannot see or hear, but also if you want to be extra discreet and not make any noise or visual distractions.
visual feedback → adding to that, the visual feedback provided is twofold: through letter translation and through the blinking of an LED. The letter translation is especially ideal for someone who might not necessarily know Morse Code.
audio feedback → finally, audio feedback through the buzzer helps you distinguish by sound when what you are pressing is a dot(*) or a dash(-). When you press long enough for the device to recognize that it is not a dot anymore, but a dash, the tone changes.
The hope here is that by providing different modes of feedback, the translator can be more accessible.
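A rough sketch of the dot/dash timing and the three feedback channels might look like this; the pins, the buzzer tones, and the 250 ms dash threshold are assumptions, not the translator's actual values:

```cpp
// Key press drives LED + vibration motor + buzzer; the buzzer tone changes
// once the press has been held long enough to count as a dash.
const int keyPin    = 2;    // Morse key (button)
const int ledPin    = 9;    // visual feedback
const int motorPin  = 10;   // haptic feedback
const int buzzerPin = 11;   // audio feedback
const unsigned long dashThreshold = 250;  // held longer than this = dash

void setup() {
  pinMode(keyPin, INPUT_PULLUP);
  pinMode(ledPin, OUTPUT);
  pinMode(motorPin, OUTPUT);
  pinMode(buzzerPin, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(keyPin) == LOW) {            // key pressed
    unsigned long pressStart = millis();
    digitalWrite(ledPin, HIGH);
    digitalWrite(motorPin, HIGH);
    tone(buzzerPin, 440);                      // start with the "dot" tone
    while (digitalRead(keyPin) == LOW) {
      if (millis() - pressStart > dashThreshold) {
        tone(buzzerPin, 880);                  // tone changes once it's a dash
      }
    }
    digitalWrite(ledPin, LOW);
    digitalWrite(motorPin, LOW);
    noTone(buzzerPin);
    // report the symbol; letter translation would build on these
    Serial.print((millis() - pressStart > dashThreshold) ? '-' : '*');
  }
}
```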
Lip reading is difficult, and a large portion of the deaf community chooses not to read lips. On the other hand, lip reading is a way for many deaf people to feel connected to a world they often feel removed from. I have linked a powerful video where one such lip reader talks about the difficulty of lip reading and the pay-off she experiences by being able to interact and connect with anyone she wants.
Lip reading relies on being able to see the lips of the person speaking. When you are interacting with one person, this is not an issue, but what if you’re in a group setting? How do you keep track of who is talking and where to look?
Idea
Using four sound detectors or microphones, detect the direction the sound is coming from. Alert the user of a change by using a servo motor to point in the direction of the sound. This allows people who are hard of hearing to tell who is talking in a group setting and focus on the lips of the person currently speaking.
Proof Of Concept
To demonstrate this idea, I decided to use two sound detectors and a servo motor. My interrupt is a switch that can be used to override the process if, for example, there are too many people talking or the user no longer needs the device.
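A sketch of that proof of concept could look like the following; the pins, servo angles, and the comparison margin are assumptions rather than the actual code:

```cpp
// The servo points toward whichever sound detector reads louder; the
// switch interrupt disables the pointer entirely.
#include <Servo.h>

const int leftMicPin  = A0;   // envelope output of sound detector 1
const int rightMicPin = A1;   // envelope output of sound detector 2
const int overridePin = 2;    // interrupt-capable switch

Servo pointer;
volatile bool overridden = false;

void onOverride() {
  overridden = (digitalRead(overridePin) == LOW);  // switch closed = override on
}

void setup() {
  pinMode(overridePin, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(overridePin), onOverride, CHANGE);
  pointer.attach(9);
  pointer.write(90);          // start pointing straight ahead
}

void loop() {
  if (overridden) return;     // device switched off by the interrupt

  int left  = analogRead(leftMicPin);
  int right = analogRead(rightMicPin);

  if (left > right + 50) {
    pointer.write(45);        // sound is louder on the left
  } else if (right > left + 50) {
    pointer.write(135);       // sound is louder on the right
  }
  delay(100);
}
```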
Problem: Waking up is hard. Lights don’t work. Alarms don’t work. Being yelled at doesn’t work. You have to be moved to be woken up. But what if no one is there to shake you awake?
General solution: A vibrating bed that takes in various sources of available data and inputs to get you out of bed. Everyone has different sleep habits and different life demands, so depending on why you are being woken up, the bed will shake in a certain way. How?
Potential continuous data streams
Google Calendar: if an accurate calendar is kept and you can program in certain morning routines like cleaning up and eating breakfast, your bed could learn over time when it should wake you up for work or school, depending on traffic patterns, weather, and other people's events (kids, friends, etc.)
Sleep data: lots of research has been done on sleep cycles, and various pieces of technology can track biological data like heart rate and REM stage; your bed could learn your particular patterns over time and wake you up at a time that is optimal within your sleep cycle
Situational data streams
High-frequency noises: if a baby cries in the room next door or one of your home's alarms goes off, your bed could shake in a faster, more violent manner to make sure it gets your attention
“Kitchen wake-up button”: if one of your roommates or family members won’t get out of bed, you can flip a switch in a different room to shake the bed without having to go into their room
Proof of Concept: I connected the following pieces of hardware to create this demo:
Servo motor: represents the shaking bed
Potentiometer: represents a timer, as well as higher frequency sounds (main sources of input/data)
The motor turns on to different intensities/patterns depending on where the potentiometer is set
Push button: represents the “kitchen wake-up button”, works as an interrupt within the program
Slide switch: represents the “off button” for the bed, works as an interrupt within the program
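A rough sketch of how these pieces could fit together (pins and the shake pattern are assumptions, not the actual demo code):

```cpp
// The potentiometer sets how hard the "bed" servo shakes; the push button
// and slide switch both fire interrupts.
#include <Servo.h>

const int potPin    = A0;
const int wakeupPin = 2;    // "kitchen wake-up button"
const int offPin    = 3;    // bed "off" switch

Servo bed;
volatile bool forceWake = false;
volatile bool bedOff    = false;

void onWakeup() { forceWake = true; }
void onOff()    { bedOff = true; }

void setup() {
  pinMode(wakeupPin, INPUT_PULLUP);
  pinMode(offPin, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(wakeupPin), onWakeup, FALLING);
  attachInterrupt(digitalPinToInterrupt(offPin), onOff, FALLING);
  bed.attach(9);
}

void loop() {
  if (bedOff) {               // the off switch stops everything
    bed.write(90);
    return;
  }

  // Map the potentiometer (timer / noise-level stand-in) to a shake delay:
  // a shorter delay means a faster, more violent shake. The wake-up button
  // forces the fastest shake regardless of the dial.
  int shakeDelay = forceWake ? 60 : map(analogRead(potPin), 0, 1023, 400, 60);

  bed.write(60);
  delay(shakeDelay);
  bed.write(120);
  delay(shakeDelay);
}
```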
Chance and I went to Bricolage Production Company’s latest creation, Project Amelia, tonight. It’s “a next-level immersive theater experience that invites you to the R&D lab of Aura, one of the world’s most innovative tech giants, to participate in the launch of a groundbreaking intelligence product like no other.” Their words, not mine. It’s a cool take on traditional theater with lots of clever uses of simple interactive devices that we could make for this class. You wear an RFID tag bracelet throughout the show to interact with various games and demos that are aimed at teaching guests about the power of artificial intelligence. The show runs for a few more weeks, so check it out!