Crit 1: Cycling Tire Monitoring System

Problem

In the manufacturing of physical goods, it is often difficult to test for small defects. For products such as rubber cycling tubing, small, hard-to-detect perforations can become much more troublesome for customers over the product's lifecycle. Additionally, it can be difficult for active cyclists to notice minor leaks and gradual changes in tire pressure on long rides.

Solution

A mounted sensor array that detects both leak frequencies and changes in tire pressure could streamline the tire quality assurance process, and help signal the need for tire patching or tubing replacement on the fly for cyclists. Microphones that pick up sound in the frequency ranges common to leaks, together with an air pressure sensor that tracks significant departures from an ideal benchmark, can work in concert with visual indicators to help identify tears and deformations.

crit1

Proof of Concept

The sensor array would have a visual indicator tied to each sensor to give users an idea of where a leak is happening, or whether tire pressure is being lost. After attempting to use an LCD to provide descriptive error messaging, I decided to use a series of LEDs in concert with microphones to simulate air pressure leaks, as well as a flex sensor to simulate an air pressure sensor.
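
A minimal sketch of the indicator logic, assuming microphone envelope outputs on A0-A2, the flex sensor on A3, one LED per sensor, and placeholder thresholds:

// Hedged sketch: one LED per microphone, plus one for the pressure (flex) sensor.
// Pin assignments and thresholds are assumptions for illustration only.
const int micPins[3] = {A0, A1, A2};    // microphone envelope outputs
const int micLeds[3] = {2, 3, 4};       // LED paired with each microphone
const int flexPin = A3;                 // flex sensor standing in for air pressure
const int flexLed = 5;
const int LEAK_THRESHOLD = 600;         // placeholder envelope level for a hiss
const int PRESSURE_BASELINE = 500;      // placeholder "ideal" benchmark
const int PRESSURE_TOLERANCE = 100;

void setup() {
  for (int i = 0; i < 3; i++) pinMode(micLeds[i], OUTPUT);
  pinMode(flexLed, OUTPUT);
}

void loop() {
  // Light the LED nearest any microphone that hears a leak-like level.
  for (int i = 0; i < 3; i++) {
    digitalWrite(micLeds[i], analogRead(micPins[i]) > LEAK_THRESHOLD);
  }
  // Light the pressure LED on a significant departure from the benchmark.
  int reading = analogRead(flexPin);
  int drift = abs(reading - PRESSURE_BASELINE);
  digitalWrite(flexLed, drift > PRESSURE_TOLERANCE);
  delay(50);
}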

Crit1

Concert Buddy

Problem

Does loud music damage hearing?

At concerts, the music is so loud that you can’t communicate with your friends. You’re feeling claustrophobic and you want to tell your friend that you want to leave, take a break, and get some water. Unfortunately, you’ve virtually lost your voice from screaming every song, and over the booming music it’s hard to speak to them. What do you do? It is also hard to find a balance between getting a place near the front and enjoying the concert at a volume that is safe for your ears.

Solution:

A wearable that can offer you response options based on your current situation. One that can be worn in loud settings like parties and concerts, or even a construction area.

Proof Of Concept:

A device with preloaded responses and an LCD screen. Each option is associated with a color which indicates the immediacy/importance of the message, and a potentiometer is used to select the message. Additionally, the device has a sound detector. The sound detector measures the volume level of the surroundings and illuminates a blue light, alerting the user that the sound level is high enough to be damaging to the ears.
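
A minimal sketch of this behavior, assuming a 16x2 LCD on pins 12, 11, 5, 4, 3, 2, a potentiometer on A0, a sound detector envelope output on A1, and made-up messages and threshold:

#include <LiquidCrystal.h>

// Hedged sketch: pins, messages, and the loudness cutoff are assumptions.
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);
const int potPin = A0;
const int soundPin = A1;
const int blueLed = 13;
const int LOUD_THRESHOLD = 700;   // placeholder for "damaging" volume

// Preloaded responses, ordered from least to most urgent.
const char* messages[4] = {"Having fun!", "Need water", "Need a break", "Leave NOW"};

void setup() {
  lcd.begin(16, 2);
  pinMode(blueLed, OUTPUT);
}

void loop() {
  // The potentiometer's 0-1023 range selects one of the four messages.
  int index = analogRead(potPin) / 256;
  lcd.setCursor(0, 0);
  lcd.print(messages[index]);
  lcd.print("        ");   // pad to clear leftovers from longer messages

  // Warn when ambient volume is high enough to damage hearing.
  digitalWrite(blueLed, analogRead(soundPin) > LOUD_THRESHOLD);
  delay(100);
}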

Fritzing and Code

Link to Video


Silent Piano

Problem

I consider myself a musician, but I cannot read sheet music. I play more than 6 instruments, but I play by ear, since I never had the training or patience to learn sheet music. One day I was playing one of my favorite instruments, the piano, and I wondered: could there be an equivalent to “playing by ear” for someone who cannot hear? How do you take the act of hearing music and turn it into a visual interaction that teaches you as you play?

Proposal

My project takes music and creates an interactive game that simultaneously translates the music into keys. Ideally, the program would have 2 different inputs: 1) audio input ⇒ microphone hears a song and the program recognizes and processes the tones, then translates them into piano keys, 2) sheet music input ⇒ takes digital sheet music and reads it, then teaches you how to play the song.

Play a piano by “ear”

It takes sheet music or audio as input, analyzes the rhythm, decodes the notes played, and then shows you how to play them.

Proof Of Concept

To prove this concept, I decided to work on the interface of the game. Due to limited time to work on this project, the program reads randomly generated digital notes and plays them. If I were to take this further, I would add support for pressing more than one key at once.
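
A minimal sketch of the random-note idea, written Arduino-style with one LED per key on pins 2-9 as a stand-in for the on-screen interface; pins, tempo, and the eight-key range are all assumptions:

// Hedged sketch of the random-note generator; the actual project drew the
// keys on screen rather than lighting physical LEDs.
const int keyLeds[8] = {2, 3, 4, 5, 6, 7, 8, 9};
const char* keyNames[8] = {"C", "D", "E", "F", "G", "A", "B", "C"};

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 8; i++) pinMode(keyLeds[i], OUTPUT);
  randomSeed(analogRead(A0));   // seed from a floating analog pin
}

void loop() {
  // Generate a random note and light the key the player should press.
  int note = random(8);
  Serial.println(keyNames[note]);
  digitalWrite(keyLeds[note], HIGH);
  delay(600);                   // one "beat" per note
  digitalWrite(keyLeds[note], LOW);
}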

Support Files

galsanea_SilentPiano

Jogging Partner

Problem:

People who listen to music while jogging may often zone out and put themselves in danger as a result. Light is a great communication tool to warn others, as well as the jogger, to be careful and aware of the people around them. In addition, when jogging alone, if something happens to the jogger, others may not notice unless something drastic happens.

A General Solution:

A device that controls a light that reacts differently to the jogger’s actions, using various sensors to ensure that the light is actually needed and that it is being used in the optimal setting. The device could also address other issues, like a jogging pace that changes unintentionally, or emergencies where the jogger needs to attract others’ attention.

Proof of Concept:

An Arduino with potentiometers, switches, and a photoresistor acting as input data to control LEDs, which stand in for a more complex visual interface. When an emergency is detected by the sensors or reported by the user, the visual interface tries to attract attention with the LEDs. On less serious occasions, the device detects if the user is running too slowly or too fast. Lastly, if the user tries to turn on a flashlight to light their way, the device first considers how bright it is outside. If it is still bright out, the device ignores the command, assuming the user pressed the switch by accident. If it is dimmer outside and the user switches the light on, it blinks to attract the attention of vehicles and bikes and make sure they can see the jogger. If it is quite dark out, the light turns on automatically, whether the user flips the switch or not.
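
A minimal sketch of just the flashlight logic; the pins and brightness thresholds are assumptions:

// Hedged sketch of the three-way flashlight behavior described above.
const int switchPin = 2;      // user's flashlight switch
const int lightPin = A0;      // photoresistor divider
const int ledPin = 9;         // stands in for the flashlight
const int BRIGHT = 700;       // above this, daylight: ignore the switch
const int DARK = 200;         // below this, night: light turns on automatically

void setup() {
  pinMode(switchPin, INPUT_PULLUP);   // switch closes to ground
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int ambient = analogRead(lightPin);
  bool wantsLight = (digitalRead(switchPin) == LOW);

  if (ambient < DARK) {
    digitalWrite(ledPin, HIGH);               // dark: on regardless of the switch
  } else if (ambient < BRIGHT && wantsLight) {
    digitalWrite(ledPin, (millis() / 250) % 2);   // dim: blink for visibility
  } else {
    digitalWrite(ledPin, LOW);                // bright: assume an accidental press
  }
}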

Fritzing Sketch:

The Fritzing sketch shows how the various inputs are set up to feed information into the Arduino, as well as how the LEDs are connected to the Arduino to receive outputs. Not pictured is that the Arduino would also need to be connected to a battery source.

Proof of Concept Sketches:

The sensors collect data that informs the user and adds intuition to the outputs, providing a variety of features that use this information to improve the user’s jogging experience.

Proof of Concept Videos:

Prototype Full Demonstration:

Prototype Silent Demonstration:

Files:

Crit_Visual_Final

A Travel Companion for the Hearing Impaired

Problem:

Throughout much of the travel process—especially around airlines and the flying experience—hearing is critical to getting to your destination. Audio announcements are constantly made over airport intercoms for flight changes, boarding calls, and lost items. On the plane as well, the captain and flight crew use audio announcements to communicate with passengers throughout the flight.

Proposed solution:

I propose a handheld device that people who are hearing impaired can pick up during the check-in process, which replaces all the audio announcements they may hear while traveling with tactile (haptic feedback) and visual (LCD screen) notifications. The reason for a dedicated device is that cellular reception/service is often unreliable, especially when traveling outside of the country. Once users reach their destination, they can simply return the device before exiting the airport.

This solution would certainly require airports and airlines to change the way they operate in order to create a more inclusive environment; however, I think such a system would be very beneficial for the community, and may even have benefits for helping with the language barrier during international travel.

Proof of Concept:

In order to prototype my concept, I created an Arduino circuit using an LCD screen, a haptic motor, and three input buttons to simulate different scenarios one might run into while traveling.

When one of the input buttons is pressed (representing an announcement being made), the haptic motor will vibrate in a specific pattern before a textual message is displayed on the screen such as, “Your flight SW815 is now boarding at gate 11!”

Messages are kept short so that users can receive the information they need easily, and they can go online or to a help desk if they need further assistance. My hope is that users who travel frequently will be able to learn the different vibration patterns for different messages in order to create a more seamless notification system.
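
A minimal sketch of the button-to-vibration-to-message flow, assuming a 16x2 LCD on pins 12, 11, 5, 4, 3, 2, buttons on pins 6-8, a haptic motor driver on pin 9, and made-up messages and patterns:

#include <LiquidCrystal.h>

// Hedged sketch: each button simulates one announcement type.
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);
const int buttonPins[3] = {6, 7, 8};
const int motorPin = 9;
const char* line1[3] = {"SW815 boarding", "Gate change:", "SW815 delayed"};
const char* line2[3] = {"at gate 11!", "now gate 20", "30 minutes"};
const int pulses[3] = {1, 2, 3};   // distinct vibration count per message

void buzz(int count) {
  // Vibrate in a recognizable pattern before the text appears.
  for (int i = 0; i < count; i++) {
    digitalWrite(motorPin, HIGH);
    delay(200);
    digitalWrite(motorPin, LOW);
    delay(200);
  }
}

void setup() {
  lcd.begin(16, 2);
  pinMode(motorPin, OUTPUT);
  for (int i = 0; i < 3; i++) pinMode(buttonPins[i], INPUT_PULLUP);
}

void loop() {
  for (int i = 0; i < 3; i++) {
    if (digitalRead(buttonPins[i]) == LOW) {   // "announcement" button pressed
      buzz(pulses[i]);
      lcd.clear();
      lcd.print(line1[i]);
      lcd.setCursor(0, 1);
      lcd.print(line2[i]);
      delay(3000);   // hold the message, then clear the screen
      lcd.clear();
    }
  }
}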

Arduino Code and Fritzing Sketch

Crit #1 – Fun

Problem: This turned out to be a hard assignment for me, since I had difficulty coming up with a problem that needed solving. I ended up considering “fun” in general, and how play would be different for hard-of-hearing users in a blind world. I figured that in such a world sound would be the primary way to playfully engage and communicate with each other, and was drawn to the piano scene from the movie Big. The floor piano has no tactile feedback, staying flat on the ground, and is only fun because multiple people can be on it at once. So my problem to solve was how to offer a different type of creative fun using this musical structure. Probably too tall an order on my end.

Solution: Essentially, collaborative music visualization. Nothing novel, but it pushed me to understand this entire pipeline and actually learn p5.js, something I needed to do. Users need to be able to interact with a visualization system that responds to their keyboard inputs, reacting differently based on a potentially varied number of states. It needs to capture the feeling of creating ripples in an existing system, of having your inputs matter differently at different times.

Proof of Concept:

A microswitch “keyboard” was built to handle inputs. Compared to a floor keyboard, this is relatively miniature. Foam keys rested on lever microswitches that fed back to the board.  In a final build, I envision RFID readers embedded in each key that could determine who was pressing what key, highlighting the actual collaboration that could take place.
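
A minimal sketch of the "read and pass values" Arduino side, assuming four keys on pins 2-5; the pin count and serial format are assumptions:

// Hedged sketch: one lever microswitch per foam key, states streamed to p5.js.
const int NUM_KEYS = 4;
const int keyPins[NUM_KEYS] = {2, 3, 4, 5};

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < NUM_KEYS; i++) pinMode(keyPins[i], INPUT_PULLUP);
}

void loop() {
  // Send one character per key state, e.g. "1001", for p5.js to parse.
  for (int i = 0; i < NUM_KEYS; i++) {
    Serial.print(digitalRead(keyPins[i]) == LOW ? '1' : '0');
  }
  Serial.println();
  delay(20);
}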

Chance Crit 1 Piano

The keys then affect a pattern in a browser window, based on which keys and how many are pressed. The visualization interacts back with the user(s) by becoming more “resistant” to input the more it receives, and less resistant the longer it goes unused. It also has time-interactive elements like color, frequency, and so on, depending on how users interact with it. The code is partially based on an existing p5.js library, wavemaker.

chance Crit 1 Gif

With the RFID or other tracking mentioned above, this system could be extended to drive more into the feeling of collaborative creation I’m trying to capture.  Different users could have different “heat” signatures they apply to the waveforms, different speeds, or different interactions with each key or section.

Files:

The Arduino code is extremely simple, basically just reading and passing values, with the bulk in the p5.js files.  lytle_crit1.

Critique 1: Visual Interaction – Speak Up Display

I recently saw a talk on campus by Dr. Xuedong Huang, founder of Microsoft’s speech technology group. He did a demo of the latest speech-to-text on Azure, combined with HoloLens, and I have to say I was impressed.

They went from this failure several years ago:

To this more recently (Speech to text -> translation -> text to speech… in your own voice… and a hologram for good measure):


This got me thinking that a more earthbound and practical application of this could be prototyped today, so I decided to make a heads-up display for speech-to-text that functions external to a computer or smartphone.

If you are unable to hear, whether due to a medical condition or just because you have music playing on your headphones, you are likely to miss things going on around you. I personally share an office with four other people, and I’m often found tucked away in the back corner with my earbuds in, completely unaware that the other four are trying to talk to me.

Similarly, my previous research found that a common issue for those who are deaf is being startled by someone coming up behind them, since they cannot hear their name being called.

With this use case in mind, I created an appliance that sits on a desktop within sight, but the majority of the time it does its best not to attract attention.

I realize it would be easy enough to pop open another window and display something on a computer screen, but that would either have to be a window that is always on top, or a stream of notifications, so it seemed appropriate to take the display off screen, to what would normally be the periphery.

The other advantage is a social one: if I look at my laptop screen while I’m supposed to be listening to you, you might think you’re being ignored, but with a big microphone between us, on a dedicated box with a simple text display, I’m able to glance over it as I face you in conversation or in a lecture.

When it hears speech, it displays the text on the LCD screen for a moment, and then the text scrolls off, leaving the screen blank when the room is quiet. This allows the user to glance over if they’re curious about what is being said around them:

Things get more interesting when the system recognizes keywords like the user’s name. It can be triggered to flash a colored light, in this case green, to draw attention and let the user know that someone is calling for them.

Finally, other events can be detected to trigger messages on the screen and LED flashes.

The wiring is fairly simple. The board uses its onboard NeoPixel RGB LED for the color-coded alerts, and the LCD screen just takes a (one-way) serial connection.

Initially the project began with a more elaborate code base, but it has been scaled down to a more elegant system with a simple API for triggering text and LED displays.

A serial connection is established to the computer, and the processor listens for strings. If a string is less than 16 characters, it is padded for clean display, and if it has a 17th character, that character is checked for color codes:

// Sets the onboard NeoPixel from an optional color code carried in the
// 17th character of the incoming string: R, G, B, or X (off).
void setled(String textandcolor){
  if(textandcolor.length()>16){
    switch(textandcolor[16]) {    // 17th character is the color code
      case 'R':                   // red
        strip.setPixelColor(0, strip.Color(255,0,0));
        strip.show();
        break;
      case 'G':                   // green, e.g. someone calling your name
        strip.setPixelColor(0, strip.Color(0,255,0));
        strip.show();
        break;
      case 'B':                   // blue
        strip.setPixelColor(0, strip.Color(0,0,255));
        strip.show();
        break;
      case 'X':                   // clear the alert
        strip.setPixelColor(0, strip.Color(0,0,0));
        strip.show();
        break;
    }
  }
}

A computer which uses the appliance’s microphone to listen to nearby speech can send it off to be transcribed, and then feed it to the screen 16 characters at a time, watching for keywords or phrases. (This is still in progress, but the communication bus from the computer to the board is fully functional for text and LED triggers)

After some experimenting, it seems the best way to display the text is to start at the bottom line and have it scroll upwards (a bit like a teleprompter), one line at a time every half second. Faster became hard to keep up with, and slower felt like a delayed reaction.
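
A minimal sketch of that teleprompter behavior, assuming a 16x2 parallel LCD driven via LiquidCrystal in place of the serial display used in the actual build:

#include <LiquidCrystal.h>

// Hedged sketch: new text enters on the bottom line and shifts upward.
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

String topLine = "";      // older text scrolling off
String bottomLine = "";   // newest 16 characters

void showLine(String text) {
  topLine = bottomLine;   // shift the previous line up, teleprompter-style
  bottomLine = text;
  lcd.clear();
  lcd.setCursor(0, 0);
  lcd.print(topLine);
  lcd.setCursor(0, 1);
  lcd.print(bottomLine);
}

void setup() {
  Serial.begin(9600);
  lcd.begin(16, 2);
}

void loop() {
  if (Serial.available()) {
    String incoming = Serial.readStringUntil('\n');
    showLine(incoming.substring(0, 16));   // one 16-character line at a time
    delay(500);   // half a second per line read comfortably in testing
  }
}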

listenup.zip (Arduino code + Fritzing diagram)

I’d love to expand this to do translation (these services have come a long way as well), and perhaps migrate to a Raspberry Pi to do the web API portion so that the computer can be closed and put away.

UPDATE:

I made the system more interactive by turning the microphone (the big black circle in the images above) into a button. While you hold the button, it listens to learn new keywords, and then alerts when it hears those words. Over time, keywords decay.

The idea of the decay is that you trigger the system when you hear something it should tell you about; if you don’t trigger it the next time it hears that word, it becomes slightly less likely to trigger again. This also begins to filter common words out from more important keywords.

This weighting system is merely a placeholder for a more sophisticated one.
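
A minimal sketch of what such a placeholder weighting could look like; the structure, decay rate, and alert cutoff are all assumptions:

// Hedged sketch: each learned keyword carries a weight that decays on
// unconfirmed hits and resets when the user re-triggers it.
const int MAX_KEYWORDS = 8;
String keywords[MAX_KEYWORDS];
float weights[MAX_KEYWORDS];   // 0.0 (forgotten) to 1.0 (always alert)
int count = 0;

void learnKeyword(String word) {
  if (count < MAX_KEYWORDS) {
    keywords[count] = word;
    weights[count] = 1.0;      // freshly learned words always trigger
    count++;
  }
}

// Called whenever a transcribed word matches keyword i.
bool shouldAlert(int i, bool userConfirmed) {
  if (userConfirmed) {
    weights[i] = 1.0;          // re-triggering restores full weight
  } else {
    weights[i] *= 0.9;         // unconfirmed hits decay the keyword
  }
  return weights[i] > 0.5;     // alert only while the weight stays high
}

void setup() {
  Serial.begin(9600);
  learnKeyword("coffee");
}

void loop() {
  // Demo: each unconfirmed hit weakens "coffee" until it stops alerting.
  if (shouldAlert(0, false)) Serial.println("ALERT: coffee");
  delay(1000);
}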

STT Update

Crit #1: Honest Visualization of Intoxication

State Machine: Bar Patrons

Problem: In undergrad, my friend sent me a picture of myself from a night out the week before. I was in the middle of a crowded bar, looking for what could have been a beer, the people I came with, or my own sanity. The caption could have been “an island that cannot hear in an ocean that cannot see.” 

Bars are perfect examples of places where design needs to acknowledge that IQs, senses, and inhibitions all diminish rapidly. While people usually go to bars in groups, those groups quickly dissipate as people go to the dance floor, the bathroom, or to find other friends. Combine that with spotty reception and barely enough room to operate your phone in the sea of bodies, and communication is tough; communication about the health and safety of your friends is even tougher. Even if by some miracle you find your friends to check on them, can you really believe them if they say they’ve only done one shot but look like they’re ready to fall over? Your sense of perception is off, and the resulting communication is therefore unreliable.

Is there a way to represent your and others’ state-of-being at a bar at a glance? Can that system or product be smart and take advantage of certain pieces of data to give the most accurate diagnosis?

General solution: A sensor and LED-outfitted bracelet that takes pieces of environmental and user-entered data to determine how intoxicated you or your accompanying friends are. Using a color-coded system, you can easily recognize who is doing well, who needs your help, and who is ready to go home without pulling out your phone or searching around.

Proof of Concept: My system utilized the following components:

  • 2 RGB LEDs (and resistors) – both on your bracelet; one to represent your intoxication level, another to represent your friend’s
  • 2 push buttons (and resistors) – one on your bracelet to let you signify another drink (+1 press), that you want to go home (press until the light is blue), or that you are in danger (spam press until white flashing); another used in the demo to signify your friend changing their state
  • 2 analog temperature sensors – one inside your bracelet to record your body temperature; one used for the demo to signify your friend’s
  • p5.js – monitoring for those who do not have a bracelet

Following research from different sources, I wanted to apply various environmental factors that affect someone’s level of intoxication or their perception of it. One study cited body temperature as one such factor: if someone is drinking somewhere very hot or very cold, alcohol typically masks their internal temperature, such that they could be drinking themselves into sickness without knowing it. Because of that, I wrote the code so that if your body temperature leaves a certain range, your intoxication/danger level rises faster than it would otherwise. Other studies show that the simple passage of time allows some people’s intoxication levels to drop naturally; therefore, there are caveats in my code that say if a certain amount of time has passed between drinks, your intoxication level actually goes down.
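
A minimal sketch of this level logic; the temperature range, rates, and 30-minute sobering window are assumptions, and the mapping from level to the RGB color codes is omitted:

// Hedged sketch: a drink button raises the level (faster at extreme body
// temperatures), and long gaps between drinks lower it again.
const int tempPin = A0;        // analog body-temperature sensor
const int drinkButton = 2;     // "+1 drink" press
const int TEMP_LOW = 400;      // placeholder comfortable analog range
const int TEMP_HIGH = 600;
const unsigned long SOBER_GAP = 30UL * 60UL * 1000UL;   // 30 minutes

float level = 0.0;             // 0 = sober; higher = more intoxicated
unsigned long lastDrink = 0;

void recordDrink() {
  int temp = analogRead(tempPin);
  // Outside the comfortable range, the same drink raises the level faster.
  bool extremeTemp = (temp < TEMP_LOW || temp > TEMP_HIGH);
  level += extremeTemp ? 1.5 : 1.0;
  lastDrink = millis();
}

void setup() {
  pinMode(drinkButton, INPUT_PULLUP);
}

void loop() {
  if (digitalRead(drinkButton) == LOW) {
    recordDrink();
    delay(300);   // crude debounce
  }
  // Enough time without a drink lets the level drift back down.
  if (level > 0 && millis() - lastDrink > SOBER_GAP) {
    level -= 0.5;
    lastDrink = millis();   // restart the sobering timer
  }
  // ...map `level` to the bracelet's RGB color codes here...
}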

Challenges: Trusting people as they drink. The whole system is predicated on people being honest about when they have another drink, which, I’ll admit, could be a bit of a stretch to expect of people. There are also lots of other factors that go into determining a person’s intoxication level – type of alcohol, food consumed, water consumed, weight, gender, etc. – that cannot be tracked with sensors in a bracelet.

Opportunities: With further development, this would probably be best served by pairing it with an app interface that lets people build their own profile linked to their bracelet. You could enter more of your own data (like weight, gender, location, etc.) that would affect the aforementioned “ranges” for things like body temperature. The app would also let you track your habits over time and, with more robust machine learning techniques, your app/bracelet could help you stay safer and smarter while you’re out by adjusting when your color codes change based on your tolerance, or by suggesting you call an Uber at a certain time of night. I also attempted to add a p5.js component that could eventually evolve into a more sophisticated app that lets friends back in the dorm or at home know what is going on – whether you are on your way home, whether they should have a Gatorade ready for your return, or more, depending on your state.

Assignment5 Files (Fritzing, Arduino, p5.js)

Critique 01: Visualizing Coming In/Out of Roommates (+ Status of Door)

Problem

When we wear headphones and listen to music at high volume, we become effectively deaf, unable to hear the sounds of our environment. Sometimes we do not even notice other people’s presence, especially our roommates’. We are surprised by their sudden appearance or disappearance.


Solution

I tried to visualize the surrounding sounds and give feedback to the user. For this critique, I focused on the movement of a home’s entrance door. The state machine visualizes whether the door is open or not, so that the user can notice when his/her roommate is going out or coming in. Also, if the door is kept open for a long time, it gives feedback prompting the user to close and lock the door for security purposes.


Proof of Concept

By using a potentiometer, I could detect the angle of the door as it opens and closes. Past a certain angle, the state machine decides the door is open and gives feedback to the user. When the door is kept open, it also lets the user know the door has been open for a while, so they can close it. When the door opens and then closes, the state machine decides that someone has come in or gone out, and gives feedback so the user notices this as well.
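
A minimal sketch of this state machine, with an assumed angle threshold and open-too-long timeout, reporting events over serial for the p5.js side to visualize:

// Hedged sketch: closed -> open, open-too-long, and open -> closed events.
const int potPin = A0;          // potentiometer on the door hinge
const int OPEN_ANGLE = 300;     // analog reading above which the door is open
const unsigned long TOO_LONG = 10000;   // ms the door may stay open

bool doorOpen = false;
unsigned long openedAt = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  bool nowOpen = analogRead(potPin) > OPEN_ANGLE;

  if (nowOpen && !doorOpen) {            // closed -> open
    doorOpen = true;
    openedAt = millis();
    Serial.println("DOOR OPENED");       // p5.js turns these into visuals
  } else if (!nowOpen && doorOpen) {     // open -> closed: someone came in/out
    doorOpen = false;
    Serial.println("SOMEONE CAME IN/OUT");
  } else if (doorOpen && millis() - openedAt > TOO_LONG) {
    Serial.println("DOOR LEFT OPEN");    // prompt the user to close and lock it
    openedAt = millis();                 // avoid repeating every loop
  }
  delay(50);
}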

Fritzing

I also used p5.js to change the display on the laptop screen.


Video/Codes

jay_door_status


Critique 1: Visual Interaction

“An island that cannot hear in an ocean that cannot see.”

Due 11:59pm, 25 September.

Use vision to make an interaction accessible to someone who cannot hear. This is more than the simple state machines in the weekly assignments — for this crit we want a full interactive experience where the device interacts with a person. “Cannot hear” is not defined only as deaf or hard of hearing; it can be any condition where listening to or hearing information is impossible. Examples: at a Baroque symphony, wearing ear protection while using loud construction equipment like a jackhammer, at night in a dorm room while a roommate is sleeping.

The inputs that can be used for this interaction are open to whatever makes the interaction work.

Take a look at the syllabus for more information on crits and the goals of this class before starting your project.  Remember what we talked about in the first classes, the differences between reactions and interactions.  The IDeATe Lending Library is also a good resource (and some of the staff have taken Making Things Interactive!).

Email me if you have any questions or hardware problems.  I will be on the road most of Thursday but will have my laptop out at the conference taking notes on presentations.