For gym enthusiasts lifting heavy weights, bad form can result in weeks off from the gym. However, injuries are not always instant and even the slightest odd angle in a squat, repeated over time, can result in debilitating pain.
A smart weight that “talks” to the user with sound would be ideal for correcting lifting form. Gyroscopic sensors that detect movement, angle, and altitude can be used across a variety of exercises to determine proper form. If the user’s form is off, the weight speaks to them, making a different sound for each aspect of form that is incorrect.

Proof of Concept
A gyroscope and microphone simulate a barbell weight. When the bar is tilted left, the pitch of the sound goes up; when it is tilted right, the pitch goes down. A button simulates grip, telling the system to begin recording accelerometer data.
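The tilt-to-pitch behavior could be sketched as a simple mapping function. The base frequency, sensitivity, and clamping range below are my assumptions for illustration, not values from the original build (with tilt measured in degrees, positive meaning tilted left):

```cpp
#include <algorithm>

// Hypothetical mapping from barbell tilt to speaker pitch:
// a level bar plays a 440 Hz base tone; tilting left (positive degrees)
// raises the pitch, tilting right lowers it, clamped to an audible range.
int tiltToPitchHz(float tiltDegrees) {
    const int baseHz = 440;      // assumed base tone for a level bar
    const int hzPerDegree = 10;  // assumed sensitivity
    int hz = baseHz + static_cast<int>(tiltDegrees * hzPerDegree);
    return std::min(2000, std::max(100, hz));  // keep within speaker range
}
```

On an Arduino, the returned frequency would feed directly into a `tone()` call each time the gyroscope is sampled.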
I often set alarms and timers to leave for class or meetings. What I’ve realized over time is that I misjudge the amount of time that has passed on the timer, and I am often not ready to move on to the next task or leave the house when the timer goes off. Traditionally, you would have to physically go check the timer; now you can ask Alexa how much time is left. However, both of these require the user to initiate the interaction rather than the timer. I wanted to create a timer that would notify me every quarter of the way through the set time.
I used an Arduino Uno, a piezo speaker, and a potentiometer. The potentiometer sets the length of the timer. The piezo speaker plays an arpeggio: after a fourth of the time has passed it plays the first note, after half the time it plays the first two notes, and so on. This gives a sense of finality to the end of the timer, as it starts at C4 and ends at C5.
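The quarter-mark logic above can be sketched as a small helper. I’m assuming a C major arpeggio (C4, E4, G4, C5) and standard note frequencies; the original post doesn’t specify the intermediate notes:

```cpp
#include <vector>

// Sketch of the quarter-timer logic: given elapsed and total milliseconds,
// return the notes (in Hz) to play at the current quarter mark. One note is
// added per completed quarter, rising from C4 to C5.
std::vector<int> quarterNotes(unsigned long elapsedMs, unsigned long totalMs) {
    static const int arpeggio[4] = {262, 330, 392, 523};  // C4 E4 G4 C5
    int quartersPassed = static_cast<int>((elapsedMs * 4) / totalMs);
    if (quartersPassed > 4) quartersPassed = 4;
    return std::vector<int>(arpeggio, arpeggio + quartersPassed);
}
```

Each time a new quarter elapses, the sketch would loop over the returned notes and play each one with `tone()`.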
A suggestion was made to link this to my Google calendar, which I thought was a great idea. Another awesome suggestion was to have it notify me exponentially, so that as the timer got closer and closer to the end there would be many more notifications.
Audio announcements are often used to deliver information to a large group of people; airports, restaurants, stores, and museums are all prime examples of places where this is common practice. However, since people have different preferences as well as different linguistic and cognitive abilities, these announcements would be more accessible if people were able to control certain aspects of the audio.
My proposed solution is demonstrated rather simply, and takes advantage of the Arduino tone function’s use of internal timekeeping. This allows a looped audio track (representing an announcement in this case) to be interrupted at any time to change its speed. Using a potentiometer, users can change the speed of the audio by a factor between 0 and 2.
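The potentiometer-to-speed mapping might look like the sketch below. The clamping floor is my addition (to keep a near-zero factor from stalling the loop), not something stated in the original:

```cpp
// Hedged sketch: map a potentiometer reading (0-1023, as from analogRead)
// to a playback-speed factor between 0 and 2, then scale a note's base
// duration by that factor. The 0.05 floor is an assumed safeguard so the
// loop never divides by zero at the low end of the pot.
float potToSpeedFactor(int potReading) {
    float factor = (potReading / 1023.0f) * 2.0f;
    return factor < 0.05f ? 0.05f : factor;
}

unsigned long scaledDurationMs(unsigned long baseMs, int potReading) {
    return static_cast<unsigned long>(baseMs / potToSpeedFactor(potReading));
}
```

With the pot at maximum, a 200 ms note plays in 100 ms; at the midpoint it plays at roughly normal speed.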
For this assignment, I focused on gesture interaction, which is my current interest. Gesture interaction has powerful strengths compared to other modalities: it is fast and quiet, and it overcomes physical distance, so we can control things without actually touching them.
I came up with a situation: when we are listening to music through a speaker or watching TV, it becomes difficult to hear other sounds. When someone calls us, or in other situations where we have to urgently stop the music and focus on another sound, it is sometimes hard to do so quickly. If we are using a laptop to listen to music, we have to find the mute or pause button and then press it by hand, which takes much more attention (visual and physical) and time. Watching TV or listening to music on a smartphone is similar.
Thus, I decided to use gesture control to stop the music quickly and urgently. By raising a hand and making a fist, which I thought was an intuitive gesture for stopping something, users can pause the music. Once it is paused, the same gesture plays the music again.
Proof of Concept
In order to track the hand gestures, I used the Leap Motion sensor. It is really easy to pause and play music using simple gestures. I wanted to design new types of gestures to make the interaction more natural and intuitive; however, going beyond the existing database of hand gestures seems to require creating my own database, which was challenging for me.
iPhones have a built-in system for notifying users when serious events are happening nearby (e.g., AMBER alerts, flash flooding, dust storms, tornadoes). Because the notification uses the same sound and vibration pattern across all of these scenarios, people may become numb to the alert over time and begin to ignore it. In addition, because the alerts are identical, the sound and vibration convey no scenario-specific information that might better inform users, for example people who are visually impaired, of what is wrong.
A General Solution:
A device that would interrupt whatever the phone is currently doing to communicate the alert, using specific tonalities to convey the urgency of the scenario as well as an indication of what is happening. Ideally this would combine sound and vibration, so that people can both hear the audio feedback and feel the vibrations to comprehend the scenario.
Proof of Concept:
An Arduino with: two potentiometers, representing the volume at which the user prefers to listen to music and their proximity to the danger (a flash flood, tornado, or dust storm); three switches, serving as input for which alert is in effect; and one button, representing when an alert is in effect. For output, the system plays sound through a speaker and would ideally be extended to include vibration.
The Fritzing sketch shows how the potentiometers, switches, and button feed information into the Arduino, and how the speaker is connected to the Arduino to receive output. Not pictured is that the Arduino would have to be connected to a power source.
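One way the inputs could map to a distinguishable output is sketched below. The specific frequencies and the beep-interval range are illustrative guesses, not values from the actual prototype:

```cpp
// Hypothetical mapping from the three alert switches and the proximity pot
// to sound: each alert type gets its own base frequency so the tone itself
// identifies the scenario, and closer danger (a higher proximity reading)
// shortens the interval between beeps to convey urgency.
enum Alert { FLASH_FLOOD, TORNADO, DUST_STORM };

int alertBaseHz(Alert a) {
    switch (a) {
        case FLASH_FLOOD: return 600;  // assumed per-alert frequencies
        case TORNADO:     return 900;
        case DUST_STORM:  return 750;
    }
    return 0;
}

// proximity 0-1023 (closer = higher) -> beep interval from 1000 ms down to 100 ms
int beepIntervalMs(int proximity) {
    return 1000 - (proximity * 900) / 1023;
}
```

In the loop, the active switch would select the `Alert` value, and the speaker would beep at `alertBaseHz` every `beepIntervalMs` milliseconds.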
Proof of Concept Sketches:
The user’s phone receives information regarding the alerts automatically and that information is converted into specific audio and (hopefully) vibration feedback for the user, so that they immediately know the situation and whether or not they should respond.
If you’re at a dance or somewhere crowded, how do you alert the masses without causing a panic?
This project uses interrupts to stop a melody and play buzzes that slowly increase in pitch to gain people’s attention. In future iterations, the device could sense when the room is quiet, then switch to playing the message that needed everyone’s attention in the first place.
Proof of Concept
I used the potentiometer to control the speed of the melody and the buzzer intervals.
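The rising-pitch buzz could be sketched as below; the starting pitch, step size, and ceiling are my assumptions for illustration rather than measured from the build:

```cpp
// Sketch of the attention buzz: starting from a low tone, each successive
// buzz steps up in pitch so the sound gradually cuts through a noisy room
// without the jolt of a sudden loud alarm. Values are assumed.
int buzzPitchHz(int buzzIndex) {
    const int startHz = 200;   // assumed starting pitch
    const int stepHz = 50;     // assumed rise per buzz
    const int maxHz = 1500;    // assumed ceiling
    int hz = startHz + buzzIndex * stepHz;
    return hz > maxHz ? maxHz : hz;
}
```

Each interrupt-driven buzz would call this with an incrementing index, with the potentiometer setting how quickly the index advances.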
With this in mind, I attempted to create a prototype that plays a melody with the goal of encouraging the user to raise or lower their heart rate to match a target.
The program measures the time between beats and translates that into a BPM measurement. This measurement is averaged with the target to create a match tempo halfway between the target and measured BPM. As the rates converge, the music plays in time with the user’s heartbeats.
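The core of that logic fits in two small functions, shown here as a sketch of the described approach (function names are mine):

```cpp
// Convert the interval between two detected beats into beats per minute.
// A 500 ms interval corresponds to 120 BPM, 1000 ms to 60 BPM.
int intervalToBpm(unsigned long intervalMs) {
    return static_cast<int>(60000 / intervalMs);
}

// The played tempo is the midpoint of the measured and target rates, so
// the music nudges the user toward the target while staying close enough
// to their current heart rate to feel in sync.
int matchBpm(int measuredBpm, int targetBpm) {
    return (measuredBpm + targetBpm) / 2;
}
```

For example, a user measured at 80 BPM with a 60 BPM target hears the arpeggio at 70 BPM; as their heart rate falls, the match tempo falls with it.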
Beat measurements are simulated with a pushbutton, and a modulated sound output plays an F6 arpeggio at the match rate. F6 was chosen from experience because it works well as both an uplifting and a calming chord, for either raising or lowering heart rate.
The circuit is pretty basic. A button pulls down pin 3 to trigger an interrupt which calculates the time since the last interrupt to measure current heart rate (this represents a heart monitor). Audio output is on pin 13 (with a 100Ω resistor in series), and the onboard RGB LED cycles along with each note in the sequence.
Problem: Info monitors on the back of seats in airplanes provide nice-to-have information, such as total flight time, time till destination, and nearby locations on the ground. These monitors are often visual-only, presumably so as not to disturb nearby guests, but this makes them inaccessible to the visually impaired. Converting this information to audio in earbuds or headphones would be an easy and unobtrusive fix.
Solution: Because these headrest monitors already have audio jacks, reusing them to communicate this information would be easy using established screen-reading tech or more elegant selection methods beyond a touch screen. This introduces the difficulty that the extra audio could distract from important announcements, which leads to the second possible part of this assignment: using the same audio jack to make often-garbled announcements more understandable for those hard of hearing. More or less, this all boils down to an interruptible info stream of important flight info.
Proof of Concept: The implementation is simple: two pseudo-threads of audio, switchable with a button press simulating a pilot’s or flight attendant’s announcement. The “information,” like distance to destination, is simulated with a tone from a pot right now, since I have no idea about playing samples yet. Being interruptible lets any outside announcement alert the user to plug in and listen, actually give them the info, or more.
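The two pseudo-threads reduce to a tiny state machine: one stream owns the speaker at a time, and the button toggles which one. The names below are illustrative, not from the original build:

```cpp
// Sketch of the pseudo-thread switch: a flight-info tone stream and an
// announcement stream, with each (simulated) announcement button press
// toggling which stream currently owns the speaker.
enum Stream { INFO, ANNOUNCEMENT };

Stream nextStream(Stream current) {
    return current == INFO ? ANNOUNCEMENT : INFO;
}
```

In the loop, each debounced button press would call `nextStream`, and only the active stream’s tone code runs, which is what makes the info stream interruptible.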
For this assignment, using an Arduino, generate sound-over-time and sound-by-interrupt that conveys meaning, feeling, or specific content. You can generate sound with a speaker or a kinetic device (ex: door chime) or some other novel invention.
This is a good time to use push buttons or other inputs to trigger sound and another input to define the type of sound.
My example in class was how my phone *doesn’t* do this well. If I am listening to music and someone rings my doorbell at home, my phone continues to play music *and* the doorbell notification sound at the same time. What it should do is stop the music, play the notification, then give me the opportunity to talk to the person at the door, then continue playing music.