I started with a problem area related to sound. For example, when a conductor is conducting an orchestra, how might they control the volume, tempo, or pitch through gestures? That problem was too complex for me to tackle in a week, so I narrowed it down to something more specific and small: how might we control music through gestures? My idea was to make different sounds based on distance, or proxemics, so that we could play the piano through gestures.
What if I could play the piano through gestures or body movements? Rather than playing the piano with fingers, I thought there might be a way for people to play it using their whole bodies. While thinking about this, I came up with the idea of mapping piano notes to distances: by moving our arms and hands, we could play with sounds.
Proof of Concept
In order to track distance, I used an ultrasonic sensor and an Arduino. The sensor wasn't accurate enough at longer ranges, but I found it to be quite precise at close range. I also tried using a potentiometer to change the distance value manually, but it didn't work very well, so I focused on the ultrasonic sensor alone.
To control the computer, I ran Python scripts from a terminal. Through Python, I could control the keyboard and mouse, which allowed me to play an online piano with simulated keystrokes.
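As a sketch of how the mapping could work, the logic below converts a distance reading into a piano key. The key characters, bin width, and range are my assumptions rather than the exact values used, and the serial/pyautogui calls appear only in comments.

```python
# Hypothetical sketch: map an ultrasonic distance reading (cm) to a piano key.
# The key characters mimic the keyboard shortcuts many online pianos use;
# the names and bin sizes here are assumptions, not the project's exact values.

NOTE_KEYS = ["a", "s", "d", "f", "g", "h", "j", "k"]  # one octave of keys

def note_for_distance(cm, bin_width=5, max_cm=40):
    """Return the key to press for a hand distance, or None if out of range."""
    if cm < 0 or cm >= max_cm:
        return None            # ultrasonic readings get unreliable far away
    return NOTE_KEYS[int(cm // bin_width)]

# In the real setup, each new serial reading from the Arduino would drive
# a keystroke, e.g. with pyserial and pyautogui:
#   import serial, pyautogui
#   port = serial.Serial("/dev/ttyUSB0", 9600)
#   cm = float(port.readline())
#   key = note_for_distance(cm)
#   if key:
#       pyautogui.press(key)
```

Keeping the mapping in a pure function like this makes it easy to test without the sensor attached.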
I have an idea for a final project; please let me know if you have any advice or feedback. I would greatly appreciate it!
I want to create a smartwatch interface for someone with anxiety and panic disorder. I plan to read real-time biometric data from sensors and use that data to trigger and display things in p5.js.
The watch has two modes: Normal Mode and Panic Mode. Normal Mode is a watch face that displays the time and date, along with the sensor data in an artistic, data-visualization style (I am thinking of something like a mood visualizer). Panic Mode can be triggered in two ways: a panic button the user presses, or sensor data that indicates the user is having a panic attack. In Panic Mode, the canvas cycles through the following anxiety-relieving techniques:
Deep Breathing Exercise: calming graphics that guide the user through a deep breathing exercise. I will use online resources, like WebMD's techniques for deep breathing, to figure out how the exercise needs to be structured in order to work.
Body Scan: using the body scan technique found here.
Distraction/Game Technique: a jigsaw puzzle or some other mind-occupying game that reduces stress but still lets you channel your overactive brain somewhere.
5 Senses Technique: using the 5 senses to ground you, as shown below:
If none of these techniques works, the watch enters a “call emergency contact” state, which calls someone you have designated as a person to reach out to. For example: “calling your mom…”
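A minimal sketch of this mode logic, assuming placeholder sensor names and thresholds (the real trigger values would have to come from the sensors and the specialist's guidance):

```python
# Hypothetical sketch of the watch's mode logic. The thresholds and the
# technique ordering are placeholders, not clinically validated values.

TECHNIQUES = ["breathing", "body_scan", "distraction", "five_senses"]

def detect_panic(heart_rate_bpm, gsr_microsiemens, button_pressed,
                 hr_threshold=110, gsr_threshold=12.0):
    """Panic Mode triggers on the panic button or on elevated sensor readings."""
    return button_pressed or (heart_rate_bpm > hr_threshold and
                              gsr_microsiemens > gsr_threshold)

def next_state(technique_index):
    """Cycle through the calming techniques; escalate once all are exhausted."""
    if technique_index + 1 < len(TECHNIQUES):
        return TECHNIQUES[technique_index + 1]
    return "call_emergency_contact"
```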
The biometric sensors I am thinking of using are a heart rate (PPG) sensor, a GSR sensor, and a respiratory rate sensor. I may not need the last one; I am waiting to confirm with a specialist…
I kept working with the Leap Motion sensor this week, too. The problem I focused on was recognizing hand signals for directing traffic. When road construction is going on or a traffic accident happens, it is common for ordinary people or police officers to temporarily direct traffic. This is dangerous and can cause additional accidents, because their signals can be hard to notice from far away.
I came up with multi-sensory feedback (for this assignment, auditory feedback) for hand signals on the road. A device attached to the front of a car could read the gestures of people on the road and make different sounds according to the signals, so a driver could recognize a gesture more precisely, easily, and quickly. I believe this system could help prevent accidents. It could also be attached to an autonomous car, which could automatically read and react to the signals.
Proof of Concept
Because of the limited options in the SDK, I could only implement pinch and fist gestures to make sounds. For the fist gesture, I used two different sounds depending on order: the first and second fist gestures sound different. Hand signals can be a series of movements, and this feature lets the system read a sequence and make a distinct sound for each step.
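The sequence idea can be sketched as a small mapping. Note the gesture names here are simplified labels rather than the Leap Motion SDK's actual API, and the sound file names are placeholders:

```python
# Hypothetical sketch of the gesture-to-sound mapping. "pinch" and "fist"
# are simplified labels for what the Leap Motion tracking reports; the
# sound file names are made up.

def sound_for_gesture(gesture, fist_count):
    """Return (sound, updated fist count). Repeated fists alternate between
    two sounds, so a series of movements reads as one composite signal."""
    if gesture == "pinch":
        return "pinch.wav", fist_count
    if gesture == "fist":
        fist_count += 1
        sound = "fist_a.wav" if fist_count % 2 == 1 else "fist_b.wav"
        return sound, fist_count
    return None, fist_count      # unrecognized gesture: stay silent
```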
This project was very challenging for me, so I decided to take a different approach to the problem. As an engineer, I find it very difficult to think about physical structures. Keeping this in mind, I designed my structure for form first, before deciding how I would use it, and then modified it into exactly what I needed.
A concept that is difficult for kids, and adults, is understanding how close they are to a goal. Visualization is often a good technique for understanding where you stand, so I decided to pair visualization with sound effects to help the user track their progress.
To accomplish this, I used an Arduino Uno, a push button, a servo motor, some marbles, and a glass bottle. The idea is that each time the user gets closer to their goal, they press the button and a marble is added to the bottle. This gives the user the satisfaction of pushing the button and hearing the clink of the marble in the bottle. Additionally, the user can always glance at the bottle to see their progress.
If this could be linked to your bank account, it could automatically add more marbles as you earn more money.
It could be time based: a countdown to your vacation.
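The bank-account idea could be sketched as a simple mapping from savings to marbles; the goal amount and bottle capacity here are made-up numbers:

```python
# Hypothetical sketch: how full should the bottle be for the current savings?
# The goal and capacity are placeholder values, not from the project.

def marbles_for_savings(saved_dollars, goal_dollars, bottle_capacity=50):
    """Return how many marbles should be in the bottle right now."""
    fraction = min(saved_dollars / goal_dollars, 1.0)   # cap at a full bottle
    return round(fraction * bottle_capacity)
```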
In today’s world, there are many carefully considered alarm tones designed to be played from mobile phones or other speakers to wake users up slowly and gradually. However, some people are such deep sleepers that these alarms have no effect, and something much stronger and more visceral is required.
With this project, I sought to create a visceral sound using kinetic output that is loud and jarring enough to wake even the deepest of sleepers. I drew on my own experience with balloons: when one pops, everyone in the room is stunned into silence. Using a servo motor with a pin attached as the actuator, I integrated a timer set by a simple potentiometer and a start push button. With these, users can set a timer that terminates in the loud popping of a balloon.
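A sketch of the timer logic, written in Python as a stand-in for the actual Arduino sketch; the 60-minute maximum is my assumption:

```python
# Hypothetical stand-in for the Arduino logic: the potentiometer's 10-bit
# reading (0-1023) scales to a timer length, and the pin-bearing servo
# fires when the countdown expires. The 60-minute cap is an assumption.

def timer_minutes(pot_reading, max_minutes=60):
    """Scale a 10-bit analog reading to a timer length in minutes."""
    return round(pot_reading / 1023 * max_minutes)

def should_pop(elapsed_seconds, set_minutes, started):
    """Swing the servo into the balloon once the countdown has run out."""
    return started and elapsed_seconds >= set_minutes * 60
```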
Problem: I’ve driven off after my friends have gotten stuff out of my car trunk but before they’ve closed it. Thinking about this problem, it’s baked into my usual wait period when dropping someone off, the weirdness of hearing the thunking of the trunk when you’re not the one opening or closing it, and the fact that it’s all directly behind you. Audio feedback would be a good way to help the driver tell the trunk’s state apart when someone else is operating it.
Solution: More or less, different audio cues based on trunk status. Traditionally there is a slap slap slap on the side of the car that means “you’re good to drive off now,” but this could be communicated better. It is important not to confuse the driver, though: the cues should be noticeably distinct from any “trunk open” or “door ajar” sounds they may also hear during normal trunk procedures. So, the system differentiates with a kinetic sound when the trunk is closed, and more electronic audio when the trunk is ajar or being fiddled with. The former is accomplished with a solenoid; the latter, with a normal speaker and, ideally, a weight sensor.
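The state-to-cue mapping might look like this; the cue names are placeholders standing in for the solenoid knock and speaker tones described above:

```python
# Hypothetical sketch: the trunk's state picks the output channel.
# "solenoid_knock" is the physical sound; the speaker tones are placeholders.

def trunk_cue(trunk_closed, weight_on_sill):
    """Choose the audio cue for the current trunk state."""
    if trunk_closed:
        return "solenoid_knock"        # kinetic sound: you're good to go
    if weight_on_sill:
        return "speaker_busy_tone"     # someone is still loading/unloading
    return "speaker_ajar_tone"         # open and idle: don't drive off yet
```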
Both in private and public settings, physically actuated faucets can be confusing, especially when users also encounter automatic/motion-actuated faucets in their daily lives (is this just me?). Remembering to shut off the faucet, especially completely, is often overlooked, and depending on how long it is ignored, it is a significant waste of water. Because the sound of running water can easily be tuned out given how often we encounter it, it is important to communicate this information to users, whether for the environment’s or the bill payer’s sake. In addition, most people (I hope) use soap when they wash their hands, but they don’t need the water running while they lather, so ideally the water should shut off briefly during that event to save water as well.
A General Solution:
A device that represents the flow of water with percussion to signify when it is still running. The tempo of the percussion should convey how much water is flowing, and ideally the sound should be distinct from the sound of water running from a faucet. The system should take other events into account as well, such as soap being dispensed. Potential augmentations could include sensing when people are in front of the sink versus leaning in to dispense soap, or otherwise tracking the user’s position to understand when they do or don’t need water.
Proof of Concept:
An Arduino with a potentiometer to represent the faucet being turned on/off, an LED to represent the water (whether it is running or not), and a switch/button to represent soap being dispensed. Rotating the potentiometer up makes a servo sweep faster, hitting straws to create a beat. The button overrides the potentiometer: while it is pressed, the LED switches off (the water pausing for soap); otherwise, the LED shows the potentiometer’s reading of how much water the user wants.
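A sketch of that logic in Python, as a stand-in for the Arduino code; the delay range and scaling are my assumptions:

```python
# Hypothetical stand-in for the Arduino loop: a higher potentiometer reading
# means a shorter delay between servo sweeps, i.e. a faster beat on the
# straws. The delay range (50-500 ms) is an assumption.

def sweep_delay_ms(pot_reading, min_ms=50, max_ms=500):
    """More water -> shorter delay between sweeps -> faster percussion."""
    if pot_reading <= 0:
        return None                     # faucet off: no beat at all
    return max_ms - (max_ms - min_ms) * pot_reading // 1023

def led_level(pot_reading, soap_pressed):
    """The LED stands in for water flow; dispensing soap pauses it."""
    return 0 if soap_pressed else pot_reading // 4   # scale 0-1023 to 0-255
```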
The Fritzing sketch shows how the potentiometer and button feed information into the Arduino, and how the LED and servo are connected to the Arduino to receive outputs. Not pictured: the Arduino would also have to be connected to a power source.
Proof of Concept Sketches:
The user turns on the faucet and is met with a drumming that conveys the flow of the water. If they are sensed to be dispensing soap, the water stops briefly. The system is meant to remind the user to make sure the faucet is completely closed when they leave. There are, however, many additional features that could be added as the scale of the intervention increases; for example, the drumming could be active only when someone is sensed to be present in the space.
Inspired by Jet’s examples of doorbells with physical chimes, and my last post about Ballet Mecanique, I wanted to make something both percussive and tonal that responded to a digital event stream. I thought it would be fun to reproduce something like the chimes used in theaters to signal the end of intermission, but have it triggered by my Google Calendar rather than an usher.
To accomplish this, I made a Python program based heavily on the quickstart offered by the Google Calendar API. It checks the time until the next meeting on my calendar, and when that meeting is a certain number of minutes away, it plays a chime to indicate the amount of time remaining (5, 4, 1, and 0 minutes were chosen arbitrarily).
The only hardware used is a pair of servo motors and the Sparkfun board connected to the laptop (for serial and power). When the board is sent a character corresponding to a note (A, B, C, D, E, or F… G was too far), it first rotates the base to the appropriate angle to line the mallet up with the note, and then the other motor strikes it. This lets the Python program dictate the note sequence as a simple string of characters, and does not require reprogramming the microcontroller to make new chimes.
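A minimal sketch of the two halves of this design; the chime strings and servo angles below are placeholders (the post only fixes the 5/4/1/0-minute marks and the A–F note range):

```python
# Hypothetical sketch: the Python side decides when to chime, and the
# note-to-angle table mirrors what the microcontroller does. The chime
# strings and 25-degree spacing are made-up placeholders.

CHIME_AT = {5: "ACE", 4: "AC", 1: "A", 0: "AAA"}         # minutes -> notes
NOTE_ANGLE = {n: i * 25 for i, n in enumerate("ABCDEF")}  # G was too far

def chime_for(minutes_left, already_played):
    """Return the note string to send over serial, at most once per mark."""
    if minutes_left in CHIME_AT and minutes_left not in already_played:
        already_played.add(minutes_left)
        return CHIME_AT[minutes_left]
    return None
```

The `already_played` set keeps the loop from re-triggering the same chime on every poll of the calendar.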
Problem: Doorbells are usually intrusive noises. They are sudden and loud, and they disturb the peace, or at least the peace hoped for, in your home. In addition, most doorbells convey only two states: someone is at the door or not. Because of this, guests usually must wait at the door until the host arrives, and the host must drop whatever they’re doing immediately to answer it. Is there a way to create a simple doorbell that does not alarm everyone in the house and also gives you a sense of where people are relative to the door?
General solution: A door chime system that creates calmer, less disturbing sounds to alert the host. The system can also relay more information: namely, how far away someone is from the door, which signals how much time the host has to finish whatever they are doing before making their guest wait. As a guest gets closer and closer to the door, the chimes grow louder and louder. Also, doorbells are traditionally known, at least at my house, for alarming dogs and setting them into a fit. To avoid this, the chimes can be switched to a “do not disturb” mode for when people aren’t home or are asleep, so they make no noise to disturb the peace.
Proof of Concept: My system utilized the following components:
Ultrasonic Sensor: used to measure distance to simulate guest proximity to door
Fan: the driver of the chimes, causing them to knock together and create noise, with the fan speed determined by the distance sensor's reading
Transistor: acts as a voltage gate to allow the fan to be controlled at various speeds
The fan has 4 different states of sound production: just fan noise with no air movement, light air with light chimes, medium air with medium chimes, and heavy air with heavy chimes
Push button: acts as an interrupt that changes the “mode” of the doorbell
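A sketch of the fan-state logic, assuming placeholder distance thresholds:

```python
# Hypothetical sketch of the doorbell's fan-state logic. The distance
# thresholds are placeholders; "do not disturb" (toggled by the push
# button's interrupt) silences the chimes entirely.

def fan_state(distance_cm, do_not_disturb):
    """Map guest proximity to one of the four chime intensities."""
    if do_not_disturb:
        return "off"
    if distance_cm > 150:
        return "fan_noise_only"    # fan spins but moves no air
    if distance_cm > 100:
        return "light_chimes"
    if distance_cm > 50:
        return "medium_chimes"
    return "heavy_chimes"          # guest is right at the door
```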
Using an Arduino and mechanical/kinematic devices, create sounds that map to machine states or input streams. Interrupt-driven sounds are also good. Create a language of sequences or sounds that you can easily demonstrate in class.
Look for diversity in sound sources: bells, knockers, spinning motors or steppers that drive devices that make repetitive sounds.