Crit #3: Vision Impaired Evacuation Demo

Problem

For users who are unfamiliar with a building, or who are visually impaired, exiting during an emergency can be hazardous and confusing. Most of the time, exits are signaled by glowing “EXIT” signs with symbols next to them. How can such users exit a building in a calm and organized fashion?

Solution

A tactile and auditory guidance system based on sensing user location would afford the visually impaired, and those unfamiliar with their surroundings, a sense of direction when an alarm goes off. Sequenced vibrations would lead users to speakers, which would continue to give instructions, leading to the building exit.

Proof of Concept

A series of sensors (pushbuttons in this case) indicates checkpoints that trigger the exit path. An array of tactors allows users to feel directed vibrations in their feet (or arms, if attached to something akin to a guiderail). At each checkpoint, a speaker plays a prerecorded message instructing users where to go next.
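
As a rough illustration of the sequencing logic (not the actual demo code), a hardware-agnostic Python sketch might look like this; read_checkpoint, pulse_tactor, and play_message are hypothetical stand-ins for the pushbutton, tactor, and speaker I/O:

```python
import time

# Ordered checkpoints along the exit path; each pairs a tactor direction
# cue with a prerecorded audio message for the speaker at that checkpoint.
CHECKPOINTS = [
    {"id": 0, "tactors": ["left", "forward"], "message": "hallway.wav"},
    {"id": 1, "tactors": ["forward", "forward"], "message": "stairs.wav"},
    {"id": 2, "tactors": ["right"], "message": "exit_door.wav"},
]

def guide_to_exit(read_checkpoint, pulse_tactor, play_message):
    """Lead the user checkpoint by checkpoint until the exit is reached."""
    for checkpoint in CHECKPOINTS:
        # Until the user trips this checkpoint's sensor, keep repeating
        # the directional vibration sequence so they can follow it.
        while not read_checkpoint(checkpoint["id"]):
            for direction in checkpoint["tactors"]:
                pulse_tactor(direction, duration_s=0.3)
            time.sleep(1.0)
        # At the checkpoint, the speaker gives the next spoken instruction.
        play_message(checkpoint["message"])
```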

crit3_movie

crit3

Sound Crit: Airplane Headphones

Problem: Headphones on Planes

Description: Does anyone not wear headphones on a plane anymore? Whether you are listening to an audiobook, a movie, or white noise while trying to fall asleep, people are constantly connected to something in the air, and that can make lots of jobs difficult.

  • Seat mate: if you want to get out, but do not want to tap the person next to you to interrupt them, what do you do?
  • Flight attendant: do you need to ask everyone to take their headphones out every time you walk by for drink orders, pretzels or trash?
  • Captain: should you even give announcements if most people are not listening in the first place?
  • Flyer: when do I turn my music down to get information?

Since I do not see people returning to headphone-less days on planes in the next five years, I think our headphones need to be smarter.

General Solution: Headphones that adjust their own volume based on the situation.

Proof of Concept: Using a Unity build to step through multiple situations, I have an interactive flight attendant and another passenger who both move up and down the rows, causing different levels of audio feedback. The other flyer does not change the volume, since they are likely just going to the bathroom or stretching, but the flight attendant gradually lowers the volume as they approach, in case they need to talk to you. Depending on their speed, that gradient can also change: the faster they are walking, the more likely they are just heading to the other end of the plane rather than stopping en route. When the attendant stops at your seat, looking at them cuts your music out completely; it returns when you look away.

Finally, toggled in the demo by a key-click, there is a sleep mode: if you are asleep and want to stay asleep, your music will not change at all as the attendants walk through. In this mode, the only thing that changes your volume is a captain's announcement, because of its safety implications.

In real life, rather than a computer simulation, each flight attendant would be outfitted with some kind of RFID tag/sensor that communicates with a corresponding sensor in each seat. Since your phone is tied to your seat number through your reservation, the two would sync to provide accurate location data.
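
To make the behavior concrete, here is a hedged Python sketch of the volume rules described above (the demo itself is a Unity build; the numeric thresholds here are invented for illustration):

```python
def target_volume(person, distance_m, speed_m_s, looking_at_them,
                  sleep_mode, captain_announcing):
    """Return a music volume in [0.0, 1.0] for the current cabin situation."""
    if captain_announcing:
        return 0.0          # safety announcements always duck the music
    if sleep_mode:
        return 1.0          # do-not-disturb: ignore attendants entirely
    if person != "attendant":
        return 1.0          # another flyer passing by changes nothing
    if looking_at_them and distance_m < 1.0:
        return 0.0          # eye contact at your row cuts the music out
    # A fast-walking attendant is probably passing through, so fade less.
    fade_radius_m = 2.0 if speed_m_s > 1.2 else 4.0
    return min(1.0, distance_m / fade_radius_m)
```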

Unity files: https://drive.google.com/open?id=1OzPjUNTmd8tUH93fNJEz809hihKvDIr7

Sound Critique: Synthesthesia

Concept

Synesthesia is a neurological phenomenon in which stimulation of one sense creates the perception of a different sense in the brain; for instance, a sound or note giving the perception of a color. This concept has been explored in works of art including Gershwin’s Rhapsody in Blue and the musically inspired paintings of Kandinsky and Picasso.

This project aims to evoke those same connections by controlling music with visual stimuli. Additionally, it has the potential to allow a visually impaired user to experience art through synthesized music. Using a camera and a computer synthesizer, Synthesthesia plays a simple musical composition with four parts, each with its amplitude controlled by an aspect of the image captured by the camera.

The percussion is driven by the overall brightness of what the system sees: bright images evoke louder drum beats, and as the light fades, the volume of the drums fades with it. Similarly, three synth tracks are controlled by the red, green, and blue intensities in the image. Blue plays a sawtooth bass sound, green a sequence of bells, and red a synth lead.

Execution

The initial hope was to package the whole system into a handheld device with audio, video, processing, and a battery. However, the Raspberry Pi Zero was unable to handle the load. That said, there is no doubt that in the next five years the necessary processing power can be easily placed in the palm of your hand.

For now, the video is processed and the audio synthesized on a laptop. A Python program using OpenCV takes in video from the webcam and measures the average brightness, as well as the amounts of red, green, and blue in the image. Those levels trigger Sonic Pi to adjust the loops.

Demo

Tech stuff

In OpenCV it can be tempting to just grab the RGB (or, as OpenCV orders them, BGR) values, but these values swing more than you would expect with variations in brightness and shadow. Instead, converting to Hue, Saturation, and Value (HSV) allows isolating a color (hue) range independent of brightness. From there, taking the average across the whole frame gives a pretty good level that can easily be scaled and passed into an OSC message.
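
A minimal sketch of that measurement loop, assuming the python-osc package and Sonic Pi's default OSC port (4560); the hue bands here are rough placeholders that would need tuning for your lighting:

```python
import cv2
import numpy as np
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

# Sonic Pi 3+ listens for incoming OSC cues on localhost:4560 by default.
osc = SimpleUDPClient("127.0.0.1", 4560)
cap = cv2.VideoCapture(0)

# Approximate OpenCV hue ranges (H runs 0-179); tune for your lighting.
HUE_RANGES = {"red": (0, 10), "green": (50, 70), "blue": (110, 130)}

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Overall brightness (mean of the V channel, scaled to 0-1) drives the beats.
    brightness = float(np.mean(hsv[:, :, 2])) / 255.0
    osc.send_message("/beats", brightness)

    # Per-color levels: fraction of pixels whose hue falls in each band.
    for name, (lo, hi) in HUE_RANGES.items():
        mask = cv2.inRange(hsv, (lo, 80, 80), (hi, 255, 255))
        level = float(np.count_nonzero(mask)) / mask.size
        osc.send_message(f"/{name}", level)
```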

On the synth side, Sonic Pi allows for creating multiple synced loops. In this case I made four: beats, red, green, and blue. Each listens for an OSC trigger and uses it to set the amplitude of its samples and synths. These are each saved as individual files. The red, green, and blue loops are tied to the beats loop for syncing, so it’s best to start each of them first; all will then trigger together when you run the beat loop.

Synthesthesia.zip

A device to help with a restful night of sleep

PREMISE

The sleep machines currently available are often not very interactive. You set a timer before bed and hope you fall asleep before it finishes, or you leave the machine on for the entire night. Another reason people leave their sleep machines on all night is fear that loud noises will wake them during the early hours of the morning.

PROPOSAL 

An interactive sleep machine that helps you fall asleep and then stay asleep! As the user falls asleep, soothing sounds play. Once the user has fallen asleep, the sound turns off. During the night, if external noise rises above a certain dB level, white noise begins to play to mask it.

PROOF OF CONCEPT

I used a pulse sensor, a speaker, and a microphone. The pulse sensor identifies what part of the sleep cycle the user is in. The microphone monitors the sound in the room to decide whether the speaker should be triggered. In the fall-asleep mode, the speaker plays a calm tune, and once the pulse drops, the system moves into sleep maintenance.
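
A simplified Python sketch of the two-mode logic; read_bpm and read_db are hypothetical stand-ins for the pulse sensor and microphone, and the thresholds are placeholders rather than measured values:

```python
import time

RESTING_BPM = 60   # a pulse below this is treated as "asleep" (placeholder)
NOISE_DB = 55      # external noise above this triggers white noise (placeholder)

def sleep_machine(read_bpm, read_db, play, stop):
    state = "falling_asleep"
    play("calm_tune.wav")          # soothe the user while they drift off
    while True:
        if state == "falling_asleep":
            if read_bpm() < RESTING_BPM:
                stop()             # user is asleep: go quiet
                state = "maintenance"
        elif state == "maintenance":
            # Mask loud external noises with white noise; otherwise stay silent.
            if read_db() > NOISE_DB:
                play("white_noise.wav")
            else:
                stop()
        time.sleep(1.0)
```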

 

 


everything is a musical instrument, just try not to eat it

What if every object in your home had a different pitch, so that as you walked around and touched different things, you created a melody? Everything in the space around you can become part of the musical instrument!

Wait what?!

This instrument takes in different inputs and uses them to trigger notes. Each object can be assigned a note, and whenever you touch that object, that note plays. Once the object is connected, you can assign and reassign any note you want to that object through a dial.

Each object creates a different note as you touch it.
Features:
  • 8 inputs to plug in your different objects.
  • 1 speaker to play the different notes.
  • 2 dials & a button to choose which object you want to change to which note.
Dial interface and how to set notes to different objects.

How does that even work…?

Here, I used an orange, a banana, a compass, and my laptop case. You can use anything that conducts electricity! For the laptop case, or other non-conductive objects, you can add something like thin conductive tape to make them conductive. Note that at the end of the video, the dials and the button are used to change a specific object’s assigned sound.
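
For the curious, the assignment-and-trigger logic boils down to something like this hardware-agnostic Python sketch; read_touch, read_dials, button_pressed, and play_note are hypothetical stand-ins for the actual I/O:

```python
NOTES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]

# Default mapping: input channel -> note name, one per plugged-in object.
assignments = {channel: NOTES[channel] for channel in range(8)}

def instrument_loop(read_touch, read_dials, button_pressed, play_note):
    while True:
        # Dial 1 selects an input channel, dial 2 a note; the button commits.
        channel_dial, note_dial = read_dials()   # each scaled to 0..7
        if button_pressed():
            assignments[channel_dial] = NOTES[note_dial]
        # Any touched object plays its currently assigned note.
        for channel in range(8):
            if read_touch(channel):
                play_note(assignments[channel])
```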

Code and Other Files

Desk Alarm Clock

Problem:

For people who work at desks, it can be hard to tell when one is tired or needs a break. In these scenarios, they might be inclined to push themselves too hard and damage their health for the sake of productivity. In a state of extreme exhaustion, it is very easy to make simple mistakes that could otherwise have been avoided, or to be unproductive in the long run. With a cloudy mind, it can also be difficult to make clear, thought-out decisions. In essence, knowing when to rest versus when to continue can be impossible in certain work situations.

A General Solution:

A device that detects whether someone is awake, dozing off, or asleep while working at their desk. If the person is awake, they get a periodic reminder to get up, stretch, and hydrate. If the person is dozing off, the alarm tries to wake them and encourages them to take a nap; a button lets the person signal that they are starting a nap, which sets a timer to wake them after a full REM cycle. If the person is asleep, the device sets an alarm to wake them after a full REM cycle.

Proof of Concept:

An Arduino with an accelerometer represents the state of the user (awake, dozing, or asleep), with a button that lets the user signal when they intentionally plan to take a nap. While the system can be fully functional in this form, it could also use a machine-learning strategy with a camera to detect the three states. The system has two types of states: one in which the Arduino watches for whether the person is asleep or planning to sleep, and another in which it acts as a timer, counting down to sound an alarm.
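
A rough Python sketch of those two modes; classify_state stands in for the accelerometer (or ml5.js camera) classifier, and the 90-minute REM cycle length is a common approximation, not a measurement:

```python
import time

REM_CYCLE_S = 90 * 60         # one full sleep cycle, roughly 90 minutes
STRETCH_INTERVAL_S = 30 * 60  # remind an awake worker every 30 minutes

def desk_alarm(classify_state, nap_button_pressed, sound_alarm, remind):
    last_reminder = time.monotonic()
    while True:
        state = classify_state()      # "awake", "dozing", or "asleep"
        if nap_button_pressed() or state == "asleep":
            # Timer mode: count down a full cycle, then wake the user.
            time.sleep(REM_CYCLE_S)
            sound_alarm()
        elif state == "dozing":
            sound_alarm()             # rouse them and suggest a real nap
        elif state == "awake":
            if time.monotonic() - last_reminder > STRETCH_INTERVAL_S:
                remind("stand up, stretch, and hydrate")
                last_reminder = time.monotonic()
        time.sleep(1.0)
```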

Fritzing Sketch:

The Fritzing sketch shows how the accelerometer and switch feed information into the Arduino, and how the speaker is connected to the Arduino to receive outputs. Not pictured is that the Arduino must be connected to the laptop that houses the p5.js code. In addition, the potentiometer is included as volume control (for my ears’ sake).

Proof of Concept Sketches:

The user’s head pitch (and thus their perceived state) is sensed using an accelerometer (or camera), which tells the device whether the person is awake, dozing off, or asleep. The user can also signal that they will be taking a nap, which likewise sets the alarm for a full REM cycle. This system could be further improved to take into account one’s schedule, the frequency of naps, etc., and begin making other suggestions to the user.

Proof of Concept Using Accelerometer Video:

ml5.js Proof of Concept Video:

Files:

Sound Crit Final

Crit 03: Playing piano through gestures (body movements)

Problem

I started with a problem area related to sound. For example, when a conductor is conducting an orchestra, how might they control the volume, tempo, or pitch through gestures? However, that problem was too complex for me to tackle in a week, so I narrowed it down to a smaller, more specific one: how might we control music through gestures? I settled on making different sounds based on distance, or proxemics, so that we could play the piano through gestures.

General Solution

I came up with the idea of playing the piano through gestures or body movements. Rather than playing the piano with fingers, I thought there might be a way for people to play using their bodies. While thinking about it, I hit on the idea of mapping piano notes to distances: by moving our arms and hands, we could play with sounds.

Proof of Concept

To track distance, I used an ultrasonic sensor and an Arduino. It wasn't accurate enough at longer distances, but I found it to be quite precise up close. I tried using a potentiometer to change the distance manually, but that didn't work well, so I focused on the ultrasonic sensor.

To control the computer, I used the terminal to run Python code. Through Python, I could control the keyboard and mouse, which allowed me to play an online piano with simulated keystrokes.
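
The Python side boils down to something like the sketch below, assuming the pyserial and pyautogui packages; the serial port name and the distance-to-key bands are placeholders for whatever the Arduino actually reports:

```python
import serial     # pip install pyserial
import pyautogui  # pip install pyautogui

# Online-piano keys mapped to distance bands (cm) from the ultrasonic sensor.
DISTANCE_KEYS = [(10, "a"), (20, "s"), (30, "d"), (40, "f"), (50, "g")]

port = serial.Serial("/dev/ttyUSB0", 9600)  # adjust for your Arduino's port

while True:
    line = port.readline().decode(errors="ignore").strip()
    if not line:
        continue
    try:
        distance_cm = float(line)   # assumes the Arduino prints one reading per line
    except ValueError:
        continue
    # Press the key for the first band the measured distance falls into.
    for limit, key in DISTANCE_KEYS:
        if distance_cm <= limit:
            pyautogui.press(key)
            break
```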

Codes and Video

Jay_Crit_03_sound