Sound Critique: Synthesthesia

Concept

Synesthesia is a neurological phenomenon in which stimulation of one sense produces a perception in another: a sound or note, for instance, evoking the perception of a color. The concept has been explored in works of art including Gershwin’s Rhapsody in Blue and the musically inspired paintings of Kandinsky and Picasso.

This project aims to evoke those same connections by controlling music with visual stimuli. Additionally, it has the potential to allow a visually impaired user to experience art through synthesized music. Using a camera and a computer synthesizer, Synthesthesia plays a simple musical composition with four parts, each with its amplitude controlled by aspects of the image captured by the camera.

The percussion is driven by the overall brightness of what the system sees. Bright images evoke louder drum beats, and as the light fades, the volume of the drum beat fades with it. Similarly, three different synth tracks are controlled by the red, green, and blue intensities in the image. Blue plays a sawtooth bass, green a sequence of bells, and red a synth lead.

Execution

The initial hope was to package the whole system into a handheld device with audio, video, processing, and a battery. However, the Raspberry Pi Zero was unable to handle the load. That said, there is little doubt that within the next five years the necessary processing power will fit easily in the palm of your hand.

For now, video is captured and audio synthesized on a laptop. A Python program using OpenCV takes in video from the webcam and measures the average brightness as well as the amount of red, green, and blue in the image. Those measurements trigger Sonic Pi to adjust the levels of the loops.

Demo

Tech stuff

In OpenCV it can be tempting to just grab the RGB (or, as OpenCV orders them, BGR) values, but these values swing more than you would expect with variations in brightness and shadow. Converting to Hue, Saturation, and Value (HSV) instead allows isolating a color (hue) range independently of brightness. From there, taking the average across the whole frame gives a usable level that can easily be scaled and passed into an OSC message.
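A minimal sketch of that idea, using only the standard library so it stands alone (the real program uses OpenCV’s `cvtColor` and sends the levels over OSC with a library such as python-osc; the function name and hue-band thresholds here are illustrative assumptions):

```python
import colorsys

def channel_levels(pixels):
    """pixels: iterable of (r, g, b) tuples in 0-255.
    Returns 0-1 levels for overall brightness and each hue band."""
    pixels = list(pixels)
    n = len(pixels)
    levels = {"brightness": 0.0, "red": 0.0, "green": 0.0, "blue": 0.0}
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        levels["brightness"] += v / n
        hue = h * 360
        # crude hue bands (degrees); weight by saturation * value so
        # gray or dark pixels contribute little to any color channel
        if hue < 30 or hue >= 330:
            levels["red"] += s * v / n
        elif 90 <= hue < 150:
            levels["green"] += s * v / n
        elif 210 <= hue < 270:
            levels["blue"] += s * v / n
    return levels
```

Each returned level can then be scaled and packed into an OSC message for Sonic Pi.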

On the synth side, Sonic Pi allows for creating multiple synced loops. In this case I made four: beats, red, green, and blue. Each listens for an OSC trigger and uses it to set the amplitude of its samples and synths. These are each saved as individual files. Red, green, and blue are tied to the beats loop for syncing, so it’s best to start each of them first; all will then trigger together when you run the beats loop.

Synthesthesia.zip

A device to help with a restful night of sleep

PREMISE

The sleep machines currently available are often not very interactive. You set a timer before bed and hope you fall asleep before it finishes, or you leave the machine on for the entire night. Another reason people tend to leave their sleep machines on all night is that they are afraid of loud noises waking them during the early hours of the morning.

PROPOSAL 

An interactive sleep machine which helps you fall asleep and then stay asleep! As the user falls asleep, soothing sounds play. Once the user has fallen asleep, the sound turns off. During the night, if external noises rise above a certain dB level, white noise begins to play to mask the noise.

PROOF OF CONCEPT

I used a pulse sensor, a speaker, and a microphone. The pulse sensor identifies what part of the sleep cycle the user is in. The microphone monitors the sound in the room to determine whether the speaker should be triggered. In fall-asleep mode, the speaker plays a calm tune, and once the pulse drops, the system moves into sleep maintenance.
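The two modes described above can be sketched as a small state machine; this is a hedged illustration, not the actual firmware, and the pulse and noise thresholds are made-up values:

```python
FALL_ASLEEP, MAINTAIN = "fall_asleep", "maintain"

def step(state, pulse_bpm, noise_db, asleep_bpm=60, noise_limit=50):
    """One pass of the control loop: given the current mode and sensor
    readings, return (next_state, speaker_action)."""
    if state == FALL_ASLEEP:
        if pulse_bpm < asleep_bpm:      # pulse drops: user is asleep
            return MAINTAIN, "off"
        return FALL_ASLEEP, "calm_tune"
    # maintenance mode: mask loud external noises with white noise
    if noise_db > noise_limit:
        return MAINTAIN, "white_noise"
    return MAINTAIN, "off"
```

On a microcontroller this would be called once per sensor sample, with the returned action driving the speaker.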


An Adaptive Home Alarm System

The Problem:

Current home alarm systems often use motion detectors positioned around the outside of the house to detect potential threats to a home’s safety. However, these systems rarely take into account parameters such as the detected motion’s speed, sound, and other patterns. Because of this, small animal movements and other anomalies can cause false alarms, making these systems unreliable.

Proposed Solution:

For this project, I chose to focus on speed as a specific use case for this outdoor alarm system. Depending on the speed of the motion detected by multiple break-beam sensors, the system emits different sound patterns to convey various levels of urgency. For example, a fast motion would likely be cause for alarm, so it is associated with the least pleasant sound, signifying that users may want to call the authorities. A slow motion, on the other hand, emits a less intense tone that tells users they may just need to check on what’s happening.
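A sketch of how two break-beam trip times could be turned into an urgency level; the sensor spacing and speed thresholds are illustrative assumptions, not values from the actual project:

```python
def urgency(t_first, t_second, spacing_m=0.5):
    """Estimate speed from the times (seconds) at which two break-beam
    sensors, spacing_m apart, were tripped; map it to an alarm sound."""
    dt = t_second - t_first
    if dt <= 0:
        return None                 # second beam never tripped (or noise)
    speed = spacing_m / dt          # meters per second
    if speed > 2.0:
        return "harsh_alarm"        # fast: users may want to call authorities
    if speed > 0.5:
        return "moderate_tone"
    return "gentle_tone"            # slow: probably just check on it
```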

Proof of Concept:

Arduino Code and Fritzing Sketch

everything is a musical instrument, just try not to eat it

What if every object in your home had a different pitch, and as you walk around and touch different things, you create a melody? Everything in the space around you can become part of the musical instrument!

Wait what?!

This instrument takes in different inputs and uses them to trigger notes. Each object can be assigned a note, and whenever you touch that object, that note plays. Once an object is connected, you can assign and reassign any note you want to it through the dial interface.

Each object creates a different note as you touch it.
Features:
  • 8 inputs to plug in your different objects.
  • 1 speaker to play the different notes.
  • 2 dials & a button to choose which object you want to change to which note.
Dial interface and how to set notes to different objects.

How does that even work…?

Here, I used an orange, a banana, a compass, and my laptop case. You can use anything that conducts electricity! For the laptop case, or other non-conductive objects, you can add something like thin conductive tape to provide a conductive surface. Note that at the end of the video, the dials and the button are used to change a specific object’s assigned sound.
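The assignment logic can be sketched as follows; this is a hedged illustration of the dials-plus-button interaction (the class, note names, and default mapping are assumptions, not the project’s actual code):

```python
# Two dials select an input channel and a note; the button commits the
# choice. Touching an object then plays its currently assigned note.
NOTES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]

class TouchInstrument:
    def __init__(self, n_inputs=8):
        # default mapping: input i plays the i-th note of the scale
        self.assigned = {i: NOTES[i] for i in range(n_inputs)}

    def set_note(self, input_dial, note_dial):
        """Button press: map the selected input to the selected note."""
        self.assigned[input_dial] = NOTES[note_dial]

    def touch(self, input_pin):
        """Return the note to play when this input is touched."""
        return self.assigned[input_pin]
```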

Code and Other Files

Crit #3 – Multiple Timers


Problem: When in the kitchen cooking a big meal (say, Thanksgiving dinner), I often have multiple timers going. Between the microwave, my phone, my roommate’s Google assistant, etc., they can be hard to manage, especially when many timers are physically locked to their positions on the appliances. This is an issue not only for those with low mobility; because each timer is often on a different type of interface (touch screen, keypad, twist timer), it can affect those with low dexterity as well. Timers should be manageable and adaptable to user needs.

Solution: I want to solve the problem in two ways: by combining the timers into one place, and by making the input methods modular so that users can select the input that works for them (e.g., lever, button pad, knob). Even though the timers are now co-located, the user should be able to easily discern which one is going off; this is done with unique audio cues for each. They should also be able to tell, by sound, which timer they have shut off and which are still going.

Proof of Concept: My proof of concept is a system of three timers with one on/off button, one knob to set the time, and one knob to select a timer. Each timer can be set and turned off independently. When a timer is going off, it adds to a melody built from the contributions of every timer currently sounding; each timer has a distinct sound. Users can also turn off all timers with a more complex input, so as not to do it accidentally. Ideally, I would extend this system with more modular input methods: keypad entry like on microwaves, an easier slider input for those who cannot twist knobs or press small keys, and ideally even voice input. This customizability is not shown in the proof of concept, but the code framework can certainly support it.
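The co-located timer idea can be sketched like this; a hedged illustration (the voice names and class shape are assumptions, not the actual project code):

```python
class TimerBank:
    """Three timers; each expired timer adds its own distinct voice
    to one combined melody until it is silenced."""
    VOICES = ["bell", "chime", "marimba"]

    def __init__(self):
        self.remaining = [None, None, None]  # seconds left, or None
        self.ringing = set()                 # indices currently going off

    def set_timer(self, idx, seconds):
        self.remaining[idx] = seconds
        self.ringing.discard(idx)

    def tick(self, dt=1):
        """Advance all running timers; expired ones join the melody."""
        for i, r in enumerate(self.remaining):
            if r is None:
                continue
            r -= dt
            if r <= 0:
                self.remaining[i] = None
                self.ringing.add(i)
            else:
                self.remaining[i] = r

    def melody(self):
        """Voices of every timer currently going off."""
        return [self.VOICES[i] for i in sorted(self.ringing)]

    def silence(self, idx):
        self.ringing.discard(idx)
```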

Files + Code + Video

Desk Alarm Clock

Problem:

For people who work at desks, it can be hard to recognize when one is tired or needs a break. In these scenarios, they might be inclined to push themselves too hard and damage their health for the sake of productivity. In a state of extreme exhaustion, it is very easy to make simple mistakes that could otherwise have been avoided, or to be unproductive in the long run. With a cloudy mind, it can also be difficult to make clear, thought-out decisions. In essence, knowing when to rest versus when to continue can be impossible in certain work situations.

A General Solution:

A device that detects whether someone working at their desk is awake, dozing off, or asleep. If the person is awake, they get a periodic reminder to get up, stretch, and hydrate. If the person is dozing off, the alarm tries to wake them and encourages them to take a nap; a button lets the person signal that they are starting a nap, which sets a timer to wake them after a full REM cycle. If the person is already asleep, the device sets an alarm to wake them after a full REM cycle.

Proof of Concept:

An Arduino with an accelerometer represents the state of the user (awake, dozing, or asleep), with a button allowing the user to signal when they intentionally plan to take a nap. While the system can be fully functional in this form, it could also use a machine-learning approach with a camera to detect the three states. The system has two types of states: one in which the Arduino watches for whether the person is asleep or planning to sleep, and another in which it acts as a timer, counting down to sound an alarm.
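As a sketch of the decision logic, assuming a roughly 90-minute REM cycle (the state labels and nap button come from the write-up; the function and the reminder interval are illustrative assumptions):

```python
REM_CYCLE_MIN = 90  # assumed length of one full REM cycle, in minutes

def next_action(user_state, nap_button, minutes_awake=0):
    """Given the detected state and nap-button press, pick the device's
    next action as (action_name, countdown_minutes)."""
    if nap_button or user_state == "asleep":
        return ("countdown", REM_CYCLE_MIN)   # wake after one REM cycle
    if user_state == "dozing":
        return ("alarm_suggest_nap", 0)       # wake them, suggest a nap
    if minutes_awake >= 60:
        return ("stretch_reminder", 0)        # periodic break prompt
    return ("idle", 0)
```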

Fritzing Sketch:

The Fritzing sketch shows how the accelerometer and switch feed information into the Arduino, and how the speaker is connected to the Arduino to receive outputs. Not pictured is that the Arduino must be connected to the laptop, which houses the p5.js code. In addition, the potentiometer is included as a volume control (for my ears’ sake).

Proof of Concept Sketches:

The user’s pitch (i.e., their tilt, a proxy for their perceived state) is sensed using an accelerometer (or camera), which informs the device whether the person is awake, dozing off, or asleep. The user can also signal that they will be taking a nap, which likewise sets the alarm for a full REM cycle. This system could be further improved by taking into account one’s schedule, the frequency of naps, etc., to begin making other suggestions to the user.

Proof of Concept Using Accelerometer Video:

ml5.js Proof of Concept Video:

Files:

Sound Crit Final

Crit 03: Playing piano through gestures (body movements)

Problem

I started with a problem area related to sound. For example, when a conductor is conducting an orchestra, how might they control the volume, tempo, or pitch through gestures? That problem is too complex for me to tackle in a week, so I narrowed it down to a smaller, more specific one: how might we control music through gestures? I came up with making different sounds based on distance, so that we could play the piano through gestures.

General Solution

My idea: what if I could play the piano through gestures or body movements? Rather than playing the piano with fingers, there might be a way to enable people to play using their bodies. While thinking about it, I hit on the idea of mapping piano notes to distances: by moving our arms and hands, we could play with sounds.

Proof of Concept

To track distance, I used an ultrasonic sensor and an Arduino. The sensor wasn’t accurate enough at longer distances, but at close range I found it to be quite precise. I tried using a potentiometer to change the distance manually, but it didn’t work well, so I focused on the ultrasonic sensor alone.

To control the computer, I used the terminal to run Python code. Through Python, I could control the keyboard and mouse, which allowed me to play an online piano.
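A minimal sketch of the distance-to-note mapping, assuming an online piano driven by letter keys (the key bindings and distance bins are illustrative assumptions; the actual keypresses could be sent with a library such as pyautogui):

```python
KEYS = ["a", "s", "d", "f", "g", "h", "j", "k"]  # one octave of piano keys

def key_for_distance(cm, near=5, step=5):
    """Map an ultrasonic distance reading (cm) to a keyboard key;
    closer hand = lower note. Returns None when out of range."""
    if cm < near:
        return None                     # too close to read reliably
    idx = int((cm - near) // step)      # 5 cm per note
    return KEYS[idx] if idx < len(KEYS) else None
```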

Codes and Video

Jay_Crit_03_sound

Crit 2 – Stabilizing Device for Tremors

Premise

My family and I struggle with a progressive nervous system disorder that causes an essential tremor, which starts in your hands when you’re younger (i.e., me) and migrates throughout your body as you get older (i.e., my mom).

Proposal

For this project I wanted to look into ways to help stabilize objects you’re holding if you have a tremor. I made a device that uses an accelerometer to detect movement and offsets that movement with two servo motors that control the x and y rotations.

To do this I researched quaternions and spatial rotations.

There are three state options: stabilizer, which helps when you need to hold something still; pouring, which helps when you need to pour something; and normal, in which the device does nothing.
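The stabilizer state boils down to counter-rotating the servos against the sensed tilt. A hedged sketch of that idea (gains, limits, and the simple angle model are assumptions; a full solution would use the quaternion math mentioned above):

```python
def servo_angles(pitch_deg, roll_deg, center=90, gain=1.0, limit=45):
    """Compute x/y servo angles that counter-rotate against the tilt
    measured by the accelerometer, clamped to the servos' travel."""
    def clamp(a):
        return max(-limit, min(limit, a))
    # drive each servo opposite the sensed tilt so the object stays level
    x = center + clamp(-gain * pitch_deg)
    y = center + clamp(-gain * roll_deg)
    return x, y
```

In practice the raw accelerometer readings would be filtered first, which is exactly the jitter problem described below.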

Proof of Concept

Because of the complexity of offsetting movements, and because my physics background is limited, I found it really difficult to make the stabilizer and pouring states work together. Hence, the demonstration above only shows the stabilizing state.

I also struggled to tune the relationship between the input data and the sensitivity/stability of the device; in other words, I didn’t know how to keep the device from jittering while reading live data. For future iterations, learning how to filter and normalize the input data should help.

Fritzing Circuit Sketch

stabilizer_code

Critique 02: Assisting Individual Body Training through Haptics

Problem

When training and building muscle at a gym, it is really important to maintain accurate, balanced poses and gestures, not only to avoid injury but also to maximize the effect and keep muscles balanced. However, when we go to the gym by ourselves, it can be really difficult to tell whether we are using the equipment the right way.

Solution

I thought about a device (a smartphone with an armband, a smartwatch, or another device attached to the arm) that provides haptic feedback so that we can keep the right pose.

For example, when we are doing push-ups, the device checks the angle of our arms and the timing as we go down. After we go up and go down again, the device provides haptic feedback (vibration of varying intensity) to signal that we have to go down a bit further to reach the right angle. When we reach it, the device vibrates briefly to let us know we did well.
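The angle check above can be sketched as a small function; a hedged illustration in Python rather than the Swift proof of concept, with made-up target and tolerance values:

```python
def haptic_cue(arm_angle_deg, target_deg=90, tolerance=10):
    """Compare the measured arm angle against the target depth and pick
    a haptic cue; intensity (0-1) scales with how far off the rep is."""
    error = arm_angle_deg - target_deg
    if abs(error) <= tolerance:
        return "short_buzz"                      # good rep: brief confirmation
    if error > 0:
        # not low enough: vibration intensity grows with the error
        return ("go_lower", min(1.0, error / 45))
    return ("too_low", min(1.0, -error / 45))
```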

Proof of Concept

I coded in Swift, using the gyro sensors in the iPhone. I believe the idea could be developed much further: it could check the angles of various parts of the body during exercise, or even during stretching to increase flexibility. Also, if it kept collecting data over time, it would come to understand our capability at a given exercise or muscle group, and could guide us to increase that capability at an appropriate tempo without harming our bodies.

Videos and Codes

degreeChecker – code


Emotional Haptic Feedback for the Blind

Problem:

Haptic feedback as a means of delivering information to the visually impaired isn’t a new concept in this class. Both in class assignments and in products that already exist in the real world, haptics have certainly become a proven tool. However, I feel there has not been much consideration of the more specific sensations and interactions that haptics can provide.

Proposed Solution:

With this project, I attempted to create a haptic armband that adds another dimension of feedback: spatial. By arranging haptic motors radially around the arm, I was able to control intensity, duration, and surface area to create different sensations. Controlling these variables, I recreated the sensations of a tap, a nudge, a stroke (radially), and a grab.

In terms of applications, timekeeping is a good illustration of how different sensations can play a role. A gesture such as a tap or nudge would suit a light reminder at the top of the hour, while a grab would be more suitable for an alarm or when a user is running late for an appointment. Other, more intricate gestures such as a radial stroke could calm users down in stressful situations.
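One way to encode those four sensations is as sequences of motor frames; a hedged sketch (the motor count, intensities, and timings are all illustrative assumptions, not the project’s actual values):

```python
N_MOTORS = 6  # haptic motors arranged radially around the arm

# each gesture is a list of (motor_indices, intensity 0-1, duration_ms)
GESTURES = {
    "tap":    [((0,), 0.8, 80)],                       # one motor, brief
    "nudge":  [((0,), 0.5, 250)],                      # softer, longer
    # stroke: fire the motors one after another around the arm
    "stroke": [((i,), 0.4, 120) for i in range(N_MOTORS)],
    # grab: every motor at once, strong and sustained
    "grab":   [(tuple(range(N_MOTORS)), 1.0, 500)],
}

def total_duration(gesture):
    """Total playback time of a gesture in milliseconds."""
    return sum(frame[2] for frame in GESTURES[gesture])
```

A driver loop would simply play each frame's motors at its intensity for its duration.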

Proof of Concept:

Arduino Code and Fritzing Sketch