Will you shut up, man?

Problem: Aggressive, domineering speakers often create a hostile environment (both consciously and unconsciously) by interrupting others during discussions, meetings, and, most recently, the presidential debate.

Many such speakers are unaware that they are constantly speaking over others, and they do not respond to in-situ intervention while they are exhibiting this behavior. Because of confirmation bias, they also distrust others’ claims that they are frequently being disruptive.

Solution: A tactile, haptic device in your pocket that responds to how long you have been interrupting someone. Like someone kicking you under the table when you’re saying something you shouldn’t, it delivers a metaphorical “kick under the table” to make the speaker aware of their interruption.

When you first start speaking while someone else is speaking, you receive a single tap as a reminder. As more time elapses, the taps increase in frequency (and, optionally, magnitude; the mini solenoid in our kit doesn’t allow the force to be changed). This should impress upon the speaker the severity of the interruption: the longer it goes on, the more rude and disruptive it is, and the less likely it is to be constructive. If the interruption exceeds five minutes, the device should simply switch off the speaker’s mic.
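To make that behavior concrete, here is a minimal Arduino-style sketch of the escalation logic. The pin assignments, the thresholds, and the two-mic overlap detection are assumptions for illustration, not the actual schematic.

```cpp
// Hypothetical escalating-tap sketch: all pins, thresholds, and the overlap
// detection are assumptions, not part of the actual schematic.
const int SOLENOID_PIN  = 9;       // mini solenoid (driven through a transistor)
const int MIC_RELAY_PIN = 8;       // relay that can cut the speaker's mic
const int MIC_A_PIN = A0;          // envelope of the wearer's mic (assumed wiring)
const int MIC_B_PIN = A1;          // envelope of the other speaker's mic
const unsigned long CUTOFF_MS = 5UL * 60UL * 1000UL;  // 5 minutes -> mic off

unsigned long interruptStart = 0;  // when the overlap began (0 = no overlap)
unsigned long lastTap = 0;

// Crude overlap detection: both mic envelopes above a fixed threshold.
bool bothSpeaking() {
  return analogRead(MIC_A_PIN) > 300 && analogRead(MIC_B_PIN) > 300;
}

void setup() {
  pinMode(SOLENOID_PIN, OUTPUT);
  pinMode(MIC_RELAY_PIN, OUTPUT);
  digitalWrite(MIC_RELAY_PIN, HIGH);            // mic on by default
}

void loop() {
  unsigned long now = millis();

  if (!bothSpeaking()) {                        // overlap ended: reset
    interruptStart = 0;
    digitalWrite(MIC_RELAY_PIN, HIGH);
    return;
  }

  if (interruptStart == 0) interruptStart = now;
  unsigned long elapsed = now - interruptStart;

  if (elapsed > CUTOFF_MS) {                    // past 5 minutes: switch the mic off
    digitalWrite(MIC_RELAY_PIN, LOW);
    return;
  }

  // Tap period shrinks from ~2 s at the start of the interruption to ~0.25 s,
  // so the "kick under the table" gets more insistent the longer it goes on.
  unsigned long step = elapsed / 100UL;
  unsigned long period = (step >= 1750UL) ? 250UL : 2000UL - step;
  if (now - lastTap >= period) {
    digitalWrite(SOLENOID_PIN, HIGH);           // one quick tap
    delay(30);
    digitalWrite(SOLENOID_PIN, LOW);
    lastTap = now;
  }
}
```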

Schematic

Application: Outside of the presidential debate, this kinetic feedback is a discreet way to help speakers monitor their own speaking (and interrupting) behavior and create a more inclusive environment.

As we have transitioned to entirely online meetings with COVID-era work from home, people have become more conscious about turn-taking when speaking, since the visual and auditory cues that signal when someone might want to interject are no longer visible.

Hands-free manipulation of underwater soft robotics

My idea is to use motion/distance sensors to manipulate the locomotion patterns of a soft robotic fish tail. Using simple gestures, such as swiping left/right at different rotation angles or presenting different values to two distance sensors alternately, one could control the morphing of the fluidic elastomer actuators and thereby make the tail swim in different patterns (straight, turn right/left, big/small maneuvers).

At a larger scale, and at a later stage, this semi-autonomous fish tail could be designed and fabricated for motion-impaired people who have difficulty exploring their body’s motion underwater. With a personal trainer externally manipulating the prosthetic tail, their body could develop different speeds, turns, and navigation in a dolphin-kick style.

The fish’s body is composed of two fluidic elastomer actuators (with chambers) separated by a constraint layer. The constraint layer restricts each actuator’s deformation, and as a result the tail bends.

Fabrication Process
Chambers and deformation

Liquid transfers from one chamber into the other through a gear pump, in which two DC motors drive the gears in different directions.

 

Gear Pump Mechanism
Pulling/pushing water into different holes

The DC motors activate alternately and at different speeds according to the user’s input (right/left hand, rotation angle).
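For illustration only, here is a minimal Arduino-style sketch of how the two sensor readings might be mapped to the two pump motors. The pins, thresholds, and the exact sensor-to-speed mapping are assumptions, not the actual implementation.

```cpp
// Hypothetical gesture-to-pump mapping: pins, thresholds, and ranges are assumptions.
const int SENSOR_LEFT_PIN  = A0;  // distance sensor facing the left hand
const int SENSOR_RIGHT_PIN = A1;  // distance sensor facing the right hand
const int MOTOR_LEFT_PWM   = 5;   // DC motor pumping liquid toward one chamber
const int MOTOR_RIGHT_PWM  = 6;   // DC motor pumping liquid toward the other

void setup() {
  pinMode(MOTOR_LEFT_PWM, OUTPUT);
  pinMode(MOTOR_RIGHT_PWM, OUTPUT);
}

void loop() {
  // Closer hand -> higher reading -> faster pumping toward that side, so the
  // corresponding actuator inflates more and the tail bends that way.
  int left  = analogRead(SENSOR_LEFT_PIN);
  int right = analogRead(SENSOR_RIGHT_PIN);

  if (left > right + 50) {                        // left hand clearly closer: turn left
    analogWrite(MOTOR_LEFT_PWM, map(left, 0, 1023, 0, 255));
    analogWrite(MOTOR_RIGHT_PWM, 0);
  } else if (right > left + 50) {                 // right hand clearly closer: turn right
    analogWrite(MOTOR_RIGHT_PWM, map(right, 0, 1023, 0, 255));
    analogWrite(MOTOR_LEFT_PWM, 0);
  } else {                                        // similar readings: swim straight by
    analogWrite(MOTOR_LEFT_PWM, 180);             // alternating the two motors evenly
    analogWrite(MOTOR_RIGHT_PWM, 0);
    delay(500);
    analogWrite(MOTOR_LEFT_PWM, 0);
    analogWrite(MOTOR_RIGHT_PWM, 180);
    delay(500);
  }
}
```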

 

Assembly

Top View

 

Perspective View

Pet Pointer

 

A lazy dog

It is always great to have the companionship of a pet. However, keeping a pet can sometimes be hard for people with hearing difficulties. For example, dogs and cats like to run around when they play outside, or hide in corners of the house. People usually locate their pet by the sounds or noises it makes, which can be difficult for deaf people.

The vertical front view of the device

The solution I came up with is a pet locator that mechanically points in the direction of the pet. It has frequency sensors around and above its circular base to recognize any sound with a familiar frequency. The direction of the sound is then indicated by the hand on top, which has a “wrist” that can rotate and bend. The fingers do not move, to keep the device easy to build.
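A rough Arduino-style sketch of the pointing logic, assuming four sound sensors spaced 90° apart around the base and a single servo for the wrist rotation (the pins, threshold, and angles are illustrative assumptions):

```cpp
#include <Servo.h>

// Hypothetical pet-pointer sketch: sensor count, pins, and angles are assumptions.
const int NUM_SENSORS = 4;
const int SENSOR_PINS[NUM_SENSORS]   = {A0, A1, A2, A3};   // spaced 90° around the base
const int SENSOR_ANGLES[NUM_SENSORS] = {0, 90, 180, 270};  // direction each sensor faces
const int THRESHOLD = 400;            // ignore quiet ambient noise

Servo wrist;                          // rotates the hand toward the sound

void setup() {
  wrist.attach(9);
}

void loop() {
  // Find the sensor hearing the loudest sound above the threshold.
  int bestIdx = -1;
  int bestLevel = THRESHOLD;
  for (int i = 0; i < NUM_SENSORS; i++) {
    int level = analogRead(SENSOR_PINS[i]);
    if (level > bestLevel) {
      bestLevel = level;
      bestIdx = i;
    }
  }

  // Point the wrist toward that direction. A hobby servo only covers 0-180 degrees,
  // so directions behind the device are folded back in this simple sketch.
  if (bestIdx >= 0) {
    wrist.write(min(SENSOR_ANGLES[bestIdx], 180));
  }
  delay(100);
}
```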

The gesture of the hand when the pet is to the right.

Visual&Sound Cues => Mechanical

This is my dog and her favorite toy: a blue and green rubber ball. She has recently gotten into the habit of rolling the ball into places she can’t reach, usually under a desk or behind the TV set, when she wants us humans to play ball with her and we don’t give her attention. Then she whines until someone gets it for her, which can take minutes if no one decides to.

Retrieving the ball involves bending down, looking for it under the table, and getting it out with a 6 ft stick or by hand. And sometimes, when I have my headphones on, I don’t hear her whining.

This is frustrating for both my dog and my family. So I came up with the idea of a device that fits under the table: it detects the dog’s whining, moves under the table to find the ball, and pushes the ball out.
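As a rough behavior sketch only (the whine detection, drive commands, and ball sensor below are placeholders I am assuming, not a worked-out design), an Arduino-style version of that loop could look like this:

```cpp
// Hypothetical under-the-table ball retriever: every pin and threshold is an assumption.
const int MIC_PIN         = A0;   // sound sensor listening for the dog's whining
const int BALL_SENSOR_PIN = A1;   // e.g. an IR distance sensor that spots the ball
const int MOTOR_LEFT      = 5;    // drive motors (PWM)
const int MOTOR_RIGHT     = 6;
const int WHINE_THRESHOLD = 500;

void drive(int left, int right) {
  analogWrite(MOTOR_LEFT, left);
  analogWrite(MOTOR_RIGHT, right);
}

void setup() {
  pinMode(MOTOR_LEFT, OUTPUT);
  pinMode(MOTOR_RIGHT, OUTPUT);
}

void loop() {
  // 1. Stay parked until the dog whines.
  if (analogRead(MIC_PIN) < WHINE_THRESHOLD) {
    drive(0, 0);
    return;
  }

  // 2. Sweep under the table until the ball sensor sees something close.
  while (analogRead(BALL_SENSOR_PIN) < 300) {
    drive(150, 120);               // gentle arc so the robot covers the area
  }

  // 3. Push the ball straight out from under the table, then stop.
  drive(200, 200);
  delay(2000);
  drive(0, 0);
}
```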

 

Body recognition in HCI for diverse applications

A] Behance Portfolio Review Kinect Installation

An open-minded approach to natural user interface design turned a portfolio review into a memorable interactive event.

B] Stroke recovery with Kinect

The project aims to provide a low-cost home rehabilitation solution for stroke victims. Users will be given exercises that improve their motor functions, their activities will be monitored with the Kinect’s scanning ability, and a program will keep track of their progress.

This allows patients to recover at home under private care or with family, instead of in a hospital environment. Their recovery levels can be measured and monitored by the system, and researchers believe the game-like atmosphere will help patients recover faster.

 

C] Kinect Sign Language Translator

This system translates sign language into spoken and written language in near real time. This will allow communication between those who speak sign languages and those who don’t. This is also helpful to people who speak different sign languages – there are more than 300 sign languages practiced around the world.

The Kinect, coupled with the right program, can read these gestures, interpret them and translate them into written or spoken form, then reverse the process and let an avatar sign to the receiver, breaking down language barriers more effectively than before.

D] Retrieve Data During Surgery via Gestures

https://www.gestsure.com

 

 

“Fore!” Wristband

Background

“Nowadays, most golfers yell “fore” only after they’ve hit an errant shot toward an unsuspecting golfer, but the term which translates to “watch out!” or “heads up!” was originally intended to be used before teeing off.”

— quote from here

Golf balls can travel very fast through the air, and being hit by one can cause anything from severe swelling to permanent damage. When you are playing on the golf course, you yell “Fore!” when:

    • you didn’t wait for the group in front of you to leave (rude!)
    • you hit a really bad shot that is heading onto another fairway (not where your ball is supposed to go), and you are not sure whether there are people there.

In either case you might hit and injure them, or at the very least scare them.

When you hear “Fore!” on the golf course, there is usually less than a second for you to react. There are a few things that you can do:

    • identify where the sound comes from
    • hide behind a nearby golf cart or tree
    • put your hands on your head
    • stay low so that the ball does not hit your head

But it’s usually hard to identify where the sound came from within such a short period of time, so one typically attempts all of the above, very confused, and hopes for the best.

Problem

Since the early days of golf, people have relied on yelling and hearing “Fore!” to warn each other about a ball flying in their direction. This has issues:

    • People with hearing problems don’t receive the warning
    • There is not enough time to identify where the sound comes from and react

Proposed Solution

My proposed solution is a “Fore!” wristband. Everyone on the golf course would wear one, and it can do the following:

    • Sense when a “Fore!” warning is made
    • Communicate the location/direction of the warning to other wristbands
    • Display the direction of the warning

For a proof of concept, I will (I still don’t have my hardware yet) use an accelerometer to detect whether a swing was made, a sound sensor to detect whether there is loud yelling immediately after the swing, and an 8×8 LED matrix for the display.
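Here is a minimal Arduino-style sketch of that swing-then-yell detection. The pins, thresholds, and time window are assumptions for illustration, and the LED matrix display and wireless link are left out.

```cpp
// Hypothetical "Fore!" detection sketch: pins, thresholds, and timing are assumptions.
const int ACCEL_PIN = A0;        // analog accelerometer axis aligned with the swing
const int SOUND_PIN = A1;        // sound sensor output
const int ALERT_PIN = 13;        // stand-in for driving the 8x8 LED matrix / radio

const int SWING_THRESHOLD = 600;            // spike indicating a full swing
const int YELL_THRESHOLD  = 700;            // loud yell right after the swing
const unsigned long YELL_WINDOW_MS = 3000;  // how long after a swing a yell counts

unsigned long lastSwing = 0;

void setup() {
  pinMode(ALERT_PIN, OUTPUT);
}

void loop() {
  unsigned long now = millis();

  // A sharp acceleration spike marks a swing.
  if (analogRead(ACCEL_PIN) > SWING_THRESHOLD) {
    lastSwing = now;
  }

  // A loud yell within a few seconds of a swing is treated as a "Fore!" warning.
  bool recentSwing = (lastSwing != 0) && (now - lastSwing < YELL_WINDOW_MS);
  if (recentSwing && analogRead(SOUND_PIN) > YELL_THRESHOLD) {
    digitalWrite(ALERT_PIN, HIGH);  // the real device would broadcast the warning here,
    delay(1000);                    // and receiving wristbands would display its direction
    digitalWrite(ALERT_PIN, LOW);
    lastSwing = 0;
  }
}
```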

Discussion

The prototype design has a few flaws:

    • The sound sensor was chosen so that the golfer does not need to take any additional action to notify other golfers, but it only detects sound above a certain threshold, and it is common for golfers to yell things other than “Fore!” immediately after a swing. For surrounding golfers this should not be a big issue, because distance makes their yelling quieter; but for the golfer wearing the wristband, it raises the problem of sending false alarms to other golfers. Possible solutions are running speech recognition to confirm that the word is “Fore!”, or adding additional conditions to avoid false alarms.
    • In my prototype I omitted the wireless communication. Communication could cause problems if the wristbands were put to use in the real world, since it’s not uncommon for golf courses to have poor signal. Radio infrastructure set up and maintained by the golf course may be a good solution.

Critique 1: Visual

Crit Assignment: Visual Accessibility

Goal: Use vision to make something accessible to someone without hearing or a manual sense of touch. These do not have to be physical disabilities; it could be construction workers wearing hearing protection or heavy protective gloves that prevent them from having a sense of physical touch.

Find a problem and solve it.  The problem needs to have a context.