ROBO CAT!

Sometimes we need an animal for emotional support; however, not everyone can afford to have pets! Therefore, I introduce (part of) my robocat.

For cats, the tail is one of the biggest indicators of how they are feeling right now.

Unlike dogs, cats wag their tails when they are agitated. The more annoyed they are, the more they wag their tails. So I created a module that correlates petting with tail movement over time.

The button stands in for a body part. When it is pressed, it means that the cat is being petted. The LCD monitor literally displays the state of the cat.
The cat is petted twice (the button is pressed twice). The tail wags a little bit.
The cat is petted four times. The tail wags more, but still slowly.
The cat has been petted many times now and is starting to get annoyed.
You pet the cat too many times! The tail wags rampantly, and now the cat hates you.

But cats, unlike humans, are quick to forget.

After 10 seconds of being mad at you, the cat is now back to where she was.

(In the video, for convenience, I set the timer to 3 seconds.)

Video demonstration:

https://drive.google.com/file/d/1UfTICAObl-EV51BOliyEytrU68ZdZQGh/view?usp=sharing
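
For reference, here is a minimal Arduino-style sketch of the logic above: count button presses, wag a tail servo faster as the count grows, and reset after the forgetting window. The pin assignments, mood labels, thresholds, and wag speeds are illustrative guesses; only the press counting and the 10-second (3-second in the video) timer come from the description.

```cpp
// Minimal sketch of the tail logic, assuming an Arduino Uno, a pushbutton on
// pin 2 (to ground, internal pull-up), a tail servo on pin 9, and a 16x2 LCD
// on the usual LiquidCrystal pins. All pins, mood thresholds, and wag speeds
// are illustrative guesses.
#include <Servo.h>
#include <LiquidCrystal.h>

Servo tail;
LiquidCrystal lcd(12, 11, 5, 4, 3, 6);   // rs, en, d4, d5, d6, d7 (assumed)

const int BUTTON_PIN = 2;
const unsigned long FORGET_MS = 10000;   // 10 s in the write-up, 3 s in the video

int petCount = 0;
unsigned long lastPet = 0;
int lastButton = HIGH;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  tail.attach(9);
  lcd.begin(16, 2);
}

void loop() {
  int button = digitalRead(BUTTON_PIN);
  if (lastButton == HIGH && button == LOW) {  // a new press = one pet
    petCount++;
    lastPet = millis();
  }
  lastButton = button;

  // Cats forget quickly: clear the annoyance after the forgetting window.
  if (petCount > 0 && millis() - lastPet > FORGET_MS) {
    petCount = 0;
  }

  // Show the cat's state on the LCD.
  lcd.setCursor(0, 0);
  if (petCount == 0)      lcd.print("content   ");
  else if (petCount <= 2) lcd.print("happy     ");
  else if (petCount <= 4) lcd.print("twitchy   ");
  else if (petCount <= 6) lcd.print("annoyed   ");
  else                    lcd.print("hates you ");

  // Wag the tail: the more pets, the faster (and angrier) the wag.
  if (petCount == 0) {
    tail.write(90);                        // resting position, no wag
    delay(200);
  } else {
    int wagDelay = max(40, 400 - petCount * 50);
    tail.write(60);
    delay(wagDelay);
    tail.write(120);
    delay(wagDelay);
  }
}
```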

Intuitive TV (Volume) Remote

Problem:

When watching something on TV, the volume we want is highly dependent on what is on the screen. If there is an action scene, the volume is usually way too loud and needs to be lowered. If the scene is dialogue with some background music, the volume often needs to be raised to understand what is being said. This is something that I, my family, and my housemates have often faced, and it leads to us button-mashing the remote after certain scene changes.

Proposed Solution:

After raising and lowering the volume three times, holding the corresponding button for a second thereafter automatically takes the volume to the average of those lows or highs. The volume can still be adjusted from there by pressing the buttons, but being able to consistently go back and forth to a value would make things much more convenient. I chose three cycles of going back and forth because that gives a pretty good average, quite close to the volume actually wanted each time, and because three times is about when I want to just hold the button down and go back to the previous low/high.

Proof of Concept:

Here I used two tactile push buttons to represent the volume-up and volume-down buttons respectively. I chose not to go to volumes that were too high or too low so that the video could stay relatively short but still make the point. As can be seen, after cycling the volume up and down a few times, I long-hold the up button and it jumps to the average high. I then adjust it a bit lower to a medium volume, followed by going back to the high. Finally, I jump to the low volume. I chose to start at volume 30 arbitrarily here, but an actual TV would start at whatever the last volume was.
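
Below is a minimal sketch of how the two buttons and the long-hold behavior could be wired up on an Arduino. The pins, the debouncing, and the way turning points are recorded are assumptions for illustration; only the 1-second hold and the three-cycle rule come from the write-up, and a real remote would send IR codes instead of printing over serial.

```cpp
// Minimal sketch of the long-hold-to-average behaviour, assuming volume-up on
// pin 2 and volume-down on pin 3 (both to ground, using the internal pull-ups).
// The "volume" is printed over serial; a real remote would send IR codes.
const int UP_PIN = 2;
const int DOWN_PIN = 3;
const unsigned long HOLD_MS = 1000;

int volume = 30;                 // arbitrary starting point, as in the demo
int highs[3], lows[3];           // the last three turning points each way
int highCount = 0, lowCount = 0;
int lastDirection = 0;           // +1 while raising, -1 while lowering

int average(int *v) {
  return (v[0] + v[1] + v[2]) / 3;
}

void step(int pin, int direction) {
  if (digitalRead(pin) != LOW) return;   // this button is not pressed
  unsigned long start = millis();
  while (digitalRead(pin) == LOW) {
    if (millis() - start > HOLD_MS) {
      // Long hold: jump to the average high or low once we have three cycles.
      if (direction > 0 && highCount >= 3) volume = average(highs);
      if (direction < 0 && lowCount >= 3)  volume = average(lows);
      Serial.print("jump to "); Serial.println(volume);
      while (digitalRead(pin) == LOW) {}  // wait for release
      return;
    }
  }
  // Short press: a normal single step. If the direction just reversed, the
  // current volume was a turning point, so remember it.
  if (lastDirection != 0 && lastDirection != direction) {
    if (lastDirection > 0) highs[highCount++ % 3] = volume;
    else                   lows[lowCount++ % 3]  = volume;
  }
  lastDirection = direction;
  volume += direction;
  Serial.print("volume "); Serial.println(volume);
}

void setup() {
  pinMode(UP_PIN, INPUT_PULLUP);
  pinMode(DOWN_PIN, INPUT_PULLUP);
  Serial.begin(9600);
}

void loop() {
  step(UP_PIN, +1);
  step(DOWN_PIN, -1);
  delay(20);
}
```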

Assignment 6: Translate data into motion

Take physical input collected over time and turn it into an interactive movement related to the input data.  Interaction has emotional meaning; can you create an emotional data feed?

My doorbell example:  If someone presses my doorbell button multiple times, should the doorbell ring multiple times?  Or should it text me with a photo of who’s at the door?  If it’s a neighbor’s kid, should it text their parents and ask them to cut it out?  If it’s someone in my house and they have bags of groceries, and it’s raining outside, should it text/email me and flash all the lights in the house to let me know to let them in?
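
As a sketch of what that decision logic might look like in code, where presses over time become context rather than just repeated rings. The visitor categories and the responses are hypothetical placeholders taken straight from the questions above, not a real doorbell API.

```cpp
// The visitor categories and the helpers they imply (camera, texting, lights)
// are hypothetical; the point is that presses over time become context for
// choosing a response, not just repeated rings.
const int STRANGER = 0, NEIGHBOR_KID = 1, HOUSEMATE = 2;

void respond(int who, int pressCount, bool groceries, bool raining) {
  if (who == HOUSEMATE && groceries && raining) {
    Serial.println("text/email me and flash all the lights in the house");
  } else if (who == NEIGHBOR_KID && pressCount > 3) {
    Serial.println("text their parents and ask them to cut it out");
  } else if (pressCount > 1) {
    Serial.println("text me a photo of who's at the door");
  } else {
    Serial.println("ring the doorbell once");
  }
}

void setup() {
  Serial.begin(9600);
  respond(STRANGER, 1, false, false);       // one normal press
  respond(NEIGHBOR_KID, 6, false, false);   // button mashing
  respond(HOUSEMATE, 2, true, true);        // groceries in the rain
}

void loop() {}
```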

Class 10, 1 October notes

administrivia

voting – go vote!  If you think you’ll be late for class, please drop me email in advance.

celebrate non-US holidays:  Chuseok and other fall holidays are a perfectly fine reason to miss class

Update:  another banned assignment is navigation belts as they have been done to death.

Physical interaction with temperature

Adam Savage’s DIY costume cooling vest for cosplay and a commercial alternative.

Consider medical/physical safety with devices that touch the skin.  It’s really easy to get burned/frozen with Peltier plates and ice.  Instead of smoke, use soap bubbles and a fan.

Coaching vs. grading

Think about coaching: providing good feedback and encouragement to take a positive action instead of negative feedback.  In Total Control Training we use positive feedback and coaching.  Instead of saying, “You did that wrong, you were too slow going into the corner, do it again,” we say, “That’s good, now do it again with a little more throttle as you go into the corner.”

You want your interaction to be one that invites people and makes them want to interact.  If you do a good job, people will wait in line to use your interaction:  https://vimeo.com/3796361

Example: sports trainer that monitors your HR, BP, breathing rate, and hydration and knows your training course.  It encourages you to do better instead of punishing you for not doing enough.

Example: music “coach” that helps you learn to perform music. Watches your body and helps you correct form/posture.  Reminds you that you are always performing, even when you’re just practicing a scale or an etude.

Something to read over the break:  Alice Miller’s “For Your Own Good”, a critique arguing that we replace the pedagogy of punishment with support for learning, using the German pedagogy that fed the rise of fascism as one case study.

I see you

Problem:
One of the greatest pains of working in an open office is not knowing if or when someone is coming up behind you to speak to you. Because of how impersonal a giant office space can be, people often wear headphones (particularly noise-cancelling headphones) to create a personal sensory bubble where they are enveloped in their own ambient soundscape.

Often, because of how desks are arranged in long rows, the only way someone can approach you to speak to you is from behind or from the side, where your peripheral vision does not reach. Given that we can’t see who’s behind us (and now can’t hear them either), we often get startled, shocked, or surprised by incoming people.

Solution:

By installing a Kinect on top of the monitor, we can tell when someone is approaching and entering your personal space, ostensibly to try to talk to you.

This gives us information on the direction they’re approaching from (the left or the right) and how near they are.

To communicate this information, I’m going to use a pair of animatronic eyes installed atop the desk monitor. This spot is close enough to eye level that it won’t be obtrusive, yet any movement is easily registered since I’d be looking in that direction most of the time.

Eyes are an incredibly expressive part of human language and can convey lots of nuance with even the slightest inflection. This idea is inspired by the eyes on the billboard in The Great Gatsby, where the sense of being watched/seen evokes a different emotional response in the viewer.


If there’s no one within the space, the eyes remain closed.

If someone comes close enough on your left side, the eyes will open and look to the left. Ideally, it mirrors what happens when you are sitting across from someone at a diner and they see someone approaching you from behind.

Similarly, when someone approaches from the right, the eyes will open and look to the right.
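
A minimal sketch of the eye control follows, assuming the Kinect processing runs on a computer that sends a single character over serial ('L', 'R', or 'N') to an Arduino driving two servos, one for the lids and one for the gaze. The serial protocol, pins, and angles are placeholders, not a finished design.

```cpp
// Minimal sketch of the eye control, assuming the Kinect processing happens
// on a computer that sends one character over serial: 'L' (someone close on
// the left), 'R' (close on the right), 'N' (no one in the space).
#include <Servo.h>

Servo lids;   // one servo opening/closing both eyelids
Servo gaze;   // one servo panning both eyes left/right

void setup() {
  Serial.begin(9600);
  lids.attach(9);
  gaze.attach(10);
  lids.write(0);    // eyes closed while no one is around
  gaze.write(90);   // looking straight ahead
}

void loop() {
  if (!Serial.available()) return;
  char zone = Serial.read();

  if (zone == 'N') {           // no one within the space: eyes stay closed
    lids.write(0);
  } else if (zone == 'L') {    // approach from the left: open and look left
    lids.write(90);
    gaze.write(40);
  } else if (zone == 'R') {    // approach from the right: open and look right
    lids.write(90);
    gaze.write(140);
  }
}
```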

Hands-free manipulation of underwater soft robotics

My idea is to use motion/distance sensors to manipulate the locomotion patterns of a soft robotic fish tail. Using simple gestures, such as swiping left/right at different rotation angles or presenting different values to two distance sensors alternately, one could control the morphing of the fluidic elastomer actuators, causing the tail to swim in different patterns (swim straight, turn right/left, big/small maneuvers).

On a greater scale, and later on, this semi-autonomous fish tail could be designed and fabricated for motion-impaired people who have difficulty exploring their body’s motion underwater. With a personal trainer externally manipulating their prosthetic tail, their body could develop different speeds, turns, and ways of navigating using a dolphin-style kick.

The fish’s body is composed of two fluidic elastomer actuators (with chambers) separated by a constraint layer. The constraint layer resists each actuator’s deformation, and as a result the tail bends.

Fabrication Process
Chambers and deformation

The liquid transfers from one chamber into the other using a gear pump, where two DC motors cause the gears to rotate in different directions.

 

Gear Pump Mechanism
Pulling/pushing water into different holes

The DC motors activate alternately and at different speeds according to the user’s input (right/left hand, rotation angle).
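
A minimal sketch of that mapping, assuming two analog distance sensors (one per hand) and the two pump motors driven through an H-bridge; the sensor type, pins, and ranges are assumptions rather than the actual build.

```cpp
// Minimal sketch of the gesture-to-pump mapping, assuming two analog IR
// distance sensors (left/right hand) on A0/A1 and the two gear-pump DC
// motors driven through an H-bridge with PWM on pins 5 and 6.
const int LEFT_SENSOR = A0, RIGHT_SENSOR = A1;
const int LEFT_MOTOR = 5, RIGHT_MOTOR = 6;   // PWM pins on the motor driver

void setup() {
  pinMode(LEFT_MOTOR, OUTPUT);
  pinMode(RIGHT_MOTOR, OUTPUT);
}

void loop() {
  int left = analogRead(LEFT_SENSOR);    // closer hand = larger reading (assumed)
  int right = analogRead(RIGHT_SENSOR);

  // Whichever hand is closer decides which motor runs, i.e. which chamber the
  // pump fills; how close it is sets the speed, i.e. how hard the tail bends
  // (big vs. small maneuver).
  if (left > right) {
    analogWrite(LEFT_MOTOR, map(constrain(left, 200, 800), 200, 800, 60, 255));
    analogWrite(RIGHT_MOTOR, 0);
  } else {
    analogWrite(RIGHT_MOTOR, map(constrain(right, 200, 800), 200, 800, 60, 255));
    analogWrite(LEFT_MOTOR, 0);
  }
  delay(50);
}
```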

 

Assembly

Top View

 

Perspective View

Pet Pointer

 

A lazy dog

It is always great to have the companionship of a pet. However, keeping a pet can sometimes be hard for people with hearing difficulties.  For example, dogs and cats like to run around when playing outside or hide in corners of the house. People usually locate their pet by the sounds or noises it makes, and this can be difficult for deaf people.

The vertical front view of the device

The solution I came up with is a pet locator that mechanically points in the direction of the pet. It has frequency sensors around and above the circular base to recognize any sound with a familiar frequency. The direction of the sound is then indicated by the hand on top, which has a “wrist” that can rotate and bend. The fingers, however, do not move, to keep the device easier to make.

The gesture of the hand when the pet is to the right.
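
A minimal sketch of the pointing logic, assuming four analog sound sensors spaced around the base and one servo rotating the wrist toward the loudest one. Real frequency matching (so it only reacts to the pet) would need filtering on top of this simple loudness comparison; all pins and thresholds here are assumptions.

```cpp
// Minimal sketch of the pointing logic, assuming four analog sound-level
// sensors spaced 90 degrees apart around the base (A0–A3) and a single servo
// rotating the wrist toward the loudest one.
#include <Servo.h>

Servo wrist;
const int MIC_PINS[4] = {A0, A1, A2, A3};
const int ANGLES[4]   = {0, 90, 180, 270};   // direction each mic faces
const int THRESHOLD   = 200;                 // ignore quiet background noise

void setup() {
  wrist.attach(9);
}

void loop() {
  int loudest = -1, loudestLevel = THRESHOLD;
  for (int i = 0; i < 4; i++) {
    int level = analogRead(MIC_PINS[i]);
    if (level > loudestLevel) {
      loudestLevel = level;
      loudest = i;
    }
  }
  if (loudest >= 0) {
    // Point the hand toward the loudest direction. A standard servo only
    // covers 0–180 degrees; pointing behind would rely on the bending joint
    // mentioned in the write-up, which is omitted here.
    wrist.write(min(ANGLES[loudest], 180));
  }
  delay(100);
}
```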

Example: data over time to interact with

GIS: “A geographic information system (GIS) is a framework for gathering, managing, and analyzing data. Rooted in the science of geography, GIS integrates many types of data. It analyzes spatial location and organizes layers of information into visualizations using maps and 3D scenes. With this unique capability, GIS reveals deeper insights into data, such as patterns, relationships, and situations—helping users make smarter decisions.”

There are some examples at the link where GIS is used for monitoring changes and forecasting.

Visual&Sound Cues => Mechanical

This is my dog and her favorite toy: a blue and green rubber ball. She recently got into the habit of rolling the ball into places she can’t reach, usually under a desk or behind the TV set, when she wants us humans to play ball with her and we don’t give her attention. Then she whines until someone gets it for her, which can take minutes if no one decides to.

The process of retrieving the ball involves bending down, looking for the ball under the table, and getting it out with a 6 ft long stick or by hand. And sometimes, when I have my headphones on, I don’t hear her whining.

This is frustrating for both my dog and my family. So I came up with the idea of making a device that could fit under the table: it detects the dog’s whining, moves under the table to find the ball, and pushes the ball out.
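
A minimal sketch of the trigger half of that idea, assuming an analog sound sensor and two motors (drive and pusher) behind a motor driver. Actually locating the ball would need more sensing, so this version just runs a fixed sweep under the table when it hears sustained whining; all pins, thresholds, and timings are assumptions.

```cpp
// Minimal sketch: run a fixed "sweep and push" routine when sustained whining
// is heard on an analog sound sensor (A0). Motors are driven via PWM pins.
const int SOUND_PIN = A0;
const int DRIVE_MOTOR = 5, PUSHER_MOTOR = 6;
const int WHINE_LEVEL = 400;                 // loudness threshold (assumed)
const unsigned long WHINE_DURATION = 3000;   // must whine for ~3 s

unsigned long whineStart = 0;

void setup() {
  pinMode(DRIVE_MOTOR, OUTPUT);
  pinMode(PUSHER_MOTOR, OUTPUT);
}

void loop() {
  if (analogRead(SOUND_PIN) > WHINE_LEVEL) {
    if (whineStart == 0) whineStart = millis();
    if (millis() - whineStart > WHINE_DURATION) {
      // Sustained whining: sweep under the table and push the ball out.
      analogWrite(DRIVE_MOTOR, 200);
      delay(5000);                 // drive along the underside of the table
      analogWrite(DRIVE_MOTOR, 0);
      analogWrite(PUSHER_MOTOR, 255);
      delay(2000);                 // push whatever is in front of the paddle
      analogWrite(PUSHER_MOTOR, 0);
      whineStart = 0;
    }
  } else {
    whineStart = 0;
  }
  delay(50);
}
```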

 

Sound/Visual Cues Transformed into Mechanical

Honks can be a really important source of information for drivers. This is a problem for people who are deaf, or when someone is listening to loud music in the car. One possible solution is a visual cue, but driving already relies so heavily on visual input from the outside world and indicator lights on the dashboard that it would be overwhelming and not very helpful. Because of this, a mechanical notification, such as the steering wheel vibrating, would be a really helpful alternative. The wheel could vibrate at different frequencies based on the frequency of the honks, and this information would be passed on easily since at least one of the driver’s hands would be on the wheel.
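
One way this could work, sketched below: count honks over a short window with an analog sound sensor and pulse a vibration motor in the wheel rim faster when honks are more frequent. The sensor, pins, window length, and thresholds are all assumptions.

```cpp
// Minimal sketch: "frequency of the honks" is taken as how often a honk is
// detected in a short window; more honks per window = faster vibration pulses.
const int SOUND_PIN = A0;
const int MOTOR_PIN = 5;                   // vibration motor in the wheel rim
const int HONK_LEVEL = 500;                // loudness threshold (assumed)
const unsigned long WINDOW_MS = 2000;      // count honks over 2 s

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  // Count threshold crossings (honks) over the window.
  int honks = 0;
  bool above = false;
  unsigned long start = millis();
  while (millis() - start < WINDOW_MS) {
    bool loud = analogRead(SOUND_PIN) > HONK_LEVEL;
    if (loud && !above) honks++;
    above = loud;
  }

  // More honks = shorter gap between vibration pulses.
  if (honks > 0) {
    int gap = max(100, 600 - honks * 100);
    for (int i = 0; i < honks; i++) {
      analogWrite(MOTOR_PIN, 255);
      delay(150);
      analogWrite(MOTOR_PIN, 0);
      delay(gap);
    }
  }
}
```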

Another idea I had: someone who is blind needs to rely on hearing or haptic feedback. Oftentimes I get someone’s attention by saying something, and if they don’t hear me I wave my hand a couple of feet in front of their face. Someone who is blind and working on, say, their computer while listening to something would not be notified by either of these actions. I would potentially feel awkward just walking up to them and tapping their shoulder. If there were a device they could wear, whether a watch or an armband, that would tap them for me, it would be really useful.
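
A minimal sketch of the armband idea, assuming the “get their attention” signal arrives as a simple button press (a real build would use a wireless link) and a small servo on the band does the tapping. Pins, angles, and timing are assumptions.

```cpp
// Minimal sketch: a button press (standing in for a wireless "attention"
// signal) makes a small servo tap the wearer's wrist twice.
#include <Servo.h>

Servo tapper;
const int ATTENTION_PIN = 2;    // button to ground, internal pull-up

void setup() {
  pinMode(ATTENTION_PIN, INPUT_PULLUP);
  tapper.attach(9);
  tapper.write(0);              // resting position, away from the wrist
}

void loop() {
  if (digitalRead(ATTENTION_PIN) == LOW) {
    // Two gentle taps on the wrist, then return to rest.
    for (int i = 0; i < 2; i++) {
      tapper.write(60);
      delay(200);
      tapper.write(0);
      delay(200);
    }
    delay(1000);                // don't keep tapping while the button is held
  }
}
```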