Translating Image to Movement

First idea (for accessibility):

Currently, I am painting my sculptures. The problem is that I have limited ventilation in my studio space, because I am currently working in my room, so the paint doesn’t dry as it should. That is a big problem for me because the paint I use is toxic to the lungs, which means that by the time I can smell it, it is already too late.

So I have designed a fan with an LCD, one of the gas sensors from my last project, and an RGB sensor that interacts with my work and space. The LCD screen will display the gas reading from the paint. If the reading is in a high range, it will activate the fan. The RGB sensor will detect color changes on the sculpture: it will compare the color value before and after painting, and, using the millis() function, activate the fan after a certain amount of time has passed. The fan will not only help with ventilation but also speed up the drying process.
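A minimal sketch of that decision logic in plain C++; the threshold, delay, and tolerance values below are placeholder assumptions, not tied to a specific sensor:

```cpp
// Placeholder values -- tune for the actual gas sensor and paint.
const int GAS_HIGH = 400;                 // analog reading considered unsafe
const unsigned long FAN_DELAY_MS = 60000; // wait this long after fresh paint

// Has the RGB reading changed enough to count as fresh paint?
bool colorChanged(int before, int after, int tolerance) {
    int diff = after - before;
    if (diff < 0) diff = -diff;
    return diff > tolerance;
}

// Fan runs if gas is high, or once enough time has passed since painting
// (nowMs would come from millis() on the Arduino).
bool fanShouldRun(int gasReading, bool paintedRecently,
                  unsigned long nowMs, unsigned long paintTimeMs) {
    if (gasReading >= GAS_HIGH) return true;
    return paintedRecently && (nowMs - paintTimeMs >= FAN_DELAY_MS);
}
```

In a sketch, loop() would poll the sensors, record millis() whenever colorChanged() fires, and drive the fan pin from fanShouldRun().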

Second idea (for myself):

As a sculpture artist, I have always wondered about installations that interact kinetically with the audience. What I imagine designing here is something I expect to use in a future project about expressing mob pressure.

After some mapping, the Arduino RGB sensor gives a frequency for each RGB channel. One could therefore set a range for each channel value and make each servo motor move to a different angle. For example, when a channel value falls between 0 and 20, its assigned servo motor could move. That way, when a servo motor’s sensor detects certain colors of clothes or shoes, it will turn towards them.
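A sketch of that mapping, assuming the sensor’s frequency output is rescaled to 0–255 per channel; the band limits are just the 0–20 example from above:

```cpp
// Trigger band for a channel's mapped 0-255 value (example range).
const int BAND_LOW = 0;
const int BAND_HIGH = 20;

// Arduino-style map(): rescale a raw frequency into the 0-255 range.
long remap(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Should the servo assigned to this channel turn?
bool servoShouldTurn(int channelValue) {
    return channelValue >= BAND_LOW && channelValue <= BAND_HIGH;
}
```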

A more effective and accurate version would use the PIR sensor I used before. When someone approaches and the PIR sensor value turns to 1, the servo motors can all turn towards the person at the same time. Or each servo motor could have its own PIR sensor, so that when a person gets close to one motor, each motor looks away from the person to deliver a more excluded feeling.
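Both behaviors reduce to one small rule per motor. A sketch, with placeholder angles:

```cpp
const int REST_ANGLE = 0;   // idle position
const int FACE_ANGLE = 90;  // pointing at the viewer
const int AWAY_ANGLE = 180; // turned away ("excluding" mode)

// pirValue is 1 when motion is detected; excludeMode selects the
// look-away behavior instead of turning toward the person.
int servoAngle(int pirValue, bool excludeMode) {
    if (pirValue == 0) return REST_ANGLE;
    return excludeMode ? AWAY_ANGLE : FACE_ANGLE;
}
```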

They all turn towards you when the PIR sensor senses movement.

I guess it is kind of like a reversed mirror: it doesn’t show you your reflection, but it reflects your movement.

Or it would even be nice to have wheels attached so they follow you. That would be extra creepy.


Assignment 5: reading and a design exercise

If you’ve never dealt with input smoothing, this 20-minute tutorial on smoothing analog input on Arduino covers all the basics.

Tutorial 23: Smoothing Data

Assignment #5: describe translating a sound or image into a mechanical output; assume it’s for accessibility. No need to make anything; this is a thinking/drawing exercise that should take no more than an hour.

If you find some examples of data over time to interact with (hint: Thursday assignment), please post to Looking Outward. (No stock market or weather data.)

Class notes 29 Sep 2020 – start kinetics

Admin

A10 dates updated on blog

reminder, campus is closed after Thanksgiving

looking for parts, try octopart search engine: https://octopart.com/

do people want 3d prints or lasercuts or both?

Discuss reading of Make It So chapters

Start Kinetic discussion

size of physical control vs. input

size of physical control vs. output

tactile controls are great for fine control/refinement/detailed feedback

but can get lost in the shuffle — NASA controls are laid out based on the physical design of the space shuttle

or can be stylistic / skeuomorphic. Why does a Starfleet vessel have touchscreens (LCARS) everywhere, while the warp engines are driven by a 19th-century ship’s throttle?

MIX MECHANICAL AND OTHER CONTROLS WHERE APPROPRIATE Mechanical controls are better for some uses, though they can’t as easily serve multiple functions. Nonmechanical controls, like touch-screen buttons, are easier to change into other controls but don’t offer the same kind of haptic feedback, making them impossible to identify without looking at them and creating questions about whether they’ve been actuated. Design interfaces with an appropriate combination that best fits the various uses and characteristics.

look at inputs for kinetic outputs

median vs. mean

std devs see wiki for details: https://en.wikipedia.org/wiki/Standard_deviation
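A quick illustration of why the distinction matters for sensor data: a single outlier spike drags the mean far more than the median.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

double mean(const std::vector<double>& v) {
    double sum = 0;
    for (double x : v) sum += x;
    return sum / v.size();
}

// Median: sort a copy and take the middle element (or average of the two).
double median(std::vector<double> v) {
    std::sort(v.begin(), v.end());
    size_t n = v.size();
    return (n % 2) ? v[n / 2] : (v[n / 2 - 1] + v[n / 2]) / 2.0;
}

// Population standard deviation: spread of readings around the mean.
double stddev(const std::vector<double>& v) {
    double m = mean(v), ss = 0;
    for (double x : v) ss += (x - m) * (x - m);
    return std::sqrt(ss / v.size());
}
```

With readings {10, 10, 10, 10, 1000}, the mean jumps to 208 while the median stays at 10, which is why median filtering is popular for spiky sensors.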

how to read complex sensors over serial protocols: I2C, SPI, MIDI

look at data smoothing / filtering

simple smoothing: https://www.arduino.cc/en/Tutorial/Smoothing
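The tutorial’s circular-buffer idea, condensed into a reusable struct (the buffer size of 5 is an arbitrary choice; bigger means smoother output but slower response):

```cpp
const int NUM_READINGS = 5; // window size: bigger = smoother, laggier

struct Smoother {
    int readings[NUM_READINGS] = {0}; // circular buffer of recent samples
    int index = 0;
    long total = 0;

    // Add one raw sample, return the running average of the last N.
    int add(int value) {
        total -= readings[index]; // drop the oldest reading
        readings[index] = value;  // store the new one
        total += value;
        index = (index + 1) % NUM_READINGS;
        return total / NUM_READINGS;
    }
};
```

Like the Arduino example, it divides by the full window size even during the first few samples, so the output ramps up from zero as the buffer fills.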

break down the types of kinetic interaction

focusing on output

– vibration

– thumps, pokes

– temperature? peltier boards

– symbols: Braille

Then over time

– signal encodings, morse

– pattern recognition: what does walking feel like? Running? Crying? Laughing?

– earthquake pattern recognition

– meaning generated by content that changes over time, poetry

Body recognition in HCI for diverse applications

A] Behance Portfolio Review Kinect Installation

An open-minded approach to natural user interface design turned a portfolio review into a memorable interactive event.

B] Stroke recovery with Kinect

The project aims to provide a low-cost home rehabilitation solution for stroke victims. Users are given exercises that improve their motor functions, their activities are monitored with the Kinect’s scanning ability, and a program keeps track of their progress.

This allows patients to recover at home under private care or with family, instead of in a hospital environment. Their recovery levels can be measured and monitored by the system, and researchers believe the game-like atmosphere will help patients recover faster.


C] Kinect Sign Language Translator

This system translates sign language into spoken and written language in near real time. This will allow communication between those who speak sign languages and those who don’t. This is also helpful to people who speak different sign languages – there are more than 300 sign languages practiced around the world.

The Kinect, coupled with the right program, can read these gestures, interpret them and translate them into written or spoken form, then reverse the process and let an avatar sign to the receiver, breaking down language barriers more effectively than before.

D] Retrieve Data during a Surgery Via Gestures

https://www.gestsure.com


Kinetic Interaction Examples

This swing analyzer attaches to a golf glove to sense and analyze golf swings by detecting motion.


This smart night light automatically lights up when someone walks by within a certain distance, and turns off after a set period of time once the person leaves.

An alarm clock that turns off only after the person gets out of bed and stands on the pad.

Kinetic Interaction Examples

  1. Games that make use of device gyroscope / accelerometer for input control
  2. Project Soli – close range radar for fine motor controls w/o physical hardware
  3. Ultrahaptics – feeling without touching; providing haptic feedback through ultrasound
  4. Shape changing controller based on drag force experienced
  5. Theremin
  6. Text rain
  7. Posture training device
  8. Dynamic VR display (Rhizomatiks Research)

Kinetic Interaction Examples

These “music gloves” by Imogen Heap help users create music much more seamlessly and in the moment than a keyboard or soundboard allows. Just by moving the hands up and down, left and right, tilting, pinching fingers, and pointing, the user can change the volume, pitch, tone, and filtering.


This “Kinetic Wall” by Cupra changes shape, shifting the panels where the user is looking from solid wall to windows that offer a glimpse behind the wall.

This “Bloomframe” window transforms from a window into an open-air balcony at the push of a button.

The “Sharifi-ha House” is a house that opens or closes rooms based on the season and temperature. When it is warm out, the rooms open up to get plenty of natural light and air; when it is cold, the house closes itself so that minimal air leaks out. It can also be changed by the user’s choice, though the website didn’t really specify this.

Make it So reading

As the author of Make It So explains, technology has an inseparable relationship with science fiction. I feel like, in the past, high tech was something very unfamiliar to the general public, and very few people had access to it. Therefore, for a long time science fiction had to contain things that appeared “high-tech,” including jewel-like buttons and unfriendly user interfaces. (A lot of people still say “beep boop” when they jokingly describe high tech.) The interface had to appear very complex and unfriendly, because that way the general audience would watch and think, “Wow, Captain Kirk is amazing. How does he memorize the roles of all those buttons?”

The interface’s role in any technology is very big; it is like the representative you speak to at customer service. It has to be literal, straightforward, and user-friendly. From that perspective, anthropomorphism might seem inevitable: if anything could make technology appear friendly to humans, it would be human-like characteristics. The interesting part, however, is that anthropomorphism can actually evoke an unfriendly feeling in humans, such as the uncanny valley. If people truly preferred things that resemble a human being, the camera would look like an eye, and the mouse would be shaped like a hand. Rather than anthropomorphism, what really appeals to humans while relating to the body is ergonomics. Its beauty comes not from mimicking the human physique but from interacting with it. On the other hand, while anthropomorphism may not be appealing in an interface, it is in communication. A great example is emojis: they have ambiguous human-like traits, so anyone can see themselves in them and use them to express themselves.

No matter how much people try to make objects look more human, it only intensifies the uncanny valley. Objects that come into direct contact with humans are better off ergonomic, and objects that express humans are better off anthropomorphic.