Final Crit: 3-Axis Trackball

Intro: A close friend of mine has mostly unobtrusive Duane syndrome, meaning their right eye cannot turn outward, causing double vision whenever they look to the right.  They have always liked and related to chameleons and owls because of this.  That connection got me thinking about how this syndrome could be experienced by others to help build empathy.  With their approval, I brainstormed a game that is more or less a first-person I Spy starring a chameleon.  Each of the player's first-person "eyes" is controlled independently, and the goal is to focus on certain objects you find.  This would be controlled by two 3-axis trackballs, each matching an eye's pitch, roll, and yaw.  For this assignment, I prototyped this trackball, and in five years' time I can see this alternate input method making a truly unique mark on a game.

Proof of Concept

The Trackball

Normal mouse trackballs often use a single optical sensor to read one plane of data: the XY coordinates of a screen.  Optical sensors typically do this by comparing pixels between two images taken at a high frame rate.  I figured it would be easy to harvest the optical sensors from two mice, taking the XY data from one and rotating the other 90 degrees to read only the Z axis from it.  This pair, one mouse and one mounted orthogonal to it, would give me all three axes.
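The axis-combining idea above can be sketched as a small mapping function.  This is a minimal illustration, not code from the build: the struct names and which side-sensor axis carries roll are assumptions that depend on how the second mouse is mounted.

```cpp
// Hypothetical per-sensor reading: the motion deltas one mouse reports.
struct MouseDelta { int dx, dy; };

// Combined 3-axis rotation of the ball, in raw sensor counts.
struct BallRotation { int yaw, pitch, roll; };

// The bottom sensor sees side-to-side (yaw) and forward-back (pitch)
// rotation; the sensor mounted 90 degrees around sees roll on one of
// its axes (and redundant pitch on the other, which we ignore here).
BallRotation combine(MouseDelta bottom, MouseDelta side) {
    BallRotation r;
    r.yaw   = bottom.dx;
    r.pitch = bottom.dy;
    r.roll  = side.dx;  // assumption: roll lands on the side sensor's X
    return r;
}
```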

I was wrong about how difficult it would be to find mice with documented pinouts, since I am not yet good enough to deduce them on my own.  After going through seven or eight different mice, nearly all with different chips, I ordered two PS/2 mice.  I would not access their optical sensors directly, but I knew I could communicate over the PS/2 protocol as a backup.  With that, the project had two mice talking to my Arduino effectively.

The ball would need to sit very close to the mice, since they're quite old (being PS/2 and all) and not that great at detecting motion.  I used ball transfers and PVC pipe so the ball could rest over one mouse at its bottom with three points of contact, plus a small elevated platform for the orthogonal mouse to capture the third axis.

After trying a number of balls, a soccer ball proved large enough to work with the complete mice.  I attempted to disassemble the PS/2 mice, but was too scared of breaking them and missing the deadline by having to order more, so I kept them whole, which led to a foam-molded housing.

The Game

I used a simple program, Udino, to talk to Unity, a game engine.  Unity could read any port on the board and use the data accordingly.  To render first-person double vision, I wrote a screen-space shader that blended the views of two cameras a few units apart in the world.  The closer their forward vectors, the closer the cameras would also move to each other, effectively rendering the same "view" and making the blend seamless.  The less similar their forward vectors, the more the shader would blend, causing dissociation.  Udino successfully controlled each camera separately.  A view of the cameras working can be found here:

Unfortunately, I could not connect the working trackball to Unity to successfully modify game state with the soccer ball.  I suspect the conflict comes from Udino using serial communication while the PS/2 library I am using also needs to write calls to the mice over the serial port.  In the future, I hope to use proper optical sensors instead of whole mice, making this project more compact and dropping the unnecessary protocols.
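The camera-blending rule the shader follows can be sketched as a weight function of the two forward vectors.  This is a hedged reconstruction of the idea, not the actual shader code: the linear mapping from alignment to blend weight is an assumption.

```cpp
#include <cmath>

// Minimal 3-vector; assumes unit-length forward vectors.
struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Blend weight for the double-vision effect: 0 when both eye cameras
// look the same way (render as one seamless view), rising toward 1 as
// their forward vectors diverge (views dissociate).
float doubleVisionBlend(Vec3 fwdLeft, Vec3 fwdRight) {
    float alignment = dot(fwdLeft, fwdRight);  // 1 = identical, -1 = opposite
    return (1.0f - alignment) * 0.5f;          // map [1, -1] onto [0, 1]
}
```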

Conclusion

Overall, I am disappointed that I failed to connect the trackball to Unity, but satisfied that all of the individual parts of the pipeline work separately.  Based on playtesting, the ball itself is a fun toy that people readily play with, which I think is important because it sets a base level of engagement.  To even try to understand someone else's experience, it's important to want to in the first place, and I feel this project is a good step in that direction.

finalCrit

Crit #3 – Multiple Timers


Problem: When cooking a big meal in the kitchen (say, Thanksgiving dinner), I often have multiple timers going.  Between the microwave, my phone, my roommate's Google Assistant, etc., they can be hard to manage, especially when many timers are physically locked to their positions on the appliances.  This is an issue not only for those with low mobility; because each timer is on a different type of interface (touch screen, keypad, twist timer), it can affect those with low dexterity as well.  Timers should be manageable and adaptable to user needs.

Solution: I want to solve the problem in two ways: by combining the timers into one place, and by making input methods modular so users can select the input that works for them, e.g. lever, button pad, knob.  Even though the timers are now co-located, the user should be able to easily discern which of their set timers is going off, which is done with a unique audio cue for each.  They should also be able to tell by sound which timer they have shut off, and which are still going.

Proof of Concept: My proof of concept is a system of three timers with one on/off button, one knob to set time, and one knob to select a timer.  Each timer can be set and turned off independently.  When a timer is going off, it adds to a melody composed of all currently ringing timers' contributions.  Each timer has a distinct sound.  Users can also turn off all timers with a more complex input, so it cannot be done accidentally.  Ideally, I would extend this system with more modular input methods: keypad entry like on microwaves, an easier slider input for those who cannot twist knobs or press small keys, and ideally even voice input.  This customizability is not shown in the proof of concept, but the code framework can certainly support it.
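The three-timer core described above can be sketched as a small state container.  This is an illustrative model, not the project's code: the second-based tick and the specific tone frequencies are assumptions.

```cpp
// Three independent timers; ringing timers together form the "melody",
// each contributing its own distinct tone.
struct MultiTimer {
    static const int N = 3;
    long remaining[N] = {0, 0, 0};            // seconds left; 0 = not running
    bool ringing[N]   = {false, false, false};
    int  toneHz[N]    = {440, 523, 659};      // hypothetical distinct pitches

    void set(int i, long seconds) { remaining[i] = seconds; ringing[i] = false; }
    void silence(int i)           { ringing[i] = false; }
    void silenceAll()             { for (int i = 0; i < N; i++) ringing[i] = false; }

    // Call once per elapsed second; a timer rings when it hits zero.
    void tick() {
        for (int i = 0; i < N; i++)
            if (remaining[i] > 0 && --remaining[i] == 0)
                ringing[i] = true;
    }

    // How many voices the current melody has.
    int voices() const {
        int n = 0;
        for (int i = 0; i < N; i++) if (ringing[i]) n++;
        return n;
    }
};
```

The "turn off all" gesture would simply gate `silenceAll()` behind a deliberate multi-step input.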

Files + Code + Video

Assignment #8: Trunk

Problem: I’ve driven off after my friends have gotten stuff out of my car trunk but before they’ve closed it.  Thinking about this problem, it’s baked into my usual wait period when dropping someone off, the weirdness of hearing the thunking of the trunk when you’re not the one opening or closing it, and the fact that it all happens directly behind you.  Audio feedback would be a good way to help differentiate the trunk’s state when the driver is not the one operating it.

Solution: More or less, different audio cues based on trunk status.  Traditionally there is a slap-slap-slap on the side of the car that means “you’re good to drive off now,” but this could be communicated better.  It is important that this does not confuse the driver, though; it should be noticeably distinct from any “trunk open” or “door ajar” sounds they may also hear during normal trunk use.  So, the system differentiates through kinetic sound when the trunk is closed, and more electronic audio when the trunk is ajar or being fiddled with.  The former is accomplished with a solenoid; the latter, a normal speaker and ideally a weight sensor.

Proof of Concept:

Video + Code

Assignment #7 – Airplane Announcement

Problem: Info monitors on the backs of airplane seats provide nice-to-have information, such as total flight time, time to destination, and nearby locations on the ground.  These monitors are often visual-only, presumably so as not to disturb nearby guests, but this makes them inaccessible to the visually impaired.  Converting this information to audio through earbuds or headphones would be an easy and unobtrusive fix.

Solution: Because these headrest monitors already have audio jacks, reusing them to communicate this information would be easy, using established screen-reading tech or selection methods more elegant than a touch screen.  This introduces the difficulty of limiting audio, which could distract from important announcements.  That leads to the second possible part of this assignment: using the same audio jack to make often-garbled announcements more understandable for those hard of hearing.  More or less, this all boils down to an interruptible stream of important flight info.

Proof of Concept: The prototype is simple: two pseudo-threads of audio, switchable with a button press simulating a pilot’s or flight attendant’s announcement.  The “information,” like distance to destination, is simulated with a tone from a pot for now, since I don’t yet know how to play samples.  Being interruptible lets any outside announcement alert the user to plug in and listen, actually give them the info, and more.
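The two pseudo-threads can be sketched as a simple priority mix.  This is an illustration under stated assumptions: the 120-1100 Hz tone range and the decision to fully mute the info tone during announcements are mine, not from the build.

```cpp
// What the speaker should do on a given loop pass.
struct AudioOut { bool playTone; int freqHz; };

// Announcements preempt the info stream; otherwise the pot reading
// (standing in for distance-to-destination) sets the tone's pitch.
AudioOut mixChannels(bool announcementActive, int potReading /* 0..1023 */) {
    AudioOut out;
    if (announcementActive) {
        out.playTone = false;  // mute the info tone so speech stays clear
        out.freqHz = 0;
    } else {
        out.playTone = true;
        // equivalent to Arduino's map(potReading, 0, 1023, 120, 1100)
        out.freqHz = 120 + (long)potReading * (1100 - 120) / 1023;
    }
    return out;
}
```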

Files + Video

Crit #2 – Grumpy Toaster

Problem: Toasters currently use only their *pop*, and occasionally a beep, to communicate that they are finished toasting whatever is inside them.  It is also difficult to tell the state of various enclosed elements of the toaster, like whether the crumb tray needs to be emptied or any heating elements need to be wiped off.  I believe the toaster could communicate a lot more with its “done” state, in ways that would be inclusive to a variety of different user types.

Solution: More or less, a toaster that gets grumpy if it is left in a state of disrepair.  Toasters are almost always associated with an energetic (and occasionally annoying) burst of energy to start mornings off, but what if the toaster’s enthusiasm were dampened?  Because users are generally at least half paying attention to their toaster, a noticeably different *pop* and kinetic output could alert them that certain parts of the toaster need attention.  For example, if the toaster badly needed cleaning, it would slowly push the bread out instead of happily popping it up.  Both the visual and audio differences generated by modifying this kinetic output would be noticeable.

Proof of Concept: I constructed a model toaster (sans heating elements) using a small servo and a rising platform.  Because a variety of sensing methods for crumbs did not work, “dirtiness” is represented by a potentiometer.  I’ve replaced the common lever with a light push switch to accommodate a broader range of possible physical actions.

The servo drives the emotion of the toaster.  It can sharply or lethargically push its contents out, communicating its current state to the user.  Once the toast is removed, the weight of the next item placed inside lowers the platform back onto the servo.
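The grumpiness mapping can be sketched as a single function from the dirtiness pot to an ejection speed.  The specific durations here are assumed for illustration; only the direction of the mapping (dirtier means slower) comes from the build.

```cpp
// How long the servo takes to push the toast out, given the dirtiness
// pot reading. Clean = a snappy, happy pop; dirty = a grudging shove.
int ejectDurationMs(int dirtiness /* 0..1023 */) {
    const int snappyMs = 150;   // assumed "happy" ejection time
    const int sulkyMs  = 2500;  // assumed "grumpy" ejection time
    return snappyMs + (long)dirtiness * (sulkyMs - snappyMs) / 1023;
}
```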

Files + Video: Drive link

Discussion: This model is ripe for extension.  I originally designed this around the idea of overstuffing your toaster, something I do frequently that not only doesn’t toast the bread well but surely dumps more crumbs than necessary into the bottom tray.  Unfortunately, I couldn’t figure out a way to test for stuffedness, and went with straight cleanliness instead.  But, the overall idea behind designing emotionally (grumpy toaster, fearful car back-up sensor) has helped me understand this class a lot better, and I hope to continue working on that line of thinking with more physical builds like this.

 

Assignment #5 – Icy Roads

Problem: In contemporary cars, it’s common for backup cameras to have additional visual aids, such as guiding lines, and audio feedback, such as a dinging that gets quicker the closer you are to an object.  More rarely, there are haptics in the seat as well.  These haptics are often bad, and may even feel different based on where you’re sitting in the seat or how many layers are between you and it.  Further, the feedback is the same no matter the road conditions, something drivers want to be aware of.


The Solution: First, the audio dinging does not really communicate exactly how much distance you have left, instead forcing drivers to constantly slow their pace to match it.  This can be useful in training more precise drivers, but it is often annoying, and it is not available to hard-of-hearing drivers at all.  Therefore, a physical system that represents in miniature how far you have left to go would also be good.  It is constructed here as a rotating servo, ideally fixed to some reference point in the car, like a level.

Second, road conditions are not communicated by the current backup camera system.  While it may be obvious to look outside the vehicle and see snow, ice can be far more inconspicuous.  To communicate this state emotionally, I thought it could be fun to have the backup device become “nervous” and shiver/shake, alerting drivers that something is off and they should be extra careful.

Proof of Concept: Primarily, a distance sensor driving a servo, with additional input from a potentiometer approximating the “iciness” of the roads.  The servo rotates like a weather vane based on the distance to whatever object is in the rear, and becomes more or less “nervous” (see video) based on how bad the road conditions are.  These are, respectively, a fixed amplitude and a variable frequency of a sine function.
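The fixed-amplitude, variable-frequency idea can be sketched as one angle function.  The sweep range, shiver amplitude, and frequency range are illustrative constants, not measured from the build.

```cpp
#include <cmath>

// Needle angle in degrees: distance sets the base deflection, and
// iciness sets the frequency of a fixed-amplitude sine "shiver"
// layered on top of it.
float indicatorAngle(float distanceNorm /* 0..1, 0 = touching */,
                     float icinessNorm  /* 0..1, 1 = very icy   */,
                     float tSeconds) {
    float base       = distanceNorm * 90.0f;          // assumed 0-90 degree sweep
    float shiverAmp  = 4.0f;                          // fixed amplitude (assumed)
    float shiverFreq = 1.0f + icinessNorm * 14.0f;    // 1-15 Hz shake (assumed)
    return base + shiverAmp * sinf(2.0f * 3.14159265f * shiverFreq * tSeconds);
}
```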

[Fritzing coming later today]

Video + Code

Crit #1 – Fun

Problem: This turned out to be a hard assignment for me, since I had difficulty coming up with a problem that should be solved.  I ended up considering “fun” in general, and how play would change in a blind world for hard-of-hearing users.  I figured in that world sound would be the primary way to playfully engage and communicate with each other, and was drawn to the piano scene from the movie Big.  The floor piano has no tactile feedback, staying flat on the ground, and is only fun because multiple people can be on it at once.  So, my problem to solve was how to offer a different type of creative fun using this musical structure.  Probably too tall an order on my end.

Solution: Essentially, collaborative music visualization.  Nothing novel, but it pushed me to understand this entire pipeline and actually learn p5.js, something I needed to do.  Users interact with a visualization system that responds to their keyboard inputs, reacting differently based on a potentially varied number of states.  It needs to capture the feeling of creating ripples in an existing system, where your inputs matter differently at different times.

Proof of Concept:

A microswitch “keyboard” was built to handle inputs.  Compared to a floor keyboard, it is relatively miniature.  Foam keys rest on lever microswitches that feed back to the board.  In a final build, I envision RFID readers embedded in each key to determine who is pressing which key, highlighting the actual collaboration that could take place.

Chance Crit 1 Piano

The keys then affect a pattern in a browser window, based on which and how many keys are pressed.  The visualization interacts back with the user(s) by becoming more “resistant” to input the more it receives, and less so the longer it goes unused.  It also has time-interactive elements like color and frequency that depend on how users interact with it.  The code is partially based on an existing p5.js library, wavemaker.
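The resistance behavior can be sketched as a tiny feedback model (the actual project implements this in p5.js; the constants here are assumed for illustration).

```cpp
// Each press adds "resistance", which scales down the ripple the next
// press produces; resistance relaxes back to zero while the system idles.
struct WaveField {
    float resistance = 0.0f;

    // Returns the ripple amplitude this press gets, then pushes back harder.
    float press() {
        float amplitude = 1.0f / (1.0f + resistance);
        resistance += 0.5f;   // assumed per-press buildup
        return amplitude;
    }

    // Resistance decays while the field goes unused.
    void idle(float dtSeconds) {
        resistance -= 0.2f * dtSeconds;  // assumed decay rate
        if (resistance < 0.0f) resistance = 0.0f;
    }
};
```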

chance Crit 1 Gif

With the RFID or other tracking mentioned above, this system could be extended to drive more into the feeling of collaborative creation I’m trying to capture.  Different users could have different “heat” signatures they apply to the waveforms, different speeds, or different interactions with each key or section.

Files:

The Arduino code is extremely simple, basically just reading and passing values, with the bulk of the work in the p5.js files.  lytle_crit1.

Assignment #4 – Cast Iron

Problem: I cook almost exclusively on cast iron.  It takes a while to heat up, but retains that heat super well.  There are often times when I have to step away from the skillet for a while to let it cool down.  I also recently moved in with new roommates, who promptly touched the handle while it was still scalding hot.  It is difficult to tell how hot the pan is without holding the back of your hand up to it; there is a small visual indicator of heat coming off it, but it is minute.  The primary feedback is physical.  A further issue is individual heat tolerance: I can take the pan and wash/reseason it in the sink when it’s hotter than my roommates can or want to.  It is impossible to tell just how hot it is without actually touching it, thermometer excluded.

Chance Assignment 4

Solution: Basically, heat indicators built into the pan.  This would let a user know at a glance whether the pan is hot, instead of testing with their hand.  It would be accomplished by a thermometer sampling the surface of the pan.  Because cast iron pans heat fairly evenly, there is a fairly large margin of error in where it samples from.  (I have no idea how these thermometers actually work, so that may be wrong.)

The embedded LEDs range from green to red, a fairly common stop-and-go indicator scheme; red fairly clearly means hot on cookware as well.  I see the lights lighting up linearly from Not to Hot based on temperature, so only one LED would be lit when cold, and all of them when hot.

I think this would at least ease the use of ambiguously hot pans, in a similar way to the audio feedback project by another student from Assignment #2.

Proof of Concept:

Chance Assignment 4

Chance Assignment 4

Fairly straightforward code that lights up LEDs in a row based on how large a value is read from a pot, acting as a thermometer stand-in.  I am pretty sure I fried one of my pots, but this does work fine, so hope for that demo in class!

// Chance, Assignment 4. Not really digital states, but approximations of analog ones

#define Serial SerialUSB

const int greenLED = 13;  // green LED
const int green2LED = 12; // second green LED
const int yelLED = 11;    // yellow LED
const int yel2LED = 10;   // second yellow LED
const int redLED = 9;     // red LED
const int red2LED = 8;    // second red LED
const int potPin = A1;    // analog 1, the thermometer stand-in

const float LEDcount = 6.0f; // number of LEDs in the bar
const int startPin = 8;      // lowest LED pin; the bar runs pins 8-13

void setup() {
  Serial.begin(9600);
  // initialize the LED pins as outputs:
  for (int i = red2LED; i <= greenLED; i++) {
    pinMode(i, OUTPUT);
  }
  // analog pins need no pinMode for analogRead
}

void loop() {
  // read the pot / thermometer stand-in
  int val = analogRead(potPin);
  Serial.println(val);

  // light LED i when val clears its slice of the 0-1023 range
  for (int i = LEDcount - 1; i >= 0; i--) {
    if (val >= (((float)i / LEDcount) * 1023))
      digitalWrite(i + startPin, HIGH);
    else
      digitalWrite(i + startPin, LOW);
  }
}

Assignment #3 – Garbage Trash Cans

Problem: The Chick-fil-A near Waterfront has some trash cans.  They’re bad, they’re garbage, and they’re designed to stop users from putting in new trash while they compress down the previous user’s garbage.  While this is a good idea for saving space, it’s annoying to wait for the trash can to finish compressing, and the only feedback is the sound of the compression itself.  There is no warning for “you cannot put trash in right now.”  There is, I think, a place for one on top of the trash can, but it’s either always burnt out or never working, so here is my solution instead.

Solution: An interruptable trash can state machine that effectively warns patient users when it is busy and allows impatient users to dump their trash in and move on.

Proof of Concept: The trash can has three added lights: one for “trash allowed” and two for “compressing, please wait.”  Additionally, users can interrupt the compressing state to dump their trash in anyway, which may then lengthen the next compressing state.  This is intended to let it compress primarily when people are not using it.  For simplicity, putting trash in through the trash door is represented by a momentary switch.

Video: Chance Assignment #3

Fritzing Sketch:

Chance Assignment #3

Still shaking off the rust of my circuitry skills, but after this I’m pretty comfortable.

Arduino Sketch:

Assignment3_Chance files

Basic pseudo-state machine.  The RED state defaults back to the GREEN state after a user-definable amount of time.  I hope to write proper states and transitions moving forward, something I know how to do but didn’t have time for this assignment.

// Chance, Assignment 3.  Honestly just glad I got something working well.

#define Serial SerialUSB

const int buttonPin = 2;     // momentary button
const int greenLED =  13;    // green LED
const int redLED = 11;      // red LED
const int red2LED = 9;      // second red LED


void setup() {
  Serial.begin(9600);
  // initialize the LED pins as outputs:
  pinMode(greenLED, OUTPUT);
  pinMode(redLED, OUTPUT);
  pinMode(red2LED, OUTPUT);
  // initialize the pushbutton pin as an input:
  pinMode(buttonPin, INPUT);
}

// state vars
int reading, previous = LOW;
int state = LOW;

// timer stuff
long time = 0;
long debounce = 200;

void loop() {
   reading = digitalRead(buttonPin);

  // check for state toggle and account for input delay
  if (reading == HIGH && previous == LOW && millis() - time > debounce) {
    if (state == HIGH)
      state = LOW;
    else
      state = HIGH;

    time = millis();
  }

  manageState();

  previous = reading;
}

// quick and easy pseudo-timer; badly CPU bound since it counts loop iterations
int counter = 0;

// I didn't write a "good" state machine with transitions yet, short on time
void manageState() {
  bool tempState = sin(0.025 * millis()) > 0.0; // sloppy timer solution
  Serial.println(counter);
  switch(state) {
    case HIGH:
      counter = 0;
      digitalWrite(greenLED, HIGH);
      digitalWrite(redLED, LOW);
      digitalWrite(red2LED, LOW);
    break;
    case LOW:
      counter++;  // was incrementing twice per loop; count once, then compare
      if (counter > 1000L * 20) {
        state = HIGH;
        previous = HIGH;
      }
      digitalWrite(greenLED, LOW);
      digitalWrite(redLED, tempState ? HIGH : LOW);
      digitalWrite(red2LED, tempState ? LOW : HIGH);
    break;
    default:
      // unknown state: turn everything off
      digitalWrite(greenLED, LOW);
      digitalWrite(redLED, LOW);
      digitalWrite(red2LED, LOW);
    break;
  }
}
}