Are you ok?

This project comes from the feelings of frustration and even alienation that come from canned responses to serious issues. I’m referring to the copy-pasted mental health messages you see in a course syllabus, a canned “I’m sorry for your loss” message from an employer or acquaintance after the loss of a family member, or a friend telling you to “just take a deep breath” when you’re seriously stressed out.

These people may mean well – and often are making a serious effort to connect and help – but being on the receiving end can be exhausting. These are one-way interactions, and they often feel disingenuous. The pillars of offering good support are truly listening, showing that you care, and devoting time without distractions or judgement. As online therapy and corporate mental health initiatives become more prevalent, these issues might only get worse despite better awareness.

“Are you ok?” is a robotic emotional support kiosk that embodies everything frustrating about bad mental health support systems. The construction is simple: a CRT monitor with an emotive face, and a pair of big push buttons in front for the user’s Yes/No responses. The kiosk prompts the user with text and occasionally displays emotions via simple faces on the screen. The system sits on a sterile white pedestal that houses the logic circuits. The pedestal itself has a start button and a coin slot to “initiate the support interaction.” A speaker in the pedestal reads the prompts aloud so anyone nearby can listen in.

The kiosk:

  • Doesn’t care about you or your emotions (it’s a robot…)
  • Only provides canned responses
  • The only input is a binary Y/N – you can’t open up to this thing
  • It only tries to diagnose you and move on
  • It’s impersonal
  • It only acts logically
  • It might sit in a public location or be used in a group – there’s no privacy

An example interaction would look like this (a code sketch of the same tree follows the list):

  1. Are you ok?
    1. Y – Thank you! (Session ends)
    2. N – Sorry to hear that (sad face). Are you depressed?
      1. Y – Sorry to hear that (sad face). Have you tried getting extra sleep?
        1. Y – Hmm (confused face). Have you tried talking to your professors?
          1. Y – Your professors are looking out for your mental health! Have you seen CAPS?
            1. Y – Hmm. Unfortunately, that’s all we can help you with here! Thank you! (Happy face, session ends.)
            2. N – You can make an appointment online! (Happy face, session ends!)
          2. N – You should reach out to your professors! They have your mental health as a priority. (Happy face, session ends.)
        2. N – Try getting more sleep! (happy face) (session ends)
      2. N – Have you experienced the loss of a family member recently?
        1. Y – I’m sorry for your loss! (sad face, session ends.)
        2. N – Are you currently feeling anxious or stressed out?
          1. Y – Try taking a deep breath! (Deep breath animation) Did that help?
            1. Y – Great! (Happy face, session ends!)
            2. N – TODO, etc
          2. N – TODO, etc
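
To make the branching concrete, here’s a rough sketch of the same tree as a nested Python dictionary. This is purely illustrative – the real kiosk is hard-wired logic, not software – and the names DIALOGUE and run_session are made up for this sketch; the TODO branches are left open just like the outline above.

  # A minimal sketch of the interaction above as a nested dict.
  # Each node is either a question (with "Y"/"N" branches) or a final message.
  DIALOGUE = {
      "prompt": "Are you ok?",
      "Y": "Thank you!",
      "N": {
          "prompt": "Sorry to hear that. Are you depressed?",
          "Y": {
              "prompt": "Sorry to hear that. Have you tried getting extra sleep?",
              "Y": {
                  "prompt": "Hmm. Have you tried talking to your professors?",
                  "Y": {
                      "prompt": "Your professors are looking out for your mental "
                                "health! Have you seen CAPS?",
                      "Y": "Hmm. Unfortunately, that's all we can help you with "
                           "here! Thank you!",
                      "N": "You can make an appointment online!",
                  },
                  "N": "You should reach out to your professors! They have your "
                       "mental health as a priority.",
              },
              "N": "Try getting more sleep!",
          },
          "N": {
              "prompt": "Have you experienced the loss of a family member recently?",
              "Y": "I'm sorry for your loss!",
              "N": {
                  "prompt": "Are you currently feeling anxious or stressed out?",
                  "Y": {
                      "prompt": "Try taking a deep breath! Did that help?",
                      "Y": "Great!",
                      "N": "TODO",  # left open in the outline above
                  },
                  "N": "TODO",  # left open in the outline above
              },
          },
      },
  }

  def run_session(answers):
      """Walk the tree with a list of 'Y'/'N' answers; return the transcript."""
      node, transcript = DIALOGUE, []
      for ans in answers:
          if isinstance(node, str):      # already reached a final message
              break
          transcript.append(node["prompt"])
          node = node[ans]
      transcript.append(node if isinstance(node, str) else node["prompt"])
      return transcript

  # Example: the N, Y, Y, Y path used in the logic section below.
  print(run_session(["N", "Y", "Y", "Y"]))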

The Logic

Each logic sentence works by mapping a specific sequence of Ys and Ns to a specific output – e.g., NYYY maps to “Your professors are looking out for your mental health! Have you seen CAPS?”

The inputs are binary Y/N signals; call them one, two, three, and four, matching the order in which the questions are asked.

There are lots of logic sentences here, one per output. The example above (NYYY) would look like:

(~one AND two AND three AND four) -> “Your Professors . . .”

or NNNY:

(~one AND ~two AND ~three AND four) -> “Try taking a deep breath!”
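
As a quick sanity check, those two sentences can also be written directly as Boolean expressions in code. The function names and the one..four labels are just illustrative; True means the corresponding answer was Y.

  def your_professors(one, two, three, four):
      # NYYY -> "Your professors are looking out for your mental health! ..."
      return (not one) and two and three and four

  def deep_breath(one, two, three, four):
      # NNNY -> "Try taking a deep breath!"
      return (not one) and (not two) and (not three) and four

  print(your_professors(False, True, True, True))   # True  (N, Y, Y, Y)
  print(deep_breath(False, False, False, True))     # True  (N, N, N, Y)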

The table for those two sentences looks like this (a quick code check of the same table follows it):

One  Two  Three  Four | Your Professors | Deep Breath
 0    0     0     0   |        0        |      0
 0    0     0     1   |        0        |      1
 0    0     1     0   |        0        |      0
 0    0     1     1   |        0        |      0
 0    1     0     0   |        0        |      0
 0    1     0     1   |        0        |      0
 0    1     1     0   |        0        |      0
 0    1     1     1   |        1        |      0
 1    0     0     0   |        0        |      0
 1    0     0     1   |        0        |      0
 1    0     1     0   |        0        |      0
 1    0     1     1   |        0        |      0
 1    1     0     0   |        0        |      0
 1    1     0     1   |        0        |      0
 1    1     1     0   |        0        |      0
 1    1     1     1   |        0        |      0
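
Rather than filling in 16 rows by hand, a few lines of code can enumerate every input combination and print the same two output columns. Again, this is only a software check of the logic, not part of the kiosk itself.

  from itertools import product

  # Enumerate all 16 combinations of the four answers and print the two
  # output columns; rows are in the same order as the table above.
  print("One Two Three Four | Professors DeepBreath")
  for one, two, three, four in product([0, 1], repeat=4):
      professors = int((not one) and two and three and four)
      breath = int((not one) and (not two) and (not three) and four)
      print(f" {one}   {two}    {three}    {four}   |     {professors}          {breath}")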

and the circuit looks like:

(Circuit diagram)

The logic is basically a ton of AND gates – one multi-input AND gate per output, with an inverter on each input that needs to be N.
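
If it helps to see that “ton of AND gates” written down in one place, here’s a sketch of the decoder structure in code: one pattern (one AND gate) per output, with an inversion wherever the path expects an N. Only the two worked examples are filled in, and the pattern strings and decode name are just illustrative.

  # One entry per output: the Y/N path that should fire it.
  OUTPUT_PATTERNS = {
      "NYYY": "Your professors are looking out for your mental health! Have you seen CAPS?",
      "NNNY": "Try taking a deep breath!",
  }

  def decode(one, two, three, four):
      """Return every message whose AND gate fires for these four answers."""
      inputs = [one, two, three, four]
      fired = []
      for pattern, message in OUTPUT_PATTERNS.items():
          # AND the four inputs together, inverting each one the pattern wants as N.
          if all(bit if want == "Y" else not bit
                 for bit, want in zip(inputs, pattern)):
              fired.append(message)
      return fired

  print(decode(False, True, True, True))   # -> the "Your professors..." message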