Background

“Are you ok?”

This project comes from the frustration and even alienation that canned responses to serious issues can produce. I’m referring to the copy-pasted mental health message in a course syllabus, a form-letter “I’m sorry for your loss” from an employer or acquaintance after the loss of a family member, or a friend telling you to “just take a deep breath” when you’re seriously stressed out.

These people may mean well – and often are making a genuine effort to connect and help – but being on the receiving end can be exhausting. These are one-way interactions, and they often feel disingenuous. The pillars of offering good support are truly listening, showing that you care, and devoting time without distractions or judgement. As online therapy and corporate mental health initiatives become more prevalent, these issues may only get worse despite better awareness.

“Are you ok?” is a robotic emotional support kiosk, and it embodies everything frustrating about bad mental health support systems. The construction is simple: a CRT that alternates between an emotive face and text, with a pair of big push buttons in front for the user’s Yes/No responses. The kiosk prompts the user with text and occasionally displays emotions via simple faces on the screen. The system sits on a sterile white pedestal that houses the electronics. A speaker in the pedestal reads the prompts aloud so anyone nearby can listen in.

In its idle state, the display shows the message “Are you OK?” and the Y/N buttons blink slowly, waiting for someone to use it.

The kiosk:

  • Doesn’t care about you or your emotions (it’s a robot…)
  • Only provides canned responses
  • Only accepts binary Y/N input – you can’t open up to this thing
  • Seems solely intent on diagnosing you and moving on
  • Is impersonal
  • Acts only on logic
  • Might sit in a public location, or among a group of people – there’s no privacy

Maquette

~40% scale. I made this mostly to experiment with the aspect ratio of the pedestal, and I think it works well at this scale. I’m planning on building the full pedestal out of MDF and painting it white.

I’m also playing with scenery. In its world, the kiosk would live in a corporate lobby, hospital waiting area, museum lobby, university campus, a high school, or a similar public space. It would have chairs, floor plants, or similar décor around it, so I’m hoping to figure that out as well.


Construction TODO:

The Monitor: (Red JVC Videosphere TV, 12 in diameter. Arrives Thursday.) I intend to paint it white, provided I can find paint that adheres well and can take the set apart cleanly.

I chose this monitor partly because it’s cool/weird, but also because it’s a really unfamiliar form factor. I think that gives the kiosk a bit more interest, but also adds to the imaginative/speculative nature of the design. The spherical face feels a bit like an astronaut or an alien, which I think is fitting.

(References: “Videosphere Model 3240 Portable Television, 1970,” Cooper Hewitt, Smithsonian Design Museum collection; “JVC video sphere up and running with Sky TV, 1970s vintage TV,” YouTube.)

Electronics: I’m following the Adafruit guide “How to Convert a JVC Videosphere into a Monitor for Raspberry Pi and Desktop Computers”: https://learn.adafruit.com/using-a-videosphere-as-a-digital-monitor?view=all

I have an RF Modulator and an RF Matching Transformer to feed the TV analog video/audio.

(Wiring diagram: hacks_connections_desktop.png, from the Adafruit guide.)

Everything will run on a Raspberry Pi, which implements the logic and feeds a video signal to the TV.

Software TODO:

The most important thing I need to do is write the graphics code that generates an image for the TV from the logic state. I have a lot of ideas on how to do this – likely Python plus Tkinter, or maybe JavaScript with p5.js or something similar. I need to do a bit of research here, and this is probably the most significant TODO for the project. I want to get started right away.
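
To get a feel for the Tkinter route, here’s a minimal sketch: a full-screen canvas that draws a crude face and the idle prompt. The draw_face() helper, coordinates, and mood names are placeholders I made up for illustration, not a settled design.

    # Minimal Tkinter sketch: a full-screen canvas drawing a simple face
    # plus the idle prompt. draw_face() and the mood names are placeholders.
    import tkinter as tk

    root = tk.Tk()
    root.attributes("-fullscreen", True)      # fill the CRT's visible area
    canvas = tk.Canvas(root, bg="black", highlightthickness=0)
    canvas.pack(fill="both", expand=True)

    def draw_face(mood):
        """Redraw the screen: two eyes, a mood-dependent mouth, a prompt."""
        canvas.delete("all")
        canvas.create_oval(120, 80, 180, 140, fill="white")   # left eye
        canvas.create_oval(260, 80, 320, 140, fill="white")   # right eye
        if mood == "happy":   # bottom half-arc = smile
            canvas.create_arc(140, 160, 300, 260, start=180, extent=180,
                              style="arc", outline="white", width=6)
        else:                 # top half-arc = frown
            canvas.create_arc(140, 200, 300, 300, start=0, extent=180,
                              style="arc", outline="white", width=6)
        canvas.create_text(220, 330, text="Are you OK?",
                           fill="white", font=("Courier", 28))

    draw_face("happy")
    root.mainloop()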

Because generating the image is the hard part, I’m probably not going to implement the logic on 74xx chips – wiring up every logic output in discrete hardware would be a really significant undertaking.

The Logic

An example interaction would look like the following (there’s a code sketch of this tree after the list):

  1. Are you ok?
    1. Y – Thank you! (Session ends)
    2. N – Sorry to hear that (sad face). Are you depressed?
      1. Y – Sorry to hear that (sad face). Have you tried getting extra sleep?
        1. Y – Hmm (confused face). Have you tried talking to your professors?
          1. Y – Your professors are looking out for your mental health! Have you seen CAPS?
            1. Y – Hmm. Unfortunately, that’s all we can help you with here! Thank you! (Happy face, session ends.)
            2. N – You can make an appointment online! (Happy face, session ends!)
          2. N – You should reach out to your professors! They have your mental health as a priority. (Happy face, session ends.)
        2. N – Try getting more sleep! (happy face) (session ends)
      2. N – Have you experienced the loss of a family member recently?
        1. Y – I’m sorry for your loss! (sad face, session ends.)
        2. N – Are you currently feeling anxious or stressed out?
          1. Y – Try taking a deep breath! (Deep breath animation) Did that help?
            1. Y – Great! (Happy face, session ends!)
            2. N – TODO, etc
          2. N – TODO, etc
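
Since the software isn’t written yet, here’s one way this tree could be encoded: a nested dict per node holding the prompt, the face, and the Y/N branches, plus a loop that walks it. The node layout and the show()/wait_for_button() callbacks are assumptions on my part, and only the first few levels are filled in.

    # Sketch of the interaction tree as nested dicts. A branch of None ends
    # the session; the remaining subtrees follow the same pattern.
    TREE = {
        "prompt": "Are you ok?", "face": "neutral",
        "y": {"prompt": "Thank you!", "face": "happy", "y": None, "n": None},
        "n": {
            "prompt": "Sorry to hear that. Are you depressed?", "face": "sad",
            "y": {
                "prompt": "Sorry to hear that. Have you tried getting extra sleep?",
                "face": "sad",
                "y": None,  # -> "Have you tried talking to your professors?" subtree
                "n": {"prompt": "Try getting more sleep!", "face": "happy",
                      "y": None, "n": None},
            },
            "n": None,      # -> loss / anxiety subtree
        },
    }

    def run_session(node, show, wait_for_button):
        """Walk the tree: display each prompt, then follow the Y or N branch."""
        while node is not None:
            show(node["prompt"], node["face"])   # render text + face on the CRT
            node = node["y"] if wait_for_button() else node["n"]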

The logic sentences work by mapping the sequence of Ys and Ns to a specific output – i.e., NYYY would map to “Your professors are looking out for your mental health! Have you seen CAPS?”

The inputs are binary Y/N, captured as variables one, two, three, etc., in the order they were pressed. To differentiate between NYY and NYYY, for example, there is also a set of logic variables that track what stage we’re at – e.g., stage_4 means we’ve received 4 inputs so far. Some software would handle iterating this.

There are lots of logic sentences here, one per output. The example above (NYYY) would look like:

stage_4 AND (~one AND two AND three AND four) -> “Your Professors . . .”

or NNNY:

stage_4 AND (~one AND ~two AND ~three AND four) -> “Try taking a deep breath!”
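
Those sentences translate almost word-for-word into code. A sketch, where answers is the list of button presses so far (True = Y); the variable and output names here are mine:

    # Direct translation of the two sentences above. `answers` holds the
    # button presses so far, True for Y and False for N.
    def outputs(answers):
        stage_4 = len(answers) == 4
        if not stage_4:
            return {}
        one, two, three, four = answers
        return {
            # NYYY: stage_4 AND (~one AND two AND three AND four)
            "your_professors": (not one) and two and three and four,
            # NNNY: stage_4 AND (~one AND ~two AND ~three AND four)
            "deep_breath": (not one) and (not two) and (not three) and four,
        }

    print(outputs([False, True, True, True]))
    # {'your_professors': True, 'deep_breath': False}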

The table for those two sentences looks like:

One  Two  Three  Four  |  Your Professors  Deep Breath
 0    0     0     0    |         0              0
 0    0     0     1    |         0              1
 0    0     1     0    |         0              0
 0    0     1     1    |         0              0
 0    1     0     0    |         0              0
 0    1     0     1    |         0              0
 0    1     1     0    |         0              0
 0    1     1     1    |         1              0
 1    0     0     0    |         0              0
 1    0     0     1    |         0              0
 1    0     1     0    |         0              0
 1    0     1     1    |         0              0
 1    1     0     0    |         0              0
 1    1     0     1    |         0              0
 1    1     1     0    |         0              0
 1    1     1     1    |         0              0
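
The table can also be generated mechanically, reusing the outputs() function from the sketch above:

    # Reproduce the truth table by evaluating outputs() (defined in the
    # earlier sketch) for all 16 input combinations.
    import itertools

    for row in itertools.product([0, 1], repeat=4):
        out = outputs([bool(b) for b in row])
        print(*row, int(out["your_professors"]), int(out["deep_breath"]))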
