16-223 Work – Introduction to Physical Computing: Student Work
https://courses.ideate.cmu.edu/16-223/f2020/work

Remote Lighting Final Report
Mon, 14 Dec 2020 13:04:17 +0000
https://courses.ideate.cmu.edu/16-223/f2020/work/2020/12/14/remote-lighting-final-report/

Dillon Shu and Jack Rooney
Our physical prototype – it was held to the wall by velcro and made from cardboard.

Abstract:

Waking up in the morning is often difficult. We are always looking for ways to help us get to our jobs on time, and sometimes a loud, blaring alarm is not enough to carry us through the endless cycle of work and sleep. That is why we designed a lighting system to help. Our Wake-y Up-y Remote-y Lighting ™ makes sure that your lights turn on when you silence your alarm, because nothing should stop you from completing your labor with a smile.

Our goal was to design something that would turn on your lights when you pick up your phone in the morning and turn them off when you put your phone down before you go to sleep.

Our redesigned nightstand unit – the goal was a sleek, simple design that was still functional. It looks far better than our physical prototype…

Objectives:

Our objectives for the project shifted throughout its duration. While we initially set out to make a working prototype, in the end we decided to make a CAD design that would let us circumvent some of the constraints that had limited the physical version. Once we shifted to the CAD design, our goal was to make a user-friendly product out of parts mostly available to us, focusing on design in order to remove any potential barriers to use.

The features we had initially implemented were quite basic. We built both the load cell piece and the light switch mechanism, the latter being a laser-cut part attached to a servo that would press the switch to turn the light on and off. Having ensured that basic level of functionality, we moved on to the load cell but were met with technical difficulties. Though we eventually figured out what was causing issues with our load cell implementation, by that point we had already opted to move to a CAD design.

Redesigned nightstand unit (cont.) – The pink represents the wireless charger hidden beneath the carpet.
Redesigned nightstand unit (cont.) – An alternate angle of the nightstand unit to show a small sliver to slot an Arduino Nano into, or something of similar size (and less computing capability)

Implementation:

With regards to our final design choices, we tried to remove every reason a potential user might have not to use the product. With that in mind, we made the design sleek so that the user would actually want to leave it on their nightstand. We opted for a phone carriage about 50mm larger than our phones in both length and width, primarily because we felt a snug fit would make the product more difficult to use than it needed to be. We also added the base plate to the nightstand unit at the end, to address a problem we had faced since the outset of the project: keeping the load cell from tipping over when a phone was placed in the carriage. When working with our physical prototypes, we had used our own counterweights to keep the unit upright. Once we switched to a CAD design, however, we could implement something more efficient: the base plate crosses over where the center of mass of the entire nightstand unit would be, so the unit doesn't tip over. If we were to actually build our CAD design, we'd want both the base plate and the initial counterweight base cylinder to be made of a heavier material than the rest of the nightstand unit.

Our redesigned wall mounting unit – looks similar to our physical prototype. Our goal was to realistically shrink down the size of the model, but the components are basically the same.
Redesigned wall mount unit (covered)

Outcomes:

Our biggest success was shifting the focus of the project from physical prototyping to design. Once we decided to make a CAD design our final deliverable, we could think from a brand new perspective: not about what we could physically make, but about what would make the best product. This gave us new takeaways beyond what we learned about the physical parts, and new insight into what needs to be considered in product design. Our biggest failure was probably the inability to get the physical model working in a timely fashion. Though we did eventually figure out what was wrong with our load cell design, by that point we had already spent a few days troubleshooting individually and were both extremely frustrated with the project. More importantly, though, those shortcomings with the physical model pushed us to switch to a CAD design, so in the end they benefited us regardless. Because of this, our final deliverable took the shape of a product pitch, in which we presented the class with our idea of what we wanted to make.

We also had some basic code written up for our physical prototype wall mount, as seen below.

#include <Servo.h>

const int SERVO_PIN = 7;
const int ON = A0;   // "on" pushbutton, reads LOW when pressed
const int OFF = A1;  // "off" pushbutton, reads LOW when pressed

Servo controlledServo;

void setup() {
  Serial.begin(115200);
  controlledServo.attach(SERVO_PIN);
  // Assumed active-low wiring: enable the internal pull-ups so an
  // unpressed button reads HIGH and a pressed button reads LOW.
  pinMode(ON, INPUT_PULLUP);
  pinMode(OFF, INPUT_PULLUP);
}

void loop() {
  if (!digitalRead(ON)) {
    switch_on();
    delay(1000);
    controlledServo.write(90);  // return to the neutral position
  }

  if (!digitalRead(OFF)) {
    switch_off();
    delay(1000);
    controlledServo.write(90);
  }
}

// Sweep the servo one way to press the switch on, then sweep back.
void switch_on() {
  for (int i = 90; i > 0; i--) {
    controlledServo.write(i);
    delay(1);
  }
  delay(1000);
  for (int i = 0; i < 90; i++) {
    controlledServo.write(i);
    delay(1);
  }
}

// Sweep the servo the other way to press the switch off, then sweep back.
void switch_off() {
  for (int i = 90; i < 180; i++) {
    controlledServo.write(i);
    delay(1);
  }
  delay(1000);
  for (int i = 180; i > 90; i--) {
    controlledServo.write(i);
    delay(1);
  }
}

When working on our physical model, we only got so far as to test the functionality of our prototype. We stopped short of combining the load cell’s output and the servo movement. The code above was only to ensure we could get the range of motion we hoped to have in order to turn on the light switch. At the time of writing it, we had planned to factor in load cell output to control the type of motion, but because we had so many issues with the load cell we never ended up doing so. This was one of the major shortcomings in our project. What is not shown are the many libraries for the HX711 load cell amplifier that we had tinkered with in an attempt to get the consistent output we wanted from our prototype, unaware that it was our physical design that was interfering with the output.
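
For reference, a minimal sketch of what the load cell integration might have looked like, assuming the widely used bogde HX711 library (an assumption, as several HX711 libraries exist) and reusing the switch_on()/switch_off() routines above. The pins, calibration factor, and weight threshold are hypothetical placeholders:

#include "HX711.h"
#include <Servo.h>

// Hypothetical wiring and tuning values -- adjust to the actual build.
const int LOADCELL_DOUT_PIN = 2;
const int LOADCELL_SCK_PIN  = 3;
const float PHONE_THRESHOLD = 100.0;  // grams; heavier counts as "phone present"

HX711 scale;
bool phonePresent = false;

void setup() {
  Serial.begin(115200);
  scale.begin(LOADCELL_DOUT_PIN, LOADCELL_SCK_PIN);
  scale.set_scale(420.0);  // calibration factor, found by experiment
  scale.tare();            // zero the empty carriage
  controlledServo.attach(SERVO_PIN);
}

void loop() {
  if (scale.is_ready()) {
    float grams = scale.get_units(5);  // average of 5 readings
    if (grams > PHONE_THRESHOLD && !phonePresent) {
      phonePresent = true;   // phone set down at night: turn the lights off
      switch_off();
    } else if (grams < PHONE_THRESHOLD && phonePresent) {
      phonePresent = false;  // phone picked up in the morning: lights on
      switch_on();
    }
  }
  delay(100);
}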

Future Work:

Taking this project to the next iteration means a few things for us – a more compact design, a physical model, and an expanded feature set. Even after switching to the CAD model, we tried to stay mindful of which resources would have been available to us had we wanted to convert it into a physical model. The first step would be to replace the existing load cell with one that looks more like a traditional scale; the large load cell used in our design makes the product unwieldy and oddly shaped. Then we would build a physical prototype of the new model, since the whole point of designing the product was to make sure it was usable. Finally, incorporating feedback from our classmates, we believe that adding more features is imperative to convincing a potential user that this product is the one for them.

A few examples of what we’d like to incorporate:

  • Potential method of keeping the nightstand unit stuck to the table (perhaps using velcro or magnets) so the user cannot pick up the whole unit
  • Some form of lid to prevent removal of the phone once it’s been put in
    • This would have to be an optional feature. We had toyed with the idea but opted not to use it. One concern with a lid is that if someone needs to use their phone as a flashlight at night, they'd have to take it out of the case, and that should be as simple as possible. Conversely, we'd like to discourage people from taking their phone out before they've fallen asleep, or while they're lying in bed trying to fall asleep.
  • Support for further edge cases – Perhaps our biggest takeaway is that we have to consider use cases for the product that are not entirely black and white. We must account for things like having to go to the bathroom at night and not wanting the light to turn on in that instance when you take your phone off the cradle. 
  • User customizability – with some of the more open ended ways for us to continue expanding the product, the most important thing we can add is a way for users to decide which of these features they’d like to use and which they’d rather disable. Each person has different habits when it comes to falling asleep at night, so the product we create should try to cater towards allowing flexibility. While one person may like the feature of having the lights turn on whenever the phone is removed so as to strictly enforce a “no phone policy”, others may not.

Contributions:

The project was done jointly for the most part. We each assembled the physical prototypes on our own, as well as the final CAD design (though this can be attributed more to a misunderstanding of the due dates than anything else); because of the shift in our project, most of the work we did jointly in the last two weeks or so was spent discussing what to implement within the design. The general trend was that Dillon did more of the design aspect while Jack focused more on the physical assembly and finer details of the design.

Dillon: Sketches, CAD Design for nightstand unit and wall mount unit, design/features input

Jack: Schematics, Soldering + additional assembly of physical prototype, design/features input

Schematics and Sketches:

Electronic Schematics – Wall mounted unit
Electronic Schematics (cont.) – Nightstand unit
Basic Sketches drawn up while working on our redesign. They are mostly visual representations of the features we hoped to include. In order to try to make sure we agreed on what the CAD design looked like, we drew the model from different angles.
Redesign sketches (cont.) – Wall mount (right), nightstand unit (center)
Redesign Sketches (cont.) – nightstand unit from above (right), nightstand unit from the side (center)

Rotating Labyrinth Final
Sat, 12 Dec 2020 03:46:58 +0000
https://courses.ideate.cmu.edu/16-223/f2020/work/2020/12/11/rotating-labyrinth-final/

Abstract

Toys and games are becoming progressively more reliant on screens, and these screens are often the sole source of feedback for the children playing them. Our project objective was to create a game with more tactile, real-world movement that can engage children. We decided to build a tilting marble maze with some form of decision-making and movement within the maze. The physical prototype achieved the objective of a completable tilting maze with real-world movement. To enhance the decision-making aspect of the maze, a later CAD iteration made the game more complex and generally more exciting for the player.

Objectives

The following were the goals for the marble maze in order of decreasing priority:

  • Responsive rotation of a maze via a gimbal system
  • Rotation is controlled by a joystick module
  • Compact so that it can easily be used by an individual
  • Movement within the maze itself (rotating walls)
  • Sensory input within the board to control movement
  • Concealment of wiring, motors, joystick pins, etc.

Concept and Preliminary Design

The board would rotate on a gimbal system such that it could change in pitch and roll. The playing board would be attached to an inner frame, and this inner frame would be connected to a servo and linkage that would rotate it about the x-axis. This servo and linkage would be part of an outer frame rotated about the y-axis by a stationary servo and linkage. The maze walls and gimbal frames/stands would be fabricated out of 1/8″ wood so that the entire device would be relatively light. Dowels and bearings would be used to ensure smooth rotation about the axes. Additional servo motors would be mounted beneath the board to rotate certain walls. Rollover switches would be placed on the board, with lever switches mounted beneath them to detect whether a rollover switch has been triggered. An Arduino UNO microcontroller would connect the motors and sensors of the board while also receiving user input from a joystick.

Preliminary sketches showing the gimbal system and concept
Sketch for rotating wall beneath the board. Wall rotates on a circular disk that rests on a ring support beneath the board which is linked to a servo motor
Sketch for rollover switch mounting. Rollover switch rests on a ring support and triggers a lever switch beneath the board
Electronic Schematic for constructed physical product
First built design with working gimbal system and mounted but not operational rollover switches and rotating walls

After using prototypes to figure out how to position the gimbal servo motors and linkages, a final physical iteration was made with the main emphasis being the board. Unfortunately, we could not finish mounting the lever switches beneath the rollover switches, so while the maze game is playable, it does not require as much thought from the user.

The code below shows how the gimbal system works, translating input from the joystick into movement of the frames. The code is very raw, with no smoothing filters, intended only to let the user complete the game without many control frustrations.

#include <Servo.h>

const int outerServoPin = 5;
const int innerServoPin  = 4;

const int jsXPin = A0;
const int jsYPin = A1;

Servo outerServo;
Servo innerServo;

int outerRotStart = 90;
int innerRotStart = 90;
int bigWallStart = 0;
int smallWallStart = 0;

int outerRotMin = outerRotStart - 60;
int outerRotMax = outerRotStart + 60;
int innerRotMin = innerRotStart - 60;
int innerRotMax = innerRotStart + 60;

int xPos;
int yPos;
int outerPos;
int innerPos;

void setup() {
  //pin setup
  pinMode(jsXPin, INPUT);
  pinMode(jsYPin, INPUT);
  Serial.begin(115200);

  outerServo.attach(outerServoPin);
  innerServo.attach(innerServoPin);
  
  outerServo.write(outerRotStart);
  innerServo.write(innerRotStart);
  delay(1000);
}

void loop() {
  //read the positions of the joystick
  xPos = analogRead(jsXPin);
  yPos = analogRead(jsYPin);
  
  //change frame states based on joystick
  outerPos = map(xPos, 0, 1023, outerRotMax, outerRotMin);
  innerPos = map(yPos, 0, 1023, innerRotMax, innerRotMin);

  outerServo.write(outerPos);
  innerServo.write(innerPos);
}
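
A smoothing filter could be layered onto the same loop with little code. Below is a minimal sketch using an exponential moving average on top of the globals declared above; the 0.2 weight is a hypothetical starting point, not a tuned value:

float outerSmoothed = 90.0;
float innerSmoothed = 90.0;
const float ALPHA = 0.2;  // smoothing weight: smaller is smoother but laggier

void loop() {
  xPos = analogRead(jsXPin);
  yPos = analogRead(jsYPin);

  outerPos = map(xPos, 0, 1023, outerRotMax, outerRotMin);
  innerPos = map(yPos, 0, 1023, innerRotMax, innerRotMin);

  // Blend each new target into a running average before driving the servos,
  // so joystick jitter is absorbed instead of passed straight through.
  outerSmoothed += ALPHA * (outerPos - outerSmoothed);
  innerSmoothed += ALPHA * (innerPos - innerSmoothed);

  outerServo.write((int) outerSmoothed);
  innerServo.write((int) innerSmoothed);
}
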
Video of maze tilting and marble being guided through maze

As we were unsatisfied with the level of thought required from the user, further CAD designs emphasized the game element of the marble maze. The design was steered in a puzzle-like direction, with multiple switches and rotating walls that the user has to spend some time thinking through. The new design also has more movement that's apparent to the player, as every rotating wall rotates 90 degrees when any of the rollover switches is triggered. Now that the maze feels like it's constantly changing, it creates more excitement for the user, and the extra thought required can create more satisfaction when the user conquers the maze. The board alternates between two states as the user navigates through it, as seen below:

Initial Position of redesigned maze board. Rollover switches are shown by shining half-spheres and the rotating walls are seen with circular plates beneath. The user goes from the top-left (start) to the bottom-right (finish)
Maze position once a rollover switch has been triggered. If a rollover switch is triggered again the board goes back to the initial position
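
The redesigned board exists only in CAD, but the behavior it calls for is straightforward to express in code. Below is a minimal sketch, assuming one hobby servo per rotating wall and all the lever switches wired in parallel to a single active-low input; the pin numbers and wall count are hypothetical:

#include <Servo.h>

const int NUM_WALLS = 4;          // hypothetical count
const int WALL_PINS[NUM_WALLS] = {6, 7, 8, 9};
const int ROLLOVER_PIN = 2;       // lever switches in parallel, active low

Servo walls[NUM_WALLS];
bool flipped = false;             // which of the two board states we're in

void setup() {
  pinMode(ROLLOVER_PIN, INPUT_PULLUP);
  for (int i = 0; i < NUM_WALLS; i++) {
    walls[i].attach(WALL_PINS[i]);
    walls[i].write(0);
  }
}

void loop() {
  if (digitalRead(ROLLOVER_PIN) == LOW) {
    flipped = !flipped;                  // toggle between the two states
    for (int i = 0; i < NUM_WALLS; i++) {
      walls[i].write(flipped ? 90 : 0);  // rotate every wall 90 degrees
    }
    delay(500);                          // crude debounce while the marble rolls over
  }
}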

To give the game more challenge and complexity, this new design contains a branching path. The initial physical design just had the user navigating the marble to a certain place before advancing toward the goal. The intention behind the added complexity is to make the user want to play the game again, as each run would be slightly different. This idea can be expanded upon in later iterations, though it conflicts with the goal of keeping the maze compact.

Isometric View of Board
Isometric view of entire revised design

Link to Fusion 360 Project Folder: https://andrew565.autodesk360.com/g/projects/20201106350547781/data/dXJuOmFkc2sud2lwcHJvZDpmcy5mb2xkZXI6Y28uVWhCQlBIakNUWUtQLXAyWkwzQnVTQQ

Continuous Improvement

Moving forward, we could keep iterating and ideating on playfields and sensor ideas to create a more arcade-like game. While our current version is more akin to a game that could be found in a home, we could expand upon our ideas to add different game states and possibly multiplayer modes. Our original idea was to have this be a single-player game, as inspired by quarantine. However, if we evolved the idea to be more socially engaging and added more visual elements via LEDs and sensors, there are many more branches we could explore.

Another idea to increase the replayability of the game is to time the user's completion of the maze. This would encourage players to try again even after winning, to beat their personal best.

There is also potential for a different goal altogether: learning and education. Because we did not reach the goal of wire/motor concealment, not only is the movement of the maze apparent to the user, but HOW it moves is as well. With guidance from either an outside source (an adult) or indicators within the game itself, a child could learn how modules like servo motors, linkages, and sensors work.

Sources

Inspiration for Gimbal Design from Tilting Maze Final Report – Henry Zhang, Xueting Li

Authors and Contributions:

Shiv Luthra – maze design sketches, Computer Aided Designs, Arduino Code

Amelia Chan – gimbal design sketches, physical manufacturing and assembly

Handsfree Balance Maze :: Project Report
Thu, 10 Dec 2020 04:03:09 +0000
https://courses.ideate.cmu.edu/16-223/f2020/work/2020/12/09/handsfree-balance-maze-final-report/

Team: Yu Jiang & Max Kornyev

Abstract

Our original development goals for the maze device were to create an interactive experience for the user, mainly through a unique mechanical design and a nontrivial set of control methods. But during a time when we're often apart from our friends, we expanded our project to create a remote marble maze experience unlike any other: players can use their maze locally, switch game modes, and then let their remote friends take control.

The work done in this project is a great example of how microcontrollers and the network can transform a game that is usually played alone into a truly novel social experience.

Objectives

Our project goals first involved prototyping a Fusion360 maze design, laser cutting, and then assembling the physical component. Then, through weekly iterations on the code and control schemes, we planned to develop a remote input mode for the Arduino and a local sonar-based controller.

Project Features:

  1. Laser-cut Mechanical Marble Maze – with roll & pitch movement driven by an Arduino microcontroller.
  2. Physical Input – a two-sonar-sensor setup for triangulating an object in 2D space, and converting it to a 2-axis maze movement.
  3. Remote Input Mode – an Arduino mode for parsing MQTT server messages when connected to a network device.
  4. Software Input – an iOS application that forwards Gyroscope data to the MQTT server.

Implementation

CAD Design

We made a CAD design to laser cut the parts. The main CAD design includes two narrow pivoting frames, the main maze board, a tilted marble retrieval board, and the main playground base and walls — the complete CAD design can be accessed below.

V1 Prototype Design

The structure was assembled with lightweight 6mm laser-cut plywood, low friction shoulder screws, and two hobby servo motors.

V2 Prototype Assembly

After extending our Arduino program with a remote input mode, we designed a sonar range sensor control scheme to control the two maze axes. Operating the device with nothing but your hands was a feature we hoped would make the game more engaging to play.

V2 Prototype with sensor input: each hand corresponds to a sensor

We then iterated on our input scheme, and developed a triangulation setup that could convert the 2-dimensional movement of an object into movement on two servo axes.

Outcomes

Laser-cut Mechanical Marble Maze

By using a gimbal frame, we were able to design a sturdy yet mechanically revealing structure that resonated with curious players. This also offered us some vertical flexibility, in that a taller maze frame allowed us to implement a marble retrieval system: it catches the marble below the gameboard and deposits it in front of the player.

Physical Input

We were also able to successfully implement a control scheme that we can confidently say users have probably never seen before. In the following video, we utilized Heron’s formula and a bit of trig to control this remote maze by moving a water bottle within a confined space.

V2 Prototype with 2D sensor input
2D triangulation sonar sensor schematic
2D triangulation sonar sensor setup
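
The position solve behind this control scheme is compact. Below is a minimal sketch of the Heron's formula step, where d1 and d2 are the two measured sonar ranges and the sensor baseline is a hypothetical value:

const float BASELINE = 30.0;  // cm between the two sonar sensors (hypothetical)

// The two sensors and the tracked object form a triangle with sides
// BASELINE, d1, and d2. Heron's formula gives the triangle's area, the
// area gives the object's height above the sensor line, and Pythagoras
// gives its offset along that line.
bool locateObject(float d1, float d2, float &x, float &y) {
  float s = (BASELINE + d1 + d2) / 2.0;  // semi-perimeter
  float squared = s * (s - BASELINE) * (s - d1) * (s - d2);
  if (squared < 0) return false;         // no valid triangle: bad echo, skip it
  float area = sqrt(squared);
  y = 2.0 * area / BASELINE;             // perpendicular distance from the baseline
  x = sqrt(d1 * d1 - y * y);             // distance along the baseline from sensor 1
  return true;
}

The resulting x and y can then each be mapped to one of the two servo axes.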

Remote Mode & Software Input

Unfortunately, testing this feature from halfway around the world posed one serious issue: latency. The combination of sending serial output to an MQTT broker and across the world to another serial channel was too much for our MQTT server to handle, and caused input clumping on the receiving maze side.

In order to address this, we developed an iOS client that taps into a mobile device's high-quality gyroscope and directly controls the tilting of a remote board. This provided much more accurate sensor input, requiring minimal parsing on the Arduino side. And, in our experiments, the application almost entirely eliminated input clumping on the receiving Arduino's end by transmitting at 4 messages/second.

Low latency messages being received server-side

Our final project outcomes looked like this:

Final Demo Video
High Level Project Outcomes

In what ways did your concept use physical movement and embedded computation to meet a human need?

Locally, our device can function alone. Through two separate sonar setup schemes, we provided unique means of control to the player, and controlling the device via hand and object movements seemed more natural than a more limiting tactile control. Ultimately, by porting our device to accept remote input from a variety of sources, we turned what is typically a solo gaming experience into a fun activity that you can share with your remote friends.

In what ways did your physical experiments support or refute your concept?

Our device does achieve and encourage remote game playing. It also conveys a sense of almost ambient telepresence – it shows not just presence but also the remote person's relative movements through the tilting of the two axes. Right now the two players can collaborate in a way such that the remote player controls the axes while the local player observes and resets the game if needed. Our game is also open to future possibilities for the local player to be more involved, perhaps by combining remote and local sensor/phone input.

We found that the latency issue across the MQTT bridge significantly hampers the gaming experience, but we expect the latency to be greatly reduced if the two players are relatively close (in the same country or city). Another major issue is that currently the remote player doesn't get a complete visual of the board – this could be added later, or the game could be made collaborative, with the local player instructing the remote player's movements.

Future Work

With our device, we successfully implemented a working maze game capable of accepting remote input. But there are two main aspects in which the device can be extended.

Additional Game Modes:

In order to really take the novelty of controlling a device in your friend’s house to the next level, we must make the game interactive for both parties. By adding interactive game modes, each player could take control of an axis, and work together (or against each other) to finish the marble maze. This would encourage meaningful gameplay, and engage both users to return to the remote game mode.

Networking Improvements:

A code-level improvement that our Arduino could benefit from involves an extension of the REMOTE_INPUT mode. Currently, there are no controls for buffering latent remote input: the device processes the signals as soon as they are available. As a result, the local user sees an undefined jitter in the maze whenever a block of inputs is processed. Implementing a buffer to store these inputs and process them at a set rate will not remove any latency, but it will reproduce a delayed yet accurate version of the transmitted movements. This makes it obvious to the user that network latencies are present, as opposed to the undefined movements.
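
A minimal sketch of that buffering idea is shown below, assuming incoming remote messages have already been parsed into pairs of servo angles; the buffer size and playback rate are hypothetical values:

#include <Servo.h>

const int BUF_SIZE = 32;
int pitchBuf[BUF_SIZE], rollBuf[BUF_SIZE];
int head = 0, tail = 0;
unsigned long lastPlayback = 0;
const unsigned long PLAYBACK_MS = 50;  // replay buffered input at a steady 20 Hz

// Called whenever a remote message arrives, however bursty the arrivals are.
void enqueueInput(int pitch, int roll) {
  int next = (head + 1) % BUF_SIZE;
  if (next == tail) return;  // buffer full: drop the input rather than block
  pitchBuf[head] = pitch;
  rollBuf[head] = roll;
  head = next;
}

// Called from loop(): plays back at most one buffered input per interval, so a
// clump of late packets becomes smooth, delayed motion instead of a jitter.
void playbackInput(Servo &pitchServo, Servo &rollServo) {
  if (tail == head) return;  // nothing queued
  unsigned long now = millis();
  if (now - lastPlayback < PLAYBACK_MS) return;
  lastPlayback = now;
  pitchServo.write(pitchBuf[tail]);
  rollServo.write(rollBuf[tail]);
  tail = (tail + 1) % BUF_SIZE;
}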

Contributions

Max: Arduino code, two sonar triangulation scheme, and the iOS client.

Yu: CAD design and laser cutting, prototype assembly, and local testing.

Citations

  1. The object triangulation setup scheme and Heron's formula implementation is adapted from Lingib's Dual Echo Locator demo.
  2. The highpass filter and smoothing code are inspired by Garth Zeglin @ CMU.

Supporting Material

  1. Source Code on Github
    Arduino Source
    iOS Application Source
  2. CAD files
    Link to Fusion Sketch

Masks During Pandemics (Flower Mask) Final Project
Wed, 09 Dec 2020 02:40:13 +0000
https://courses.ideate.cmu.edu/16-223/f2020/work/2020/12/08/flower-mask-final-project/

Abstract

In this project, we created wireframe masks mimicking plague masks from the 1700s. They contain fake flower petals representing the safety the mask gives the wearer: historically, dried flowers and herbs were placed in those beaks because doctors thought the good smell would prevent them from getting sick. By creating an artistic expression of disease masks, our goal is to convey the importance of wearing masks during pandemics throughout history. In general, this project is meant to convey how, although wearing masks has become a political issue, underlying this debate is the greater issue of everyone's safety.

Objectives

Aside from the obvious objective of creating the masks, we originally wanted both of them to open at the push of a button, before we defined our concept (to be portrayed in a video) more clearly. However, after deciding to make one person the one putting others in danger and the other person essentially a victim, our objectives changed drastically so that the two masks would be very different from each other. Eventually, we decided that Brianna's mask should be covered in black material and be the only one that opens, and that Anya's should be an open wireframe with the beak "closed". The over-arching objective of this project was to create a video with these creations that clearly portrays a concept without requiring much context outside of the video.

Implementation

As discussed in the objectives, the implementation of these devices and their design choices centered around portraying a very specific and clear concept in our video. In the video, as Brianna opens her mask to fight against the mandate for people to wear masks, Matilda (Anya's housemate) and Brianna both lose the flower petals from their masks, signifying that even though Matilda kept her mask closed, both of them are in danger when Brianna doesn't keep her mask on. After the argument that mandating masks is illegal is presented, Matilda responds with the argument as to why it is not illegal (also through voiceover).

Outcomes

I think our video successfully conveys our concept because it shows how one person's safety is affected by another person's decision not to wear a mask during a pandemic. I think this concept is also introduced in a unique way, since it uses plague masks from the 1700s to get the message across. The visual differences in the masks also tie into the concept. Brianna wears a mask covered in black fabric, which she opens, putting herself and Matilda in danger; this suits her "character," who fights for the choice to wear a mask or not instead of protecting herself and others. At the same time, the fact that her mask is covered in fabric shows that even though she opens her mask, she is still more protected than Matilda (since other people wearing masks makes her safer). Matilda, however, wears a very bare wireframe mask that is closed but clearly leaves a lot of open space. The fact that her mask is not covered helps convey how, even though she is responsibly wearing a mask, she is left very vulnerable once Brianna opens hers.

One "failure" or weak point of our project, in our opinion, is that the automation aspect became more and more irrelevant as we defined our concept. We still appreciate it in the project because it emphasizes how far we have come (in technology and our understanding of pandemics) and yet we still don't all wear masks, something even doctors in the 1700s knew to do (whether or not the right science backed it up). However, this part of the project and its purpose isn't especially emphasized or clear in the video.

Future Work

As mentioned in the outcomes, the presence of automation seems a little random in the video, so in a future iteration we would try to make it clearer how the mask opening at the press of a button contributes to the concept.

Contribution

We each created our own mask while consulting often to decide on design choices and how they would contribute to the final concept. We also each filmed our own videos of our masks, which Anya put together using iMovie.

Photo Documentation

We started with the very loose goal of creating automated plague masks in the style of the 1700s, as seen in the initial drawings below.

We then started by creating a simple wireframe:

Our progression started with just this wireframe proof-of-concept.

Then we hooked up a servo motor to open the bottom jaw so that it remains open as long as the button is pressed (on both of our first versions).

A string connecting the servo horn to the bottom, center joint on the jaw allows it to move.
The circuit includes a push button switch, a 2kΩ resistor, and a servo.

This is where our masks and objectives for the next steps started to diverge. Brianna covered hers in black fabric and added a top hat to create a clearer character, while Anya remade hers into a sleeker wireframe design that would demonstrate the flower petals leaving with the mask closed. The final products can be seen in the video above.

Citations

The wire framework of the mask was originally inspired by various masks made by Eli Secrest. More of his work can be found in the following link: https://www.instagram.com/aerarius_metalworks/?hl=en

Code

Code for getting the servo motor to open the jaw on Brianna’s mask:

#include <Servo.h>

const int SWITCH_PIN = 2;
const int SERVO_PIN = 9;
Servo svo;

void setup() {
  Serial.begin(115200);
  pinMode(SWITCH_PIN, INPUT);
  svo.attach(SERVO_PIN);
}

void loop() {
  // Hold the jaw open while the button reads HIGH, closed otherwise.
  if (digitalRead(SWITCH_PIN) == HIGH) {
    svo.write(180);
  }
  else if (digitalRead(SWITCH_PIN) == LOW) {
    svo.write(0);
  }
}

Elevate: Final
Mon, 07 Dec 2020 23:59:47 +0000
https://courses.ideate.cmu.edu/16-223/f2020/work/2020/12/07/elevate-final/

Imagine waking up, making a cup of coffee, and going to the living room to watch some TV. You would place the cup of coffee on Elevate, which would lift it up and provide a backlight. It's easy to forget your mugs around the house, so Elevate not only physically lifts your mug but raises it to a higher level of importance. The goal is to make the user more conscious of their belongings, while also creating an aesthetic piece of furniture that users would want to use and interact with.

This is a rendering of our Elevate in a living room environment with a mug highlighted

The goal of Elevate is to create a soothing and calming experience for the user, while also giving the user an indirect sense of companionship. Since the beginning of the pandemic, many individuals have remained quarantined from social interaction. Elevate aims to address the lack of interaction by providing a soft, subtle presence in everyday life. Whether it is your morning coffee, or an afternoon book, Elevate acknowledges your actions and highlights your daily routines.

This is our physical model in a living room, focused in on the mug and the light emitted from it, creating a soft environment

In order to achieve this, our table includes LED lights that create a backlight for the panel being lifted. This adds to the overall aesthetic of Elevate and further emphasizes the objects placed on the panel. Additionally, we control the stepper motor to move slowly and calmly for the user experience, but not so slowly that it becomes frustrating. The panel only shifts slightly upwards, around the height of one panel, because it is meant to emphasize and give importance to your objects, not create an inconvenient level change. We also decided to create multiple panels so that the table could highlight one specific area; if the entire table lifted, there would be no emphasis on a specific object.

A panel consists of an LED, a force-sensitive resistor (FSR), and a stepper motor. The code includes a state machine for each panel: each panel is either loaded or unloaded, based on its FSR reading. A panel is controlled by two Arduinos, the Master Controller, which monitors the state using the FSR, and the Stepper Driver Arduino, which controls the stepper motors.

Each loop, the Master Arduino runs each panel's state machine, which maintains the state of the panel, and then checks whether the panel is loaded or unloaded. If the panel is loaded and was previously unloaded, the master controller sends the state change to the Stepper Driver Arduino. The Driver Arduino loop iterates through a list of steps remaining until each panel reaches its target state. If a panel has steps remaining, each step is taken incrementally, allowing for simultaneous stepper motor movements. When data is received from the master controller, the steps array is updated and the motors are moved accordingly.
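
The link between the two boards is not shown here; below is a minimal sketch of what it might look like, assuming a plain serial connection and a made-up one-line message format (panel letter plus a 0/1 state), with a hypothetical steps-per-move constant on the driver side:

// Master side: send "<panel> <state>" whenever a panel crosses
// between loaded and unloaded.
void sendStateChange(char panel, bool loaded) {
  Serial.print(panel);             // 'a' through 'd'
  Serial.print(' ');
  Serial.print(loaded ? '1' : '0');
  Serial.print('\n');
}

// Driver side: fold each received message into the steps array that
// step_motors() (shown further below) works through incrementally.
const int STEPS_PER_MOVE = 200;    // hypothetical travel per state change

void readStateChanges(int steps[]) {
  while (Serial.available() >= 4) {  // smallest complete "a 1\n" message
    char panel = Serial.read();
    Serial.read();                   // skip the space
    char state = Serial.read();
    Serial.read();                   // skip the line ending
    int i = panel - 'a';
    if (i < 0 || i > 3) continue;    // ignore malformed input
    steps[i] = (state == '1') ? STEPS_PER_MOVE : -STEPS_PER_MOVE;
  }
}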

Below is an example of the State Machine for Panel A. When state_a is equal to 5, this means that the FSR was triggered and the panel is in the loaded state. When state_a is equal to 4, this means the panel is in the unloaded state and the state machine resets.

// STATE MACHINES
// Manages the state of the Motor/FSR/LED combo and updates according to 
// FSR reading
void SM_a (){
  state_prev_a = state_a;
   
  switch (state_a){
    case 0: //RESET
      state_a = 1;
      break;
       
    case 1: //START
      val_a = analogRead(aA);
 
      if (val_a >= fsr_a_threshhold) {state_a = 2;}
       
      break;
       
     case 2: // SET TIME
      t0_a = millis();
      state_a = 3;
      break;
       
     case 3:// WAIT AND CHECK
      val_a = analogRead(aA);
      t_a = millis();
 
      if (val_a < fsr_a_threshhold) {state_a = 0;}
 
      if (t_a - t0_a > debounceDelay){
        state_a = 5;
      }
      break;
       
     case 4: // UNLOADED
      state_a = 0;
      break;
       
     case 5: // LOADED
      val_a = analogRead(aA);
      if (val_a < fsr_a_threshhold) {state_a = 4;}
      break;
  }
}

The Stepper Driver loop calls the step_motors function, which checks if motors need to be stepped, and steps them. This function steps each motor incrementally, to allow for simultaneous stepper motor movement.

// Iterates through the steps array and steps each motor the required number of steps
// This allows for simultaneous movement of stepper motors
void step_motors() {
   
  for (int i = 0; i < 4; i++){
    //Rotate clockwise to loaded state
    if (steps[i] > 0){
      //Case for A motor because A pins are not sequential on CNC Shield
      if (i == 0) {
        step (true, aDir,aStep);
        steps[i] -= 1;
        }else{
          step (true, (4+i),(1+i) );
          steps[i] -= 1;
        }
      //Rotate counterclockwise to unloaded state
    } else if (steps[i] < 0){
      if (i == 0) {
        step (false, aDir,aStep);
        steps[i] += 1;
        }else{
          step (false, (4+i),(1+i));
          steps[i] += 1;
        }
       
    }
     
  }
   
}

The full code is provided in the appendix.

This is what the physical model looks like under the panels.

While we were able to fit the circuitry into the table, we did come across multiple barriers. The first struggle was getting the panels to align correctly, since the FSRs under the panels created uneven leveling. Additionally, the FSRs we were using were the smaller circular ones; if we had more time and resources to move forward, we would probably shift to the larger rectangular ones. Using the smaller ones meant the table did not always respond when an item was placed on it, because the force did not reach the sensor or was not strong enough. For this reason we had to place the mug carefully near or on the sensors to get a response. We also had to sand down the panels so that they would not rub against each other and get stuck while moving, which ultimately led to gaps between panels. In a living room environment where liquids and food would be placed on the table, there are plenty of opportunities for spills through those gaps that could destroy the circuit.

Despite the physical problems, we do think our design succeeded in creating the effect we desired. Our panels slowly elevate, with a backlight highlighting the mug we ran tests with. Elevate succeeded in interacting with us and informing us that we had placed an item on it, prompting us not to simply ignore it but to treat it with more consciousness. While dealing with the mug full of liquid, we were definitely more aware of what we were doing with it and where it was being placed. This was mostly because we did not want to cause any spillage and ruin the device, but overall Elevate was successful in giving the mug more importance.

Moving forward, we would want to further improve the physical table and create a more level top to improve both the overall design and the function. Additionally, we would work on fixing the code so that it can lift an object that spans two panels, which our current iteration is unable to do. We would also want to explore ways to make it waterproof and add some sort of collection system for crumbs. Bowls of food and cups of liquid will be placed on this table, and currently any spill would run straight into the wires and circuit area. Preventing this would decrease stress for the user, as Elevate is supposed to be a calming piece in your living space.

Through this process and project we have learned that 3-D modeling is extremely idealistic, especially with moving components. In Rhino it is very easy to have complete control over each component, but in reality the components bring their own hurdles that are not highlighted in a digital model.

Amal focused on the physical aspect of the table, while Chris worked with the electrical aspect.

Files:

Mona Lisa Final
Mon, 07 Dec 2020 14:43:38 +0000
https://courses.ideate.cmu.edu/16-223/f2020/work/2020/12/07/mona-lisa-final/

During these pandemic times, it has become increasingly difficult to stay connected with peers. This raises concerns about communication among people, but it also brings forth the need for telepresence. For our project, we used computer vision and facial recognition to detect human eyes and send that information to an arduino that controls a device reproducing the idea of Mona Lisa's eyes. Instead of having the Mona Lisa follow the person looking at it, our device replicates a partner's eye movements, since we were trying to establish a connection among a diverse set of people. The goal of this project is to enhance connection among people. We were able to create a fully functional computer vision program that detected eye movements, a communication channel that sent information from the eye-detection software to the other person's arduino, and a working device that responded to the data received and produced physical motion corresponding to the eye movements.

Project Goals

Our goals throughout the project were to convey personality through eye movement, which enhances the telepresence interaction, and to elicit an amused yet shocked reaction. The MVP of our project was a working prototype that would move Mona Lisa's eyes. This meant facial, eye, and pupil recognition using opencv; an arduino that would receive messages over an mqtt server; and an animatronic eye on a 2-axis gimbal to replicate eye movements, all placed in a box with a Mona Lisa drawing on the front. This is how the eye was set up:

Front view of eyes
Back View of eyes

Initial Design

When I began writing the code for the pupil detection there were a few choices to make in terms of the implementation. Since there is no ML model that detects pupils like there is for faces and eyes, I had to be a little more creative in order to detect the pupil. I decided to use blob detection to locate it, the idea being that the largest blob within the eye frame would be the pupil. In opencv, in order to do blob detection you first need to make the image grayscale. At first, to determine where the blob was, I applied a threshold on the grayscale image to filter out certain pixels. Ideally, this would keep the pupil while removing unwanted pixels. However, this had several issues, because the logic relied heavily on finding the perfect threshold value, and lighting played a huge role in determining that value. Thus, I went with a different design that finds the contours of the eye frame and then picks the pupil based on the blob sizes that appear. This resulted in a more well-defined solution that limited external factors.

Another important decision was where in the code the transformation of data would occur. There were two possible places this could happen. The first option was on the python end, meaning that before sending the message to the arduino, the python code would convert the data into the appropriate angles. The second option was to convert the data into angles on the arduino end, upon receiving the data. I chose to convert the angles on the python end, because it made more sense to do all the heavy lifting in python and simply have the arduino read the data and execute the servo motion.

Another important design decision was the use of the mqtt code provided through the class resources. When I first began implementing the mqtt server connectivity I was planning on making my own program to send messages. However, reusing the existing bridge program code saved me many hours that I would have spent trying to figure that out.
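
The actual implementation is Python (linked below under Code Implementation Details), but the contour-based pupil pick is easy to sketch. Here is the same idea expressed with OpenCV's C++ API; the fixed pre-threshold value is a hypothetical placeholder that the size-based selection makes less critical:

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of the contour approach described above: binarize the eye frame,
// find all contours, and take the centroid of the largest one as the pupil.
cv::Point findPupil(const cv::Mat &eyeFrame) {
  cv::Mat gray, blurred, mask;
  cv::cvtColor(eyeFrame, gray, cv::COLOR_BGR2GRAY);
  cv::GaussianBlur(gray, blurred, cv::Size(7, 7), 0);
  // Inverted threshold so the dark pupil becomes the bright region.
  cv::threshold(blurred, mask, 50, 255, cv::THRESH_BINARY_INV);

  std::vector<std::vector<cv::Point>> contours;
  cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

  double bestArea = 0;
  cv::Point pupil(-1, -1);  // (-1, -1) signals "no pupil found"
  for (const auto &c : contours) {
    double area = cv::contourArea(c);
    if (area > bestArea) {
      cv::Moments m = cv::moments(c);
      if (m.m00 > 0) {
        bestArea = area;
        pupil = cv::Point(int(m.m10 / m.m00), int(m.m01 / m.m00));
      }
    }
  }
  return pupil;
}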

When we thought about creating the eye structure, there were several approaches online. We decided on the method using a 2-axis gimbal to portray eye movement. We didn't want it to be so complex that we couldn't achieve it, but we also wanted to create a somewhat realistic eye movement experience. Initially, we decided to use clay to model the eyeball. The clay ended up being too weak to support the wire movement, and it also absorbed the glue used to attach the l-brackets. Therefore, we switched to plastic eyeballs, as they provided a more realistic eye for the artwork and didn't cause any of the problems that the clay eyeballs did. Due to uncertainty regarding using TechSpark, we were unable to change some of the cardboard features of the box to plywood. We also couldn't add an extra shelf like we intended, so we used books to place the arduino and breadboard on. The intended structure can be seen in the following images:

Another feature we had issues with was the paperclip linkages that connected the servo horn to the animatronic eyes. They were not flexible enough, so we used some soldering wire instead. The CAD model that we created was also unable to move, due to the rigidity of the linkage connecting the servo to the eye and the fixed position of the servos. We couldn't represent the soldering wire in the CAD model, but the physical changes we made were successful. The eye structure was able to move according to the messages it received over the mqtt bridge.

This can be seen in the video below:


Future Plans

The next iteration of this project would be to create a replica of the device. The initial design included a pair of systems, one controlled by each person. Having two devices would allow for a richer connection between the users, as each could act as controller (the person who controls the device) and receiver (the person who sees the eye movements) simultaneously. Another idea we had for conveying personality was adding a blinking feature, which would give the device a more human-like quality.

Task Breakdown

Max wrote the code for the pupil detection, integrated the mqtt server to send messages to arduino, and wrote arduino code to move servos. Marione created the CAD designs, animatronic eye, and the mona lisa box. These can be seen below:

Physical Setup of Hardware – Structure situated on a desk

Code Implementation Details

If you want to look more in detail into the code it can be found here: https://github.com/mdunaevs/Eye_Tracking

Or downloaded as a zip:

In order to recognize a pupil, first we need to detect a face, and from that face we need to identify the person's eyes. The same principle can be applied in programming. Opencv allows for easy extraction of a person's face and eyes using cascade classifiers (a machine learning approach to detecting objects within an image). In my code, I made use of these classifiers to detect a person's face and eyes. In the functions shown below, I convert the image to grayscale and then feed it to the classifiers. The classifiers come back with a set of coordinates representing the detected objects, containing information about the position, minSize, and maxSize of each object. The third index of the tuple represents the maxSize. Using the information about the maximum size, I can determine which object is the largest, which is the object I am looking for.

Function to detect largest face within the camera view frame
Function to detect eyes of the largest face.

Now that we had the eyes isolated, we needed to get the pupils. The issue is that, unlike the face and eyes, there currently does not exist a cascade classifier for the pupil. This meant I had to get a little more creative with how to extract it. As mentioned before, the concept behind the classifiers was to detect regions/objects within the image. Following a similar approach, I decided to detect large objects within the eye frame.

My solution to this was to use blobs. A blob is a group of connected pixels in an image that share some common grayscale value. By finding all the blobs within the eye frame, I can use this information to find the appropriate blob corresponding to the pupil.

Code to get blobs within the eye

Now that we were able to locate the pupil of the eye, we needed to figure out how to send some sort of data about the pupil to the arduino. To begin with, we defined what our data would represent: the angle to move the servos to. The way we calculated the angle was by first computing the distance between the expected location of the pupil (based on the center of the eye frame) and its actual location within the frame.

Calculation of distance

Once we had the distance we applied a conversion to change the distance into an angle. Note that the constants were found by experimenting with the data and trying to get a reasonable angle change.

Conversion of distance to angle
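
The conversion itself reduces to a small linear map plus a clamp. A sketch with hypothetical constants standing in for the experimentally found ones:

// Hypothetical gain and limits; the real constants were found by
// experimenting, as noted above.
const float DEG_PER_PIXEL = 0.5;  // servo degrees per pixel of pupil offset
const int CENTER_ANGLE = 90;      // servo angle when the pupil is centered

int distanceToAngle(float dx) {
  int angle = CENTER_ANGLE + int(dx * DEG_PER_PIXEL);
  if (angle < 60) angle = 60;     // clamp to the gimbal's safe range
  if (angle > 120) angle = 120;
  return angle;
}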

Now that we had data, this needed to be sent to the arduino. The data was sent over an mqtt server. Using the same bridge program we had in class, we were able to send data to the arduino.

Publish message to monalisa topic

Then when the data was received on the arduino end, we were able to move the servos.

Arduino code to move servo to new angle position

At this point the code was working properly; however, in order to get a richer eye movement experience, we decided to incorporate saccade movements. A saccade refers to the sudden, quick, simultaneous movement of both eyes in the same direction.

Saccade motion detection and message publishing

The trick to this was determining what counts as a saccade. Since saccades are typically quick, sudden jolts of both eyes, if we mapped every saccade to an arduino movement, it would cause the same issue as before, where too much data is sent to the arduino in too little time. Thus, I created a filtering system to only accept the saccades in which the ending positions of the pupils, relative to their own eyes, were roughly the same. Based on the data and some tweaking, a value around 5 turned out to be a reasonable threshold for deciding whether the two pupil distances were relatively close to one another.
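
In outline, the filter reduces to a tolerance check between the two pupils' offsets. A minimal sketch, with made-up names for illustration:

#include <cmath>

const float MATCH_TOLERANCE = 5.0;  // pixels; the experimentally chosen value

// Forward a saccade only when both pupils end up in roughly the same
// position relative to their own eye frame.
bool isSharedSaccade(float leftDx, float leftDy, float rightDx, float rightDy) {
  return std::fabs(leftDx - rightDx) < MATCH_TOLERANCE &&
         std::fabs(leftDy - rightDy) < MATCH_TOLERANCE;
}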

CITATIONS

http://www.pyroelectro.com/tutorials/animatronic_eyes/hardware_servos.html

https://medium.com/@stepanfilonov/tracking-your-eyes-with-python-3952e66194a6

https://gist.github.com/edfungus/67c14af0d5afaae5b18c

Supporting Material

Electronic Schematics:

Circuit diagram for eye mechanisms

CAD Files:

Eyeball rotating around y-axis
Ring mechanism rotating around x-axis
CAD – back

Flower Mask Final Video
Mon, 07 Dec 2020 06:06:50 +0000
https://courses.ideate.cmu.edu/16-223/f2020/work/2020/12/07/flower-mask-final-video/

For our final iteration of the flower mask, we settled on a concrete concept, as it had kept changing throughout the creation process, and we created a video to clearly demonstrate it. For context, we created wireframe masks mimicking plague masks from the 1700s. They contain fake flower petals representing the safety the mask gives the wearer: historically, dried flowers and herbs were placed in those beaks because doctors thought the good smell would prevent them from getting sick. As Brianna opens her mask to fight against the mandate for people to wear masks, Matilda (Anya's housemate) and Brianna both lose the flower petals from their masks, signifying that even though Matilda kept her mask closed, both of them lose their safety when Brianna doesn't keep her mask on. After the argument that mandating masks is illegal is explained, Matilda responds with the argument as to why it is not illegal (also through voiceover). In general, this project is meant to convey how, although wearing masks has become a political issue, underlying this debate is the greater issue of everyone's safety.

Joystick Maze Revised – V4
Mon, 07 Dec 2020 04:29:53 +0000
https://courses.ideate.cmu.edu/16-223/f2020/work/2020/12/06/joystick-maze-revised-v4/

From the prototypes, we were satisfied with the gimbal system that rotated the maze. The goal for this revision was to CAD a more complex, puzzle-like board for the user to navigate.

We decided that instead of having one switch activate one rotating wall, one switch would move all the rotating walls 90 degrees. The user then has to move through a constantly shifting maze with more movement than before. With this in mind, the board was entirely redesigned into the following:

Initial Position of the Maze. Shining half-spheres represent the rollover switches that activate the rotating walls which sit on circular plates.
Maze position once a rollover switch is triggered and walls rotate 90 degrees.
Maze with walls and switches removed. The components sit on lips created by supports attached to the back of the board.
Isometric view of maze board

The entire gimbal system with the board can be seen below:

Isometric view of entire maze with gimbal system

Mona Lisa V4
Sun, 06 Dec 2020 00:57:57 +0000
https://courses.ideate.cmu.edu/16-223/f2020/work/2020/12/05/mona-lisa-v4/

For the final iteration of our project, we added the finishing touches.

In terms of the code, we added saccade motion. A saccade refers to the sudden, quick, simultaneous movement of both eyes in the same direction. In the code this was implemented as shown in the image below.

The trick to this was determining what counts as a saccade. Since saccades are typically quick, sudden jolts of both eyes, if we mapped every saccade to an arduino movement, it would cause the same issue as before, where too much data is sent to the arduino in too little time. Thus, I created a filtering system to only accept the saccades in which the ending positions of the pupils, relative to their own eyes, were roughly the same. Based on the data and some tweaking, a value around 5 turned out to be a reasonable threshold for deciding whether the two pupil distances were relatively close to one another.

Here is the link to the github: https://github.com/mdunaevs/Eye_Tracking

For the mechanical components:

Since we had issues with the eye materials last time, we replaced the clay eyes with plastic eyes and the paperclip linkages with solder wire. This allowed the mechanism to move more easily.

Physical Setup of Hardware – Structure situated on a desk
Front view of eyes
Back View of eyes
CAD – Front
CAD – back