Waking up in the morning is often difficult. We are always looking for ways to help us get to our jobs on time. Sometimes a loud, blaring sound is not enough to keep us going through our endless cycle of work and sleep. That is why we decided to design a lighting system that will help with that. Our Wake-y Up-y Remote-y Lighting makes sure that your lights go on when you silence your alarm because nothing should stop you from completing your labor with a smile.
Our goal was to design something that would turn on your lights when you pick up your phone in the morning and turn them off when you put your phone down before you go to sleep.
Our objectives for the project shifted throughout its duration. While we initially set out to make a working prototype, in the end we decided to make a CAD design that would allow us to circumvent some of the constraints that had limited us when we were trying to make a physical version. Once we shifted to the CAD design, our goal was to make a user-friendly product out of parts mostly available to us. We focused on aspects like design in order to remove any potential barriers for use by a potential user.
The features we had initially implemented were quite basic in nature. We built both the load cell piece and the light switch mechanism, with the light switch mechanism being a laser cut part attached to a servo that would press the switch down to turn the light on/off. Having ensured that basic level of functionality, we moved on to the load cell but were met with technical difficulties. Though we eventually figured out what was causing issues with our implementation of the load cell, by that point we had already opted to move to a CAD design.
With regards to our final design choices, we hoped to remove all reasons for a potential user not to use the product. With that in mind, we tried to make the design sleek so that the user would actually want to leave it on their nightstand. We opted for a phone carriage that was about 50mm larger than our phones in both length and width, primarily because we felt that a snug fit for the phone would make the product more difficult to use than it needed to be. We also added the base plate to the nightstand unit at the end to address a problem that we had faced since the outset of the project: how to make sure the load cell did not tip over when a phone was placed in the carriage. When initially working with our physical prototypes, we had used our own counterweights to ensure the unit wouldn’t tip over. However, once we switched to a CAD design, we could implement a more efficient solution. The base plate crosses over where the center of mass of the entire nightstand unit would be, so the unit doesn’t tip over. Ideally, if we were to actually fabricate our CAD design, we’d make sure that both the base plate and the initial counterweight base cylinder were of a heavier material than the rest of the nightstand unit.
Our biggest success was shifting the focus of the project from physical prototyping to design. Once we decided to make a CAD design our final deliverable, we could think from a brand new perspective: not about what we could physically make, but about what we thought would make the best product. This gave us takeaways beyond what we learned about the physical parts and new insight into what needs to be considered in product design. Our biggest failure was probably the inability to get the physical model working in a timely fashion. Though we did eventually figure out what was wrong with our load cell design, by that point we had already spent a few days troubleshooting individually and we were both extremely frustrated with the project. That said, our shortcomings with the physical model are what led us to decide to switch to a CAD design, so in the end they benefitted us regardless. Because of this, our final deliverable took the shape of a product pitch, where we presented our idea of what we wanted to make to the class.
We also had some basic code written up for our physical prototype wall mount, as seen below.
#include <Servo.h>

const int SERVO_PIN = 7;
const int ON = A0;   // "lights on" switch input (assumed wired to ground, active low)
const int OFF = A1;  // "lights off" switch input (assumed wired to ground, active low)

Servo controlledServo;

void setup() {
  Serial.begin(115200);
  // Enable the internal pull-ups so the switch inputs are not left floating.
  pinMode(ON, INPUT_PULLUP);
  pinMode(OFF, INPUT_PULLUP);
  controlledServo.attach(SERVO_PIN);
}

void loop() {
  if (!digitalRead(ON)) {
    switch_on();
    delay(1000);
    controlledServo.write(90);
  }
  if (!digitalRead(OFF)) {
    switch_off();
    delay(1000);
    controlledServo.write(90);
  }
}

// Sweep from center to one end and back to press the switch up.
void switch_on() {
  for (int i = 90; i > 0; i--) {
    controlledServo.write(i);
    delay(1);
  }
  delay(1000);
  for (int i = 0; i < 90; i++) {
    controlledServo.write(i);
    delay(1);
  }
}

// Sweep from center to the other end and back to press the switch down.
void switch_off() {
  for (int i = 90; i < 180; i++) {
    controlledServo.write(i);
    delay(1);
  }
  delay(1000);
  for (int i = 180; i > 90; i--) {
    controlledServo.write(i);
    delay(1);
  }
}
When working on our physical model, we only got as far as testing the functionality of our prototype. We stopped short of combining the load cell’s output and the servo movement. The code above was only to ensure we could get the range of motion we needed to flip the light switch. At the time of writing it, we had planned to factor in load cell output to control the type of motion, but because we had so many issues with the load cell we never ended up doing so. This was one of the major shortcomings in our project. What is not shown are the many libraries for the HX711 load cell amplifier that we had tinkered with in an attempt to get the consistent output we wanted from our prototype, unaware that it was our physical design that was interfering with the output.
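For reference, the load cell logic we never finished might have looked roughly like the sketch below. This is a hypothetical example using the common bogde/HX711 Arduino library; the pin numbers, calibration factor, and weight threshold are placeholders rather than values from our prototype.

#include "HX711.h"

// Hypothetical pin assignments and threshold -- not taken from our prototype.
const int LOADCELL_DOUT_PIN = 3;
const int LOADCELL_SCK_PIN = 2;
const float PHONE_THRESHOLD = 100.0;  // grams; would be found by calibration

HX711 scale;
bool phonePresent = false;

void setup() {
  Serial.begin(115200);
  scale.begin(LOADCELL_DOUT_PIN, LOADCELL_SCK_PIN);
  scale.set_scale(420.0);  // placeholder calibration factor
  scale.tare();            // zero out the empty carriage
}

void loop() {
  float weight = scale.get_units(5);  // average of 5 readings
  if (!phonePresent && weight > PHONE_THRESHOLD) {
    phonePresent = true;    // phone set down at night -> turn the light off
    // switch_off();  (servo routine from the wall mount sketch above)
  } else if (phonePresent && weight < PHONE_THRESHOLD) {
    phonePresent = false;   // phone picked up in the morning -> turn the light on
    // switch_on();
  }
  delay(200);
}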
Taking this project to the next iteration for us means a few things – a more compact design, a physical model, and expanding the features that we can offer. Even when switching to working on the CAD model we still tried to be mindful as to which resources would have been available for us to use had we wanted to convert it into a physical model. The first step would be to remove the existing load cell in favor of a load cell that looked more like a traditional scale. It would make the design more compact since the large load cell we have used in our design leads to the product being generally unwieldy and oddly shaped. Then, we would have to build a physical prototype of the new model. The whole point of designing the product was to make sure it was usable. Finally, incorporating some of the feedback we had received from our classmates, we believe that adding more features is imperative to helping convince a potential user that the product is the one for them to use.
A few examples of what we’d like to incorporate:
The project was done jointly/collaboratively for the most part. We assembled the physical prototypes on our own as well as the final CAD design (though this can be attributed more to a misunderstanding of the due dates than anything else), but because of the shift in our project most of the work we did jointly in the last two weeks or so was spent on discussion of what to implement within the design. The general trend we followed was that Dillon did more of the design aspect while Jack focused more on the physical assembly and finer details of the design.
Dillon: Sketches, CAD Design for nightstand unit and wall mount unit, design/features input
Jack: Schematics, Soldering + additional assembly of physical prototype, design/features input
Toys and games are becoming progressively more reliant on screens, and these screens are the sole source of feedback for the children playing them. Our project objective was to create a game with more tactile, real-world movement that can be used to engage children. We decided to build a tilting marble maze with some form of decision-making and movement within the maze. The physical prototype achieved the objective of a completable tilting maze with real-world movement. To enhance the decision-making aspect of the maze, a later CAD iteration made the game more complex and generally more exciting for the player.
The following were the goals for the marble maze in order of decreasing priority:
The board would rotate on a gimbal system such that the board could change in pitch and roll. The playing board would be attached to an inner frame, and this inner frame would be connected to a servo and linkage that would rotate it about the x-axis. This servo and linkage would be part of an outer frame that rotates about the y-axis by way of a stationary servo and linkage. The maze walls and gimbal frames/stands would be fabricated out of 1/8″ wood so that the entire device would be relatively light. Dowels and bearings would be used to ensure smooth rotation about the axes. Additional servo motors would also be mounted beneath the board to rotate certain walls. Rollover switches would be placed on the board with lever switches mounted beneath them to detect whether the rollover switch has been triggered. An Arduino UNO microcontroller would connect the motors and sensors of the board while also receiving user input from a joystick.
After using prototypes to figure out how to position the gimbal servo motors and linkages, a final physical iteration was made with the main emphasis being the board. Unfortunately, we could not finish mounting the lever switches beneath the rollover switches, so while the maze game is playable, it does not require as much thought from the user.
The code below shows how the gimbal system works by translating input from the joystick into movement of the frames. The code is very raw, with no smoothing filters, and is just intended to allow the user to complete the game without many control frustrations.
#include <Servo.h>

const int outerServoPin = 5;
const int innerServoPin = 4;
const int jsXPin = A0;
const int jsYPin = A1;

Servo outerServo;
Servo innerServo;

int outerRotStart = 90;
int innerRotStart = 90;
int bigWallStart = 0;
int smallWallStart = 0;
int outerRotMin = outerRotStart - 60;
int outerRotMax = outerRotStart + 60;
int innerRotMin = innerRotStart - 60;
int innerRotMax = innerRotStart + 60;
int xPos;
int yPos;
int outerPos;
int innerPos;

void setup() {
  // pin setup
  pinMode(jsXPin, INPUT);
  pinMode(jsYPin, INPUT);
  Serial.begin(115200);
  outerServo.attach(outerServoPin);
  innerServo.attach(innerServoPin);
  outerServo.write(outerRotStart);
  innerServo.write(innerRotStart);
  delay(1000);
}

void loop() {
  // read the positions of the joystick
  xPos = analogRead(jsXPin);
  yPos = analogRead(jsYPin);
  // change frame states based on joystick
  outerPos = map(xPos, 0, 1023, outerRotMax, outerRotMin);
  innerPos = map(yPos, 0, 1023, innerRotMax, innerRotMin);
  outerServo.write(outerPos);
  innerServo.write(innerPos);
}
As we were unsatisfied with the level of thought required from the user, further CAD designs emphasized the game element of the marble maze. The design was steered in a puzzle-like direction with multiple switches and rotating walls that the user would have to spend some time thinking through. The new design also has more movement that’s apparent to the player, as every rotating wall rotates 90 degrees when any of the rollover switches are triggered. Now that the maze feels like it’s constantly changing, it creates more excitement for the user, and the extra thought required can create more satisfaction when the user conquers the maze. The board alternates between two states as the user navigates through it, as seen below:
To give the game more challenge and complexity, this new design contains a branching path. The initial physical design just had the user navigating the marble into a certain place before advancing towards the goal. The intention behind the added complexity is to make the user want to play the game again, as each playthrough would be slightly different. This idea can be expanded upon in future iterations, though it conflicts with the goal of keeping the maze compact.
Link to Fusion 360 Project Folder: https://andrew565.autodesk360.com/g/projects/20201106350547781/data/dXJuOmFkc2sud2lwcHJvZDpmcy5mb2xkZXI6Y28uVWhCQlBIakNUWUtQLXAyWkwzQnVTQQ
Moving forward, we could keep iterating and ideating on playfields and sensor ideas to create a more arcade-like game. While our current version is more akin to a game that could be found in a home, we could expand upon our ideas to add different game states and possibly multiplayer modes. Our original idea was to have this be a single-player game, as inspired by quarantine. However, if we evolved our idea to become more socially engaging and added more visual aspects via LEDs and sensors, there are many more branches we could explore.
Another idea to increase the replay-ability of the game is to have the user be timed in their completion of the maze. This would encourage players to try again even after winning to try to beat their personal best.
There is also potential for a different goal altogether: learning and education. Due to not reaching the goal of wire/motor concealment, not only is the movement of the maze apparent to the user but HOW it moves is as well. With more guidance from either an outside source (an adult) or indicators within the game itself, a child could learn how modules like servo motors, linkages, and sensors work.
Inspiration for Gimbal Design from Tilting Maze Final Report – Henry Zhang, Xueting Li
Authors and Contributions:
Shiv Luthra – maze design sketches, Computer Aided Designs, Arduino Code
Amelia Chan – gimbal design sketches, physical manufacturing and assembly
Our original development goals for the maze device were to offer the user an interactive experience, mainly through a unique mechanical design and a nontrivial set of control methods. But during a time when we’re often apart from our friends, we expanded our project to create a remote marble maze experience unlike any other: players can use their maze locally, switch game modes, and then let their remote friends take control.
The work done in this project is a great example of how microcontrollers and the network can transform a game that is usually played alone into a truly novel social experience.
Our project goals first involved prototyping a Fusion360 maze design, laser cutting, and then assembling the physical component. Then, through weekly iterations on the code and control schemes, we planned to develop a remote input mode for the Arduino and a local sonar-based controller.
Project Features:
CAD Design
We made a CAD design to laser cut the parts. The main CAD design includes two narrow pivoting frames, the main maze board, a tilted marble retrieval board, and the main playground base and walls — the complete CAD design can be accessed below.
The structure was assembled with lightweight 6mm laser-cut plywood, low friction shoulder screws, and two hobby servo motors.
After extending our Arduino program with a remote input mode, we designed a sonar range sensor control scheme to control the two maze axes. Operating the device with nothing but your hands was a feature we hoped would make the game more engaging to play.
We then iterated on our input scheme, and developed a triangulation setup that could convert the 2-dimensional movement of an object into movement on two servo axes.
Laser-cut Mechanical Marble Maze
By using a gimbal frame, we were able to design a sturdy yet mechanically revealing structure that resonated with curious players. This also offered us some vertical flexibility, in that a taller maze frame allowed us to implement a marble retrieval system: this catches the marble below the gameboard and deposits it in front of the player.
Physical Input
We were also able to successfully implement a control scheme that we can confidently say users have probably never seen before. In the following video, we utilized Heron’s formula and a bit of trig to control this remote maze by moving a water bottle within a confined space.
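As a rough illustration of the math involved, the sketch below shows the general idea rather than our exact code: two sonar sensors a known distance apart each measure their distance to the object, Heron's formula recovers the object's (x, y) position relative to the sensor baseline, and that position is mapped onto the two tilt servos. The pin numbers, baseline length, and play-area bounds here are placeholders.

#include <Servo.h>

const int TRIG_A = 2, ECHO_A = 3;   // left sonar (placeholder pins)
const int TRIG_B = 4, ECHO_B = 5;   // right sonar (placeholder pins)
const float BASELINE_CM = 30.0;     // distance between the two sensors (placeholder)

Servo outerServo;
Servo innerServo;

// Read one HC-SR04-style sensor and return the distance in cm.
float readDistanceCm(int trig, int echo) {
  digitalWrite(trig, LOW);
  delayMicroseconds(2);
  digitalWrite(trig, HIGH);
  delayMicroseconds(10);
  digitalWrite(trig, LOW);
  return pulseIn(echo, HIGH, 30000UL) / 58.0;  // round-trip time (us) -> cm
}

void setup() {
  pinMode(TRIG_A, OUTPUT); pinMode(ECHO_A, INPUT);
  pinMode(TRIG_B, OUTPUT); pinMode(ECHO_B, INPUT);
  outerServo.attach(6);
  innerServo.attach(7);
}

void loop() {
  float a = readDistanceCm(TRIG_A, ECHO_A);   // bottle to left sensor
  float b = readDistanceCm(TRIG_B, ECHO_B);   // bottle to right sensor
  if (a <= 0 || b <= 0) return;               // no echo this cycle

  // Heron's formula gives the area of the triangle formed by the two sensors
  // and the bottle. From the area we recover the bottle's perpendicular
  // distance y from the sensor baseline, then its offset x along the baseline.
  float s = (a + b + BASELINE_CM) / 2.0;
  float area = sqrt(s * (s - a) * (s - b) * (s - BASELINE_CM));
  float y = 2.0 * area / BASELINE_CM;
  float x = sqrt(max(a * a - y * y, 0.0f));

  // Map a hand-sized play area (placeholder bounds, in cm) to the tilt range.
  outerServo.write(constrain(map((long)x, 5, 25, 30, 150), 30, 150));
  innerServo.write(constrain(map((long)y, 5, 25, 30, 150), 30, 150));
  delay(50);
}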
Remote Mode & Software Input
Unfortunately, testing this feature from halfway around the world posed one serious issue: latency. The combination of sending serial output to an MQTT broker and across the world to another serial channel was too much for our MQTT server to handle, and caused input clumping on the receiving maze side.
In order to address this, we developed an iOS client that taps into your mobile device’s gyroscope and directly controls the tilting of a remote board. This provided us with much more accurate sensor input, requiring minimal parsing on the Arduino side. And, in our experiments, the application helped almost entirely eliminate input clumping on the receiving Arduino’s end by transmitting at 4 messages/second.
Our final project outcomes looked like this:
In what ways did your concept use physical movement and embedded computation to meet a human need?
Locally, our device can function alone. Through two separate sonar setup schemes, we provided unique means of control to the player. And the control of the device via hand movements and object movements definitely seemed more natural to execute than using a more limiting tactile control. And ultimately, by porting our device to accept remote input from a variety of sources, we made what is typically a solo gaming experience into a fun activity that you can share with your remote friends.
In what ways did your physical experiments support or refute your concept?
Our device does achieve and encourage remote game playing. It also conveys a sense of almost ambient telepresence: it shows not just presence but also the remote person’s relative movements through the tilting of the two axes. Right now the two players can collaborate such that the remote player controls the axes while the local player observes and resets the game if needed. Our game is also open to future possibilities for the local player to be more involved, perhaps by combining remote and local sensor/phone input.
We found that the latency issue across the MQTT bridge significantly hampers the gaming experience but we expect the latency to be greatly reduced if two players are relatively closer (in the same country/city). Another major issue is that currently, the remote player doesn’t get a complete visual of the board – this can either be later added or the game can be made into a collaboration game such that the local player is instructing the remote player’s movement.
With our device, we successfully implemented a working maze game capable of porting remote input to it. But, there are two main aspects in which the device can be extended.
Additional Game Modes:
In order to really take the novelty of controlling a device in your friend’s house to the next level, we must make the game interactive for both parties. By adding interactive game modes, each player could take control of an axis, and work together (or against each other) to finish the marble maze. This would encourage meaningful gameplay, and engage both users to return to the remote game mode.
Networking Improvements:
A code-level improvement that our Arduino could benefit from involves an extension of the REMOTE_INPUT mode. Currently, there is no buffering of latent remote input, so the device processes the signals as soon as they are available. As a result, the local user will see an undefined jitter in the maze whenever a block of inputs is processed. Implementing a buffer to store these inputs and process them at a set rate will not remove the latency, but it will reflect a delayed yet accurate playback of the messages that were transmitted. This will make it obvious to the user that network latencies are present, as opposed to the undefined movements.
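A minimal sketch of what that buffering could look like is below. This is not implemented in our current code; the message format, queue size, and playback rate are placeholder assumptions.

#include <Servo.h>

const int BUF_SIZE = 32;
const unsigned long PLAYBACK_MS = 250;   // placeholder: play back 4 commands per second

struct TiltCmd {
  int outer;
  int inner;
};

TiltCmd buf[BUF_SIZE];
int head = 0;
int tail = 0;

Servo outerServo;
Servo innerServo;
unsigned long lastPlayback = 0;

// Queue an incoming command; if the buffer is full, drop the newest one.
void enqueue(int outer, int inner) {
  int next = (head + 1) % BUF_SIZE;
  if (next == tail) return;
  buf[head].outer = outer;
  buf[head].inner = inner;
  head = next;
}

void setup() {
  Serial.begin(115200);
  outerServo.attach(5);
  innerServo.attach(4);
}

void loop() {
  // Parse incoming "outer,inner\n" messages (format is a placeholder).
  if (Serial.available()) {
    int outer = Serial.parseInt();
    int inner = Serial.parseInt();
    if (Serial.read() == '\n') {
      enqueue(outer, inner);
    }
  }

  // Drain one buffered command per playback interval, so a burst of latent
  // messages is replayed as a delayed but smooth motion.
  if (tail != head && millis() - lastPlayback >= PLAYBACK_MS) {
    outerServo.write(buf[tail].outer);
    innerServo.write(buf[tail].inner);
    tail = (tail + 1) % BUF_SIZE;
    lastPlayback = millis();
  }
}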
Max: Arduino code, two sonar triangulation scheme, and the iOS client.
Yu: CAD design and laser cutting, prototype assembly, and local testing.
In this project, we created wireframe masks mimicking plague masks from the 1700s. They contain fake flower petals representing the safety that the mask gives the wearer, because historically dried flowers and herbs were placed in those beaks since doctors thought that the good smell would prevent them from getting sick. By creating an artistic expression of disease masks, our goal is to convey the importance of wearing masks during pandemics throughout history. In general, this project is meant to convey how although wearing masks has become a political issue, underlying this debate is the greater issue of everyone’s safety.
Aside from the obvious objective of creating the masks, we originally wanted both of them to open at the push of a button, before we defined our concept (to be portrayed in a video) more clearly. However, after deciding to make one person the one putting others in danger and the other person essentially a victim, our objectives changed drastically so that the two masks would be very different from each other. Eventually, we decided that Brianna’s mask should be covered in black material and be the only one that opens, and that Anya’s should be an open wireframe with the beak “closed”. The over-arching objective of this project was to create a video with these creations that clearly portrays a concept without requiring much context outside of the video.
As discussed in the objectives, the implementation of these devices and our design choices centered around portraying a very specific and clear concept in our video. In the video, as Brianna opens her mask to fight against the mandate for people to wear masks, Matilda (Anya’s housemate) and Brianna both lose the flower petals from their masks, signifying that even though Matilda kept her mask closed, both of them are in danger when Brianna doesn’t keep her mask on. After the argument that mandating masks is illegal is explained, Matilda responds with the argument as to why it is not illegal (through voiceover as well).
I think our video successfully conveys our concept because it shows how one person’s safety is affected by another person’s decision not to wear a mask during a pandemic. I think this concept is also introduced in a unique way, since it uses plague masks from the 1700s to get the message across. The visual differences in the masks also tie into the concept. Brianna wears a mask covered in black fabric, which she opens, putting herself and Matilda in danger; this suits her “character”, who fights for the choice to wear a mask or not instead of protecting herself and others. However, the fact that her mask is covered in fabric also shows that even though she opens her mask, she is still more protected than Matilda (since other people wearing masks makes her safer). Matilda, meanwhile, wears a very bare wireframe mask that is closed but clearly leaves a lot of open space. The fact that her mask is not covered helps convey how even though she is responsibly wearing a mask, she is left very vulnerable and in danger when Brianna opens hers.

One “failure” or weak point of our project, in our opinion, is that the automation aspect of our project became more and more irrelevant as we defined our concept. We still appreciate it in the project because it emphasizes how far we have come (in technology and in our understanding of pandemics) and yet we still don’t all wear masks, which even doctors in the 1700s knew to do (whether or not the right science backed it up). However, this part of the project and its purpose isn’t especially emphasized or clear in the video.
As mentioned in the outcomes, the presence of automation seems a little random in the video, so for a future iteration we would try to make it clearer how the mask being able to open at the press of a button contributes to the concept.
We each created our own mask while consulting each other often to decide on design choices and how they would contribute to the final concept. We also each shot our own video of our mask, which Anya put together using iMovie.
We started with only a vague purpose of creating automated plague masks from the 1700s, as seen in the initial drawings below.
We then started by creating a simple wireframe:
Then we hooked up a servo motor to open the bottom jaw so that it remains open as long as the button is pressed (on both of our first versions).
This is where our masks and our objectives for the next steps started to diverge. Brianna covered hers in black fabric and added a top hat to create a clearer character, and Anya remade hers into a sleeker wireframe design that would show the flower petals escaping even while the mask stayed closed. The final products can be seen in the video above.
The wire framework of the mask was originally inspired by various masks made by Eli Secrest. More of his work can be found in the following link: https://www.instagram.com/aerarius_metalworks/?hl=en
Code for getting the servo motor to open the jaw on Brianna’s mask:
#include <Servo.h>

const int SWITCH_PIN = 2;
const int SERVO_PIN = 9;

Servo svo;

void setup() {
  Serial.begin(115200);
  pinMode(SWITCH_PIN, INPUT);
  svo.attach(SERVO_PIN);
}

void loop() {
  // Hold the jaw open while the button is pressed; close it otherwise.
  if (digitalRead(SWITCH_PIN) == HIGH) {
    svo.write(180);
  } else if (digitalRead(SWITCH_PIN) == LOW) {
    svo.write(0);
  }
}
The goal of Elevate is to create a soothing and calming experience for the user, while also giving the user an indirect sense of companionship. Since the beginning of the pandemic, many individuals have remained quarantined from social interaction. Elevate aims to address the lack of interaction by providing a soft, subtle presence in everyday life. Whether it is your morning coffee, or an afternoon book, Elevate acknowledges your actions and highlights your daily routines.
In order to achieve this, our table includes LED lights that create a backlight for the panel that is lifted. This adds to the overall aesthetic of Elevate and further emphasizes the objects placed on the panel. Additionally, we control the stepper motor to move slowly and calmly for the user experience, but not so slowly that it becomes frustrating. The panel only shifts upwards slightly, around the height of one panel, because it is meant to emphasize and give importance to your objects without creating an inconvenient level change. We also decided to create multiple panels so that the table could highlight one specific area; if the entire table lifted, there wouldn’t really be an emphasis on a specific object.
A panel consists of an LED, a force-sensitive resistor (FSR), and a stepper motor. The code includes a state machine for each panel. Each panel is either loaded or unloaded, based on the FSR reading. A panel is controlled by two Arduinos: the Master Controller, which monitors the state using the FSR, and the Stepper Driver Arduino, which controls the stepper motors.
Each loop, the Master Arduino runs each panel’s state machine, which maintains the state of the panel, and then checks whether the panel is loaded or unloaded. If the panel is loaded and was previously unloaded, the Master Controller sends the state change to the Stepper Driver Arduino. The Driver Arduino loop iterates through a list of the steps remaining until each panel reaches its target state. If a panel has steps remaining, each step is taken incrementally, allowing for simultaneous stepper motor movements. When data is received from the Master Controller, the steps array is updated and the motors are moved accordingly.
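The exact link between the two boards is in the full code in the appendix; as a rough illustration of the receiving side, a driver sketch might look like the following. The I2C transport, device address, and step count here are assumptions for the example, not necessarily what we used.

#include <Wire.h>

const int STEPS_PER_PANEL = 200;   // placeholder travel between the two states
int steps[4] = {0, 0, 0, 0};       // steps remaining per panel, as in the driver code below

void step_motors();                // defined in the driver code shown further below

// Called when the Master Controller reports a panel's new state.
void onStateChange(int howMany) {
  if (howMany < 2) return;
  int panel  = Wire.read();        // panel index 0..3
  int loaded = Wire.read();        // 1 = loaded, 0 = unloaded
  steps[panel] = loaded ? STEPS_PER_PANEL : -STEPS_PER_PANEL;
}

void setup() {
  Wire.begin(8);                   // placeholder I2C address
  Wire.onReceive(onStateChange);
}

void loop() {
  step_motors();                   // drains the steps array one step at a time
}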
Below is an example of the State Machine for Panel A. When state_a is equal to 5, this means that the FSR was triggered and the panel is in the loaded state. When state_a is equal to 4, this means the panel is in the unloaded state and the state machine resets.
// STATE MACHINES
// Manages the state of the Motor/FSR/LED combo and updates according to
// FSR reading
void SM_a() {
  state_prev_a = state_a;
  switch (state_a) {
    case 0: // RESET
      state_a = 1;
      break;
    case 1: // START
      val_a = analogRead(aA);
      if (val_a >= fsr_a_threshhold) { state_a = 2; }
      break;
    case 2: // SET TIME
      t0_a = millis();
      state_a = 3;
      break;
    case 3: // WAIT AND CHECK
      val_a = analogRead(aA);
      t_a = millis();
      if (val_a < fsr_a_threshhold) { state_a = 0; }
      if (t_a - t0_a > debounceDelay) { state_a = 5; }
      break;
    case 4: // UNLOADED
      state_a = 0;
      break;
    case 5: // LOADED
      val_a = analogRead(aA);
      if (val_a < fsr_a_threshhold) { state_a = 4; }
      break;
  }
}
The Stepper Driver loop calls the step_motors function, which checks if motors need to be stepped, and steps them. This function steps each motor incrementally, to allow for simultaneous stepper motor movement.
// Iterates through the steps array and steps each motor the required number of steps
// This allows for simultaneous movement of stepper motors
void step_motors() {
  for (int i = 0; i < 4; i++) {
    // Rotate clockwise to loaded state
    if (steps[i] > 0) {
      // Case for A motor because A pins are not sequential on CNC Shield
      if (i == 0) {
        step(true, aDir, aStep);
        steps[i] -= 1;
      } else {
        step(true, (4 + i), (1 + i));
        steps[i] -= 1;
      }
    // Rotate counterclockwise to unloaded state
    } else if (steps[i] < 0) {
      if (i == 0) {
        step(false, aDir, aStep);
        steps[i] += 1;
      } else {
        step(false, (4 + i), (1 + i));
        steps[i] += 1;
      }
    }
  }
}
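Because only step_motors() is reproduced here, it is worth noting what the step() helper it calls does; a typical version for this kind of driver looks roughly like the sketch below, which is not necessarily our exact implementation.

// The step() helper is not shown in this excerpt. On a CNC shield with
// A4988-style drivers, a typical version sets the direction pin and then
// pulses the step pin once (pins are configured as OUTPUTs in setup; the
// pulse timing below is a placeholder).
void step(bool clockwise, int dirPin, int stepPin) {
  digitalWrite(dirPin, clockwise ? HIGH : LOW);
  digitalWrite(stepPin, HIGH);
  delayMicroseconds(800);
  digitalWrite(stepPin, LOW);
  delayMicroseconds(800);
}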
The full code is provided in the appendix.
While we were able to place the circuit portion into the table, we did come across multiple barriers. The first struggle was getting the panels to align correctly, since the FSRs under the panels created uneven leveling. Additionally, the FSRs we were using were the smaller circular ones; if we had more time and resources to move forward, we would probably shift to using the larger rectangular ones. Using the smaller ones led to the table not always responding when an item was placed on it, because the force did not reach the sensor or was not strong enough. For this reason, we had to thoughtfully place the mug near or on the sensors to get a response. We also had to sand down the panels so that they would not rub against each other and get stuck while moving, which ultimately led to gaps between panels. In a living room environment where liquids and food would be placed on the table, there are many opportunities for spills to reach the circuitry through the gaps and destroy it.
Despite the physical problems, we do think our design was successful in creating the effect we desired. The panel slowly elevates with a backlight highlighting the mug we were running tests with. Elevate was successful in interacting with us and informing us that we had placed an item on it, and in encouraging us not to simply ignore that item but to treat it with more consciousness. As we were dealing with a mug full of liquid, we were definitely more aware of what we were doing with the mug and where it was being placed. This was mostly because we did not want to cause any spillage and ruin the device, but overall it was successful in giving the mug more importance.
Moving forward, we would want to further improve our physical table and create a more level top to improve the overall design as well as the function. Additionally, we would work on fixing the code so that it is able to lift an object that spans two panels, because our current iteration is unable to do so. We would also want to explore ways to make it waterproof and add some sort of collection system for crumbs. Bowls with food and cups filled with liquid will be placed on this table, and currently anything spilled will just run into the wires and circuit area. Moving forward, we would want to prevent this to decrease stress for the user, as it is supposed to be a calming piece in your living space.
Through this process and project, we have learned that 3D modeling is extremely idealistic, especially with moving components. In Rhino it is very easy to have complete control over each component, but in reality the components bring their own hurdles that are not highlighted in a digital model.
Amal focused on the physical aspect of the table, while Chris worked with the electrical aspect.
Project Goals
Our goals throughout the project were to convey personality through eye movement, which enhances the telepresence interaction, as well as to elicit an amused yet shocked reaction. The MVP of our project was to create a working prototype that would move Mona Lisa’s eyes. This meant having facial, eye, and pupil recognition using OpenCV; an Arduino that would receive messages over an MQTT server; an animatronic eye on a 2-axis gimbal to replicate eye movements; and all of it placed into a box with a Mona Lisa drawing on the front. This is how the eye was set up:
Initial Design
When I began writing the code for the pupil detection, there were a few choices to make in terms of the implementation. Since there is no ready-made model that detects pupils the way there is for faces and eyes, I had to be a little more creative in order to detect the pupil. I decided to use blob detection to locate the pupil, on the idea that the largest blob within the eye frame would be the pupil. In OpenCV, in order to do blob detection you first need to make the image grayscale. At first, to determine where the blob was, I applied a threshold on the grayscale image to limit which pixels show up. Ideally, this would keep the pupil while removing unwanted pixels. However, this had several issues, because the logic relied heavily on finding the perfect threshold value, and lighting played a huge role in determining that value. Thus, I went with a different design that finds the contours of the eye frame and then picks the pupil based on the sizes of the blobs that appear. This resulted in a more well-defined solution that limited external factors.

Another important decision was where in the code the transformation of the data would occur. There were two possible places this could happen. The first option was on the Python end, meaning that before sending the message to the Arduino, the Python code would convert the data into the appropriate angles. The second option was to convert the data into angles on the Arduino end, upon receiving it. I chose to convert the angles on the Python end, because it made more sense to do the heavy lifting in Python and simply have the Arduino read the data and execute the servo motion.

Another important design decision was the use of the MQTT bridge code provided through the class resources. When I first began implementing the MQTT server connectivity, I was planning on writing my own program to send messages. However, reusing the existing bridge program saved me many hours that I would have spent figuring that out.
When we thought about creating the eye structure, there were several approaches online to choose from. We decided to use a 2-axis gimbal to portray eye movement. We didn’t want it to be so complex that we couldn’t achieve it, but we also wanted to create a somewhat realistic eye movement experience. Initially, we decided to use clay to model the eyeball. The clay ended up being too weak to support the wire movement, and it also absorbed the glue that was used to attach the L-brackets. Therefore, we switched to plastic eyeballs, as they provided us with a more realistic eye feature for the artwork and didn’t cause any of the problems that the clay eyeballs did. Due to uncertainty regarding using TechSpark, we were unable to change some of the cardboard features of the box to plywood. We also couldn’t add an extra shelf like we intended, so we decided to place the Arduino and breadboard on books. The intended structure can be seen in the following images:
Another feature we had issues with was the paperclip linkages that connected the servo horn to the animatronic eyes. They were not flexible enough, so we decided to use some solder wire instead. The CAD model that we created was also unable to move, due to the rigidity of the linkage connecting the servo to the eye and the fixed position of the servos. We couldn’t represent the solder wire in the CAD model, but the physical changes we made were successful: the eye structure was able to move according to the messages it received over the MQTT bridge.
This can be seen in the video below:
Future Plans
The next iteration of this project would be to create a replica of the device. The initial design included a pair of systems intended to be controlled by each person. Having two devices would allow for a richer connection between the users, as they can both act as the controller (person who controls device) and receiver (person who sees output of eye movements) simultaneously. Another idea we had to convey personality was by adding a blinking feature. This would provide the device with a more human-like action.
Task Breakdown
Max wrote the code for the pupil detection, integrated the mqtt server to send messages to arduino, and wrote arduino code to move servos. Marione created the CAD designs, animatronic eye, and the mona lisa box. These can be seen below:
Code Implementation Details
If you want to look more in detail into the code it can be found here: https://github.com/mdunaevs/Eye_Tracking
Or downloaded as a zip:
In order to recognize a pupil, we first need to detect a face, and within that face identify the person’s eyes. The same principle can be applied in code. OpenCV allows for easy extraction of a person’s face and eyes using cascade classifiers (a machine learning approach to detecting objects within an image). In my code, I made use of these classifiers to detect a person’s face and eyes. In the functions shown below, I convert the image to grayscale and then feed it to the classifiers. The classifiers return a set of bounding rectangles for the detected objects, where each tuple contains the object’s position and size (the third value is the width). Using this size information, I can determine which detection is the largest, which is the object I am looking for.
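Our actual detection functions are in the Python code linked above; as a rough illustration of the same calls, here is a sketch using the equivalent OpenCV C++ API, with the classifier file names and parameters as placeholders:

#include <opencv2/opencv.hpp>
#include <vector>

// Return the largest detection (by area) from a cascade classifier, mirroring
// the "pick the biggest face / biggest eye" logic described above.
static cv::Rect largestDetection(cv::CascadeClassifier& cascade, const cv::Mat& gray) {
    std::vector<cv::Rect> hits;
    cascade.detectMultiScale(gray, hits, 1.3, 5);   // scale factor and neighbors are placeholders
    cv::Rect best;
    for (const auto& r : hits) {
        if (r.area() > best.area()) best = r;
    }
    return best;                                    // zero-sized if nothing was found
}

int main() {
    cv::CascadeClassifier faceCascade("haarcascade_frontalface_default.xml");
    cv::CascadeClassifier eyeCascade("haarcascade_eye.xml");
    cv::VideoCapture cam(0);
    cv::Mat frame, gray;

    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::Rect face = largestDetection(faceCascade, gray);
        if (face.area() == 0) continue;
        cv::Rect eye = largestDetection(eyeCascade, gray(face));
        // ...pupil detection continues on the eye region, as described below...
    }
    return 0;
}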
Now that we had the eyes isolated, we needed to get the pupils. The issue was that, unlike for the face and eyes, there currently does not exist a cascade classifier for a pupil. This meant I had to get a little more creative with how to extract the pupil. As mentioned before, the concept behind the classifiers is to detect regions / objects within the image. Following a similar approach, I decided to attempt to detect large objects within the eye frame.
My solution to this was to use blobs. A blob is a group of connected pixels in an image that share some common grayscale value. By finding all the blobs within the eye frame, I can use this information to find the appropriate blob corresponding to the pupil.
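Again, the real implementation is in the linked Python code; the sketch below illustrates the idea with the OpenCV C++ API, picking the largest dark region in the eye frame as the pupil. The blur and threshold values are placeholders, and picking the blob by size is what makes the exact threshold less critical.

#include <opencv2/opencv.hpp>
#include <vector>

// Find the pupil center inside a grayscale eye region by taking the centroid
// of the largest dark blob.
static cv::Point pupilCenter(const cv::Mat& eyeGray) {
    cv::Mat blurred, mask;
    cv::GaussianBlur(eyeGray, blurred, cv::Size(7, 7), 0);
    cv::threshold(blurred, mask, 40, 255, cv::THRESH_BINARY_INV);   // dark pixels -> white

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    double bestArea = 0;
    cv::Point best(-1, -1);                          // (-1, -1) means no blob was found
    for (const auto& c : contours) {
        double area = cv::contourArea(c);
        if (area > bestArea) {
            cv::Moments m = cv::moments(c);
            if (m.m00 > 0) {
                best = cv::Point(int(m.m10 / m.m00), int(m.m01 / m.m00));
                bestArea = area;
            }
        }
    }
    return best;
}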
Now that we were able to locate the pupil of the eye, we needed to figure out how to send data about the pupil to the Arduino. To begin with, we defined what our data would represent: the angle to move the servo to. The way we calculated the angle was by first computing the distance between the expected location of the pupil (the center of the eye frame) and the actual location of the pupil (its position within the frame).
Once we had the distance, we applied a conversion to change the distance into an angle. Note that the constants were found by experimenting with the data and trying to get a reasonable angle change.
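In other words, the conversion is essentially a linear scaling of the pixel offset, clamped to the servo's range. The numbers below are purely illustrative placeholders, not the constants we settled on:

#include <algorithm>

// Hypothetical constants for illustration only; the real values were tuned by hand.
const double DEG_PER_PIXEL = 0.4;   // degrees of servo travel per pixel of offset
const int CENTER_ANGLE = 90;        // servo angle when the pupil is centered

int pixelsToServoAngle(int offsetPixels) {
    int angle = CENTER_ANGLE + int(DEG_PER_PIXEL * offsetPixels);
    return std::max(0, std::min(180, angle));   // clamp to the servo's range
}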
Now that we had the data, it needed to be sent to the Arduino. The data was sent over an MQTT server. Using the same bridge program we had in class, we were able to send the data to the Arduino.
Then, when the data was received on the Arduino end, we were able to move the servos.
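The receiving side is simple in principle: the bridge forwards each message to the Arduino over serial, and the Arduino writes the two angles to the eye servos. The sketch below is a minimal illustration; the message format and pin numbers are placeholders, and the actual code is in the repository linked above.

#include <Servo.h>

Servo xServo;   // left/right axis
Servo yServo;   // up/down axis

void setup() {
  Serial.begin(115200);
  xServo.attach(9);    // placeholder pins
  yServo.attach(10);
  xServo.write(90);    // start with the eyes centered
  yServo.write(90);
}

void loop() {
  // Each forwarded message carries the two precomputed angles ("x,y\n" here).
  if (Serial.available()) {
    int xAngle = Serial.parseInt();
    int yAngle = Serial.parseInt();
    if (Serial.read() == '\n') {
      xServo.write(constrain(xAngle, 0, 180));
      yServo.write(constrain(yAngle, 0, 180));
    }
  }
}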
At this point the code was working properly; however, in order to get a richer eye movement experience, we decided to incorporate saccade movements. A saccade refers to the sudden, quick, simultaneous movement of both eyes in the same direction.
The trick to this was determining what counted as a saccade motion. Since saccades are typically quick and sudden jolts of both eyes, mapping every saccade motion to a servo movement would result in the same issue as before, where too much data is sent to the Arduino in too little time. Thus, I created a filtering system to only accept the saccades in which the ending positions of the pupils, relative to their own eye frames, were roughly the same. Based on tweaking against our data, a value of around 5 pixels was a reasonable threshold for deciding whether the two pupils’ offsets were close to one another.
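The check itself is just a comparison of the two pupils' offsets against that tolerance; a minimal sketch of the logic (written in C++ here, although our code is Python) could look like this:

#include <cstdlib>

// Accept a detected jump only if both pupils ended up in roughly the same
// position relative to their own eye frames; ~5 pixels was the tolerance we used.
const int TOLERANCE_PX = 5;

bool isValidSaccade(int leftDx, int leftDy, int rightDx, int rightDy) {
    return std::abs(leftDx - rightDx) <= TOLERANCE_PX &&
           std::abs(leftDy - rightDy) <= TOLERANCE_PX;
}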
CITATIONS
http://www.pyroelectro.com/tutorials/animatronic_eyes/hardware_servos.html
https://medium.com/@stepanfilonov/tracking-your-eyes-with-python-3952e66194a6
https://gist.github.com/edfungus/67c14af0d5afaae5b18c
Supporting Material
Electronic Schematics:
CAD Files:
For our final iteration of the flower mask, we settled on a concrete concept, as it had kept changing throughout the creation process. We also created a video to clearly demonstrate this concept. For context, we created wireframe masks mimicking plague masks from the 1700s. They contain fake flower petals representing the safety that the mask gives the wearer, because historically dried flowers and herbs were placed in those beaks since doctors thought that the good smell would prevent them from getting sick. As Brianna opens her mask to fight against the mandate for people to wear masks, Matilda (Anya’s housemate) and Brianna both lose the flower petals from their masks, signifying that even though Matilda kept her mask closed, both of them lose their safety when Brianna doesn’t keep her mask on. After the argument that mandating masks is illegal is explained, Matilda responds with the argument as to why it is not illegal (through voiceover as well). In general, this project is meant to convey how although wearing masks has become a political issue, underlying this debate is the greater issue of everyone’s safety.
We decided that instead of having one switch activate one rotating wall, one switch would move all the rotating walls 90 degrees. The user then has to move through a constantly shifting maze with more movement than before. With this in mind, the board was entirely redesigned into the following:
The entire gimbal system with the board can be seen below:
In terms of the code, we added saccade motion. A saccade refers to the sudden, quick, simultaneous movement of both eyes in the same direction. In the code this was implemented as shown in the image below.
The trick to this was determining what counted as a saccade motion. Since saccades are typically quick and sudden jolts of both eyes, mapping every saccade motion to a servo movement would result in the same issue as before, where too much data is sent to the Arduino in too little time. Thus, I created a filtering system to only accept the saccades in which the ending positions of the pupils, relative to their own eye frames, were roughly the same. Based on tweaking against our data, a value of around 5 pixels was a reasonable threshold for deciding whether the two pupils’ offsets were close to one another.
Here is the link to the github: https://github.com/mdunaevs/Eye_Tracking
For the mechanical components:
Since we had issues with the eye materials last time, we replaced the clay eyes with plastic eyes and replaced the paperclip linkages with solder wire. This allowed the mechanism to move more easily.