gregariosa-TeachableMachine

I created a “rock, paper, scissors” model using this tool. While this was a lighthearted exercise, it revealed how much human judgment goes into curating training data for a successful ML model. When the idea first emerged, I assumed the training images would look like stock-photo representations of ‘rock, paper, scissors’, where the silhouette of each signal is recognizable to the human eye. However, while training the model, I found myself making the natural gestures I would use in an actual game, many of which were not visually recognizable as the canonical signs. This made the model noticeably more accurate when I interacted with it on camera, a result that would likely not have been achievable if the model had been trained only on stock photos of rock, paper, and scissors.
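For reference, below is a minimal sketch of how a model like this can be run against a live webcam feed in the browser, assuming the model was exported from Teachable Machine as a TensorFlow.js model and using the @teachablemachine/image library. The model URL, the 200×200 webcam size, and the logging are placeholders for illustration, not the actual export from this project.

```ts
import * as tmImage from '@teachablemachine/image';

// Hypothetical URL of a Teachable Machine export; substitute your own model's URL.
const MODEL_URL = 'https://teachablemachine.withgoogle.com/models/XXXX/';

async function run(): Promise<void> {
  // Load the trained model and its metadata (class labels: rock, paper, scissors).
  const model = await tmImage.load(
    MODEL_URL + 'model.json',
    MODEL_URL + 'metadata.json'
  );

  // Set up a 200x200 webcam feed, mirrored so gestures match what the user sees.
  const webcam = new tmImage.Webcam(200, 200, true);
  await webcam.setup(); // request camera access
  await webcam.play();
  document.body.appendChild(webcam.canvas);

  // Classify the current frame on every animation tick.
  const loop = async (): Promise<void> => {
    webcam.update(); // grab the latest frame
    const predictions = await model.predict(webcam.canvas);

    // Pick the class with the highest probability and log it.
    const best = predictions.reduce((a, b) =>
      a.probability > b.probability ? a : b
    );
    console.log(`${best.className}: ${best.probability.toFixed(2)}`);

    window.requestAnimationFrame(loop);
  };
  window.requestAnimationFrame(loop);
}

run();
```

Because the model is queried frame by frame like this, it sees the same loose, mid-motion hand shapes it was trained on, which is part of why training with natural gestures rather than posed stock-photo images paid off.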