This is a project linking the Soli swiping motion to unleashing the plagues on an unsuspecting group of believers. The currently enabled plagues: frogs, hail, and death of the firstborn.

I learned how to use physics and collision detection to implement game dynamics that feel like the real world.
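A minimal sketch of the kind of physics and collision logic involved (all names here are illustrative, not from the actual project code; plain JS, no p5):

```javascript
// Gravity + circle-circle collision, the core of the "real world" feel.
const GRAVITY = 0.4;

function stepFrog(frog) {
  // Euler integration: velocity accumulates gravity, position accumulates velocity.
  frog.vy += GRAVITY;
  frog.x += frog.vx;
  frog.y += frog.vy;
  return frog;
}

function circlesCollide(a, b) {
  // Two circles overlap when the distance between centers is
  // less than the sum of their radii.
  return Math.hypot(a.x - b.x, a.y - b.y) < a.r + b.r;
}
```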


(the citations for the sounds are at the top of the code)

For some reason, I can’t export my screen recording off of my Pixel; it keeps failing. So here’s a video made with the desktop version of the app:

mokka – checkin

Whack-a-Mole: Might be pretty self-explanatory, but I wanted the tap movement to emulate this idea of “whacking” the mole. Or perhaps I’ll keep the whack-a-mole concept but turn it into a metaphor for something else, like whacking a cockroach or other bug.


  • Wanting a Title/Start Page
  • Mole holes don’t center or resize along with the canvas
  • How to incorporate images while still being able to resize them (I want to make my own pixel characters as moles)
  • Overall, visually stunted.

gregariosa – checkin


For this project, I wanted to create an alarm that reacts to the range of conscious states you have in the morning.

First, the unconscious: you wake up in the morning, with a phone spewing information. Nothing registers in your head because you’re still trying to wake up. Hence, the phone spews out meaningless information as well.

Second, the conscious: you grab your phone for some new information. Things start to make sense, as you expose yourself to the news in the world. Hence, the phone also reacts, by reciting the most recent news of the day.

Lastly, the dismissal: you go about your day. Hence, when you leave the room, the phone dismisses itself as well.

(Made with the New York Times API, RiTa.js, and p5.speech.js. Still need to create better transitions and visual interest…)
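The three states above could map to output roughly like this (a sketch: in the real project the headlines come from the NYT API and the speech goes through p5.speech.js; both are stubbed out here with stand-ins):

```javascript
// State -> spoken output for the alarm (illustrative sketch).
// `headlines` stands in for NYT API results; the returned string would
// be handed to p5.speech for speaking.
function alarmOutput(state, headlines) {
  if (state === "unconscious") {
    // Scramble a headline into word salad; a simple stand-in for
    // "meaningless information".
    return headlines[0].split(" ").reverse().join(" ");
  }
  if (state === "conscious") {
    return headlines[0]; // recite the most recent headline
  }
  return null; // "dismissal": user left the room, say nothing
}
```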

Toad2 – soli-checkin

Concept: Allow the user to experience the joy of turning over rocks and finding beetles in a garden and watching them scuttle to a nearby shadow (the user’s hand or back under the rock they came from).

  • Reach In – beetles scuttle to the user’s hand-shadow location
  • Reach In Too Fast – beetles scatter off screen (they think they’re being crushed)
  • No Presence – beetles wander idly across the screen
  • Swipe – roll the rock according to the swipe direction
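The gesture-to-behavior mapping above can be sketched as one small state function (thresholds and names are illustrative, not the project’s actual values):

```javascript
// Map Soli presence/gesture readings to a beetle behavior.
const FAST_REACH = 0.8; // hand speed above this reads as "too fast"

function beetleBehavior(reading) {
  if (!reading.present) return "wander"; // no presence: idle wandering
  if (reading.gesture === "swipe") return "rollRock";
  if (reading.gesture === "reach") {
    return reading.speed > FAST_REACH
      ? "scatter"          // they think they're being crushed
      : "scuttleToShadow"; // hide under the hand's shadow
  }
  return "wander";
}
```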

In progress: turning the rock over, implementing a way to show multiple species of the same beetle, allowing the beetles to hide under the rock instead of under the user’s hand, and connecting it to Soli

Future: improving the beetles’ physical and movement details, adding more detail to the environment, adding a trail of mud where the rock used to be?





Almost Chicken Checkin

Video demo:

Good things:

  • all the features implemented seem to work!
  • the chicken looks bouncy
  • I have a chicken maker that will make any adjustments to the chicken fairly easy.

Bad things (todo):

  • I got the screen dimensions wrong (this could be fixed easily, though)
  • the chicken, without any features, looks very phallic (   . _.)
  • sometimes the springs get tangled up if you slap it too much (this should be an easy fix)
  • needs chicken sounds
  • there’s a weird thing where, while idling, it sways toward one side; something is wrong with how I’m using noise
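On the idle sway: if I had to guess, p5’s noise() returns values in [0, 1) with an average near 0.5, so using it directly as an offset biases the motion to one side. Centering it around zero removes the drift (sketch; `noiseValue` is whatever noise(t) returned):

```javascript
// p5's noise(t) lives in [0, 1) with mean ~0.5, so `x += noise(t) * amp`
// always pushes one way. Recenter around 0 before scaling:
function swayOffset(noiseValue, amplitude) {
  return (noiseValue - 0.5) * 2 * amplitude; // now in [-amplitude, amplitude)
}
```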


  • I haven’t implemented it on Soli yet because I want to get the features down
  • I feel like it’s not particularly interesting yet; it’s just a DNA strand you can manipulate
  • I like the colors


Dog Lullaby

On desktop, use the arrow keys. Otherwise, use Soli swipe gestures to change the three notes used to generate the melody and a Soli tap gesture to mute and unmute the lullaby.
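The gesture handling described above might look something like this (a sketch with illustrative names; the real sketch maps arrow keys or Soli swipes/taps onto these same actions, and the notes would feed the Tone.js melody generator):

```javascript
// Three generator notes, stored as indices into a scale; swipe shifts
// them up or down one step, tap toggles mute.
const SCALE = ["C4", "D4", "E4", "F4", "G4", "A4", "B4"];

function makeLullabyState() {
  return { notes: [0, 2, 4], muted: false };
}

function onSwipe(state, direction) {
  // Shift every note one scale step, wrapping at the octave.
  const step = direction === "right" ? 1 : -1;
  state.notes = state.notes.map(
    (i) => (i + step + SCALE.length) % SCALE.length
  );
  return state;
}

function onTap(state) {
  state.muted = !state.muted; // tap mutes/unmutes the lullaby
  return state;
}
```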

I’m using RiTa.js for lyric generation: a markov chain trained on a text file (lyrics.txt on Glitch) that I’ve filled with lullabies, love notes, children’s songs, bedtime prayers, and nursery rhymes. I hope to improve the rhyming, which currently just replaces the last word of a line with a random word that rhymes with the last word of the line before. This results in nonsense that isn’t thematically consistent with the markov-generated nonsense. Rhyme schemes more complex than AABB could draw attention away from this fact.
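The current rhyme pass amounts to something like this (a sketch; the real project uses RiTa’s rhyme lookup, for which a toy suffix match stands in here):

```javascript
// Toy stand-in for RiTa.rhymes(word): words sharing a 3-letter ending.
function rhymesFor(word, vocabulary) {
  const tail = word.slice(-3);
  return vocabulary.filter((w) => w !== word && w.endsWith(tail));
}

// AABB-ish pass: replace the last word of every second line with a word
// that rhymes with the last word of the line before it.
function forceCoupletRhymes(lines, vocabulary) {
  return lines.map((line, i) => {
    if (i % 2 === 0) return line; // first line of each couplet sets the rhyme
    const words = line.split(" ");
    const target = lines[i - 1].split(" ").pop();
    const candidates = rhymesFor(target, vocabulary);
    if (candidates.length > 0) words[words.length - 1] = candidates[0];
    return words.join(" ");
  });
}
```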

I’m using Tone.js for the song aspects. I was initially using p5.sound, but the quality was poor in general and especially poor on the device I’m designing for, so last night I switched everything over to Tone.js. Due to that late change, I still have to refine the oscillators/synthesizers I’m using and explore other Tone.js functionality. I’m very pleased with Tone.js’s quality compared to p5.sound and wish I had used it from the beginning.

I also have to refine the animation, as it was based around the previous drum pattern, which ran on p5.sound. I specifically mean the dog (Joe <3); there are plenty of opportunities to animate him beyond what I currently have. I might also ease the stars in and out beyond the simple fade introduced by having a background set to a low alpha value, but this would require creating and storing the stars in an array drawn in draw() instead of in playBass(), where they currently are. This is a relatively straightforward change; the most complicated aspect is disposing of the elements in the array that no longer need to be drawn. That will be based around a timer, which will hopefully be simpler to work with using Tone.js, possibly by reading ticks.value instead of counting beats manually, as I am doing now.
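Storing each star with the tick it was born on makes the disposal step a one-line filter (sketch; `ticks` stands in for the Tone.js transport tick count, and the lifetime value is arbitrary):

```javascript
// Stars remembered with their birth tick; prune anything older than
// LIFETIME ticks instead of counting beats by hand.
const LIFETIME = 960; // illustrative lifetime, in ticks

function spawnStar(stars, x, y, ticks) {
  stars.push({ x, y, born: ticks });
  return stars;
}

function pruneStars(stars, ticks) {
  // Keep only stars still within their lifetime; draw() renders these.
  return stars.filter((s) => ticks - s.born <= LIFETIME);
}
```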

I also have to implement functions for the beginning and end of the song. It currently runs for a random number of lyric lines between 15 and 35, and then the program ends with a scope error.




demo version

DAGWOOD is a sandwich generator that creates random sandwiches using the two Soli interaction methods, swipe and tap. Users tap to add ingredients to the sandwich and swipe to remove the last ingredient if they don’t want it.
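The core interaction is a stack; a sketch with illustrative ingredient names (in the real sketch the pick would come from p5’s random()):

```javascript
// Tap pushes a random ingredient; swipe pops the most recent one.
const INGREDIENTS = ["lettuce", "tomato", "ham", "cheese", "pickle"];

function tap(sandwich, pickIndex) {
  // pickIndex stands in for a random choice, e.g. floor(random(n)).
  sandwich.push(INGREDIENTS[pickIndex % INGREDIENTS.length]);
  return sandwich;
}

function swipe(sandwich) {
  sandwich.pop(); // remove the last ingredient added
  return sandwich;
}
```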

Currently working on connecting p5 to Twitter for auto posting screenshots to the Twitter bot.


Soli Landscapes

For this project, I am using the Soli sensor data as a way to traverse a generative landscape and look for specific objects in it. The text quoted on the screen is a statement referencing a specific thing to look for in the environment. In the video below, the excerpt references eyes, so the player needs to traverse the landscape to find them.

While the main elements of the game are complete, the project is not finished; I still have a few things left to implement:

  1. I am still working on making the objects move smoothly using linear interpolation and parallax. I think this would greatly improve the visual presentation of the piece.
  2. I am working on using recursive tree structures and animations to make the environment more dynamic. I also want to add more variety to the objects and characters a person looks for in the landscapes, and to the text that references them. I also want more movement and responsive qualities in the background, such as birds appearing when a specific gesture is made. Currently, an example of this type of responsiveness is how the moon-eye moves: it closes and opens depending on whether human presence is detected.
  3. Varied texts and possibly generative poems that come from the objects being looked for in the landscape. The current excerpt comes from a piece of writing my friend and I worked on, but I would like to make it generative.
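The linear interpolation and parallax mentioned in item 1 boil down to two small functions (a sketch; p5 provides lerp() built in, and the depth convention here is illustrative):

```javascript
// Each frame, ease a value toward its target by a fixed fraction.
function lerp(current, target, amount) {
  return current + (target - current) * amount;
}

// Parallax: scale horizontal camera movement by layer depth,
// where depth 0 = far background (barely moves), 1 = foreground.
function parallaxX(cameraX, depth) {
  return -cameraX * depth;
}
```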

Here is an example of how the environments look when swiping:


Here is another landscape:


This demo does not work with the microphone yet (I’m working on a jank workaround). You can check out a web version here. Open the console to see the possible verbal commands. Use the arrow keys in place of swipe events.


This demo can be viewed in a browser here. Use the arrow keys in place of swipe events. The collision detection is pretty jank ¯\_(ツ)_/¯. The main issue is that I have to get the shadows & lighting fixed.