This project uses Soli swipe events to navigate a file path and view its breadcrumbs. I initially wanted to pursue this idea using d3.js, but I was unable to configure it correctly, so this was made with p5. The data comes from d3.js's examples (flare.json). Up and down swipes change the current branch to a sibling branch, whereas left and right swipes change the branch to a parent or child branch, respectively. I also wanted to connect this file path navigation to p5.speech() to allow users to change the entries in the selected field. However, after my struggles to get p5.speech() working for another Soli project, I decided to leave this project as it is.
Project => ( app | code )
*use arrow keys in place of swipe events
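A minimal p5.js sketch of the navigation logic described above, assuming a small flare.json-style hierarchy ({ name, children }); the data here is a placeholder, and arrow keys stand in for the Soli swipes as noted.

```javascript
// Placeholder flare-style tree; the real project loads flare.json
let tree = { name: "flare", children: [
  { name: "analytics", children: [{ name: "cluster" }, { name: "graph" }] },
  { name: "animate", children: [{ name: "Easing" }] }
]};
let path = [tree];      // breadcrumb trail from root to current node
let siblingIndex = 0;   // position of current node among its siblings

function setup() { createCanvas(400, 100); textSize(16); }

function draw() {
  background(240);
  // Draw the breadcrumbs, e.g. "flare / analytics / cluster"
  text(path.map(n => n.name).join(" / "), 10, 50);
}

function keyPressed() {
  const current = path[path.length - 1];
  const parent  = path[path.length - 2];
  if (keyCode === LEFT_ARROW && parent) {
    path.pop();                                   // swipe left: up to the parent
    const gp = path[path.length - 2];
    siblingIndex = gp ? gp.children.indexOf(path[path.length - 1]) : 0;
  } else if (keyCode === RIGHT_ARROW && current.children) {
    path.push(current.children[0]);               // swipe right: down to the first child
    siblingIndex = 0;
  } else if ((keyCode === UP_ARROW || keyCode === DOWN_ARROW) && parent) {
    const step = keyCode === UP_ARROW ? -1 : 1;   // swipe up/down: cycle siblings
    const sibs = parent.children;
    siblingIndex = (siblingIndex + step + sibs.length) % sibs.length;
    path[path.length - 1] = sibs[siblingIndex];   // swap current node for its sibling
  }
}
```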
This project allows users to use Soli swipe events to describe various transformations on a cube in 3D space. The project was built using three.js, and I learned many things about the THREE.Matrix4 object. I also initially copied some code from 2013 to handle collision detection. However, it was really buggy, so I wrote my own, which works by calculating each cube's axis-aligned bounding box (AABB) and checking for overlap. Since each cube must pass through 6 transformation filters, the program requires a lot of swipe gestures. In order to make the program easier to use and more toy-like, the number of cubes is limited to 4. Personally, I could not get the Soli tap event to work on my phone, which is why I relied only on swipe events. Perhaps using tap to drop could make the interaction less awkward.
Project => ( app | code )
*use arrow keys in place of swipe events
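A minimal sketch of the AABB collision check described above, using three.js's Box3 helper; the mesh variables and the undo-on-collision policy here are illustrative, not the project's actual code.

```javascript
import * as THREE from 'three';

// Two AABBs overlap only if they overlap on every axis; Box3 handles that test.
function cubesCollide(meshA, meshB) {
  const boxA = new THREE.Box3().setFromObject(meshA);  // world-space bounding box of A
  const boxB = new THREE.Box3().setFromObject(meshB);  // world-space bounding box of B
  return boxA.intersectsBox(boxB);
}

// Example: apply a swipe-driven Matrix4 transform, then test against the other cubes
function tryTransform(cube, matrix, others) {
  cube.applyMatrix4(matrix);                     // e.g. a translation or rotation
  cube.updateMatrixWorld(true);
  if (others.some(o => cubesCollide(cube, o))) {
    cube.applyMatrix4(matrix.clone().invert());  // undo the move on collision
    cube.updateMatrixWorld(true);
  }
}
```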
This project allows users to create a world using Everest Pipkin's city game tileset. It was built using p5.js and p5.speech(). Soli swipe events allow users to move the cursor location, and parsed speech from p5.speech() allows users to place custom tiles. However, at the time of making this, Soli Sandbox did not support microphone access. The way I got the filmed version to work was by using two devices: the Pixel 4 updated the cursor and my laptop's microphone updated the world. Both devices were networked via socket.io.
Browser version => ( app | code )
*use arrow keys in place of swipe events
Phone version => ( app | code )
^^ this version is really jank
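For reference, a rough sketch of the two-device relay described above, assuming a small local socket.io server; the event names ('cursor', 'tile'), port, and server address are placeholders.

```javascript
// server.js — relays cursor and tile messages between the two clients
const io = require('socket.io')(3000, { cors: { origin: '*' } });
io.on('connection', (socket) => {
  socket.on('cursor', (pos)  => socket.broadcast.emit('cursor', pos)); // from the phone
  socket.on('tile',   (tile) => socket.broadcast.emit('tile', tile));  // from the laptop
});
```

```javascript
// client sketch (either device) — assumes the socket.io client and p5.speech are loaded
const socket = io('http://192.168.0.10:3000');   // placeholder server address
let cursor = { x: 0, y: 0 };
const world = {};
let speechRec;

socket.on('cursor', (pos) => { cursor = pos; });                 // phone moves the cursor
socket.on('tile', (t) => { world[`${t.x},${t.y}`] = t.name; });  // laptop places tiles

function setup() {
  createCanvas(400, 400);
  speechRec = new p5.SpeechRec('en-US', gotSpeech);  // laptop side only
  speechRec.continuous = true;
  speechRec.start();
}

// Phone side: arrow keys stand in for Soli swipes
function keyPressed() {
  if (keyCode === RIGHT_ARROW) cursor.x++;
  if (keyCode === LEFT_ARROW)  cursor.x--;
  if (keyCode === UP_ARROW)    cursor.y--;
  if (keyCode === DOWN_ARROW)  cursor.y++;
  socket.emit('cursor', cursor);
}

// Laptop side: a recognized phrase becomes a tile name at the cursor
function gotSpeech() {
  socket.emit('tile', { x: cursor.x, y: cursor.y, name: speechRec.resultString });
}
```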
This project is a landscape generator: the idea is that a person can go in, traverse the environment, and take images, much as a regular person would while sightseeing. I am very happy with the improvements I made to this project. I was able to get a variety of gestures to work for personalizing these environments (adding parallax, choosing colors, playing around with iterations, etc.), and I am really into the idea of making it into an actual application that lets you take photos! So that was fun.
The main thing I learned about my process in making this is: it seems I am still a toddler without developed object permanence, because when I don’t see the console logs, I simply seem to completely forget they exist. That was my poor attempt at a joke, but I did realize I have bad debugging habits, and that cost me a lot of time while making this. Soli Sandbox also has some weird quirks in how it deals with data (images) downloaded from the app, so you can’t use save() or saveCanvas() to download an image (it all seems to go into the app's data folder, and I didn’t want to mess with folders I don’t have permission to view). To work around this, I created a Twitter bot to post some images, though dealing with the Imgur API has been painful.
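The shape of that workaround looks roughly like the sketch below: instead of saving locally, grab the canvas as a PNG and send it out over the network. The endpoint URL and payload field are hypothetical placeholders for whatever server the bot listens on.

```javascript
// Hedged sketch of the save() workaround: post the canvas instead of downloading it.
function postSnapshot() {
  // Grab the p5 canvas as a base64-encoded PNG
  const dataUrl = document.querySelector('canvas').toDataURL('image/png');
  fetch('https://example.com/upload', {            // hypothetical bot endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ image: dataUrl })       // the server relays this to Twitter/Imgur
  });
}
```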
– I added subtitles to the videos above. I might re-edit them so that the subtitles are permanently embedded in the video rather than just a YouTube feature.
– I adjusted the tap gesture to be the one that takes the snapshot of the image, so it feels more like taking a picture with a camera. 😀
– I removed the Twitter API from the documentation/code until I figure out how to make it work properly with Soli in terms of images. I will eventually update this with the working code!
Check the Twitter account to see images generated by the swipe-down gesture:
New Media Experience: Virtual Squash is an app that recreates the experience of practicing squash in a squash court using the Soli sensor on the Google Pixel 4.
I made this app to motivate myself to get some exercise in the COVID-19 era.
Overview:
I made a virtual squash game that can be played with the Soli sensor. Left and right swipes mimic squash swings, and a tap reclaims the squash ball. Every time the ball hits the front wall, the hit counter in the top-left corner increments by one. Similarly, every time the ball is reclaimed, the reset counter below it increments by one.
Reflection:
Through this project, I learned how to create a space with WEBGL in p5.js, manipulate the camera() function to change the viewpoint, and play a sound file every time a certain action takes place.
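A stripped-down sketch of those pieces, assuming p5.sound is loaded and a hit.mp3 file exists: a WEBGL canvas, a camera() call for the player's viewpoint, and a sound played whenever the ball reaches the front wall. The court dimensions and key mapping here are illustrative.

```javascript
let hitSound, ballZ = 0, ballSpeed = 0, hits = 0;

function preload() { hitSound = loadSound('hit.mp3'); }   // assumed filename

function setup() {
  createCanvas(400, 400, WEBGL);
  // Place the eye behind the court, looking toward the front wall at -z
  camera(0, 0, 600, 0, 0, -500, 0, 1, 0);
}

function draw() {
  background(30);
  ballZ += ballSpeed;
  if (ballZ < -500) {               // ball reaches the front wall
    ballSpeed *= -1;                // bounce back toward the player
    hits++;
    hitSound.play();
  }
  if (ballZ >= 0 && ballSpeed > 0) ballSpeed = 0;   // back at the player, waiting for the next swing
  push(); translate(0, 0, ballZ); sphere(10); pop();
}

function keyPressed() {
  // Arrow keys stand in for the left/right swipe "swings"
  if (keyCode === LEFT_ARROW || keyCode === RIGHT_ARROW) ballSpeed = -8;
}
```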
Quick Demo Video:
~35-minute video of playing squash with Soli:
* I used an aluminum bottle so that Soli would detect my motion better. As you can see throughout the video, I noticed that Soli recognized almost every move when the bottle was held at a certain position and moved across the screen at a certain speed, so I tried to swipe at that fixed position and speed.
Whack-a-Mole: Might be pretty self-explanatory, but I wanted the tap gesture to emulate the idea of “whacking” the mole. Or perhaps I may take this whack-a-mole concept and metaphorize it into something else, like whacking a cockroach or bug.
Problems:
Wanting a Title/Start Page
Mole holes don’t center or resize along with the canvas
How to bring an image into the sketch while still being able to resize it (I want to make my own pixel characters as moles); a rough sketch of one way to handle the resizing follows this list
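One possible approach to the resize and image problems above, as a hedged p5.js sketch rather than the project's actual code: position the holes as fractions of the canvas, scale the mole image from the canvas width, and rebuild the layout in windowResized(). The mole.png filename is a placeholder.

```javascript
let moleImg;
// Hole positions as fractions of the canvas, so they recenter on resize
const holes = [ { x: 0.25, y: 0.5 }, { x: 0.5, y: 0.5 }, { x: 0.75, y: 0.5 } ];

function preload() { moleImg = loadImage('mole.png'); }   // hypothetical pixel-art mole

function setup() {
  createCanvas(windowWidth, windowHeight);
  imageMode(CENTER);
}

function draw() {
  background(80, 160, 80);
  const holeSize = width * 0.15;                    // size scales with the canvas
  for (const h of holes) {
    ellipse(h.x * width, h.y * height, holeSize, holeSize * 0.5);   // the hole
    image(moleImg, h.x * width, h.y * height - holeSize * 0.3,      // the mole sprite
          holeSize * 0.6, holeSize * 0.6);
  }
}

function windowResized() { resizeCanvas(windowWidth, windowHeight); }
```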
For this project, I wanted to create an alarm that reacts to the range of conscious states you have in the morning.
First, the unconscious: you wake up in the morning, with a phone spewing information. Nothing registers in your head because you’re still trying to wake up. Hence, the phone spews out meaningless information as well.
Second, the conscious: you grab your phone for some new information. Things start to make sense, as you expose yourself to the news in the world. Hence, the phone also reacts, by reciting the most recent news of the day.
Lastly, the dismissal: you go about your day. Hence, when you leave the room, the phone also dismisses itself.
(Made with the New York Times API, RiTa.js, and p5.speech.js. I still need to create better transitions and visual interest…)
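A compressed sketch of the three states under those libraries, assuming the NYT Top Stories endpoint, RiTa, and p5.speech are all loaded; the API key, word count, and state trigger functions are placeholders.

```javascript
let speech, headlines = [];

function setup() {
  noCanvas();
  speech = new p5.Speech();
  // Fetch today's headlines; the api-key value is a placeholder
  loadJSON('https://api.nytimes.com/svc/topstories/v2/home.json?api-key=YOUR_KEY',
           (data) => { headlines = data.results.map(r => r.title); });
}

// Unconscious: spew meaningless words generated with RiTa
function speakGibberish() {
  const words = Array.from({ length: 8 }, () => RiTa.randomWord());
  speech.speak(words.join(' '));
}

// Conscious: recite the most recent news of the day
function speakNews() {
  speech.speak(headlines.slice(0, 3).join('. '));
}

// Dismissal: fall silent when the user leaves the room
function dismiss() {
  window.speechSynthesis.cancel();   // stop any speech in progress
}
```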
Concept: Allow the user to experience the joy of turning over rocks and finding beetles in a garden and watching them scuttle to a nearby shadow (the user’s hand or back under the rock they came from).
Reach In – beetles scuttle to the user’s hand shadow location
Reach In Too Fast – beetles scatter off screen (they think they’re being crushed)
No Presence – beetles wander idly across the screen
Swipe – roll the rock according to the swipe direction
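A hypothetical p5.js sketch of that gesture-to-behavior mapping as a simple state switch; the Beetle movements, the reach-speed threshold, and the onReach/onNoPresence hooks are all placeholders for the eventual Soli wiring.

```javascript
let beetles = [], state = 'IDLE', target;

class Beetle {
  constructor() { this.pos = createVector(random(width), random(height)); }
  step() {
    if (state === 'SEEK_SHADOW') this.pos.lerp(target, 0.05);                 // hide under the hand
    else if (state === 'SCATTER') this.pos.add(p5.Vector.random2D().mult(8)); // flee off screen
    else this.pos.add(p5.Vector.random2D());                                  // idle wandering
    ellipse(this.pos.x, this.pos.y, 8, 12);
  }
}

function setup() {
  createCanvas(400, 400);
  for (let i = 0; i < 10; i++) beetles.push(new Beetle());
}

function draw() {
  background(120, 100, 80);
  beetles.forEach(b => b.step());
}

// Reach In: beetles head to the hand's shadow; reaching in too fast scares them
function onReach(x, y, speed) {
  target = createVector(x, y);
  state = speed > 5 ? 'SCATTER' : 'SEEK_SHADOW';
}

// No presence: beetles go back to wandering
function onNoPresence() { state = 'IDLE'; }
```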
In progress: turning the rock over, implementing a way to show multiple species of the same beetle, allowing the beetles to hide under the rock instead of the user’s hand, and connecting it to Soli
Future: improving beetle physique and movement details, adding more detail to the environment, maybe adding a trail of mud where the rock used to be?