marimonda – TeachableMachine



This was really fun and silly. I was going to put music on the video clip but decided against it, since I danced without music. This is also just me making a fool of myself, so :pensive:. Oh well. I trained the detector on me dancing the Macarena, using the three main lines of the dance and its choreography. For the record, I had never danced it before, and I am not particularly good at it either.



marimonda – Pix2Pix


It makes me wish I was a better artist so I could do these models justice, but I really do think the beauty in AI doesn't come from how good it is at its job, but from how the biases in the original datasets and the human-ness of the inputs to the prediction models make something completely unexpected.


I was trying to make it look like one of those black-cat photos where they look really funny. I'm not great at drawing, though 🙁 BUT! The cat still looks cat-like.


Cat heart <3


THIS looks like an actual shoe! Sort of!

This too :).


This was trained on very specific buildings, mostly neo-classical-style ones, and I tried to draw a more modern design. I'm not sure it vibed with it very well :O. I wonder what would happen if it were also trained on modern buildings? Would that create a clash in designs? Would it make everything more inaccurate?



marimonda face tracking

(The real gif)

It’s me!~

I am really homesick for a country I haven’t lived in for 9 years and this piece is about that.

The background is a drawing of La Iglesia de San Nicolas de Tolentino. 

My face is a marimonda.

My process/context about my guy is below the cut, since it's mostly reflection on this body of work for myself.



I tried to keep this somewhat short since I have a bad habit of writing essays on these blog posts:

  1. In law enforcement: I think one main topic covered in most of the readings was the use of facial recognition in law enforcement. The reason this topic is super interesting to me is that it's literally one of the most dystopian applications of AI. It is not just facial recognition, either; AI is even being used in decisions about who gets a longer sentence.
  2. Is being recognized a good thing? I think the whole section of the John Oliver video that focused on Clearview was INSANE. It makes me wonder how this doesn't already violate federal regulations or any sort of fair-use copyright, especially when multiple corporations like Facebook have asked Clearview to stand down. I was under the impression that this type of application of AI would be illegal, and it's terrifying to know that it's not.

marimonda – mobiletelematic

Short and to the point video

Full game play with a few drawings


PLAY PIXEL DRAW HERE (mobile only)


This project is a collaborative pixel environment, where two users are each assigned a role of black or white pixels, and together they make small drawings or icons.

I actually really enjoyed making this game. My original idea was somewhat simpler and more straightforward: it was more about drawing and erasing the lines people make. In this version, I implemented a way for two users to alter the same space simultaneously in different ways. I was primarily concerned with the idea of collaborative drawing using a relatively limited set of tools. In essence, either player can attempt to make compelling images just by using their own color, but when they are connected, they can push further and collaborate to make something interesting. I paid special attention to the third design issue listed in the deliverable (equal roles vs. complementary roles), because to me the idea of two people assigning each other tasks and roles to make something is interesting. Naturally, both colors have different functions. Often, white is relegated to the background and black to the foreground… but what if the prompt asks the users to draw a night sky? Or a blank notebook page? This project is ultimately about the communication and problem-solving these two parties use to reach their goal, especially when most of the prompts are abstract or open to different interpretations.
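The complementary-roles mechanic could be sketched roughly like this (the names and structure here are my own guesses for illustration, not the project's actual code): each player is locked to one color, and stamping a cell always writes that player's color, so black and white players can only act on the shared grid in opposite directions.

```javascript
// Minimal sketch of a shared pixel grid with complementary roles.
// Each player is locked to one color and can only stamp that color;
// the grid itself is the shared state both clients would sync.
// (Hypothetical names; the real project's code is not shown in the post.)

const WHITE = 0;
const BLACK = 1;

function makeGrid(size) {
  // Start with an all-white canvas, like a blank page.
  return Array.from({ length: size }, () => Array(size).fill(WHITE));
}

function makePlayer(color) {
  return { color };
}

// A player "paints" a cell with their assigned color, so the black
// player can only darken cells and the white player can only erase.
function paint(grid, player, x, y) {
  grid[y][x] = player.color;
  return grid;
}
```

With this split, drawing a night sky means the roles invert: the black player fills the background and the white player picks out the stars.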

Overall, this project was surprisingly difficult to make, but really rewarding and fun. The actual game was easy, but there were a lot of weird things about using UUIDs that made it funky (mostly in trying to continuously update the same data from two different places). There are a few minor things about the interface that were not relevant enough to put in the video: I made a little help pop-up and a way of saving the images locally. These things aren't really important, but I thought they were worth mentioning.
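One common way to handle two clients continuously writing to the same record is a last-write-wins merge, where each cell update carries a timestamp and the newest write survives. This is a guess at the kind of fix involved, not the project's actual sync code:

```javascript
// Sketch: last-write-wins merge for concurrent cell updates.
// local/remote are maps of "x,y" -> { color, t }, where t is a
// timestamp. When both clients touch the same cell, the newer
// write wins. (Illustrative only.)

function mergeUpdates(local, remote) {
  const merged = { ...local };
  for (const [cell, update] of Object.entries(remote)) {
    if (!merged[cell] || update.t > merged[cell].t) {
      merged[cell] = update;
    }
  }
  return merged;
}
```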

I think the weakest part of my project is my documentation. I don't have the best camera equipment, so I spent most of my time trying to set up a way of filming myself without a tripod. I should've spent more time thinking about alternatives rather than brute-forcing it. For that reason, I am going to try to re-shoot my video in a way that gets the message across in a more creative manner.

I personally have been having a lot of fun playing this game with my friends. We have actually made a lot of small drawings together, and it has been a fun way to connect over this shared language of pixels. And while the images in the videos are somewhat ugly, there is a lot of possibility to make compelling icons together:

Some drawings I did with friends (screenshots showing the interface):





















Some extra doodles (saved using the save button):


(This is Benford)






MINOR EDIT: I added quotation marks around the prompt, e.g. Draw "Rejection"

marimonda – SoliSandbox

This project is a landscape generator. The idea is that a person can go in, traverse the environment, and take images, much like a regular person would while sightseeing. I am very happy with the improvements I made to this project. I was able to get a variety of gestures working to personalize these environments (adding parallax, choosing colors, playing around with iterations, etc.), and I am really into the idea of turning it into an actual application that lets you take photos! So that was fun.

The main thing I learned about my process in making this is: it seems I am still a toddler without developed object permanence, because when I don't see the console logs, I simply forget they exist. That was my poor attempt at a joke, but I did realize I have bad debugging habits, and that cost me a lot of time while making this. Soli Sandbox also has some weird quirks in how it handles data (images) downloaded from the app, so you can't use save() or saveCanvas() to download an image (everything seems to go into an app-data folder, and I didn't want to mess with folders I don't have permission to view). To get around this, I created a Twitter bot to post some images, though dealing with the Imgur API has been painful.
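The workaround boils down to reading the canvas as a data URL and uploading the base64 payload to an image host, instead of writing a file with save()/saveCanvas(). A minimal sketch of the extraction step (a hypothetical helper; the post's actual bot code isn't shown):

```javascript
// Sketch of the save() workaround: read the canvas as a data URL
// and strip the prefix to get the base64 payload an image host
// (e.g. the Imgur API) accepts. (Hypothetical helper.)

function dataUrlToBase64(dataUrl) {
  // "data:image/png;base64,AAAA..." -> "AAAA..."
  const comma = dataUrl.indexOf(',');
  if (comma === -1) throw new Error('not a data URL');
  return dataUrl.slice(comma + 1);
}

// In p5.js, the data URL would come from the underlying canvas, e.g.:
//   const payload = dataUrlToBase64(canvas.elt.toDataURL('image/png'));
// and then be POSTed to the image host's upload endpoint.
```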

Play with it here

Project link


– I added subtitles to the videos above. I might re-edit them so the subtitles are burned into the video itself and not just a YouTube feature.

– I adjusted the Tap function to work as the gesture that takes the snapshot of the image, as well as making it tappable so it feels more like taking a picture with a camera. 😀

– I removed the Twitter API from the documentation/code until I figure out how to make it work properly with Soli in terms of images. I will eventually update this with the working code!

Check the Twitter account to see images generated by the swipe-down gesture:




Soli Landscapes

For this project, I am using the Soli sensor data as a way to traverse a generative landscape and look for specific objects in it. The text quoted on the screen is a statement that references a specific thing to look for in the environment. In the case of the video below, the excerpt references eyes, so the player needs to traverse the landscape to find them.

This project is not complete; while the main elements of the game are done, I still have a few things left to implement:

  1. I am still working on making the objects move smoothly using linear interpolation and parallax. I think this would greatly improve the visual presentation of the piece.
  2. I am working on using recursive tree structures and animations to make this environment more dynamic. I also want to add more variety to the objects and characters a person looks for in the landscapes, and to the text that references them. I also want more movement and responsive qualities in the background, such as birds appearing when a specific gesture is performed. Currently, an example of this type of responsiveness is how the moon-eye moves: it closes and opens depending on whether human presence is detected.
  3. Varied texts and possibly generative poems that come from the objects being looked for in the landscape. The current excerpt comes from a piece of writing my friend and I worked on, but I would like to make it generative.
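The smoothing-plus-parallax idea in the first item can be sketched in a few lines (p5.js provides lerp() built in; this standalone version uses the same math, and the names here are illustrative):

```javascript
// Sketch of lerp-based smoothing plus parallax.

// Linear interpolation: move fraction t of the way from a to b.
// Calling this every frame with a small t eases a drawn position
// toward its target instead of jumping, e.g. x = lerp(x, targetX, 0.1).
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Parallax: each layer moves a fraction of the camera offset.
// Nearer layers (higher depth factor) shift more per unit of
// camera movement, which reads as depth.
function layerOffset(cameraX, depth) {
  return cameraX * depth;
}
```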

Here is an example of how the environments look when swiping:


Here is another landscape:

marimonda – soli-sketches

I am still not completely sure about these ideas, but they're directions I'm sort of thinking in.

  1. Weird Landscape Generator

In this one I want to make a generative virtual world that can be traveled around using gestures; below I have a rough image of what the map would look like. I am also interested in using textures from images, but I am still thinking about how I would do this: should I use ML, or perhaps an easier way to get weird generative landscapes (I could always just draw elements)?

2. Conversation agent?

Basically, you are having a coffee date with certain people (in a sort of speed-dating setting), but the conversations are really surreal, ideally in a place that looks like it could be a recorded Zoom session, because that's the most uncomfortable digital space we are all used to. Swiping gestures could be used to move back and forth between people. It would be fun to work with generative backgrounds/people so that each person is unique. Choices would be things the user has to say, so that the user can simply eat their dinner/coffee without getting their dirty hands on their phone.

3. Bad voice to text diary:

You use swiping/tapping gestures to reset and to go back and forth between the pages. Many words are replaced to make what is being written somewhat nonsensical.

Certain keywords could aid in formatting: "bold" would bold the following words, as would "italics", and maybe words like "bigger"/"smaller" could change the font size. It would also be fun to experiment with posting these diary entries to social media with IFTTT.

  4. Tweet navigator that shows/reflects specific trends or sets of keywords, chosen from a list (using swipe or tap), to create poetry. It could also be generative, where the user swipes through different generated phrases to find tweets containing them. This could also incorporate elements from idea 3 with speech-to-text.