Marimonda – Final Project

 

Breaking up with a robot after an intimate relationship defined by a lack of dancing. 

Rompecorazones (Heartbreaker)

In this video (in case you didn’t watch it), I am wearing a traditional Colombian Cumbia dress as I dramatically break up with a robot.

This project, at its core, is a vent piece that contextualizes my failure in attempting to have a robot dance with me.

My Person in Time project, at its core, was about creating a robotic interaction that would follow me as I moved or danced with it. But I wasn’t quite able to get there. I spent roughly two months continuously debugging a UR5 cobot (collaborative robot) as I attempted to have it follow my face in real time. Using Handsfree.js, Python, and a couple of dozen cups of coffee, I was able to partially achieve this daunting task.

Without getting too much into the technical implementation of my robotic interactions, I had a lot of issues wrapping my head around the different SDKs and networking protocols for communicating between my computer and the robot. It’s not easy to write code that makes the robot move, and it’s even harder to do so remotely from your own machine. So I spent a lot of time trying to figure out how these pieces fit together. I eventually hacked it, but I realized that getting to the point where the robot could move smoothly with me would take a lot more time and effort than I had. By then I had grown bitter with the robot, verbally abusing it every chance I could, because I knew it would never be good enough to properly dance with me at this rate.
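For anyone curious what “remotely” boils down to: the UR controller listens for raw URScript strings over TCP, so in principle a few lines of Python can push a move command across the network. Here is a minimal sketch of that idea, with a placeholder IP address (whatever the robot happens to have on your network):

```python
import socket

ROBOT_IP = "192.168.1.10"   # placeholder -- depends on your network setup
URSCRIPT_PORT = 30002       # the UR "secondary" interface accepts raw URScript

# One URScript command: move the six joints to a pose (in radians),
# with acceleration a and velocity v.
script = "movej([0.0, -1.57, 1.57, -1.57, -1.57, 0.0], a=1.0, v=0.25)\n"

with socket.create_connection((ROBOT_IP, URSCRIPT_PORT), timeout=5) as s:
    s.sendall(script.encode("utf-8"))
```

Getting one command through is the easy part; the hard part is doing this continuously and smoothly, which is where most of my time went.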

Needless to say, this piece as a whole is a testament to the technical failures that I experienced, technical failures that made me consider whether or not I wanted to drop this class at times. This piece was the process of coming to terms with my lack of technical expertise, but it is also a testament to my learning.

In terms of the video, I did consider using music/a background track (specifically cumbia beats), but the jovial nature of cumbia went against the tone of the video, and I found that my voice alone was appropriate for this project. As with any project, I can find a billion ways in which my storytelling could be improved. I spent hours trying to figure out how to film myself interacting with the robot, and there are many distinct motions of the robot (as well as bugs) that I didn’t do justice in this documentation.

Special thanks to Oz Ramos, Madeline Gannon, and Golan Levin for helping me get started with the UR5! We developed a special relationship.

Special thanks to Neve (and Clayton), Mikey, Em, Ana, Will, Benford and Lauren for saving my butt with the narrative/filming/setup of the video! I am not a video person, so I genuinely couldn’t have made this without everyone who helped me figure these things out.

Final Project – Marimonda

For my final project, I want to extend (and finalize) the work done for my second project, which explored the UR5 as an expressive capturing device using real-time communication. Because of limitations and struggles with RoboDK, networking, and scheduling, I didn’t have the opportunity to see this project fully realized. I learned a lot from my explorations with my second project, from understanding the basics of robot arms to learning about real-time control mechanisms.

Goal #1: I want to perfect a system to capture myself or others as a subject without the need for a cameraman or woman. This is in essence a completion of my second project, but also a refinement of the systems I was already working with. I will be working with TCP/IP control of the robot using Python.

Goal #2: I want this system to support storytelling. I think having a robot autonomously follow you opens an interesting avenue for a creative story to be told. I don’t want this project to be just a capturing device; I want to get to the core of what I want to capture. That question still needs refinement, and part of my final project will be getting to that point. But for Goal #2, I need to finish Goal #1.

What has been done? What needs to be done?

I was able to control the robot through TCP/IP and Python relatively well; now I need to extend the system I have to support real-time interaction through camera feedback. But the networking aspect, which was the trickiest part for me, has been figured out. So this weekend I will try my best to finalize this system (Goal #1), so I can begin hacking away at Goal #2 after Thanksgiving.
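As a rough sketch of what that camera-feedback extension could look like: the loop below reads where the face is relative to the image center and keeps re-sending a small speedl correction so the motion stays smooth. The get_face_offset() helper is hypothetical (it stands in for whatever tracker is running, e.g. Handsfree.js reporting over a websocket), and the IP and gain are placeholders.

```python
import socket
import time

ROBOT_IP = "192.168.1.10"   # placeholder
URSCRIPT_PORT = 30002

def get_face_offset():
    """Hypothetical helper: returns the face's (dx, dy) offset from the image
    center, normalized to [-1, 1], from whatever tracker is running."""
    raise NotImplementedError

GAIN = 0.05  # meters/second per unit of offset -- would need tuning on the real robot

with socket.create_connection((ROBOT_IP, URSCRIPT_PORT)) as s:
    while True:
        dx, dy = get_face_offset()
        # speedl runs for t seconds and then stops, so re-sending it every
        # cycle nudges the tool toward the face without jerky stop-start moves.
        cmd = f"speedl([{GAIN * dx:.4f}, {GAIN * dy:.4f}, 0, 0, 0, 0], a=0.5, t=0.2)\n"
        s.sendall(cmd.encode("utf-8"))
        time.sleep(0.1)
```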

marimonda – PersonInTime Update

I am making this post right now, describing a work in progress as a means of documenting where I am, since currently my connection with the robot is limited and I am unable to control it in real time through RoboDK (although I think I might get it to work using RTDE or OSC). Additionally, I had very limited access to the Studio, as I was out of town for an extended period and there were conflicting events the weekend I came back.

My project is a performance/interaction between a person and UR5, exploring curiosity and mimicry.

Specifically, my goal is to create a robot that looks at and follows a person, by appending a camera to the UR5 and “correcting” the movements to follow a person in real time.

Using Oz Ramos’ method for real-time control of the UR5 through RoboDK, I was able to control the robot pretty well and support interactions through Handsfree.js (which additionally supports some pretty great pose/face-tracking capabilities that would allow me to implement my own robot copycat).

Unfortunately, I started having problems connecting to the robot in real time. I attempted to reset my network settings, redownload RoboDK, and connect through other methods, but it’s getting to the point where trying to connect to the robot through this SDK is proving more time-consuming than constructing the same program using RTDE or OSC.
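For reference, the RTDE route would probably look something like the sketch below. It assumes the ur_rtde Python package (pip install ur-rtde) and a placeholder IP; I haven’t actually gotten this running on the robot yet, so treat it as a plan rather than a working setup.

```python
from rtde_control import RTDEControlInterface

rtde_c = RTDEControlInterface("192.168.1.10")  # placeholder IP

# Joint positions in radians, followed by speed and acceleration.
rtde_c.moveJ([0.0, -1.57, 1.57, -1.57, -1.57, 0.0], 0.5, 0.3)
rtde_c.stopScript()
```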

I think I easily could have mocked up a project by either hard-coding motions on the UR5 or using Handsfree.js to create an interactive experience. But I am not interested in those two avenues, since my goal for this project is to gain sufficient skill in controlling the UR5 in real time.

Right now I am considering different ways of implementing this project, because I am devoted to making UR5 my baby.

This is me controlling the UR5 with my simulation:

 

 

marimonda – PersonInTimeProposal

For this project, I am interested in exploring human movement and interactions by programming a UR5 to mimic/mirror a person.

This is a UR5 Robot:

UR5 Robot Arm Manipulator by Universal Robots – Clearpath Robotics

(Very cute right?)

I am interested in exploring this for two reasons:

  1. I think performance is inherently a timed experience; exploring the interaction between a human and a robot through mimicry is an interesting approach to the idea of “A Person In Time”.
  2. I want to learn how to work well with the UR5 and explore the ways in which a human can perform/otherwise collaborate with a robot.

Functionally, the UR5 will detect a person’s face using a wide-lens camera and center its field of view on the person’s head, attempting to move as close as possible to them, so that if the person moves down or to the side, the robot looks at them as they move, with its joints paralleling the human’s motion.
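One way the “detect a face and measure how far off-center it is” piece could work is sketched below, using OpenCV’s stock Haar-cascade face detector (not necessarily what I’ll end up using, since Handsfree.js does this in the browser). The normalized offset it prints is the error signal the robot would try to drive to zero.

```python
import cv2

# Stock Haar-cascade face detector that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # the wide-lens camera would just be another device index

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        img_h, img_w = frame.shape[:2]
        # Offset of the face center from the image center, normalized to [-1, 1].
        dx = ((x + w / 2) - img_w / 2) / (img_w / 2)
        dy = ((y + h / 2) - img_h / 2) / (img_h / 2)
        print(f"offset: dx={dx:.2f}, dy={dy:.2f}")
```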

marimonda – TwoCuts

Prompt: Using the tag twocuts, briefly consider what your two cuts may be in either your Typology Machine or, if you feel ready, in your Person In Time project.

https://cdn.discordapp.com/attachments/720701646077952116/1028415274917040218/unknown.png

I interpreted from the reading that, in a sense, the cuts refer to the observer and what is observed. In our case, that would be the process of capturing as opposed to the conceptual element that is created.

For my next project (Person In Time), I am interested in manipulating a robot to follow people around, and within that, capturing the motion and character of others through time. In both this project and my last project, I am interested in the observation of different subjects through time — whether that’s their motion, behaviors, or actions. In this sense, I am interested in the “Technoscientific Laboriality” sort of cut, which exists in this realm of measurement and archival.

marimonda – TypologyMachine

My First ASMR

This project is a typology of human expressions in reaction to unusual (and potentially disturbing) auditory content.

(For the best experience, please wear headphones)

For this project, I was interested in exploring the variation that exists across the sensory experiences of different human beings. As someone who is rather sensitive to sound due to misophonia, I was interested in how different people react to different unusual sounds. For example, chewing sounds might be appealing to someone who experiences ASMR or absolutely distressing (and even enraging) for someone who has misophonia. Most people lie somewhere in between these two experiences, and through their faces they reveal a lot about how they experience these distinct auditory inputs.

Throughout multiple iterations of this project, I was interested in exploring the variety that exists in human emotions, whether they be my own or someone else’s. Upon exploring auditory input, I was introduced to this project by Olivia Cunnally, which captures Grima — a mostly universal response to deeply unsettling sounds. Since sound is something I struggle with, I decided to double down on this idea.

To create this project, I first had to generate the audio using a 3Dio binaural microphone. A binaural microphone accurately places auditory input in 3D space, making sounds feel either closer to or farther from you. With a combination of hand sanitizer, water, my mouth, and a pin, I was able to generate an array of sounds to sample from. I ended up with approximately 30 minutes of audio. I then chose the most disturbing audio samples (with feedback from my friends Bella and Lauren) and used them to create a 5-minute video.

I then recorded various reactions from unsuspecting participants using my phone’s 240 fps slow-motion camera. I told each person that they would be listening to 5 minutes of potentially distressing audio and that I would be recording their reactions. Finally, I compiled 12 of the recorded reactions into a 4 x 3 grid, synchronized to the audio. The video and audio were slowed down to 0.5x speed to make expressions more dramatic.
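For anyone curious, the grid-and-slowdown step can be reproduced in a few lines of Python with moviepy. This is only a sketch with made-up filenames, and it assumes the clips have already been trimmed so they start in sync (which was the hard, manual part).

```python
from moviepy.editor import VideoFileClip, clips_array, vfx

# Hypothetical filenames for the 12 already-synchronized reaction clips.
files = [f"reaction_{i:02d}.mp4" for i in range(12)]
clips = [VideoFileClip(f).fx(vfx.speedx, 0.5) for f in files]  # 0.5x slow motion

# Arrange into the 4 x 3 grid (3 rows of 4 clips each) and render.
grid = clips_array([clips[0:4], clips[4:8], clips[8:12]])
grid.write_videofile("typology_grid.mp4")
```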

Overall, I am very pleased with how this project turned out. I am not someone who regularly works with either video or audio, so I know there is a lot of room for improvement in how I conducted my project. First, I think there are a couple of points where the video isn’t perfectly synchronized to the audio. I think this is because I synchronized the videos using the original audio of the clips themselves (via a click) and then joined them; instead, I should have used visual cues to make this easier for myself (having people clap, for instance). I do think this is something I can revise, but I have been having issues with the size of my project, so exporting and editing have been taking a very long time (I will not use Lightworks again). In the end, I am still very pleased with the final outcome, but I have learned to be more cognizant of my videography in the future!

Thank you:

Lauren, Golan, Bella, Shelly, Leo, Matthew, Qixin, Neve, Joyce, Sarah, Hima, Emmanuel, Milo, Will, Ashley, Mikey for allowing me to take footage of them!

Nica, Cassie, Bishop and Harrison for listening to the audio.

Marimonda – TypologyMachineProposal

For this project, I am interested in creating a capturing device that is triggered by a particular event or behavior. Specifically, I am interested in making a camera that takes a picture/short video of what I am looking at every time I say “Hello” or “Goodbye”. I want to collect these and present them as either a book or a video installation.
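The trigger mechanism I have in mind is roughly the sketch below, assuming the speech_recognition package and a webcam; the real thing would need to be wearable or otherwise pointed at whatever I am looking at, so the camera index here is just a stand-in.

```python
import cv2
import speech_recognition as sr

recognizer = sr.Recognizer()
camera = cv2.VideoCapture(0)   # stand-in for a head-mounted/wearable camera
TRIGGERS = {"hello", "goodbye"}

while True:
    with sr.Microphone() as source:
        audio = recognizer.listen(source, phrase_time_limit=3)
    try:
        words = recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        continue  # nothing intelligible was said
    if any(trigger in words for trigger in TRIGGERS):
        ok, frame = camera.read()
        if ok:
            # Save the snapshot with a rough timestamp in the filename.
            cv2.imwrite(f"capture_{cv2.getTickCount()}.jpg", frame)
```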

Perhaps it’s a bit of curiosity about what certain words “look” like, but I’d be interested in collecting enough images to be able to fine-tune or generate these things over the course of a couple of days.

What does a greeting look like?

What does a goodbye look like?

I am still not completely sure what this project will look like, or whether I will stick to this specific idea, but I am really interested in observing the interactions I have with others in the world and what these instants look like from the perspective of data.

References:

Dear Data (specifically this page on goodbyes, Week 52)

Lauren McCarthy’s SOMEONE

 

 

marimonda – SEM

With the scanning electron microscope, I scanned three different types of bugs and pollen. I literally stuck the taped magnets inside a flower to pick up different organisms and dropped them in the tube without much concern. So my first surprise was finding out how many bugs and how much pollen I had actually collected. The second thing that interested me was the complexity of the organisms, and the places where you zoomed in so far that you no longer recognized the original subject (where your brain filled in landscapes/context for the unrecognizable images).

marimonda – Reading

This essay definitely took me more than 25 minutes to read, but it was a reading that was well worth it, packed with curiosities.

In this class, I think I am one of the people least familiar with capture. I don’t have much knowledge of photography or film, and I found this reading really insightful because it sent me on a deep dive trying to understand what it means to capture something, or, more concretely, how the technology advanced to answer our questions. I appreciated the scientific context we got from astronomy specifically, because it characterized the iterative refinement of the photographic process in the pursuit of accuracy. I found it very interesting how, at the start of the essay, photography is framed as passive. That passivity implies inaction, a lack of intention, and a certain reproducibility. And yet the entire process of learning how to photograph was very intentional and active; the tools we have now came from very specific needs.

The section of the essay about photogrammetry was especially interesting to me, because I had thought photogrammetry was a relatively new process. The essay alludes to modern forms of photogrammetry using software and 3D models, but photogrammetry as a concept has truly existed since the 19th century. I would like to learn more about traditional methods of photogrammetry and the immense labor they take.

 

marimonda – LookingOutwards

When considering the prompt of this LookingOutwards assignment, my first thought was Brandon Ballengée’s work.

With Deformed Frogs and Fish, a Scientist-Artist Explores Ecological Disaster and Hope | Arts & Culture | Smithsonian Magazine

Malamp: Reliquaries (2001 – present)

Disclaimer: I am not someone with a lot of knowledge about “capture” in the first place, but I think biological staining techniques were among the first capturing techniques that gave us insight into structures that were impossible to see, even with magnification.

Brandon uses a variety of biological staining techniques to see into structures of post-apocalyptic creatures. In his statement, he explains that by obscuring direct representation of the organism, he avoids representing these animals as monsters or otherwise exploiting them. But there’s nothing more intimate and revealing than showing your insides to the world, especially as a specimen.

A bit personal, but I find this interesting because one of my biggest fears is seeing inside of my body, having every single part exposed. I think a lot about this, because I am very afraid of having aspects of a mutant in places I can’t see. There’s a certain invasiveness in passive technologies like MRIs, CTs, and X-rays. So it makes me think a lot about when capture becomes invasive, unwelcome, or exposing.