This final project is an attempt at using the Looking Glass HoloPlay Unity plugin to create an interactive animation/mini installation. The animation features a character I made in Google’s Tilt Brush for an earlier project (found here), which I exported into Blender to rig and keyframe.

My concept was to create a story where the Looking Glass was a sort of window into a box that the character was trapped in. It sulks in the box alone until the viewer interacts with it by pressing a button; this causes the character to notice the existence of the viewer and to attempt to jump at and attack the viewer, only to be stopped by the constraints of the box.

To break it down, I needed one idle animation, and one “reaction” animation that would occur with the button press. In this process, I made a preliminary walk cycle animation:

To give it more character, I added some random twitching/jerking:

I did these animations as practice, but also because I thought I would use them in my final project as part of my idle animation. I tried experimenting with Blender’s Action Editor, NLA Editor, etc. to piece together multiple animations, but it didn’t come out the way I wanted it to.

In the end, I was able to make an idle animation and a reaction animation that I liked and that worked in Unity, so the only thing left was to put it on the Looking Glass. However, I immediately ran into an issue:

[Error] Client access error (code = hpc_CLIERR_PIPEERROR): Interprocess pipe broken. Check if HoloPlay Service is still running!

Looking up this issue led me to delete and reinstall the HoloPlay plugin, restart Unity, duplicate my project, delete certain files from my computer, and try many other things that others suggested online. However, nothing worked. I did see a comment from someone who I believe worked on the Looking Glass saying that the problem would be fixed in an upcoming update.

Without the Looking Glass, here is my final project:


Just the idle state:

Just the reaction:


spingbing – Presentation

My goals for this past month were to become more comfortable with Unity, practice rigging, and mainly to create a project encapsulating both of these tasks by learning how to use the Looking Glass.

After seeing some very nontraditional interactive/immersive animations made in Unity, I realized I could kill two birds with one stone by learning the Unity interface while also expanding my use and knowledge of the medium. For the first couple of weeks, I dove into tutorials on Unity, the Looking Glass, and general animation in Blender. I usually don’t let myself actually learn while making projects, which results in a lot of really janky artwork full of workarounds and plain incorrect usage of the software. This prevents me from really learning, so given the time allotted specifically for tutorial watching, I finally gave myself the opportunity to learn.

This is a continuation of my project from last semester: https://courses.ideate.cmu.edu/60-428/f2021/author/spingbing/. I have finished the animation aspect of the project and now need to apply what I learned from the tutorials to actually finishing the piece.

To Do:

  1. Write a script to trigger the animation on button press
  2. Set up a simple box environment
  3. Fine-tune the configuration with the Looking Glass package
  4. Document the final display


While looking up inspiration for my project, I was trying to see if I could find an art piece that was not only made using the Looking Glass, but also interactive. In my search, I found this virtual reality painting simulator, which showcases painting strokes made in real time by another person using an Oculus. It is more of a drawing game/apparatus than a “finished artwork”, but I still find it inspiring because it is an interactive work made using a Looking Glass.



I am planning to use the Looking Glass with Unity for my final project to create a piece that responds to the user via the Makey Makey.

For my concept, I have recently found myself interested in the immediate as well as ultra-long term effects of nuclear war. I have already produced a couple of projects around this concept in other classes regarding the far end of this timeline, but for this class I would like to address the immediate effects.

The piece will consist of:

  1. a foggy, dusty environment with particles and debris flinging around the walls of the Looking Glass
  2. explosions in the background, triggered by the Makey Makey button, which may “shake” the camera view of the box
  3. a person or a couple of people who are coughing, running around, and banging on the glass as though they are trying to escape the environment they are in
    1. the default position will be sitting curled up in a corner looking miserable. Perhaps I will use a webcam to detect when a user approaches so the person can look up at them?


One interesting fact I came across in Kyle McDonald’s lecture is that researchers found that when people merely simulated the expressions of certain emotions, their bodies physiologically reacted as if they were truly experiencing those emotions. It goes to show that the phrase “fake it till you make it” really has some truth to it.

A link in the lecture points to a Microsoft site that encourages users to submit facial data to enable “a seamless and highly secured user experience”. Allowing facial recognition and tracking is an interesting trade-off: it eases the use of certain technologies, and it also adds a more diverse set of faces to a larger dataset, making that dataset more reliable and less biased. This helps advance the inclusion of a wider range of faces, which would lower the discrimination currently at issue in much facial recognition software. However, escaping recognition can also be valuable, for example to people subject to policing. An unbiased facial recognition dataset is both a good and a bad thing depending on what it is used for, so it is interesting to see arguments for both the benefits and the drawbacks of a more advanced and robust dataset.


A work of net art that I came across during my research was Female Extension by the cyberfeminist artist Cornelia Sollfrank. For this piece, Sollfrank generated over 200 fake female artist profiles and used them to submit work to an early net art competition held by a museum in Germany. With the addition of her fake profiles, the museum reported that over two-thirds of the applicants were female. Even so, the top three winners were male artists. The piece was a statement on how art was judged primarily by the profile of the artist rather than by the work itself.

I really enjoyed this piece, first because it was a very creative way to enter art into a competition: the art was not the submitted pieces but the profiles submitting them. I also appreciate that the profiles were generated rather than created individually by hand, and that the statement comes across very clearly because the results speak for themselves. Sollfrank’s point was proven, and even facilitated, by the judges; she did not have to say anything for her statement to be made.


Leah (bookooBread) and I worked collaboratively on this project. Most of our time was spent brainstorming ways to create something less technical and more conceptual. We immediately decided to make a website that would give a sense of relief, of taking a break. A first idea was a website similar to https://smashthewalls.com/, a game that lets users “break” the wall of their screen, leaning into the cathartic side of our main concept of relief. However, we decided that the experience we envisioned was not satisfying in terms of either the cathartic element or the telematic aspect, so we quickly moved on.

After many other short-lived ideas, we decided to pursue the calming side of relief rather than the more aggressive one we had previously been considering. This resulted in the idea of creating a quiet forest/pond scene in which the interaction would consist of user clicks which would result in ripples in the pond water. Some initial sketches:

While the idea of a pencil-drawn-esque environment with “animated” “rain” is nice, this format severely limited where users could actually interact: only the small portion of the screen where the water was. The idea is cute and could be effective, but I would rather it be an actual animation with no interaction, or at least a different form of it.

The lack of space in the version with the background environment led us to omit the environment completely. We were then confronted with the decision of whether to keep the visuals hand-drawn or to generate them with code. The hand-drawn ripple sketch is shown here:

While the hand-drawn style is nice, it ruled out the possibility of two ripples overlapping each other. Also, if the ripples were hand-drawn and code were only used to cycle through an animation, there would be no point in using code.
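The overlap problem is exactly what a code-drawn version solves: if each click spawns its own independently expanding ring, any number of ripples can coexist and cross. Below is a minimal sketch of that idea (class and function names are my own, not from our actual sketch; in a real p5.js draw loop you would also stroke a circle per ripple):

```javascript
// Minimal ripple model: each click spawns an independent ring.
// Because rings are tracked separately, any number can overlap.
class Ripple {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.radius = 0;
    this.alpha = 1; // fades from 1 down to 0
  }
  // Advance one frame; returns false once fully faded.
  update(growth = 2, fade = 0.02) {
    this.radius += growth;
    this.alpha -= fade;
    return this.alpha > 0;
  }
}

const ripples = [];

// Click handler: add a new ring at the click position.
function onClick(x, y) {
  ripples.push(new Ripple(x, y));
}

// Called every frame: grow each ripple and drop the faded ones.
function step() {
  for (let i = ripples.length - 1; i >= 0; i--) {
    if (!ripples[i].update()) ripples.splice(i, 1);
  }
}
```

Since every ripple carries its own radius and alpha, two clicks in quick succession simply produce two rings that pass through each other, which the hand-drawn cycle could never do.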

Thus, the final product:


After the presentation, I think a good next step would be to add sound (raindrops, ambient music, or both) to make the piece more immersive. Another would be to limit the number of clicks a user can make per time increment so that the page does not crash.
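That click limit could be a small sliding-window rate limiter: remember recent click timestamps and ignore clicks once the window is full. A sketch of the idea (the names and limits here are placeholders, not code from the project):

```javascript
// Sliding-window rate limiter: allow at most `limit` clicks
// per `windowMs` milliseconds; extra clicks are ignored.
function makeClickLimiter(limit, windowMs) {
  const timestamps = [];
  return function allowClick(now) {
    // Drop timestamps that have left the window.
    while (timestamps.length && now - timestamps[0] >= windowMs) {
      timestamps.shift();
    }
    if (timestamps.length >= limit) return false; // too many recent clicks
    timestamps.push(now);
    return true;
  };
}
```

In the sketch it would be used inside the click handler, e.g. `const allow = makeClickLimiter(10, 1000);` and then `if (!allow(performance.now())) return;` before spawning a ripple, so bursts of clicks can never pile up enough ripples to stall the page.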




The reading Against Black Inclusion in Facial Recognition was very interesting to me because it was the first time I had encountered an argument against including nonwhite faces in facial recognition software. After reading, the opinion makes sense: if the tool is made by and for the oppressor, then it makes sense not to want to be included in the software’s scope. This stuck out to me because it is easy to get caught up in the hype of modern technology and want to be a part of it, but the reading raises the very important point that the effects of this technology can be very harmful.


My artwork is a piece of software that gradually covers the face in jittery dots, blurring the face into obscurity. I used HandsFree.js for the face tracking.
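The core of the effect can be reduced to one pure function: given tracked face landmark points and a progress value, emit a growing set of jittered dots near those points. This is a sketch of that logic only; the landmark format is an assumption (an array of `{x, y}` points, as produced by face-mesh-style models like the ones HandsFree.js wraps), not the library’s exact API:

```javascript
// Given face landmark points and a progress value in [0, 1],
// produce jittered dot positions that gradually cover the face.
// `landmarks` is assumed to be an array of {x, y} points; `jitter`
// is the maximum offset of each dot from its landmark, in pixels.
function jitterDots(landmarks, progress, jitter = 5) {
  // More of the face gets covered as progress goes from 0 to 1.
  const count = Math.floor(landmarks.length * progress);
  return landmarks.slice(0, count).map(p => ({
    x: p.x + (Math.random() * 2 - 1) * jitter,
    y: p.y + (Math.random() * 2 - 1) * jitter,
  }));
}
```

Calling this every frame with a slowly increasing `progress`, and drawing a dot at each returned position, gives both the gradual cover-up and the twitchy jitter, since the random offsets are resampled on every frame.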

Unfortunately, I did not have the time or means to do my performance. I am planning to borrow lights from the 3rd floor lending office and use an empty room to record this piece. My plan is for the video to start in complete darkness and slowly increase the light on my face at an angle, at the same rate at which my face is obstructed, creating a dynamic contrast between the “uncovering” of my face by the light and the anonymization by the filter. The video would be less than 30 seconds long.

When finished, I think this piece will be fairly strong due to its conceptual implications. My interpretation is that it has to do with performative activism, or with representation as done by the broader media.


I used the Pixray Readymade tool because I was using Colab for another task and it would not let me run both simultaneously. I typed in “nuclear mutated flower field sunny day” and this is what happened:

It is a little too saturated and primary for my personal taste, but I am impressed by the interpretation of the prompt.