By now we are able to control one particular camera. We are able to drive the NEMA motors properly and handle horizontal movement.
Included are the scripts for motion control and video processing:
https://drive.google.com/file/d/1Mt1Lz62AW0p8WMhLqsUHOx9Ud8YlL-_e/view?usp=sharing Code for Video Processing
https://drive.google.com/file/d/120m2BRCIG2b0SLR_bQflnva5cOJMdAqp/view?usp=sharing Motion control code
Hardware:
At this point, fabrication of the final construction has begun. Brackets, housing, and other parts were designed, laser-cut, and implemented into the system. Edits and iterations were made in response to problems encountered, such as bending and imbalance. The overall scale was established.
SLD / DXF Files : https://drive.google.com/drive/folders/1K6em43-udjc0ojtTFieh-pTkDOsMLkYD?usp=sharing
01_Title
Audience
02_Statement
Designed to disrupt traditional piece-viewer exhibition dynamics, Audience focuses its attention on whatever it decides is most interesting, turning willing viewers into performers. Audience also taps into the context of surveillance and camera technology in modern society to further impact viewers.
Audience recognizes individual viewers, judges which is most interesting, and locks onto the chosen subject with all three of its cameras. The performer is now aware that Audience's attention, and consequently the attention of other surrounding viewers, is focused on them; the goal is to highlight the reaction to this attention.
The piece by itself is quite uninteresting; it becomes interesting when it interacts with the audience. The name of the project can therefore be viewed in two ways. The first is that the piece is itself an observer, an audience to the people around it. The second is that it requires an audience to function in any interesting way. We found this a compelling theme, as the observed and observing roles become quite blurred in this scenario. What the audience makes of this interaction is up to their own interpretation.
03_Relation to Course Themes
Essentially, Audience is a three-camera tree apparatus mounted in a suitcase, with each camera capable of pan and tilt motions. Yet even with such simple motions, Audience creates the impression of a foreign object with a purpose to research and observe, hopefully inspiring curiosity and caution in viewers.
Audience can serve as an interesting experiment demonstrating the significance of the context of the technology used, its behavior, its appearance, and the synthesis of the three. Furthermore, robotic sculpture in itself carries a connotation that can't be ignored, and it contributes to the impact of this particular piece in a major way. Generally speaking, while complexity adds a unique dimension to robotic sculpture, it is not essential to creating an artistic narrative or message.
04_Outcomes
Hardware:
Audience was generally successful as a physical piece. The decision to have three cameras, arguably the stand-out characteristic of the piece, was extremely important, as the impression of "multiple viewers" within the "Audience" would not have been nearly as impactful otherwise (two cameras can give off the impression of eyes, and don't communicate a "crowd"). Stainless steel beams support the structure as it towers out from the suitcase and over viewers; the scale of the piece successfully established the intended subtle power dynamic and surveillance feeling. Laser-cut braces, an exposed Arduino, loose wiring, and the suitcase form communicate an autonomous, "scavenged tool used in a dystopian future," suggesting a higher power wielding the tool and tying into the fears associated with surveillance.
Software:
The software is the essential driving force behind the cameras' motion. For this project, the code consists mainly of two parts: the OpenCV Python code that captures the video feeds and performs facial recognition, and the Arduino script that receives commands and drives the six motors.
For the OpenCV part, the program opens all three cameras and runs facial recognition on each feed to capture all visible faces. The general behavior is that when a face is detected, the camera tries to center on that person. The program sends the Arduino commands through a concurrent worker, which in turn drives the NEMA or servo motors to the correct position. Thus the camera follows the person around.
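As a rough sketch of the centering logic (the function name, command strings, and thresholds here are illustrative assumptions, not the project's actual protocol), a detected face's pixel offset from the frame center can be mapped to a coarse pan/tilt command:

```python
# Hypothetical sketch: turn a face's pixel position into a coarse motor command.
# The deadband keeps the camera from jittering when the face is roughly centered.

def face_to_command(face_x, face_y, frame_w=640, frame_h=480, deadband=40):
    """Return 'PAN +1' / 'PAN -1' / 'TILT +1' / 'TILT -1', or 'HOLD' when centered."""
    dx = face_x - frame_w // 2
    dy = face_y - frame_h // 2
    if abs(dx) <= deadband and abs(dy) <= deadband:
        return "HOLD"
    # Correct the larger error axis first.
    if abs(dx) >= abs(dy):
        return f"PAN {'+1' if dx > 0 else '-1'}"
    return f"TILT {'+1' if dy > 0 else '-1'}"
```

Run once per detection, this produces a stream of small corrections that makes the camera appear to follow the person.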
The more interesting behavior comes from state assignment and randomness. Each camera shifts between several states: Bored, Tracking, Following, and Scanning. Each state involves different behavior, the most notable being Following. While each camera is essentially controlled separately, they can coordinate: when a tracking camera decides that a person is particularly interesting, it may pull the other two cameras to follow its movement, creating the sense that you are being observed by all three cameras at once.
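The per-camera state idea can be sketched as a small transition function. The state names come from the description above, but the transition rules and thresholds below are illustrative guesses, not the project's actual tuning:

```python
import random

# States taken from the write-up; transitions are an illustrative sketch.
STATES = ("Bored", "Tracking", "Following", "Scanning")

def next_state(state, face_visible, interest, rng=random):
    """One update of a single camera's state machine.

    face_visible: whether a face is currently detected in this camera's feed
    interest:     a 0..1 score of how interesting the current subject is
    """
    if state == "Bored":
        if face_visible:
            return "Tracking"
        return "Scanning" if rng.random() < 0.1 else "Bored"  # occasional idle sweep
    if state == "Tracking":
        if not face_visible:
            return "Bored"
        return "Following" if interest > 0.8 else "Tracking"  # promote interesting subjects
    if state == "Following":
        return "Following" if face_visible else "Scanning"
    return "Tracking" if face_visible else "Bored"  # Scanning
```

When one camera enters Following, a coordinator could broadcast its target to the other two, producing the all-three-cameras effect described above.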
The Arduino script simply receives commands and triggers the corresponding motion in the six motors.
05_Video Documentation
06_Reflections and Challenges
The main shortcoming of the system was the difficulty of scaling the project up to three cameras. Fairly early in development we had a fluid working model with a single camera. However, the OpenCV code with three HD camera feeds consumed a large proportion of the laptop's USB bandwidth. We had to downscale the image quality and give each camera its own USB port, which in turn created difficulties for Arduino control, since the Arduino had to share a port with one of the cameras.
The second challenge was writing to the Arduino port in real time alongside the OpenCV video processing. It was very difficult to make the Arduino control respond in real time, and run time was cut short by a write-capacity problem on the Arduino port, which blocked any further write attempts about two minutes after the piece was turned on.
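One mitigation for the blocked-write problem (a hedged sketch, not the code we actually ran) is to throttle and deduplicate serial writes on the Python side, so the Arduino's input buffer never fills faster than the sketch can drain it:

```python
import time

class ThrottledWriter:
    """Sketch: rate-limit and deduplicate command writes to a serial port."""

    def __init__(self, port, min_interval=0.05):
        self.port = port                  # e.g. a pyserial Serial object
        self.min_interval = min_interval  # minimum seconds between writes
        self.last_sent = 0.0
        self.last_cmd = None

    def send(self, cmd, now=None):
        """Write cmd if it is new and enough time has passed; return True if sent."""
        now = time.monotonic() if now is None else now
        if cmd == self.last_cmd or (now - self.last_sent) < self.min_interval:
            return False  # drop duplicates and over-frequent commands
        self.port.write((cmd + "\n").encode())
        self.last_sent = now
        self.last_cmd = cmd
        return True
```

The tracking loop can then call `send()` every frame while the wire only carries meaningful changes.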
07_Citations
Source code:
https://towardsdatascience.com/face-detection-in-2-minutes-using-opencv-python-90f89d7c0f81 OpenCV facial recognition reference
https://courses.ideate.cmu.edu/16-375/f2021/text/code/StepperWinch.html#stepperwinch Arduino script
https://courses.ideate.cmu.edu/16-375/f2021/text/code/suitcase-scripting.html#suitcase-motion-demo-py Pyserial script reference
CAD files:
08_Photo Documentation:
09_Contributions
Ethan:
Original brainstorm / ideation
Visual / artistic direction
Design research / BoM
Prototype fabrication / iteration
Final fabrication
Alex:
“Behavior” brainstorm / ideation
“Behavior” research
Software iteration
Final software
Statement of Narrative
In this piece, we wanted to express the deification of a temperamental robotic cuttlefish.
We set out to create a temperamental cuttlefish in a suitcase with a mind of its own. It wakes up when people approach its box and squirts water at whoever disturbed its sleep.
The viewer first experiences a closed box, which opens as they approach. They then would see the cuttlefish, eyes glowing and the light from the spotlight reflecting off of the fabric, making the entire piece shine. This creates a spooky, mystical effect. The cuttlefish then waves its arms around, the servos making a cranky sound that speeds up as the user stays in front of it. Finally, they get squirted with a splash of water before the cuttlefish closes the box, going back to sleep.
Course Themes
We wanted to make our project surprisingly animate. The robot starts off in the form of a box, then turns into an animate anthropomorphized individual. We use the autonomous behavior of the robot to create a narrative ritual that the viewer unknowingly participates in by stepping up to the box. We use simple machines and laser-cut components to create animation, such as through the tentacles.
Outcomes
Viewers either seemed hesitant to come close to the piece, or went very close and tried to interact with it. People who saw others getting squirted were more wary to approach. The cuttlefish seemed intimidating to most people, especially those who saw the tip of the squirt bottle. Even simple LED lights were interpreted as cameras by viewers, adding to the anthropomorphism of the cuttlefish.
Video
Citations
Concept Citations
Technical Citations
Technical Documentation
All code to run the Cuttlefish is in this Drive folder.
All CAD files and DXF’s are in this Drive folder.
Photographs:
Specific Technical and Artistic Contributions
Annie: Mechanical and electrical design, electrical assembly, physical assembly, artistic direction
Jonny: Electronics assembly, code, plot development
The title "Who are you?" was meant to force introspection in the guests who look at the piece. The piece was really meant for people to ponder how they see themselves.
In the end, the main concept we wanted to enforce was
"There are not-so-great things about you that you should accept, love, and forgive."
We wanted the guest to accept “the demons” they have or see in themselves.
The guest would have been enchanted by the sudden start of the machine. They see in the blue light an angel that falls and rises repeatedly.
As they come closer, the light changes to red and that angel has now become a devil.
As they drew nearer, they would see the mirror at the back of the apparatus: themselves, but also the quotes and questions in the laser cuts.
Quotes such as:
This would cause the viewer to introspect about their own self. It tries to deliver a message that when we look close enough we will see faults but we should accept those faults.
The device is fairly simple. The whole device is made of three main active components.
This simple device, with a simple effect, had the potential to tell an intriguing narrative. It mesmerizes the viewer with the effect, but also allows their presence to alter what they see.
In addition, the mirror at the back allows the visitor to be part of the piece.
Even though there were some pitfalls, people were drawn to the piece and were amazed at how well it seemed to work.
There was some difficulty in getting the space dark enough for the zoetrope to show. We needed to add an enclosure to limit the light reaching the piece, but it added to the intimate nature of the experience.
The figurines could only hold small distinct features. This caused the viewer to really focus on the small changes that were happening between state changes.
All in all, some of the design setbacks really ended up serving the narrative better.
From a technical perspective, we learned how zoetropes worked and how we could adjust the illusion with just a light color change. We also learned how measurements should be done for laser cutting and how trial and error play a large part.
In the end, the piece did feel personal. The fact that the quotes were not easily readable added to the mystery of it. People enjoyed that it wasn't in their face.
The piece was in a way a success. It was a conversation starter and it was mesmerizing to see. There was a comment that there was a smear effect that caused people to not see the figurines as much.
We were initially scared of adjusting the duty cycle for it fearing that it might cause the figures to not stick out as much.
After some testing after the fact, we did modify the code so the duty cycle is smaller, and the smearing was reduced. (Thanks, Zack)
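The timing change is easy to express. As an illustrative sketch (the numbers below are examples, not our actual values), the strobe's on-time per frame scales linearly with the duty cycle, which is why a smaller duty cycle reduces smear at the cost of brightness:

```python
def strobe_on_time_ms(rpm, frames_per_rev, duty):
    """Time the strobe stays lit per flash, in milliseconds.

    rpm:            zoetrope rotation speed
    frames_per_rev: number of figurines around the drum
    duty:           fraction of each frame period the light is on (0..1);
                    smaller duty = less smear, dimmer image
    """
    frame_period_ms = 60_000.0 / (rpm * frames_per_rev)
    return frame_period_ms * duty
```

Halving the duty cycle halves how far each figurine travels while lit, which is exactly the smear reduction viewers noticed.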
People can see the reduced smearing here
There were some comments that it would have been nice to have lighting throughout the piece. This would have been a nice feature, but time and budget were a bit out of our hands.
Photo Gallery from the show
https://www.facebook.com/watch/?v=401312374631932
The code can be found at
https://github.com/EbrahimKaram/WhoAreYou
You can download the code as a zip here
Cad Files of the cuts can be found in the link below.
https://drive.google.com/file/d/1fWrsJiacJ24VHIt78bk-Gvvs3DH_f-pJ/view?usp=sharing
There is a playlist of the weekly progress videos that were done for the piece in the link below
https://youtube.com/playlist?list=PLgK8ktcGDGs8UTsSelbhrGaLpSCA9uN-X
Evangeline: She designed and laser-cut all of the pieces. The mirror and the figurines of the zoetrope were mainly her efforts. There were a lot of struggles in making the zoetrope the right size for the piece, and tremendous trial and error.
Ebrahim: He worked on the electronics and code that brought the experience to life. This includes the strobe light and the speed rotation.
The design was done collaboratively.
When we first drafted what we wanted to accomplish with our "Magical Canvas," we knew we wanted to evoke the idea of time passing and progressing. The idea behind our project was to demonstrate the passing of time and how change is inevitable. So we took the idea of a blank canvas, a barren surface, and transformed it into a beautiful, idealistic landscape. And from the perspective of the audience, that is exactly what we wanted to show: a blank piece of acrylic becoming a Claude Monet inspired landscape.
How does our project fall in line with the course's themes? We were able to use robotic movement and mechanisms to create an art piece, one that was animated in a surprising manner. By using conveyor belts, pulleys, and magnets, we were able to drive pop-up mechanisms that create a landscape mimicking one of Monet's paintings. The autonomous behavior enhances the narrative, especially for our project: because we use an opaque piece of acrylic, the audience doesn't know there is a mechanism under the acrylic layer. Thus the pieces of fabric seem to slide by "magic," coming together to create a landscape. There is a persistent misconception that robotics is only for practical and technical uses, but this project and the other RfCP projects prove that autonomous behavior and robotic movement can create artwork.
Creating our final project was a journey of trial and error. In regards to the soft-material manipulation, many adjustments were made along the way. We had initially started with a busier design with more pop-up elements, but realized that, given what our fabric ribbons could accomplish, it would not be feasible. Many prototypes went into the fabric ribbons and how they would be structured. Initially, paper was not meant to be used in our design at all, but it proved to be an interesting medium to work with alongside the fabric, especially because it added more spring to the fabric's free structure. We discovered that if we created pleats in the fabric to fold up the pop-up paper elements, it would be easier to unfold as the magnet dragged the fabric. We also learned that with the paper's springiness it would be hard to keep the pleats on the side of the suitcase before being unravelled, so we used sewing pins to fasten the fabric; as the fabric unravelled, the pins would detach. Working with the paper and the fabric taught us a lot about mixed media and how the materials affect one another. It was unexpected to see how completely the construction paper changed the movement of the fabric, and it was a challenge to work with. However, through the various prototypes, we were able to come up with a working design.
As far as fabricating the model, even that process required some trial and error to see which model design would be most appropriate. We started out by building a model which consisted of one base horizontal panel and two upright vertical panels, with one stepper motor attached to each vertical panel and a belt between the two. We realized that this wasn’t scalable because of two things. One, using two stepper motors where one was the “idler pulley” and the other actually had its motor powered was a waste of a stepper. Two, the two panels wouldn’t be that supportive, and having one panel connect the stepper and the idler pulley would be better.
Next, we built a model which consisted of a stepper and an idler pulley, with one panel connecting the two and a belt with a coupler moving between them. While the belt was functional, we discovered the panels needed to be taller in order to account for the heights of the other coupler pieces, which were larger than the one we were using, and would have to have the magnets glued on top. We also realized we needed to build a model with walls so that the panels would be better supported, and we needed to organize the panels so that there would be enough room for 4 motors, but the whole model would fit in the small space of the suitcase. This required a lot of measurement and planning with paper drawings before using Fusion360 to create the actual model and laser cutting and assembling the model.
While assembling the model, we ended up running into issues we hadn’t previously considered, like the strength of the magnets on the coupler pieces causing them to “lift” to meet the magnets on the other side of the acrylic, and the belts not being perfectly tensioned. We ended up fixing this by using rods that went through most of the coupler pieces (minus the one without holes to put the rod through), so the pieces could only slide linearly and not rise. We also raised the magnets to a level where they were touching the acrylic by having them be situated on top of mini platforms, which were then glued to the couplers.
As for the code, it was fairly simple. The only trial and error needed was to find which speeds the magnets should move at to produce the most dramatic effect, so we could hardcode a "start" and a "reset" mode and have each motor start, move its magnet along the belt for some seconds at a particular speed, and then stop.
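The hardcoded sequence idea can be sketched as a simple schedule. The motor indices, speeds, and durations below are illustrative placeholders, not our actual values:

```python
# Illustrative "start" schedule: (motor_index, speed_steps_per_s, duration_s).
START_SEQUENCE = [
    (0, 200, 4.0),   # waterlily ribbon
    (1, 200, 5.0),   # swan ribbon
    (2, 150, 6.0),   # bridge ribbon, the showstopper
]

def run_sequence(sequence, move):
    """Run each step in order; move(motor, speed, duration) drives one belt."""
    for motor, speed, duration in sequence:
        move(motor, speed, duration)

def reset_sequence(sequence):
    """Build the 'reset' mode: reverse each motion to return the magnets home."""
    return [(motor, -speed, duration) for motor, speed, duration in reversed(sequence)]
```

In the real piece, `move` would issue stepper commands; here it is left as a callback so the schedule itself stays hardware-independent.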
For our final exhibition, we had set up three ribbons that would be unravelled in a sequence. First, a ribbon holding a waterlily would unravel and then a ribbon with two swans, and finally for our showstopper, we had a bridge be unravelled onto the suitcase. The lighting was set to add extra reflection onto the sheer fabric and create an effect that the fabric was “water.”
We wanted to create a landscape that mimicked Claude Monet’s iconic waterlily painting:
soft material references:
We took a lot of inspiration from the other IDeATe class, Kinetic Fabrics, thanks to the help of Professor Olivia Robinson. In addition, a lot of references were made to tutorials on how to create pop-ups.
To our relief and surprise, during the final showcase our project worked most of the time. The least ideal part was that we needed to manually reset and pin the fabrics before redoing our demonstration, but the audience was happy enough to wait. Everyone who saw the functioning project wanted to know how we were able to achieve the movement of the fabric, and was interested to hear it was done solely through magnets on a conveyor belt. The machine created an art piece solely through movement, producing a landscape for the audience to see.
Model Fabrication/Code – Sunjana Kulkarni
Soft Materials/Art – Ankita Bhanjois
In a quite literal whirlwind of light, sound, and natural elements, we built an isolated world that can take on many forms. The choreography between elements follows a certain flow: a natural progression of the color wheel with corresponding levels of movement. By enclosing the most kinetic pieces of the project, we separated the viewer from the action and left room for wonder about how it must feel inside this boxed-off world. Does the air feel different in there? What would it be like to be small, inside this world, looking out at the colorful lights and changing shadows?
At the beginning of the course we discussed the ways robotic motion can create either random, unscripted motion or, on the flip side, a choreography. Our project aimed to narrate a story of changing seasons through an animation that combined both. While our rotating servo motors, spinning fans, and programmed light choreography were the basis of our animation and story, all of it behaved exactly as the code directed. On the other hand, the wind spinning the reflective paper, and the way the patterns were projected onto the black screens, could not be accurately predicted, and we relied on that random motion to give us the specific effect we desired. In short, a simple machine can have its every motion controlled and yet still produce a totally unexpected output.
We can see this relationship between controlled and random movement in real-life changing seasons as well. While it is known that summer starts in June and winter in December, the weather of each individual day is unknown to us. The random appearance of rain, strong winds, or sun on a cold day are the elements of nature that surprise us. This, by extension, goes to show how a robotic sculpture can have every movement choreographed and still surprise us with random kinetic motions.
The process we used to formulate our project idea took the path of resource confined thinking – though we knew we always had the option to order more supplies or materials, there was something appealing about making use of everything that was already available to us in the IDeATe spaces.
The successful parts of this choice:
The failures of this choice:
We learned a very valuable lesson from the approach we took here: while we made an accurate bill of materials from the beginning, any and all specifications (size, power, current/voltage ratings, ranges of motion, compatibility) of already available materials should also be recorded in this process. It would make wiring everything up an easier task in the end.
Aside from the technical lessons, there was also an artistic inquiry explored in the use of the LED strips. Choosing the colors and transitions was a joint effort, pulling from our personal experiences and knowledge of colors and their relationship to the seasons, but also from an understanding of how different colors play with people's emotions.
We were successful in creating a lighting choreography that evoked wonder and a sense of "dancing" into each season. Additionally, our color choices made the changes from winter to spring to summer to fall clear. One of our biggest successes was using the LEDs to create "flashes of lightning" simulating summer thunderstorms!
However, we would have loved to spend more time exploring the transitions between seasons. We weren’t able to define the subtle differences between how winter changes to spring vs. how fall changes to winter.
Code: All fans, servos, and LEDs were programmed to work synchronously from a single Arduino source code file.
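The actual choreography ran as Arduino code with FastLED; as a language-neutral sketch in Python (the season colors below are illustrative, not our actual palette), the season-to-season transitions amount to linearly blending one base color into the next:

```python
# Illustrative season base colors (RGB); not the project's actual palette.
SEASON_COLORS = {
    "winter": (80, 120, 255),
    "spring": (80, 255, 120),
    "summer": (255, 200, 40),
    "fall":   (255, 100, 20),
}

def blend(c1, c2, t):
    """Linear blend between two RGB colors; t=0 gives c1, t=1 gives c2."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))
```

Stepping `t` from 0 to 1 over a few seconds produces the gradual "dance" into the next season, which is essentially what FastLED's built-in blending does on the strip.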
CAD files: We used all of our individual CAD models to create an assembly so that we could visualize our final piece as one unit; this let us make tighter measurements, as one of our constraints was the small size of our suitcase. After initial prototyping and multiple iterations, our final CAD designs are given below:
Using a 4pin fan with the Arduino:
FastLED Library and Documentation:
The concept
‘Where Did Alam Go?’ is an interactive installation that depicts the selective rejection of people based on their religion. The installation is inspired by a group of Rohingya Muslims who were deported because of the Citizenship Amendment Act. The act by the Indian government accepts illegal migrants who are Hindu, Sikh, Jain, Parsi, Buddhist, or Christian, but not those who are Muslim. The installation is an abstract representation of this unjust law. Personally, the installation also subtly reflects my search for my departed loved ones, like Alam.
Performance objectives
The installation is a rejection system that rejects people in India based on their religion. The hand crank operates the conveyor belt that represents India, where people of different religions are tried for their faith. The swinging gavel, representing the judicial system, selects specific people and hits them off the conveyor belt. They are struck and therefore "rejected" into a suitcase placed under the conveyor belt, representing their displacement and the uprooting of these innocent people from their livelihoods. Having the installation driven by the guests seeds the thought that they are accomplices to this brutal discrimination. Their participation is intended to make them realize how we all, involuntarily or voluntarily, are a part of this injustice happening against our fellow human beings.
Viewer experience
Guests started with a sense of curiosity, wanting to figure out the mystery behind Alam's disappearance and what the machine does. They were excited to interact with the installation by turning the crank, then surprised to see the gavel knocking certain people off the conveyor belt. Once they understood the idea of rejection, they felt disappointed to be the one initiating and participating in this act of discrimination. They felt sad to see the condition of the forcefully displaced people in the suitcase. They left the exhibit with the thought of how they, involuntarily or voluntarily, are a part of the injustice happening against our fellow human beings.
Our goals
‘Where did Alam go?’ is intended to generate awareness regarding the injustice happening against Muslims in India. The goal is to make the audience think about how we are all involuntary accomplices in the discrimination of our fellow human beings. The hand-driven crank allows guests to interact with the installation, making the experience more engaging and relatable.
One thing that we were able to accomplish in our project was making people in the audience feel complicit. Instead of having people just watch the conveyor belt with people go by, they had to participate in the action. The person had to turn the crank, causing the conveyor belt to move and eventually causing the green people to get aggressively hit by the gavel. This meant that the users were aiding in the process, which symbolized how people can be complicit with the harmful actions of the government. Several people told us that they felt they were a part of the wrongdoing because they were moving the belt forcing people to get hit. By making people part of our installation we were able to pull people into the narrative and spark emotion.
Furthermore, the action of the gavel hitting the green people had a large impact on our audience. When a green person passed by one sensor the gavel would rise slowly until it was nearly 90 degrees and then pause there until the green person was right in front and hit them aggressively into our “suitcase”. The motion of the person being struck by the gavel really resonated with people and made them wonder and ask why it was happening. With this simple motion we were able to create an emotion, understanding of and empathy for the green people who represented Rohingya Muslims in India.
Overall, I think this piece brought awareness to the Indian government policy that is deporting the Rohingya Muslims in a visual and impactful way. The piece effectively demonstrated selection and rejection, with the nuance of making the audience complicit. We learned how a simple mechanism and simple code can affect people in such a powerful way. I think this piece advanced our understanding of how to make and read political and protest art.
The project turned out to be a success. It informed our guests of the issue we wanted to address. The gavel striking the people gave the installation surprisingly animate qualities. The interactive hand crank mechanism made the experience more engaging and impactful. Creating early prototypes and previz 3D models helped us create an installation as we imagined it. Early documentation of experience goals, emotional arc and journey maps helped us produce the impact we aimed for.
We did not have enough time to achieve some of our stretch goals like lighting the installation using sensors and LEDS to highlight different areas that would tell the story more impactfully. We should have used more time to test our final installation to ensure its smooth functioning during the show.
Solidworks CAD:
Arduino Code:
Rendering:
3D Model of the installation-Previz
Human CAD drawings for laser cut
Experience design documentation
Mimi
Both
Anjali
The Scare-case is an interactive, travel-size version of the Fridge Scare, a common jump-scare technique in horror movies where audience members are scared just after the moment they expected a scare has passed.
The objectives of our performance were to create an interactive experience that controlled the emotions and expectations of the visitor. We achieved interaction by encouraging the visitor to open and close the suitcase’s lid. We attempted to control the visitor’s emotions and expectations in a Fridge-Scare-esque manner, by removing the fright when expected and delivering the fright when not.
The suitcase begins closed and rattles in order to grab a visitor’s attention in both a visual and auditory way. Once the visitor is close enough, the rattling ceases, thus inviting the visitor to open the suitcase and see what spooky mysteries lie inside.
To the visitor’s surprise and relief, opening the suitcase reveals an empty interior and the lack of a scare. After convincing themselves that the experience has concluded, the visitor closes the suitcase lid in order to reset the suitcase for the next visitor.
As soon as the lid is closed, the false lid springs open as the suitcase begins to violently rattle once more! Hopefully, this elicits a jump scare from the visitor, given their lowered guard. After the scare concludes, the false lid slowly re-closes and the suitcase waits patiently for its next victim.
One theme in this course was that a robot is a machine that is "surprisingly animate." We attempted to achieve this in our piece through the use of sensors and motors. More specifically, through distance sensors, motion sensors, and motors, we were able to surprise visitors with both expected and unexpected animation.
The expected animation came in the initial rattling of the suitcase powered by the DC motors and triggered by the motion sensor. This rattling marks the beginning of the visitor’s experience with the machine. We’d categorize this animation as expected since there is a clear and easy-to-understand relationship between the motion sensor and the resulting rattling.
The unexpected animation came in the sudden opening of the false lid. This was an event not within the initial scope of the machine. We were able to take this surprising event and elevate the surprise even more by using sensors to detect when the lid was re-closed, and thus when the visitor was least likely to expect a new event.
Our sculpture demonstrates how lackluster animation of a robot can be wielded as a tool to lower one’s guard in order to later heighten the surprise of future animation.
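The overall logic can be sketched as a small state machine (the state and sensor names here are assumptions for illustration, not the sketch we actually ran on the Arduino):

```python
def step(state, motion, close, lid_open):
    """One update of the Scare-case logic; returns the next state.

    motion:   motion sensor sees someone in the room
    close:    distance sensor says a visitor is right at the suitcase
    lid_open: the main lid is currently open
    """
    if state == "IDLE":
        return "RATTLE" if motion else "IDLE"      # rattle to attract attention
    if state == "RATTLE":
        return "QUIET" if close else "RATTLE"      # go still once the visitor is near
    if state == "QUIET":
        return "OPENED" if lid_open else "QUIET"   # invite the visitor to open the lid
    if state == "OPENED":
        return "SCARE" if not lid_open else "OPENED"  # fire when the lid is re-closed
    return "IDLE"  # SCARE: pop the false lid, rattle, then reset for the next victim
```

The key trick is visible in the OPENED row: nothing happens while the visitor is looking inside, and the scare only triggers on the closing action, when their guard is down.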
The most successful part of this project, I think, was creating something that was "surprisingly animate." Movement in general can have varying degrees of apparent life or energy, and something that moves and stops moving in relation to the people around it was a strong concept to hold onto. Where there could have been more refinement, given more time, is the construction and mechanics of the pop-up false lid. If it had a quality of movement closer to the joltiness of the "rattle" movement, it might have been a more cohesive piece.
The main issue I noticed as people approached the suitcase in the gallery was that they didn't know what to do with it. Ideally a piece does not need its creator standing right beside it, explaining how it is meant to be experienced. To simplify the piece, earlier in the process we cut the idea of having speakers and noises emit from the suitcase; perhaps that would have been a good way for the piece itself to encourage people to actually open it.
Laser-cut files for gears, false lid, and hinges.
https://drive.google.com/file/d/1Y5cURw3J6UZVCzGKslVZdzoijgcBHqmP/view?usp=sharing
Sue:
Ana:
The viewer is drawn in by the piece. The piece then activates to show you an animated figure that is illuminated by a blue light.
This figure transforms from a fetus into an angel. As you step in closer to observe the figure and to also get a closer look at yourself in the mirror in the back, the figure is illuminated by a red light instead and now instead transforms into a devil.
There are quotes and questions at the back for the viewer to stop and ponder.
“The most terrifying thing is to accept oneself completely”
Carl Jung
Has the guest accepted themselves completely with their good and bad traits?
Are you at peace with your demons?
There are videos of the project in the link below.
https://drive.google.com/drive/folders/1ilrTLhvp2Bp4DPPHF_XD7xJffvvBUvgV?usp=sharing
We made a model that had red demonic features and blue angel-like features. These features would appear under different lights.
You might not see much with the frame rate of the phone recording.
A trial with Evangeline, showing which sensors get triggered.