Project Documentation – F16 54-498/54-798/60-446/60-746: Expanded Theater
Carnegie Mellon University, IDeATe
https://courses.ideate.cmu.edu/54-498/f2016

Reflections
https://courses.ideate.cmu.edu/54-498/f2016/reflections/
Tue, 20 Sep 2016

Created by Yasha Jain, Michael James, and Kabir Mantha

The piece

“Reflections” explores power relationships through the manipulation of angle and time. A rotatable, double-sided reflective surface (commonly known as a whiteboard) inherently forces spectators into two disparate experiences: it is impossible for one person to see both sides at once. While someone tilts the whiteboard, they can only see the effect on their own experience. They are thus naturally inclined to disregard the experience of “the other” (side).

[Image: Reflections overview]

The piece uses two projectors, two cameras, a whiteboard, an accelerometer, and signal processing to create this interaction opportunity. A projector is directed at each side of the whiteboard, positioned so that a person may stand in front of the board without blocking the light. Two cameras are mounted on top of the whiteboard, pointing in opposite directions. An iPhone is affixed to the top of the whiteboard so that the y-axis of its accelerometer corresponds to the tilt of the board. The camera feeds and accelerometer data are processed in Max/MSP/Jitter to create images mapped with Millumin. In Max, an operator can switch between different recipes that dictate how inputs mix into outputs.
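
In the installation this input handling lives in the Max patch, but the core idea is easy to sketch. Below is a minimal Processing analog, using the oscP5 library, that receives the phone’s accelerometer stream and smooths the y-axis value into a tilt parameter; the OSC address pattern and port are assumptions, since they depend on the app used to stream the sensor data.

```java
// Minimal sketch of the tilt input, assuming the phone streams its
// accelerometer over OSC. Address pattern and port are hypothetical;
// the actual piece handled this stage in Max/MSP.
import oscP5.*;

OscP5 osc;
float tilt = 0;          // smoothed y-axis value, roughly -1..1

void setup() {
  size(400, 200);
  osc = new OscP5(this, 8000);   // port the phone app sends to (assumed)
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/accelerometer")) {   // hypothetical address
    float y = m.get(1).floatValue();            // y-axis of the accelerometer
    tilt = lerp(tilt, y, 0.2);                  // low-pass filter the jitter
  }
}

void draw() {
  background(0);
  // visualize the board angle as a rotated line
  translate(width / 2, height / 2);
  rotate(asin(constrain(tilt, -1, 1)));
  stroke(255);
  line(-150, 0, 150, 0);
}
```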

[Image: Reflections – hands]

One recipe uses the tilt of the board to control the playback of recorded video. This recipe starts with a recording period during which people can move around in front of the cameras. In the playback period that follows, the tilt controls the position of the “playhead” in time, allowing the interactor to move backward and forward through the recorded frames, one by one or rapidly.
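
To make the recipe concrete, here is a hedged Processing analog of that logic: frames are buffered during the recording period, and afterwards the tilt value (as produced by the handler sketched above) is mapped directly onto a frame index. The actual piece implemented this in Max/MSP/Jitter.

```java
// Sketch of the "playhead" recipe: record frames, then let tilt scrub them.
// The piece implemented this in Max/MSP/Jitter; this is a Processing analog.
import processing.video.*;

Capture cam;
ArrayList<PImage> frames = new ArrayList<PImage>();
boolean recording = true;
float tilt = 0;   // would be fed by the OSC handler shown earlier

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  if (recording) {
    frames.add(cam.get());        // copy the current frame into the buffer
    image(cam, 0, 0);
  } else if (frames.size() > 0) {
    // map tilt (-1..1) onto the recorded timeline; full tilt = either end
    int playhead = int(map(constrain(tilt, -1, 1), -1, 1, 0, frames.size() - 1));
    image(frames.get(playhead), 0, 0);
  }
}

void keyPressed() {
  recording = false;              // end the recording period
}
```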

A second recipe layers live video with delayed video to create a double image. The tilt of the whiteboard controls how delayed the second image is.
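
This recipe reduces to a ring buffer of recent frames, with the tilt choosing how far back in time to reach. A sketch under the same assumptions as above (the buffer length here is an arbitrary choice):

```java
// Sketch of the delay recipe: live video layered over a delayed copy,
// with the whiteboard's tilt setting the delay length. Buffer size is
// an arbitrary choice here; the piece ran this recipe in Max/Jitter.
import processing.video.*;

Capture cam;
PImage[] ring = new PImage[120];  // ~4 seconds of history at 30 fps
int head = 0;
float tilt = 0;                   // fed by the OSC handler shown earlier

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  ring[head] = cam.get();                       // store the newest frame
  int delay = int(map(abs(tilt), 0, 1, 1, ring.length - 1));
  int past = (head - delay + ring.length) % ring.length;
  image(cam, 0, 0);                             // live layer
  if (ring[past] != null) {
    tint(255, 128);                             // ghost the delayed layer
    image(ring[past], 0, 0);
    noTint();
  }
  head = (head + 1) % ring.length;
}
```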

Getting here

Before our first workday, we talked about various ideas, such as projecting on a mirror ball, but nothing really stuck. When we first got together to work, we quickly landed on the idea of projecting on the two sides of a whiteboard. The fact that only one side of the whiteboard can be seen at once led to a discussion about privilege and about the physical relationship between interactors and the angle of the board. While we first discussed using found footage, we settled on live camera feeds because they lend themselves to a more generative and performative foundation.

After playing with the basic set-up, we realized we could do powerful things if we had data about the tilt of the whiteboard. Because our smartphones are packed full of sensors, we quickly settled on using an iPhone’s accelerometer data to interpret the angle of the board. With our set-up and this tilt data in hand, we brainstormed different ways of mixing things together. One of the most exciting ideas we landed on was using the tilt of the board to move backwards and forwards through recorded video.

[Image: Reflections – Yasha]

At this point, we started building a Max patch and stepped up our set-up by adding the iPhone and directional lighting. We experimented with these variables over two sessions; about halfway through, we noticed the way projected light bounced off the whiteboard and created murky figures on the architectural surfaces of the space. We quickly jumped on this observation, finding it to be one of the major elements of the piece.

[Image: Reflections – Kabir]

At this point, we added a false wall so that both sides of the installation could experience this reflection. We also tweaked our recipes, creating one where a current frame and a past frame are composited to create a double image. We got all the pieces (laptop, projectors, cameras, Millumin, Syphon, and Max) up and running and shared the work with our class.

Close (but not too close)
https://courses.ideate.cmu.edu/54-498/f2016/close-but-not-too-close/
Tue, 20 Sep 2016

By Jess Medenbach, Emily Saltz, Irene Alvarado, Caroline Hermans

Overview: Close (but not too close) explores the boundaries of personal—and technological—space. The viewer’s relationship with the projected woman changes depending on their proximity to her, leading them to work with her to find just the right distance for comfort.

Video: https://vimeo.com/183056679

Almost too close

Tech: A Kinect tracks the viewer’s distance from the video projected on a sheet. The distance values, sent over OSC, are divided into four ranges, each range triggering a different set of video clips and glitch effects in a Processing sketch. The distance ranges are also sent to Max/MSP, which triggers one of three sounds to play.
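
A minimal Processing sketch of that range logic is below. The thresholds, ports, and OSC address patterns are placeholders rather than the project’s actual values, and the per-range video clips and glitch effects are only stubbed out.

```java
// Sketch of the proximity logic: bucket the incoming Kinect distance into
// four ranges and forward the range to Max for sound. Thresholds, ports,
// and address patterns are placeholders, not the piece's actual values.
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress maxPatch;
int range = 0;          // 0 = far ... 3 = too close
float[] thresholds = { 3.0, 2.0, 1.0 };   // meters, hypothetical

void setup() {
  size(400, 200);
  osc = new OscP5(this, 12000);                    // listen for depth data
  maxPatch = new NetAddress("127.0.0.1", 7400);    // Max patch (assumed port)
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/kinect/distance")) {    // hypothetical address
    float d = m.get(0).floatValue();
    int newRange = 3;                              // default: "too close"
    for (int i = 0; i < thresholds.length; i++) {
      if (d > thresholds[i]) { newRange = i; break; }
    }
    if (newRange != range) {
      range = newRange;
      OscMessage out = new OscMessage("/range");   // tell Max which sound set
      out.add(range);
      osc.send(out, maxPatch);
      // here the real sketch would also cue that range's video clips
      // and glitch effects
    }
  }
}

void draw() {
  background(range * 60);   // crude visual stand-in for the per-range clips
}
```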

Experience: To start, a single viewer enters an enclosed space and is met by an overpowering figure of a woman in a black dress and a visible Kinect sensor. She’s not in any recognizable physical space, but surrounded by slowly wavering color-TV pixels swimming in a psychedelic swirl of blue and green. You position yourself not just within her internal world, but also, transparently, in relation to the Kinect. From a distance, she’s uncomfortable, looking away distractedly over a soundtrack of choppy cell-phone interference. A physical pathway dares you to come closer. As you walk forward, your relationship with the woman changes: she directs her stare at you, and a resonant major-key drone hums. The video experience is slightly different for each viewer; at this comfortable distance, she might smile, or spread her arms to embrace you. But walk further and you cross into “too close” territory. The woman glitches (an effect rendered in Processing), feedback sounds echo, and the red button of the Kinect stares you in the face.

Future directions: We have many ideas about how to expand this project to offer other possibilities for what it means for people to find a shared comfort space. Currently, the metaphysical aesthetics (the Lynchian woman in a black dress, who may feel more like a fantastical symbol of a woman than an actual vulnerable person, and the artificial psychedelic background) suggest that the viewer is dealing with distance in relationships in the abstract. As a team, however, we didn’t intend to make an essentialist point about there being definitive distances for inducing comfort. It would be interesting to explore how these distance triggers might change for different characters. In addition, she currently reacts to you, but you can only react to her in a narrow way, by walking backward and forward. That means once you’ve figured out what makes her comfortable or uncomfortable, there are no more surprises (though the clips change slightly for each state). It may be valuable to think about other dimensions of movement and expression that the viewer could use to interact with the projected figure, or even with another viewer of the piece.

Esophageal Sphincter
https://courses.ideate.cmu.edu/54-498/f2016/esophageal-sphincter-5/
Tue, 20 Sep 2016

By Clare Carroll, Tim Sherman, and Samir Gangwani

The prompt for our project was to recreate an exterior space in the black box theater. We chose to explore the unseen space of the digestive tract in our piece, combining mouth and colon to create an otherworldly internal experience.

[Image: Esophageal Sphincter system diagram]

This project involved getting many different pieces of technology to communicate with each other. We wanted the Kinect to control elements of both the audio and the video, so the openFrameworks app had to receive and process the Kinect data and then send data out to two destinations. For the video, we sent the particle-simulation image to Millumin via Syphon, which was implemented in the oF app. For the audio, we sent OSC messages containing the coordinates of the blobs the Kinect was tracking to the Max patch responsible for outputting the Ambisonic audio. In Millumin, we overlaid the particle image with our found and pre-recorded videos and sent it all to a projector hung from the ceiling. The projector threw its image onto a mirror, hung at an angle such that the projection landed on the floor, where the audience viewed it.
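
The OSC leg of that pipeline is straightforward to sketch. The original app was written in openFrameworks; the Processing-flavored stand-in below forwards a tracked blob’s centroid to the Max patch, with the mouse faking the Kinect tracker and the address pattern and port as assumptions.

```java
// Stand-in for the openFrameworks app's OSC output: forward each tracked
// blob centroid to the Max patch driving the Ambisonic audio. The blob
// is faked with the mouse; addresses and port are assumptions.
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress maxPatch;

void setup() {
  size(640, 480);
  osc = new OscP5(this, 12001);
  maxPatch = new NetAddress("127.0.0.1", 7500);  // Max patch (assumed port)
}

void draw() {
  background(0);
  ellipse(mouseX, mouseY, 20, 20);   // pretend the mouse is a tracked blob
  // normalize to 0..1 so the receiving patch is resolution-independent
  OscMessage m = new OscMessage("/blob");        // hypothetical address
  m.add(0);                                      // blob index
  m.add(mouseX / (float) width);
  m.add(mouseY / (float) height);
  osc.send(m, maxPatch);
}
```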

With the Kinect, we looked for people, or “blobs”, around the projection. For each blob, we established an attraction point for the oF particle system. This allowed the virus-like particles to swarm toward audience members as they walked through the experience.
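
The attraction behavior is a standard steering pattern. A compact Processing version of the idea (the piece implemented it in openFrameworks) might look like this, with the mouse standing in for one Kinect blob’s attraction point:

```java
// Compact version of the blob-attraction idea (original in openFrameworks):
// each particle accelerates toward the attractor, so the swarm converges
// on audience members. The mouse stands in for a Kinect blob.
int N = 400;
PVector[] pos = new PVector[N];
PVector[] vel = new PVector[N];

void setup() {
  size(640, 480);
  for (int i = 0; i < N; i++) {
    pos[i] = new PVector(random(width), random(height));
    vel[i] = new PVector();
  }
}

void draw() {
  background(0);
  PVector attractor = new PVector(mouseX, mouseY);  // one blob's centroid
  stroke(120, 255, 120);
  for (int i = 0; i < N; i++) {
    PVector pull = PVector.sub(attractor, pos[i]);
    pull.setMag(0.15);            // constant attraction strength
    vel[i].add(pull);
    vel[i].mult(0.96);            // drag keeps the swarm from exploding
    pos[i].add(vel[i]);
    point(pos[i].x, pos[i].y);
  }
}
```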

The audio comprises three main tracks. The first is audio pulled from a colonoscopy procedure; the original recording was cut in iZotope RX 5 Audio Editor and then treated with reverb in Ableton so that only the doctor’s voice could be heard. The second is a recording of swallowing, processed with granular feedback and delay. The third is built from improvised mouth sounds (slurping, chewing, etc.) with reverb and granular synthesis. The swallow and mouth-sound tracks were recorded with a Yeti mic.

The sounds were then processed in Max using the High Order Ambisonics (HOA) library’s decoder and automation map. The doctor’s voice is automated to travel around the room, the swallow sits in the middle of the room, and the mouth sounds are controlled by OSC data from the Kinect. You can listen to the original audio, the remixed audio, and the combined audio here.
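
The Kinect-to-panner mapping amounts to converting a blob’s position on the floor into an angle around the listening area. Here is a small sketch of that conversion in Processing, with the room center and units as assumptions (the actual routing lives in the Max HOA patch):

```java
// Sketch of the position-to-panner mapping: convert a blob's normalized
// floor position into the azimuth/distance pair a panner can use. The
// (0.5, 0.5) room center and the radian convention are assumptions; the
// piece's actual routing happens inside the Max HOA patch.
float[] toAzimuthDistance(float bx, float by) {
  float dx = bx - 0.5;                    // offset from the room center
  float dy = by - 0.5;
  float azimuth = atan2(dy, dx);          // -PI..PI around the center
  float dist = sqrt(dx * dx + dy * dy);   // 0 at center, ~0.71 in a corner
  return new float[] { azimuth, dist };
}
```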

Shy Friend
https://courses.ideate.cmu.edu/54-498/f2016/11122-2/
Tue, 20 Sep 2016

By John Hwang, Joe Mertz, and Sydney Ayers

Description:

For our immersive experience, we aimed to reconstruct and simulate an innocent, playful, and familiar experience from childhood. The setting is reminiscent of a pillow fort, decorated with pillows, blankets, bed sheets, and other cozy elements to create a comfortable and intimate space; an indoor swing hangs in the middle. Audience members, one at a time, sit and play on the swing, then encounter and interact with an “imaginary friend” over a friendly game of cat and mouse. The experience hopes to recapture the light-hearted spirit and raw imagination of childhood.

Technical:
The technical components of the experience revolve around the interaction between the audience member and the “imaginary friend”. The “friend” was programmed in Processing to react to the movements of the audience member on the swing, in particular the direction the audience member was facing, either by running away from or chasing them. Facing direction was determined from the compass data of a smartphone taped underneath the swing, passed on wirelessly via the hookOSC application. We used Syphoner to send the Processing sketch into Millumin, and the “friend” was back-projected onto the bed sheets from two projectors outside the space.
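
A reduced Processing sketch of that behavior is below, assuming the compass heading arrives as a single OSC float in degrees; the address pattern and port are placeholders, since hookOSC’s exact message format isn’t recorded here.

```java
// Reduced sketch of the "shy friend" behavior: the phone's compass heading
// arrives over OSC, and the friend flees when the swinger faces it,
// creeping back when they look away. Address and port are placeholders.
import oscP5.*;

OscP5 osc;
float heading = 0;       // compass heading in degrees, 0..360
float friendX;           // friend's horizontal position on the sheet

void setup() {
  size(1280, 480);
  friendX = width * 0.75;
  osc = new OscP5(this, 9000);   // port the phone sends to (assumed)
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/compass")) {        // hypothetical address
    heading = m.get(0).floatValue();
  }
}

void draw() {
  background(0);
  // map the heading onto the width of the projection surface
  float gazeX = map(heading, 0, 360, 0, width);
  if (abs(gazeX - friendX) < width * 0.2) {
    friendX += (friendX > gazeX) ? 6 : -6;     // run away from the gaze
  } else {
    friendX = lerp(friendX, gazeX, 0.01);      // sneak back when ignored
  }
  friendX = constrain(friendX, 20, width - 20);
  ellipse(friendX, height / 2, 40, 40);        // stand-in for the friend
}
```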

Video:
